diff --git a/rankingpolicygradient/a0186d38-b246-4407-9105-34dbf20c6dfb_content_list.json b/rankingpolicygradient/a0186d38-b246-4407-9105-34dbf20c6dfb_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..2d95c9cf9a003657888733db38168aa436c169c8 --- /dev/null +++ b/rankingpolicygradient/a0186d38-b246-4407-9105-34dbf20c6dfb_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5ddacac375b3589a7e7f529a9086d4fbf641c8f29b089e6a280e609b2511f780 +size 177944 diff --git a/rankingpolicygradient/a0186d38-b246-4407-9105-34dbf20c6dfb_model.json b/rankingpolicygradient/a0186d38-b246-4407-9105-34dbf20c6dfb_model.json new file mode 100644 index 0000000000000000000000000000000000000000..b164d44a72982fd305e5a7276cf246368f1ecd9a --- /dev/null +++ b/rankingpolicygradient/a0186d38-b246-4407-9105-34dbf20c6dfb_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:20f015c340a83f240cb578aad110cf7cf58c528ea100dfc4e02d29406996c5e8 +size 206436 diff --git a/rankingpolicygradient/a0186d38-b246-4407-9105-34dbf20c6dfb_origin.pdf b/rankingpolicygradient/a0186d38-b246-4407-9105-34dbf20c6dfb_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a6c37b691040806b005026d2c2bd3c8ad771bf62 --- /dev/null +++ b/rankingpolicygradient/a0186d38-b246-4407-9105-34dbf20c6dfb_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7b3ec21e0dc106dcd15fb679a53a7a540f1b55ffce1a0f3e81be6bce17badc82 +size 1063861 diff --git a/rankingpolicygradient/full.md b/rankingpolicygradient/full.md new file mode 100644 index 0000000000000000000000000000000000000000..3cabd5a2795d63656c034c758ce539c82190ed14 --- /dev/null +++ b/rankingpolicygradient/full.md @@ -0,0 +1,841 @@ +# RANKING POLICY GRADIENT + +# Kaixiang Lin + +Department of Computer Science and Engineering + +Michigan State University + +East Lansing, MI 48824-4403, USA + +linkaixi@msu.edu + +# Jiayu Zhou + +Department 
of Computer Science and Engineering + +Michigan State University + +East Lansing, MI 48824-4403, USA + +jiayuz@msu.edu + +# ABSTRACT + +Sample inefficiency is a long-lasting problem in reinforcement learning (RL). The state-of-the-art methods estimate the optimal action values, which usually involves an extensive search over the state-action space and unstable optimization. Towards sample-efficient RL, we propose ranking policy gradient (RPG), a policy gradient method that learns the optimal rank of a set of discrete actions. To accelerate the learning of policy gradient methods, we establish the equivalence between maximizing the lower bound of return and imitating a near-optimal policy without accessing any oracles. These results lead to a general off-policy learning framework, which preserves optimality, reduces variance, and improves sample-efficiency. We conduct extensive experiments showing that, when consolidated with the off-policy learning framework, RPG substantially reduces the sample complexity compared to the state-of-the-art. + +# 1 INTRODUCTION + +One of the major challenges in reinforcement learning (RL) is the high sample complexity (Kakade et al., 2003), i.e., the number of samples that must be collected to conduct successful learning. There are different reasons leading to the poor sample efficiency of RL (Yu, 2018). Because policy gradient algorithms that directly optimize the return estimated from rollouts (e.g., REINFORCE (Williams, 1992)) can suffer from high variance (Sutton & Barto, 2018), actor-critic methods introduced value function baselines to reduce the variance and improve sample-efficiency. However, since a value function is associated with a certain policy, the samples collected by former policies cannot be readily used without complicated manipulations (Degris et al., 2012) and extensive parameter tuning (Nachum et al., 2017). Such an on-policy requirement increases the difficulty of sample-efficient learning.
+ +On the other hand, off-policy methods, such as one-step $Q$-learning (Watkins & Dayan, 1992) and variants of deep $Q$ networks (DQN) (Mnih et al., 2015; Hessel et al., 2017; Dabney et al., 2018; Van Hasselt et al., 2016; Schaul et al., 2015), enjoy the advantage of learning from any trajectory sampled from the same environment (i.e., off-policy learning) and are currently among the most sample-efficient algorithms. These algorithms, however, often require extensive searching (Bertsekas & Tsitsiklis, 1996, Chap. 5) over the large state-action space to estimate the optimal action value function. Another deficiency is that the combination of off-policy learning, bootstrapping, and function approximation, making up what Sutton & Barto (2018) called the "deadly triad", can easily lead to unstable or even divergent learning (Sutton & Barto, 2018, Chap. 11). These inherent issues limit their sample-efficiency. + +Towards addressing the aforementioned challenge, we approach sample-efficient reinforcement learning from a ranking perspective. Instead of estimating the optimal action value function, we concentrate on learning the optimal rank of actions. The rank of actions depends on the relative action values. As long as the relative action values preserve the same rank of actions as the optimal action values ($Q$-values), we choose the same optimal action. To learn optimal relative action values, we propose the ranking policy gradient (RPG), which optimizes the actions' rank with respect to the long-term reward by learning the pairwise relationships among actions. + +Ranking Policy Gradient (RPG), which directly optimizes relative action values to maximize the return, is a policy gradient method. The line of off-policy actor-critic methods (Degris et al., 2012; Gu et al., 2016; Wang et al., 2016) has made substantial progress on improving the sample-efficiency of policy gradient. 
However, the fundamental difficulty of learning stability associated with the bias-variance trade-off remains (Nachum et al., 2017). In this work, we first exploit the equivalence between RL that optimizes the lower bound of return and supervised learning that imitates a specific optimal policy. Building upon this theoretical foundation, we propose a general off-policy learning framework that equips the generalized policy iteration (Sutton & Barto, 2018, Chap. 4) with an external step of supervised learning. The proposed off-policy learning not only enjoys the property of optimality preservation (unbiasedness), but also largely reduces the variance of policy gradient because of its independence from the horizon and reward scale. Besides, we empirically show that there is a trade-off between optimality and sample-efficiency. Last but not least, we demonstrate that the proposed approach, consolidating RPG with off-policy learning, significantly outperforms the state-of-the-art (Hessel et al., 2017; Bellemare et al., 2017; Dabney et al., 2018; Mnih et al., 2015). + +# 2 RELATED WORKS + +Sample Efficiency. Sample-efficient reinforcement learning methods can be roughly divided into two categories. The first category includes variants of $Q$-learning (Mnih et al., 2015; Schaul et al., 2015; Van Hasselt et al., 2016; Hessel et al., 2017). The main advantage of $Q$-learning methods is the use of off-policy learning, which is essential for sample efficiency. The representative DQN (Mnih et al., 2015) introduced deep neural networks into $Q$-learning, which further inspired a line of successful DQN variants such as Double DQN (Van Hasselt et al., 2016), Dueling networks (Wang et al., 2015), prioritized experience replay (Schaul et al., 2015), and RAINBOW (Hessel et al., 2017). The second category is the actor-critic approaches. 
Most recent works (Degris et al., 2012; Wang et al., 2016; Gruslys et al., 2018) in this category leverage importance sampling to re-weight the samples, correcting the estimation bias and reducing variance. Their main advantage lies in wall-clock time, owing to the distributed framework first presented in (Mnih et al., 2016), rather than in sample-efficiency. As of the time of writing, the variants of DQN (Hessel et al., 2017; Dabney et al., 2018; Bellemare et al., 2017; Schaul et al., 2015; Van Hasselt et al., 2016) are among the most sample-efficient algorithms, and we adopt them as our baselines for comparison. + +RL as supervised learning. Numerous works have developed the connections between RL and supervised learning, such as Expectation-Maximization algorithms (Dayan & Hinton, 1997; Peters & Schaal, 2007; Kober & Peters, 2009; Abdelmaleki et al., 2018), Entropy-Regularized RL (Oh et al., 2018; Haarnoja et al., 2018), and Interactive Imitation Learning (IIL) (Daumé et al., 2009; Syed & Schapire, 2010; Ross & Bagnell, 2010; Ross et al., 2011; Sun et al., 2017; Hester et al., 2018; Osa et al., 2018). EM-based approaches utilize a probabilistic framework to recast RL's maximization of a lower bound of return as a re-weighted regression problem, but they require on-policy estimation in the expectation step. Entropy-Regularized RL optimizes entropy-augmented objectives, which can lead to off-policy learning without importance sampling, but it converges to soft optimality (Haarnoja et al., 2018). + +Of the three tracks of prior work, IIL is most closely related to ours. The IIL works first pointed out the connection between imitation learning and reinforcement learning (Ross & Bagnell, 2010; Syed & Schapire, 2010; Ross et al., 2011) and explored the idea of facilitating reinforcement learning by imitating experts. However, most imitation learning algorithms assume access to an expert policy or expert demonstrations. 
Our off-policy learning framework can be interpreted as an online imitation learning approach that constructs expert demonstrations during exploration without soliciting experts, and conducts supervised learning to maximize the return at the same time. + +In conclusion, our approach differs from prior work in at least one of the following aspects: objectives, oracle assumptions, the optimality of the learned policy, and the on-policy requirement. More concretely, the proposed method is able to learn both deterministic and stochastic optimal policies in terms of long-term reward, without access to an oracle (such as an expert policy or expert demonstrations), and it can be trained in an off-policy fashion, both in theory and in practice. Due to space limits, we defer the detailed discussion of related work to Appendix Section 10.1. + +# 3 NOTATIONS AND PROBLEM SETTING + +In this paper, we consider a finite-horizon ($T$), discrete-time Markov Decision Process (MDP) with a finite discrete state space $S$; for each state $s \in S$, the action space $\mathcal{A}_s$ is finite. The environment dynamics are denoted as $\mathbf{P} = \{p(s'|s,a),\forall s,s' \in S,a \in \mathcal{A}_s\}$. We note that the dimension of the action space can vary across states. We use $m = \max_s |\mathcal{A}_s|$ to denote the maximal action dimension among all possible states. Our goal is to maximize the expected sum of rewards, or return, $J(\theta) = \mathbf{E}_{\tau, \pi_{\theta}}[\sum_{t=1}^{T} r(s_t, a_t)]$, where $|r(s, a)| < \infty, \forall s, a$. In this case, an optimal deterministic Markovian policy always exists (Puterman, 2014, Proposition 4.4.3). The upper bound of the trajectory reward $r(\tau)$ is denoted as $R_{\max} = \max_{\tau} r(\tau)$. A comprehensive list of notations is provided in Appendix Table 1. 
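To make the return objective concrete, here is a minimal sketch of estimating $J(\theta)$ by Monte Carlo rollouts on a toy two-state MDP; the transition table, rewards, and policies below are illustrative assumptions, not part of the paper.

```python
import random

random.seed(0)

# Toy deterministic MDP (illustrative): 2 states, 2 actions.
P = {(0, 0): 0, (0, 1): 1, (1, 0): 0, (1, 1): 1}          # next state s' = P[(s, a)]
R = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 0.0, (1, 1): 1.0}  # reward r(s, a)

def rollout(policy, horizon=5, s0=0):
    """Sample one trajectory and return its trajectory reward r(tau)."""
    s, total = s0, 0.0
    for _ in range(horizon):
        a = policy(s)
        total += R[(s, a)]
        s = P[(s, a)]
    return total

def estimate_return(policy, n=1000, horizon=5):
    """Monte Carlo estimate of J(theta) = E_tau[sum_t r(s_t, a_t)]."""
    return sum(rollout(policy, horizon) for _ in range(n)) / n

J_opt = estimate_return(lambda s: 1)                       # deterministic policy: always a1
J_unif = estimate_return(lambda s: random.choice([0, 1]))  # uniform random policy
```

In this toy task the deterministic policy attains $R_{\max} = 5.0$, consistent with the statement that an optimal deterministic Markovian policy exists.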
+ +# 4 RANKING POLICY GRADIENT + +Value function estimation is widely used in advanced RL algorithms (Mnih et al., 2015; 2016; Schulman et al., 2017; Gruslys et al., 2018; Hessel et al., 2017; Dabney et al., 2018) to facilitate the learning process. In practice, the on-policy requirement of value function estimation in actor-critic methods has largely increased the difficulty of sample-efficient learning (Degris et al., 2012; Gruslys et al., 2018). With the advantage of off-policy learning, the DQN (Mnih et al., 2015) variants are currently among the most sample-efficient algorithms (Hessel et al., 2017; Dabney et al., 2018; Bellemare et al., 2017). For complicated tasks, the value function can align with the relative relationship of actions' returns, but the absolute values are hardly accurate (Mnih et al., 2015; Ilyas et al., 2018). + +The above observations motivate us to look at the decision phase of RL from a different perspective: given a state, decision making amounts to a relative comparison over the available actions, choosing the best action, i.e., the one that leads to a higher return than the others. Therefore, an alternative solution is to learn the optimal rank of the actions. In this section, we show how to optimize the rank of actions to maximize the return, and thus avoid the necessity of accurately estimating the optimal action value function. To learn the rank of actions, we focus on learning relative action values ($\lambda$-values), defined as follows: + +Definition 1 (Relative action value ($\lambda$-values)). For a state $s$, the relative action values of $m$ actions ($\lambda(s, a_k), k = 1, \dots, m$) are a list of scores that denote the rank of actions. If $\lambda(s, a_i) > \lambda(s, a_j)$, then action $a_i$ is ranked higher than action $a_j$. 
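A small numerical illustration of Definition 1 (with made-up numbers): any $\lambda$-values obtained from the optimal $Q$-values by a strictly increasing transform preserve the rank of actions, and hence the greedy action, even though their magnitudes no longer estimate returns.

```python
import math

Q_star = {"left": 0.12, "stay": 0.47, "right": 0.31}   # hypothetical optimal Q-values

# A strictly increasing transform changes magnitudes but not the rank of actions.
lam = {a: math.tanh(10.0 * q) for a, q in Q_star.items()}

greedy_q = max(Q_star, key=Q_star.get)    # argmax_a Q*(s, a)
greedy_lam = max(lam, key=lam.get)        # argmax_a lambda(s, a)
# Both select "stay", although the lambda magnitudes are far from the returns.
```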
+ +The optimal relative action values should preserve the same optimal action as the optimal action values: + +$$ +\operatorname*{argmax}_{a} \lambda(s, a) = \operatorname*{argmax}_{a} Q^{\pi_*}(s, a) +$$ + +where $Q^{\pi_*}(s, a_i)$ and $\lambda(s, a_i)$ represent the optimal action value and the relative action value of action $a_i$, respectively. We omit the model parameter $\theta$ in $\lambda_{\theta}(s, a_i)$ for concise presentation. + +Remark 1. The $\lambda$-values are different from the advantage function $A^{\pi}(s,a) = Q^{\pi}(s,a) - V^{\pi}(s)$. The advantage function quantitatively shows the difference in return from taking different actions while following the current policy $\pi$. The $\lambda$-values only determine the relative order of actions, and their magnitudes are not estimates of returns. + +To learn the $\lambda$-values, we can construct a probabilistic model of $\lambda$-values such that the best action has a higher probability of being selected than the others. Inspired by learning to rank (Burges et al., 2005), we consider the pairwise relationships among all actions, by modeling the probability (denoted as $p_{ij}$) of an action $a_i$ being ranked higher than any action $a_j$ as follows: + +$$ +p_{ij} = \frac{\exp(\lambda(s, a_i) - \lambda(s, a_j))}{1 + \exp(\lambda(s, a_i) - \lambda(s, a_j))}, \tag{1} +$$ + +where $p_{ij} = 0.5$ means the relative action value of $a_i$ is the same as that of action $a_j$, and $p_{ij} > 0.5$ indicates that action $a_i$ is ranked higher than $a_j$. Given the independence Assumption 1, we can represent the probability of selecting one action as the product of a set of pairwise probabilities in Eq (1). Formally, we define the pairwise ranking policy in Eq (2). Please refer to Section 10.10 in the Appendix for a discussion of the feasibility of Assumption 1. + +Definition 2. 
The pairwise ranking policy is defined as: + +$$ +\pi(a = a_i | s) = \prod_{j = 1, j \neq i}^{m} p_{ij}, \tag{2} +$$ + +where $p_{ij}$ is defined in Eq (1). The probability depends on the relative action values $q = [\lambda_1, \dots, \lambda_m]$. The action with the highest relative action value has the highest probability of being selected. + +Assumption 1. For a state $s$, the set of events $E = \{e_{ij} | \forall i \neq j\}$ is conditionally independent, where $e_{ij}$ denotes the event that action $a_i$ is ranked higher than action $a_j$. The independence of the events is conditioned on an MDP and a stationary policy. + +Our ultimate goal is to maximize the long-term reward by optimizing the pairwise ranking policy or, equivalently, by optimizing the pairwise relationships among the action pairs. Ideally, we would like the pairwise ranking policy to select the best action with the highest probability and the highest $\lambda$-value. To achieve this goal, we resort to the policy gradient method. Formally, we propose the ranking policy gradient method (RPG), as shown in Theorem 1. + +Theorem 1 (Ranking Policy Gradient Theorem). For any MDP, the gradient of the expected long-term reward $J(\theta) = \sum_{\tau} p_{\theta}(\tau) r(\tau)$ w.r.t. 
the parameter $\theta$ of a pairwise ranking policy (Def 2) can be approximated by: + +$$ +\nabla_{\theta} J(\theta) \approx \mathbf{E}_{\tau \sim \pi_{\theta}} \left[ \sum_{t = 1}^{T} \nabla_{\theta} \left( \sum_{j = 1, j \neq i}^{m} (\lambda_i - \lambda_j) / 2 \right) r(\tau) \right], \tag{3} +$$ + +and the deterministic pairwise ranking policy $\pi_{\theta}$ is: $a = \arg \max_{i} \lambda_{i}$, $i = 1, \ldots, m$, where $\lambda_{i}$ denotes the relative action value of action $a_i$ ($\lambda_{\theta}(s_t, a_t)$, $a_i = a_t$), $s_t$ and $a_t$ denote the $t$-th state-action pair in trajectory $\tau$, and $\lambda_{j}, \forall j \neq i$ denote the relative action values of all other actions that were not taken given state $s_t$ in trajectory $\tau$, i.e., $\lambda_{\theta}(s_t, a_j), \forall a_j \neq a_t$. + +The proof of Theorem 1 is available in Appendix Section 10.2. Theorem 1 states that optimizing the discrepancy between the relative action values of the best action and all other actions is optimizing the pairwise relationships that maximize the return. One limitation of RPG is that it is not convenient for tasks where only optimal stochastic policies exist, since the pairwise ranking policy takes extra effort to construct a probability distribution (see Section 10.3 in the Appendix). In order to learn stochastic policies, we introduce Listwise Policy Gradient (LPG), which optimizes the probability of ranking a specific action at the top of a set of actions, with respect to the return. In the context of RL, this top-one probability is the probability of action $a_i$ being chosen, which is equal to the sum of the probabilities of all permutations that rank action $a_i$ at the top. Inspired by the listwise learning-to-rank approach (Cao et al., 2007), the top-one probability can be modeled by the softmax function. 
Therefore, LPG is equivalent to the REINFORCE (Williams, 1992) algorithm with a softmax layer. LPG provides another interpretation of the REINFORCE algorithm from the perspective of learning the optimal ranking, and enables the learning of both deterministic and stochastic policies. Due to the space limit, we defer the detailed description of LPG to Appendix Section 10.4. + +In short, seeking sample-efficiency motivates us to learn the relative relationships (RPG (Theorem 1) and LPG (Theorem 4)) of actions, instead of seeking an accurate estimation of optimal action values and then choosing actions greedily. However, both RPG and LPG are policy gradient methods, which suffer from large variance and the on-policy learning requirement (Sutton & Barto, 2018). Therefore, a direct implementation of RPG or LPG is still far from sample-efficient. In the next section, we describe a general off-policy learning framework empowered by supervised learning, which provides an alternative way to accelerate learning, preserve optimality, and reduce variance. + +# 5 OFF-POLICY LEARNING AS SUPERVISED LEARNING + +In this section, we discuss the connections and discrepancies between RL and supervised learning, and our results lead to a sample-efficient off-policy learning paradigm for RL. The main result in this section is Theorem 2, which casts the problem of maximizing the lower bound of return into a supervised learning problem, given one relatively mild Assumption 2 and practical Assumptions 1 and 3. As we show by Lemma 4 in the Appendix, these assumptions are valid in a range of RL tasks. The central idea is to collect only the near-optimal trajectories when the learning agent interacts with the environment, and to imitate the near-optimal policy by maximizing the log-likelihood of the state-action pairs from these near-optimal trajectories. With this road map in mind, we introduce our approach as follows. 
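The road map above can be sketched in a few lines: keep only trajectories whose reward reaches a near-optimal threshold $c$, then fit a policy to their state-action pairs by maximum likelihood. The trajectories, threshold, and tabular policy below are fabricated for illustration.

```python
import math
from collections import Counter

c = 9.0  # hypothetical near-optimal trajectory reward threshold
trajectories = [            # (list of (s, a) pairs, trajectory reward r(tau))
    ([(0, "a1"), (1, "a0")], 10.0),
    ([(0, "a0"), (0, "a0")], 3.0),   # suboptimal: discarded by the filter
    ([(0, "a1"), (1, "a1")], 9.5),
]

# Collect state-action pairs from near-optimal trajectories only.
dataset = [sa for pairs, r in trajectories if r >= c for sa in pairs]

# For a tabular policy, maximizing the log-likelihood over the dataset yields
# the empirical conditional frequency pi(a|s) of the kept pairs.
counts = Counter(dataset)
def pi_hat(a, s):
    total = sum(n for (s2, _), n in counts.items() if s2 == s)
    return counts[(s, a)] / total

log_lik = sum(math.log(pi_hat(a, s)) for s, a in dataset)
```

Note that no environment interaction is needed in the fitting step, which is what makes this learning off-policy.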
+ +In a discrete action MDP with finite states and horizon, given the near-optimal policy $\pi_{*}$, the stationary state distribution is given by $p_{\pi_*}(s) = \sum_{\tau} p(s|\tau) p_{\pi_*}(\tau)$, where $p(s|\tau)$ is the probability of a certain state given a specific trajectory $\tau$ and is not associated with any policy; only $p_{\pi_*}(\tau)$ is related to the policy parameters. The stationary distribution of state-action pairs is thus $p_{\pi_*}(s,a) = p_{\pi_*}(s) \pi_*(a|s)$. In this section, we consider MDPs in which each initial state leads to at least one (near)-optimal trajectory. For a more general case, please refer to the discussion in Appendix 10.5. In order to connect supervised learning (i.e., imitating a near-optimal policy) with RL and enable sample-efficient off-policy learning, we first introduce trajectory reward shaping (TRS), defined as follows: + +Definition 3 (Trajectory Reward Shaping, TRS). Given a fixed trajectory $\tau$, its trajectory reward is shaped as follows: + +$$ +w(\tau) = \begin{cases} 1, & \text{if } r(\tau) \geq c \\ 0, & \text{otherwise} \end{cases} +$$ + +where $c = R_{\mathrm{max}} - \epsilon$ is a problem-dependent near-optimal trajectory reward threshold that indicates the least reward of a near-optimal trajectory, $\epsilon \geq 0$ and $\epsilon \ll R_{\mathrm{max}}$. We denote the set of all possible near-optimal trajectories as $\mathcal{T} = \{\tau | w(\tau) = 1\}$, i.e., $w(\tau) = 1, \forall \tau \in \mathcal{T}$. + +Remark 2. The threshold $c$ indicates a trade-off between sample-efficiency and optimality. The higher the threshold, the less frequently near-optimal trajectories are hit during exploration, which means higher sample complexity, while the final performance is better (see Figure 3). + +Remark 3. The trajectory reward can be reshaped to any positive function that is not related to the policy parameter $\theta$. 
For example, if we set $w(\tau) = r(\tau)$, the conclusions in this section still hold (see Eq (38) in Appendix, Section 10.6). For the sake of simplicity, we set $w(\tau) = 1$. + +Different from the reward shaping works (Ng et al., 1999), we directly shape the trajectory reward, which enables a smooth transformation from RL to SL. After shaping the trajectory reward, we can transfer the goal of RL from maximizing the return to maximizing the long-term performance (Def 4). + +Definition 4 (Long-term Performance). + +$$ +\sum_{\tau} p_{\theta}(\tau) w(\tau) \tag{4} +$$ + +The long-term performance is the expected shaped trajectory reward, as shown in Eq (4). By Def 3, the expectation over all trajectories is equal to that over the near-optimal trajectories in $\mathcal{T}$, i.e., $\sum_{\tau}p_{\theta}(\tau)w(\tau) = \sum_{\tau \in \mathcal{T}}p_{\theta}(\tau)w(\tau)$. + +The optimality is preserved after trajectory reward shaping ($\epsilon = 0, c = R_{\mathrm{max}}$), since the optimal policy $\pi_*$ maximizing the long-term performance is also an optimal policy for the original MDP, i.e., $\sum_{\tau} p_{\pi_*}(\tau) r(\tau) = \sum_{\tau \in \mathcal{T}} p_{\pi_*}(\tau) r(\tau) = R_{\mathrm{max}}$, where $\pi_* = \arg \max_{\pi_\theta} \sum_{\tau} p_{\pi_\theta}(\tau) w(\tau)$ and $p_{\pi_*}(\tau) = 0, \forall \tau \notin \mathcal{T}$ (see Lemma 2 in Appendix 10.6). Similarly, when $\epsilon > 0$, the optimal policy after trajectory reward shaping is a near-optimal policy for the original MDP. Note that most policy gradient methods use the softmax function, for which we have $\exists \tau \notin \mathcal{T}, p_{\pi_\theta}(\tau) > 0$ (see Lemma 3 in Appendix 10.6). Therefore, when softmax is used to model a policy, it will not converge to an exact optimal policy. 
On the other hand, ideally, the performance discrepancy between them can be arbitrarily small, based on the universal approximation theorem (Hornik et al., 1989) with general conditions on the activation function, and on Theorem 1 in (Syed & Schapire, 2010). + +Essentially, we use TRS to single out the near-optimal trajectories, and then we maximize the probabilities of these near-optimal trajectories to maximize the long-term performance. This procedure can be approximated by maximizing the log-likelihood of near-optimal state-action pairs, which is a supervised learning problem. Before we state our main results, we first introduce the definition of the uniformly near-optimal policy (Def 5) and a prerequisite (Asm. 2) specifying the applicability of the results. + +Definition 5 (Uniformly Near-Optimal Policy, UNOP). The Uniformly Near-Optimal Policy $\pi_{*}$ is the policy whose probability distribution over the near-optimal trajectories ($\mathcal{T}$) is a uniform distribution, i.e., $p_{\pi_*}(\tau) = \frac{1}{|\mathcal{T}|}, \forall \tau \in \mathcal{T}$, where $|\mathcal{T}|$ is the number of near-optimal trajectories. When we set $c = R_{\max}$, it is an optimal policy in terms of both maximizing the return and the long-term performance; we denote this type of optimal policy as the uniformly optimal policy (UOP). + +Assumption 2 (Existence of Uniformly Near-Optimal Policy). We assume the existence of a Uniformly Near-Optimal Policy (Def 5). + +Based on Lemma 4 in Appendix Section 10.9, Assumption 2 is satisfied for certain MDPs that have deterministic dynamics. Other than Assumption 2, all other assumptions in this work (Assumptions 1 and 3) can almost always be satisfied in practice, based on empirical observation. With these relatively mild assumptions, we present the following long-term performance theorem, which shows the close connection between supervised learning and RL. 
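A tiny numerical illustration of Def 5 may help: sampling trajectories uniformly from the near-optimal set $\mathcal{T}$ induces a state-action distribution $p_{\pi_*}(s,a)$ which, assuming equal-length trajectories for simplicity, is just the empirical frequency of pairs across $\mathcal{T}$. The two trajectories below are fabricated.

```python
from collections import Counter

T = [  # hypothetical near-optimal set; each trajectory is a list of (s, a) pairs
    [("s0", "a1"), ("s1", "a0")],
    [("s0", "a1"), ("s1", "a1")],
]
n_pairs = sum(len(tau) for tau in T)                     # uniform weight over T
counts = Counter(sa for tau in T for sa in tau)
p_unop = {sa: n / n_pairs for sa, n in counts.items()}   # p_pi*(s, a)
# ("s0", "a1") appears in both trajectories, so it receives probability 0.5.
```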
+ +Theorem 2 (Long-term Performance Theorem). Maximizing the lower bound of the expected long-term performance (Eq (4)) is maximizing the log-likelihood of state-action pairs sampled from a uniformly (near)-optimal policy $\pi_{*}$, which is a supervised learning problem: + +$$ +\underset{\theta}{\arg \max} \sum_{s, a} p_{\pi_{*}}(s, a) \log \pi_{\theta}(a | s) \tag{5} +$$ + +The optimal policy of maximizing the lower bound is also the optimal policy of maximizing the long-term performance and the return. + +Remark 4. It is worth noting that Theorem 2 does not require the uniformly near-optimal policy $\pi_{*}$ to be deterministic. The only requirement is the existence of a uniformly near-optimal policy. + +Remark 5. Maximizing the lower bound of the long-term performance is also maximizing a lower bound of the long-term reward, since we can set $w(\tau) = r(\tau)$ and $\sum_{\tau} p_{\theta}(\tau) r(\tau) \geq \sum_{\tau \in \mathcal{T}} p_{\theta}(\tau) w(\tau)$. An optimal policy of maximizing this lower bound is also an optimal policy of maximizing the long-term performance when $c = R_{\max}$, thus maximizing the return. + +The proof of Theorem 2 can be found in Appendix, Section 10.6. Theorem 2 indicates that we break the dependency between the current policy $\pi_{\theta}$ and the environment dynamics, which means off-policy learning can be conducted via the above supervised learning approach. Furthermore, we point out that there is a potential discrepancy between imitating UNOP by maximizing the log-likelihood (even when the optimal policy's samples are given) and reinforcement learning, since we are maximizing a lower bound of the expected long-term performance (or, equivalently, the return over the near-optimal trajectories only) instead of the return over all trajectories. In practice, the state-action pairs from an optimal policy are hard to construct, while the uniform characteristic of UNOP can alleviate this issue (see Sec 6). 
Towards sample-efficient RL, we apply Theorem 2 to RPG, which reduces the ranking policy gradient to a classification problem by Corollary 1. + +Corollary 1 (Ranking performance policy gradient). The lower bound of the expected long-term performance (defined in Eq (4)) under the pairwise ranking policy (Eq (2)) can be approximately optimized by minimizing the following loss: + +$$ +\min_{\theta} \sum_{s, a_i} p_{\pi_{*}}(s, a_i) \left( \sum_{j = 1, j \neq i}^{m} \max(0, \text{margin} + \lambda(s, a_j) - \lambda(s, a_i)) \right), \tag{6} +$$ + +where margin is a small positive value. We set margin equal to one in our experiments. + +The proof of Corollary 1 can be found in Appendix, Section 10.7. Similarly, we can reduce LPG to a classification problem (see Appendix 10.7.1). One advantage of casting RL to SL is variance reduction. With the proposed off-policy supervised learning, we can reduce the upper bound of the policy gradient variance, as shown in Corollary 2. Before introducing the variance reduction results, we first make the following standard assumption, similar to (Degris et al., 2012, A1). Furthermore, the assumption holds for bounded, continuously differentiable policies such as the softmax function. + +Assumption 3. We assume the maximum norm of the policy gradient is finite, i.e., + +$$ +\exists C < \infty \ \text{s.t.} \ \| \nabla_{\theta} \log \pi_{\theta}(a | s) \|_{\infty} \leq C, \forall s \in \mathcal{S}, a \in \mathcal{A}_s +$$ + +Corollary 2 (Policy gradient variance reduction). The upper bound of the variance of each dimension of the regular policy gradient is $O(T^2 C^2 M^2)$. The upper bound of the gradient variance of maximizing the lower bound of the long-term performance (Eq (5)) is $O(C^2)$, where $C$ is the maximum norm of the log gradient based on Assumption 3, $M$ is the maximum absolute value of the trajectory reward (i.e., $M \geq |r(\tau)|, \forall \tau$), and $T$ is the horizon. 
Compared to that of the regular policy gradient, the upper bound of the gradient variance under supervised learning is reduced by an order of $O(T^2 M^2)$, given a stationary policy and $M > 1, T > 1$, which is a very common situation in practice. + +![](images/6b8b6e153d5ae1392c11744965214e01bbdd1982713fb0f642b8fa8cebf85243.jpg) +Figure 1: The off-policy learning as supervised learning framework for general policy gradient methods. + +The proof of Corollary 2 can be found in Appendix 10.8. This corollary shows that the variance of the regular policy gradient is upper-bounded by the squares of the time horizon and the maximum trajectory reward. This is aligned with our intuition and empirical observation: the longer the horizon, the harder the learning. Also, common reward shaping tricks such as truncating the reward to $[-1,1]$ (Castro et al., 2018) can help the learning, since they reduce variance by decreasing the range of the trajectory reward. With supervised learning, we concentrate the difficulty of the long horizon into the exploration phase, which is an inevitable issue for all RL algorithms, and we drop the dependence of the policy variance on $T$ and $M$. Thus, it is more stable and efficient to train the policy using supervised learning. One potential limitation of this method is that the trajectory reward threshold $c$ is task-specific and crucial to the final performance and sample-efficiency. In many applications, such as dialogue systems (Li et al., 2017) and recommender systems (Melville & Sindhwani, 2011), we design the reward function to guide the learning process, in which case $c$ is naturally known. For cases where we have no prior knowledge of the MDP's reward function, we treat $c$ as a tuning parameter to balance optimality and efficiency, as empirically verified in Figure 3. The major theoretical uncertainty on general tasks is the existence of a uniformly near-optimal policy, which has a negligible effect on the empirical performance. 
The rigorous theoretical analysis of this problem is beyond the scope of this work.

# 6 AN ALGORITHMIC FRAMEWORK FOR OFF-POLICY LEARNING

Based on the discussion in Section 5, we exploit the advantage of reducing RL to supervised learning via a proposed two-stage off-policy learning framework. As illustrated in Figure 1, the framework consists of the following two stages:

Generalized Policy Iteration for Exploration. The goal of the exploration stage is to collect distinct near-optimal trajectories as frequently as possible. Under the off-policy framework, the exploration agent and the learning agent can be separated; therefore, any existing RL algorithm can be used for exploration. The principle of this framework is to use the most advanced RL agent as an exploration strategy, in order to collect more near-optimal trajectories, and to leave policy learning to the supervision stage.

Supervision. In this stage, we imitate the uniformly near-optimal policy, UNOP (Def 5). Although we have no access to the UNOP, we can approximate its state-action distribution by collecting only the near-optimal trajectories. The near-optimal samples are constructed online; we are not given any expert demonstrations or expert policy beforehand. This step provides a sample-efficient approach to exploitation, which enjoys stability (Figure 2), variance reduction (Corollary 2), and optimality preservation (Theorem 2).

The two-stage algorithmic framework can be directly incorporated into RPG and LPG to improve sample efficiency. The implementation of RPG is given in Algorithm 1; LPG follows the same procedure except for the loss function. The main requirement of Alg. 1 concerns the exploration efficiency and the MDP structure.
During the exploration stage, a sufficient number of distinct near-optimal trajectories needs to be collected to construct a representative supervised learning training dataset. Theoretically, this requirement always holds [see Appendix Section 10.9, Lemma 5], but the number of episodes explored could be prohibitively large, which would make the algorithm sample-inefficient. This could be a practical concern for the proposed algorithm. However, according to our extensive empirical observations, long before the value-function-based state-of-the-art converges to near-optimal performance, enough near-optimal trajectories have already been explored.

Therefore, we point out that instead of estimating optimal action value functions and then choosing actions greedily, it is more sample-efficient to use the value function to facilitate exploration and to imitate the UNOP. As illustrated in Figure 1, value-based methods with off-policy learning, bootstrapping, and function approximation can lead to divergent optimization (Sutton & Barto, 2018, Chap. 11). Rather than resolving this instability, we circumvent it by constructing a stationary target from the samples of (near-)optimal trajectories and performing imitation learning. This two-stage approach avoids extensive exploration of the suboptimal state-action space and reduces the substantial number of samples needed to estimate optimal action values. In MDPs where near-optimal trajectories are hit with high probability (such as PONG), the supervision stage can further facilitate exploration. It should be emphasized that our work focuses on improving sample-efficiency through more effective exploitation, rather than on developing a novel exploration method. Please refer to Appendix Section 10.11 for more discussion on exploration efficiency.
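The two stages can be sketched end-to-end on a toy task. This is a minimal, illustrative sketch only: the environment, the uniform-random explorer, and the majority-vote "supervision" are stand-ins for a real RL exploration agent and for minimizing the supervised loss.

```python
import random

# Toy episodic task: T binary decisions; the trajectory reward is the number
# of times action 1 is chosen, so the threshold c = T admits only optimal
# trajectories (mirroring the filter in lines 12-14 of Algorithm 1).
T, C, N_EPISODES = 5, 5, 500
random.seed(0)

near_optimal_buffer = []  # state-action pairs from admitted trajectories

# Stage 1: exploration. A uniform-random agent stands in for any RL explorer.
for _ in range(N_EPISODES):
    traj = [(t, random.randint(0, 1)) for t in range(T)]  # (state, action)
    if sum(a for _, a in traj) >= C:                      # trajectory reward >= c
        near_optimal_buffer.extend(traj)

# Stage 2: supervision. Imitate the buffer; a per-state majority vote stands
# in for minimizing the hinge loss of Eq (6) over near-optimal samples.
policy = {}
for s in range(T):
    acts = [a for st, a in near_optimal_buffer if st == s]
    policy[s] = max(set(acts), key=acts.count) if acts else 0

print(policy)  # the learned deterministic policy
```

Because the filter only admits trajectories whose reward meets the threshold, every state-action pair entering the buffer comes from a (near-)optimal trajectory, so the supervision stage imitates a stationary target.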
# 7 EXPERIMENTAL RESULTS

![](images/5bd57ebb75c8f65c0e1322f0894cadaa417fec03e7662341312e474c32ea229e.jpg)

![](images/eb3e4e63bed16f90ff1b854912a99e5ab929c88235b3d029e4f12ff69afcbb57.jpg)

![](images/131a68842bc0c4a1d38b55330201ba2023949652fbb7e578daeac21dee1f21a4.jpg)

![](images/61bfa7fdf0b61053852e3e9c2b7ca0e71f7c1eb0abcf2e9b0d54d66d22fb5dce.jpg)

![](images/607e0230bb02c65473e3f2078d3a884400b35bf7409319e652017217839d83a0.jpg)

![](images/b91a90d20c18c658da9d5bb4894896f913d11f48ba39ff8d68212b779eb28479.jpg)

![](images/ebea95f72d2e1079567685b8c7a434930cdb101bcec916eae841f2e5d6f34026.jpg)

![](images/37646b6db38cf043ef24a91cb257ec302aa54bd7cacc82c348077956f228ca57.jpg)

![](images/0cf5a8cb1b5ce8b34f6bb29034bc65a7efca6fd3072d6c51207f9f908966376e.jpg)
Figure 2: The training curves of the proposed RPG and the state-of-the-art baselines. All results are averaged over random seeds 1 to 5. The $x$-axis shows the number of steps interacting with the environment (we update the model every four steps) and the $y$-axis shows the average training episodic return. Error bars denote $95\%$ confidence intervals.

To evaluate the sample-efficiency of Ranking Policy Gradient (RPG), we focus on Atari 2600 games in OpenAI Gym (Bellemare et al., 2013; Brockman et al., 2016), without randomly repeating the previous action. We compare our method with state-of-the-art baselines including DQN (Mnih et al., 2015), C51 (Bellemare et al., 2017), IQN (Dabney et al., 2018), RAINBOW (Hessel et al., 2017), and self-imitation learning (SIL) (Oh et al., 2018). For reproducibility, we use the implementation provided in the Dopamine framework$^{1}$ (Castro et al., 2018) for all baselines and proposed methods, except for SIL, for which we use the official implementation$^{2}$. Following the standard evaluation protocol Oh et al. (2018); Hessel et al. (2017); Dabney et al. (2018); Bellemare et al.
(2017), we report the training performance of all baselines as a function of the number of interactions with the environment, or proportionally, the number of training iterations. We run the algorithms with five random seeds and report the average rewards with $95\%$ confidence intervals. The implementation details of the proposed RPG and its variants are as follows:

EPG: EPG is the stochastic listwise policy gradient (see Appendix Eq (18)) combined with the proposed off-policy learning. More concretely, we apply trajectory reward shaping (TRS, Def 3) to all trajectories encountered during exploration and train the vanilla policy gradient using the off-policy samples. This is equivalent to minimizing the cross-entropy loss (see Appendix Eq (69)) over the near-optimal trajectories.

LPG: LPG is the deterministic listwise policy gradient with the proposed off-policy learning. The only difference between EPG and LPG is that LPG chooses actions deterministically (see Appendix Eq (17)) during evaluation.

RPG: RPG explores the environment using a separate EPG agent in PONG and IQN in the other games. RPG then conducts supervised learning by minimizing the hinge loss Eq (6). It is worth noting that the exploration agent (EPG or IQN) can be replaced by any existing exploration method. In our RPG implementation, we collect all trajectories whose trajectory reward is no less than the threshold $c$, without eliminating duplicated trajectories, which we empirically found to be a reasonable simplification. More details on the hyperparameters are provided in Appendix Section 10.12.

Sample-efficiency: As the results in Figure 2 show, our approach, RPG, significantly outperforms the state-of-the-art baselines in terms of sample-efficiency on all tasks. Furthermore, RPG not only achieves the most sample-efficient results but also reaches the highest final performance on ROBOTANK, DOUBLEDUNK, PITFALL, and PONG, compared to any model-free state-of-the-art method.
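The EPG/LPG and RPG variants described above differ in the per-state supervised loss minimized over near-optimal state-action pairs. A hedged sketch contrasting the two (pure Python; the function names and sample values are illustrative, not taken from the paper's code):

```python
import math

def listwise_ce_loss(lam, best_action):
    """EPG/LPG supervision: softmax negative log-likelihood (cf. Eq (69))."""
    mx = max(lam)  # subtract the max for numerical stability
    log_z = mx + math.log(sum(math.exp(l - mx) for l in lam))
    return log_z - lam[best_action]

def pairwise_hinge(lam, best_action, margin=1.0):
    """RPG supervision: pairwise ranking hinge loss (cf. Eq (6))."""
    li = lam[best_action]
    return sum(max(0.0, margin + lj - li)
               for j, lj in enumerate(lam) if j != best_action)

lam = [2.0, 0.5, 1.8]  # relative action values; action 0 labelled near-optimal
print(listwise_ce_loss(lam, 0), pairwise_hinge(lam, 0))  # ~0.71 and ~0.8
```

Note that the hinge loss vanishes once the labelled action wins every pairwise comparison by the margin, whereas the cross-entropy loss is always positive.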
In reinforcement learning, the stability of an algorithm is an important concern. As we can see from the results, the performance of the baselines varies from task to task; no single baseline consistently outperforms the others. In contrast, owing to the reduction from RL to supervised learning, RPG is consistently stable and effective across different environments. In addition to stability and efficiency, RPG also enjoys simplicity. In the PONG environment, it is surprising that RPG, without any complicated exploration method, largely surpasses the sophisticated value-function-based approaches.

# 7.1 ABLATION STUDY

The effectiveness of the pairwise ranking policy and off-policy learning as supervised learning. To better understand why RPG is more sample-efficient than the DQN variants, we performed ablation studies in the PONG environment, varying the combination of policy functions with the proposed off-policy learning. The results of EPG, LPG, and RPG are shown in the bottom right of Figure 2. Recall that EPG and LPG use the listwise policy gradient (vanilla policy gradient with a softmax policy) to conduct exploration, and their off-policy learning minimizes the cross-entropy loss Eq (69). In contrast, RPG shares the same exploration method as EPG and LPG while using the pairwise ranking policy Eq (2) in off-policy learning, minimizing the hinge loss Eq (6). We can see that RPG is more sample-efficient than EPG/LPG. We also compared the most advanced on-policy method, Proximal Policy Optimization (PPO) Schulman et al. (2017), with EPG, LPG, and RPG. The proposed off-policy learning largely surpasses the best on-policy method. Therefore, we conclude that off-policy learning as supervised learning contributes substantially to sample-efficiency, while the pairwise ranking policy further accelerates learning. In addition, we compare RPG to off-policy policy gradient approaches: ACER Wang et al.
(2016) and self-imitation learning Oh et al. (2018). As the results show, the proposed off-policy learning framework is more sample-efficient than the state-of-the-art off-policy policy gradient approaches.

The optimality-efficiency trade-off. As reported in Figure 3, we empirically demonstrate the trade-off between sample-efficiency and optimality, which is controlled by the trajectory reward threshold (as defined in Def 3). A higher trajectory reward threshold imposes a stricter requirement on what counts as a near-optimal trajectory. This increases the difficulty of collecting near-optimal samples during exploration, while ensuring a better final performance. These experimental results also show that RPG is effective in the absence of prior knowledge of the trajectory reward threshold, at the mild cost of introducing an additional tuning parameter.

![](images/0df7fe2611abbb96a62f2b4acaa6070d88fe6fabb5a2e9edbbd2d23cdbb6d6ee.jpg)
Figure 3: The trade-off between sample efficiency and optimality on DOUBLEDUNK, BREAKOUT, and BANKHEIST. As the trajectory reward threshold $(c)$ increases, more samples are needed for learning to converge, while it leads to better final performance. The value of $c$ is denoted by the number at the end of each legend.

![](images/128a969394cec899e98302ccc093fa629bec533a803cdba0447ad94c98aee5c2.jpg)

![](images/c0cb185c75e5641a620bfce6321fd5656c9c8b4fa007ace2b6ef1767bf977651.jpg)

# 8 CONCLUSIONS

In this work, we introduced ranking policy gradient (RPG) methods that, for the first time, approach the RL problem from a ranking perspective. Furthermore, towards sample-efficient RL, we propose an off-policy learning framework that allows RL agents to be trained in a supervised learning paradigm.
The off-policy learning framework uses generalized policy iteration for exploration and exploits the stability of supervised learning for policy learning, achieving unbiasedness, variance reduction, off-policy learning, and sample efficiency at the same time. Last but not least, empirical results show that RPG achieves superior performance compared to the state-of-the-art.

# 9 ACKNOWLEDGEMENT

The authors would like to thank Yuan Liang, Qiaozi Gao, Rundong Zhao, Shaohua Yang, Qi Wang, Boyang Liu, Juanyuan Hong, and Mengying Sun for proofreading earlier versions of this manuscript and giving helpful suggestions on the writing. This material is based in part upon work supported by the National Science Foundation (NSF) IIS-1749940, IIS-1615597, and the Office of Naval Research (ONR) N00014-17-1-2265.

# REFERENCES

Abbas Abdolmaleki, Jost Tobias Springenberg, Yuval Tassa, Remi Munos, Nicolas Heess, and Martin Riedmiller. Maximum a posteriori policy optimisation. arXiv preprint arXiv:1806.06920, 2018.
Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253-279, 2013.
Marc G Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforcement learning. arXiv preprint arXiv:1707.06887, 2017.
Dimitri P Bertsekas and John N Tsitsiklis. Neuro-dynamic programming, volume 5. Athena Scientific Belmont, MA, 1996.
Christopher M Bishop. Pattern recognition and machine learning. Springer, 2006.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016.
Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender. Learning to rank using gradient descent. In Proceedings of the 22nd international conference on Machine learning, pp. 89-96. ACM, 2005.
+Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. Learning to rank: from pairwise approach to listwise approach. In ICML, pp. 129-136. ACM, 2007. + +Pablo Samuel Castro, Subhodeep Moitra, Carles Gelada, Saurabh Kumar, and Marc G. Bellemare. Dopamine: A research framework for deep reinforcement learning. CoRR, abs/1812.06110, 2018. URL http://arxiv.org/abs/1812.06110. +Will Dabney, Georg Ostrovski, David Silver, and Rémi Munos. Implicit quantile networks for distributional reinforcement learning. arXiv preprint arXiv:1806.06923, 2018. +Hal Daumé, John Langford, and Daniel Marcu. Search-based structured prediction. Machine learning, 75(3): 297-325, 2009. +Peter Dayan and Geoffrey E Hinton. Using expectation-maximization for reinforcement learning. Neural Computation, 9(2):271-278, 1997. +Thomas Degris, Martha White, and Richard S Sutton. Off-policy actor-critic. arXiv preprint arXiv:1205.4839, 2012. +Prafulla Dhariwal, Christopher Hesse, Oleg Klimov, Alex Nichol, Matthias Plappert, Alec Radford, John Schulman, Szymon Sidor, Yuhuai Wu, and Peter Zhokhov. Openai baselines. https://github.com/openai/baselines, 2017. +Benjamin Eysenbach and Sergey Levine. If maxent rl is the answer, what is the question? arXiv preprint arXiv:1910.01913, 2019. +Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, and Remi Munos. The reactor: A fast and sample-efficient actor-critic agent for reinforcement learning. 2018. +Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E Turner, and Sergey Levine. Q-prop: Sample-efficient policy gradient with an off-policy critic. arXiv preprint arXiv:1611.02247, 2016. +Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International Conference on Machine Learning, pp. 1856-1865, 2018. 
+Matteo Hessel, Joseph Modayil, Hado Van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, and David Silver. Rainbow: Combining improvements in deep reinforcement learning. arXiv preprint arXiv:1710.02298, 2017. +Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Ian Osband, et al. Deep q-learning from demonstrations. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. +Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. Neural networks, 2(5):359-366, 1989. +Andrew Ilyas, Logan Engstrom, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, and Aleksander Madry. Are deep policy gradient algorithms truly policy gradient algorithms? arXiv preprint arXiv:1811.02553, 2018. +Sham Machandranath Kakade et al. On the sample complexity of reinforcement learning. PhD thesis, University of London London, England, 2003. +Jens Kober and Jan R Peters. Policy search for motor primitives in robotics. In Advances in neural information processing systems, pp. 849-856, 2009. +Xiujun Li, Yun-Nung Chen, Lihong Li, Jianfeng Gao, and Asli Celikyilmaz. End-to-end task-completion neural dialogue systems. arXiv preprint arXiv:1703.01008, 2017. +Prem Melville and Vikas Sindhwani. Recommender systems. In Encyclopedia of machine learning, pp. 829-838. Springer, 2011. +Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015. +Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International conference on machine learning, pp. 1928-1937, 2016. 
+Rémi Munos, Tom Stepleton, Anna Harutyunyan, and Marc Bellemare. Safe and efficient off-policy reinforcement learning. In Advances in Neural Information Processing Systems, pp. 1054-1062, 2016. + +Ofir Nachum, Mohammad Norouzi, Kelvin Xu, and Dale Schuurmans. Bridging the gap between value and policy based reinforcement learning. In Advances in Neural Information Processing Systems, pp. 2775-2785, 2017. +Andrew Y Ng, Daishi Harada, and Stuart Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In ICML, volume 99, pp. 278-287, 1999. +Brendan O'Donoghue. Variational bayesian reinforcement learning with regret bounds. arXiv preprint arXiv:1807.09647, 2018. +Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, and Volodymyr Mnih. Combining policy gradient and q-learning. arXiv preprint arXiv:1611.01626, 2016. +Junhyuk Oh, Yijie Guo, Satinder Singh, and Honglak Lee. Self-imitation learning. arXiv preprint arXiv:1806.05635, 2018. +Takayuki Osa, Joni Pajarinen, Gerhard Neumann, J Andrew Bagnell, Pieter Abbeel, Jan Peters, et al. An algorithmic perspective on imitation learning. Foundations and Trends® in Robotics, 7(1-2):1-179, 2018. +Jan Peters and Stefan Schaal. Reinforcement learning by reward-weighted regression for operational space control. In Proceedings of the 24th international conference on Machine learning, pp. 745-750. ACM, 2007. +Jan Peters and Stefan Schaal. Reinforcement learning of motor skills with policy gradients. Neural networks, 21 (4):682-697, 2008. +Martin L Puterman. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons, 2014. +Stéphane Ross and Drew Bagnell. Efficient reductions for imitation learning. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 661-668, 2010. +Stephane Ross and J Andrew Bagnell. Reinforcement and imitation learning via interactive no-regret learning. arXiv preprint arXiv:1406.5979, 2014. 
Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pp. 627-635, 2011.
Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. arXiv preprint arXiv:1511.05952, 2015.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. Nature, 550(7676):354, 2017.
Wen Sun, Arun Venkatraman, Geoffrey J Gordon, Byron Boots, and J Andrew Bagnell. Deeply AggreVaTeD: Differentiable imitation learning for sequential prediction. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 3309-3318. JMLR.org, 2017.
Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 2018.
Umar Syed and Robert E Schapire. A reduction from apprenticeship learning to classification. In Advances in Neural Information Processing Systems, pp. 2253-2261, 2010.
Ahmed Touati, Pierre-Luc Bacon, Doina Precup, and Pascal Vincent. Convergent tree-backup and retrace with function approximation. arXiv preprint arXiv:1705.09322, 2017.
Leslie G Valiant. A theory of the learnable. In Proceedings of the sixteenth annual ACM symposium on Theory of computing, pp. 436-445. ACM, 1984.
Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double q-learning. In AAAI, volume 2, pp. 5. Phoenix, AZ, 2016.
Ziyu Wang, Tom Schaul, Matteo Hessel, Hado Van Hasselt, Marc Lanctot, and Nando De Freitas. Dueling network architectures for deep reinforcement learning. arXiv preprint arXiv:1511.06581, 2015.
Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, and Nando de Freitas. Sample efficient actor-critic with experience replay. arXiv preprint arXiv:1611.01224, 2016.
Christopher JCH Watkins and Peter Dayan. Q-learning. Machine learning, 8(3-4):279-292, 1992.
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229-256, 1992.
Yang Yu. Towards sample efficient reinforcement learning. In IJCAI, pp. 5739-5743, 2018.

# 10 APPENDIX

In Table 1 we provide a brief summary of the important notations used in the paper:
| Notation | Definition |
| --- | --- |
| $\lambda_{ij}$ | The discrepancy of the relative action values of actions $i$ and $j$: $\lambda_{ij} = \lambda_i - \lambda_j$, where $\lambda_i = \lambda(s, a_i)$. Note that this value is not an estimate of the return; it indicates which action will have a relatively higher return if followed. |
| $Q^{\pi}(s, a)$ | The action value function, or equivalently, the estimate of the return of taking action $a$ at state $s$ and following policy $\pi$ thereafter. |
| $p_{ij}$ | $p_{ij} = P(\lambda_i > \lambda_j)$ denotes the probability that the $i$-th action is ranked higher than the $j$-th action. Note that $p_{ij}$ is controlled by $\theta$ through $\lambda_i, \lambda_j$. |
| $\tau$ | A trajectory $\tau = \{s(\tau, t), a(\tau, t)\}_{t=1}^{T}$ collected from the environment. Note that this trajectory is not associated with any policy; it only represents a series of state-action pairs. We also use the abbreviations $s_t = s(\tau, t)$, $a_t = a(\tau, t)$. |
| $r(\tau)$ | The trajectory reward $r(\tau) = \sum_{t=1}^{T} r(s_t, a_t)$ is the sum of rewards along one trajectory. |
| $R_{max}$ | The maximum trajectory reward, i.e., $R_{max} = \max_{\tau} r(\tau)$. Since we focus on MDPs with finite horizon and bounded immediate reward, the trajectory reward is bounded. |
| $M$ | The upper bound of the absolute value of the trajectory reward, i.e., $|r(\tau)| \leq M, \forall \tau$. |
| $\sum_{\tau}$ | The summation over all possible trajectories $\tau$. |
| $p_{\theta}(\tau)$ | The probability that a specific trajectory is collected from the environment given policy $\pi_{\theta}$: $p_{\theta}(\tau) = p(s_1) \Pi_{t=1}^{T} \pi_{\theta}(a_t|s_t) p(s_{t+1}|s_t, a_t)$. |
| $\mathcal{T}$ | The set of all possible near-optimal trajectories. $|\mathcal{T}|$ denotes the number of near-optimal trajectories in $\mathcal{T}$. |
| $n$ | The number of training samples, or equivalently, state-action pairs sampled from the uniformly optimal policy. |
| $m$ | The number of discrete actions. |
Table 1: Notations

# 10.1 A DISCUSSION OF PRIOR WORKS ON REDUCING RL TO SL

There are two main distinctions between supervised learning and reinforcement learning. First, in supervised learning the data distribution $\mathcal{D}$ is static, and training samples are assumed to be sampled i.i.d. from $\mathcal{D}$. In RL, by contrast, the data distribution is dynamic: it is determined by both the environment dynamics and the learning policy, and since the policy keeps evolving during the learning process, so does the data distribution. Second, the training samples we collect are not independently distributed, due to the change of the learning policy. These intrinsic difficulties make RL algorithms unstable and sample-inefficient. However, if we review the state-of-the-art in the RL community, every algorithm eventually acquires a policy, either explicitly or implicitly, which is a mapping from a state to an action or to a probability distribution over the action space. Ultimately, there exists a supervised learning equivalent of the RL problem, if an optimal policy exists. The paradox is that it is almost impossible to construct this supervised learning equivalent on the fly, without knowing an optimal policy. Although the question of what constitutes proper supervision still lingers in the RL community, pioneers have developed a set of insightful approaches to reduce RL to its SL counterpart over the past several decades. Roughly, we can classify the prior work into the following categories:

- Expectation-Maximization (EM): Dayan & Hinton (1997); Peters & Schaal (2007); Kober & Peters (2009); Abdolmaleki et al. (2018), etc.
- Entropy-Regularized RL (ERL): O'Donoghue et al. (2016); Oh et al. (2018); Haarnoja et al. (2018), etc.
- Interactive Imitation Learning (IIL): Daumé et al. (2009); Syed & Schapire (2010); Ross & Bagnell (2010); Ross et al. (2011); Sun et al. (2017), etc.
Algorithm 1 Off-Policy Learning for Ranking Policy Gradient (RPG)
Require: The near-optimal trajectory reward threshold $c$, the maximum number of training episodes $N_{max}$, the maximum number of time steps in each episode $T$, and the batch size $b$.
1: while episode $< N_{max}$ do
2: repeat
3: Retrieve state $s_t$ and sample action $a_t$ by the specified exploration agent (can be random, $\epsilon$-greedy, or any RL algorithm).
4: Collect the experience $e_t = (s_t, a_t, r_t, s_{t+1})$ and store it in the replay buffer.
5: $t = t + 1$
6: if t % update step == 0 then
7: Sample a batch of experience $\{e_j\}_{j=1}^b$ from the near-optimal replay buffer.
8: Update $\pi_\theta$ based on the hinge loss Eq (6) for RPG.
9: Update the exploration agent using samples from the regular replay buffer (in simple MDPs such as PONG, where we access near-optimal trajectories frequently, we can use the near-optimal replay buffer to update the exploration agent).
10: end if
11: until terminal $s_t$ or $t - t_{start} >= T$
12: if return $\sum_{t=1}^{T} r_t \geq c$ then
13: Move the near-optimal trajectory $e_t, t = 1, \ldots, T$ of the latest episode from the regular replay buffer into the near-optimal replay buffer.
14: end if
15: if t % evaluation step == 0 then
16: Evaluate the RPG agent by choosing actions greedily. If the best performance is reached, stop training.
17: end if
18: end while

The early work in the EM track transforms the objective via Jensen's inequality and maximizes the resulting lower bound of the original objective, which resembles the Expectation-Maximization procedure and provides a policy improvement guarantee. While pioneering at the time, these works typically focus on simplified RL settings: for example, in Dayan & Hinton (1997) the reward function is not associated with the state, while in Peters & Schaal (2008) the goal is to maximize the expected immediate reward and the state distribution is assumed to be fixed.
Later on, Kober & Peters (2009) extended the EM framework from immediate reward to episodic return. A recent advance, Abdolmaleki et al. (2018), utilizes the EM framework on a relative entropy objective, which adds a parameter prior as regularization. As mentioned in that paper, the evaluation step using Retrace Munos et al. (2016) can be unstable even with linear function approximation Touati et al. (2017). In general, the estimation step in EM-based algorithms involves on-policy evaluation, a difficulty shared by all policy gradient methods. One of our main motivations for transferring RL into a supervised learning task is that off-policy learning enables sample efficiency.

To achieve off-policy learning, PGQ O'Donoghue et al. (2016) connected the entropy-regularized policy gradient with Q-learning under the constraint of small regularization. In a similar framework, Soft Actor-Critic Haarnoja et al. (2018) was proposed to enable sample-efficient and faster convergence under entropy-regularized RL. It converges to the optimal policy that optimizes the long-term reward along with the policy entropy. It is an efficient way to model suboptimal behavior, and empirically it learns a reasonable policy. Although the discrepancy between the entropy-regularized objective and the original long-term reward has recently been discussed in O'Donoghue (2018); Eysenbach & Levine (2019), those works focus on learning a stochastic policy, while the proposed framework is feasible for learning both a deterministic optimal policy (Corollary 1) and a stochastic optimal policy (Corollary 6). Oh et al. (2018) shares similarity with our work in terms of how samples are collected: they collect good samples based on past experience and then conduct imitation learning on those samples. However, we differ in how we approach the problem theoretically.
This self-imitation learning procedure was eventually connected to lower-bound-soft-Q-learning, which belongs to entropy-regularized reinforcement learning. We note that there is a trade-off between sample-efficiency and modeling suboptimal behaviors: the stricter the requirement on the collected samples, the lower the chance of hitting such samples, but the closer we are to imitating the optimal behavior.

In the track of interactive imitation learning, initial representative works such as Ross & Bagnell (2010); Ross et al. (2011) first pointed out that the main discrepancy between imitation learning and reinforcement learning is that the i.i.d. assumption does not hold, and provided SMILe Ross & Bagnell (2010) and DAGGER Ross et al. (2011) to overcome the distribution mismatch. Theorem 2.1 in Ross & Bagnell (2010) first analyzed the performance degradation relative to the expert when the learned policy fails to imitate the expert with a certain probability. While that theorem seems to resemble our long-term performance theorem (Theorem 2), it considers a learning policy trained on the state distribution induced by the expert, instead of the state-action distribution as in Theorem 2. Their theorem may thus be more applicable to situations where an interactive procedure is needed, such as querying the expert during the training process. On the contrary, we focus on directly applying a
| Methods | Objective | Cont. Action | Optimality | Off-Policy | No Oracle |
| --- | --- | --- | --- | --- | --- |
| EM | ✓ | ✓ | ✓ | X | ✓ |
| ERL | X | ✓ | ✓† | ✓ | ✓ |
| IIL | ✓ | ✓ | ✓ | ✓ | X |
| RPG | ✓ | X | ✓ | ✓ | ✓ |
Table 2: A comparison with prior work on reducing RL to SL. The Objective column denotes whether the goal is to maximize the long-term reward. The Cont. Action column denotes whether the method is applicable to both continuous and discrete action spaces. The Optimality column denotes whether the algorithm can model the optimal policy; ✓† denotes that the optimality achieved by ERL is with respect to the entropy-regularized objective instead of the return. The Off-Policy column denotes whether the algorithm enables off-policy learning. The No Oracle column denotes whether the algorithm needs access to a certain type of oracle (an expert policy or expert demonstrations).

supervised learning approach without having access to the expert to label the data. The optimal state-action pairs are collected during exploration, and conducting supervised learning on the replay buffer provides a performance guarantee in terms of long-term expected reward. A result resembling Theorem 2.1 of Ross & Bagnell (2010) is Theorem 1 of Syed & Schapire (2010), in which the authors reduce apprenticeship learning to classification under the assumptions that the apprentice policy is deterministic and that the misclassification rate is bounded at all time steps, assumptions which we do not make. Within the IIL track, AGGREVATE Ross & Bagnell (2014) was later proposed to incorporate the information of action costs to facilitate imitation learning, and a differentiable version, AGGREVATED Sun et al. (2017), was recently developed and achieved impressive empirical results. Recently, the hinge loss was combined with the regular $Q$-learning loss as a pre-training step for learning from demonstration Hester et al. (2018), or used as a surrogate loss for imitating optimal trajectories Osa et al. (2018). In this work, we show that the hinge loss constructs a new type of policy gradient method and can learn the optimal policy directly.
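For reference, the large-margin supervised term used in learning from demonstrations (cf. Hester et al. (2018)) can be sketched as follows. This is an illustrative sketch, not the DQfD implementation; the function name, margin value, and sample Q-values are assumptions:

```python
def large_margin_loss(q_values, expert_action, margin=0.8):
    """DQfD-style supervised term: max_a [Q(s,a) + l(a_E, a)] - Q(s, a_E),
    where l(a_E, a) = margin for a != a_E and 0 otherwise. It is zero only
    when the demonstrated action's Q-value exceeds all others by the margin."""
    best = max(q + (0.0 if a == expert_action else margin)
               for a, q in enumerate(q_values))
    return best - q_values[expert_action]

# Action 2's Q-value is within the margin of the expert action 0's Q-value,
# so the loss is positive.
print(large_margin_loss([2.0, 0.5, 1.8], expert_action=0))  # ~0.6
```

Like the hinge loss of Eq (6), this term pushes the labelled action's value above all alternatives by a margin, but in DQfD it regularizes a Q-learning objective rather than defining a policy gradient.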
In conclusion, our method approaches the problem of reducing RL to SL from a unique perspective that is different from all prior work. With our reformulation from RL to SL, the proposed off-policy framework preserves optimality and reduces variance simultaneously. Furthermore, it also leads to stable optimization, since we are imitating a stationary target (UNOP), and it is agnostic to the knowledge of an oracle, such as an expert policy or expert demonstrations. A multi-aspect comparison between the proposed method and relevant prior studies is summarized in Table 2.

# 10.2 RANKING POLICY GRADIENT THEOREM

The proof of Theorem 1 can be found as follows:

Proof. The following proof is based on direct policy differentiation Peters & Schaal (2008); Williams (1992). For concise presentation, the subscript $t$ for the action values $\lambda_i, \lambda_j$, and $p_{ij}$ is omitted.

$$
\begin{array}{l} \nabla_ {\theta} J (\theta) = \nabla_ {\theta} \sum_ {\tau} p _ {\theta} (\tau) r (\tau) \\ = \sum_ {\tau} p _ {\theta} (\tau) \nabla_ {\theta} \log p _ {\theta} (\tau) r (\tau) \\ = \sum_ {\tau} p _ {\theta} (\tau) \nabla_ {\theta} \log \left(p (s _ {1}) \Pi_ {t = 1} ^ {T} \pi_ {\theta} \left(a _ {t} \mid s _ {t}\right) p \left(s _ {t + 1} \mid s _ {t}, a _ {t}\right)\right) r (\tau) \\ = \sum_ {\tau} p _ {\theta} (\tau) \sum_ {t = 1} ^ {T} \nabla_ {\theta} \log \pi_ {\theta} (a _ {t} | s _ {t}) r (\tau) \\ = \mathbf {E} _ {\tau \sim \pi_ {\theta}} \left[ \sum_ {t = 1} ^ {T} \nabla_ {\theta} \log \pi_ {\theta} \left(a _ {t} \mid s _ {t}\right) r (\tau) \right] \\ = \mathbf {E} _ {\tau \sim \pi_ {\theta}} \left[ \sum_ {t = 1} ^ {T} \nabla_ {\theta} \log \left(\Pi_ {j = 1, j \neq i} ^ {m} p _ {i j}\right) r (\tau) \right] \\ = \mathbf {E} _ {\tau \sim \pi_ {\theta}} \left[ \sum_ {t = 1} ^ {T} \nabla_ {\theta} \sum_ {j = 1, j \neq i} ^ {m} \log \left(\frac {e ^ {\lambda_ {i j}}}{1 + e ^ {\lambda_ {i j}}}\right) r (\tau) \right] \\ \end{array}
$$

$$
= \mathbf {E} _ {\tau
\sim \pi_ {\theta}} \left[ \sum_ {t = 1} ^ {T} \nabla_ {\theta} \sum_ {j = 1, j \neq i} ^ {m} \log \left(\frac {1}{1 + e ^ {\lambda_ {j i}}}\right) r (\tau) \right] \tag {7}
$$

$$
\approx \mathbf {E} _ {\tau \sim \pi_ {\theta}} \left[ \sum_ {t = 1} ^ {T} \nabla_ {\theta} \left(\sum_ {j = 1, j \neq i} ^ {m} \left(\lambda_ {i} - \lambda_ {j}\right) / 2\right) r (\tau) \right] \tag {8}
$$

where the trajectory is a series of state-action pairs from $t = 1, \dots, T$, i.e. $\tau = s_1, a_1, s_2, a_2, \dots, s_T, a_T$. From Eq (7) to Eq (8), we use the first-order Taylor expansion $\log(1 + e^x) = \log 2 + \frac{1}{2} x + O(x^2)$ around $x = 0$ to further simplify the ranking policy gradient.

# 10.3 DISCUSSION ON THE PROBABILITY DISTRIBUTION OF RPG

Corollary 3. The pairwise ranking policy as shown in Eq (2) constructs a probability distribution over the set of actions when the number of actions $m$ is equal to 2, given any relative action values $\lambda_{i}, i = 1,2$. For the cases with $m > 2$, this conclusion does not hold in general.

It is easy to verify that $\pi(a_i|s) > 0$ and $\sum_{i=1}^{2}\pi(a_i|s) = 1$ hold, and counterexamples show that the same conclusion cannot be extended to $m > 2$. However, we can introduce a dummy action $a'$ to form a probability distribution for RPG. During policy learning, the algorithm increases the probability of the best actions while the probability of the dummy action decreases. Ideally, if RPG converges to an optimal deterministic policy, the probability of taking the best action is equal to one and $\pi(a'|s) = 0$. Similarly, we can introduce a dummy trajectory $\tau'$ with trajectory reward $r(\tau') = 0$ and $p_\theta(\tau') = 1 - \sum_\tau p_\theta(\tau)$. The trajectory probabilities then form a probability distribution, since $\sum_\tau p_\theta(\tau) + p_\theta(\tau') = 1$, $p_\theta(\tau) \geq 0\ \forall \tau$, and $p_\theta(\tau') \geq 0$.
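Corollary 3 can be checked numerically. The following illustrative sketch (ours, not part of the paper's algorithm) evaluates the pairwise ranking policy of Eq (2) with $p_{ij} = 1/(1+e^{\lambda_{ji}})$ and $\lambda_{ji} = \lambda_j - \lambda_i$: for $m = 2$ the two probabilities always sum to one, while for $m = 3$ equal $\lambda$-values already give a sum below one, with the remaining mass absorbed by the dummy action $a'$.

```python
import math

def ranking_policy(lam):
    """Pairwise ranking policy of Eq (2): pi(a_i|s) = prod_{j != i} p_ij,
    with p_ij = 1 / (1 + exp(lam[j] - lam[i]))."""
    m = len(lam)
    return [
        math.prod(1.0 / (1.0 + math.exp(lam[j] - lam[i]))
                  for j in range(m) if j != i)
        for i in range(m)
    ]

# m = 2: the probabilities sum to one for any lambda-values.
print(sum(ranking_policy([1.3, -0.4])))      # ~1.0 (up to float error)
# m = 3 with equal lambda-values: each action gets (1/2)^2 = 0.25,
# so the dummy action a' receives the remaining 0.25.
print(sum(ranking_policy([0.0, 0.0, 0.0])))  # 0.75
```

Note that equal $\lambda$-values satisfy Condition 1 below ($\lambda_m = 0 \geq \ln(3^{1/2}-1) \approx -0.31$ for $m = 3$), so the shortfall is nonnegative, as Corollary 4 requires.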
The proof that the trajectory probabilities are valid is similar to the following proof that $\pi(a|s)$ with a dummy action is a valid probability distribution. The practical influence of the dummy trajectory is negligible, since our goal is to increase the probability of (near)-optimal trajectories. For clarity of presentation, we avoid mentioning the dummy trajectory $\tau'$ in the proof of Section 10.2, though it can be seamlessly included.

Condition 1 (The range of $\lambda$-values). We restrict the range of the $\lambda$-values in RPG so that $\lambda_{m} \geq \ln(m^{\frac{1}{m-1}} - 1)$, where $\lambda_{m} = \min_{i,j} \lambda_{ji}$ and $m$ is the action dimension.

This condition can be easily satisfied: in RPG we only focus on the relative relationship of the $\lambda$-values, so we can constrain their range such that $\lambda_{m}$ satisfies Condition 1. Furthermore, $m^{\frac{1}{m - 1}} > 1$ is decreasing w.r.t. the action dimension $m$, so the larger the action dimension, the weaker the constraint on the $\lambda$-values.

Corollary 4. Given Condition 1, we introduce a dummy action $a'$ and set $\pi(a = a'|s) = 1 - \sum_{i} \pi(a = a_i|s)$, which constructs a valid probability distribution $\pi(a|s)$ over the action space $\mathcal{A} \cup \{a'\}$.

Proof. We have $\pi(a = a_i|s) > 0\ \forall i = 1, \dots, m$ and $\sum_{i} \pi(a = a_i|s) + \pi(a = a'|s) = 1$ by construction. To prove that this is a valid probability distribution, we only need to show that $\pi(a = a'|s) \geq 0$, $\forall m \geq 2$, i.e. $\sum_{i} \pi(a = a_i|s) \leq 1$, $\forall m \geq 2$.
Let $\lambda_m = \min_{i,j} \lambda_{ji}$,

$$
\sum_ {i} \pi (a = a _ {i} | s) \tag {9}
$$

$$
= \sum_ {i} \Pi_ {j = 1, j \neq i} ^ {m} p _ {i j} \tag {10}
$$

$$
= \sum_ {i} \Pi_ {j = 1, j \neq i} ^ {m} \frac {1}{1 + e ^ {\lambda_ {j i}}} \tag {11}
$$

$$
\leq \sum_ {i} \Pi_ {j = 1, j \neq i} ^ {m} \frac {1}{1 + e ^ {\lambda_ {m}}} \tag {12}
$$

$$
= m \left(\frac {1}{1 + e ^ {\lambda_ {m}}}\right) ^ {m - 1} \quad \text {(use Condition 1)} \tag {13}
$$

$$
\leq 1 \tag {14}
$$

# 10.4 LISTWISE POLICY GRADIENT

In order to learn the stochastic policy that optimizes the ranking of actions with respect to the return, we now introduce the Listwise Policy Gradient (LPG) method. In RL, we want to optimize the probability of each action $(a_{i})$ being ranked highest among all actions, which is the sum of the probabilities of all permutations that place the action $a_{i}$ in the top position of the list. This probability is computationally prohibitive to evaluate directly, since it involves $m!$ permutations. Luckily, based on Theorem 6 of Cao et al. (2007), we can model the probability of action $a_{i}$ being ranked highest, given a set of relative action values, by a simple softmax formulation, as described in Theorem 3.

Theorem 3 (Theorem 6 Cao et al. (2007)). Given the relative action values $q = [\lambda_1, \dots, \lambda_m]$, the probability of action $i$ being taken (i.e. being ranked at the top of the list) is:

$$
\pi \left(a _ {t} = a _ {i} \mid s _ {t}\right) = \frac {\phi \left(\lambda_ {i}\right)}{\sum_ {j = 1} ^ {m} \phi \left(\lambda_ {j}\right)} \tag {15}
$$

where $\phi(*)$ is any increasing, strictly positive function. A common choice of $\phi$ is the exponential function.

Closely built upon the foundations from learning to rank Cao et al. (2007), we present the listwise policy gradient method, as introduced in Theorem 4.

Theorem 4 (Listwise Policy Gradient Theorem).
For any MDP, the gradient of the long-term reward $J(\theta) = \sum_{\tau} p_{\theta}(\tau) r(\tau)$ w.r.t. the parameter $\theta$ of the listwise ranking policy takes the following form:

$$
\nabla_ {\theta} J (\theta) = \mathbf {E} _ {\tau \sim \pi_ {\theta}} \left[ \sum_ {t = 1} ^ {T} \nabla_ {\theta} \left(\log \frac {e ^ {\lambda_ {i}}}{\sum_ {j = 1} ^ {m} e ^ {\lambda_ {j}}}\right) r (\tau) \right], \tag {16}
$$

where the listwise ranking policy $\pi_{\theta}$ parameterized by $\theta$ is given by Eq (17) for tasks with deterministic optimal policies:

$$
a = \underset {i} {\arg \max } \lambda_ {i}, \quad i = 1, \dots , m \tag {17}
$$

or by Eq (18) for stochastic optimal policies:

$$
a \sim \pi (\cdot | s), \tag {18}
$$

where the policy takes the form in Eq (19):

$$
\pi \left(a = a _ {i} \mid s _ {t}\right) = \frac {e ^ {\lambda_ {i}}}{\sum_ {j = 1} ^ {m} e ^ {\lambda_ {j}}} \tag {19}
$$

which is the probability of action $i$ being ranked highest, given the current state and all the relative action values $\lambda_1, \ldots, \lambda_m$.

The proof of Theorem 4 exactly follows the direct policy differentiation Peters & Schaal (2008); Williams (1992), replacing the policy with the softmax form. The action probabilities $\pi(a_i|s), \forall i = 1, \dots, m$ form a probability distribution over the set of discrete actions [Cao et al. (2007), Lemma 7]. Theorem 4 states that the vanilla policy gradient Williams (1992) parameterized by a softmax layer optimizes the probability of each action being ranked highest, with respect to the long-term reward.

# 10.5 DISCUSSIONS ON THE OPTIMALITY PRESERVING

Condition 2 (Initial States). The (near)-optimal trajectories cover all initial states of the MDP, i.e. $\{s(\tau, 1) | \forall \tau \in \mathcal{T}\} = \{s(\tau, 1) | \forall \tau\}$, where $\mathcal{T} = \{\tau | w(\tau) = 1\} = \{\tau | r(\tau) \geq c\}$.
Condition 2 describes the types of MDPs to which trajectory reward shaping (TRS, Def 3) is directly applicable. The MDPs satisfying this condition cover a wide range of tasks, such as dialogue systems Li et al. (2017), Go Silver et al. (2017), video games Bellemare et al. (2013), and all MDPs with only one initial state. For TRS to preserve optimality, the optimal trajectories of the MDP need to cover all initial states or, equivalently, every initial state must lead to at least one optimal trajectory. Similarly, near-optimality is preserved for all MDPs whose near-optimal trajectories cover all initial states.

Theoretically, it is possible to transform more general MDPs so that they satisfy Condition 2, and to preserve optimality, with potential-based reward shaping Ng et al. (1999). More concretely, consider the deterministic binary tree MDP $(\mathcal{M}_1)$ with the set of initial states $S_{1} = \{s_{1}, s_{1}^{\prime}\}$ as defined in Figure 4. There are eight possible trajectories in $\mathcal{M}_1$. Let $r(\tau_1) = 10 = R_{\max}$, $r(\tau_8) = 3$, and $r(\tau_i) = 2$, $\forall i = 2, \ldots, 7$. This MDP therefore does not satisfy Condition 2. We can raise the trajectory reward of the best trajectory starting from $s_1^\prime$ to $R_{\max}$ by shaping the reward with the potential-based function $\phi(s_7^\prime) = 7$ and $\phi(s) = 0$, $\forall s \neq s_7^\prime$. This reward shaping requires more prior knowledge, which may not be available in practice. A more realistic method is to design a dynamic trajectory reward shaping approach. In the beginning, we set $c(s) = \min_{s \in S_1} r(\tau | s(\tau, 1) = s)$, $\forall s \in S_1$; in $\mathcal{M}_1$, for example, $c(s) = 3$, $\forall s \in S_1$. During the exploration stage, we track the current best trajectory from each initial state and update $c(s)$ with its trajectory reward.
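The dynamic shaping scheme just described can be sketched as follows; this is a hypothetical minimal implementation (the names and interface are ours, and the paper does not prescribe code). Each initial state keeps the best trajectory reward seen so far as its threshold $c(s)$, and a trajectory qualifies for the replay buffer ($w(\tau) = 1$) only if it meets the current threshold.

```python
def make_threshold_tracker(initial_c):
    """initial_c: dict mapping each initial state to its starting
    threshold c(s), e.g. the minimum best-trajectory reward over
    initial states (c(s) = 3 for every s in the M_1 example)."""
    c = dict(initial_c)

    def observe(s1, traj_reward):
        """Return (w, c(s1)): w = 1 if the trajectory from initial
        state s1 meets the current threshold (so it would be kept in
        the replay buffer), and raise c(s1) when a better trajectory
        from s1 is found."""
        w = 1 if traj_reward >= c[s1] else 0
        if traj_reward > c[s1]:
            c[s1] = traj_reward
        return w, c[s1]

    return observe

observe = make_threshold_tracker({"s1": 3, "s1'": 3})
print(observe("s1", 2))    # (0, 3): below threshold, discarded
print(observe("s1", 10))   # (1, 10): kept, and c(s1) is raised to 10
```

In the $\mathcal{M}_1$ example, both thresholds start at 3; once the reward-10 trajectory from $s_1$ is found, $c(s_1)$ rises to 10 while $c(s_1')$ stays at 3.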
Nevertheless, if Condition 2 is not satisfied, we need more sophisticated prior knowledge than a predefined trajectory reward threshold $c$ to construct the replay buffer (the training dataset of UNOP). Practical implementations of trajectory reward shaping and a rigorous theoretical study for general MDPs are beyond the scope of this work.

![](images/8b29537fabbcfa2e1b231fd307d5c6c01031d8cf5e9d2108c06db74618590e92.jpg)
Figure 4: The binary tree structure MDP with two initial states $(S_{1} = \{s_{1}, s_{1}^{\prime}\})$, similar to the one discussed in Sun et al. (2017). Each path from the root to a leaf node denotes one possible trajectory in the MDP.

![](images/12dc11ba92975d8b875a628ee981a4e6f47e5fce5a6b0607ac4b5ec48fd79381.jpg)

# 10.6 PROOF OF LONG-TERM PERFORMANCE THEOREM 2

In this subsection, we reduce maximizing the RL objective to a supervised learning problem via Theorem 2. Before that, we first prove Lemma 1 to link the log-probability of a trajectory $\tau$ to its state-action distribution. Using this lemma, we can connect the trajectory probability of UNOP with its state-action distribution, from which we prove Theorem 2.

Lemma 1. Given a specific trajectory $\tau$, the averaged state-action pair log-likelihood over horizon $T$ is equal to the weighted sum over the entire state-action space, i.e.:

$$
\frac {1}{T} \sum_ {t = 1} ^ {T} \log \pi_ {\theta} \left(a _ {t} \mid s _ {t}\right) = \sum_ {s, a} p (s, a \mid \tau) \log \pi_ {\theta} (a \mid s) \tag {20}
$$

where the sum on the right-hand side is over all possible state-action pairs. It is worth noting that $p(s, a|\tau)$ is not related to any policy parameters: it is the probability of a specific state-action pair $(s, a)$ occurring in the specific trajectory $\tau$.

Proof.
Given a trajectory $\tau = \{(s(\tau, 1), a(\tau, 1)), \dots, (s(\tau, T), a(\tau, T))\} = \{(s_1, a_1), \dots, (s_T, a_T)\}$, denote the unique state-action pairs in this trajectory as $U(\tau) = \{(s_i, a_i)\}_{i=1}^n$, where $n \leq T$ is the number of unique state-action pairs in $\tau$. The number of occurrences of a state-action pair $(s_i, a_i)$ in the trajectory $\tau$ is denoted as $|(s_i, a_i)|$.

$$
\begin{array}{l} \frac {1}{T} \sum_ {t = 1} ^ {T} \log \pi_ {\theta} \left(a _ {t} \mid s _ {t}\right) (21) \\ = \sum_ {i = 1} ^ {n} \frac {\left| \left(s _ {i} , a _ {i}\right) \right|}{T} \log \pi_ {\theta} \left(a _ {i} \mid s _ {i}\right) (22) \\ = \sum_ {i = 1} ^ {n} p \left(s _ {i}, a _ {i} \mid \tau\right) \log \pi_ {\theta} \left(a _ {i} \mid s _ {i}\right) (23) \\ = \sum_ {(s, a) \in U (\tau)} p (s, a | \tau) \log \pi_ {\theta} (a | s) (24) \\ = \sum_ {(s, a) \in U (\tau)} p (s, a | \tau) \log \pi_ {\theta} (a | s) + \sum_ {(s, a) \notin U (\tau)} p (s, a | \tau) \log \pi_ {\theta} (a | s) (25) \\ = \sum_ {(s, a)} p (s, a | \tau) \log \pi_ {\theta} (a | s) (26) \\ \end{array}
$$

where from Eq (24) to Eq (25) we use the fact that $\sum_{(s,a)\in U(\tau)} p(s,a|\tau) = \sum_{i = 1}^{n} p(s_i,a_i|\tau) = \sum_{i = 1}^{n} \frac{|(s_i,a_i)|}{T} = 1$, and therefore $p(s,a|\tau) = 0, \forall (s,a) \notin U(\tau)$, so the added term is zero.

This thus completes the proof.

![](images/9a851be6a39a817706a3733a54801e38d42e69fe218a43dd12a7f91fe17806bf.jpg)

Now we are ready to prove Theorem 2:

Proof. The following proof holds for an arbitrary subset of trajectories $\mathcal{T}$, which is determined by the threshold $c$ in Def 5. The $\pi_{*}$ is associated with $c$ and this subset of trajectories.
$$
\arg \max _ {\theta} \sum_ {\tau} p _ {\theta} (\tau) w (\tau) \tag {27}
$$

$$
\because w (\tau) = 0 \ \text {if} \ \tau \notin \mathcal {T} \tag {28}
$$

$$
= \arg \max _ {\theta} \frac {1}{| \mathcal {T} |} \sum_ {\tau \in \mathcal {T}} p _ {\theta} (\tau) w (\tau) \tag {29}
$$

$$
\text {use Lemma 3:} \because p _ {\theta} (\tau) > 0 \ \text {and} \ w (\tau) > 0, \therefore \sum_ {\tau \in \mathcal {T}} p _ {\theta} (\tau) w (\tau) > 0 \tag {30}
$$

$$
= \arg \max _ {\theta} \log \left(\frac {1}{| \mathcal {T} |} \sum_ {\tau \in \mathcal {T}} p _ {\theta} (\tau) w (\tau)\right) \tag {31}
$$

$$
\because \log \left(\sum_ {i = 1} ^ {n} x _ {i} / n\right) \geq \sum_ {i = 1} ^ {n} \log \left(x _ {i}\right) / n, \ \forall i, x _ {i} > 0, \ \text {we have:} \tag {32}
$$

$$
\log \left(\frac {1}{| \mathcal {T} |} \sum_ {\tau \in \mathcal {T}} p _ {\theta} (\tau) w (\tau)\right) \geq \sum_ {\tau \in \mathcal {T}} \frac {1}{| \mathcal {T} |} \log p _ {\theta} (\tau) w (\tau) \tag {33}
$$

The lower bound is attained when $p_{\theta}(\tau)w(\tau) = \frac{1}{|\mathcal{T}|}, \forall \tau \in \mathcal{T}$. To this end, we maximize the lower bound of the expected long-term performance.
$$
\begin{array}{l} \arg \max _ {\theta} \sum_ {\tau \in \mathcal {T}} \frac {1}{| \mathcal {T} |} \log p _ {\theta} (\tau) w (\tau) (34) \\ = \arg \max _ {\theta} \sum_ {\tau \in \mathcal {T}} \log \left(p \left(s _ {1}\right) \Pi_ {t = 1} ^ {T} \left(\pi_ {\theta} \left(a _ {t} \mid s _ {t}\right) p \left(s _ {t + 1} \mid s _ {t}, a _ {t}\right)\right) w (\tau)\right) (35) \\ = \arg \max _ {\theta} \sum_ {\tau \in \mathcal {T}} \log \left(p \left(s _ {1}\right) \left(\Pi_ {t = 1} ^ {T} \pi_ {\theta} \left(a _ {t} \mid s _ {t}\right)\right) \left(\Pi_ {t = 1} ^ {T} p \left(s _ {t + 1} \mid s _ {t}, a _ {t}\right)\right) w (\tau)\right) (36) \\ = \arg \max _ {\theta} \sum_ {\tau \in \mathcal {T}} \left(\log p \left(s _ {1}\right) + \sum_ {t = 1} ^ {T} \log p \left(s _ {t + 1} \mid s _ {t}, a _ {t}\right) + \sum_ {t = 1} ^ {T} \log \pi_ {\theta} \left(a _ {t} \mid s _ {t}\right) + \log w (\tau)\right) (37) \\ \end{array}
$$

This is the reason that $w(\tau)$ can be set to an arbitrary positive constant. (38)

$$
\begin{array}{l} = \arg \max _ {\theta} \frac {1}{| \mathcal {T} |} \sum_ {\tau \in \mathcal {T}} \sum_ {t = 1} ^ {T} \log \pi_ {\theta} \left(a _ {t} \mid s _ {t}\right) (39) \\ = \arg \max _ {\theta} \frac {1}{| \mathcal {T} | T} \sum_ {\tau \in \mathcal {T}} \sum_ {t = 1} ^ {T} \log \pi_ {\theta} \left(a _ {t} \mid s _ {t}\right) (40) \\ = \arg \max _ {\theta} \frac {1}{| \mathcal {T} |} \sum_ {\tau \in \mathcal {T}} \frac {1}{T} \sum_ {t = 1} ^ {T} \log \pi_ {\theta} \left(a _ {t} \mid s _ {t}\right) \quad \text {(use Assumption 2, the existence of UNOP)} (41) \\ = \arg \max _ {\theta} \sum_ {\tau \in \mathcal {T}} p _ {\pi_ {*}} (\tau) \frac {1}{T} \left(\sum_ {t = 1} ^ {T} \log \pi_ {\theta} \left(a _ {t} \mid s _ {t}\right)\right) (42) \\ \end{array}
$$

where $\pi_{*}$ is a UNOP (Def 5).
$$
\therefore p _ {\pi_ {*}} (\tau) = 0, \ \forall \tau \notin \mathcal {T} \tag {44}
$$

$$
\text {Eq (44) can be established based on} \ \sum_ {\tau \in \mathcal {T}} p _ {\pi_ {*}} (\tau) = \sum_ {\tau \in \mathcal {T}} 1 / | \mathcal {T} | = 1 \tag {45}
$$

$$
\begin{array}{l} = \arg \max _ {\theta} \sum_ {\tau} p _ {\pi_ {*}} (\tau) \frac {1}{T} \left(\sum_ {t = 1} ^ {T} \log \pi_ {\theta} \left(a _ {t} \mid s _ {t}\right)\right) \quad \text {(use Lemma 1)} \\ = \arg \max _ {\theta} \sum_ {\tau} p _ {\pi_ {*}} (\tau) \sum_ {s, a} p (s, a | \tau) \log \pi_ {\theta} (a | s) \tag {46} \\ \end{array}
$$

The second sum is over all possible state-action pairs; $(s,a)$ represents a specific state-action pair.

$$
\begin{array}{l} = \arg \max _ {\theta} \sum_ {\tau} \sum_ {s, a} p _ {\pi_ {*}} (\tau) p (s, a | \tau) \log \pi_ {\theta} (a | s) (47) \\ = \arg \max _ {\theta} \sum_ {s, a} \sum_ {\tau} p _ {\pi_ {*}} (\tau) p (s, a | \tau) \log \pi_ {\theta} (a | s) (48) \\ \end{array}
$$

$$
= \arg \max _ {\theta} \sum_ {s, a} p _ {\pi_ {*}} (s, a) \log \pi_ {\theta} (a | s) \tag {49}
$$

In this proof we use $s_t = s(\tau, t)$ and $a_t = a(\tau, t)$ as abbreviations, denoting the $t$-th state and action in the trajectory $\tau$, respectively. $|\mathcal{T}|$ denotes the number of trajectories in $\mathcal{T}$. We also use the definition of $w(\tau)$ to focus only on near-optimal trajectories. We set $w(\tau) = 1$ for simplicity, but the conclusion is unaffected if it is set to another constant.

Optimality: Furthermore, the optimal solution of the objective function Eq (49) is a uniformly (near)-optimal policy $\pi_{*}$.
$$
\begin{array}{l} \arg \max _ {\theta} \sum_ {s, a} p _ {\pi_ {*}} (s, a) \log \pi_ {\theta} (a | s) (50) \\ = \arg \max _ {\theta} \sum_ {s} p _ {\pi_ {*}} (s) \sum_ {a} \pi_ {*} (a | s) \log \pi_ {\theta} (a | s) (51) \\ = \arg \max _ {\theta} \sum_ {s} p _ {\pi_ {*}} (s) \sum_ {a} \pi_ {*} (a | s) \log \pi_ {\theta} (a | s) - \sum_ {s} p _ {\pi_ {*}} (s) \sum_ {a} \pi_ {*} (a | s) \log \pi_ {*} (a | s) (52) \\ = \arg \max _ {\theta} \sum_ {s} p _ {\pi_ {*}} (s) \sum_ {a} \pi_ {*} (a | s) \log \frac {\pi_ {\theta} (a | s)}{\pi_ {*} (a | s)} (53) \\ = \arg \max _ {\theta} \sum_ {s} p _ {\pi_ {*}} (s) \left(- \mathrm {KL} \left(\pi_ {*} (\cdot | s) \mid \mid \pi_ {\theta} (\cdot | s)\right)\right) (54) \\ \end{array}
$$

where from Eq (51) to Eq (52) we subtract a constant independent of $\theta$, and the maximum of Eq (54) is attained at $\pi_{\theta} = \pi_{*}$. Therefore, the optimal solution of Eq (49) is also the (near)-optimal solution of the original RL problem, since $\sum_{\tau} p_{\pi_*}(\tau) r(\tau) = \sum_{\tau \in \mathcal{T}} \frac{1}{|\mathcal{T}|} r(\tau) \geq c = R_{\max} - \epsilon$. The optimal solution is obtained when we set $c = R_{\mathrm{max}}$.

Lemma 2. Given any optimal policy $\pi$ of an MDP satisfying Condition 2, we have $p_{\pi}(\tau) = 0$, $\forall \tau \notin \mathcal{T}$, where $\mathcal{T}$ denotes the set of all possible optimal trajectories in this lemma. Equivalently, if $\exists \tau \notin \mathcal{T}$ such that $p_{\pi}(\tau) > 0$, then $\pi$ is not an optimal policy.

Proof. We prove this by contradiction. Assume that $\pi$ is an optimal policy and that $\exists \tau' \notin \mathcal{T}$ such that $p_{\pi}(\tau') \neq 0$, or equivalently $p_{\pi}(\tau') > 0$, since $p_{\pi}(\tau') \in [0,1]$.
We can find a better policy $\pi'$ satisfying the following three conditions:

$$
p _ {\pi^ {\prime}} \left(\tau^ {\prime}\right) = 0, \quad \text {and}
$$

$$
p _ {\pi^ {\prime}} \left(\tau_ {1}\right) = p _ {\pi} \left(\tau_ {1}\right) + p _ {\pi} \left(\tau^ {\prime}\right), \ \tau_ {1} \in \mathcal {T}, \quad \text {and}
$$

$$
p _ {\pi^ {\prime}} (\tau) = p _ {\pi} (\tau), \ \forall \tau \notin \left\{\tau^ {\prime}, \tau_ {1} \right\}
$$

Since $p_{\pi'}(\tau) \geq 0, \forall \tau$ and $\sum_{\tau} p_{\pi'}(\tau) = 1$, $p_{\pi'}$ constructs a valid probability distribution. Then the expected long-term performance of $\pi'$ is greater than that of $\pi$:

$$
\begin{array}{l} \sum_ {\tau} p _ {\pi^ {\prime}} (\tau) w (\tau) - \sum_ {\tau} p _ {\pi} (\tau) w (\tau) \\ = \sum_ {\tau \notin \left\{\tau^ {\prime}, \tau_ {1} \right\}} p _ {\pi^ {\prime}} (\tau) w (\tau) + p _ {\pi^ {\prime}} \left(\tau_ {1}\right) w \left(\tau_ {1}\right) + p _ {\pi^ {\prime}} \left(\tau^ {\prime}\right) w \left(\tau^ {\prime}\right) \\ \quad - \left(\sum_ {\tau \notin \left\{\tau^ {\prime}, \tau_ {1} \right\}} p _ {\pi} (\tau) w (\tau) + p _ {\pi} \left(\tau_ {1}\right) w \left(\tau_ {1}\right) + p _ {\pi} \left(\tau^ {\prime}\right) w \left(\tau^ {\prime}\right)\right) \\ = p _ {\pi^ {\prime}} \left(\tau_ {1}\right) w \left(\tau_ {1}\right) + p _ {\pi^ {\prime}} \left(\tau^ {\prime}\right) w \left(\tau^ {\prime}\right) - \left(p _ {\pi} \left(\tau_ {1}\right) w \left(\tau_ {1}\right) + p _ {\pi} \left(\tau^ {\prime}\right) w \left(\tau^ {\prime}\right)\right) \\ \quad \left(\because \tau^ {\prime} \notin \mathcal {T}, \therefore w (\tau^ {\prime}) = 0; \ \tau_ {1} \in \mathcal {T}, \therefore w (\tau_ {1}) = 1\right) \\ = p _ {\pi^ {\prime}} \left(\tau_ {1}\right) - p _ {\pi} \left(\tau_ {1}\right) \\ = p _ {\pi} \left(\tau_ {1}\right) + p _ {\pi} \left(\tau^ {\prime}\right) - p _ {\pi} \left(\tau_ {1}\right) = p _ {\pi} \left(\tau^ {\prime}\right) > 0 \\ \end{array}
$$

Essentially, we can find a policy $\pi'$ that puts higher probability on the optimal trajectory $\tau_1$ and zero probability on $\tau'$, which makes it a better policy than $\pi$. Therefore, $\pi$ is not an optimal policy, contradicting our assumption. This proves that such a $\tau'$ does not exist, i.e., $\forall \tau \notin \mathcal{T}$, $p_{\pi}(\tau) = 0$.

Lemma 3 (Policy Performance). $\forall \tau$, $p_{\theta}(\tau) > 0$ if the policy takes the form of Eq (15) or Eq (2). That is, for every trajectory allowed by the environment, a policy of either the pairwise ranking or the softmax form generates that trajectory with probability $p_{\theta}(\tau) > 0$. It is worth noting that, because of this property, $\pi_{\theta}$ is not an optimal policy according to Lemma 2, though it can be arbitrarily close to the optimal policy.

Proof.

$$
\because p _ {\theta} (\tau) = p \left(s _ {1}\right) \Pi_ {t = 1} ^ {T} \left(\pi_ {\theta} \left(a _ {t} \mid s _ {t}\right) p \left(s _ {t + 1} \mid s _ {t}, a _ {t}\right)\right) \tag {55}
$$

$$
\text {and} \ \pi_ {\theta} \left(a _ {t} \mid s _ {t}\right) > 0, \ p \left(s _ {1}\right) > 0, \ p \left(s _ {t + 1} \mid s _ {t}, a _ {t}\right) > 0. \tag {56}
$$

If $p(s_{t+1} | s_{t}, a_{t}) = 0$ or $p(s_{1}) = 0$, then the probability of sampling $\tau$ under any policy is zero, i.e., this trajectory does not exist.

$$
\therefore p _ {\theta} (\tau) > 0. \tag {57}
$$

This thus completes the proof.

# 10.7 PROOF OF PERFORMANCE POLICY GRADIENT COROLLARIES

Corollary 5 (Ranking performance policy gradient).
Optimizing the lower bound of the expected long-term performance (defined in Eq (4)) with the pairwise ranking policy (Eq (2)) can be approximately achieved by minimizing the following loss:

$$
\min _ {\theta} \sum_ {s, a _ {i}} p _ {\pi_ {*}} (s, a _ {i}) \left(\sum_ {j = 1, j \neq i} ^ {m} \max (0, \operatorname {margin} + \lambda (s, a _ {j}) - \lambda (s, a _ {i}))\right), \tag {58}
$$

where margin is a small positive value. We set the margin equal to one in our experiments.

Proof. In RPG, the policy $\pi_{\theta}(a|s)$ is defined as in Eq (2). We then replace the action probability distribution in Eq (5) with the RPG policy:

$$
\because \pi (a = a _ {i} | s) = \Pi_ {j = 1, j \neq i} ^ {m} p _ {i j} \tag {59}
$$

Because RPG fits a deterministic optimal policy, we denote the optimal action given state $s$ as $a_{i}$; then we have

$$
\max _ {\theta} \sum_ {s, a _ {i}} p _ {\pi_ {*}} (s, a _ {i}) \log \pi \left(a _ {i} \mid s\right) \tag {60}
$$

$$
= \max _ {\theta} \sum_ {s, a _ {i}} p _ {\pi_ {*}} (s, a _ {i}) \log \left(\Pi_ {j \neq i, j = 1} ^ {m} p _ {i j}\right) \tag {61}
$$

$$
= \max _ {\theta} \sum_ {s, a _ {i}} p _ {\pi_ {*}} (s, a _ {i}) \log \Pi_ {j \neq i, j = 1} ^ {m} \frac {1}{1 + e ^ {\lambda_ {j i}}} \tag {62}
$$

$$
= \min _ {\theta} \sum_ {s, a _ {i}} p _ {\pi_ {*}} (s, a _ {i}) \sum_ {j \neq i, j = 1} ^ {m} \log \left(1 + e ^ {\lambda_ {j i}}\right) \quad \text {(first-order Taylor expansion)} \tag {63}
$$

$$
\approx \min _ {\theta} \sum_ {s, a _ {i}} p _ {\pi_ {*}} (s, a _ {i}) \sum_ {j \neq i, j = 1} ^ {m} \lambda_ {j i} \quad \text {s.t.} \ | \lambda_ {i j} | = c < 1, \forall i, j, s \tag {64}
$$

$$
= \min _ {\theta} \sum_ {s, a _ {i}} p _ {\pi_ {*}} (s, a _ {i}) \sum_ {j \neq i, j = 1} ^ {m} \left(\lambda_ {j} - \lambda_ {i}\right) \quad \text {s.t.} \ | \lambda_ {i} - \lambda_ {j} | = c < 1, \forall i, j, s \tag {65}
$$

$$
\Rightarrow \min _ {\theta} \sum_ {s, a _ {i}} p _ {\pi_ {*}} (s, a _ {i}) \mathcal {L} \left(s, a _ {i}\right) \tag {66}
$$

where the pairwise loss $\mathcal{L}(s, a_i)$ is defined as:

$$
\mathcal {L} (s, a _ {i}) = \sum_ {j = 1, j \neq i} ^ {| A |} \max (0, \operatorname {margin} + \lambda (s, a _ {j}) - \lambda (s, a _ {i})), \tag {67}
$$

where margin is a small positive constant.

From Eq (65) to Eq (66), we consider learning a deterministic optimal policy $a_{i} = \pi^{*}(s)$, where we use the index $i$ to denote the optimal action at each state. The optimal $\lambda$-values minimizing Eq (65) (denoted by $\lambda^1$) need to satisfy $\lambda_i^1 = \lambda_j^1 + c, \forall j \neq i$. The optimal $\lambda$-values minimizing Eq (66) (denoted by $\lambda^2$) need to satisfy $\lambda_i^2 = \max_{j \neq i} \lambda_j^2 + \text{margin}$. In both cases, the optimal policies from solving Eq (65) and Eq (66) are the same: $\pi(s) = \arg \max_k \lambda_k^1 = \arg \max_k \lambda_k^2 = a_i$. Therefore, we use Eq (66) as a surrogate optimization problem for Eq (65).

# 10.7.1 LISTWISE PERFORMANCE POLICY GRADIENT

Corollary 6 (Listwise performance policy gradient). Optimizing the lower bound of the expected long-term performance with the listwise ranking policy (Eq (19)) is equivalent to:

$$
\max _ {\theta} \sum_ {s} p _ {\pi_ {*}} (s) \sum_ {i = 1} ^ {m} \pi_ {*} \left(a _ {i} \mid s\right) \log \frac {e ^ {\lambda_ {i}}}{\sum_ {j = 1} ^ {m} e ^ {\lambda_ {j}}} \tag {69}
$$

The proof of Corollary 6 is a direct application of Theorem 2, replacing the policy with the softmax function.

# 10.8 POLICY GRADIENT VARIANCE REDUCTION

Corollary 7 (Variance reduction).
Given a stationary policy, the upper bound of the variance of each dimension of the policy gradient is $O(T^{2}C^{2}M^{2})$. The upper bound of the gradient variance of maximizing the lower bound of the long-term performance (Eq (5)) is $O(C^2)$, where $C$ is the maximum norm of the log-gradient, per Assumption 3. The gradient variance upper bound of supervised learning is thus reduced, compared to that of the regular policy gradient, by an order of $O(T^{2}M^{2})$, given $M > 1, T > 1$, which is a very common situation in practice.

Proof. The regular policy gradient of the policy $\pi_{\theta}$ is given by Williams (1992):

$$
\sum_ {\tau} p _ {\theta} (\tau) \left[ \sum_ {t = 1} ^ {T} \nabla_ {\theta} \log \left(\pi_ {\theta} (a (\tau , t) | s (\tau , t))\right) r (\tau) \right] \tag {70}
$$

The variance of the $i$-th dimension of the regular policy gradient is denoted as follows:

$$
\operatorname {Var} \left(\sum_ {t = 1} ^ {T} \nabla_ {\theta} \log \left(\pi_ {\theta} (a (\tau , t) | s (\tau , t)) _ {i}\right) r (\tau)\right) \tag {71}
$$

We denote $x_{i}(\tau) = \sum_{t=1}^{T} \nabla_{\theta} \log (\pi_{\theta}(a(\tau, t)|s(\tau, t))_i) r(\tau)$ for convenience; $x_{i}$ is thus a random variable.
Then, applying $\operatorname{Var}(x) = \mathbf{E}_{p_{\theta}(\tau)}[x^2] - \mathbf{E}_{p_{\theta}(\tau)}[x]^2$, we have:

$$
\begin{array}{l} \operatorname {Var} \left(\sum_ {t = 1} ^ {T} \nabla_ {\theta} \log \left(\pi_ {\theta} (a (\tau , t) | s (\tau , t)) _ {i}\right) r (\tau)\right) (72) \\ = \operatorname {Var} \left(x _ {i} (\tau)\right) (73) \\ = \sum_ {\tau} p _ {\theta} (\tau) x _ {i} (\tau) ^ {2} - \left[ \sum_ {\tau} p _ {\theta} (\tau) x _ {i} (\tau) \right] ^ {2} (74) \\ \leq \sum_ {\tau} p _ {\theta} (\tau) x _ {i} (\tau) ^ {2} (75) \\ = \sum_ {\tau} p _ {\theta} (\tau) \left[ \sum_ {t = 1} ^ {T} \nabla_ {\theta} \log \left(\pi_ {\theta} (a (\tau , t) | s (\tau , t)) _ {i}\right) r (\tau) \right] ^ {2} \quad \left(\text {use} \ M \geq | r (\tau) |, \forall \tau\right) (76) \\ \leq \sum_ {\tau} p _ {\theta} (\tau) \left[ \sum_ {t = 1} ^ {T} \nabla_ {\theta} \log \left(\pi_ {\theta} (a (\tau , t) | s (\tau , t)) _ {i}\right) \right] ^ {2} M ^ {2} (77) \\ = M ^ {2} \sum_ {\tau} p _ {\theta} (\tau) \left[ \sum_ {t = 1} ^ {T} \sum_ {k = 1} ^ {T} \nabla_ {\theta} \log \left(\pi_ {\theta} (a (\tau , t) | s (\tau , t)) _ {i}\right) \nabla_ {\theta} \log \left(\pi_ {\theta} (a (\tau , k) | s (\tau , k)) _ {i}\right) \right] (78) \\ \leq M ^ {2} \sum_ {\tau} p _ {\theta} (\tau) \left[ \sum_ {t = 1} ^ {T} \sum_ {k = 1} ^ {T} C ^ {2} \right] (79) \\ = M ^ {2} \sum_ {\tau} p _ {\theta} (\tau) T ^ {2} C ^ {2} (80) \\ = T ^ {2} C ^ {2} M ^ {2} (81) \\ \end{array}
$$

The policy gradient of the long-term performance (Def 4) is

$$
\sum_ {s, a} p _ {\pi_ {*}} (s, a) \nabla_ {\theta} \log \pi_ {\theta} (a | s) \tag {82}
$$

The variance of its $i$-th dimension is denoted as

$$
\operatorname {Var} \left(\nabla_ {\theta} \log \pi_ {\theta} (a | s) _ {i}\right) \tag {83}
$$

Then the upper bound is given by

$$
\begin{array}{l} \operatorname {Var} \left(\nabla_ {\theta} \log \pi_ {\theta} (a | s) _ {i}\right) (84) \\ = \sum_ {s, a} p _ {\pi_ {*}} (s, a) \left[ \nabla_ {\theta} \log \pi_ {\theta} (a | s) _ {i} \right] ^ {2} - \left[ \sum_ {s, a} p _ {\pi_ {*}} (s, a) \nabla_ {\theta} \log \pi_ {\theta} (a | s) _ {i} \right] ^ {2} (85) \\ \leq \sum_ {s, a} p _ {\pi_ {*}} (s, a) \left[ \nabla_ {\theta} \log \pi_ {\theta} (a | s) _ {i} \right] ^ {2} \quad \text {(use Assumption 3)} (86) \\ \leq \sum_ {s, a} p _ {\pi_ {*}} (s, a) C ^ {2} (87) \\ = C ^ {2} (88) \\ \end{array}
$$

This thus completes the proof.

![](images/8fad19adf272e7d891868baa712003786fe78a8d1ac75ea4608e8a1f849c07be.jpg)

# 10.9 DISCUSSIONS OF ASSUMPTION 2

In this section, we show that a UNOP exists in a range of MDPs. Notice that Lemma 4 gives sufficient, rather than necessary, conditions for satisfying Assumption 2.

Lemma 4. For MDPs defined in Section 3 satisfying the following conditions:

- Each initial state leads to one optimal trajectory. This also implies $|\mathcal{S}_1| = |\mathcal{T}|$, where $\mathcal{T}$ denotes the set of optimal trajectories in this lemma and $\mathcal{S}_1$ denotes the set of initial states.
- Deterministic transitions, i.e., $p(s'|s,a) \in \{0,1\}$.
- Uniform initial state distribution, i.e., $p(s_{1}) = \frac{1}{|\mathcal{T}|}, \forall s_{1} \in \mathcal{S}_{1}$.

Then there exists a policy $\pi_{*}$ such that $p_{\pi_*}(\tau) = \frac{1}{|\mathcal{T}|}, \forall \tau \in \mathcal{T}$; that is, a deterministic uniformly optimal policy always exists for this MDP.

Proof. We prove this by construction. The following analysis applies to any $\tau \in \mathcal{T}$:

$$
\begin{array}{ll} p_{\pi_{*}}(\tau) = \frac{1}{|\mathcal{T}|} & (89) \\ \Longleftrightarrow \log p_{\pi_{*}}(\tau) = -\log |\mathcal{T}| & (90) \\ \Longleftrightarrow \log p(s_{1}) + \sum_{t=1}^{T} \log p(s_{t+1} \mid s_{t}, a_{t}) + \sum_{t=1}^{T} \log \pi_{*}(a_{t} \mid s_{t}) = -\log |\mathcal{T}| & (91) \\ \Longleftrightarrow \sum_{t=1}^{T} \log \pi_{*}(a_{t} \mid s_{t}) = -\log p(s_{1}) - \sum_{t=1}^{T} \log p(s_{t+1} \mid s_{t}, a_{t}) - \log |\mathcal{T}| & (92) \end{array}
$$

where we use $a_{t}, s_{t}$ as abbreviations of $a(\tau, t), s(\tau, t)$. Define

$$
D(\tau) = -\log p(s_{1}) - \sum_{t=1}^{T} \log p\left(s_{t+1} \mid s_{t}, a_{t}\right) > 0
$$

so that Eq (92) becomes

$$
\sum_{t=1}^{T} \log \pi_{*}\left(a_{t} \mid s_{t}\right) = D(\tau) - \log |\mathcal{T}| \tag{93}
$$

Then we can obtain a uniformly optimal policy by solving the nonlinear program:

$$
\sum_{t=1}^{T} \log \pi_{*}(a(\tau,t) \mid s(\tau,t)) = D(\tau) - \log |\mathcal{T}|, \quad \forall \tau \in \mathcal{T} \tag{94}
$$

$$
\log \pi_{*}(a(\tau,t) \mid s(\tau,t)) \leq 0, \quad \forall \tau \in \mathcal{T}, \ t = 1, \dots, T \tag{95}
$$

$$
\sum_{i=1}^{m} \pi_{*}(a_{i} \mid s(\tau,t)) = 1, \quad \forall \tau \in \mathcal{T}, \ t = 1, \dots, T \tag{96}
$$

![](images/016742100df87b152b4a253ac95ef951da3da66de373c3425b3af91c6fdea83a.jpg)
(a)

![](images/de68f0c1401ca54730f0c02b56c67f7c11348a3fdff5288328f038680e297473.jpg)
(b)
Figure 5: The directed graph that describes the conditional independence of the pairwise relationships of actions, where $Q_{1}$ denotes the return of taking action $a_{1}$ at state $s$, following policy $\pi$ in $\mathcal{M}$, i.e., $Q_{\mathcal{M}}^{\pi}(s, a_{1})$. $I_{1,2}$ is a random variable that denotes the pairwise relationship of $Q_{1}$ and $Q_{2}$, i.e., $I_{1,2} = 1$ iff $Q_{1} \geq Q_{2}$, otherwise $I_{1,2} = 0$.

For the deterministic optimal policy, using the condition $p(s_1) = \frac{1}{|\mathcal{T}|}$, we have:

$$
\because \sum_{t=1}^{T} \log \pi_{*}(a(\tau,t) \mid s(\tau,t)) = \sum_{t=1}^{T} \log 1 = 0 \quad (\text{LHS of Eq (94)}) \tag{97}
$$

$$
\because -\log p\left(s_{1}\right) - \sum_{t=1}^{T} \log p\left(s_{t+1} \mid s_{t}, a_{t}\right) - \log |\mathcal{T}| = \log |\mathcal{T}| - 0 - \log |\mathcal{T}| = 0 \quad (\text{RHS of Eq (94)}) \tag{98}
$$

$$
\therefore D(\tau) - \log |\mathcal{T}| = \sum_{t=1}^{T} \log \pi_{*}(a(\tau,t) \mid s(\tau,t)), \forall \tau \in \mathcal{T}.
\tag {99}
$$

The deterministic optimal policy also satisfies the constraints in Eqs (95) and (96). Therefore, the deterministic optimal policy is a uniformly optimal policy. This lemma describes one type of MDP in which a UOP exists. From the above reasoning, we can see that as long as the system of nonlinear equations Eqs (94), (95) and (96) has a solution, a uniformly (near)-optimal policy exists.

Lemma 5 (Hit optimal trajectory). The probability that a specific optimal trajectory has not been encountered under an arbitrary softmax policy $\pi_{\theta}$ decreases exponentially with the number of training episodes, regardless of whether the MDP has deterministic or probabilistic dynamics.

Proof. Given a specific optimal trajectory $\tau = \{s(\tau, t), a(\tau, t)\}_{t=1}^{T}$ and an arbitrary stationary policy $\pi_{\theta}$, the probability that $\tau$ has never been encountered within the first $n$ episodes is $[1 - p_{\theta}(\tau)]^n = \xi^n$. By Lemma 3, $p_{\theta}(\tau) > 0$, and therefore $\xi \in [0,1)$.

# 10.10 DISCUSSIONS OF ASSUMPTION 1

Intuitively, given a state and a stationary policy $\pi$ in a fixed MDP $\mathcal{M}$, the relative relationships among actions can be conditionally independent. The relative relationship among actions is the relative ordering of the actions' returns. Starting from the same state and following a stationary policy, the actions' returns are determined by properties of the MDP such as the environment dynamics, the reward function, etc.

More concretely, we consider an MDP with three actions $(a_{1},a_{2},a_{3})$ at each state. The action value $Q_{\mathcal{M}}^{\pi}$ satisfies the Bellman equation in Eq (100). Notice that in this subsection, we use $Q_{\mathcal{M}}^{\pi}$ to denote the action value that estimates the absolute value of the return in $\mathcal{M}$.

$$
Q_{\mathcal{M}}^{\pi}(s, a_{i}) = r(s, a_{i}) + \max_{a} \mathbf{E}_{s' \sim p(\cdot \mid s, a_{i})} Q_{\mathcal{M}}^{\pi}\left(s', a\right), \quad \forall i = 1, 2, 3. \tag{100}
$$

As we can see from Eq (100), $Q_{\mathcal{M}}^{\pi}(s,a_i)$, $i = 1,2,3$, depends only on $s$, $\pi$, and the environment dynamics $\mathbf{P}$. That is, once $\pi$, $\mathcal{M}$, and $s$ are given, the action values of the three actions are determined. Therefore, we can use a directed graph (Bishop, 2006) to model the relationship of the action values, as shown in Figure 5 (a). Similarly, if we only consider the ranking of actions, this ranking is consistent with the ordering of the actions' returns, which is also determined by $s$, $\pi$, and $\mathbf{P}$. Therefore, the pairwise relationships among actions can be described by the directed graph in Figure 5 (b), which establishes the conditional independence of the actions' pairwise relationships. Based on the above reasoning, we conclude that Assumption 1 is realistic.

# 10.11 EXPLORATION EFFICIENCY

The proposed off-policy learning framework implies that the sample complexity depends on both exploration efficiency and supervised learning efficiency. Given a specific MDP, the exploration efficiency of an exploration strategy can be quantified by how frequently we encounter different (near)-optimal trajectories in the first $k$ episodes. The supervised learning efficiency, under the probably approximately correct framework (Valiant, 1984), is how many samples we need to collect in order to achieve good generalization performance with high probability. Jointly considering the efficiency of the two stages, we can theoretically analyze the sample complexity of the proposed off-policy learning framework, which will be provided in the long version of this work.

Improving exploration efficiency is not the focus of this work.
In general, exploration efficiency is highly related to the properties of the MDP, such as transition probabilities, horizon, action dimension, etc. The exploration strategy should be designed with domain knowledge of the MDP in order to improve efficiency. We therefore did not specify a particular exploration strategy, but adopt state-of-the-art methods to conduct exploration.

Based on the above discussion, how frequently we encounter different (near)-optimal trajectories is a bottleneck on the sample efficiency of RPG. In MDPs where the transition probabilities of reaching the near-optimal trajectories are small, we rarely collect any near-optimal trajectories during the early stage of exploration, and the benefit of applying the proposed off-policy framework would be limited.

# 10.12 HYPERPARAMETERS

We present the training details of ranking policy gradient in Table 3. The network architecture is the same as the convolutional neural network used in DQN (Mnih et al., 2015). We update the RPG network every four timesteps with a minibatch of size 32. The replay ratio is equal to eight for all baselines and RPG (except for ACER, where we use the default setting in OpenAI Baselines (Dhariwal et al., 2017) for better performance).

Table 3: Hyperparameters of RPG network
| Hyperparameters | Value |
| --- | --- |
| Architecture | Conv(32-8×8-4) |
| | -Conv(64-4×4-2) |
| | -Conv(64-3×3-2) |
| | -FC(512) |
| Learning rate | 0.0000625 |
| Batch size | 32 |
| Replay buffer size | 1000000 |
| Update period | 4 |
| Margin in Eq (6) | 1 |
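The Table 3 settings can be collected into a single config for reference. Note that Eq (6) itself is not reproduced in this excerpt, so the pairwise hinge form sketched below is an assumption about how a margin hyperparameter typically enters a ranking loss, not the paper's exact objective:

```python
import numpy as np

# Values copied from Table 3.
RPG_CONFIG = {
    "learning_rate": 6.25e-5,
    "batch_size": 32,
    "replay_buffer_size": 1_000_000,
    "update_period": 4,   # environment steps between network updates
    "margin": 1.0,        # margin in Eq (6)
}

def pairwise_margin_loss(scores, best_action, margin=RPG_CONFIG["margin"]):
    """Hypothetical pairwise hinge: push the score of the action taken on a
    (near)-optimal trajectory above every other action's score by `margin`."""
    others = np.delete(scores, best_action)
    return np.maximum(0.0, margin - (scores[best_action] - others)).sum()
```

With `margin = 1`, the loss is zero only once the preferred action's score dominates every alternative by at least one unit, which is the usual effect of a ranking margin.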
\ No newline at end of file diff --git a/rankingpolicygradient/images.zip b/rankingpolicygradient/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..00fb9b3e7eea41a977970fce2ca1d6dbe133c987 --- /dev/null +++ b/rankingpolicygradient/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4f16a8821e41aaef9948107b0c2ba4857b14896e38f82f402c54aa6298fa4476 +size 1128076 diff --git a/rankingpolicygradient/layout.json b/rankingpolicygradient/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..6c771ea46090e00aefe7862d4eedf8db0dfb8e7f --- /dev/null +++ b/rankingpolicygradient/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3612141a2affbdb1fa0b2db735a3ac38c8d1e002e5c670a119d5dc32bebf4f3f +size 967965 diff --git a/rapidlearningorfeaturereusetowardsunderstandingtheeffectivenessofmaml/cccfbb6f-c2b5-419f-87d0-8e9ed4fc6a5f_content_list.json b/rapidlearningorfeaturereusetowardsunderstandingtheeffectivenessofmaml/cccfbb6f-c2b5-419f-87d0-8e9ed4fc6a5f_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..674351ec54e40962da10d64fa01f15d0f909ef13 --- /dev/null +++ b/rapidlearningorfeaturereusetowardsunderstandingtheeffectivenessofmaml/cccfbb6f-c2b5-419f-87d0-8e9ed4fc6a5f_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d93d54d4093e70b97128882fcf31f7f30af7edaee29ea385b6ce0aab367a6e9 +size 118331 diff --git a/rapidlearningorfeaturereusetowardsunderstandingtheeffectivenessofmaml/cccfbb6f-c2b5-419f-87d0-8e9ed4fc6a5f_model.json b/rapidlearningorfeaturereusetowardsunderstandingtheeffectivenessofmaml/cccfbb6f-c2b5-419f-87d0-8e9ed4fc6a5f_model.json new file mode 100644 index 0000000000000000000000000000000000000000..313ed94e031c45e966d0d470e2753fbdb4f85739 --- /dev/null +++ b/rapidlearningorfeaturereusetowardsunderstandingtheeffectivenessofmaml/cccfbb6f-c2b5-419f-87d0-8e9ed4fc6a5f_model.json @@ -0,0 
+1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:df835dd438b3ed892df83b50e42c6c986fc6c2d235ef2d42d58b1016c548998d +size 137805 diff --git a/rapidlearningorfeaturereusetowardsunderstandingtheeffectivenessofmaml/cccfbb6f-c2b5-419f-87d0-8e9ed4fc6a5f_origin.pdf b/rapidlearningorfeaturereusetowardsunderstandingtheeffectivenessofmaml/cccfbb6f-c2b5-419f-87d0-8e9ed4fc6a5f_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a2342ea693395eed2ce897013eeb6333cc186622 --- /dev/null +++ b/rapidlearningorfeaturereusetowardsunderstandingtheeffectivenessofmaml/cccfbb6f-c2b5-419f-87d0-8e9ed4fc6a5f_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bc34e842d5138821ce602661487847941804753a86301150a69e2890029e7929 +size 820395 diff --git a/rapidlearningorfeaturereusetowardsunderstandingtheeffectivenessofmaml/full.md b/rapidlearningorfeaturereusetowardsunderstandingtheeffectivenessofmaml/full.md new file mode 100644 index 0000000000000000000000000000000000000000..7a54b911439e8a0185c2cbbbbc3e3f77c4d179e0 --- /dev/null +++ b/rapidlearningorfeaturereusetowardsunderstandingtheeffectivenessofmaml/full.md @@ -0,0 +1,467 @@ +# RAPID LEARNING OR FEATURE REUSE? TOWARDS UNDERSTANDING THE EFFECTIVENESS OF MAML + +Aniruddh Raghu * + +MIT + +araghu@mit.edu + +Maithra Raghu * + +Cornell University & Google Brain + +maithrar@gmail.com + +Samy Bengio + +Google Brain + +Oriol Vinyals + +DeepMind + +# ABSTRACT + +An important research direction in machine learning has centered around developing meta-learning algorithms to tackle few-shot learning. An especially successful algorithm has been Model Agnostic Meta-Learning (MAML), a method that consists of two optimization loops, with the outer loop finding a meta-initialization, from which the inner loop can efficiently learn new tasks. 
Despite MAML's popularity, a fundamental open question remains – is the effectiveness of MAML due to the meta-initialization being primed for rapid learning (large, efficient changes in the representations) or due to feature reuse, with the meta-initialization already containing high quality features? We investigate this question, via ablation studies and analysis of the latent representations, finding that feature reuse is the dominant factor. This leads to the ANIL (Almost No Inner Loop) algorithm, a simplification of MAML where we remove the inner loop for all but the (task-specific) head of the underlying neural network. ANIL matches MAML's performance on benchmark few-shot image classification and RL and offers computational improvements over MAML. We further study the precise contributions of the head and body of the network, showing that performance on the test tasks is entirely determined by the quality of the learned features, and we can remove even the head of the network (the NIL algorithm). We conclude with a discussion of the rapid learning vs feature reuse question for meta-learning algorithms more broadly. + +# 1 INTRODUCTION + +A central problem in machine learning is few-shot learning, where new tasks must be learned with a very limited number of labelled datapoints. A significant body of work has looked at tackling this challenge using meta-learning approaches (Koch et al., 2015; Vinyals et al., 2016; Snell et al., 2017; Finn et al., 2017; Santoro et al., 2016; Ravi and Larochelle, 2016; Nichol and Schulman, 2018). Broadly speaking, these approaches define a family of tasks, some of which are used for training and others solely for evaluation. A proposed meta-learning algorithm then looks at learning properties that generalize across the different training tasks, and result in fast and efficient learning of the evaluation tasks. + +One highly successful meta-learning algorithm has been Model Agnostic Meta-Learning (MAML) (Finn et al., 2017). 
At a high level, the MAML algorithm is comprised of two optimization loops. The outer loop (in the spirit of meta-learning) aims to find an effective meta-initialization, from which the inner loop can perform efficient adaptation – optimize parameters to solve new tasks with very few labelled examples. This algorithm, with deep neural networks as the underlying model, has been highly influential, with significant follow on work, such as first order variants (Nichol and Schulman, 2018), probabilistic extensions (Finn et al., 2018), augmentation with generative modelling (Rusu et al., 2018), and many others (Hsu et al., 2018; Finn and Levine, 2017; Grant et al., 2018; Triantafillou et al., 2019). + +Despite the popularity of MAML, and the numerous followups and extensions, there remains a fundamental open question on the basic algorithm. Does the meta-initialization learned by the outer loop result in rapid learning on unseen test tasks (efficient but significant changes in the representations) or is the success primarily due to feature reuse (with the meta-initialization already providing high quality representations)? In this paper, we explore this question and its many surprising consequences. Our main contributions are: + +- We perform layer freezing experiments and latent representational analysis of MAML, finding that feature reuse is the predominant reason for efficient learning. +- Based on these results, we propose the ANIL (Almost No Inner Loop) algorithm, a significant simplification to MAML that removes the inner loop updates for all but the head (final layer) of a neural network during training and inference. ANIL performs identically to MAML on standard benchmark few-shot classification and RL tasks and offers computational benefits over MAML. 
- We study the effect of the head of the network, finding that once training is complete, the head can be removed, and the representations can be used without adaptation to perform unseen tasks, which we call the No Inner Loop (NIL) algorithm.
- We study different training regimes, e.g. multiclass classification, multitask learning, etc., and find that the task specificity of MAML/ANIL at training facilitates the learning of better features. We also find that multitask training, a popular baseline with no task specificity, performs worse than random features.
- We discuss rapid learning and feature reuse in the context of other meta-learning approaches.

# 2 RELATED WORK

MAML (Finn et al., 2017) is a highly popular meta-learning algorithm for few-shot learning, achieving competitive performance on several benchmark few-shot learning problems (Koch et al., 2015; Vinyals et al., 2016; Snell et al., 2017; Santoro et al., 2016; Ravi and Larochelle, 2016; Nichol and Schulman, 2018). It is part of the family of optimization-based meta-learning algorithms, with other members of this family presenting variations around how to learn the weights of the task-specific classifier. For example, Lee and Choi (2018); Gordon et al. (2018); Bertinetto et al. (2018); Lee et al. (2019); Zhou et al. (2018) first learn functions to embed the support set and target examples of a few-shot learning task, before using the test support set to learn task-specific weights to use on the embedded target examples. Harrison et al. (2018) also proceeds similarly, using a Bayesian approach. The method of Bao et al. (2019) explores a related approach, focusing on applications in text classification.

Of these optimization-based meta-learning algorithms, MAML has been especially influential, inspiring numerous direct extensions in recent literature (Antoniou et al., 2018; Finn et al., 2018; Grant et al., 2018; Rusu et al., 2018).
Most of these extensions critically rely on the core structure of the MAML algorithm, incorporating an outer loop (for meta-training), and an inner loop (for task-specific adaptation), and there is little prior work analyzing why this central part of the MAML algorithm is practically successful. In this work, we focus on this foundational question, examining how and why MAML leads to effective few-shot learning. To do this, we utilize analytical tools such as Canonical Correlation Analysis (CCA) (Raghu et al., 2017; Morcos et al., 2018) and Centered Kernel Alignment (CKA) (Kornblith et al., 2019) to study the neural network representations learned with the MAML algorithm, which also demonstrates MAML's ability to learn effective features for few-shot learning. + +Insights from this analysis lead to a simplified algorithm, ANIL, which almost completely removes the inner optimization loop with no reduction in performance. Prior works (Zintgraf et al., 2018; Javed and White, 2019) have proposed algorithms where some parameters are only updated in the outer loop and others only in the inner loop. However, these works are motivated by different questions, such as improving MAML's performance or learning better representations, rather than analysing rapid learning vs feature reuse in MAML. + +# 3 MAML, RAPID LEARNING, AND FEATURE REUSE + +Our goal is to understand whether the MAML algorithm efficiently solves new tasks due to rapid learning or feature reuse. In rapid learning, large representational and parameter changes occur during + +![](images/81dd4f870420f3af73dccf83577f0d04439ca170544741dcc4bf8228ff5a1b93.jpg) +Figure 1: Rapid learning and feature reuse paradigms. In Rapid Learning, outer loop training leads to a parameter setting that is well-conditioned for fast learning, and inner loop updates result in significant task specialization. 
In Feature Reuse, the outer loop leads to parameter values corresponding to reusable features, from which the parameters do not move significantly in the inner loop. + +![](images/ac4afde155ce79765b00c514b1d3fe082a89be47f2ca89b797128927bad25871.jpg) + +adaptation to each new task as a result of favorable weight conditioning from the meta-initialization. In feature reuse, the meta-initialization already contains highly useful features that can mostly be reused as is for new tasks, so little task-specific adaptation occurs. Figure 1 shows a schematic of these two hypotheses. + +We start off by overviewing the details of the MAML algorithm, and then we study the rapid learning vs feature reuse question via layer freezing experiments and analyzing latent representations of models trained with MAML. The results strongly support feature reuse as the predominant factor behind MAML's success. In Section 4, we explore the consequences of this, providing a significant simplification of MAML, the ANIL algorithm, and in Section 6, we outline the connections to meta-learning more broadly. + +# 3.1 OVERVIEW OF MAML + +The MAML algorithm finds an initialization for a neural network so that new tasks can be learnt with very few examples ( $k$ examples from each class for $k$ -shot learning) via two optimization loops: + +- Outer Loop: Updates the initialization of the neural network parameters (often called the meta-initialization) to a setting that enables fast adaptation to new tasks. +- Inner Loop: Performs adaptation: takes the outer loop initialization, and, separately for each task, performs a few gradient updates over the $k$ labelled examples (the support set) provided for adaptation. + +More formally, we first define our base model to be a neural network with meta-initialization parameters $\theta$ ; let this be represented by $f_{\theta}$ . We have a distribution $\mathcal{D}$ over tasks, and draw a batch $\{T_1,\dots,T_B\}$ of $B$ tasks from $\mathcal{D}$ . 
For each task $T_b$ , we have a support set of examples $S_{T_b}$ which are used for inner loop updates, and a target set of examples $\mathcal{Z}_{T_b}$ , which are used for outer loop updates. Let $\theta_i^{(b)}$ signify $\theta$ after $i$ gradient updates for task $T_b$ , and let $\theta_0^{(b)} = \theta$ . In the inner loop, during each update, we compute + +$$ +\theta_ {m} ^ {(b)} = \theta_ {m - 1} ^ {(b)} - \alpha \nabla_ {\theta_ {m - 1} ^ {(b)}} \mathcal {L} _ {S _ {T _ {b}}} \left(f _ {\theta_ {m - 1} ^ {(b)} (\theta)}\right) \tag {1} +$$ + +for $m$ fixed across all tasks, where $\mathcal{L}_{S_{T_b}}(f_{\theta_{m - 1}^{(b)}(\theta)})$ is the loss on the support set of $T_{b}$ after $m - 1$ inner loop updates. + +We then define the meta-loss as + +$$ +\mathcal {L} _ {\text {m e t a}} (\theta) = \sum_ {b = 1} ^ {B} \mathcal {L} _ {\mathcal {Z} _ {T _ {b}}} \left(f _ {\theta_ {m} ^ {(b)} (\theta)}\right) +$$ + +where $\mathcal{L}_{Z_{T_b}}(f_{\theta_m^{(b)}(\theta)})$ is the loss on the target set of $T_{b}$ after $m$ inner loop updates, making clear the dependence of $f_{\theta_m^{(b)}}$ on $\theta$ . The outer optimization loop then updates $\theta$ as + +$$ +\theta = \theta - \eta \nabla_ {\theta} \mathcal {L} _ {m e t a} (\theta) +$$ + +
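As a concrete illustration of the two optimization loops, here is a minimal toy sketch. It is not the paper's few-shot classification setup: the task family is scalar linear regression, there is one inner gradient step, and a finite-difference meta-gradient stands in for backpropagating through the inner loop:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_tasks(num_tasks=4, k=5):
    # Each task T_b: regress y = w * x, with support and target sets of k points.
    tasks = []
    for w in rng.uniform(0.5, 2.5, size=num_tasks):
        xs, xt = rng.normal(size=k), rng.normal(size=k)
        tasks.append(((xs, w * xs), (xt, w * xt)))
    return tasks

def grad(theta, x, y):
    return np.mean(2 * (theta * x - y) * x)  # d/dtheta of the MSE loss

def meta_loss(theta, tasks, alpha=0.01, m=1):
    # Inner loop (Eq 1): m gradient steps on each task's support set,
    # then evaluate the adapted parameters on the task's target set.
    total = 0.0
    for (xs, ys), (xt, yt) in tasks:
        th = theta
        for _ in range(m):
            th = th - alpha * grad(th, xs, ys)
        total += np.mean((th * xt - yt) ** 2)
    return total

eval_tasks = sample_tasks(16)
theta, eta, eps = 0.0, 0.02, 1e-4
for _ in range(300):  # outer loop: theta <- theta - eta * grad(L_meta)
    batch = sample_tasks()
    g = (meta_loss(theta + eps, batch) - meta_loss(theta - eps, batch)) / (2 * eps)
    theta -= eta * g
```

After outer-loop training, the meta-initialization `theta` sits where one inner step can reach each task's own regression weight, so the meta-loss on held-out tasks drops well below its value at the untrained initialization.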
| Freeze layers | MiniImageNet-5way-1shot | MiniImageNet-5way-5shot |
| --- | --- | --- |
| None | 46.9 ± 0.2 | 63.1 ± 0.4 |
| 1 | 46.5 ± 0.3 | 63.0 ± 0.6 |
| 1,2 | 46.4 ± 0.4 | 62.6 ± 0.6 |
| 1,2,3 | 46.3 ± 0.4 | 61.2 ± 0.5 |
| 1,2,3,4 | 46.3 ± 0.4 | 61.0 ± 0.6 |
+ +Table 1: Freezing successive layers (preventing inner loop adaptation) does not affect accuracy, supporting feature reuse. To test the amount of feature reuse happening in the inner loop adaptation, we test the accuracy of the model when we freeze (prevent inner loop adaptation) a contiguous block of layers at test time. We find that freezing even all four convolutional layers of the network (all layers except the network head) hardly affects accuracy. This strongly supports the feature reuse hypothesis: layers don't have to change rapidly at adaptation time; they already contain good features from the meta-initialization. + +At test time, we draw unseen tasks $\{T_1^{(test)},\dots,T_n^{(test)}\}$ from the task distribution, and evaluate the loss and accuracy on $\mathcal{Z}_{T_i^{(test)}}$ after inner loop adaptation using $S_{T_i^{(test)}}$ (e.g. loss is $\mathcal{L}_{\mathcal{Z}_{T_i^{(test)}}}\left(f_{\theta_m^{(i)}(\theta)}\right)$ ). + +# 3.2 RAPID LEARNING OR FEATURE REUSE? + +We now turn our attention to the key question: Is MAML's efficacy predominantly due to rapid learning or feature reuse? In investigating this question, there is an important distinction between the head (final layer) of the network and the earlier layers (the body of the network). In each few-shot learning task, there is a different alignment between the output neurons and classes. For instance, in task $\mathcal{T}_1$ , the (wlog) five output neurons might correspond, in order, to the classes (dog, cat, frog, cupcake, phone), while for a different task, $\mathcal{T}_2$ , they might correspond, in order, to (airplane, frog, boat, car, pumpkin). This means that the head must necessarily change for each task to learn the new alignment, and for the rapid learning vs feature reuse question, we are primarily interested in the behavior of the body of the network. We return to this in more detail in Section 5, where we present an algorithm (NIL) that does not use a head at test time. 
+ +To study rapid learning vs feature reuse in the network body, we perform two sets of experiments: (1) We evaluate few-shot learning performance when freezing parameters after MAML training, without test time inner loop adaptation; (2) We use representational similarity tools to directly analyze how much the network features and representations change through the inner loop. We use the MiniImageNet dataset, a popular standard benchmark for few-shot learning, and with the standard convolutional architecture in Finn et al. (2017). Results are averaged over three random seeds. Full implementation details are in Appendix B. + +# 3.2.1 FREEZING LAYER REPRESENTATIONS + +To study the impact of the inner loop adaptation, we freeze a contiguous subset of layers of the network, during the inner loop at test time (after using the standard MAML algorithm, incorporating both optimization loops, for training). In particular, the frozen layers are not updated at all to the test time task, and must reuse the features learned by the meta-initialization that the outer loop converges to. We compare the few-shot learning accuracy when freezing to the accuracy when allowing inner loop adaptation. + +Results are shown in Table 1. We observe that even when freezing all layers in the network body, performance hardly changes. This suggests that the meta-initialization has already learned good enough features that can be reused as is, without needing to perform any rapid learning for each test time task. + +# 3.2.2 REPRESENTATIONAL SIMILARITY EXPERIMENTS + +We next study how much the latent representations (the latent functions) learned by the neural network change during the inner loop adaptation phase. 
Following several recent works (Raghu et al., 2017; Saphra and Lopez, 2018; Morcos et al., 2018; Maheswaranathan et al., 2019; Raghu et al., + +![](images/58b72d642c7029fe397afd36562c4e74d522e473ba719f65fa66339e0afcb5a7.jpg) +Figure 2: High CCA/CKA similarity between representations before and after adaptation for all layers except the head. We compute CCA/CKA similarity between the representation of a layer before the inner loop adaptation and after adaptation. We observe that for all layers except the head, the CCA/CKA similarity is almost 1, indicating perfect similarity. This suggests that these layers do not change much during adaptation, but mostly perform feature reuse. Note that there is a slight dip in similarity in the higher conv layers (e.g. conv3, conv4); this is likely because the slight representational differences in conv1, conv2 have a compounding effect on the representations of conv3, conv4. The head of the network must change significantly during adaptation, and this is reflected in the much lower CCA/CKA similarity. + +![](images/4d0e9473b33b6e5d482996ba2a569e28619ddfc0708a490f4ce72038a70fac2f.jpg) + +2019; Gotmare et al., 2018; Bau et al., 2018) we measure this by applying Canonical Correlation Analysis (CCA) to the latent representations of the network. CCA provides a way to compare representations of two (latent) layers $L_{1}$ , $L_{2}$ of a neural network, outputting a similarity score between 0 (not similar at all) and 1 (identical). For full details, see Raghu et al. (2017); Morcos et al. (2018). In our analysis, we take $L_{1}$ to be a layer before the inner loop adaptation steps, and $L_{2}$ after the inner loop adaptation steps. We compute CCA similarity between $L_{1}$ , $L_{2}$ , averaging the similarity score across different random seeds of the model and different test time tasks. Full details are in Appendix B.2 + +The result is shown in Figure 2, left pane. 
Representations in the body of the network (the convolutional layers) are highly similar, with CCA similarity scores of $>0.9$ , indicating that the inner loop induces little to no functional change. By contrast, the head of the network, which does change significantly in the inner loop, has a CCA similarity of less than 0.5. To further validate this, we also compute CKA (Centered Kernel Alignment) (Kornblith et al., 2019) (Figure 2 right), another similarity metric for neural network representations, which illustrates the same pattern. These representational analysis results strongly support the feature reuse hypothesis, with further results in the Appendix, Sections B.3 and B.4 providing yet more evidence. + +# 3.2.3 FEATURE REUSE HAPPENS EARLY IN LEARNING + +Having observed that the inner loop does not significantly affect the learned representations with a fully trained model, we extend our analysis to see whether the inner loop affects representations and features earlier on in training. We take MAML models at 10000, 20000, and 30000 iterations into training, perform freezing experiments (as in Section 3.2.1) and representational similarity experiments (as in Section 3.2.2). + +Results in Figure 3 show the same patterns from early in training, with CCA similarity between activations pre and post inner loop update on MiniImageNet-5way-5shot being very high for the body (just like Figure 2), and similar to Table 1, test accuracy remaining approximately the same when freezing contiguous subsets of layers, even when freezing all layers of the network body. This shows that even early on in training, significant feature reuse is taking place, with the inner loop having minimal effect on learned representations and features. Results for 1shot MiniImageNet are in Appendix B.5, and show very similar trends. 
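The mean CCA similarity used in these experiments can be sketched in a few lines of numpy (a simplified mean canonical correlation, without the SVD preprocessing step used in Raghu et al. (2017)):

```python
import numpy as np

def mean_cca_similarity(A, B):
    """Mean canonical correlation between two activation matrices
    (rows: datapoints, columns: neurons). After centering and
    orthonormalizing each representation via QR, the singular values
    of Qa^T Qb are exactly the canonical correlations."""
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    return np.linalg.svd(Qa.T @ Qb, compute_uv=False).mean()

rng = np.random.default_rng(0)
pre = rng.normal(size=(500, 20))             # activations before inner loop adaptation
post_same = pre @ rng.normal(size=(20, 20))  # invertible linear map of the same features
post_diff = rng.normal(size=(500, 20))       # unrelated activations
```

Because CCA is invariant to invertible linear transforms, `post_same` scores near 1 against `pre`, while unrelated activations score low; this is the sense in which a score near 1 indicates the inner loop left a layer's function essentially unchanged.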
+ +# 4 THE ANIL (ALMOST NO INNER LOOP) ALGORITHM + +In the previous section we saw that for all layers except the head of the neural network, the metainitialization learned by the outer loop of MAML results in very good features that can be reused + +![](images/8d058a177fd33caa9cbdf725d94642865c8833370eed0218fb0bb18f79a8363c.jpg) +Figure 3: Inner loop updates have little effect on learned representations from early on in learning. Left pane: we freeze contiguous blocks of layers (no adaptation at test time), on MiniImageNet-5way-5shot and see almost identical performance. Right pane: representations of all layers except the head are highly similar pre/post adaptation – i.e. features are being reused. This is true from early (iteration 10000) in training. + +![](images/2245f9174388b1fa1860d7914fca2e898a94da5268f83841e57658c1a103e072.jpg) + +![](images/aa79b9050b01dd2f49c03b544b4b8fb240248ba701466c8ad112f565d4a5582e.jpg) +Figure 4: Schematic of MAML and ANIL algorithms. The difference between the MAML and ANIL algorithms: in MAML (left), the inner loop (task-specific) gradient updates are applied to all parameters $\theta$ , which are initialized with the meta-initialization from the outer loop. In ANIL (right), only the parameters corresponding to the network head $\theta_{head}$ are updated by the inner loop, during training and testing. + +![](images/a11a168f1a5458c84576683b9a343fcaf803b0c328370b605835df1d22940654.jpg) + +as is on new tasks. Inner loop adaptation does not significantly change the representations of these layers, even from early on in training. This suggests a natural simplification of the MAML algorithm: the ANIL (Almost No Inner Loop) algorithm. + +In ANIL, during training and testing, we remove the inner loop updates for the network body, and apply inner loop adaptation only to the head. The head requires the inner loop to allow it to align to the different classes in each task. 
In Section 5.1, we consider another variant, the NIL (No Inner Loop) algorithm, that removes the head entirely at test time, and uses learned features and cosine similarity to perform effective classification, thus avoiding inner loop updates altogether.

For the ANIL algorithm, mathematically, let $\theta = (\theta_{1},\dots,\theta_{l})$ be the (meta-initialization) parameters for the $l$ layers of the network. Following the notation of Section 3.1, let $\theta_m^{(b)}$ be the parameters after $m$ inner gradient updates for task $\mathcal{T}_b$. In ANIL, we have that:

$$
\theta_{m}^{(b)} = \left(\theta_{1}, \dots, \theta_{l-1},\; (\theta_{l})_{m-1}^{(b)} - \alpha \nabla_{(\theta_{l})_{m-1}^{(b)}} \mathcal{L}_{S_{b}}\left(f_{\theta_{m-1}^{(b)}}\right)\right)
$$

i.e. only the final layer gets the inner loop updates. As before, we then define the meta-loss, and compute the outer loop gradient update. The intuition for ANIL arises from Figure 3, where we observe that inner loop updates have little effect on the network body even early in training, suggesting the possibility of removing them entirely. Note that this is distinct from the freezing experiments, where we only removed the inner loop at inference time. Figure 4 presents the difference between MAML and ANIL, and Appendix C.1 works through a simple example of the gradient update in ANIL, showing how it differs from MAML.

Computational benefit of ANIL: Because ANIL has almost no inner loop, it significantly speeds up both training and inference. We found an average speedup of $1.7\mathrm{x}$ per training iteration over MAML and an average speedup of $4.1\mathrm{x}$ per inference iteration. In Appendix C.5 we provide the full results.
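As a concrete sketch of this head-only update, the following numpy snippet adapts only the head of a toy two-layer linear model in the inner loop, leaving the body untouched. The names and the squared-error loss are illustrative, not the paper's code:

```python
import numpy as np

def anil_inner_loop(theta, support_x, support_y, alpha=0.01, steps=5):
    """ANIL inner loop for a toy two-layer linear model y = (x @ w1.T) @ w2.T.

    theta = [w1, w2] is the meta-initialization; only the head w2 (the
    last element) receives inner-loop gradient updates, while the body
    w1 is reused as-is. Squared-error loss on the support set.
    """
    w1, w2 = theta[0], theta[1].copy()
    for _ in range(steps):
        feats = support_x @ w1.T                        # body features, never updated
        err = feats @ w2.T - support_y                  # prediction error
        grad_w2 = 2.0 * err.T @ feats / len(support_x)  # dL/dw2 only
        w2 = w2 - alpha * grad_w2
    return [w1, w2]                                     # task-adapted parameters
```

The outer loop would then differentiate the meta-loss on the target set through this adaptation to update both the body and the head, as in the equation above.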
| Method | Omniglot-20way-1shot | Omniglot-20way-5shot | MiniImageNet-5way-1shot | MiniImageNet-5way-5shot |
| --- | --- | --- | --- | --- |
| MAML | 93.7 ± 0.7 | 96.4 ± 0.1 | 46.9 ± 0.2 | 63.1 ± 0.4 |
| ANIL | 96.2 ± 0.5 | 98.0 ± 0.3 | 46.7 ± 0.4 | 61.5 ± 0.5 |
+ +
| Method | HalfCheetah-Direction | HalfCheetah-Velocity | 2D-Navigation |
| --- | --- | --- | --- |
| MAML | 170.4 ± 21.0 | -139.0 ± 18.9 | -20.3 ± 3.2 |
| ANIL | 363.2 ± 14.8 | -120.9 ± 6.3 | -20.1 ± 2.3 |
+ +Table 2: ANIL matches the performance of MAML on few-shot image classification and RL. On benchmark few-shot classification tasks MAML and ANIL have comparable accuracy, and also comparable average return (the higher the better) on standard RL tasks (Finn et al., 2017). + +![](images/ee1a5ffe4108766bd52391c8230935e199550e990b193437adc392879ae1e91b.jpg) +Figure 5: MAML and ANIL learn very similarly. Loss and accuracy curves for MAML and ANIL on MiniImageNet-5way-5shot, illustrating how MAML and ANIL behave similarly through the training process. + +Results of ANIL on Standard Benchmarks: We evaluate ANIL on few-shot image classification and RL benchmarks, using the same model architectures as the original MAML authors, for both supervised learning and RL. Further implementation details are in Appendix C.4. The results in Table 2 (mean and standard deviation of performance over three random initializations) show that ANIL matches the performance of MAML on both few-shot classification (accuracy) and RL (average return, the higher the better), demonstrating that the inner loop adaptation of the body is unnecessary for learning good features. + +MAML and ANIL Models Show Similar Behavior: MAML and ANIL perform equally well on few-shot learning benchmarks, illustrating that removing the inner loop during training does not hinder performance. To study the behavior of MAML and ANIL models further, we plot learning curves for both algorithms on MiniImageNet-5way-5shot, Figure 5. We see that loss and accuracy for both algorithms look very similar throughout training. We also look at CCA and CKA scores of the representations learned by both algorithms, Table 3. We observe that MAML-ANIL representations have the same average similarity scores as MAML-MAML and ANIL-ANIL representations, suggesting both algorithms learn comparable features (removing the inner loop doesn't change the kinds of features learned.) 
Further learning curves and representational similarity results are presented in Appendices C.2 and C.3. + +# 5 CONTRIBUTIONS OF THE NETWORK HEAD AND BODY + +So far, we have seen that MAML predominantly relies on feature reuse, with the network body (all layers except the last layer) already containing good features at meta-initialization. We also observe that such features can be learned even without inner loop adaptation during training (ANIL algorithm). The head, however, requires inner loop adaptation to enable task specificity. + +
| Model Pair | CCA Similarity | CKA Similarity |
| --- | --- | --- |
| MAML-MAML | 0.51 | 0.83 |
| ANIL-ANIL | 0.51 | 0.86 |
| ANIL-MAML | 0.50 | 0.83 |

Table 3: MAML and ANIL models learn comparable representations. Comparing the CCA/CKA similarity scores of MAML-ANIL representations (averaged over the network body) with the MAML-MAML and ANIL-ANIL similarity scores (across different random seeds) shows that the algorithmic differences between MAML and ANIL do not result in vastly different types of features being learned.
| Method | Omniglot-20way-1shot | Omniglot-20way-5shot | MiniImageNet-5way-1shot | MiniImageNet-5way-5shot |
| --- | --- | --- | --- | --- |
| MAML | 93.7 ± 0.7 | 96.4 ± 0.1 | 46.9 ± 0.2 | 63.1 ± 0.4 |
| ANIL | 96.2 ± 0.5 | 98.0 ± 0.3 | 46.7 ± 0.4 | 61.5 ± 0.5 |
| NIL | 96.7 ± 0.3 | 98.0 ± 0.04 | 48.0 ± 0.7 | 62.2 ± 0.5 |
+ +Table 4: NIL algorithm performs as well as MAML and ANIL on few-shot image classification. Performance of MAML, ANIL, and NIL on few-shot image classification benchmarks. We see that with no test-time inner loop, and just learned features, NIL performs comparably to MAML and ANIL, indicating the strength of the learned features, and the relative lack of importance of the head at test time. + +In this section, we explore the contributions of the network head and body. We first ask: How important is the head at test time, when good features have already been learned? Motivating this question is that the features in the body of the network needed no adaptation at inference time, so perhaps they are themselves sufficient to perform classification, with no head. In Section 5.1, we find that test time performance is entirely determined by the quality of these representations, and we can use similarity of the frozen meta-initialization representations to perform unseen tasks, removing the head entirely. We call this the NIL (No Inner Loop) algorithm. + +Given this result, we next study how useful the head is at training (in ensuring the network body learns good features). We look at multiple different training regimes (some without the head) for the network body, and evaluate the quality of the representations. We find that MAML/ANIL result in the best representations, demonstrating the importance of the head during training for feature learning. + +# 5.1 THE HEAD AT TEST TIME AND THE NIL (NO INNER LOOP) ALGORITHM + +We study how important the head and task specific alignment are when good features have already been learned (through training) by the meta-initialization. At test time, we find that the representations can be used directly, with no adaptation, which leads to the No Inner Loop (NIL) algorithm: + +1. Train a few-shot learning model with ANIL/MAML algorithm as standard. We use ANIL training. +2. At test time, remove the head of the trained model. 
3. For each task, first pass the $k$ labelled examples (support set) through the body of the network, to get their penultimate layer representations. Then, for a test example, compute cosine similarities between its penultimate layer representation and those of the support set, using these similarities to weight the support set labels, as in Vinyals et al. (2016).

The results for the NIL algorithm, following ANIL training, on few-shot classification benchmarks are given in Table 4. Despite having no network head and no task specific adaptation, NIL performs comparably to MAML and ANIL. This demonstrates that the features learned by the network body when training with MAML/ANIL (and reused at test time) are the critical component in tackling these benchmarks.

# 5.2 TRAINING REGIMES FOR THE NETWORK BODY

The NIL algorithm and results of Section 5.1 lead to the question of how important task alignment and the head are during training to ensure good features. Here, we study this question by examining the quality of features arising from different training regimes for the body. We look at (i) MAML
| Method | MiniImageNet-5way-1shot | MiniImageNet-5way-5shot |
| --- | --- | --- |
| MAML training-NIL head | 48.4 ± 0.3 | 61.5 ± 0.8 |
| ANIL training-NIL head | 48.0 ± 0.7 | 62.2 ± 0.5 |
| Multiclass training-NIL head | 39.7 ± 0.3 | 54.4 ± 0.5 |
| Multitask training-NIL head | 26.5 ± 1.1 | 34.2 ± 3.5 |
| Random features-NIL head | 32.9 ± 0.6 | 43.2 ± 0.5 |
| NIL training-NIL head | 38.3 ± 0.6 | 43.0 ± 0.2 |

Table 5: MAML/ANIL training leads to superior learned features, supporting the importance of the head at training. Training with MAML/ANIL leads to superior performance over other methods which do not have task specific heads.

and ANIL training; (ii) multiclass classification, where all of the training data and classes (from which training tasks are drawn) are used to perform standard classification; (iii) multitask training, a standard baseline, where no inner loop or task specific head is used, but the network is trained on all the tasks at the same time; (iv) random features, where the network is not trained at all, and features are frozen after random initialization; (v) NIL at training time, where there is no head and cosine distance on the representations is used to get the label.

After training, we apply the NIL algorithm to evaluate test performance, and the quality of features learned at training. The results are shown in Table 5. MAML and ANIL training perform best. Multitask training, which has no task specific head, performs the worst, even worse than random features (adding evidence for the need for task specificity at training to facilitate feature learning). Using NIL during training performs worse than MAML/ANIL. These results suggest that the head is important at training to learn good features in the network body.

In Appendix D.1, we study test time performance variations from using a MAML/ANIL head instead of NIL, finding (as suggested by Section 5.1) very little performance difference. Additional results on the similarity between the representations of different training regimes are given in Appendix D.2.

# 6 FEATURE REUSE IN OTHER META-LEARNING ALGORITHMS

Up until now, we have closely examined the MAML algorithm, and have demonstrated empirically that the algorithm's success is primarily due to feature reuse, rather than rapid learning.
We now discuss rapid learning vs feature reuse more broadly in meta-learning. By combining our results with an analysis of evidence reported in prior work, we find support for many meta-learning algorithms succeeding via feature reuse, identifying a common theme characterizing the operating regime of much of current meta-learning. + +# 6.1 OPTIMIZATION AND MODEL BASED META-LEARNING + +MAML falls within the broader class of optimization based meta-learning algorithms, which at inference time, directly optimize model parameters for a new task using the support set. MAML has inspired many other optimization-based algorithms, which utilize the same two-loop structure (Lee and Choi, 2018; Rusu et al., 2018; Finn et al., 2018). Our analysis so far has thus yielded insights into the feature reuse vs rapid learning question for this class of algorithms. Another broad class of meta-learning consists of model based algorithms, which also have notions of rapid learning and feature reuse. + +In the model-based setting, the meta-learning model's parameters are not directly optimized for the specific task on the support set. Instead, the model typically conditions its output on some representation of the task definition. One way to achieve this conditioning is to jointly encode the entire support set in the model's latent representation (Vinyals et al., 2016; Sung et al., 2018), enabling it to adapt to the characteristics of each task. This constitutes rapid learning for model based meta-learning algorithms. + +An alternative to joint encoding would be to encode each member of the support set independently, and apply a cosine similarity rule (as in Vinyals et al. (2016)) to classify an unlabelled example. This mode of operation is purely feature reuse – we do not use information defining the task to directly influence the decision function. 
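The independent-encoding, cosine-similarity decision rule described above (also the basis of the NIL algorithm of Section 5.1) can be sketched as follows. This is an illustrative sketch with hypothetical names, not any paper's code:

```python
import numpy as np

def cosine_attention_classify(query_emb, support_embs, support_labels, n_way):
    """Classify a query by cosine similarity to independently encoded support examples.

    query_emb: (d,) penultimate-layer features of the query example.
    support_embs: (k, d) features of the k labelled support examples.
    support_labels: (k,) integer labels in [0, n_way).
    Similarities weight the (one-hot) support labels; predicts argmax class.
    """
    q = query_emb / np.linalg.norm(query_emb)
    s = support_embs / np.linalg.norm(support_embs, axis=1, keepdims=True)
    sims = s @ q                              # (k,) cosine similarities
    one_hot = np.eye(n_way)[support_labels]   # (k, n_way) label indicators
    scores = sims @ one_hot                   # per-class summed similarity
    return int(np.argmax(scores))
```

Matching Networks additionally pass the similarities through a softmax attention; a plain per-class sum is used here for brevity, and no task-conditioned (joint) encoding of the support set is involved.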

If joint encoding gave significant test-time improvement over non-joint encoding, then this would suggest that rapid learning of the test-time task is taking place, as task specific information is being utilized to influence the model's decision function. However, on analyzing results in the prior literature, this improvement appears to be minimal. Indeed, in Matching Networks (Vinyals et al., 2016), for example, joint encoding reaches $44.2\%$ accuracy on MiniImageNet-5way-1shot, whereas independent encoding obtains $41.2\%$: a small difference. More refined models suggest the gap is even smaller. For instance, in Chen et al. (2019), many methods for one shot learning were re-implemented and studied, and baselines without joint encoding achieved $48.24\%$ accuracy on MiniImageNet-5way-1shot, whilst models using joint encoding such as Relation Net (Sung et al., 2018) achieved the very similar accuracy of $49.31\%$ (they also report MAML, at $46.47\%$). As a result, we believe that feature reuse, rather than rapid learning, is currently the dominant mode of operation in both MAML-style optimization based meta-learning and model based meta-learning.

# 7 CONCLUSION

In this paper, we studied a fundamental question: whether the highly successful MAML algorithm relies on rapid learning or feature reuse. Through a series of experiments, we found that feature reuse is the dominant component in MAML's efficacy on benchmark datasets. This insight led to the ANIL (Almost No Inner Loop) algorithm, a simplification of MAML that has identical performance on standard image classification and reinforcement learning benchmarks, and provides computational benefits.
We further studied the importance of the head (final layer) of a neural network trained with MAML, discovering that the body (lower layers) of a network is sufficient for few-shot classification at test time, allowing us to remove the network head for testing (NIL algorithm) and still match performance. We connected our results to the broader literature in meta-learning, identifying feature reuse to be a common mode of operation for other meta-learning algorithms also. Based on our conclusions, future work could look at developing and analyzing new meta-learning algorithms that perform more rapid learning, which may expand the datasets and problems amenable to these techniques. We note that our study mainly considered benchmark datasets, such as Omniglot and MiniImageNet. It is an interesting future direction to consider rapid learning and feature reuse in MAML on other few-shot learning datasets, such as those from Triantafillou et al. (2019).

# ACKNOWLEDGEMENTS

The authors thank Geoffrey Hinton, Chelsea Finn, Hugo Larochelle and Chiyuan Zhang for helpful feedback on the methods and results.

# REFERENCES

Antreas Antoniou, Harrison Edwards, and Amos Storkey. How to train your MAML. arXiv preprint arXiv:1810.09502, 2018.
Yujia Bao, Menghua Wu, Shiyu Chang, and Regina Barzilay. Few-shot text classification with distributional signatures. arXiv preprint arXiv:1908.06039, 2019.
Anthony Bau, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James Glass. Identifying and controlling important neurons in neural machine translation. arXiv preprint arXiv:1811.01157, 2018.
Luca Bertinetto, Joao F Henriques, Philip HS Torr, and Andrea Vedaldi. Meta-learning with differentiable closed-form solvers. arXiv preprint arXiv:1805.08136, 2018.
Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, and Jia-Bin Huang. A closer look at few-shot classification. arXiv preprint arXiv:1904.04232, 2019.
Chelsea Finn and Sergey Levine.
Meta-learning and universality: Deep representations and gradient descent can approximate any learning algorithm. arXiv preprint arXiv:1710.11622, 2017. +Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1126-1135. JMLR.org, 2017. +Chelsea Finn, Kelvin Xu, and Sergey Levine. Probabilistic model-agnostic meta-learning. In Advances in Neural Information Processing Systems, pages 9516-9527, 2018. +Jonathan Gordon, John Bronskill, Matthias Bauer, Sebastian Nowozin, and Richard E Turner. Meta-Learning probabilistic inference for prediction. arXiv preprint arXiv:1805.09921, 2018. +Akhilesh Gotmare, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. A closer look at deep learning heuristics: Learning rate restarts, warmup and distillation. arXiv preprint arXiv:1810.13243, 2018. +Erin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, and Thomas Griffiths. Recasting gradient-based meta-learning as hierarchical bayes. arXiv preprint arXiv:1801.08930, 2018. +James Harrison, Apoorva Sharma, and Marco Pavone. Meta-learning priors for efficient online bayesian regression. arXiv preprint arXiv:1807.08912, 2018. +Kyle Hsu, Sergey Levine, and Chelsea Finn. Unsupervised learning via meta-learning. arXiv preprint arXiv:1810.02334, 2018. +Khurram Javed and Martha White. Meta-learning representations for continual learning. arXiv preprint arXiv:1905.12588, 2019. +Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. Siamese neural networks for one-shot image recognition. In ICML Deep Learning Workshop, volume 2, 2015. +Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. Similarity of neural network representations revisited. arXiv preprint arXiv:1905.00414, 2019. +Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, and Stefano Soatto. Meta-learning with differentiable convex optimization. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10657-10665, 2019.
Yoonho Lee and Seungjin Choi. Gradient-based meta-learning with learned layerwise metric and subspace. arXiv preprint arXiv:1801.05558, 2018.

Yixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, and John E Hopcroft. Convergent learning: Do different neural networks learn the same representations? In FE@NIPS, pages 196-212, 2015.
Niru Maheswaranathan, Alex H. Williams, Matthew D. Golub, Surya Ganguli, and David Sussillo. Universality and individuality in neural dynamics across large populations of recurrent networks. arXiv preprint arXiv:1907.08549, 2019.
Ari S Morcos, Maithra Raghu, and Samy Bengio. Insights on representational similarity in neural networks with canonical correlation. arXiv preprint arXiv:1806.05759, 2018.
Alex Nichol and John Schulman. Reptile: A scalable metalearning algorithm. arXiv preprint arXiv:1803.02999, 2018.
Maithra Raghu. SVCCA Code and Tutorials. https://github.com/google/svcca.
Maithra Raghu, Justin Gilmer, Jason Yosinski, and Jascha Sohl-Dickstein. SVCCA: Singular vector canonical correlation analysis for deep learning dynamics and interpretability. In Advances in Neural Information Processing Systems, pages 6076-6085, 2017.
Maithra Raghu, Chiyuan Zhang, Jon Kleinberg, and Samy Bengio. Transfusion: Understanding transfer learning with applications to medical imaging. arXiv preprint arXiv:1902.07208, 2019.
Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. 2016.
Andrei A Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. Meta-learning with latent embedding optimization. arXiv preprint arXiv:1807.05960, 2018.
Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. In International Conference on Machine Learning, pages 1842-1850, 2016.
Naomi Saphra and Adam Lopez. Understanding learning dynamics of language models with SVCCA. arXiv preprint arXiv:1811.00225, 2018.
Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pages 4077-4087, 2017.
Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1199-1208, 2018.
Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Manzagol, and Hugo Larochelle. Meta-Dataset: A dataset of datasets for learning to learn from few examples. arXiv preprint arXiv:1903.03096, 2019.
Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. Matching Networks for one shot learning. In Advances in Neural Information Processing Systems, pages 3630-3638, 2016.
Liwei Wang, Lunjia Hu, Jiayuan Gu, Zhiqiang Hu, Yue Wu, Kun He, and John E. Hopcroft. To what extent do different neural networks learn the same representation: A neuron activation subspace match approach. In Advances in Neural Information Processing Systems, 2018.
Fengwei Zhou, Bin Wu, and Zhenguo Li. Deep meta-learning: Learning to learn in the concept space. arXiv preprint arXiv:1802.03596, 2018.
Luisa M Zintgraf, Kyriacos Shiarlis, Vitaly Kurin, Katja Hofmann, and Shimon Whiteson. Fast context adaptation via meta-learning. arXiv preprint arXiv:1810.03642, 2018.

# A FEW-SHOT IMAGE CLASSIFICATION DATASETS AND EXPERIMENTAL SETUPS

We consider the few-shot learning paradigm for image classification to evaluate MAML and ANIL. We evaluate using two datasets often used for few-shot multiclass classification – the Omniglot dataset and the MiniImageNet dataset.

Omniglot: The Omniglot dataset consists of over 1600 different handwritten character classes from 50 alphabets.
The dataset is split on a character-level, so that certain characters are in the training set, and others in the validation set. We consider the 20-way 1-shot and 20-way 5-shot tasks on this dataset, where at test time, we wish our classifier to discriminate between 20 randomly chosen character classes from the held-out set, given only 1 or 5 labelled example(s) from each class from this set of 20 testing classes respectively. The model architecture used is identical to that in the original MAML paper, namely: 4 modules with $3 \times 3$ convolutions and 64 filters with a stride of 2, followed by batch normalization and a ReLU nonlinearity. The Omniglot images are downsampled to $28 \times 28$, so the dimensionality of the last hidden layer is 64. The last layer is fed into a 20-way softmax. Our models are trained using a batch size of 16, 5 inner loop updates, and an inner learning rate of 0.1.

MiniImageNet: The MiniImageNet dataset was proposed by Ravi and Larochelle (2016), and consists of 64 training classes, 12 validation classes, and 24 test classes. We consider the 5-way 1-shot and 5-way 5-shot tasks on this dataset, where the test-time task is to classify among 5 different randomly chosen validation classes, given only 1 and 5 labelled examples respectively. The model architecture is again identical to that in the original paper: 4 modules with $3 \times 3$ convolutions and 32 filters, followed by batch normalization, a ReLU nonlinearity, and $2 \times 2$ max pooling. Our models are trained using a batch size of 4, 5 inner loop update steps, and an inner learning rate of 0.01. 10 inner gradient steps are used for evaluation at test time.

# B ADDITIONAL DETAILS AND RESULTS: FREEZING AND REPRESENTATIONAL SIMILARITY

In this section, we provide further experimental details and results from the freezing and representational similarity experiments.
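As a reminder, the freezing experiments of Section 3.2.1 adapt only part of the network at test time. A schematic numpy sketch of that protocol (hypothetical names; `grads_fn` stands in for autodiff of the support-set loss on the current parameters):

```python
import numpy as np

def adapt_with_freezing(theta, grads_fn, num_frozen, alpha=0.01, steps=10):
    """Test-time adaptation that freezes the first `num_frozen` layers.

    theta: list of per-layer parameter arrays (the meta-initialization).
    grads_fn: maps a parameter list to a list of per-layer gradients of
        the support-set loss (stands in for automatic differentiation).
    Frozen layers keep their meta-initialized values; the remaining
    layers receive `steps` gradient updates.
    """
    adapted = [p.copy() for p in theta]
    for _ in range(steps):
        grads = grads_fn(adapted)
        for layer in range(num_frozen, len(adapted)):  # skip frozen layers
            adapted[layer] = adapted[layer] - alpha * grads[layer]
    return adapted
```

With `num_frozen = len(theta) - 1` this reduces to head-only adaptation, i.e. the inference-time behavior of ANIL.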

# B.1 EXPERIMENTAL DETAILS

We concentrate on MiniImageNet for our experiments in Section 3.2, as it is more complex than Omniglot.

The model architecture used for our experiments is identical to that in the original paper: 4 modules with $3 \times 3$ convolutions and 32 filters, followed by batch normalization, a ReLU nonlinearity, and $2 \times 2$ max pooling. Our models are trained using a batch size of 4, 5 inner loop update steps, and an inner learning rate of 0.01. 10 inner gradient steps are used for evaluation at test time. We train models 3 times with different random seeds. Models were trained for 30000 iterations.

# B.2 DETAILS OF REPRESENTATIONAL SIMILARITY

CCA takes in as inputs $L_{1} = \{z_{1}^{(1)}, z_{2}^{(1)}, \dots, z_{m}^{(1)}\}$ and $L_{2} = \{z_{1}^{(2)}, z_{2}^{(2)}, \dots, z_{n}^{(2)}\}$, where $L_{1}, L_{2}$ are layers, and $z_{i}^{(j)}$ is a neuron activation vector: the vector of outputs of neuron $i$ (of layer $L_{j}$) over a set of inputs $X$. It then finds linear combinations of the neurons in $L_{1}$ and neurons in $L_{2}$ so that the resulting activation vectors are maximally correlated, which is summarized in the canonical correlation coefficient. Iteratively repeating this process gives a similarity score (in $[0, 1]$, with 1 identical and 0 completely different) between the representations of $L_{1}$ and $L_{2}$.

We apply this to compare corresponding layers of two networks, net1 and net2, where net1 and net2 might differ due to training step, training method (ANIL vs MAML) or the random seed. When comparing convolutional layers, as described in Raghu et al. (2017), we perform the comparison over channels,

![](images/5aea06fa6e29d86d53e90e0886e092f9fb22969f3e58399fe3575cb5a5bd2408.jpg)
Figure 6: Euclidean distance before and after finetuning for MiniImageNet. We compute the average (across tasks) Euclidean distance between the weights before and after inner loop adaptation, separately for different layers.
We observe that all layers except for the final layer show very little difference before and after inner loop adaptation, suggesting significant feature reuse.

![](images/e16057cfef7ce8d4bc74e2c3d883e3c8b6fce0e1e8d48f3b99a1ce3c8acbe1d8.jpg)

flattening out over all of the spatial dimensions, and then taking the mean CCA coefficient. We average over three random repeats.

# B.3 SIMILARITY BEFORE AND AFTER INNER LOOP WITH EUCLIDEAN DISTANCE

In addition to assessing representational similarity with CCA/CKA, we also consider the simpler measure of Euclidean distance, capturing how much the weights of the network change during the inner loop update (task-specific finetuning). We note that this experiment does not assess functional changes over inner loop updates as well as the CCA experiments do; however, it serves to provide useful intuition.

We plot the per-layer average Euclidean distance between the initialization $\theta$ and the finetuned weights $\theta_{m}^{(b)}$ across different tasks $\mathcal{T}_{b}$, i.e.

$$
\frac{1}{N} \sum_{b=1}^{N} \left\| \theta_{l} - (\theta_{l})_{m}^{(b)} \right\|
$$

across different layers $l$, for MiniImageNet in Figure 6. We observe that very quickly after the start of training, all layers except for the last layer have a small Euclidean distance between their weights before and after finetuning, suggesting significant feature reuse. (Note that this is despite the fact that these layers have more parameters than the final layer.)

# B.4 CCA SIMILARITY ACROSS RANDOM SEEDS

The experiment in Section 3.2.2 compared representational similarity of $L_{1}$ and $L_{2}$ at different points in training (before/after inner loop adaptation) but corresponding to the same random seed. To complete the picture, it is useful to study whether representational similarity across different random seeds is also mostly unaffected by the inner loop adaptation.
This motivates four natural comparisons: assume layer $L_{1}$ is from the first seed, and layer $L_{2}$ is from the second seed. Then we can compute the representational similarity between $(L_{1} \text{ pre}, L_{2} \text{ pre})$ , $(L_{1} \text{ pre}, L_{2} \text{ post})$ , $(L_{1} \text{ post}, L_{2} \text{ pre})$ and $(L_{1} \text{ post}, L_{2} \text{ post})$ , where pre/post signify whether we take the representation before or after adaptation. + +Prior work has shown that neural network representations may vary across different random seeds (Raghu et al., 2017; Morcos et al., 2018; Li et al., 2015; Wang et al., 2018), organically resulting in CCA similarity scores much less than 1. So to identify the effect of the inner loop on the representation, we plot the CCA similarities of (i) $(L_{1}$ pre, $L_{2}$ pre) against $(L_{1}$ pre, $L_{2}$ post) and (ii) $(L_{1}$ pre, $L_{2}$ pre) against $(L_{1}$ post, $L_{2}$ pre) and (iii) $(L_{1}$ pre, $L_{2}$ pre) against $(L_{1}$ post, $L_{2}$ post) separately across the different random seeds and different layers. We then compute the line of best fit for each plot. If the line of best fit fits the data and is close to $y = x$ , this suggests that the inner loop adaptation doesn't affect the features much - the similarity before adaptation is very close to the similarity after adaptation. + +![](images/c09c48c4ba3cdb8e8c93271e955b39f3f7aec5e66fe7f2277167e5b60886223d.jpg) +Figure 7: Computing CCA similarity pre/post adaptation across different random seeds further demonstrates that the inner loop doesn't change representations significantly. We compute CCA similarity of $L_{1}$ from seed 1 and $L_{2}$ from seed 2, varying whether we take the representation pre (before) adaptation or post (after) adaptation. 
To isolate the effect of adaptation from inherent variation in the network representation across seeds, we plot the CCA similarity of the representations before adaptation against the representations after adaptation in three different combinations: (i) $(L_{1}$ pre, $L_{2}$ pre) against $(L_{1}$ pre, $L_{2}$ post), (ii) $(L_{1}$ pre, $L_{2}$ pre) against $(L_{1}$ post, $L_{2}$ pre), and (iii) $(L_{1}$ pre, $L_{2}$ pre) against $(L_{1}$ post, $L_{2}$ post). We do this separately across different random seeds and different layers. Then, we compute a line of best fit, finding that in all three plots, it is almost identical to $y = x$, demonstrating that the representation does not change significantly pre/post adaptation. Furthermore, a computation of the coefficient of determination $R^{2}$ gives $R^{2} \approx 1$, illustrating that the data is well explained by this relation. In Figure 8, we perform this comparison with CKA, observing the same high level conclusions.

![](images/335fc672187d6adb025cea194f08698102f3a5e80df89e51744185581044cf07.jpg)

![](images/24ec9f04da3ed5f3fb81940b69cc2068ffc40b6c452573a6f5983959b3246bb9.jpg)

![](images/e1c0c0144b596cb281eb1b438bf82394c5c718a02391f69a890f886ce5855749.jpg)
Figure 8: We perform the same comparison as in Figure 7, but with CKA instead. There is more variation in the similarity scores, but we still see a strong correlation between (Pre, Pre) and (Post, Post) comparisons, showing that representations do not change significantly over the inner loop.

The results are shown in Figure 7. In all of the plots, we see that the line of best fit is almost exactly $y = x$ (even for the pre/pre vs post/post plot, which could conceivably be more different as both seeds change) and a computation of the coefficient of determination $R^2$ gives $R^2 \approx 1$ for all three plots.
Putting this together with Figure 2, we can conclude that the inner loop adaptation step doesn't affect the representation learned by any layer except the head, and that the learned representations and features are mostly reused as is for the different tasks. + +# B.5 MINIIMAGENET-5WAY-1SHOT FREEZING AND CCA OVER TRAINING + +Figure 9 shows that from early on in training, on MiniImageNet-5way-1shot, that the CCA similarity between activations pre and post inner loop update is very high for all layers but the head. We further see that the validation set accuracy suffers almost no decrease if we remove the inner loop updates and freeze all layers but the head. This shows that even early on in training, the inner loop appears to have minimal effect on learned representations and features. This supplements the results seen in Figure 3 on MiniImageNet-5way-5shot. + +![](images/5c4c00c4497fb7961c8df8e7a7515cdbafbb8552cadb74c72827a667a4780ed7.jpg) +Figure 9: Inner loop updates have little effect on learned representations from early on in learning. We consider freezing and representational similarity experiments for MiniImageNet-5way-1shot. We see that early on in training (from as few as 10k iterations in), the inner loop updates have little effect on the learned representations and features, and that removing the inner loop updates for all layers but the head have little-to-no impact on the validation set accuracy. + +![](images/3dc607408617fab4ac2f6d20b49da8ad559f4d3898f2411ef78f7fe87619e754.jpg) + +# C ANIL ALGORITHM: MORE DETAILS + +In this section, we provide more details about the ANIL algorithm, including an example of the ANIL update, implementation details, and further experimental results. + +# C.1 AN EXAMPLE OF THE ANIL UPDATE + +Consider a simple, two layer linear network with a single hidden unit in each layer: $\hat{y}(x; \boldsymbol{\theta}) = \theta_2(\theta_1 x)$ . In this example, $\theta_2$ is the head. 
Consider the 1-shot regression problem, where we have access to examples $\left\{(x_1^{(t)}, y_1^{(t)}), (x_2^{(t)}, y_2^{(t)})\right\}$ for tasks $t = 1, \dots, T$. Note that $(x_1^{(t)}, y_1^{(t)})$ is the (example, label) pair in the meta-training set (used for inner loop adaptation - the support set), and $(x_2^{(t)}, y_2^{(t)})$ is the pair in the meta-validation set (used for the outer loop update - the target set).

In the few-shot learning setting, we first draw a set of $N$ tasks and labelled examples from our meta-training set: $\left\{(x_1^{(1)},y_1^{(1)}),\ldots ,(x_1^{(N)},y_1^{(N)})\right\}$. Assume for simplicity that we only apply one gradient step in the inner loop. The inner loop updates for each task are thus defined as follows:

$$
\theta_{1}^{(t)} \leftarrow \theta_{1} - \frac{\partial L\left(\hat{y}\left(x_{1}^{(t)}; \boldsymbol{\theta}\right), y_{1}^{(t)}\right)}{\partial \theta_{1}} \tag{1}
$$

$$
\theta_{2}^{(t)} \leftarrow \theta_{2} - \frac{\partial L\left(\hat{y}\left(x_{1}^{(t)}; \boldsymbol{\theta}\right), y_{1}^{(t)}\right)}{\partial \theta_{2}} \tag{2}
$$

where $L(\cdot, \cdot)$ is the loss function (e.g., mean squared error) and $\theta_i^{(t)}$ refers to a parameter after the inner loop update for task $t$.

The task-adapted parameters for MAML and ANIL are as follows.
Note how only the head parameters change per-task in ANIL: + +$$ +\boldsymbol {\theta} _ {\mathrm {M A M L}} ^ {(t)} = \left[ \theta_ {1} ^ {(t)}, \theta_ {2} ^ {(t)} \right] \tag {3} +$$ + +$$ +\boldsymbol {\theta} _ {\text {A N I L}} ^ {(t)} = \left[ \theta_ {1}, \theta_ {2} ^ {(t)} \right] \tag {4} +$$ + +In the outer loop update, we then perform the following operations using the data from the meta-validation set: + +$$ +\theta_ {1} \leftarrow \theta_ {1} - \sum_ {t = 1} ^ {N} \frac {\partial L \left(\hat {y} \left(x _ {2} ^ {(t)} ; \boldsymbol {\theta} ^ {(t)}\right) , y _ {2} ^ {(t)}\right)}{\partial \theta_ {1}} \tag {5} +$$ + +$$ +\theta_ {2} \leftarrow \theta_ {2} - \sum_ {t = 1} ^ {N} \frac {\partial L \left(\hat {y} \left(x _ {2} ^ {(t)} ; \boldsymbol {\theta} ^ {(t)}\right) , y _ {2} ^ {(t)}\right)}{\partial \theta_ {2}} \tag {6} +$$ + +Considering the update for $\theta_{1}$ in more detail for our simple, two layer, linear network (the case for $\theta_{2}$ is analogous), we have the following update for MAML: + +$$ +\theta_ {1} \leftarrow \theta_ {1} - \sum_ {t = 1} ^ {N} \frac {\partial L \left(\hat {y} \left(x _ {2} ^ {(t)} ; \boldsymbol {\theta} _ {\mathrm {M A M L}} ^ {(t)}\right) , y _ {2} ^ {(t)}\right)}{\partial \theta_ {1}} \tag {7} +$$ + +$$ +\hat {y} (x _ {2} ^ {(t)}; \boldsymbol {\theta} _ {\mathrm {M A M L}} ^ {(t)}) = \left(\left[ \theta_ {2} - \frac {\partial L (\hat {y} (x _ {1} ^ {(t)} ; \boldsymbol {\theta}) , y _ {1} ^ {(t)})}{\partial \theta_ {2}} \right] \cdot \left[ \theta_ {1} - \frac {\partial L (\hat {y} (x _ {1} ^ {(t)} ; \boldsymbol {\theta}) , y _ {1} ^ {(t)})}{\partial \theta_ {1}} \right] \cdot x _ {2}\right) \tag {8} +$$ + +For ANIL, on the other hand, the update will be: + +$$ +\theta_ {1} \leftarrow \theta_ {1} - \sum_ {t = 1} ^ {N} \frac {\partial L \left(\hat {y} \left(x _ {2} ^ {(t)} ; \boldsymbol {\theta} _ {\mathrm {A N I L}} ^ {(t)}\right) , y _ {2} ^ {(t)}\right)}{\partial \theta_ {1}} \tag {9} +$$ + +$$ +\hat 
{y} \left(x _ {2} ^ {(t)}; \boldsymbol {\theta} _ {\mathrm {A N I L}} ^ {(t)}\right) = \left(\left[ \theta_ {2} - \frac {\partial L \left(\hat {y} \left(x _ {1} ^ {(t)} ; \boldsymbol {\theta}\right) , y _ {1} ^ {(t)}\right)}{\partial \theta_ {2}} \right] \cdot \theta_ {1} \cdot x _ {2}\right) \tag {10} +$$ + +Note the lack of inner loop update for $\theta_{1}$ , and how we do not remove second order terms in ANIL (unlike in first-order MAML); second order terms still persist through the derivative of the inner loop update for the head parameters. + +# C.2 ANIL LEARNS ALMOST IDENTICALLY TO MAML + +We implement ANIL on MiniImageNet and Omniglot, and generate learning curves for both algorithms in Figure 10. We find that learning proceeds almost identically for ANIL and MAML, showing that removing the inner loop has little effect on the learning dynamics. + +# C.3 ANIL AND MAML LEARN SIMILAR REPRESENTATIONS + +We compute CCA similarities across representations in a MAML seed and an ANIL seed, and then plot these against the same MAML seed representation compared to a different MAML seed (and similarly for ANIL). We find a strong correlation between these similarities (Figure 11), which suggests that MAML and ANIL are learning similar representations, despite their algorithmic differences. (ANIL and MAML are about as similar to each other as two ANILs are to each other, or two MAMLs are to each other.) + +# C.4 ANIL IMPLEMENTATION DETAILS + +Supervised Learning Implementation: We used the TensorFlow MAML implementation open-sourced by the original authors (Finn et al., 2017). We used the same model architectures as in the original MAML paper for our experiments, and train models 3 times with different random seeds. All models were trained for 30000 iterations, with a batch size of 4, 5 inner loop update steps, and an inner learning rate of 0.01. 10 inner gradient steps were used for evaluation at test time. 
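To make the contrast between the MAML and ANIL inner loops of Section C.1 concrete, here is a minimal sketch for the toy network $\hat{y}(x;\boldsymbol{\theta}) = \theta_2(\theta_1 x)$ with squared-error loss. This is illustrative only (not the paper's code); the gradients are hand-derived and the inner learning rate is fixed at 1:

```python
# Inner-loop update (one gradient step) for the scalar two-layer linear
# network y_hat = theta2 * (theta1 * x), with loss L = (y_hat - y)^2.
def inner_update(theta1, theta2, x1, y1, anil=False):
    y_hat = theta2 * theta1 * x1
    dL_dyhat = 2.0 * (y_hat - y1)
    grad_theta1 = dL_dyhat * theta2 * x1  # dL/dtheta1
    grad_theta2 = dL_dyhat * theta1 * x1  # dL/dtheta2
    if anil:
        # ANIL: only the head theta2 adapts in the inner loop (eq. 4).
        return theta1, theta2 - grad_theta2
    # MAML: both parameters adapt in the inner loop (eqs. 1-3).
    return theta1 - grad_theta1, theta2 - grad_theta2

theta1, theta2 = 0.5, -0.3     # meta-learned initialization
x1, y1 = 1.0, 2.0              # support (example, label) pair for one task

maml_params = inner_update(theta1, theta2, x1, y1, anil=False)
anil_params = inner_update(theta1, theta2, x1, y1, anil=True)
print("MAML adapted:", maml_params)   # both entries move
print("ANIL adapted:", anil_params)   # theta1 stays at its initial value
```

The two adapted heads coincide; only the body parameter $\theta_1$ differs, which is exactly the distinction between eqs. (3) and (4).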
Reinforcement Learning Implementation: We used the open-source PyTorch implementation of MAML for RL$^{1}$, due to challenges encountered when running the open-sourced TensorFlow

![](images/f91997e73959b64f41dca3aa537fb711915162cbb39a6ffe6a375e88120ece4b.jpg)

![](images/f43f37a015ac3cd9e276d608c59fa6a303d2d8497e05156d06ff59b315a1f883.jpg)

![](images/33ed681e4ac2be282ab89175e79d76629c4cffea3664b20aa6d308210059498e.jpg)
Figure 10: ANIL and MAML on MiniImageNet and Omniglot. Loss and accuracy curves for ANIL and MAML on (i) MiniImageNet-5way-1shot (ii) MiniImageNet-5way-5shot (iii) Omniglot-20way-1shot. These illustrate how both algorithms learn very similarly over training.

implementation from the original authors. We note that the results for MAML in these RL domains do not exactly match those in the original paper; this may be due to large variance in results, depending on the random initialization. We used the same model architecture as the original paper (a two-layer MLP with 100 hidden units in each layer), a batch size of 40, 1 inner loop update step with an inner learning rate of 0.1, and 20 trajectories for inner loop adaptation. We trained three MAML and ANIL models with different random initializations, and report the mean and standard deviation of the results. As in the original MAML paper, for RL experiments, we select the best performing model over 500 iterations of training and evaluate this model at test time on a new set of tasks.

![](images/cd57cf39370969616204420da073a0cfd72f874ae77de9f8e6e2cba06c5443bc.jpg)
Figure 11: Computing CCA similarity across different seeds of MAML and ANIL networks suggests these representations are similar. We plot the CCA similarity between an ANIL seed and a MAML seed, plotted against (i) the MAML seed compared to a different MAML seed (ii) the ANIL seed compared to a different ANIL seed. We observe a strong correlation of similarity scores in both (i) and (ii).
This tells us that (i) two MAML representations vary about as much as MAML and ANIL representations (ii) two ANIL representations vary about as much as MAML and ANIL representations. In particular, this suggests that MAML and ANIL learn similar features, despite having significant algorithmic differences. + +![](images/589022bfb380a5a7bfa7720826406d7711d2e2b98fb8d5045dfa40b966472656.jpg) + +
| Method | Training: 5way-1shot Mean (s) | Median (s) | Speedup | Training: 5way-5shot Mean (s) | Median (s) | Speedup |
| --- | --- | --- | --- | --- | --- | --- |
| MAML | 0.15 | 0.13 | 1 | 0.68 | 0.67 | 1 |
| First Order MAML | 0.089 | 0.083 | 1.69 | 0.40 | 0.39 | 1.7 |
| ANIL | 0.084 | 0.072 | 1.79 | 0.37 | 0.36 | 1.84 |
+ +
| Method | Inference: 5way-1shot Mean (s) | Median (s) | Speedup | Inference: 5way-5shot Mean (s) | Median (s) | Speedup |
| --- | --- | --- | --- | --- | --- | --- |
| MAML | 0.083 | 0.078 | 1 | 0.37 | 0.36 | 1 |
| ANIL | 0.020 | 0.017 | 4.15 | 0.076 | 0.071 | 4.87 |
Table 6: ANIL offers significant computational speedup over MAML, during both training and inference. Table comparing execution times and speedups of MAML, First Order MAML, and ANIL during training (above) and inference (below) on MiniImageNet domains. Speedup is calculated relative to MAML's execution time. We see that ANIL offers noticeable speedup over MAML, as a result of removing the inner loop almost completely. This permits faster training and inference.

# C.5 ANIL IS COMPUTATIONALLY SIMPLER THAN MAML

Table 6 shows results from a comparison of the computation time for MAML, First Order MAML, and ANIL, during training and inference, with the TensorFlow implementation described previously, on both MiniImageNet domains. These results are the average times for executing forward and backward passes during training (above) and a forward pass during inference (below), for a task batch size of 1 and a target set size of 1. Results are averaged over 2000 such batches. Speedup is calculated relative to MAML's execution time. Each batch's images were loaded into memory before running the TensorFlow computation graph, to ensure that data loading time was not captured in the timing. Experiments were run on a single NVIDIA Titan-Xp GPU.

During training, we see that ANIL is as fast as First Order MAML (which does not compute second order terms during training), and about $1.7\mathrm{x}$ as fast as MAML. This leads to a significant overall training speedup, especially when coupled with the fact that the rate of learning for ANIL and MAML is very similar; see the learning curves in Appendix C.2. Note that unlike First Order MAML, ANIL also performs very comparably to MAML on benchmark tasks (on some tasks, First Order MAML performs worse (Finn et al., 2017)). During inference, ANIL achieves over a 4x speedup over MAML (and thus also 4x over First Order MAML, which is identical to MAML at inference
| Method | MiniImageNet-5way-1shot | MiniImageNet-5way-5shot |
| --- | --- | --- |
| MAML training-MAML head | 46.9 ± 0.2 | 63.1 ± 0.4 |
| MAML training-NIL head | 48.4 ± 0.3 | 61.5 ± 0.8 |
| ANIL training-ANIL head | 46.7 ± 0.4 | 61.5 ± 0.5 |
| ANIL training-NIL head | 48.0 ± 0.7 | 62.2 ± 0.5 |
| Multiclass pretrain-MAML head | 38.4 ± 0.8 | 54.6 ± 0.4 |
| Multiclass pretrain-NIL head | 39.7 ± 0.3 | 54.4 ± 0.5 |
| Multitask pretrain-MAML head | 26.5 ± 0.8 | 32.8 ± 0.6 |
| Multitask pretrain-NIL head | 26.5 ± 1.1 | 34.2 ± 3.5 |
| Random features-MAML head | 32.1 ± 0.5 | 43.1 ± 0.3 |
| Random features-NIL head | 32.9 ± 0.6 | 43.2 ± 0.5 |
Table 7: Test time performance is dominated by the features learned, with no difference between NIL/MAML heads. We see identical performances of MAML/NIL heads at test time, indicating that MAML/ANIL training leads to better learned features.

time). Both training and inference speedups illustrate the significant computational benefit of ANIL over MAML.

# D FURTHER RESULTS ON THE NETWORK HEAD AND BODY

# D.1 TRAINING REGIMES FOR THE NETWORK BODY

We add to the results of Section 5.2 in the main text by checking whether training a head and applying it to the representations at test time (instead of using the NIL algorithm) gives any change in the results. As might be predicted by Section 5.1, we find no change in the results.

More specifically, we do the following:

- We train MAML/ANIL networks as standard, and do standard test time adaptation.
- For multiclass training, we first (pre)train with multiclass classification, then throw away the head and freeze the body. We initialize a new head (e.g., a 5-class head), and train it (on top of the frozen multiclass pretrained features) with MAML. At test time we perform standard adaptation.
- The same process is applied to multitask training.
- A similar process is applied to random features, except the network is initialized and then frozen.

The results of this, along with the results from Table 5 in the main text, are shown in Table 7. We observe very little performance difference between using a MAML/ANIL head and a NIL head for each training regime. Specifically, task performance is purely determined by the quality of the features and representations learned during training, with task-specific alignment at test time being (i) unnecessary and (ii) unable to influence the final performance of the model (e.g., multitask training performs equally with a MAML head as it does with a NIL head).
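To illustrate the kind of head these comparisons involve, the following is a sketch of one simple similarity-based (NIL-style) head under our assumptions: a query inherits the label of the support example whose frozen feature vector is most cosine-similar to it. The exact NIL procedure is specified in the main text; the data here is hypothetical:

```python
import numpy as np

def nil_head_predict(support_feats, support_labels, query_feats):
    # Normalize rows so that dot products are cosine similarities.
    s = support_feats / np.linalg.norm(support_feats, axis=1, keepdims=True)
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    sims = q @ s.T                   # (num_query, num_support) similarities
    nearest = sims.argmax(axis=1)    # most similar support example per query
    return support_labels[nearest]

# Hypothetical 5-way-1-shot task with 3-d frozen features.
support = np.eye(5, 3) + 0.01
labels = np.arange(5)
query = np.array([[1.0, 0.0, 0.1]])  # closest to support example 0

preds = nil_head_predict(support, labels, query)
print(preds)  # -> [0]
```

No parameters are trained or adapted here, which is the point: with such a head, test-time performance depends entirely on the quality of the frozen features.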
+ +# D.2 REPRESENTATIONAL ANALYSIS OF DIFFERENT TRAINING REGIMES + +In Table 8 we include results on using CCA and CKA on the representations learned by the different training methods. Specifically, we studied how similar representations of different training methods were to MAML training, finding a direct correlation with performance – training schemes learning representations most similar to MAML also performed the best. We computed similarity scores by averaging the scores over the first three conv layers in the body of the network. + +
| Feature pair | CCA Similarity | CKA Similarity |
| --- | --- | --- |
| (MAML, MAML) | 0.51 | 0.83 |
| (Multiclass pretrain, MAML) | 0.48 | 0.79 |
| (Random features, MAML) | 0.40 | 0.72 |
| (Multitask pretrain, MAML) | 0.28 | 0.65 |
Table 8: MAML training most closely resembles multiclass pretraining, as illustrated by CCA and CKA similarities. On analyzing the CCA and CKA similarities between different baseline models and MAML (comparing across different tasks and seeds), we see that multiclass pretraining results in features most similar to MAML training. Multitask pretraining differs quite significantly from MAML-learned features, potentially due to the alignment problem.
# REANALYSIS OF VARIANCE REDUCED TEMPORAL DIFFERENCE LEARNING

Tengyu Xu†, Zhe Wang†, Yi Zhou§, Yingbin Liang†

† Department of ECE, The Ohio State University, Columbus, OH 43210,
USA
$^{\S}$ Department of ECE, The University of Utah, Salt Lake City, UT 84112, USA

xu.3260@osu.edu, wang.10982@osu.edu, yi.zhou@utah.edu, liang.889@osu.edu

# ABSTRACT

Temporal difference (TD) learning is a popular algorithm for policy evaluation in reinforcement learning, but the vanilla TD can substantially suffer from the inherent optimization variance. A variance reduced TD (VRTD) algorithm was proposed by Korda and La (2015), which applies the variance reduction technique directly to the online TD learning with Markovian samples. In this work, we first point out the technical errors in the analysis of VRTD in Korda and La (2015), and then provide a mathematically solid analysis of the non-asymptotic convergence of VRTD and its variance reduction performance. We show that VRTD is guaranteed to converge to a neighborhood of the fixed-point solution of TD at a linear convergence rate. Furthermore, the variance error (for both i.i.d. and Markovian sampling) and the bias error (for Markovian sampling) of VRTD are significantly reduced by the batch size of variance reduction in comparison to those of vanilla TD. As a result, the overall computational complexity of VRTD to attain a given accurate solution outperforms that of TD under Markovian sampling, and outperforms that of TD under i.i.d. sampling for a sufficiently small condition number.

# 1 INTRODUCTION

In reinforcement learning (RL), policy evaluation aims to obtain the expected long-term reward of a given policy and plays an important role in identifying the optimal policy that achieves the maximal cumulative reward over time Bertsekas and Tsitsiklis (1995); Dayan and Watkins (1992); Rummery and Niranjan (1994).
The temporal difference (TD) learning algorithm, originally proposed by Sutton (1988), is one of the most widely used policy evaluation methods, which uses the Bellman equation to iteratively bootstrap the estimation process and continually update the value function in an incremental way. In practice, if the state space is large or infinite, function approximation is often used to find an approximate value function efficiently. Theoretically, TD with linear function approximation has been shown to converge to the fixed-point solution with i.i.d. samples and Markovian samples in Sutton (1988); Tsitsiklis and Van Roy (1997). The finite-sample analysis of TD has also been studied in Bhandari et al. (2018); Dalal et al. (2018a); Cai et al. (2019); Srikant and Ying (2019).

Since each iteration of TD uses one or a mini-batch of samples to estimate the mean of the pseudo-gradient$^{1}$, TD learning usually suffers from the inherent variance, which substantially degrades the convergence accuracy. Although a diminishing stepsize or a very small constant stepsize can reduce the variance Bhandari et al. (2018); Srikant and Ying (2019), they also slow down the convergence significantly.

Two approaches have been proposed to reduce the variance. The first approach is the so-called batch TD, which takes a fixed sample set and transforms the empirical mean square projected Bellman error (MSPBE) into an equivalent convex-concave saddle-point problem Du et al. (2017). Due to the finite-sample nature of such a problem, stochastic variance reduction techniques for conventional optimization can be directly applied here to reduce the variance. In particular, Du et al. (2017) showed that SVRG Johnson and Zhang (2013) and SAGA Defazio et al. (2014) can be applied to improve the performance of batch TD algorithms, and Peng et al. (2019) proposed two variants of SVRG to further save the computation cost.
However, the analysis of batch TD does not take into account the statistical nature of the training samples, which are generated by an MDP. Hence, there is no guarantee that such obtained solutions are close to the fixed point of TD learning.

The second approach is the so-called TD with centering (CTD) algorithm proposed in Korda and La (2015), which introduces the variance reduction idea to the original TD learning algorithm. To better reflect its major feature, we refer to CTD as Variance Reduced TD (VRTD) throughout this paper. Similarly to SVRG in Johnson and Zhang (2013), VRTD has outer and inner loops. At the beginning of each inner loop (i.e., each epoch), a batch of sample pseudo-gradients is computed, and each subsequent inner-loop iteration modifies only one sample pseudo-gradient in this batch to reduce the variance. The main difference between VRTD and batch TD is that VRTD applies the variance reduction directly to TD learning rather than to a transformed optimization problem as in batch TD. Though Korda and La (2015) empirically verified that VRTD has better convergence accuracy than vanilla TD learning, some technical errors in the analysis in Korda and La (2015) have been pointed out in follow-up studies Dalal et al. (2018a); Narayanan and Szepesvári (2017). Furthermore, as we discuss in Section 3, the technical proof in Korda and La (2015) regarding the convergence of VRTD also has errors, so that their results do not correctly characterize the impact of variance reduction on TD learning. Given the recent surge of interest in the finite-time analysis of vanilla TD Dalal et al. (2018a); Bhandari et al. (2018); Dalal et al. (2018b); Srikant and Ying (2019), it becomes imperative to reanalyze VRTD and accurately understand whether and how variance reduction can help to improve the convergence accuracy over vanilla TD. Towards this end, this paper specifically addresses the following central questions.
+ +- For i.i.d. sampling, it has been shown in Dalal et al. (2018a); Bhandari et al. (2018) that vanilla TD converges only to a neighborhood of the fixed point for a constant stepsize and suffers from a constant error term caused by the variance of the stochastic pseudo-gradient at each iteration. For VRTD, does the variance reduction help to reduce such an error and improve the accuracy of convergence? How does the error depend on the variance reduction parameter, i.e., the batch size for variance reduction? +- For Markovian sampling, it has been shown in Bhandari et al. (2018) that the convergence of vanilla TD further suffers from a bias error due to the correlation among samples in addition to the variance error as in i.i.d. sampling. Does VRTD, which was designed to have reduced variance, also enjoy reduced bias error? If so, how does the bias error depend on the batch size for variance reduction? +- For both i.i.d. and Markovian sampling, to attain an $\epsilon$ -accurate solution, what is the overall computational complexity (the total number of computations of pseudo-gradients) of VRTD, and does VRTD have a reduced overall computational complexity compared to TD? + +# 1.1 OUR CONTRIBUTIONS + +Our main contributions are summarized in Table 1 and are described as follows. + +For i.i.d. sampling, we show that a slightly modified version of VRTD (for avoiding bias error) converges linearly to a neighborhood of the fixed point solution for a constant stepsize $\alpha$ , with the variance error at the order of $\mathcal{O}(\alpha / M)$ , where $M$ is the batch size for variance reduction. To attain an $\epsilon$ -accurate solution, the overall computational complexity (i.e., the total number of pseudo-gradient computations) of VRTD outperforms the vanilla TD algorithm Bhandari et al. (2018) if the condition number is small. 
+ +For Markovian sampling, we show that VRTD has the same linear convergence and the same variance error reduction over the vanilla TD Dalal et al. (2018a); Bhandari et al. (2018) as i.i.d. sampling. More importantly, the variance reduction in VRTD also attains a substantially reduced bias error at the order of $\mathcal{O}(1 / M)$ over the vanilla TD Bhandari et al. (2018), where the bias error is at the order of $\mathcal{O}(\alpha)$ . As a result, VRTD outperforms vanilla TD in terms of the total computational complexity by a factor of $\log \frac{1}{\epsilon}$ . + +At the technical level, our analysis of bias error for Markovian sampling takes a different path from how the existing analysis of TD handles Markovian samples in Bhandari et al. (2018); Wang et al. (2017); Srikant and Ying (2019). Due to the batch average of stochastic pseudo-gradients adopted by VRTD to reduce the variance, the correlation among samples in different epochs is eliminated. Such an analysis explicitly explains why the variance reduction step helps to further reduce the bias error. + +Table 1: Comparison of performance of TD and VRTD algorithms. + +
| | Algorithm | Variance Error | Bias Error | Overall Complexity |
| --- | --- | --- | --- | --- |
| i.i.d. sample | TD | $\mathcal{O}(\alpha)$ | NA | $\mathcal{O}\left(\frac{1}{\epsilon \lambda_A^{2}} \log\frac{1}{\epsilon}\right)$ |
| | VRTD | $\mathcal{O}(\alpha /M)$ | NA | $\mathcal{O}\left(\max\left\{\frac{1}{\epsilon}, \frac{1}{\lambda_A^{2}}\right\} \log\frac{1}{\epsilon}\right)$ |
| Markovian sample | TD | $\mathcal{O}(\alpha)$ | $\mathcal{O}(\alpha)$ | $\mathcal{O}\left(\frac{1}{\epsilon \lambda_A^{2}} \log^{2}\frac{1}{\epsilon}\right)$ |
| | VRTD | $\mathcal{O}(\alpha /M)$ | $\mathcal{O}(1/M)$ | $\mathcal{O}\left(\max\left\{\frac{1}{\epsilon}, \frac{1}{\epsilon \lambda_A^{2}}\right\} \log\frac{1}{\epsilon}\right)$ |
+ +Note: The results on the performance of TD are due to Bhandari et al. (2018) and the results on the performance of VRTD (which are highlighted by the red color) are characterized by this work. + +# 1.2 RELATED WORK + +On-policy TD learning and variance reduction. On-policy TD learning aims to minimize the Mean Squared Bellman Error (MSBE) Sutton (1988) when samples are drawn independently from the stationary distribution of the corresponding MDP. The non-asymptotic convergence under i.i.d. sampling has been established in Dalal et al. (2018a) for TD with linear function approximation and for TD with overparameterized neural network approximation Cai et al. (2019). The convergence of averaged linear SA with constant stepsize has been studied in Lakshminarayanan and Szepesvari (2018). In the Markovian setting, the non-asymptotic convergence has been studied for on-policy TD in Bhandari et al. (2018); Karmakar and Bhatnagar (2016); Wang et al. (2019); Srikant and Ying (2019). Korda and La (2015) proposed a variance reduced CTD algorithm (called VRTD in this paper), which directly applies variance reduction technique to the TD algorithm. The analysis of VRTD provided in Korda and La (2015) has technical errors. The aim of this paper is to provide a technically solid analysis for VRTD to characterize the advantage of variance reduction. + +Variance reduced batch TD learning. Batch TD Lange et al. (2012) algorithms are generally designed for policy evaluation by solving an optimization problem on a fixed dataset. In Du et al. (2017), the empirical MSPBE is first transformed into a quadratic convex-concave saddle-point optimization problem and variance reduction methods of SVRG Johnson and Zhang (2013) and SAGA Defazio et al. (2014) were then incorporated into a primal-dual batch gradient method. Furthermore, Peng et al. 
(2019) applied two variants of variance reduction methods to solve the same saddle point problems, and showed that those two methods can save pseudo-gradient computation cost. + +We note that due to the extensive research in TD learning, we include here only studies that are highly related to our work, and cannot cover many other interesting topics on TD learning such as asymptotic convergence of TD learning Tadic (2001); Hu and Syed (2019), off-policy TD learning Sutton et al. (2008; 2009); Liu et al. (2015); Wang et al. (2017); Karmakar and Bhatnagar (2017), two time-scale TD algorithms Xu et al. (2019); Dalal et al. (2018b); Yu (2017), fitted TD algorithms Lee and He (2019), SARSA Zou et al. (2019) etc. The idea of the variance reduction algorithm proposed in Korda and La (2015) as well as the analysis techniques that we develop in this paper can potentially be useful for these algorithms. + +# 2 PROBLEM FORMULATION AND PRELIMINARIES + +# 2.1 ON-POLICY VALUE FUNCTION EVALUATION + +We describe the problem of value function evaluation over a Markov decision process (MDP) $(\mathcal{S},\mathcal{A},\mathsf{P},r,\gamma)$ , where each component is explained in the sequel. Suppose $\mathcal{S}\subset \mathbb{R}^d$ is a compact state space, and $\mathcal{A}$ is a finite action set. Consider a stationary policy $\pi$ , which maps a state $s\in S$ to the actions in $\mathcal{A}$ via a probability distribution $\pi (\cdot |s)$ . At time-step $t$ , suppose the process is in some state $s_t\in S$ , and an action $a_{t}\in \mathcal{A}$ is taken based on the policy $\pi (\cdot |s_t)$ . Then the transition kernel $\mathsf{P} = \mathsf{P}(s_{t + 1}|s_t,a_t)$ determines the probability of being at state $s_{t + 1}\in S$ in the next time-step, and the reward $r_t = r(s_t,a_t,s_{t + 1})$ is received, which is assumed to be bounded by $r_{\mathrm{max}}$ . 
We denote the associated Markov chain by $p(s^{\prime}|s) = \sum_{a\in \mathcal{A}}p(s^{\prime}|s,a)\pi (a|s)$ , and assume that it is ergodic. Let $\mu_{\pi}$ be the induced stationary distribution, i.e., $\sum_{s}p(s^{\prime}|s)\mu_{\pi}(s) = \mu_{\pi}(s^{\prime})$ . We define the value function for a policy $\pi$ as $v^{\pi}(s) = \mathbb{E}[\sum_{t = 0}^{\infty}\gamma^{t}r(s_{t},a_{t},s_{t + 1})|s_{0} = s,\pi ]$ , where $\gamma \in (0,1)$ is the discount + +factor. Define the Bellman operator $T^{\pi}$ for any function $\xi(s)$ as $T^{\pi}\xi(s) \coloneqq r^{\pi}(s) + \gamma \mathbb{E}_{s'|s}\xi(s')$ , where $r^{\pi}(s) = \mathbb{E}_{a,s'|s}r(s,a,s')$ is the expected reward of the Markov chain induced by the policy $\pi$ . It is known that $v^{\pi}(s)$ is the unique fixed point of the Bellman operator $T^{\pi}$ , i.e., $v^{\pi}(s) = T^{\pi}v^{\pi}(s)$ . In practice, since the MDP is unknown, the value function $v^{\pi}(s)$ cannot be directly obtained. The goal of policy evaluation is to find the value function $v^{\pi}(s)$ via sampling the MDP. + +# 2.2 TD LEARNING WITH LINEAR FUNCTION APPROXIMATION + +In order to find the value function efficiently particularly for large or infinite state space $S$ , we take the standard linear function approximation $\hat{v}(s,\theta) = \phi(s)^{\top}\theta$ of the value function, where $\phi(s)^{\top} = [\phi_1(s),\dots,\phi_d(s)]$ with $\phi_i(s)$ for $i = 1,2,\dots,d$ denoting the fixed basis feature functions of state $s$ , and $\theta \in \mathbb{R}^d$ is a parameter vector. Let $\Phi$ be the $|S|\times d$ feature matrix (with rows indexed by the state and columns corresponding to components of $\theta$ ). The linear function approximation can be written in the vector form as $\hat{v}(\theta) = \Phi\theta$ . Our goal is to find the fixed-point parameter $\theta^{*} \in \mathbb{R}^{d}$ that satisfies $\mathbb{E}_{\mu_{\pi}}\hat{v}(s,\theta^{*}) = \mathbb{E}_{\mu_{\pi}}T^{\pi}\hat{v}(s,\theta^{*})$ . 
The TD learning algorithm performs the following fixed-point iterative update to find such a $\theta^{*}$:

$$
\theta_{t + 1} = \theta_{t} + \alpha_{t} g_{x_{t}}(\theta_{t}) = \theta_{t} + \alpha_{t}\left(A_{x_{t}}\theta_{t} + b_{x_{t}}\right), \tag{1}
$$

where $\alpha_{t} > 0$ is the stepsize, and $A_{x_t}$ and $b_{x_t}$ are specified below. For i.i.d. samples generated from the distribution $\mu_{\pi}$, we denote the sample as $x_{t} = (s_{t},r_{t},s_{t}^{\prime})$, and $A_{x_t} = \phi(s_t)(\gamma \phi(s_t^{\prime}) - \phi(s_t))^{\top}$ and $b_{x_t} = r(s_t)\phi(s_t)$. For Markovian samples generated sequentially from a trajectory, we denote the sample as $x_{t} = (s_{t},r_{t},s_{t + 1})$, and in this case $A_{x_t} = \phi(s_t)(\gamma \phi(s_{t + 1}) - \phi(s_t))^{\top}$ and $b_{x_t} = r(s_t)\phi(s_t)$. We further define the mean pseudo-gradient $g(\theta) = A\theta + b$, where $A = \mathbb{E}_{\mu_{\pi}}[\phi(s)(\gamma \phi(s^{\prime}) - \phi(s))^{\top}]$ and $b = \mathbb{E}_{\mu_{\pi}}[r(s)\phi(s)]$. We refer to $g(\theta)$ as a pseudo-gradient for convenience, due to its role analogous to the gradient in the gradient descent algorithm. It has been shown that the iteration in eq. (1) converges to the fixed point $\theta^{*} = -A^{-1}b$ at a sublinear rate $\mathcal{O}(1 / t)$ with diminishing stepsize $\alpha_{t} = \mathcal{O}(1 / t)$ using both Markovian and i.i.d. samples Bhandari et al. (2018); Dalal et al. (2018a). Throughout the paper, we make the following standard assumptions Wang et al. (2017); Korda and La (2015); Tsitsiklis and Van Roy (1997); Bhandari et al. (2018).

Assumption 1 (Problem solvability). The matrix $A$ is non-singular.

Assumption 2 (Bounded feature). $\| \phi(s) \|_2 \leq 1$ for all $s \in S$.

Assumption 3 (Geometric ergodicity).
The considered MDP is irreducible and aperiodic, and there exist constants $\kappa >0$ and $\rho \in (0,1)$ such that

$$
\sup _ {s \in \mathcal {S}} d _ {T V} (\mathbb {P} (s _ {t} \in \cdot | s _ {0} = s), \mu_ {\pi} (s)) \leq \kappa \rho^ {t}, \quad \forall t \geq 0,
$$

where $d_{TV}(P, Q)$ denotes the total-variation distance between the probability measures $P$ and $Q$ .

Assumption 1 requires the matrix $A$ to be non-singular so that the optimal parameter $\theta^{*} = -A^{-1}b$ is well defined. Assumption 2 can be ensured by normalizing the basis functions $\{\phi_i\}_{i=1}^d$ . Assumption 3 holds for any time-homogeneous Markov chain with a finite state space and for any uniformly ergodic Markov chain with a general state space.

# 3 THE VARIANCE REDUCED TD ALGORITHM

In this section, we first introduce the variance-reduced TD (VRTD) algorithm proposed in Korda and La (2015) for Markovian sampling, and then discuss the technical errors in the analysis of VRTD in Korda and La (2015).

# 3.1 VRTD ALGORITHM KORDA AND LA (2015)

Since standard TD learning takes only one sample in each update, as can be seen in eq. (1), it typically suffers from a large variance. This motivates the development of the VRTD algorithm in Korda and La (2015) (named CTD therein). VRTD is formally presented in Algorithm 2, and we briefly introduce the idea below. The algorithm runs in a nested fashion, with each inner loop (i.e., each epoch) consisting of $M$ updates. At the beginning of the $m$ -th epoch, a batch of $M$ samples is acquired and a batch pseudo-gradient $g_{m}(\tilde{\theta}_{m-1})$ is computed based on these samples as an estimator of the mean pseudo-gradient.
Then, each inner-loop update randomly takes one sample from the batch and uses it, together with $g_{m}(\tilde{\theta}_{m-1})$ , to form a variance-reduced update. Here, $\Pi_{R_{\theta}}$ in Algorithm 2 denotes the projection operator onto a norm ball with radius $R_{\theta}$ .

Algorithm 1 Variance Reduced TD with i.i.d. samples

Input: batch size $M$ , learning rate $\alpha$ and initialization $\tilde{\theta}_0$

1: for $m = 1,2,\ldots ,S$ do

2: $\theta_{m,0} = \tilde{\theta}_{m - 1}$

3: Sample a set $B_{m}$ of $M$ samples independently from the distribution $\mu_{\pi}$

4: $g_{m}(\tilde{\theta}_{m - 1}) = \frac{1}{M}\sum_{x_{i}\in B_{m}}g_{x_{i}}(\tilde{\theta}_{m - 1})$

5: for $t = 0,1,\dots,M - 1$ do

6: Sample $x_{j_{m,t}}$ independently from the distribution $\mu_{\pi}$

7: $\theta_{m,t + 1} = \theta_{m,t} + \alpha \big(g_{x_{j_{m,t}}}(\theta_{m,t})$

8: $-g_{x_{j_{m,t}}}(\tilde{\theta}_{m-1}) + g_{m}(\tilde{\theta}_{m-1})\big)$

9: end for

10: set $\tilde{\theta}_m = \theta_{m,t}$ for randomly chosen $t\in \{1,2,\dots,M\}$

11: end for

Output: $\tilde{\theta}_{S}$

Algorithm 2 Variance Reduced TD with Markovian samples Korda and La (2015)

Input: batch size $M$ , learning rate $\alpha$ and initialization $\tilde{\theta}_0$

1: for $m = 1,2,\dots,S$ do

2: $\theta_{m,0} = \tilde{\theta}_{m - 1}$

3: $g_{m}(\tilde{\theta}_{m - 1}) = \frac{1}{M}\sum_{i = (m - 1)M}^{mM - 1}g_{x_i}(\tilde{\theta}_{m - 1})$

4: for $t = 0,1,\dots,M - 1$ do

5: Sample $j_{m,t}$ uniformly at random from $\{(m - 1)M, \dots, mM - 1\}$ along the trajectory

6: $\theta_{m,t + 1} = \Pi_{R_{\theta}}\Big(\theta_{m,t} + \alpha \big(g_{x_{j_{m,t}}}(\theta_{m,t})$

7: $-g_{x_{j_{m,t}}}\big(\tilde{\theta}_{m - 1}\big) + g_m(\tilde{\theta}_{m - 1})\big)\Big)$

8: end for

9: set $\tilde{\theta}_{m} = \theta_{m,t}$ for randomly chosen $t\in \{1,2,\dots,M\}$

10: end for

Output: $\tilde{\theta}_{S}$
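To make the update rule concrete, the i.i.d. variant (Algorithm 1) can be sketched in code as follows. This is a minimal illustration, not the authors' implementation: the sampler interface `sample()` (returning a tuple $(\phi(s), r, \phi(s'))$ drawn from $\mu_\pi$), the default hyperparameters, and the helper names are our assumptions.

```python
import numpy as np

def pseudo_grad(x, theta, gamma):
    """g_x(theta) = A_x theta + b_x for one sample x = (phi_s, r, phi_s_next)."""
    phi_s, r, phi_next = x
    A_x = np.outer(phi_s, gamma * phi_next - phi_s)
    return A_x @ theta + r * phi_s

def vrtd_iid(sample, d, gamma=0.5, alpha=0.1, M=2000, num_epochs=20, seed=0):
    """Sketch of Algorithm 1 (VRTD with i.i.d. samples).

    `sample()` is assumed to draw one (phi_s, r, phi_s') tuple from mu_pi.
    """
    rng = np.random.default_rng(seed)
    theta_tilde = np.zeros(d)
    for _ in range(num_epochs):
        # lines 3-4: batch pseudo-gradient at the epoch anchor theta_tilde
        batch = [sample() for _ in range(M)]
        g_m = np.mean([pseudo_grad(x, theta_tilde, gamma) for x in batch], axis=0)
        theta = theta_tilde.copy()
        iterates = []
        for _ in range(M):
            x = sample()  # line 6: fresh i.i.d. sample for every inner update
            theta = theta + alpha * (pseudo_grad(x, theta, gamma)
                                     - pseudo_grad(x, theta_tilde, gamma) + g_m)
            iterates.append(theta.copy())
        # line 10: a uniformly chosen inner iterate becomes the epoch output
        theta_tilde = iterates[rng.integers(M)]
    return theta_tilde
```

On a small ergodic chain with tabular features, the output approaches the TD fixed point $\theta^* = -A^{-1}b$, with an error floor controlled by $\alpha/M$ rather than $\alpha$.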
The idea is similar to the SVRG algorithm proposed in Johnson and Zhang (2013) for conventional optimization. Since a batch pseudo-gradient is used at each inner-loop update, the variance of the pseudo-gradient is expected to be reduced.

# 3.2 TECHNICAL ERRORS IN KORDA AND LA (2015)

In this subsection, we point out the technical errors in the analysis of VRTD in Korda and La (2015), which consequently fails to characterize the correct variance reduction performance of VRTD.

At a high level, the batch pseudo-gradient $g_{m}(\tilde{\theta}_{m-1})$ computed at the beginning of each epoch $m$ necessarily introduces a non-vanishing variance error for a fixed stepsize, because it cannot exactly equal the mean (i.e., population) pseudo-gradient $g(\tilde{\theta}_{m-1})$ . Furthermore, due to the correlation among samples, the pseudo-gradient estimator in expectation (with respect to the randomness of the sample trajectory) does not equal the mean pseudo-gradient, which should further cause a non-vanishing bias error in the convergence bound. Unfortunately, the convergence bound in Korda and La (2015) indicates exact convergence to the fixed point, which contradicts the aforementioned general understanding. More specifically, if the batch size $M = 1$ (with properly chosen $\lambda_{A}$ defined as $\lambda_{A} := 2|\lambda_{\max}(A + A^{\top})|$ ), VRTD reduces to vanilla TD. However, the exact convergence result in Theorem 3 of Korda and La (2015) does not agree with that of vanilla TD characterized in the recent study Bhandari et al. (2018), which has both variance and bias errors.

In Appendix B, we further provide a counter-example to show that one major technical step in the convergence characterization of Korda and La (2015) does not hold. The goal of this paper is to provide a rigorous analysis of VRTD to characterize its variance reduction performance.
# 4 MAIN RESULTS

As aforementioned, the convergence error of VRTD consists of two types of errors: the variance error due to the inexact estimation of the mean pseudo-gradient, and the bias error due to Markovian sampling. In this section, we first focus on the first type of error and study the convergence of VRTD under i.i.d. sampling. We then study the Markovian case to further analyze the bias. In both cases, we compare the performance of VRTD to that of the vanilla TD described in eq. (1) to demonstrate its advantage.

# 4.1 CONVERGENCE ANALYSIS OF VRTD WITH I.I.D. SAMPLES

For i.i.d. samples, the bias error due to the time correlation among samples is expected to vanish. However, if we directly apply VRTD (Algorithm 2), which was originally designed for Markovian samples, there would be a bias term due to the correlation between the batch pseudo-gradient estimate and every inner-loop update. Thus, we slightly modify Algorithm 2 into Algorithm 1 to avoid this bias error in the convergence analysis with i.i.d. samples. Namely, at each inner-loop iteration, we draw a new sample from the stationary distribution $\mu_{\pi}$ for the update, rather than randomly selecting one from the batch of samples drawn at the beginning of the epoch as in Algorithm 2. In this way, the fresh independent samples are uncorrelated with the batch pseudo-gradient evaluated at the beginning of the epoch. Hence, Algorithm 1 does not suffer from an extra bias error.

To understand the convergence of Algorithm 1 at a high level, we first note that the sample batch pseudo-gradient cannot estimate the mean pseudo-gradient $g(\tilde{\theta}_{m-1})$ exactly, since the latter is a population quantity.
We thus define the pseudo-gradient estimation error as $e_m(\tilde{\theta}_{m-1}) = g_m(\tilde{\theta}_{m-1}) - g(\tilde{\theta}_{m-1})$ . Our analysis (see Appendix D) shows that after each epoch update, we have

$$
\begin{array}{l} \mathbb {E} \left[ \left\| \tilde {\theta} _ {m} - \theta^ {*} \right\| _ {2} ^ {2} \mid F _ {m, 0} \right] \\ \leq \frac {1 / M + 4 \alpha^ {2} (1 + \gamma) ^ {2}}{\alpha \lambda_ {A} - 4 \alpha^ {2} (1 + \gamma) ^ {2}} \left\| \tilde {\theta} _ {m - 1} - \theta^ {*} \right\| _ {2} ^ {2} + \frac {2 \alpha}{\lambda_ {A} - 4 \alpha (1 + \gamma) ^ {2}} \mathbb {E} \left[ \left\| e _ {m} (\tilde {\theta} _ {m - 1}) \right\| _ {2} ^ {2} \mid F _ {m, 0} \right], \tag {2} \\ \end{array}
$$

where $F_{m,0}$ denotes the $\sigma$ -field that includes all the randomness in sampling and updates before the $m$ -th epoch. The first term on the right-hand side of eq. (2) captures the contraction property of Algorithm 1, and the second term corresponds to the variance of the pseudo-gradient estimation error. Due to this error term, applying eq. (2) iteratively guarantees convergence of Algorithm 1 only to a neighborhood of $\theta^{*}$ . Our further analysis shows that this error term can still be made small by choosing an appropriate batch size $M$ , which captures the advantage of variance reduction. The following theorem precisely characterizes the non-asymptotic convergence of Algorithm 1.

Theorem 1. Consider the VRTD algorithm in Algorithm 1. Suppose Assumptions 1-3 hold. Set a constant stepsize $\alpha < \frac{\lambda_A}{8(1 + \gamma)^2}$ and the batch size $M > \frac{4(1 + \gamma)^2\alpha^2 + 1}{\alpha[\lambda_A - 8\alpha(1 + \gamma)^2]}$ .
Then, for all $m \in \mathbb{N}$ ,

$$
\mathbb {E} \left[ \left\| \tilde {\theta} _ {m} - \theta^ {*} \right\| _ {2} ^ {2} \right] \leq C _ {1} ^ {m} \left\| \tilde {\theta} _ {0} - \theta^ {*} \right\| _ {2} ^ {2} + \frac {2 D _ {2} \alpha}{(1 - C _ {1}) (\lambda_ {A} - 4 \alpha (1 + \gamma) ^ {2}) M}, \tag {3}
$$

where $C_1 \coloneqq \left(4\alpha (1 + \gamma)^2 + \frac{4(1 + \gamma)^2\alpha^2 + 1}{\alpha M}\right)\frac{1}{\lambda_A - 4\alpha(1 + \gamma)^2}$ (with $C_1 < 1$ due to the choices of $\alpha$ and $M$ ), and $D_2 = 4((1 + \gamma)^2 R_\theta^2 + r_{\max}^2)$ .

We note that the convergence rate in eq. (3) can be written in the simpler form $\mathbb{E}[\|\tilde{\theta}_m - \theta^*\|^2] \leq C_1^m \|\tilde{\theta}_0 - \theta^*\|^2 + \mathcal{O}(\alpha/M)$ .

Theorem 1 shows that Algorithm 1 converges linearly (under a properly chosen constant stepsize) to a neighborhood of the fixed-point solution, and the size of the neighborhood (i.e., the error term) has the order $\mathcal{O}\left(\frac{\alpha}{M}\right)$ , which can be made arbitrarily small by increasing the batch size $M$ . This is in contrast to the convergence result of vanilla TD, which suffers from a constant error term of order $\mathcal{O}(\alpha)$ Bhandari et al. (2018) for a fixed stepsize. Thus, a small stepsize $\alpha$ is required in vanilla TD to reduce the variance error, which, however, slows down the practical convergence significantly. In contrast, this is not a problem for VRTD, which can attain a high-accuracy solution while still maintaining fast convergence at a desirable stepsize.
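To sketch how eq. (3) follows from the per-epoch bound in eq. (2): write $a_m \coloneqq \mathbb{E}[\|\tilde{\theta}_m - \theta^*\|_2^2]$, take total expectation in eq. (2), upper-bound the contraction coefficient in eq. (2) by $C_1$, and assume the estimation-error variance satisfies $\mathbb{E}[\|e_m(\tilde{\theta}_{m-1})\|_2^2 \mid F_{m,0}] \leq D_2/M$ (this bound is our reading of the appendix analysis, consistent with the role of $D_2$ in eq. (3)). Then

$$
a_m \leq C_1 a_{m-1} + \frac{2 D_2 \alpha}{(\lambda_A - 4\alpha(1 + \gamma)^2) M} \leq C_1^m a_0 + \frac{2 D_2 \alpha}{(\lambda_A - 4\alpha(1 + \gamma)^2) M} \sum_{i = 0}^{m - 1} C_1^i,
$$

and bounding the geometric series by $\frac{1}{1 - C_1}$ (valid since $C_1 < 1$) recovers eq. (3).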
We further note that if we had access to the mean pseudo-gradient $g(\tilde{\theta}_{m-1})$ in each epoch $m$ , then the estimation error term $\mathbb{E}[\|e_m(\tilde{\theta}_{m-1})\|_2^2]$ would vanish, and Algorithm 1 would converge linearly to the exact fixed-point solution as the epoch number $m$ goes to infinity, since the contraction coefficient $C_1$ is a positive constant less than 1. This is similar to the conventional convergence of SVRG for strongly convex optimization Johnson and Zhang (2013). However, the proof here is very different. In Johnson and Zhang (2013), the convergence proof relies on the relationship between the gradient and the value of the objective function, but there is no such objective function in the TD learning problem. Thus, the convergence of the parameter $\theta$ needs to be established by exploiting the structure of the Bellman operator.

Based on the convergence rate of VRTD characterized in Theorem 1 under i.i.d. sampling, we obtain the following bound on the overall computational complexity.

Corollary 1. Suppose Assumptions 1-3 hold. Let $\alpha = \frac{\lambda_A}{16(1 + \gamma)^2}$ , $M = \lceil \max \left\{\frac{D_2}{3(1 - C_1)}\frac{1}{\epsilon},\frac{33(1 + \gamma)^2}{\lambda_A^2}\right\} \rceil$ . Then, for any $\epsilon >0$ , an $\epsilon$ -accuracy solution (i.e., $\mathbb{E}\|\tilde{\theta}_m - \theta^* \|^2\leq \epsilon$ ) can be attained with at most $m = \lceil \log \frac{2\|\tilde{\theta}_0 - \theta^*\|_2^2}{\epsilon}\big{/}\log \frac{1}{C_1}\rceil$ iterations. Correspondingly, the total number of pseudo-gradient computations required by VRTD (i.e., Algorithm 1) under i.i.d. sampling to attain such an $\epsilon$ -accuracy solution is at most

$$
\mathcal {O} \left(\max \left\{\frac {1}{\epsilon}, \frac {1}{\lambda_ {A} ^ {2}} \right\} \log \left(\frac {1}{\epsilon}\right)\right).
$$

Proof.
Given the values of $\alpha$ and $M$ in the corollary, it can be easily checked that $\mathbb{E}\|\tilde{\theta}_m - \theta^*\|^2\leq \epsilon$ for $m = \lceil \log \frac{2\|\tilde{\theta}_0 - \theta^*\|_2^2}{\epsilon}\big / \log \frac{1}{C_1}\rceil$ . Then the total number of pseudo-gradient computations is given by $2mM$ , which yields the desired order given in the corollary.

As a comparison, consider the vanilla TD algorithm studied in Bhandari et al. (2018) with the constant stepsize $\alpha = \mathcal{O}(\epsilon)$ . If the samples are generated i.i.d., it can be shown (see Appendix F.1) that vanilla TD requires $\mathcal{O}\left(\frac{1}{\epsilon\lambda_A^2}\log \left(\frac{1}{\epsilon}\right)\right)$ pseudo-gradient computations in total to obtain an $\epsilon$ -accuracy solution. Clearly, VRTD has lower computational complexity than vanilla TD if $\lambda_{A}$ is small. Such a comparison is similar in nature to the comparison between SVRG Johnson and Zhang (2013) and SGD in traditional optimization, where SVRG achieves better computational complexity than SGD for strongly convex objectives if the condition number of the loss is small.

# 4.2 CONVERGENCE ANALYSIS OF VRTD WITH MARKOVIAN SAMPLES

In this section, we study the VRTD algorithm (i.e., Algorithm 2) with Markovian samples, in which samples are generated from a single MDP trajectory. In this case, we expect the convergence of VRTD to have both the variance error due to the pseudo-gradient estimation (similar to the case with i.i.d. samples) and the bias error due to the correlation among samples. To understand this at a high level, we define the bias at each iteration as $\xi_{m}(\theta) = (\theta -\theta^{*})^{\top}(g_{m}(\theta) - g(\theta))$ .
Then our analysis (see Appendix E) shows that after the update of each epoch, we have + +$$ +\begin{array}{l} \mathbb {E} \left[ \left\| \tilde {\theta} _ {m} - \theta^ {*} \right\| _ {2} ^ {2} \mid F _ {m, 0} \right] \\ \leq \frac {1 / M + 3 \alpha^ {2} (1 + \gamma) ^ {2}}{\alpha \lambda_ {A} - 3 \alpha^ {2} (1 + \gamma) ^ {2}} \left\| \tilde {\theta} _ {m - 1} - \theta^ {*} \right\| _ {2} ^ {2} + \frac {3 \alpha}{\lambda_ {A} - 3 \alpha (1 + \gamma) ^ {2}} \mathbb {E} \left[ \| g _ {m} (\theta^ {*}) \| _ {2} ^ {2} \mid F _ {m, 0} \right] \\ + \frac {2}{\left[ \lambda_ {A} - 3 \alpha (1 + \gamma) ^ {2} \right] M} \sum_ {i = 0} ^ {M - 1} \mathbb {E} \left[ \xi_ {m} \left(\theta_ {m, i}\right) \mid F _ {m, 0} \right] \tag {4} \\ \end{array} +$$ + +The first term on the right-hand side of eq. (4) captures the epochwise contraction property of Algorithm 2. The second term is due to the variance of the pseudo-gradient estimation, which captures how well the batch pseudo-gradient $g_{m}(\theta^{*})$ approximates the mean pseudo-gradient $g(\theta^{*})$ (note that $g(\theta^{*}) = 0$ ). Such a variance term can be shown to decay to zero as the batch size gets large similarly to the i.i.d. case. The third term captures the bias introduced by the correlation among samples in the $m$ -th epoch. To quantitatively understand this error term, we provide the following lemma that characterizes how the bias error is controlled by the batch size $M$ . + +Lemma 1. 
For any $m > 0$ and any $\theta \in B_{\theta}$ , which is a ball with radius $R_{\theta}$ , we have

$$
\mathbb {E} [ \xi_ {m} (\theta) ] \leq \frac {\lambda_ {A}}{4} \mathbb {E} [ \| \theta - \theta^ {*} \| _ {2} ^ {2} | \mathcal {F} _ {m, 0} ] + \frac {8 [ 1 + (\kappa - 1) \rho ]}{\lambda_ {A} (1 - \rho) M} [ R _ {\theta} ^ {2} (1 + \gamma) ^ {2} + r _ {\max} ^ {2} ],
$$

where the expectation is over the random trajectory and $\theta$ is treated as a fixed variable.

Lemma 1 shows that the bias error diminishes as the batch size $M$ increases and as the algorithm approaches the fixed point $\theta^{*}$ . To explain why this happens, note that the definition of $\xi_{m}(\theta)$ immediately yields the following bound:

$$
\xi_ {m} (\theta) \leq \frac {1}{\lambda_ {A}} \| g _ {m} (\theta) - g (\theta) \| _ {2} ^ {2} + \frac {\lambda_ {A}}{4} \| \theta - \theta^ {*} \| _ {2} ^ {2}. \tag {5}
$$

The first term on the right-hand side of eq. (5) can be bounded by the concentration property of the ergodic process, since $g_{m}(\theta) = \frac{1}{M}\sum_{i = (m - 1)M}^{mM - 1}g_{x_{i}}(\theta)\stackrel {a.s.}{\rightarrow}g(\theta)$ . As $M$ increases, the randomness due to the pseudo-gradient estimation is essentially averaged out by the variance reduction step in VRTD, which implicitly eliminates its correlation with samples in the previous epochs.

As a comparison, the bias error in vanilla TD has been shown to be bounded by $\mathbb{E}[\xi_m(\theta)] = \mathcal{O}(\alpha \log (1 / \alpha))$ Bhandari et al. (2018). In order to reduce the bias and achieve a high convergence accuracy, the stepsize $\alpha$ is required to be small, which causes the algorithm to run very slowly. The advantage of VRTD is that the bias can be reduced by choosing a sufficiently large batch size $M$ , so that the stepsize can still be kept at a desirable constant to guarantee fast convergence.

Theorem 2.
Consider the VRTD algorithm in Algorithm 2. Suppose Assumptions 1-3 hold. Set the constant stepsize $\alpha < \frac{\lambda_A}{12(1 + \gamma)^2}$ and the batch size $M > \frac{1}{0.5\alpha\lambda_A - 6\alpha^2(1 + \gamma)^2}$ . Then, we have

$$
\mathbb {E} \left[ \left\| \tilde {\theta} _ {m} - \theta^ {*} \right\| _ {2} ^ {2} \right] \leq C _ {1} ^ {m} \left\| \tilde {\theta} _ {0} - \theta^ {*} \right\| _ {2} ^ {2} + \frac {3 C _ {4} \alpha + C _ {2} / \lambda_ {A}}{\left(1 - C _ {1}\right) \left[ 0 . 5 \lambda_ {A} - 3 \alpha (1 + \gamma) ^ {2} \right] M}, \tag {6}
$$

where $C_1 = \frac{1 / M + 3\alpha^2(1 + \gamma)^2}{0.5\alpha\lambda_A - 3\alpha^2(1 + \gamma)^2}$ (with $C_1 < 1$ due to the choices for $\alpha$ and $M$ ), $C_2 = \frac{16[1 + (\kappa - 1)\rho][R_\theta^2(1 + \gamma)^2 + r_{\mathrm{max}}^2]}{1 - \rho}$ and $C_4 = [(1 + \gamma)R_\theta +r_{\mathrm{max}}]^2 +\frac{2\rho\kappa G[(1 + \gamma)R_\theta + r_{\mathrm{max}}]}{1 - \rho}$ .

We note that the convergence rate in eq. (6) can be written in the simpler form $\mathbb{E}[\|\tilde{\theta}_m - \theta^*\|^2] \leq C_1^m \|\tilde{\theta}_0 - \theta^*\|^2 + \mathcal{O}(1/M)$ .

Theorem 2 shows that VRTD (i.e., Algorithm 2) with Markovian samples converges to a neighborhood of $\theta^{*}$ at a linear rate, and the size of the neighborhood (i.e., the convergence error) decays sublinearly with the batch size $M$ . More specifically, the first term on the right-hand side of eq. (6) captures the linear convergence of the algorithm, and the second term corresponds to the sum of the cumulative pseudo-gradient estimation error and the cumulative bias error. For a fixed stepsize, the total convergence error is dominated by the sum of these two error terms, which is of order $\mathcal{O}(1 / M)$ . Therefore, the variance reduction in Algorithm 2 reduces both the variance and the bias of the pseudo-gradient estimator.
Based on the convergence rate of VRTD characterized in Theorem 2 under Markovian sampling, we obtain the following bound on the corresponding computational complexity.

Corollary 2. Suppose Assumptions 1-3 hold. Let $\alpha = \frac{\lambda_A}{24(1 + \gamma)^2}$ and $M = \lceil \left(\frac{32C_2 + 4C_4}{3(1 - C_1)} + 100(1 + \gamma)^2\right)\max \{\frac{1}{\epsilon},\frac{1}{\epsilon\lambda_A^2}\} \rceil$ . Then, for any $\epsilon >0$ , an $\epsilon$ -accuracy solution (i.e., $\mathbb{E}\|\tilde{\theta}_m - \theta^* \|^2\leq \epsilon$ ) can be attained with at most $m = \lceil \log \frac{2\|\tilde{\theta}_0 - \theta^*\|_2^2}{\epsilon} /\log \frac{1}{C_1}\rceil$ iterations. Correspondingly, the total number of pseudo-gradient computations required by VRTD (i.e., Algorithm 2) under Markovian sampling to attain such an $\epsilon$ -accuracy solution is at most

$$
\mathcal {O} \left(\max \left\{\frac {1}{\epsilon}, \frac {1}{\epsilon \lambda_ {A} ^ {2}} \right\} \log \frac {1}{\epsilon}\right).
$$

Proof. Given the values of $\alpha$ and $M$ in the corollary, it can be easily checked that $\mathbb{E}\|\tilde{\theta}_m - \theta^*\|^2\leq \epsilon$ for $m = \lceil \log \frac{2\|\tilde{\theta}_0 - \theta^*\|_2^2}{\epsilon} /\log \frac{1}{C_1}\rceil$ . Then the total number of pseudo-gradient computations is given by $2mM$ , which yields the desired order given in the corollary.

As a comparison, consider the vanilla TD algorithm studied in Bhandari et al. (2018) with the constant stepsize $\alpha = \mathcal{O}(\epsilon / \log(1/\epsilon))$ . Under Markovian sampling, it can be shown (see Appendix F.2) that vanilla TD requires $\mathcal{O}\left(\frac{1}{\epsilon \lambda_A^2} \log^2\left(\frac{1}{\epsilon}\right)\right)$ pseudo-gradient computations in total to obtain an $\epsilon$ -accuracy solution. Hence, in the Markovian setting, VRTD outperforms vanilla TD in terms of the total computational complexity by a factor of $\log \frac{1}{\epsilon}$ .
To explain intuitively, we first note that the correlation among data samples in the Markovian case causes a bias error in addition to the variance error. For VRTD, due to the variance reduction scheme, the bias and variance errors are kept at the same level (with respect to the batch size), so that the bias error does not cause an order-level increase in the computational complexity of VRTD. However, for vanilla TD, the bias error dominates the variance error, which requires more iterations to attain an $\epsilon$ -accurate solution and yields the additional $\log \frac{1}{\epsilon}$ factor in the total complexity compared to VRTD.

# 5 EXPERIMENTS

In this section, we provide numerical results to verify our theoretical results. Note that in Appendix A, we provide further experiments on two problems in OpenAI Gym Brockman et al. (2016) and one experiment to demonstrate that VRTD is more sample-efficient than vanilla TD.

We consider an MDP with $\gamma = 0.95$ and $|S| = 50$ . Each transition probability is randomly sampled from [0,1], and the transition probabilities out of each state are normalized to sum to one. The expected reward for each transition is also generated randomly in [0,1], and the reward on each transition is observed without noise. Each component of the feature matrix $\Phi \in \mathbb{R}^{50\times 4}$ is sampled uniformly at random between 0 and 1. The baseline for comparison is the vanilla TD algorithm, which corresponds to the case with $M = 1$ in our figures. We conduct two experiments to investigate how the batch size $M$ for variance reduction affects the performance of VRTD with i.i.d. and Markovian samples. In the Markovian setting, we sample the data from an MDP trajectory. In the i.i.d. setting, we sample the data independently from the corresponding stationary distribution. In both experiments, we set the constant stepsize to $\alpha = 0.1$ and run the experiments for five different batch sizes: $M = 1, 50, 500, 1000, 2000$ .
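The synthetic setup above can be generated along the following lines; this is a sketch under our own assumptions (the seed, the helper name `make_random_mdp`, and computing the ground truth as $\theta^* = -A^{-1}b$ from the quantities defined for eq. (1)), not the authors' experiment code.

```python
import numpy as np

def make_random_mdp(n_states=50, d=4, gamma=0.95, seed=0):
    """Random MDP + random features as in the experiment section (sketch)."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(0.0, 1.0, (n_states, n_states))
    P /= P.sum(axis=1, keepdims=True)            # rows normalized to sum to one
    r = rng.uniform(0.0, 1.0, n_states)          # expected reward per state
    Phi = rng.uniform(0.0, 1.0, (n_states, d))   # feature matrix in R^{50 x 4}
    # stationary distribution of the induced chain: mu^T P = mu^T
    evals, evecs = np.linalg.eig(P.T)
    mu = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    mu /= mu.sum()
    # mean pseudo-gradient components A, b and ground truth theta* = -A^{-1} b
    D = np.diag(mu)
    A = Phi.T @ D @ (gamma * P @ Phi - Phi)
    b = Phi.T @ D @ r
    theta_star = -np.linalg.solve(A, b)
    return P, r, Phi, mu, theta_star
```

The returned `theta_star` serves as the reference point for the squared-error curves, and `mu` can be used directly as the sampling distribution in the i.i.d. setting.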
Our results are reported in Figures 1 and 2. All the plots report the squared error averaged over 1000 independent runs. In each case, the left figure illustrates the convergence process over the number of pseudo-gradient computations, and the right figure shows the convergence errors averaged over the last 10000 iterations for different batch size values. It can be seen that in both the i.i.d. and Markovian settings, the averaged error decreases as the batch size increases, which corroborates both Theorem 1 and Theorem 2. We also observe that an increased batch size substantially reduces the error without noticeably slowing down the convergence, demonstrating the desired advantage of variance reduction. Moreover, we observe that the error of VRTD with i.i.d. samples is smaller than that of VRTD with Markovian samples under all batch size settings, which indicates that the correlation among Markovian samples introduces additional errors.

![](images/28ebaa3db3c879368b05f60275480d9b299d5b0af39942f7a3aedf601c8f1d7c.jpg)
(a) left: iteration process; right: averaged convergence error

![](images/d66ebb52a6aac15deae6c9ebcee58e02bc2d087dee5b7b7239d7155da4d6dea2.jpg)
Figure 1: Error decay of VRTD with i.i.d. samples

# 6 CONCLUSION

In this paper, we provided a convergence analysis for VRTD with both i.i.d. and Markovian samples. We developed a novel technique to bound the bias of the VRTD pseudo-gradient estimator. Our results demonstrate the advantage of VRTD over vanilla TD: both the variance and bias errors are reduced by the batch size. We anticipate that such a variance reduction technique and our analysis tools can be further applied to other RL algorithms.
![](images/6d573902cc3f2a8564cb857ea622f333619aaef00ee6459b50339933482ac9ca.jpg)
(a) left: iteration process; right: averaged convergence error

![](images/5df1f97f29436b647148d54502c6b6fe5c91a6afb7a2c36f90327b3a4bba96b9.jpg)
Figure 2: Error decay of VRTD with Markovian samples

# ACKNOWLEDGMENTS

The work was supported in part by the US National Science Foundation under the grants CCF-1801855, ECCS-1818904, and CCF-1909291. The authors would like to thank Bowen Weng at the Ohio State University for the helpful discussions on the experiments. The authors would also like to thank a few anonymous reviewers for their suggestions on the analysis of the overall computational complexity as well as additional experiments, which significantly helped to improve the quality of the paper.

# REFERENCES

Bertsekas, D. P. and Tsitsiklis, J. N. (1995). Neuro-dynamic programming: An overview. In Proceedings of 34th IEEE Conference on Decision and Control, volume 1, pages 560-564.
Bhandari, J., Russo, D., and Singal, R. (2018). A finite time analysis of temporal difference learning with linear function approximation. In Proc. Conference on Learning Theory (COLT), pages 1691-1692.
Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., and Zaremba, W. (2016). OpenAI Gym.
Cai, Q., Yang, Z., Lee, J. D., and Wang, Z. (2019). Neural temporal-difference learning converges to global optima. arXiv preprint arXiv:1905.10027.
Dalal, G., Szörényi, B., Thoppe, G., and Mannor, S. (2018a). Finite sample analyses for TD (0) with function approximation. In Proc. AAAI Conference on Artificial Intelligence (AAAI).
Dalal, G., Szorenyi, B., Thoppe, G., and Mannor, S. (2018b). Finite sample analysis of two-timescale stochastic approximation with applications to reinforcement learning. In Proc. Conference on Learning Theory (COLT).
Watkins, C. J. C. H. and Dayan, P. (1992). Q-learning. Machine Learning, 8(3):279-292.
Dedecker, J. and Gouëzel, S. (2015).
Subgaussian concentration inequalities for geometrically ergodic Markov chains. Electronic Communications in Probability, 20. +Defazio, A., Bach, F., and Lacoste-Julien, S. (2014). SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems (NIPS), pages 1646-1654. +Du, S. S., Chen, J., Li, L., Xiao, L., and Zhou, D. (2017). Stochastic variance reduction methods for policy evaluation. In Proceedings of the 34th International Conference on Machine Learning (ICML), pages 1049-1058. +Hu, B. and Syed, U. A. (2019). Characterizing the exact behaviors of temporal difference learning algorithms using Markov jump linear system theory. arXiv preprint arXiv:1906.06781. + +Johnson, R. and Zhang, T. (2013). Accelerating stochastic gradient descent using predictive variance reduction. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 315-323. +Karmakar, P. and Bhatnagar, S. (2016). Dynamics of stochastic approximation with Markov iteratedependent noise with the stability of the iterates not ensured. arXiv preprint arXiv:1601.02217. +Karmakar, P. and Bhatnagar, S. (2017). Two time-scale stochastic approximation with controlled Markov noise and off-policy temporal-difference learning. Mathematics of Operations Research, 43(1):130-151. +Korda, N. and La, P. (2015). On TD (0) with function approximation: Concentration bounds and a centered variant with exponential convergence. In Proc. International Conference on Machine Learning (ICML), pages 626-634. +Lakshminarayanan, C. and Szepesvari, C. (2018). Linear stochastic approximation: How far does constant step-size and iterate averaging go? In International Conference on Artificial Intelligence and Statistics, pages 1347-1355. +Lange, S., Gabel, T., and Riedmiller, M. (2012). Batch reinforcement learning. In Reinforcement learning, pages 45-73. Springer. +Lee, D. and He, N. (2019). 
Target-based temporal difference learning. In International Conference on Machine Learning (ICML). +Liu, B., Liu, J., Ghavamzadeh, M., Mahadevan, S., and Petrik, M. (2015). Finite-sample analysis of proximal gradient td algorithms. In Proc. Uncertainty in Artificial Intelligence (UAI), pages 504-513. AUAI Press. +Maei, H. R. (2011). Gradient temporal-difference learning algorithms. PhD thesis, University of Alberta. +Narayanan, C. and Szepesvári, C. (2017). Finite time bounds for temporal difference learning with function approximation: Problems with some "state-of-the-art" results. Technical report. +Peng, Z., Touati, A., Vincent, P., and Precup, D. (2019). SVRG for policy evaluation with fewer gradient evaluations. arXiv preprint arXiv:1906.03704. +Rummery, G. A. and Niranjan, M. (1994). On-line $Q$ -learning Using Connectionist Systems, volume 37. University of Cambridge, Department of Engineering Cambridge, England. +Srikant, R. and Ying, L. (2019). Finite-time error bounds for linear stochastic approximation and TD learning. In Proc. Conference on Learning Theory (COLT). +Sutton, R. S. (1988). Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9-44. +Sutton, R. S., Maei, H. R., Precup, D., Bhatnagar, S., Silver, D., Szepesvári, C., and Wiewiora, E. (2009). Fast gradient-descent methods for temporal-difference learning with linear function approximation. In Proc. International Conference on Machine Learning (ICML), pages 993-1000. +Sutton, R. S., Szepesvári, C., and Maei, H. R. (2008). A convergent o(n) algorithm for off-policy temporal-difference learning with linear function approximation. Advances in Neural Information Processing Systems (NIPS), 21(21):1609-1616. +Tadic, V. (2001). On the convergence of temporal-difference learning with linear function approximation. Machine Learning, 42(3):241-267. +Tsitsiklis, J. N. and Van Roy, B. (1997). Analysis of temporal-difference learning with function approximation. In Proc. 
Advances in Neural Information Processing Systems (NIPS), pages 1075-1081.
Wang, G., Li, B., and Giannakis, G. B. (2019). A multistep Lyapunov approach for finite-time analysis of biased stochastic approximation. arXiv preprint arXiv:1909.04299.

Wang, Y., Chen, W., Liu, Y., Ma, Z.-M., and Liu, T.-Y. (2017). Finite sample analysis of the GTD policy evaluation algorithms in Markov setting. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 5504-5513.
Xu, T., Zou, S., and Liang, Y. (2019). Two time-scale off-policy TD learning: Non-asymptotic analysis over Markovian samples. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 10633-10643.
Yu, H. (2017). On convergence of some gradient-based temporal-differences algorithms for off-policy learning. arXiv preprint arXiv:1712.09652.
Zou, S., Xu, T., and Liang, Y. (2019). Finite-sample analysis for SARSA with linear function approximation. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 8665-8675.

# Supplementary Materials

# A ADDITIONAL EXPERIMENTS

In this section, we assess the practical performance of VRTD (Algorithm 2) on two problems in OpenAI Gym Brockman et al. (2016), namely Frozen Lake $(4\times 4)$ and Mountain Car. We also provide an additional experiment to demonstrate that VRTD is more sample-efficient than vanilla TD.

# A.1 FROZEN LAKE

Frozen Lake is a game in OpenAI Gym, which is modeled as an MDP with finite state and action spaces. An agent starts from the starting point at $t = 0$ and can only move to neighboring blocks. It returns to the starting point every time it reaches a "hole" or the "goal". The agent receives a reward of 1 only when it reaches the goal and 0 otherwise. Each transition probability is randomly sampled from [0, 1] and normalized to sum to one, and each component of the feature matrix $\Phi \in \mathbb{R}^{16 \times 4}$ is also randomly sampled from [0, 1].
Given the feature matrix and the transition probabilities, the ground truth value of $\theta^*$ can be calculated, and it is used to evaluate the error in the experiments. We set the stepsize to $\alpha = 0.1$ and run vanilla TD ( $M = 1$ ) and VRTD with the batch sizes $M = 50, 500, 1000, 2000$ . Note that $M = 1$ corresponds to the vanilla TD baseline. We compute the squared error over 1000 independent runs. The left plot in Figure 3 shows the convergence process over the number of pseudo-gradient computations, and the right plot in Figure 3 shows the convergence error averaged over the last 10000 iterations. It can be observed that VRTD achieves much smaller error than TD, and that increasing the batch size for VRTD substantially reduces the error without significantly slowing down the convergence.

![](images/9e55b9b8d5e1896b884ca229513288d7ddf036f9c8df6b0920854358ffe1d267.jpg)
(a) left: iteration process; right: averaged convergence error

![](images/baca83d386e8befab1396f8e5dd8790555a838d37194a17a24e47956d01170c5.jpg)
Figure 3: Error decay of VRTD in Frozen Lake problem

# A.2 MOUNTAIN CAR

Mountain Car is a game in OpenAI Gym driven by an MDP with an infinite state space and a finite action space. At each time step, an agent randomly chooses an action from {push left, push right, no push}. In this problem, the ground truth value of $\theta^{*}$ is not known. In order to quantify the performance of VRTD, we apply the error metric known as the norm of the expected TD update, given by $\mathrm{NEU} = \|\mathbb{E}[\delta \phi]\|_2^2$ , where $\delta$ is the temporal difference error Sutton et al. (2009); Maei (2011). Each state sample is transformed into a 20-dimensional feature vector using an approximation of an RBF kernel. The agent follows a random policy in our experiment and we initialize $\theta_0 = 0$ .
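The NEU metric above can be estimated from a held-out set of transitions. A minimal sketch (the arrays `phi`, `phi_next`, and `rewards` below are hypothetical placeholder samples, and the discount factor is an illustrative assumption, not the paper's exact configuration):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, gamma = 1000, 20, 0.99
theta = rng.normal(size=d)

# Hypothetical test transitions: feature vectors of s_t, s_{t+1} and rewards.
phi = rng.normal(size=(n, d))
phi_next = rng.normal(size=(n, d))
rewards = -np.ones(n)                    # Mountain Car emits reward -1 per step

delta = rewards + gamma * phi_next @ theta - phi @ theta         # TD errors
neu = np.linalg.norm((delta[:, None] * phi).mean(axis=0)) ** 2   # NEU = ||E[delta*phi]||_2^2
print(neu)
```

Averaging $\delta\phi$ over the test set before taking the norm is what distinguishes NEU from a mean-squared TD error; a small NEU indicates the expected TD update at $\theta$ is close to zero.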
At $t = 0$ , the agent starts from the lowest point, receives a reward of $-1$ at each time step, and returns to the starting point every time it reaches the goal. We set the stepsize to $\alpha = 0.2$ and run vanilla TD ( $M = 1$ ) and VRTD with batch size $M = 1000$ . After every 10000 pseudo-gradient computations, learning is paused and the NEU is computed by averaging over 1000 test samples. We conduct 1000 independent runs and report the results averaged over these runs. Figure 4 shows the convergence process of the NEU versus the number of pseudo-gradient computations. It can be seen that VRTD achieves smaller NEU than vanilla TD.

![](images/a1dcca46770d2af2f7eb829867a4839ecbaeb9d488fd1268d56bd99131668c10.jpg)
Figure 4: NEU decay of VRTD in Mountain Car problem

# A.3 COMPARISON BETWEEN VRTD AND TD WITH A CHANGING STEPSIZE

In this subsection, we provide an additional experiment comparing the performance of VRTD given in Algorithm 2 (under a constant stepsize) with the TD algorithm (under a changing stepsize as suggested by the reviewer). We adopt the same Frozen Lake setting as in Appendix A.1. Let VRTD take a batch size $M = 5000$ and stepsize $\alpha = 0.1$ . For a fair comparison, we start TD with the same constant stepsize $\alpha = 0.1$ and then halve the stepsize whenever the error stops decreasing. The comparison is reported in Figure 5, where both curves are averaged over 1000 independent runs. The two algorithms are compared in terms of the squared error versus the total number of pseudo-gradient computations (equivalently, the total number of samples used). It can be seen that VRTD reaches the required accuracy much faster than TD.
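The epoch structure behind these comparisons, with the variance-reduced update $\theta_{m,t+1} = \theta_{m,t} + \alpha\big(g_{x_{j_{m,t}}}(\theta_{m,t}) - g_{x_{j_{m,t}}}(\tilde{\theta}_{m-1}) + g_m(\tilde{\theta}_{m-1})\big)$, can be sketched as follows. This is a minimal NumPy sketch on a synthetic random MDP with i.i.d. samples, not the exact experiment code; the sizes, seed, stepsize, and Gaussian features are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
S, d = 16, 4                          # illustrative sizes, as in the 4x4 Frozen Lake setting
gamma, alpha, M, epochs = 0.5, 0.05, 500, 10

P = rng.random((S, S))
P /= P.sum(axis=1, keepdims=True)     # random transition kernel, rows normalized
r = rng.random(S)                     # state rewards, so r_max <= 1
Phi = rng.normal(size=(S, d))
Phi /= np.linalg.norm(Phi, axis=1, keepdims=True)   # enforce ||phi(s)||_2 <= 1

mu = np.full(S, 1.0 / S)              # stationary distribution via power iteration
for _ in range(2000):
    mu = mu @ P

# Mean-path operator g(theta) = A theta + b and its fixed point theta* = -A^{-1} b.
A = Phi.T @ (mu[:, None] * (gamma * P @ Phi - Phi))
b = Phi.T @ (mu * r)
theta_star = -np.linalg.solve(A, b)

def g_hat(theta, s, rs, s2):
    """Per-sample pseudo-gradients g_x(theta) = A_x theta + b_x, one row per sample."""
    delta = rs + gamma * Phi[s2] @ theta - Phi[s] @ theta   # TD errors
    return delta[:, None] * Phi[s]

theta_tilde = np.zeros(d)
for m in range(epochs):
    # Draw the batch B_m of M i.i.d. transitions (s, r(s), s') with s ~ mu.
    s = rng.choice(S, size=M, p=mu)
    s2 = np.array([rng.choice(S, p=P[k]) for k in s])
    rs = r[s]
    g_batch = g_hat(theta_tilde, s, rs, s2).mean(axis=0)    # g_m(theta_tilde)
    theta = theta_tilde.copy()
    for t in range(M):
        j = rng.integers(M)           # index j_{m,t} sampled from the batch
        corr = (g_hat(theta, s[j:j+1], rs[j:j+1], s2[j:j+1])[0]
                - g_hat(theta_tilde, s[j:j+1], rs[j:j+1], s2[j:j+1])[0])
        theta = theta + alpha * (corr + g_batch)            # variance-reduced update
    theta_tilde = theta

err = float(np.linalg.norm(theta_tilde - theta_star) ** 2)
print("squared error:", err)
```

The design point is that the per-sample correction `corr` vanishes as $\theta$ approaches $\tilde{\theta}_{m-1}$, so the residual noise is governed by the batch average $g_m(\tilde{\theta}_{m-1})$, whose variance shrinks as $1/M$.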
![](images/74fd710c95372215841863ef4f7f7705e7fc72260f4c3576f117e82aed463047.jpg)
Figure 5: Comparison of error decay of VRTD and TD with a changing stepsize in Frozen Lake problem

# B A COUNTER EXAMPLE

In this section, we use a counter-example to show that one major technical step in the characterization of the convergence bound in Korda and La (2015) does not hold. Consider Step 4 in the proof of Theorem 3 in Korda and La (2015). For the following defined $\epsilon(\theta)$

$$
\epsilon (\theta) = \left(\theta - \theta^ {*}\right) ^ {\top} \left[ \mathbb {E} \left(v v ^ {\top} \mid \mathcal {F} _ {n}\right) - \mathbb {E} _ {\Psi , \theta_ {n}} \left(v v ^ {\top}\right) \right] \left(\theta - \theta^ {*}\right), \tag {7}
$$

where $\varPsi$ denotes the stationary distribution of the corresponding Markov chain, Korda and La (2015) claimed that the following inequality holds:

$$
\left\| \epsilon (\theta) \right\| _ {2} \leq 2 H \left\| \mathbb {E} (v | \mathcal {F} _ {n}) - \mathbb {E} _ {\Psi , \theta_ {n}} (v) \right\| _ {2}. \tag {8}
$$

This is not correct. Consider the following counter-example. Let the batch size be $M = 3$ and the dimension of the feature vector be one, i.e., $\varPhi \in \mathbb{R}^{|S| \times 1}$ . Hence, all variables in eq. (7) and eq. (8) are scalars. Since the steps for proving eq. (8) in Korda and La (2015) do not place specific requirements on the transition kernel, eq. (8) should hold for any distribution of $v$ . Thus, suppose $v$ follows the uniform distribution over $[-3, 3]$ . Further assume that in the $n$ -th epoch, the samples of $v$ are given by $\{1, 2, -3\}$ . Recall that $\mathbb{E}(\cdot | \mathcal{F}_n)$ is the average over the batch samples in the $n$ -th epoch. We have:

$$
\mathbb {E} _ {\Psi , \theta_ {n}} (v) = 0, \quad \mathbb {E} _ {\Psi , \theta_ {n}} (v ^ {2}) = 3, \quad \mathbb {E} (v | \mathcal {F} _ {n}) = 0, \quad \mathbb {E} (v ^ {2} | \mathcal {F} _ {n}) = \frac {14}{3}.
$$

Substituting the above values into eq. (8) yields

$$
\left\| \epsilon (\theta) \right\| _ {2} = \left(\frac {14}{3} - 3\right) \left(\theta - \theta^ {*}\right) ^ {2} \leq 2 H \times 0 = 0, \tag {9}
$$

which obviously does not hold in general when $\theta \neq \theta^{*}$ . Consequently, the second statement in Theorem 3 of Korda and La (2015), which critically relies on the above erroneous step, does not hold. Hence, the first statement in the same theorem, whose proof is based on the second statement, cannot hold either.

# C USEFUL LEMMAS

In the rest of the paper, for any matrix $W \in \mathbb{R}^{d \times d}$ , we denote by $\| W \|_2$ the spectral norm of $W$ and by $\| W \|_F$ the Frobenius norm of $W$ .

Lemma 2. For any $x_{i} = (s_{i}, r_{i}, s_{i}^{\prime})$ (i.i.d. sample) or $x_{i} = (s_{i}, r_{i}, s_{i+1})$ (Markovian sample), we have $\| A_{x_{i}} \|_{2} \leq 1 + \gamma$ and $\| b_{x_{i}} \|_{2} \leq r_{\max}$ .

Proof. First consider the case when the samples are i.i.d. By the definition of $A_{x_i}$ , we have

$$
\begin{array}{l} \left\| A _ {x _ {i}} \right\| _ {2} = \left\| \phi \left(s _ {i}\right) \left(\gamma \phi \left(s _ {i} ^ {\prime}\right) - \phi \left(s _ {i}\right)\right) ^ {\top} \right\| _ {2} \\ \leq \left\| \phi (s _ {i}) \left(\gamma \phi \left(s _ {i} ^ {\prime}\right) - \phi (s _ {i})\right) ^ {\top} \right\| _ {F} \\ \leq \gamma \left\| \phi (s _ {i}) \phi (s _ {i} ^ {\prime}) ^ {\top} \right\| _ {F} + \left\| \phi (s _ {i}) \phi (s _ {i}) ^ {\top} \right\| _ {F} \\ \leq 1 + \gamma , \\ \end{array}
$$

where the last inequality follows since $\left\| \phi(s) \phi(s')^\top \right\|_F = \| \phi(s) \|_2 \| \phi(s') \|_2 \leq 1$ for all $s, s'$ . Then, consider $b_{x_i}$ :

$$
\left\| b _ {x _ {i}} \right\| _ {2} = \left\| r _ {x _ {i}} \phi (s _ {i}) \right\| _ {2} \leq r _ {\max } \left\| \phi (s _ {i}) \right\| _ {2} \leq r _ {\max }.
$$

Following similar steps, we obtain the same upper bounds for the case with Markovian samples.
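These bounds use only $\|\phi(s)\|_2 \le 1$ and $|r| \le r_{\max}$, so they are easy to sanity-check numerically. A small sketch (the dimension, discount factor, and sampling scheme below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
d, gamma, r_max = 8, 0.9, 1.0

worst_A, worst_b = 0.0, 0.0
for _ in range(1000):
    # Random unit-norm features phi(s_i), phi(s_i') and a reward bounded by r_max.
    phi = rng.normal(size=d); phi /= np.linalg.norm(phi)
    phi2 = rng.normal(size=d); phi2 /= np.linalg.norm(phi2)
    r_i = rng.uniform(-r_max, r_max)

    A_x = np.outer(phi, gamma * phi2 - phi)   # A_{x_i} = phi(s_i)(gamma phi(s_i') - phi(s_i))^T
    b_x = r_i * phi                           # b_{x_i} = r_i phi(s_i)

    worst_A = max(worst_A, np.linalg.norm(A_x, 2))   # spectral norm
    worst_b = max(worst_b, np.linalg.norm(b_x))

print(worst_A, "<=", 1 + gamma, "and", worst_b, "<=", r_max)
```

Since $A_{x_i}$ is rank one, its spectral and Frobenius norms coincide, which is why the Frobenius-norm steps in the proof are tight up to the triangle inequality.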
For any $m > 0$ and $0 \leq t \leq M - 1$ , we have $\left\| g_{x_{j_m,t}}(\theta_{m,t}) \right\|_2$ , $\left\| g_{x_{j_m,t}}(\tilde{\theta}_{m-1}) \right\|_2$ , $\left\| g_m(\tilde{\theta}_{m-1}) \right\|_2 \leq G$ .

Proof. First, we bound $\left\| g_{x_{j_m,t}}(\theta_{m,t})\right\|_2$ as follows:

$$
\begin{array}{l} \left\| g _ {x _ {j _ {m}, t}} \left(\theta_ {m, t}\right) \right\| _ {2} = \left\| A _ {x _ {j _ {m}, t}} \theta_ {m, t} + b _ {x _ {j _ {m}, t}} \right\| _ {2} \\ \leq \left\| A _ {x _ {j _ {m}, t}} \right\| _ {2} \left\| \theta_ {m, t} \right\| _ {2} + \left\| b _ {x _ {j _ {m}, t}} \right\| _ {2} \\ \leq (1 + \gamma) R _ {\theta} + r _ {\max}. \\ \end{array}
$$

Following steps similar to the above, we have $\left\| g_{x_{j_m,t}}(\tilde{\theta}_{m - 1})\right\| _2\leq G.$ Finally, for $\left\| g_{m}(\tilde{\theta}_{m - 1})\right\| _2$ , we have

$$
\left\| g _ {m} (\tilde {\theta} _ {m - 1}) \right\| _ {2} = \left\| \frac {1}{M} \sum_ {i = (m - 1) M} ^ {m M - 1} g _ {x _ {i}} (\tilde {\theta} _ {m - 1}) \right\| _ {2}
$$

$$
\begin{array}{l} \leq \frac {1}{M} \sum_ {i = (m - 1) M} ^ {m M - 1} \left\| g _ {x _ {i}} (\tilde {\theta} _ {m - 1}) \right\| _ {2} \\ \leq G, \tag {10} \\ \end{array}
$$

where eq. (10) follows from the fact shown above that $\left\| g_{x_{i}}(\tilde{\theta}_{m - 1})\right\| _2\leq G$ for each sample $x_i$ .

Lemma 4. Define $D_{1} = 2(1 + \gamma)^{2}$ and $D_{2} = 4((1 + \gamma)^{2}R_{\theta}^{2} + r_{\max}^{2})$ . For any $\theta \in \mathbb{R}^d$ , we have $\| g_{x_i}(\theta)\| _2^2\leq D_1\| \theta -\theta^*\| _2^2 +D_2$ .

Proof.
Recalling the definition of $g_{x_i}$ and applying Lemma 2, we have

$$
\begin{array}{l} \left\| g _ {x _ {i}} (\theta) \right\| _ {2} ^ {2} = \left\| A _ {x _ {i}} \theta + b _ {x _ {i}} \right\| _ {2} ^ {2} \\ = \| A _ {x _ {i}} \left(\theta - \theta^ {*}\right) + \left(A _ {x _ {i}} \theta^ {*} + b _ {x _ {i}}\right) \| _ {2} ^ {2} \\ \leq 2 \left\| A _ {x _ {i}} \left(\theta - \theta^ {*}\right) \right\| _ {2} ^ {2} + 2 \left\| A _ {x _ {i}} \theta^ {*} + b _ {x _ {i}} \right\| _ {2} ^ {2} \\ \leq 2 \left\| A _ {x _ {i}} \right\| _ {2} ^ {2} \left\| \theta - \theta^ {*} \right\| _ {2} ^ {2} + 4 \left(\left\| A _ {x _ {i}} \right\| _ {2} ^ {2} \left\| \theta^ {*} \right\| _ {2} ^ {2} + \left\| b _ {x _ {i}} \right\| _ {2} ^ {2}\right) \\ \leq 2 (1 + \gamma) ^ {2} \| \theta - \theta^ {*} \| _ {2} ^ {2} + 4 ((1 + \gamma) ^ {2} R _ {\theta} ^ {2} + r _ {\max } ^ {2}) \\ = D _ {1} \left\| \theta - \theta^ {*} \right\| _ {2} ^ {2} + D _ {2}. \\ \end{array}
$$

Lemma 5. Consider Algorithm 2 with Markovian samples. We have $\| \mathbb{E}[A_j|P_i] - A\| _F\leq (1 + \gamma)\kappa \rho^{j - i}$ and $\| \mathbb{E}[b_j|P_i] - b\| _2\leq r_{\max}\kappa \rho^{j - i}$ for $0 < i < j$ .

Proof. We first derive

$$
\begin{array}{l} \left\| \mathbb {E} \left[ A _ {j} \mid P _ {i} \right] - A \right\| _ {F} = \left\| \int A _ {x _ {j}} d P (x _ {j} \mid P _ {i}) - \int A _ {x _ {j}} d \mu_ {\pi} \right\| _ {F} \\ \leq \int \left\| A _ {x _ {j}} d P \left(x _ {j} \mid P _ {i}\right) - A _ {x _ {j}} d \mu_ {\pi} \right\| _ {F} \\ \leq \int \| A _ {x _ {j}} \| _ {F} | d P (x _ {j} | P _ {i}) - d \mu_ {\pi} | \\ \leq (1 + \gamma) \| P (x _ {j} | P _ {i}), \mu_ {\pi} \| _ {T V} \\ \leq (1 + \gamma) \kappa \rho^ {j - i}. \\ \end{array}
$$

Following steps similar to the above, we can derive $\| \mathbb{E}[b_j | P_i] - b \|_2 \leq r_{\max} \kappa \rho^{j-i}$ .
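Lemma 5 rests on the geometric mixing of the underlying chain: the total-variation distance between the law of a later sample and $\mu_\pi$ decays as $\kappa\rho^{j-i}$. The decay is easy to visualize on a small chain (a sketch; the chain below is an illustrative assumption, not the experiments' MDP):

```python
import numpy as np

rng = np.random.default_rng(2)
S = 6
P = rng.random((S, S)) + 0.1         # strictly positive kernel => uniformly ergodic chain
P /= P.sum(axis=1, keepdims=True)

mu = np.full(S, 1.0 / S)             # stationary distribution via power iteration
for _ in range(2000):
    mu = mu @ P

def tv_gap(t):
    """Worst-case (over start states) total-variation distance of the t-step law from mu."""
    Pt = np.linalg.matrix_power(P, t)
    return max(0.5 * np.abs(Pt[s] - mu).sum() for s in range(S))

gaps = np.array([tv_gap(t) for t in range(1, 9)])
ratios = gaps[1:] / gaps[:-1]        # stabilizes near a geometric rate rho < 1
print(gaps)
```

Fitting a line to $\log(\text{gaps})$ recovers $\log\rho$; any $\kappa$ dominating the first gap then gives a valid $(\kappa, \rho)$ pair for the mixing bound.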
# D PROOF OF THEOREM 1: CONVERGENCE OF VRTD WITH I.I.D. SAMPLES

Recall that $B_{m}$ is the sample batch drawn at the beginning of the $m$ -th epoch and $x_{i,j}$ denotes the sample picked at the $j$ -th iteration in the $i$ -th epoch in Algorithm 1. We denote by $\sigma(\tilde{\theta}_{0})$ the trivial $\sigma$ -field, since $\tilde{\theta}_{0}$ is a deterministic vector, and let $\sigma(A \cup B)$ denote the smallest $\sigma$ -field that contains both $A$ and $B$ . Then, we construct a set of $\sigma$ -fields in the following incremental way:

$$
F _ {1, 0} = \sigma (\tilde {\theta} _ {0}), F _ {1, 1} = \sigma \left(F _ {1, 0} \cup \sigma \left(B _ {1}\right) \cup \sigma \left(x _ {1, 1}\right)\right), \dots , F _ {1, M} = \sigma \left(F _ {1, (M - 1)} \cup \sigma \left(x _ {1, M}\right)\right),
$$

$$
F _ {2, 0} = \sigma (F _ {1, M} \cup \sigma (\tilde {\theta} _ {1})), F _ {2, 1} = \sigma (F _ {2, 0} \cup \sigma (B _ {2}) \cup \sigma (x _ {2, 1})), \dots , F _ {2, M} = \sigma (F _ {2, (M - 1)} \cup \sigma (x _ {2, M})),
$$

$$
\vdots
$$

$$
F _ {m, 0} = \sigma \big (F _ {(m - 1), M} \cup \sigma (\tilde {\theta} _ {m - 1}) \big), F _ {m, 1} = \sigma \big (F _ {m, 0} \cup \sigma (B _ {m}) \cup \sigma (x _ {m, 1}) \big), \dots , F _ {m, M} = \sigma \big (F _ {m, (M - 1)} \cup \sigma (x _ {m, M}) \big).
$$

The proof of Theorem 1 proceeds along the following steps.

# Step 1: Iteration within the $m$ -th epoch

For the $m$ -th epoch, we consider the last update (i.e., the $M$ -th iteration in the epoch), and decompose its error into the following form.
+ +$$ +\begin{array}{l} \| \theta_ {m, M} - \theta^ {*} \| _ {2} ^ {2} = \left\| \theta_ {m, M - 1} + \alpha \Big (g _ {x _ {j _ {m}, M}} (\theta_ {m, M - 1}) - g _ {x _ {j _ {m}, M}} (\tilde {\theta} _ {m - 1}) + g _ {m} (\tilde {\theta} _ {m - 1}) \Big) - \theta^ {*} \right\| _ {2} ^ {2} \\ = \| \theta_ {m, M - 1} - \theta^ {*} \| _ {2} ^ {2} + 2 \alpha (\theta_ {m, M - 1} - \theta^ {*}) ^ {\top} \left(g _ {x _ {j _ {m}, M}} (\theta_ {m, M - 1}) - g _ {x _ {j _ {m}, M}} (\tilde {\theta} _ {m - 1}) + g _ {m} (\tilde {\theta} _ {m - 1})\right) \\ + \alpha^ {2} \left\| g _ {x _ {j _ {m}, M}} \left(\theta_ {m, M - 1}\right) - g _ {x _ {j _ {m}, M}} \left(\tilde {\theta} _ {m - 1}\right) + g _ {m} \left(\tilde {\theta} _ {m - 1}\right) \right\| _ {2} ^ {2}. \tag {11} \\ \end{array} +$$ + +First, consider the third term in the right-hand side of eq. (11), we have + +$$ +\begin{array}{l} \left\| g _ {x _ {j _ {m}, M}} \left(\theta_ {m, M - 1}\right) - g _ {x _ {j _ {m}, M}} \left(\tilde {\theta} _ {m - 1}\right) + g _ {m} \left(\tilde {\theta} _ {m - 1}\right) \right\| _ {2} ^ {2} \\ \leq 2 \left\| g _ {x _ {j _ {m}, M}} \left(\theta_ {m, M - 1}\right) - g _ {x _ {j _ {m}, M}} \left(\tilde {\theta} _ {m - 1}\right) + g \left(\tilde {\theta} _ {m - 1}\right) \right\| _ {2} ^ {2} + 2 \left\| g _ {m} \left(\tilde {\theta} _ {m - 1}\right) - g \left(\tilde {\theta} _ {m - 1}\right) \right\| _ {2} ^ {2} \\ = 2 \left\| g _ {x _ {j _ {m}, M}} \left(\theta_ {m, M - 1}\right) - g _ {x _ {j _ {m}, M}} \left(\theta^ {*}\right) - \left[ \left(g _ {x _ {j _ {m}, M}} \left(\tilde {\theta} _ {m - 1}\right) - g _ {x _ {j _ {m}, M}} \left(\theta^ {*}\right)\right) - \left(g \left(\tilde {\theta} _ {m - 1}\right) - g \left(\theta^ {*}\right)\right) \right] \right\| _ {2} ^ {2} \\ + 2 \left\| g _ {m} \left(\tilde {\theta} _ {m - 1}\right) - g \left(\tilde {\theta} _ {m - 1}\right) \right\| _ {2} ^ {2} \\ \leq 4 \left\| g _ {x _ {j _ {m}, M}} \left(\theta_ {m, M - 1}\right) - g _ {x _ {j _ 
{m}, M}} \left(\theta^ {*}\right) \right\| _ {2} ^ {2} + 4 \left\| \left(g _ {x _ {j _ {m}, M}} \left(\tilde {\theta} _ {m - 1}\right) - g _ {x _ {j _ {m}, M}} \left(\theta^ {*}\right)\right) - \left(g \left(\tilde {\theta} _ {m - 1}\right) - g \left(\theta^ {*}\right)\right) \right\| _ {2} ^ {2} \\ + 2 \left\| g _ {m} \left(\tilde {\theta} _ {m - 1}\right) - g \left(\tilde {\theta} _ {m - 1}\right) \right\| _ {2} ^ {2}. \tag {12} \\ \end{array} +$$ + +Then, by taking the expectation conditioned on $F_{m,M - 1}$ on both sides of eq. (12), we have + +$$ +\mathbb {E} \left[ \left\| g _ {x _ {j _ {m}, M}} \left(\theta_ {m, M - 1}\right) - g _ {x _ {j _ {m}, M}} \left(\tilde {\theta} _ {m - 1}\right) + g _ {m} \left(\tilde {\theta} _ {m - 1}\right) \right\| _ {2} ^ {2} \mid F _ {m, M - 1} \right] +$$ + +$$ +\begin{array}{l} \stackrel {(i)} {\leq} 4 \mathbb {E} \left[ \left\| g _ {x _ {j _ {m}, M}} \left(\theta_ {m, M - 1}\right) - g _ {x _ {j _ {m}, M}} \left(\theta^ {*}\right) \right\| _ {2} ^ {2} \mid F _ {m, M - 1} \right] \\ + 4 \mathbb {E} \left[ \left\| \left(g _ {x _ {j _ {m}, M}} \left(\tilde {\theta} _ {m - 1}\right) - g _ {x _ {j _ {m}, M}} \left(\theta^ {*}\right)\right) - \mathbb {E} \left[ g _ {x _ {j _ {m}, M}} \left(\tilde {\theta} _ {m - 1}\right) - g _ {x _ {j _ {m}, M}} \left(\theta^ {*}\right) \mid F _ {m, M - 1} \right] \right\| _ {2} ^ {2} \mid F _ {m, M - 1} \right] \\ + 2 \mathbb {E} \left[ \left\| g _ {m} (\tilde {\theta} _ {m - 1}) - g (\tilde {\theta} _ {m - 1}) \right\| _ {2} ^ {2} \mid F _ {m, M - 1} \right] \\ \end{array} +$$ + +$$ +\begin{array}{l} \stackrel {(i i)} {\leq} 4 (1 + \gamma) ^ {2} \mathbb {E} \left[ \| \theta_ {m, M - 1} - \theta^ {*} \| _ {2} ^ {2} | F _ {m, M - 1} \right] + 4 (1 + \gamma) ^ {2} \mathbb {E} \left[ \left\| \tilde {\theta} _ {m - 1} - \theta^ {*} \right\| _ {2} ^ {2} | F _ {m, M - 1} \right] \\ + 2 \mathbb {E} \left[ \left\| g _ {m} \left(\tilde {\theta} _ {m - 1}\right) - g \left(\tilde {\theta} _ {m - 
1}\right) \right\| _ {2} ^ {2} \mid F _ {m, M - 1} \right] \\ \end{array} +$$ + +where $(i)$ follows from the fact that $\mathbb{E}[(g_{x_{j_m,M}}(\tilde{\theta}_{m-1}) - g_{x_{j_m,M}}(\theta^*))|F_{m,M-1}] = g(\tilde{\theta}_{m-1}) - g(\theta^*)$ , and $(ii)$ follows from the inequality $\mathbb{E}[(X - \mathbb{E}X)^2] \leq \mathbb{E}X^2$ and Lemma 2. Then, taking the expectation conditioned on $F_{m,M-1}$ on both sides of eq. (11) yields + +$$ +\mathbb {E} \left[ \left\| \theta_ {m, M} - \theta^ {*} \right\| _ {2} ^ {2} \mid F _ {m, M - 1} \right] +$$ + +$$ +\begin{array}{l} = \| \theta_ {m, M - 1} - \theta^ {*} \| _ {2} ^ {2} + 2 \alpha \left(\theta_ {m, M - 1} - \theta^ {*}\right) ^ {\top} \mathbb {E} \left[ g _ {x _ {j _ {m}, M}} \left(\theta_ {m, M - 1}\right) - g _ {x _ {j _ {m}, M}} \left(\tilde {\theta} _ {m - 1}\right) + g _ {m} \left(\tilde {\theta} _ {m - 1}\right) \Big | F _ {m, M - 1} \right] \\ + \alpha^ {2} \mathbb {E} \left[ \left\| g _ {x _ {j _ {m}, M}} \left(\theta_ {m, M - 1}\right) - g _ {x _ {j _ {m}, M}} \left(\tilde {\theta} _ {m - 1}\right) + g _ {m} \left(\tilde {\theta} _ {m - 1}\right) \right\| _ {2} ^ {2} \mid F _ {m, M - 1} \right] \\ \end{array} +$$ + +$$ +\begin{array}{l} \stackrel {(i)} {\leq} \left\| \theta_ {m, M - 1} - \theta^ {*} \right\| _ {2} ^ {2} + 2 \alpha \left(\theta_ {m, M - 1} - \theta^ {*}\right) ^ {\top} g \left(\theta_ {m, M - 1}\right) \\ + 2 \alpha \left(\theta_ {m, M - 1} - \theta^ {*}\right) ^ {\top} \left(\mathbb {E} \left[ g _ {m} \left(\tilde {\theta} _ {m - 1}\right) \mid F _ {m, M - 1} \right] - g \left(\tilde {\theta} _ {m - 1}\right)\right) \\ + 4 \alpha^ {2} (1 + \gamma) ^ {2} \| \theta_ {m, M - 1} - \theta^ {*} \| _ {2} ^ {2} + 4 \alpha^ {2} (1 + \gamma) ^ {2} \left\| \tilde {\theta} _ {m - 1} - \theta^ {*} \right\| _ {2} ^ {2} \\ \end{array} +$$ + +$$ +\begin{array}{l} + 2 \alpha^ {2} \mathbb {E} \left[ \left\| g _ {m} \left(\tilde {\theta} _ {m - 1}\right) - g \left(\tilde {\theta} _ {m - 1}\right) 
\right\| _ {2} ^ {2} \mid F _ {m, M - 1} \right] \\ \stackrel {(i i)} {\leq} \left\| \theta_ {m, M - 1} - \theta^ {*} \right\| _ {2} ^ {2} - \alpha \lambda_ {A} \left\| \theta_ {m, M - 1} - \theta^ {*} \right\| _ {2} ^ {2} + 2 \alpha \mathbb {E} \left[ \xi_ {m} (\tilde {\theta} _ {m - 1}) \Big | F _ {m, M - 1} \right] \\ + 4 \alpha^ {2} (1 + \gamma) ^ {2} \| \theta_ {m, M - 1} - \theta^ {*} \| _ {2} ^ {2} + 4 \alpha^ {2} (1 + \gamma) ^ {2} \left\| \tilde {\theta} _ {m - 1} - \theta^ {*} \right\| _ {2} ^ {2} \\ + 2 \alpha^ {2} \mathbb {E} \left[ \left\| g _ {m} \left(\tilde {\theta} _ {m - 1}\right) - g \left(\tilde {\theta} _ {m - 1}\right) \right\| _ {2} ^ {2} \mid F _ {m, M - 1} \right] \\ \end{array} +$$ + +$$ +\begin{array}{l} \stackrel {(i i i)} {\leq} \left\| \theta_ {m, M - 1} - \theta^ {*} \right\| _ {2} ^ {2} - \left[ \alpha \lambda_ {A} - 4 \alpha^ {2} (1 + \gamma) ^ {2} \right] \left\| \theta_ {m, M - 1} - \theta^ {*} \right\| _ {2} ^ {2} + 4 \alpha^ {2} (1 + \gamma) ^ {2} \left\| \tilde {\theta} _ {m - 1} - \theta^ {*} \right\| _ {2} ^ {2} \\ + 2 \alpha \mathbb {E} \left[ \xi_ {m} \left(\tilde {\theta} _ {m - 1}\right) \mid F _ {m, M - 1} \right] + 2 \alpha^ {2} \mathbb {E} \left[ \left\| g _ {m} \left(\tilde {\theta} _ {m - 1}\right) - g \left(\tilde {\theta} _ {m - 1}\right) \right\| _ {2} ^ {2} \mid F _ {m, M - 1} \right], \tag {13} \\ \end{array} +$$ + +where $(i)$ follows from the fact that $\mathbb{E}\left[g_{x_{j_{m,M}}}(\tilde{\theta}_{m - 1})\big|F_{m,M - 1}\right] = g(\tilde{\theta}_{m - 1})$ . In (ii) we define $\lambda_{A}$ as the absolute value of the largest eigenvalue of matrix $(A^T + A)$ , which is negative definite according to Tsitsiklis and Van Roy (1997). In (iii) we define $\xi_{m}(\theta) = (\theta - \theta^{*})^{\top}(g_{m}(\theta) - g(\theta))$ for $\theta \in \mathbb{R}^d$ . Then, by applying eq. 
(13) iteratively, we have

$$
\begin{array}{l} \mathbb {E} \left[ \left\| \theta_ {m, M} - \theta^ {*} \right\| _ {2} ^ {2} \mid F _ {m, 0} \right] \\ \leq \| \theta_ {m, 0} - \theta^ {*} \| _ {2} ^ {2} - [ \alpha \lambda_ {A} - 4 \alpha^ {2} (1 + \gamma) ^ {2} ] \sum_ {i = 0} ^ {M - 1} \mathbb {E} \left[ \| \theta_ {m, i} - \theta^ {*} \| _ {2} ^ {2} | F _ {m, 0} \right] + 4 M \alpha^ {2} (1 + \gamma) ^ {2} \| \tilde {\theta} _ {m - 1} - \theta^ {*} \| _ {2} ^ {2} \\ + 2 \alpha M \mathbb {E} \left[ \xi_ {m} \left(\tilde {\theta} _ {m - 1}\right) \mid F _ {m, 0} \right] + 2 M \alpha^ {2} \mathbb {E} \left[ \left\| g _ {m} \left(\tilde {\theta} _ {m - 1}\right) - g \left(\tilde {\theta} _ {m - 1}\right) \right\| _ {2} ^ {2} \mid F _ {m, 0} \right]. \tag {14} \\ \end{array}
$$

Moreover, since the samples in $B_m$ are i.i.d. and independent of $F_{m,0}$ , the bias term vanishes:

$$
\begin{array}{l} \mathbb {E} \left[ \xi_ {m} (\tilde {\theta} _ {m - 1}) \mid F _ {m, 0} \right] = \left(\tilde {\theta} _ {m - 1} - \theta^ {*}\right) ^ {\top} \left(\mathbb {E} \left[ g _ {m} (\tilde {\theta} _ {m - 1}) \mid F _ {m, 0} \right] - g (\tilde {\theta} _ {m - 1})\right) \\ = \left(\tilde {\theta} _ {m - 1} - \theta^ {*}\right) ^ {\top} \left(\frac {1}{M} \sum_ {i \in B _ {m}} \mathbb {E} \left[ A _ {x _ {i}} \tilde {\theta} _ {m - 1} + b _ {x _ {i}} \mid F _ {m, 0} \right] - (A \tilde {\theta} _ {m - 1} + b)\right) \\ = \left(\tilde {\theta} _ {m - 1} - \theta^ {*}\right) ^ {\top} \left(\left[ \left(\frac {1}{M} \sum_ {i \in B _ {m}} \mathbb {E} [ A _ {x _ {i}} | F _ {m, 0} ]\right) - A \right] \tilde {\theta} _ {m - 1} + \left[ \left(\frac {1}{M} \sum_ {i \in B _ {m}} \mathbb {E} [ b _ {x _ {i}} | F _ {m, 0} ]\right) - b \right]\right) \\ = 0. \\ \end{array}
$$

Then, arranging terms in eq.
(14) and using the above fact yield + +$$ +\begin{array}{l} \left[ \alpha \lambda_ {A} - 4 \alpha^ {2} (1 + \gamma) ^ {2} \right] \sum_ {i = 0} ^ {M - 1} \mathbb {E} \left[ \| \theta_ {m, i} - \theta^ {*} \| _ {2} ^ {2} \mid F _ {m, 0} \right] \\ \leq \left[ 1 + 4 M \alpha^ {2} (1 + \gamma) ^ {2} \right] \left\| \tilde {\theta} _ {m - 1} - \theta^ {*} \right\| _ {2} ^ {2} + 2 M \alpha^ {2} \mathbb {E} \left[ \left\| g _ {m} (\tilde {\theta} _ {m - 1}) - g (\tilde {\theta} _ {m - 1}) \right\| _ {2} ^ {2} \mid F _ {m, 0} \right]. \tag {15} \\ \end{array} +$$ + +Finally, dividing eq. (15) by $[\alpha \lambda_A - 4\alpha^2 (1 + \gamma)^2 ]M$ on both sides yields + +$$ +\begin{array}{l} \mathbb {E} \left[ \left\| \tilde {\theta} _ {m} - \theta^ {*} \right\| _ {2} ^ {2} \mid F _ {m, 0} \right] \\ \leq \frac {1 / M + 4 \alpha^ {2} (1 + \gamma) ^ {2}}{\alpha \lambda_ {A} - 4 \alpha^ {2} (1 + \gamma) ^ {2}} \left\| \tilde {\theta} _ {m - 1} - \theta^ {*} \right\| _ {2} ^ {2} + \frac {2 \alpha}{\lambda_ {A} - 4 \alpha (1 + \gamma) ^ {2}} \mathbb {E} \left[ \left\| g _ {m} (\tilde {\theta} _ {m - 1}) - g (\tilde {\theta} _ {m - 1}) \right\| _ {2} ^ {2} \mid F _ {m, 0} \right]. 
\tag {16} \\ \end{array}
$$

# Step 2: Bounding the variance error

For any $m \geq 1$ , we have

$$
\mathbb {E} \left[ \left\| g _ {m} \left(\tilde {\theta} _ {m - 1}\right) - g \left(\tilde {\theta} _ {m - 1}\right) \right\| _ {2} ^ {2} \mid F _ {m, 0} \right] \tag {17}
$$

$$
\begin{array}{l} = \mathbb {E} \left[ \left\| \frac {1}{M} \sum_ {i \in B _ {m}} \left(g _ {x _ {i}} (\tilde {\theta} _ {m - 1}) - g (\tilde {\theta} _ {m - 1})\right) \right\| _ {2} ^ {2} \Big | F _ {m, 0} \right] = \frac {1}{M ^ {2}} \mathbb {E} \left[ \left\| \sum_ {i \in B _ {m}} \left(g _ {x _ {i}} (\tilde {\theta} _ {m - 1}) - g (\tilde {\theta} _ {m - 1})\right) \right\| _ {2} ^ {2} \Big | F _ {m, 0} \right] \\ = \frac {1}{M ^ {2}} \sum_ {i \in B _ {m}} \sum_ {j \in B _ {m}} \mathbb {E} \left[ \left\langle g _ {x _ {i}} (\tilde {\theta} _ {m - 1}) - g (\tilde {\theta} _ {m - 1}), g _ {x _ {j}} (\tilde {\theta} _ {m - 1}) - g (\tilde {\theta} _ {m - 1}) \right\rangle \mid F _ {m, 0} \right] \\ = \frac {1}{M ^ {2}} \sum_ {i \in B _ {m}} \mathbb {E} \left[ \left\| g _ {x _ {i}} (\tilde {\theta} _ {m - 1}) - g (\tilde {\theta} _ {m - 1}) \right\| _ {2} ^ {2} \Big | F _ {m, 0} \right] \\ = \frac {1}{M ^ {2}} \sum_ {i \in B _ {m}} \mathbb {E} \left[ \left\| g _ {x _ {i}} (\tilde {\theta} _ {m - 1}) - \mathbb {E} \left[ g _ {x _ {i}} (\tilde {\theta} _ {m - 1}) \mid F _ {m, 0} \right] \right\| _ {2} ^ {2} \Big | F _ {m, 0} \right] \\ \leq \frac {1}{M ^ {2}} \sum_ {i \in B _ {m}} \mathbb {E} \left[ \left\| g _ {x _ {i}} \left(\tilde {\theta} _ {m - 1}\right) \right\| _ {2} ^ {2} \mid F _ {m, 0} \right] \leq \frac {1}{M} \left(D _ {1} \left\| \tilde {\theta} _ {m - 1} - \theta^ {*} \right\| _ {2} ^ {2} + D _ {2}\right), \tag
{18} \\ \end{array} +$$ + +where eq. (18) follows from Lemma 4. + +# Step 3: Iteration over $m$ epochs + +First, we substitute eq. (18) into eq. (16) to obtain + +$$ +\mathbb {E} \left[ \left\| \tilde {\theta} _ {m} - \theta^ {*} \right\| _ {2} ^ {2} \mid F _ {m, 0} \right] \leq C _ {1} \left\| \tilde {\theta} _ {m - 1} - \theta^ {*} \right\| _ {2} ^ {2} + \frac {2 D _ {2} \alpha}{\left(\lambda_ {A} - 4 \alpha (1 + \gamma) ^ {2}\right) M}, \tag {19} +$$ + +where we define $C_1 = \left(4\alpha (1 + \gamma)^2 +\frac{2D_1\alpha^2 + 1}{\alpha M}\right)\frac{1}{\lambda_A - 4\alpha(1 + \gamma)^2}.$ + +Taking the expectation of eq. (19) conditioned on $F_{m - 1,0}$ and following the steps similar to those in step 1 to upper bound $\mathbb{E}\left[\left\| \tilde{\theta}_{m - 1} - \theta^{*}\right\|_{2}^{2}\big|F_{m - 1,0}\right]$ , we obtain + +$$ +\begin{array}{l} \mathbb {E} \left[ \left\| \tilde {\theta} _ {m} - \theta^ {*} \right\| _ {2} ^ {2} \mid F _ {m - 1, 0} \right] \leq C _ {1} \mathbb {E} \left[ \left\| \tilde {\theta} _ {m - 1} - \theta^ {*} \right\| _ {2} ^ {2} \mid F _ {m - 1, 0} \right] + \frac {2 D _ {2} \alpha}{(\lambda_ {A} - 4 \alpha (1 + \gamma) ^ {2}) M} \\ \leq C _ {1} ^ {2} \left\| \tilde {\theta} _ {m - 2} - \theta^ {*} \right\| _ {2} ^ {2} + \frac {2 D _ {2} \alpha}{(\lambda_ {A} - 4 \alpha (1 + \gamma) ^ {2}) M} \sum_ {k = 0} ^ {1} C _ {1} ^ {k}. 
\\ \end{array}
$$

Then, by following the above steps for $(m - 1)$ times, we have

$$
\begin{array}{l} \mathbb {E} \left[ \left\| \tilde {\theta} _ {m} - \theta^ {*} \right\| _ {2} ^ {2} \right] \leq C _ {1} ^ {m} \left\| \tilde {\theta} _ {0} - \theta^ {*} \right\| _ {2} ^ {2} + \frac {2 D _ {2} \alpha}{(\lambda_ {A} - 4 \alpha (1 + \gamma) ^ {2}) M} \sum_ {k = 0} ^ {m - 1} C _ {1} ^ {k} \\ \leq C _ {1} ^ {m} \left\| \tilde {\theta} _ {0} - \theta^ {*} \right\| _ {2} ^ {2} + \frac {2 D _ {2} \alpha}{(1 - C _ {1}) (\lambda_ {A} - 4 \alpha (1 + \gamma) ^ {2}) M}, \\ \end{array}
$$

which yields the desired result.

# E PROOF OF THEOREM 2: CONVERGENCE OF VRTD WITH MARKOVIAN SAMPLES

We define $\sigma(S_i)$ to be the $\sigma$ -field of all samples up to the $i$ -th epoch and recall that $j_{m,t}$ is the index of the sample picked at the $t$ -th iteration in the $m$ -th epoch in Algorithm 2. Then we define a set of $\sigma$ -fields in the following incremental way:

$$
F _ {1, 0} = \sigma (S _ {0}), F _ {1, 1} = \sigma \left(F _ {1, 0} \cup \sigma \left(j _ {1, 1}\right)\right), \dots , F _ {1, M} = \sigma \left(F _ {1, (M - 1)} \cup \sigma \left(j _ {1, M}\right)\right),
$$

$$
F _ {2, 0} = \sigma \left(\sigma \left(S _ {1}\right) \cup F _ {1, M} \cup \sigma \left(\tilde {\theta} _ {1}\right)\right), F _ {2, 1} = \sigma \left(F _ {2, 0} \cup \sigma \left(j _ {2, 1}\right)\right), \dots , F _ {2, M} = \sigma \left(F _ {2, (M - 1)} \cup \sigma \left(j _ {2, M}\right)\right),
$$

$$
\vdots
$$

$$
F _ {m, 0} = \sigma (\sigma (S _ {m - 1}) \cup F _ {(m - 1), M} \cup \sigma (\tilde {\theta} _ {m - 1})), F _ {m, 1} = \sigma (F _ {m, 0} \cup \sigma (j _ {m, 1})), \dots , F _ {m, M} = \sigma (F _ {m, (M - 1)} \cup \sigma (j _ {m, M})).
$$

# E.1 PROOF OF LEMMA 1

We first prove Lemma 1, which is useful for Step 4 of the main proof of Theorem 2 provided in Section E.2.

Proof.
Recall the definition of the bias term: $\xi_n(\theta) = (\theta - \theta^*)^\top (g_n(\theta) - g(\theta))$ . We have + +$$ +\begin{array}{l} \mathbb {E} \left[ \xi_ {n} (\theta) | \mathcal {F} _ {n, 0} \right] \\ = \mathbb {E} \left[ \left(\theta - \theta^ {*}\right) ^ {\top} \left(g _ {n} (\theta) - g (\theta)\right) | \mathcal {F} _ {n, 0} \right] \\ = \mathbb {E} \left[ \left(\theta - \theta^ {*}\right) ^ {\top} \left[ \left(\frac {1}{M} \sum_ {i = (n - 1) M} ^ {n M - 1} A _ {x _ {i}} - A\right) \theta + \left(\frac {1}{M} \sum_ {i = (n - 1) M} ^ {n M - 1} b _ {x _ {i}} - b\right) \right] \mid \mathcal {F} _ {n, 0} \right] \\ \leq \frac {\lambda_ {A}}{4} \mathbb {E} [ \| \theta - \theta^ {*} \| _ {2} ^ {2} | \mathcal {F} _ {n, 0} ] + \frac {1}{\lambda_ {A}} \mathbb {E} \left[ \left\| \left(\frac {1}{M} \sum_ {i = (n - 1) M} ^ {n M - 1} A _ {x _ {i}} - A\right) \theta + \left(\frac {1}{M} \sum_ {i = (n - 1) M} ^ {n M - 1} b _ {x _ {i}} - b\right) \right\| _ {2} ^ {2} \middle | \mathcal {F} _ {n, 0} \right] \\ \leq \frac {\lambda_ {A}}{4} \mathbb {E} [ \| \theta - \theta^ {*} \| _ {2} ^ {2} | \mathcal {F} _ {n, 0} ] + \frac {2}{\lambda_ {A}} \mathbb {E} \left[ \left\| \left(\frac {1}{M} \sum_ {i = (n - 1) M} ^ {n M - 1} A _ {x _ {i}} - A\right) \theta \right\| _ {2} ^ {2} \middle | \mathcal {F} _ {n, 0} \right] \\ + \frac {2}{\lambda_ {A}} \mathbb {E} \left[ \left\| \frac {1}{M} \sum_ {i = (n - 1) M} ^ {n M - 1} b _ {x _ {i}} - b \right\| _ {2} ^ {2} \middle | \mathcal {F} _ {n, 0} \right] \\ \leq \frac {\lambda_ {A}}{4} \mathbb {E} [ \| \theta - \theta^ {*} \| _ {2} ^ {2} | \mathcal {F} _ {n, 0} ] + \frac {2 R _ {\theta} ^ {2}}{\lambda_ {A}} \mathbb {E} \left[ \left\| \frac {1}{M} \sum_ {i = (n - 1) M} ^ {n M - 1} A _ {x _ {i}} - A \right\| _ {2} ^ {2} \Bigg | \mathcal {F} _ {n, 0} \right] \\ + \frac {2}{\lambda_ {A}} \mathbb {E} \left[ \left| \left| \frac {1}{M} \sum_ {i = (n - 1) M} ^ {n M - 1} b _ {x _ {i}} - b \right| \right| _ {2} ^ {2} \middle | \mathcal 
{F} _ {n, 0} \right] \\ \stackrel {(i)} {\leq} \frac {\lambda_ {A}}{4} \mathbb {E} [ \| \theta - \theta^ {*} \| _ {2} ^ {2} | \mathcal {F} _ {n, 0} ] + \frac {2 R _ {\theta} ^ {2}}{\lambda_ {A}} \mathbb {E} \left[ \left\| \frac {1}{M} \sum_ {i = (n - 1) M} ^ {n M - 1} A _ {x _ {i}} - A \right\| _ {F} ^ {2} \Bigg | \mathcal {F} _ {n, 0} \right] \\ + \frac {2}{\lambda_ {A}} \mathbb {E} \left[ \left\| \frac {1}{M} \sum_ {i = (n - 1) M} ^ {n M - 1} b _ {x _ {i}} - b \right\| _ {2} ^ {2} \mid \mathcal {F} _ {n, 0} \right], \tag {20} \\ \end{array} +$$ + +where $(i)$ follows from the fact that $\| W\| _2\leq \| W\| _F$ for all matrices $W\in \mathbb{R}^{d\times d}$ . We define the interproduct between two matrices $W,V\in \mathbb{R}^{d\times d}$ as $\langle W,V\rangle = \sum_{ij}\sum_{ij}W_{ij}V_{ij}$ . Consider the second term in eq. (20): $\mathbb{E}\left[\left\| \frac{1}{M}\sum_{i = (n - 1)M}^{nM - 1}A_{x_i} - A\right\| _F^2\big|\mathcal{F}_{n,0}\right]$ , we have + +$$ +\begin{array}{l} \mathbb {E} \left[ \left\| \frac {1}{M} \sum_ {i = (n - 1) M} ^ {n M - 1} A _ {x _ {i}} - A \right\| _ {F} ^ {2} \mid \mathcal {F} _ {n, 0} \right] \\ = \frac {1}{M ^ {2}} \sum_ {i = (n - 1) M} ^ {n M - 1} \sum_ {j = (n - 1) M} ^ {n M - 1} \mathbb {E} [ \langle A _ {x _ {i}} - A, A _ {x _ {j}} - A \rangle | \mathcal {F} _ {n, 0} ] \\ = \frac {1}{M ^ {2}} \Big [ \sum_ {i = j} \mathbb {E} [ \| A _ {x _ {i}} - A \| _ {F} ^ {2} | \mathcal {F} _ {n, 0} ] + \sum_ {i \neq j} \mathbb {E} [ \langle A _ {x _ {i}} - A, A _ {x _ {j}} - A \rangle | \mathcal {F} _ {n, 0} ] \Big ] \\ \end{array} +$$ + +$$ +\begin{array}{l} \leq \frac {1}{M ^ {2}} \Big [ \sum_ {i = j} \mathbb {E} [ (\| A _ {x _ {i}} \| _ {F} + \| A \| _ {F}) ^ {2} | \mathcal {F} _ {n, 0} ] + \sum_ {i \neq j} \mathbb {E} [ \langle A _ {x _ {i}} - A, A _ {x _ {j}} - A \rangle | \mathcal {F} _ {n, 0} ] \Big ] \\ \leq \frac {1}{M ^ {2}} \left[ 4 (1 + \gamma) ^ {2} M + \sum_ {i \neq j} \mathbb {E} [ \langle A _ {x _ {i}} - A, A _ {x _ 
{j}} - A \rangle | \mathcal {F} _ {n, 0} ] \right]. \tag {21} \\ \end{array} +$$ + +Now consider upper-bounding the term $\mathbb{E}[\langle A_{x_i} - A,A_{x_j} - A\rangle |\mathcal{F}_{n,0}]$ in eq. (21). Without loss of generality, we consider the case when $i > j$ as follows: + +$$ +\begin{array}{l} \mathbb {E} [ \langle A _ {x _ {i}} - A, A _ {x _ {j}} - A \rangle | \mathcal {F} _ {n, 0} ] \\ = \mathbb {E} \left[ \mathbb {E} \left[ \langle A _ {x _ {i}} - A, A _ {x _ {j}} - A \rangle | x _ {j} \right] \mid \mathcal {F} _ {n, 0} \right] \\ = \mathbb {E} \left[ \left\langle \mathbb {E} \left[ A _ {x _ {i}} \mid x _ {j} \right] - A, A _ {x _ {j}} - A \right\rangle \middle | \mathcal {F} _ {n, 0} \right] \\ \leq \mathbb {E} \left[ \left\| \mathbb {E} \left[ A _ {x _ {i}} \mid x _ {j} \right] - A \right\| _ {F} \left\| A _ {x _ {j}} - A \right\| _ {F} \middle | \mathcal {F} _ {n, 0} \right] \\ \leq \mathbb {E} \left[ \left\| \mathbb {E} \left[ A _ {x _ {i}} \mid x _ {j} \right] - A \right\| _ {F} \left(\left\| A _ {x _ {j}} \right\| _ {F} + \| A \| _ {F}\right) \Big | \mathcal {F} _ {n, 0} \right] \\ \leq 2 (1 + \gamma) \mathbb {E} \left[ \left\| \mathbb {E} \left[ A _ {x _ {i}} \mid x _ {j} \right] - A \right\| _ {F} \mid \mathcal {F} _ {n, 0} \right] \\ \leq 2 \kappa (1 + \gamma) ^ {2} \rho^ {i - j}. \\ \end{array} +$$ + +We can further obtain + +$$ +\begin{array}{l} \sum_ {i \neq j} \mathbb {E} [ \langle A _ {x _ {i}} - A, A _ {x _ {j}} - A \rangle | \mathcal {F} _ {n, 0} ] \leq 2 \kappa (1 + \gamma) ^ {2} \sum_ {i \neq j} \rho^ {| i - j |} = 4 \kappa (1 + \gamma) ^ {2} \sum_ {k = 1} ^ {M - 1} \sum_ {l = 1} ^ {k} \rho^ {l} \\ \leq 2 \kappa (1 + \gamma) ^ {2} \frac {2 \rho}{1 - \rho} \sum_ {k = 1} ^ {M - 1} \left(1 - \rho^ {k}\right) \leq \frac {4 (1 + \gamma) ^ {2} M \kappa \rho}{1 - \rho}. \tag {22} \\ \end{array} +$$ + +Then substituting eq. (22) into eq.
(21) yields + +$$ +\mathbb {E} \left[ \left\| \frac {1}{M} \sum_ {i = (n - 1) M} ^ {n M - 1} A _ {x _ {i}} - A \right\| _ {F} ^ {2} \Bigg | \mathcal {F} _ {n, 0} \right] \leq \frac {1}{M} \left[ 4 (1 + \gamma) ^ {2} + \frac {4 (1 + \gamma) ^ {2} \kappa \rho}{1 - \rho} \right]. +$$ + +Then consider the third term in eq. (20): $\mathbb{E}\left[\left\| \frac{1}{M}\sum_{i = (n - 1)M}^{nM - 1}b_{x_i} - b\right\| _2^2\bigg|\mathcal{F}_{n,0}\right]$ . Similarly, we have + +$$ +\begin{array}{l} \mathbb {E} \left[ \left\| \frac {1}{M} \sum_ {i = (n - 1) M} ^ {n M - 1} b _ {x _ {i}} - b \right\| _ {2} ^ {2} \middle | \mathcal {F} _ {n, 0} \right] \\ = \frac {1}{M ^ {2}} \sum_ {i = (n - 1) M} ^ {n M - 1} \sum_ {j = (n - 1) M} ^ {n M - 1} \mathbb {E} \left[ \left(b _ {x _ {i}} - b\right) ^ {\top} \left(b _ {x _ {j}} - b\right) \mid \mathcal {F} _ {n, 0} \right] \\ = \frac {1}{M ^ {2}} \left[ \sum_ {i = j} \mathbb {E} [ \| b _ {x _ {i}} - b \| _ {2} ^ {2} | \mathcal {F} _ {n, 0} ] + \sum_ {i \neq j} \mathbb {E} [ (b _ {x _ {i}} - b) ^ {\top} (b _ {x _ {j}} - b) | \mathcal {F} _ {n, 0} ] \right] \\ \leq \frac {1}{M ^ {2}} \left[ \sum_ {i = j} \mathbb {E} \left[ \left(\| b _ {x _ {i}} \| _ {2} + \| b \| _ {2}\right) ^ {2} | \mathcal {F} _ {n, 0} \right] + \sum_ {i \neq j} \mathbb {E} \left[ \left(b _ {x _ {i}} - b\right) ^ {\top} \left(b _ {x _ {j}} - b\right) | \mathcal {F} _ {n, 0} \right] \right] \\ \leq \frac {1}{M ^ {2}} \left[ 4 r _ {\max } ^ {2} M + \sum_ {i \neq j} \mathbb {E} \left[ \left(b _ {x _ {i}} - b\right) ^ {\top} \left(b _ {x _ {j}} - b\right) \mid \mathcal {F} _ {n, 0} \right] \right]
\tag {23} \\ \end{array} +$$ + +Now to upper-bound the term $\mathbb{E}[(b_{x_i} - b)^\top (b_{x_j} - b)|\mathcal{F}_{n,0}]$ , without loss of generality, we assume $i > j$ : + +$$ +\mathbb {E} \left[ \left(b _ {x _ {i}} - b\right) ^ {\top} \left(b _ {x _ {j}} - b\right) \mid \mathcal {F} _ {n, 0} \right] +$$ + +$$ +\begin{array}{l} = \mathbb {E} \left[ \mathbb {E} \left[ \left(b _ {x _ {i}} - b\right) ^ {\top} \left(b _ {x _ {j}} - b\right) | x _ {j} \right] \mid \mathcal {F} _ {n, 0} \right] \\ = \mathbb {E} \left[ \left(\mathbb {E} \left[ b _ {x _ {i}} \mid x _ {j} \right] - b\right) ^ {\top} \left(b _ {x _ {j}} - b\right) \Big | \mathcal {F} _ {n, 0} \right] \\ \leq \mathbb {E} \left[ \left| \left| \mathbb {E} \left[ b _ {x _ {i}} \mid x _ {j} \right] - b \right| \right| _ {2} \left| \left| b _ {x _ {j}} - b \right| \right| _ {2} \middle | \mathcal {F} _ {n, 0} \right] \\ \leq \mathbb {E} \left[ \left| \left| \mathbb {E} \left[ b _ {x _ {i}} \mid x _ {j} \right] - b \right| \right| _ {2} \left(\left| \left| b _ {x _ {j}} \right| \right| _ {2} + \| b \| _ {2}\right) \Big | \mathcal {F} _ {n, 0} \right] \\ \leq 2 r _ {\max } \mathbb {E} \left[ \left| \left| \mathbb {E} \left[ b _ {x _ {i}} \mid x _ {j} \right] - b \right| \right| _ {2} \middle | \mathcal {F} _ {n, 0} \right] \\ \leq 2 \kappa r _ {\max } ^ {2} \rho^ {i - j}. \\ \end{array} +$$ + +Thus, we have + +$$ +\begin{array}{l} \sum_ {i \neq j} \mathbb {E} [ (b _ {x _ {i}} - b) ^ {\top} (b _ {x _ {j}} - b) | \mathcal {F} _ {n, 0} ] \leq 2 \kappa r _ {\max } ^ {2} \sum_ {i \neq j} \rho^ {| i - j |} = 4 \kappa r _ {\max } ^ {2} \sum_ {k = 1} ^ {M - 1} \sum_ {l = 1} ^ {k} \rho^ {l} \\ \leq 2 \kappa r _ {\max } ^ {2} \frac {2 \rho}{1 - \rho} \sum_ {k = 1} ^ {M - 1} \left(1 - \rho^ {k}\right) \leq \frac {4 r _ {\max } ^ {2} M \kappa \rho}{1 - \rho}.
\tag {24} \\ \end{array} +$$ + +Combining all of the above pieces yields the following final bound + +$$ +\mathbb {E} \left[ \xi_ {n} (\theta) \mid \mathcal {F} _ {n, 0} \right] \leq \frac {\lambda_ {A}}{4} \mathbb {E} [ \| \theta - \theta^ {*} \| _ {2} ^ {2} \mid \mathcal {F} _ {n, 0} ] + \frac {8 [ 1 + (\kappa - 1) \rho ]}{\lambda_ {A} (1 - \rho) M} \left[ R _ {\theta} ^ {2} (1 + \gamma) ^ {2} + r _ {\max } ^ {2} \right]. \tag {25} +$$ + +![](images/9ee0090c2657622c23bdc4bb14540f57d09004bc1b82592820999609c19f0820.jpg) + +# E.2 PROOF OF THEOREM 2 + +# Step 1: Iteration within the $m$ -th inner loop + +For the $m$ -th inner loop, we consider the last update (i.e., the $M$ -th iteration in the epoch), and decompose its error into the following form. + +$$ +\begin{array}{l} \left\| \theta_ {m, M} - \theta^ {*} \right\| _ {2} ^ {2} = \left\| \Pi_ {R _ {\theta}} \left(\theta_ {m, M - 1} + \alpha \left(g _ {x _ {j _ {m}, M}} \left(\theta_ {m, M - 1}\right) - g _ {x _ {j _ {m}, M}} \left(\tilde {\theta} _ {m - 1}\right) + g _ {m} \left(\tilde {\theta} _ {m - 1}\right)\right)\right) - \theta^ {*} \right\| _ {2} ^ {2} \\ \leq \left\| \theta_ {m, M - 1} + \alpha \left(g _ {x _ {j _ {m}, M}} \left(\theta_ {m, M - 1}\right) - g _ {x _ {j _ {m}, M}} \left(\tilde {\theta} _ {m - 1}\right) + g _ {m} \left(\tilde {\theta} _ {m - 1}\right)\right) - \theta^ {*} \right\| _ {2} ^ {2} \\ = \| \theta_ {m, M - 1} - \theta^ {*} \| _ {2} ^ {2} + 2 \alpha (\theta_ {m, M - 1} - \theta^ {*}) ^ {\top} \left(g _ {x _ {j _ {m}, M}} (\theta_ {m, M - 1}) - g _ {x _ {j _ {m}, M}} (\tilde {\theta} _ {m - 1}) + g _ {m} (\tilde {\theta} _ {m - 1})\right) \\ + \alpha^ {2} \left\| g _ {x _ {j _ {m}, M}} \left(\theta_ {m, M - 1}\right) - g _ {x _ {j _ {m}, M}} \left(\tilde {\theta} _ {m - 1}\right) + g _ {m} \left(\tilde {\theta} _ {m - 1}\right) \right\| _ {2} ^ {2}. \tag {26} \\ \end{array} +$$ + +First, consider the third term on the right-hand side of eq. (26).
+ +$$ +\begin{array}{l} \left\| g _ {x _ {j _ {m}, M}} \left(\theta_ {m, M - 1}\right) - g _ {x _ {j _ {m}, M}} \left(\tilde {\theta} _ {m - 1}\right) + g _ {m} \left(\tilde {\theta} _ {m - 1}\right) \right\| _ {2} ^ {2} \\ = \left\| g _ {x _ {j _ {m}, M}} \left(\theta_ {m, M - 1}\right) - g _ {x _ {j _ {m}, M}} \left(\theta^ {*}\right) - \left[ \left(g _ {x _ {j _ {m}, M}} \left(\tilde {\theta} _ {m - 1}\right) - g _ {x _ {j _ {m}, M}} \left(\theta^ {*}\right)\right) - \left(g _ {m} \left(\tilde {\theta} _ {m - 1}\right) - g _ {m} \left(\theta^ {*}\right)\right) \right] + g _ {m} \left(\theta^ {*}\right) \right\| _ {2} ^ {2} \\ \leq 3 \left\| g _ {x _ {j _ {m}, M}} \left(\theta_ {m, M - 1}\right) - g _ {x _ {j _ {m}, M}} \left(\theta^ {*}\right) \right\| _ {2} ^ {2} + 3 \left\| \left(g _ {x _ {j _ {m}, M}} \left(\tilde {\theta} _ {m - 1}\right) - g _ {x _ {j _ {m}, M}} \left(\theta^ {*}\right)\right) - \left(g _ {m} \left(\tilde {\theta} _ {m - 1}\right) - g _ {m} \left(\theta^ {*}\right)\right) \right\| _ {2} ^ {2} \\ + 3 \left\| g _ {m} \left(\theta^ {*}\right) \right\| _ {2} ^ {2}. \tag {27} \\ \end{array} +$$ + +Then, by taking the expectation conditioned on $F_{m,(M - 1)}$ on both sides of eq. 
(27), we have + +$$ +\begin{array}{l} \mathbb {E} \left[ \left\| g _ {x _ {j _ {m, M}}} \left(\theta_ {m, M - 1}\right) - g _ {x _ {j _ {m, M}}} \left(\tilde {\theta} _ {m - 1}\right) + g _ {m} \left(\tilde {\theta} _ {m - 1}\right) \right\| _ {2} ^ {2} \mid F _ {m, M - 1} \right] \\ \stackrel {(i)} {\leq} 3 \mathbb {E} \left[ \left\| g _ {x _ {j _ {m}, M}} \left(\theta_ {m, M - 1}\right) - g _ {x _ {j _ {m}, M}} \left(\theta^ {*}\right) \right\| _ {2} ^ {2} \mid F _ {m, M - 1} \right] \\ + 3 \mathbb {E} \left[ \left\| \left(g _ {x _ {j _ {m}, M}} (\tilde {\theta} _ {m - 1}) - g _ {x _ {j _ {m}, M}} (\theta^ {*})\right) - \mathbb {E} \left[ g _ {x _ {j _ {m}, M}} (\tilde {\theta} _ {m - 1}) - g _ {x _ {j _ {m}, M}} (\theta^ {*}) | F _ {m, M - 1} \right] \right\| _ {2} ^ {2} \middle | F _ {m, M - 1} \right] \\ \end{array} +$$ + +$$ ++ 3 \mathbb {E} \left[ \| g _ {m} \left(\theta^ {*}\right) \| _ {2} ^ {2} \mid F _ {m, M - 1} \right] +$$ + +$$ +\leq 3 \mathbb {E} \left[ \left\| g _ {x _ {j _ {m}, M}} (\theta_ {m, M - 1}) - g _ {x _ {j _ {m}, M}} (\theta^ {*}) \right\| _ {2} ^ {2} \Big | F _ {m, M - 1} \right] + 3 \mathbb {E} \left[ \left\| g _ {x _ {j _ {m}, M}} (\tilde {\theta} _ {m - 1}) - g _ {x _ {j _ {m}, M}} (\theta^ {*}) \right\| _ {2} ^ {2} \Big | F _ {m, M - 1} \right] +$$ + +$$ +\begin{array}{l} \stackrel {(i i)} {\leq} 3 \mathbb {E} \left[ \| A _ {m, M} \| _ {2} ^ {2} \| \theta_ {m, M - 1} - \theta^ {*} \| _ {2} ^ {2} \Big | F _ {m, M - 1} \right] + 3 \mathbb {E} \left[ \| A _ {m, M} \| _ {2} ^ {2} \Big \| \tilde {\theta} _ {m - 1} - \theta^ {*} \Big \| _ {2} ^ {2} \Big | F _ {m, M - 1} \right] \\ + 3 \mathbb {E} \left[ \| g _ {m} \left(\theta^ {*}\right) \| _ {2} ^ {2} \mid F _ {1, 0} \right] \\ \end{array} +$$ + +$$ +\leq 3 (1 + \gamma) ^ {2} \| \theta_ {m, M - 1} - \theta^ {*} \| _ {2} ^ {2} + 3 (1 + \gamma) ^ {2} \left\| \tilde {\theta} _ {m - 1} - \theta^ {*} \right\| _ {2} ^ {2} + 3 \mathbb {E} \left[ \| g _ {m} (\theta^ {*}) \| _ {2} ^ {2} \mid F _ 
{1, 0} \right] \tag {28} +$$ + +where $(i)$ follows from the fact that $\mathbb{E}[(g_{x_{j_m,M}}(\tilde{\theta}_{m-1}) - g_{x_{j_m,M}}(\theta^*))|F_{m,M-1}] = g_m(\tilde{\theta}_{m-1}) - g_m(\theta^*)$ , $(ii)$ follows from the inequality $\mathbb{E}[(X - \mathbb{E}X)^2] \leq \mathbb{E}X^2$ , and $(iii)$ follows from Lemma 2. We further consider the last term in eq. (28): + +$$ +\mathbb {E} \left[ \| g _ {m} (\theta^ {*}) \| _ {2} ^ {2} \mid F _ {1, 0} \right] = \left\| \left(\frac {1}{M} \sum_ {i = (m - 1) M} ^ {m M - 1} A _ {i}\right) \theta^ {*} + \left(\frac {1}{M} \sum_ {i = (m - 1) M} ^ {m M - 1} b _ {i}\right) \right\| _ {2} ^ {2}. +$$ + +Then, taking the expectation conditioned on $F_{m,M - 1}$ on both sides of eq. (26) yields + +$$ +\mathbb {E} \left[ \| \theta_ {m, M} - \theta^ {*} \| _ {2} ^ {2} \mid F _ {m, M - 1} \right] +$$ + +$$ +\begin{array}{l} \leq \| \theta_ {m, M - 1} - \theta^ {*} \| _ {2} ^ {2} + 2 \alpha (\theta_ {m, M - 1} - \theta^ {*}) ^ {\top} \mathbb {E} \left[ g _ {x _ {j _ {m}, M}} (\theta_ {m, M - 1}) - g _ {x _ {j _ {m}, M}} (\tilde {\theta} _ {m - 1}) + g _ {m} (\tilde {\theta} _ {m - 1}) \Big | F _ {m, M - 1} \right] \\ + \alpha^ {2} \mathbb {E} \left[ \left\| g _ {x _ {j _ {m}, M}} \left(\theta_ {m, M - 1}\right) - g _ {x _ {j _ {m}, M}} \left(\tilde {\theta} _ {m - 1}\right) + g _ {m} \left(\tilde {\theta} _ {m - 1}\right) \right\| _ {2} ^ {2} \mid F _ {m, M - 1} \right] \\ \end{array} +$$ + +$$ +\begin{array}{l} \stackrel {(i)} {\leq} \| \theta_ {m, M - 1} - \theta^ {*} \| _ {2} ^ {2} + 2 \alpha (\theta_ {m, M - 1} - \theta^ {*}) ^ {\top} \mathbb {E} \left[ g _ {x _ {j _ {m}, M}} (\theta_ {m, M - 1}) \Big | F _ {m, M - 1} \right] + 3 \alpha^ {2} (1 + \gamma) ^ {2} \| \theta_ {m, M - 1} - \theta^ {*} \| _ {2} ^ {2} \\ + 3 \alpha^ {2} (1 + \gamma) ^ {2} \left\| \tilde {\theta} _ {m - 1} - \theta^ {*} \right\| _ {2} ^ {2} + 3 \alpha^ {2} \mathbb {E} \left[ \| g _ {m} (\theta^ {*}) \| _ {2} ^ {2} \mid F _ {1, 0} \right] \\ \end{array} +$$ + +$$
+\begin{array}{l} \stackrel {(i i)} {=} \| \theta_ {m, M - 1} - \theta^ {*} \| _ {2} ^ {2} + 2 \alpha (\theta_ {m, M - 1} - \theta^ {*}) ^ {\top} g (\theta_ {m, M - 1}) + 2 \alpha \mathbb {E} \left[ \xi_ {m} (\theta_ {m, M - 1}) \Big | F _ {m, M - 1} \right] \\ + 3 \alpha^ {2} (1 + \gamma) ^ {2} \| \theta_ {m, M - 1} - \theta^ {*} \| _ {2} ^ {2} + 3 \alpha^ {2} (1 + \gamma) ^ {2} \left\| \tilde {\theta} _ {m - 1} - \theta^ {*} \right\| _ {2} ^ {2} + 3 \alpha^ {2} \mathbb {E} \left[ \| g _ {m} (\theta^ {*}) \| _ {2} ^ {2} \mid F _ {1, 0} \right] \\ \end{array} +$$ + +$$ +\begin{array}{l} \leq \left\| \theta_ {m, M - 1} - \theta^ {*} \right\| _ {2} ^ {2} - \left[ \alpha \lambda_ {A} - 3 \alpha^ {2} (1 + \gamma) ^ {2} \right] \left\| \theta_ {m, M - 1} - \theta^ {*} \right\| _ {2} ^ {2} + 3 \alpha^ {2} (1 + \gamma) ^ {2} \left\| \tilde {\theta} _ {m - 1} - \theta^ {*} \right\| _ {2} ^ {2} \\ + 2 \alpha \mathbb {E} \left[ \xi_ {m} \left(\theta_ {m, M - 1}\right) \mid F _ {m, M - 1} \right] + 3 \alpha^ {2} \mathbb {E} \left[ \| g _ {m} \left(\theta^ {*}\right) \| _ {2} ^ {2} \mid F _ {1, 0} \right], \tag {29} \\ \end{array} +$$ + +where $(i)$ follows by plugging eq. (28) into its preceding step and from the fact that $\mathbb{E}\left[g_{x_{j_m,M}}(\tilde{\theta}_{m-1}) - g_m(\tilde{\theta}_{m-1})\Big|F_{m,M-1}\right] = 0$ . In $(ii)$ we define $\xi_m(\theta) = (\theta - \theta^*)^\top(g_m(\theta) - g(\theta))$ for $\theta \in \mathbb{R}^d$ . Then, by applying eq. 
(29) iteratively, we have + +$$ +\begin{array}{l} \mathbb {E} \left[ \left\| \theta_ {m, M} - \theta^ {*} \right\| _ {2} ^ {2} \Big | F _ {m, 0} \right] \\ \leq \| \theta_ {m, 0} - \theta^ {*} \| _ {2} ^ {2} - [ \alpha \lambda_ {A} - 3 \alpha^ {2} (1 + \gamma) ^ {2} ] \sum_ {i = 0} ^ {M - 1} \mathbb {E} \left[ \| \theta_ {m, i} - \theta^ {*} \| _ {2} ^ {2} | F _ {m, 0} \right] + 3 M \alpha^ {2} (1 + \gamma) ^ {2} \| \tilde {\theta} _ {m - 1} - \theta^ {*} \| _ {2} ^ {2} \\ + 2 \alpha \sum_ {i = 0} ^ {M - 1} \mathbb {E} \left[ \xi_ {m} \left(\theta_ {m, i}\right) \mid F _ {m, 0} \right] + 3 M \alpha^ {2} \mathbb {E} \left[ \| g _ {m} \left(\theta^ {*}\right) \| _ {2} ^ {2} \mid F _ {1, 0} \right]. \tag {30} \\ \end{array} +$$ + +Arranging the terms in eq. (30) yields + +$$ +\left[ \alpha \lambda_ {A} - 3 \alpha^ {2} (1 + \gamma) ^ {2} \right] \sum_ {i = 0} ^ {M - 1} \mathbb {E} \left[ \| \theta_ {m, i} - \theta^ {*} \| _ {2} ^ {2} \mid F _ {m, 0} \right] +$$ + +$$ +\leq \left[ 1 + 3 M \alpha^ {2} (1 + \gamma) ^ {2} \right] \left\| \tilde {\theta} _ {m - 1} - \theta^ {*} \right\| _ {2} ^ {2} + 2 \alpha \sum_ {i = 0} ^ {M - 1} \mathbb {E} \left[ \xi_ {m} \left(\theta_ {m, i}\right) \mid F _ {m, 0} \right] + 3 M \alpha^ {2} \mathbb {E} \left[ \| g _ {m} \left(\theta^ {*}\right) \| _ {2} ^ {2} \mid F _ {1, 0} \right]. \tag {31} +$$ + +Then, substituting eq. (25) into eq.
(31), we obtain + +$$ +\begin{array}{l} [ \alpha \lambda_ {A} - 3 \alpha^ {2} (1 + \gamma) ^ {2} ] \sum_ {i = 0} ^ {M - 1} \mathbb {E} \left[ \| \theta_ {m, i} - \theta^ {*} \| _ {2} ^ {2} \Big | F _ {m, 0} \right] \\ \leq \left[ 1 + 3 M \alpha^ {2} (1 + \gamma) ^ {2} \right] \left\| \tilde {\theta} _ {m - 1} - \theta^ {*} \right\| _ {2} ^ {2} + \frac {\lambda_ {A} \alpha}{2} \sum_ {i = 0} ^ {M - 1} \mathbb {E} \left[ \| \theta_ {m, i} - \theta^ {*} \| _ {2} ^ {2} | F _ {m, 0} \right] \\ + \frac {1 6 [ 1 + (\kappa - 1) \rho ] \alpha}{\lambda_ {A} (1 - \rho)} \left[ R _ {\theta} ^ {2} (1 + \gamma) ^ {2} + r _ {\max } ^ {2} \right] + 3 M \alpha^ {2} \mathbb {E} \left[ \| g _ {m} \left(\theta^ {*}\right) \| _ {2} ^ {2} \mid F _ {1, 0} \right]. \tag {32} \\ \end{array} +$$ + +Subtracting $0.5\lambda_{A}\alpha \sum_{i = 0}^{M - 1}\mathbb{E}[\| \theta_{m,i} - \theta^{*}\|_{2}^{2}|F_{m,0}]$ from both sides of eq. (32) yields + +$$ +\begin{array}{l} [ 0. 5 \alpha \lambda_ {A} - 3 \alpha^ {2} (1 + \gamma) ^ {2} ] \sum_ {i = 0} ^ {M - 1} \mathbb {E} \left[ \| \theta_ {m, i} - \theta^ {*} \| _ {2} ^ {2} \mid F _ {m, 0} \right] \\ \leq \left[ 1 + 3 M \alpha^ {2} (1 + \gamma) ^ {2} \right] \left\| \tilde {\theta} _ {m - 1} - \theta^ {*} \right\| _ {2} ^ {2} + \frac {1 6 [ 1 + (\kappa - 1) \rho ] \alpha}{\lambda_ {A} (1 - \rho)} \left[ R _ {\theta} ^ {2} (1 + \gamma) ^ {2} + r _ {\max} ^ {2} \right] \\ + 3 M \alpha^ {2} \mathbb {E} \left[ \| g _ {m} \left(\theta^ {*}\right) \| _ {2} ^ {2} \mid F _ {1, 0} \right]. \tag {33} \\ \end{array} +$$ + +Then, dividing eq. (33) by $[0.5\alpha \lambda_A - 3\alpha^2 (1 + \gamma)^2 ]M$ on both sides, we obtain + +$$ +\begin{array}{l} \mathbb {E} \left[ \left| \left| \tilde {\theta} _ {m} - \theta^ {*} \right| \right| _ {2} ^ {2} \Big | F _ {m, 0} \right] \\ \leq \frac {1 / M + 3 \alpha^ {2} (1 + \gamma) ^ {2}}{0 .
5 \alpha \lambda_ {A} - 3 \alpha^ {2} (1 + \gamma) ^ {2}} \left\| \tilde {\theta} _ {m - 1} - \theta^ {*} \right\| _ {2} ^ {2} + \frac {1 6 [ 1 + (\kappa - 1) \rho ] [ R _ {\theta} ^ {2} (1 + \gamma) ^ {2} + r _ {\max} ^ {2} ] \alpha}{\lambda_ {A} (1 - \rho) [ 0 . 5 \alpha \lambda_ {A} - 3 \alpha^ {2} (1 + \gamma) ^ {2} ] M} \\ + \frac {3 \alpha}{0 . 5 \lambda_ {A} - 3 \alpha (1 + \gamma) ^ {2}} \mathbb {E} \left[ \| g _ {m} \left(\theta^ {*}\right) \| _ {2} ^ {2} \mid F _ {1, 0} \right]. \tag {34} \\ \end{array} +$$ + +For simplicity, let $C_1 = \frac{1 / M + 3\alpha^2(1 + \gamma)^2}{0.5\alpha\lambda_A - 3\alpha^2(1 + \gamma)^2}$ , $C_2 = \frac{16[1 + (\kappa - 1)\rho][R_\theta^2(1 + \gamma)^2 + r_{\max}^2]}{1 - \rho}$ and $C_3 = \frac{3\alpha}{0.5\lambda_A - 3\alpha(1 + \gamma)^2}$ . Then we rewrite eq. (34): + +$$ +\mathbb {E} \left[ \left\| \tilde {\theta} _ {m} - \theta^ {*} \right\| _ {2} ^ {2} \mid F _ {m, 0} \right] \leq C _ {1} \left\| \tilde {\theta} _ {m - 1} - \theta^ {*} \right\| _ {2} ^ {2} + \frac {C _ {2}}{\left[ 0 . 5 \lambda_ {A} - 3 \alpha (1 + \gamma) ^ {2} \right] \lambda_ {A} M} + C _ {3} \mathbb {E} \left[ \| g _ {m} (\theta^ {*}) \| _ {2} ^ {2} \mid F _ {1, 0} \right]. \tag {35} +$$ + +# Step 2: Iteration over $m$ epochs + +Taking the expectation of eq. (35) conditioned on $F_{m - 1,0}$ and upper-bounding $\mathbb{E}\left[\left\| \tilde{\theta}_{m - 1} - \theta^{*}\right\|_{2}^{2}\right]$ by following steps similar to those above, we obtain + +$$ +\mathbb {E} \left[ \left\| \tilde {\theta} _ {m} - \theta^ {*} \right\| _ {2} ^ {2} \Big | F _ {m - 1, 0} \right] +$$ + +$$ +\begin{array}{l} \leq C _ {1} \left\| \tilde {\theta} _ {m - 1} - \theta^ {*} \right\| _ {2} ^ {2} + \frac {C _ {2}}{[ 0 .
5 \lambda_ {A} - 3 \alpha (1 + \gamma) ^ {2} ] \lambda_ {A} M} + C _ {3} \mathbb {E} \left[ \| g _ {m} (\theta^ {*}) \| _ {2} ^ {2} \Big | F _ {1, 0} \right] \\ \leq C _ {1} ^ {2} \left\| \tilde {\theta} _ {m - 2} - \theta^ {*} \right\| _ {2} ^ {2} + \frac {C _ {2}}{\left[ 0 . 5 \lambda_ {A} - 3 \alpha (1 + \gamma) ^ {2} \right] \lambda_ {A} M} \sum_ {k = 0} ^ {1} C _ {1} ^ {k} + C _ {3} \sum_ {k = 0} ^ {1} C _ {1} ^ {k} \mathbb {E} \left[ \| g _ {m - k} (\theta^ {*}) \| _ {2} ^ {2} \Big | F _ {1, 0} \right]. \\ \end{array} +$$ + +By following the above steps $(m - 1)$ times, we have + +$$ +\mathbb {E} \left[ \left\| \tilde {\theta} _ {m} - \theta^ {*} \right\| _ {2} ^ {2} \Big | F _ {1, 0} \right] +$$ + +$$ +\leq C _ {1} ^ {m} \left\| \tilde {\theta} _ {0} - \theta^ {*} \right\| _ {2} ^ {2} + \frac {C _ {2}}{\left[ 0 . 5 \lambda_ {A} - 3 \alpha (1 + \gamma) ^ {2} \right] \lambda_ {A} M} \sum_ {k = 0} ^ {m - 1} C _ {1} ^ {k} + C _ {3} \sum_ {k = 0} ^ {m - 1} C _ {1} ^ {k} \mathbb {E} \left[ \| g _ {m - k} (\theta^ {*}) \| _ {2} ^ {2} \mid F _ {1, 0} \right]. \tag {36} +$$ + +Then taking the expectation with respect to $\sigma(S)$ (which contains the randomness of the entire sample trajectory) on both sides of eq. (36) yields + +$$ +\begin{array}{l} \mathbb {E} \left[ \left\| \tilde {\theta} _ {m} - \theta^ {*} \right\| _ {2} ^ {2} \right] \\ \leq C _ {1} ^ {m} \left\| \tilde {\theta} _ {0} - \theta^ {*} \right\| _ {2} ^ {2} + \frac {C _ {2}}{[ 0 . 5 \lambda_ {A} - 3 \alpha (1 + \gamma) ^ {2} ] \lambda_ {A} M} \sum_ {k = 0} ^ {m - 1} C _ {1} ^ {k} + C _ {3} \sum_ {k = 0} ^ {m - 1} C _ {1} ^ {k} \mathbb {E} \left[ \| g _ {m - k} (\theta^ {*}) \| _ {2} ^ {2} \right], \tag {37} \\ \end{array} +$$ + +where the second term on the right-hand side of eq. (37) corresponds to the bias error and the third term corresponds to the variance error.
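The unrolled recursion behind eqs. (36)-(37) can be sanity-checked numerically: a scalar sequence satisfying $e_m \leq C_1 e_{m-1} + c$ with $0 < C_1 < 1$ must stay below the geometric bound $C_1^m e_0 + c/(1 - C_1)$. The sketch below uses made-up constants for $C_1$, $c$, and $e_0$ (they are illustrative placeholders, not the paper's quantities):

```python
# Sanity check for unrolling e_m <= C1 * e_{m-1} + c over m epochs:
# e_m <= C1**m * e_0 + c * sum_{k=0}^{m-1} C1**k <= C1**m * e_0 + c / (1 - C1),
# mirroring how eq. (35) is iterated into eq. (36) and then bounded in eq. (37).
# All constants below are illustrative, not derived from the analysis.
C1, c, e0, m = 0.8, 0.05, 10.0, 50

e = e0
for _ in range(m):
    e = C1 * e + c  # one epoch of the contraction recursion

geometric_bound = C1**m * e0 + c / (1 - C1)
print(e <= geometric_bound)
```

The loop reproduces the worst-case recursion exactly, so the iterate sits below the geometric-series bound, which is the step that turns the per-epoch contraction into the final bias-plus-variance decomposition.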
+ +# Step 3: Bounding the variance error + +For any $0 \leq k \leq m - 1$ , we have + +$$ +\begin{array}{l} \left\| g _ {m - k} \left(\theta^ {*}\right) \right\| _ {2} ^ {2} = \left\| \frac {1}{M} \sum_ {i = (m - k - 1) M} ^ {(m - k) M - 1} g _ {x _ {i}} \left(\theta^ {*}\right) \right\| _ {2} ^ {2} \\ = \frac {1}{M ^ {2}} \Big (\sum_ {i = (m - k - 1) M} ^ {(m - k) M - 1} g _ {x _ {i}} ^ {\top} \left(\theta^ {*}\right) \Big) \Big (\sum_ {j = (m - k - 1) M} ^ {(m - k) M - 1} g _ {x _ {j}} \left(\theta^ {*}\right) \Big) \\ = \frac {1}{M ^ {2}} \sum_ {i = (m - k - 1) M} ^ {(m - k) M - 1} \sum_ {j = (m - k - 1) M} ^ {(m - k) M - 1} g _ {x _ {i}} ^ {\top} (\theta^ {*}) g _ {x _ {j}} (\theta^ {*}) \\ = \frac {1}{M ^ {2}} \sum_ {i = j} \| g _ {x _ {i}} (\theta^ {*}) \| _ {2} ^ {2} + \frac {1}{M ^ {2}} \sum_ {i \neq j} g _ {x _ {i}} ^ {\top} (\theta^ {*}) g _ {x _ {j}} (\theta^ {*}) \\ \stackrel {(i)} {\leq} \frac {G ^ {2}}{M} + \frac {1}{M ^ {2}} \sum_ {i \neq j} g _ {x _ {i}} ^ {\top} \left(\theta^ {*}\right) g _ {x _ {j}} \left(\theta^ {*}\right), \tag {38} \\ \end{array} +$$ + +where $(i)$ follows from Lemma 3. Consider the expectation of the second term in eq. (38), which is given by + +$$ +\frac {1}{M ^ {2}} \sum_ {i \neq j} \mathbb {E} \left[ g _ {x _ {i}} ^ {\top} \left(\theta^ {*}\right) g _ {x _ {j}} \left(\theta^ {*}\right) \right]. 
\tag {39} +$$ + +Without loss of generality, we consider the case when $j > i$ as follows: + +$$ +\begin{array}{l} \mathbb {E} \left[ g _ {x _ {i}} ^ {\top} \left(\theta^ {*}\right) g _ {x _ {j}} \left(\theta^ {*}\right) \right] = \mathbb {E} \left[ \mathbb {E} \left[ g _ {x _ {j}} \left(\theta^ {*}\right) \mid P _ {i} \right] ^ {\top} g _ {x _ {i}} \left(\theta^ {*}\right) \right] \\ \leq \mathbb {E} [ \| \mathbb {E} [ g _ {x _ {j}} (\theta^ {*}) | P _ {i} ] \| _ {2} \| g _ {x _ {i}} (\theta^ {*}) \| _ {2} ] \\ \leq G \mathbb {E} [ \| \mathbb {E} [ g _ {x _ {j}} (\theta^ {*}) | P _ {i} ] \| _ {2} ] \\ = G \mathbb {E} [ \| \mathbb {E} [ (A _ {j} \theta^ {*} + b _ {j}) | P _ {i} ] \| _ {2} ] \\ \leq G \mathbb {E} [ \| \mathbb {E} [ A _ {j} | P _ {i} ] \theta^ {*} + \mathbb {E} [ b _ {j} | P _ {i} ] \| _ {2} ] \\ = G \mathbb {E} [ \| (\mathbb {E} [ A _ {j} | P _ {i} ] - A) \theta^ {*} + (\mathbb {E} [ b _ {j} | P _ {i} ] - b) \| _ {2} ] \\ \leq G \mathbb {E} [ \| (\mathbb {E} [ A _ {j} | P _ {i} ] - A) \theta^ {*} \| _ {2} + \| \mathbb {E} [ b _ {j} | P _ {i} ] - b \| _ {2} ] \\ \leq G \mathbb {E} [ \| \mathbb {E} [ A _ {j} | P _ {i} ] - A \| _ {2} \| \theta^ {*} \| _ {2} + \| \mathbb {E} [ b _ {j} | P _ {i} ] - b \| _ {2} ] \\ \leq \kappa G [ (1 + \gamma) R _ {\theta} + r _ {\max } ] \rho^ {j - i}. \tag {40} \\ \end{array} +$$ + +Substituting eq. (40) into eq. (39), we obtain + +$$ +\frac {1}{M ^ {2}} \sum_ {i \neq j} \mathbb {E} \left[ g _ {x _ {i}} ^ {\top} \left(\theta^ {*}\right) g _ {x _ {j}} \left(\theta^ {*}\right) \right] \leq \frac {\kappa G \left[ (1 + \gamma) R _ {\theta} + r _ {\max } \right]}{M ^ {2}} \sum_ {i \neq j} \rho^ {| i - j |} +$$ + +$$ +\begin{array}{l} \leq \frac {\kappa G [ (1 + \gamma) R _ {\theta} + r _ {\max} ]}{M ^ {2}} (2 M \sum_ {k = 1} ^ {\lceil \frac {M}{2} \rceil} \rho^ {k}) \\ \leq \frac {2 \rho \kappa G [ (1 + \gamma) R _ {\theta} + r _ {\max} ]}{(1 - \rho) M}. \tag {41} \\ \end{array} +$$ + +Then substituting eq. (41) into eq. 
(38) yields + +$$ +\mathbb {E} \left[ \| g _ {m - k} \left(\theta^ {*}\right) \| _ {2} ^ {2} \right] \leq \frac {1}{M} \left(G ^ {2} + \frac {2 \rho \kappa G [ (1 + \gamma) R _ {\theta} + r _ {\max } ]}{(1 - \rho)}\right) \leq \frac {C _ {4}}{M}, \tag {42} +$$ + +where $C_4 = G^2 + \frac{2\rho\kappa G[(1 + \gamma)R_\theta + r_{\max}]}{(1 - \rho)}$ . Then, substituting eq. (42) into the accumulated residual variance term in eq. (37), we have + +$$ +C _ {3} \sum_ {k = 0} ^ {m - 1} C _ {1} ^ {k} \mathbb {E} \left[ \| g _ {m - k} \left(\theta^ {*}\right) \| _ {2} ^ {2} \right] \leq \frac {C _ {3} C _ {4}}{M} \sum_ {k = 0} ^ {m - 1} C _ {1} ^ {k} \leq \frac {C _ {3} C _ {4}}{(1 - C _ {1}) M}. \tag {43} +$$ + +# Step 4: Combining all error terms + +Finally, substituting eq. (43) and the values of $C_2$ and $C_3$ into eq. (37), we have + +$$ +\mathbb {E} \left[ \left\| \tilde {\theta} _ {m} - \theta^ {*} \right\| _ {2} ^ {2} \right] \leq C _ {1} ^ {m} \left\| \tilde {\theta} _ {0} - \theta^ {*} \right\| _ {2} ^ {2} + \frac {3 C _ {4} \alpha + C _ {2} / \lambda_ {A}}{(1 - C _ {1}) [ 0 . 5 \lambda_ {A} - 3 \alpha (1 + \gamma) ^ {2} ] M}, +$$ + +which yields the desired result. + +# F SAMPLE COMPLEXITY OF TD + +The finite-time convergence rate of vanilla TD under i.i.d. and Markovian sampling has been characterized in Bhandari et al. (2018); Srikant and Ying (2019). However, these studies did not provide the overall computational complexity, i.e., the total number of pseudo-gradient computations to achieve an $\epsilon$ -accurate solution. This section provides such an analysis based on their convergence results for completeness. + +# F.1 TD WITH I.I.D. SAMPLES + +Consider the vanilla TD update in Bhandari et al. (2018). Following steps similar to those in Bhandari et al.
(2018) for proving Theorem 2, and letting the constant stepsize $\alpha \leq \min \left\{\frac{\lambda_A}{4(1 + \gamma)^2}, \frac{2}{\lambda_A}\right\}$ , we have + +$$ +\begin{array}{l} \mathbb {E} \left\| \theta_ {t} - \theta^ {*} \right\| _ {2} ^ {2} \leq \left(1 - \frac {1}{2} \lambda_ {A} \alpha\right) ^ {t} \left\| \theta_ {0} - \theta^ {*} \right\| _ {2} ^ {2} + \frac {2 C _ {5} \alpha}{\lambda_ {A}} \\ \leq e ^ {- \frac {1}{2} \lambda_ {A} \alpha t} \| \theta_ {0} - \theta^ {*} \| _ {2} ^ {2} + \frac {2 C _ {5} \alpha}{\lambda_ {A}}, \\ \end{array} +$$ + +where $0 < C_5 < \infty$ is a constant. Let $\alpha = \min \left\{\frac{\lambda_A}{4(1 + \gamma)^2}, \frac{2}{\lambda_A}, \frac{\epsilon \lambda_A}{4C_5}\right\}$ . It can be checked easily that with the total number of iterations at most + +$$ +\begin{array}{l} t = \lceil \frac {2}{\lambda_ {A} \alpha} \log (\frac {2 \| \theta_ {0} - \theta^ {*} \| _ {2} ^ {2}}{\epsilon}) \rceil = \lceil 2 \max \{\frac {4 (1 + \gamma) ^ {2}}{\lambda_ {A} ^ {2}}, 1, \frac {4 C _ {5}}{\epsilon \lambda_ {A} ^ {2}} \} \log (\frac {2 \| \theta_ {0} - \theta^ {*} \| _ {2} ^ {2}}{\epsilon}) \rceil \\ = \mathcal {O} \left(\left(\frac {1}{\epsilon \lambda_ {A} ^ {2}}\right) \log \left(\frac {1}{\epsilon}\right)\right), \tag {44} \\ \end{array} +$$ + +an $\epsilon$ -accurate solution can be attained, i.e., $\mathbb{E}\left\| \theta_t - \theta^*\right\|_2^2 \leq \epsilon$ . Since each iteration requires one pseudo-gradient computation, the total number of pseudo-gradient computations is also given by eq. (44). + +# F.2 TD WITH MARKOVIAN SAMPLES + +Consider the vanilla TD update in Bhandari et al. (2018). Following the steps similar to those in Bhandari et al. 
(2018) for proving Theorem 3, and letting the constant stepsize $\alpha \leq \frac{1}{\lambda_A}$ , we have + +$$ +\begin{array}{l} \mathbb {E} \left\| \theta_ {t} - \theta^ {*} \right\| _ {2} ^ {2} \leq \left(1 - \lambda_ {A} \alpha\right) ^ {t} \left\| \theta_ {0} - \theta^ {*} \right\| _ {2} ^ {2} + \frac {C _ {6} \alpha}{\lambda_ {A}} + \frac {C _ {7} \alpha \log \left(\frac {1}{\alpha}\right)}{\lambda_ {A}} \\ \leq e ^ {- \lambda_ {A} \alpha t} \| \theta_ {0} - \theta^ {*} \| _ {2} ^ {2} + \frac {C _ {6} \alpha}{\lambda_ {A}} + \frac {C _ {7} \alpha \log (\frac {1}{\alpha})}{\lambda_ {A}} \\ \end{array} +$$ + +where $0 < C_6 < \infty$ and $0 < C_7 < \infty$ are constants. Now let $\alpha = \min \left\{\frac{C_8\epsilon}{\log(1 / C_8\epsilon)},\frac{1}{\lambda_A}\right\}$ where $C_8 = \lambda_A\min \left\{\frac{1}{C_6},\frac{1}{6C_7}\right\}$ , it can be checked easily that with the total number of iterations at most + +$$ +\begin{array}{l} t = \lceil \frac {2}{\lambda_ {A} \alpha} \log (\frac {3 \| \theta_ {0} - \theta^ {*} \| _ {2} ^ {2}}{\epsilon}) \rceil \\ = \lceil \max \{\frac {2}{\min \{\frac {1}{C _ {6}} , \frac {1}{6 C _ {7}} \} \lambda_ {A} ^ {2} \epsilon}, 2 \} \log \left(\frac {1}{C _ {8} \epsilon}\right) \log (\frac {3 \| \theta_ {0} - \theta^ {*} \| _ {2} ^ {2}}{\epsilon}) \rceil \\ = \mathcal {O} \left(\left(\frac {1}{\epsilon \lambda_ {A} ^ {2}}\right) \log^ {2} \left(\frac {1}{\epsilon}\right)\right), \tag {45} \\ \end{array} +$$ + +an $\epsilon$ -accurate solution can be attained, i.e., $\mathbb{E}\left\| \theta_t - \theta^*\right\|_2^2 \leq \epsilon$ . Since each iteration requires one pseudo-gradient computation, the total number of pseudo-gradient computations is also given by eq. (45). 
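The stepsize and iteration-count choices behind eqs. (44)-(45) follow the same pattern, and the i.i.d. case of Section F.1 can be checked numerically: the stepsize caps the residual term $2C_5\alpha/\lambda_A$ at $\epsilon/2$, and $t$ iterations drive the transient term $e^{-\lambda_A \alpha t / 2}\|\theta_0 - \theta^*\|_2^2$ below $\epsilon/2$ as well. The constants below ($\lambda_A$, $\gamma$, $C_5$, $\epsilon$, and the initial error) are illustrative placeholders, not values from the analysis:

```python
import math

# Illustrative check of the i.i.d.-sampling complexity bound of eq. (44).
# lam = lambda_A, d0 = ||theta_0 - theta*||_2^2; all values are made up.
lam, gamma, C5, eps, d0 = 0.5, 0.9, 1.0, 1e-2, 4.0

# Stepsize from the text: alpha = min{lam/(4(1+gamma)^2), 2/lam, eps*lam/(4*C5)},
# which makes the residual term 2*C5*alpha/lam at most eps/2.
alpha = min(lam / (4 * (1 + gamma) ** 2), 2 / lam, eps * lam / (4 * C5))

# Iteration count from eq. (44): enough steps to shrink the exponential transient.
t = math.ceil((2 / (lam * alpha)) * math.log(2 * d0 / eps))

err_bound = math.exp(-lam * alpha * t / 2) * d0 + 2 * C5 * alpha / lam
print(t, err_bound <= eps)
```

Since each iteration costs one pseudo-gradient computation, `t` is also the total computational cost for this accuracy level, matching the $\mathcal{O}((1/(\epsilon\lambda_A^2))\log(1/\epsilon))$ scaling stated in eq. (44).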
\ No newline at end of file diff --git a/reanalysisofvariancereducedtemporaldifferencelearning/images.zip b/reanalysisofvariancereducedtemporaldifferencelearning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..29698d8258b94c951a0c24bb2de84ad89f2dc3dc --- /dev/null +++ b/reanalysisofvariancereducedtemporaldifferencelearning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:53366ce67b5bdea81eb8788ff79d750229e7efdd73efcd2cbab0dfdf29d018c7 +size 1689801 diff --git a/reanalysisofvariancereducedtemporaldifferencelearning/layout.json b/reanalysisofvariancereducedtemporaldifferencelearning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..0c271df5ea64924ddd543d2208f52332cb6444f6 --- /dev/null +++ b/reanalysisofvariancereducedtemporaldifferencelearning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4b14c595f5bf8367bcf11c69b3bc5ebd5495ed3bd678ac9d708f2cea5545b65d +size 971584 diff --git a/reclorareadingcomprehensiondatasetrequiringlogicalreasoning/e0bea734-685f-4736-9748-9ef3687b3bf0_content_list.json b/reclorareadingcomprehensiondatasetrequiringlogicalreasoning/e0bea734-685f-4736-9748-9ef3687b3bf0_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..e5bbc58259ecff51339ce20fba9492bf3e3cc88c --- /dev/null +++ b/reclorareadingcomprehensiondatasetrequiringlogicalreasoning/e0bea734-685f-4736-9748-9ef3687b3bf0_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7ec22f20d7c47624954b5ee496e0ba94fca283c491569212fc31ce9693342b78 +size 126150 diff --git a/reclorareadingcomprehensiondatasetrequiringlogicalreasoning/e0bea734-685f-4736-9748-9ef3687b3bf0_model.json b/reclorareadingcomprehensiondatasetrequiringlogicalreasoning/e0bea734-685f-4736-9748-9ef3687b3bf0_model.json new file mode 100644 index 0000000000000000000000000000000000000000..7986fa81b0c0e373218ece205855a35c9d575fea --- 
/dev/null +++ b/reclorareadingcomprehensiondatasetrequiringlogicalreasoning/e0bea734-685f-4736-9748-9ef3687b3bf0_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:412236c4b5a47a1144a10ebb8a569d2d5a9479fb3edf93d923c30ac6906f342a +size 149900 diff --git a/reclorareadingcomprehensiondatasetrequiringlogicalreasoning/e0bea734-685f-4736-9748-9ef3687b3bf0_origin.pdf b/reclorareadingcomprehensiondatasetrequiringlogicalreasoning/e0bea734-685f-4736-9748-9ef3687b3bf0_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6448253f0722723efdfe8c0e9621a4a06547f02b --- /dev/null +++ b/reclorareadingcomprehensiondatasetrequiringlogicalreasoning/e0bea734-685f-4736-9748-9ef3687b3bf0_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6a3b48b485e6b21fc2fd576c6aeec88bdbbcdbaa24269894ccababc4b0db833b +size 596218 diff --git a/reclorareadingcomprehensiondatasetrequiringlogicalreasoning/full.md b/reclorareadingcomprehensiondatasetrequiringlogicalreasoning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..d7007ee2ec6ce7573240369217d680cf2f802274 --- /dev/null +++ b/reclorareadingcomprehensiondatasetrequiringlogicalreasoning/full.md @@ -0,0 +1,550 @@ +# RECLOR: A READING COMPREHENSION DATASET REQUIRING LOGICAL REASONING + +Weihao Yu*, Zihang Jiang*, Yanfei Dong & Jiashi Feng + +National University of Singapore + +weihaoyu6@gmail.com, {jzihang, dyanfei}@u.nus.edu, + +elefjia@nus.edu.sg + +# ABSTRACT + +Recent powerful pre-trained language models have achieved remarkable performance on most of the popular datasets for reading comprehension. It is time to introduce more challenging datasets to push the development of this field towards more comprehensive reasoning of text. In this paper, we introduce a new Reading Comprehension dataset requiring logical reasoning (ReClor) extracted from standardized graduate admission examinations. 
As earlier studies suggest, human-annotated datasets usually contain biases, which are often exploited by models to achieve high accuracy without truly understanding the text. In order to comprehensively evaluate the logical reasoning ability of models on ReClor, we propose to identify the biased data points and separate them into an EASY set, with the rest forming the HARD set. Empirical results show that state-of-the-art models have an outstanding ability to capture the biases contained in the dataset, achieving high accuracy on the EASY set. However, they struggle on the HARD set, with performance close to that of random guessing, indicating that more research is needed to substantially enhance the logical reasoning ability of current models.1

# 1 INTRODUCTION

Machine reading comprehension (MRC) is a fundamental task in Natural Language Processing, which requires models to understand a body of text and answer a particular question related to the context. With the success of unsupervised representation learning in NLP, language pre-training based models such as GPT-2 (Radford et al., 2019), BERT (Devlin et al., 2019), XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019) have achieved nearly saturated performance on most of the popular MRC datasets (Rajpurkar et al., 2016; Lai et al., 2017; Rajpurkar et al., 2018; Wang et al., 2018). It is time to challenge state-of-the-art models with more difficult reading comprehension tasks and move a step forward to more comprehensive analysis and reasoning over text (Dua et al., 2019).

In natural language understanding, logical reasoning is an important ability to examine, analyze and critically evaluate arguments as they occur in ordinary language, according to the definition from the Law School Admission Council (2019a). It is a significant component of human intelligence and is essential in negotiation, debate, writing, etc.
However, existing reading comprehension datasets contain little or no data requiring logical reasoning, e.g., $0\%$ in the MCTest dataset (Richardson et al., 2013) and $1.2\%$ in SQuAD (Rajpurkar et al., 2016), according to Sugawara & Aizawa (2016). One related task is natural language inference, which requires models to label the logical relationships of sentence pairs. However, this task only considers three types of simple logical relationships and only requires sentence-level reasoning. To push the development of models in logical reasoning from simple logical relationship classification to multiple complicated types of logical reasoning, and from sentence-level to passage-level, it is necessary to introduce a reading comprehension dataset targeting logical reasoning.

A typical example of a logical reasoning question is shown in Table 1. Similar to the format of multiple-choice reading comprehension datasets (Richardson et al., 2013; Lai et al., 2017), it contains a context, a question and four options with only one right answer. To answer the question in this example, readers need to identify the logical connections between the lines to pinpoint the conflict, then understand each of the options and select the option that resolves the conflict. Human minds need extensive training and practice to get used to complex reasoning, and it would take immense effort for crowdsourcing workers to design such logical reasoning questions. Inspired by the datasets extracted from standardized examinations (Lai et al., 2017; Clark et al., 2018), we build a dataset by selecting such logical reasoning questions from standardized exams such as GMAT$^{2}$ and LSAT$^{3}$. We finally collect 6,138 logical reasoning questions, which constitute a Reading Comprehension dataset requiring logical reasoning (ReClor).
Human-annotated datasets usually contain biases (Schwartz et al., 2017; Cai et al., 2017; Bugert et al., 2017; Poliak et al., 2018; Gururangan et al., 2018; Zellers et al., 2019), which are often exploited by neural network models as shortcut solutions to achieve high testing accuracy. For data points whose options can be selected correctly without knowing the contexts and questions, we classify them as biased ones. In order to fully assess the logical reasoning ability of the models, we propose to identify the biased data points and group them into an EASY set, and put the rest into a HARD set. Based on our experiments on these separate sets, we find that even the state-of-the-art models can only perform well on the EASY set and struggle on the HARD set, as shown in Figure 1. This phenomenon shows that current models can capture the biases in the dataset well but lack the ability to understand the text and reason based on connections between the lines. On the other hand, human beings perform similarly on both the EASY and HARD sets. It is thus observed that there is still a long way to go to equip models with true logical reasoning ability.

The contributions of our paper are two-fold. First, we introduce ReClor, a new reading comprehension dataset requiring logical reasoning. We use option-only-input baselines trained with different random seeds to identify the data points with biases in the testing set, and group them as the EASY set, with the rest as the HARD set, to facilitate comprehensive evaluation. Second, we evaluate several state-of-the-art models on ReClor and find that these pre-trained language models can perform well on the EASY set but struggle on the HARD set. This indicates that although current models are good at exploiting biases in the dataset, they are still far from capable of performing real logical reasoning.
![](images/5b009db65ecae0631f1078982d0b8e3e0c2fa78d57fa54818b3bc351919198be.jpg)
Figure 1: Performance comparison of state-of-the-art models and humans (graduate students) on the EASY and HARD sets of the ReClor testing set.

# 2 RELATED WORK

Reading Comprehension Datasets. A variety of reading comprehension datasets have been introduced to promote the development of this field. MCTest (Richardson et al., 2013) is a dataset with 2,000 multiple-choice reading comprehension questions about fictional stories, in a format similar to ReClor. Rajpurkar et al. (2016) proposed the SQuAD dataset, which contains 107,785 question-answer pairs on 536 Wikipedia articles. The authors manually labeled 192 examples of the dataset and found that the examples mainly require reasoning over lexical or syntactic variation. In an analysis of the above-mentioned datasets, Sugawara & Aizawa (2016) found no questions requiring logical reasoning in the MCTest dataset (Richardson et al., 2013) and only $1.2\%$ in the SQuAD dataset (Rajpurkar et al., 2016). Lai et al. (2017) introduced the RACE dataset by collecting English exams for middle and high school Chinese students aged 12 to 18. They hired crowd workers on Amazon Mechanical Turk to label the reasoning type of 500 samples in the dataset and showed that around $70\%$ of the samples fall into the categories of word matching, paraphrasing or single-sentence reasoning. To encourage progress on deeper comprehension of language, more reading comprehension datasets requiring more complicated reasoning types have been introduced, such as iterative reasoning about the narrative of a story (Kočisky et al., 2018), multi-hop reasoning across multiple sentences (Khashabi et al., 2018) and multiple documents (Welbl et al., 2018), commonsense knowledge reasoning (Mihaylov et al., 2018; Zhang et al., 2018; Huang et al., 2019) and numerical discrete reasoning over paragraphs (Dua et al., 2019). However, to the best of our knowledge, although there are some datasets targeting logical reasoning in other NLP tasks, mentioned in the next section, there is no dataset targeting the evaluation of logical reasoning in the reading comprehension task. This work introduces a new dataset to fill this gap.

# Context:

In jurisdictions where use of headlights is optional when visibility is good, drivers who use headlights at all times are less likely to be involved in a collision than are drivers who use headlights only when visibility is poor. Yet Highway Safety Department records show that making use of headlights mandatory at all times does nothing to reduce the overall number of collisions.

Question: Which one of the following, if true, most helps to resolve the apparent discrepancy in the information above?

# Options:

A. In jurisdictions where use of headlights is optional when visibility is good, one driver in four uses headlights for daytime driving in good weather.
B. Only very careful drivers use headlights when their use is not legally required.
C. The jurisdictions where use of headlights is mandatory at all times are those where daytime visibility is frequently poor.
D. A law making use of headlights mandatory at all times is not especially difficult to enforce.

# Answer: B

Table 1: An example in the ReClor dataset, which is modified from the Law School Admission Council (2019b).

Logical Reasoning in NLP. There are several tasks and datasets introduced to investigate logical reasoning in NLP. The task of natural language inference, also known as recognizing textual entailment (Fyodorov et al., 2000; Condoravdi et al., 2003; Bos & Markert, 2005; Dagan et al., 2005; MacCartney & Manning, 2009), requires models to take a pair of sentences as input and classify their relationship type, i.e., ENTAILMENT, NEUTRAL, or CONTRADICTION.
SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018) are datasets proposed for this task. However, this task only focuses on sentence-level logical relationship reasoning, and the relationships are limited to only a few types. Another task related to logical reasoning in NLP is the argument reasoning comprehension task introduced by Habernal et al. (2018), together with a corresponding dataset. Given an argument with a claim and a premise, this task aims to select the correct implicit warrant from two options. Although the task involves passage-level logical reasoning, it is limited to only one logical reasoning type, i.e., identifying warrants. ReClor and the proposed task integrate various logical reasoning types into reading comprehension, with the aim of promoting the development of models in logical reasoning not only from sentence-level to passage-level, but also from simple logical reasoning types to complicated, diverse ones.

Datasets from Examinations. There have been several NLP datasets extracted from human standardized examinations, such as the RACE dataset (Lai et al., 2017) mentioned above. Besides, NTCIR QA Lab (Shibuki et al., 2014) offers comparative evaluation for solving real-world university entrance exam questions; the dataset of the CLEF QA Entrance Exams Task (Rodrigo et al., 2015) is extracted from standardized English examinations for university admission in Japan; the ARC dataset (Clark et al., 2018) consists of 7,787 science questions targeting student grade level, from 3rd to 9th grade; the dialogue-based multiple-choice reading comprehension dataset DREAM (Sun et al., 2019) contains 10,197 questions for 6,444 multi-turn multi-party dialogues from English language exams that are designed by human experts to assess the comprehension level of Chinese learners of English. Compared with these datasets, ReClor distinguishes itself by targeting logical reasoning.
+ +# 3 RECLOR DATA COLLECTION AND ANALYSIS + +# 3.1 DATA COLLECTION + +The format of data in ReClor is similar to other multiple-choice reading comprehension datasets (Richardson et al., 2013; Lai et al., 2017), where a data point contains a context, a question and four + +
answer options, among which only one option is right/most suitable. We collect reading comprehension problems that require complicated logical reasoning. However, producing such data requires the ability to perform complex logical reasoning, which makes it hard for crowdsourcing workers to generate such logical questions. Fortunately, we find that the reading comprehension problems in some standardized tests, such as GMAT and LSAT, are highly in line with our expectations.

We construct a dataset containing 6,138 logical reasoning questions sourced from open websites and books. In the original problems, there are five answer options, of which only one is right. To comply with fair use of law$^{4}$, we shuffle the order of answer options and randomly delete one of the wrong options for each data point, which results in four options with one right option and three wrong options. Furthermore, similar to the ImageNet dataset$^{5}$, ReClor is available for non-commercial research purposes only. We are also hosting a public evaluation server on EvalAI (Yadav et al., 2019) to benchmark progress on ReClor.

| | ReClor | DREAM | MCTest | ARC | RACE |
| --- | --- | --- | --- | --- | --- |
| Construction method | exams | exams | crowd-sourcing | exams | exams |
| Context type | written text | dialogues | child's stories | - | written text |
| # of options | 4 | 3 | 4 | 4 | 4 |
| # of contexts | 6,138 | 6,444 | 660 | - | 27,933 |
| # of questions | 6,138 | 10,197 | 2,640 | 7,787 | 97,687 |
| Vocab size | 26,576 | 13,037 | 8,000 | 6,329 | 136,629 |
| Context len. | 73.6 | 85.9 | 210.1 | - | 321.9 |
| Question len. | 17.0 | 8.6 | 7.8 | 20.5 | 10.0 |
| Option len. | 20.6 | 5.3 | 3.4 | 4.2 | 5.3 |

Table 2: Statistics of several multiple-choice MRC datasets.

# 3.2 DATA ANALYSIS

As mentioned above, we collect 6,138 data points, of which $91.22\%$ are from actual exams of GMAT and LSAT, while the others are from high-quality practice exams. They are divided into training, validation and testing sets with 4,638, 500 and 1,000 data points respectively. The overall statistics of ReClor and a comparison with other similar multiple-choice MRC datasets are summarized in Table 2. As shown, ReClor is of comparable size and has a relatively large vocabulary. Compared with RACE, the contexts of ReClor are much shorter. In RACE, there are many sentences in a context that are redundant for answering a question.
However, in ReClor, every sentence in the context passages is important, which makes this dataset focus on evaluating the logical reasoning ability of models rather than the ability to extract relevant information from a long context. The length of the answer options of ReClor is the largest among these datasets. We analyze and manually annotate the types of questions in the testing set and group them into 17 categories, whose percentages and descriptions are shown in Table 3. The percentages of the different types of questions reflect those in the logical reasoning modules of GMAT and LSAT. Some examples of different types of logical reasoning are listed in Figure 2, and more examples are listed in Appendix C. Taking two examples, we further show in Table 4 how humans would solve such questions, illustrating the challenge of ReClor.

| Type | Description |
| --- | --- |
| Necessary Assumptions (11.4%) | identify the claim that must be true or is required in order for the argument to work |
| Sufficient Assumptions (3.0%) | identify a sufficient assumption, that is, an assumption that, if added to the argument, would make it logically valid |
| Strengthen (9.4%) | identify information that would strengthen an argument |
| Weaken (11.3%) | identify information that would weaken an argument |
| Evaluation (1.3%) | identify information that would be useful to know to evaluate an argument |
| Implication (4.6%) | identify something that follows logically from a set of premises |
| Conclusion/Main Point (3.6%) | identify the conclusion/main point of a line of reasoning |
| Most Strongly Supported (5.6%) | find the choice that is most strongly supported by a stimulus |
| Explain or Resolve (8.4%) | identify information that would explain or resolve a situation |
| Principle (6.5%) | identify the principle, or find a situation that conforms to a principle, or match the principles |
| Dispute (3.0%) | identify or infer an issue in dispute |
| Technique (3.6%) | identify the technique used in the reasoning of an argument |
| Role (3.2%) | describe the individual role that a statement is playing in a larger argument |
| Identify a Flaw (11.7%) | identify a flaw in an argument's reasoning |
| Match Flaws (3.1%) | find a choice containing an argument that exhibits the same flaws as the passage's argument |
| Match the Structure (3.0%) | match the structure of an argument in a choice to the structure of the argument in the passage |
| Others (7.3%) | other types of questions which are not included by the above |

Table 3: The percentage and description of each logical reasoning type. The descriptions are adapted from those specified by Khan Academy (2019).

# 3.3 DATA BIASES IN THE DATASET

The dataset is collected from exams devised by experts in logical reasoning, which means it is annotated by humans and may contain biases. Recent studies have shown that models can exploit the biases in a natural language understanding dataset to perform well on the task without truly understanding the text (Schwartz et al., 2017; Cai et al., 2017; Bugert et al., 2017; Poliak et al., 2018; Gururangan et al., 2018; Zellers et al., 2019). It is necessary to analyze such data biases to help evaluate models. In the ReClor dataset, the common context and question are shared across the four options of each data point, so we focus on analyzing the differences in lexical choice and sentence length between the right and wrong options, without contexts and questions. We first investigate the biases of lexical choice. We lowercase the options and then use the WordPiece tokenization (Wu et al., 2016) of $\mathrm{BERT}_{\mathrm{BASE}}$ (Devlin et al., 2019) to get the tokens. Similar to Poliak et al. (2018), for the tokens in options, we analyze the conditional probability of label $l \in \{\text{right}, \text{wrong}\}$ given the token $t$: $p(l|t) = \frac{\mathrm{count}(t, l)}{\mathrm{count}(t)}$. The larger the correlation score is for a particular token, the more likely it contributes to the prediction of the related option. Table 5 reports the tokens in the training set that occur at least twenty times and have the highest scores, since many of the tokens with the highest scores are of low frequency. We further analyze the lengths of right and wrong options (Gururangan et al., 2018) in the training set. We notice a slight difference in the distribution of sentence length for right and wrong options: the average length of wrong options is around 21.82, whereas right options are generally longer, with an average length of 23.06.
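The correlation score above is just a per-token conditional frequency. A minimal sketch of how it could be computed (function and variable names are ours, not the authors' code; the paper tokenizes options with BERT's WordPiece vocabulary, while the toy example below uses made-up tokens):

```python
from collections import Counter

def correlation_scores(options, min_freq=20):
    """options: iterable of (tokens, label) pairs, label in {"right", "wrong"}.
    Returns p(right | token) for tokens occurring at least min_freq times."""
    count_t = Counter()        # count(t)
    count_right = Counter()    # count(t, right)
    for tokens, label in options:
        for t in tokens:
            count_t[t] += 1
            if label == "right":
                count_right[t] += 1
    return {t: count_right[t] / n
            for t, n in count_t.items() if n >= min_freq}

# toy illustration with made-up tokens
scores = correlation_scores(
    [(["thereby", "the"], "right"), (["the"], "wrong")], min_freq=1)
print(scores)  # {'thereby': 1.0, 'the': 0.5}
```

Tokens with scores far from 0.25 (the chance level for four options) are candidates for exploitable bias.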
| Token | Score (%) | Freq |
| --- | --- | --- |
| motive | 65.2 | 23 |
| ##ce | 62.5 | 24 |
| thereby | 56.0 | 25 |
| consequence | 52.4 | 21 |
| warm | 52.4 | 21 |
| interfere | 52.2 | 23 |
| contributes | 52.2 | 23 |
| manufacture | 52.0 | 25 |
| included | 52.0 | 25 |
| preferences | 52.0 | 25 |

Table 5: Top 10 tokens that correlate to right options with more than 20 occurrences.

![](images/54de7da77315b0dc0a90e95b3b4de799a984f867710a07b8c915d0466b47ebb5.jpg)
Figure 3: The distribution of the option length in ReClor with respect to right and wrong labels.

# 4 EXPERIMENTS

# 4.1 BASELINE MODELS

Many neural network based models such as fastText (Joulin et al., 2017), Bi-LSTM, GPT (Radford et al., 2018), GPT-2 (Radford et al., 2019), BERT (Devlin et al., 2019), XLNet (Yang et al., 2019),
Context:
If the purpose of laws is to contribute to people's happiness, we have a basis for criticizing existing laws as well as proposing new laws. Hence, if that is not the purpose, then we have no basis for the evaluation of existing laws, from which we must conclude that existing laws acquire legitimacy simply because they are the laws.

Question: The reasoning in the argument is flawed in that the argument

Options:
A. takes a sufficient condition for a state of affairs to be a necessary condition for it
B. draws a conclusion about how the world actually is on the basis of claims about how it should be
C. infers a causal relationship from the mere presence of a correlation
D. trades on the use of a term in one sense in a premise and in a different sense in the conclusion

Answer: A

Reasoning Process of Humans:
We may first look at the question to understand its specific task: identify a flaw. We then analyze the argument in the context. The conclusion "existing laws acquire legitimacy simply because they are the laws" is based on the argument (purpose is NOT happiness) → (NO basis for criticizing laws), which is obtained from the first statement: (purpose is happiness) → (basis for criticizing laws). However, we know ¬A → ¬B cannot be obtained from A → B. Therefore, we should choose option A, which describes this flaw. The distractors here are different types of reasoning flaws. Prior knowledge of basic logical rules is needed to correctly answer this question.
Context:
Psychologist: Phonemic awareness, or the knowledge that spoken language can be broken into component sounds, is essential for learning to read an alphabetic language. But one also needs to learn how sounds are symbolically represented by means of letters; otherwise, phonemic awareness will not translate into the ability to read an alphabetic language. Yet many children who are taught by the whole-language method, which emphasizes the ways words sound, learn to read alphabetic languages.

Question: Which one of the following can be properly inferred from the psychologist's statements?

Options:
A. The whole-language method invariably succeeds in teaching awareness of how spoken language can be broken into component sounds.
B. Some children who are taught by the whole-language method are not prevented from learning how sounds are represented by means of letters.
C. The whole-language method succeeds in teaching many children how to represent sounds symbolically by means of letters.
D. When the whole-language method succeeds in teaching someone how to represent sounds by means of letters, that person acquires the ability to read an alphabetic language.

Answer: B

Reasoning Process of Humans:
Looking at the question, we know that it is asking about implication. From the first two sentences in the context, we know that there are two necessary conditions for reading an alphabetic language: phonemic awareness and symbolic letters. We also learn that [(NOT symbolic letters) AND (phonemic awareness)] → (cannot read an alphabetic language), denoted as Formula 1. The last sentence in the context says that many children are taught by the whole-language method to learn a language. As for option A, from the context we only know that the whole-language method works for 'many' children, from which we cannot infer that it 'invariably' works.
As for option B, combining the three sentences in the context, we know that the whole-language method meets the two necessary conditions to learn a language; in particular, the last sentence mentions 'learn to read alphabetic languages'. That children learn to read alphabetic languages means they must recognize the symbolic letters that represent sounds, because symbolic letters are a necessary condition for reading an alphabetic language; otherwise, they could not read, by Formula 1 mentioned above. Therefore, option B is correct. As for option C, from the context we only know that the whole-language method teaches phonemic awareness and reading an alphabetic language; symbolic letters may be taught by other methods, so C is wrong. As for D, similar to C, symbolic letters may be taught by other methods, and we also cannot obtain: symbolic letters → read an alphabetic language.
Table 4: Two examples to show how humans would solve the questions.

![](images/e435c4ae49cacfff1cb17bee5e559edf96d364aec409ae279963f7b4bcc53400.jpg)
Figure 2: Examples of some question types. The correct options are marked by $\checkmark$. More examples are shown in Appendix C.

and RoBERTa (Liu et al., 2019) have achieved impressive results in various NLP tasks. We challenge these neural models with ReClor to investigate how well they can perform. Details of the baseline models and implementation are given in Appendices A and B.

# 4.2 EXPERIMENTS TO FIND BIASED DATA

As mentioned earlier, biases prevalently exist in human-annotated datasets (Poliak et al., 2018; Gururangan et al., 2018; Zellers et al., 2019; Niven & Kao, 2019), which are often exploited by models to perform well without truly understanding the text. Therefore, it is necessary to identify the biased data points in ReClor in order to evaluate models in a more comprehensive manner (Sugawara et al., 2018). To this end, we feed the five strong baseline models (GPT, GPT-2, $\mathrm{BERT}_{\mathrm{BASE}}$, $\mathrm{XLNet}_{\mathrm{BASE}}$ and $\mathrm{RoBERTa}_{\mathrm{BASE}}$) with ONLY THE ANSWER OPTIONS for each problem. In other words, we purposely remove the context and question from the inputs. In this way, we are able to identify those problems that can be answered correctly by merely exploiting the biases in answer options, without knowing the relevant context and question. However, the setting of this task is a multiple-choice question with four candidate options, and even a chance baseline has a $25\%$ probability of getting it right. To eliminate the effect of random guessing, we set four different random seeds for each model and pick the data points that are predicted correctly in all four cases to form the EASY set. Then, the data points that are predicted correctly by the models merely by chance are nearly eliminated, since any data point has only a probability of $(25\%)^4 = 0.39\%$ of being guessed right four consecutive times.
Then we unite the sets of data points that are consistently predicted right by each model, because intuitively different models may learn different biases of the dataset. The above process is formulated as the following expression:

$$
\begin{aligned}
\mathbb{C}_{\text{EASY}} = {} & \left(\mathbb{C}_{\text{GPT}}^{\text{seed}_{1}} \cap \mathbb{C}_{\text{GPT}}^{\text{seed}_{2}} \cap \mathbb{C}_{\text{GPT}}^{\text{seed}_{3}} \cap \mathbb{C}_{\text{GPT}}^{\text{seed}_{4}}\right) \\
& \cup \left(\mathbb{C}_{\text{GPT-2}}^{\text{seed}_{1}} \cap \mathbb{C}_{\text{GPT-2}}^{\text{seed}_{2}} \cap \mathbb{C}_{\text{GPT-2}}^{\text{seed}_{3}} \cap \mathbb{C}_{\text{GPT-2}}^{\text{seed}_{4}}\right) \\
& \cup \left(\mathbb{C}_{\text{BERT}}^{\text{seed}_{1}} \cap \mathbb{C}_{\text{BERT}}^{\text{seed}_{2}} \cap \mathbb{C}_{\text{BERT}}^{\text{seed}_{3}} \cap \mathbb{C}_{\text{BERT}}^{\text{seed}_{4}}\right) \\
& \cup \left(\mathbb{C}_{\text{XLNet}}^{\text{seed}_{1}} \cap \mathbb{C}_{\text{XLNet}}^{\text{seed}_{2}} \cap \mathbb{C}_{\text{XLNet}}^{\text{seed}_{3}} \cap \mathbb{C}_{\text{XLNet}}^{\text{seed}_{4}}\right) \\
& \cup \left(\mathbb{C}_{\text{RoBERTa}}^{\text{seed}_{1}} \cap \mathbb{C}_{\text{RoBERTa}}^{\text{seed}_{2}} \cap \mathbb{C}_{\text{RoBERTa}}^{\text{seed}_{3}} \cap \mathbb{C}_{\text{RoBERTa}}^{\text{seed}_{4}}\right), \tag{1}
\end{aligned}
$$

$$
\mathbb{C}_{\text{HARD}} = \mathbb{C}_{\text{TEST}} - \mathbb{C}_{\text{EASY}},
$$

where $\mathbb{C}_{\text{BERT}}^{\text{seed}_{1}}$ denotes the set of data points which are predicted correctly by $\mathrm{BERT}_{\mathrm{BASE}}$ with seed 1, and similarly for
the rest. Table 6 shows the average performance of each model trained with four different random seeds, and the number of data points predicted correctly under all of them. Finally, we get 440 such data points from the testing set $\mathbb{C}_{\text{TEST}}$; we denote this subset as the EASY set $\mathbb{C}_{\text{EASY}}$ and the rest as the HARD set $\mathbb{C}_{\text{HARD}}$.
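The seed-intersection / model-union procedure above is plain set algebra over per-run prediction results. A minimal sketch (the function name and the toy example IDs are ours, not the authors' code; real inputs would be the per-model, per-seed sets of correctly answered test-question IDs):

```python
# Identify biased ("EASY") test examples: those answered correctly by some
# options-only model under all of its random seeds; the rest form the HARD set.

def split_easy_hard(test_ids, correct_by_model):
    """correct_by_model: {model_name: [set_for_seed1, ..., set_for_seed4]}
    of question IDs answered correctly with options-only input."""
    easy = set()
    for seed_sets in correct_by_model.values():
        # intersection over seeds: consistently right, not lucky guesses
        easy |= set.intersection(*seed_sets)   # union over models (Eq. 1)
    hard = set(test_ids) - easy                # complement within the test set
    return easy, hard

# toy illustration with made-up IDs
preds = {
    "gpt":  [{1, 2, 3}, {1, 2}, {1, 2, 4}, {1, 2}],
    "bert": [{2, 5}, {2, 5, 6}, {2, 5}, {2, 5, 1}],
}
easy, hard = split_easy_hard(range(1, 7), preds)
print(sorted(easy), sorted(hard))  # [1, 2, 5] [3, 4, 6]
```

Requiring all four seeds to agree is what drives the chance of a lucky guess surviving down to $(25\%)^4$ per model.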
| Model | Val | Test | Number |
| --- | --- | --- | --- |
| Chance | 25.0 | 25.0 | 3.9 |
| GPT | 45.8 | 42.2 | 238 |
| GPT-2 | 46.8 | 42.6 | 245 |
| $\mathrm{BERT}_{\mathrm{BASE}}$ | 47.2 | 43.2 | 234 |
| $\mathrm{XLNet}_{\mathrm{BASE}}$ | 47.5 | 43.2 | 225 |
| $\mathrm{RoBERTa}_{\mathrm{BASE}}$ | 48.8 | 41.7 | 200 |
| Union | - | - | 440 |

Table 6: Average accuracy of each model using four different random seeds with only answer options as input, and the number of their common correct predictions.

# 4.3 TRANSFER LEARNING THROUGH FINE-TUNING

Among multiple-choice reading comprehension or QA datasets from exams, although the size of ReClor is comparable to those of ARC (Clark et al., 2018) and DREAM (Sun et al., 2019), it is much smaller than RACE (Lai et al., 2017). Recent studies (Min et al., 2017; Howard & Ruder, 2018; Huang et al., 2019; Jin et al., 2019) have shown the effectiveness of pre-training on similar tasks or datasets and then fine-tuning on the target dataset for transfer learning. Jin et al. (2019) find that by first training on RACE (Lai et al., 2017) and then further fine-tuning on the target dataset, the performance of $\mathrm{BERT}_{\mathrm{BASE}}$ on the multiple-choice datasets MC500 (Richardson et al., 2013) and DREAM (Sun et al., 2019) can be significantly boosted from $69.5\%$ to $81.2\%$, and from $63.2\%$ to $70.2\%$, respectively. However, they also find that the model cannot obtain significant improvement, and even performs worse, if it is first fine-tuned on a span-based dataset like SQuAD (Rajpurkar et al., 2016). ReClor is a multiple-choice dataset, so we choose RACE for the fine-tuning study.

# 4.4 RESULTS AND ANALYSIS

The performance of all tested models on ReClor is presented in Table 7. This dataset is built on questions designed for students who apply for admission to graduate schools; we thus randomly choose 100 samples from the testing set and divide them into ten tests, which are distributed to ten different graduate students in a university. We take the average of their scores and present it as the baseline of graduate students. The data of ReClor are carefully chosen and modified from only high-quality questions from standardized graduate entrance exams. We set the ceiling performance to $100\%$ since ambiguous questions are not included in the dataset.
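The (A), (Q, A) and (C, Q, A) input settings compared in Table 7 amount to concatenating different subsets of the fields into the sequence that is scored for each answer option. A minimal illustrative sketch (the function and its plain-space concatenation are ours, not the authors' preprocessing code, which would use each model's own separator tokens):

```python
def build_inputs(context, question, options, setting="CQA"):
    """Return one text sequence per answer option.
    setting: "A" (options only), "QA" (question + option),
    or "CQA" (context + question + option)."""
    prefix = {"A": [], "QA": [question], "CQA": [context, question]}[setting]
    return [" ".join(prefix + [opt]) for opt in options]

opts = ["opt1", "opt2", "opt3", "opt4"]
print(build_inputs("ctx", "q?", opts, setting="A"))    # ['opt1', 'opt2', 'opt3', 'opt4']
print(build_inputs("ctx", "q?", opts, setting="QA")[0])  # 'q? opt1'
```

A multiple-choice model then scores the four sequences and predicts the argmax; the options-only "A" setting is exactly the bias probe of Section 4.2.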
+ +The performance of fastText is better than random guess, showing that word correlation could be used to help improve performance to some extent. It is difficult for Bi-LSTM to converge on this + +
| Model | Input | RACE | Val | Test | Test-E | Test-H |
| --- | --- | --- | --- | --- | --- | --- |
| Chance | (C, Q, A) | ✗ | 25.0 | 25.0 | 25.0 | 25.0 |
| fastText | (C, Q, A) | ✗ | 32.0 | 30.8 | 40.2 | 23.4 |
| Bi-LSTM | (C, Q, A) | ✗ | 27.8 | 27.0 | 26.4 | 27.5 |
| GPT | (C, Q, A) | ✗ | 47.6 | 45.4 | 73.0 | 23.8 |
| GPT-2 | (C, Q, A) | ✗ | 52.6 | 47.2 | 73.0 | 27.0 |
| $\mathrm{BERT}_{\mathrm{BASE}}$ | (C, Q, A) | ✗ | 54.6 | 47.3 | 71.6 | 28.2 |
| $\mathrm{BERT}_{\mathrm{BASE}}$ | (C, Q, A) | ✓ | 55.2 | 49.5 | 68.9 | 34.3 |
| $\mathrm{BERT}_{\mathrm{LARGE}}$ | (A) | ✗ | 46.4 | 42.4 | 69.3 | 21.3 |
| $\mathrm{BERT}_{\mathrm{LARGE}}$ | (Q, A) | ✗ | 48.8 | 43.4 | 72.7 | 20.4 |
| $\mathrm{BERT}_{\mathrm{LARGE}}$ | (C, Q, A) | ✗ | 53.8 | 49.8 | 72.0 | 32.3 |
| $\mathrm{BERT}_{\mathrm{LARGE}}$ | (C, Q, A) | ✓ | 55.6 | 54.5 | 73.9 | 39.3 |
| $\mathrm{XLNet}_{\mathrm{BASE}}$ | (C, Q, A) | ✗ | 55.8 | 50.4 | 75.2 | 30.9 |
| $\mathrm{XLNet}_{\mathrm{BASE}}$ | (C, Q, A) | ✓ | 62.0 | 55.5 | 76.1 | 39.3 |
| $\mathrm{XLNet}_{\mathrm{LARGE}}$ | (A) | ✗ | 45.0 | 42.9 | 66.1 | 24.6 |
| $\mathrm{XLNet}_{\mathrm{LARGE}}$ | (Q, A) | ✗ | 47.8 | 43.4 | 68.6 | 23.6 |
| $\mathrm{XLNet}_{\mathrm{LARGE}}$ | (C, Q, A) | ✗ | 62.0 | 56.0 | 75.7 | 40.5 |
| $\mathrm{XLNet}_{\mathrm{LARGE}}$ | (C, Q, A) | ✓ | 70.8 | 62.4 | 77.7 | 50.4 |
| $\mathrm{RoBERTa}_{\mathrm{BASE}}$ | (C, Q, A) | ✗ | 55.0 | 48.5 | 71.1 | 30.7 |
| $\mathrm{RoBERTa}_{\mathrm{BASE}}$ | (C, Q, A) | ✓ | 56.8 | 53.0 | 72.5 | 37.7 |
| $\mathrm{RoBERTa}_{\mathrm{LARGE}}$ | (A) | ✗ | 48.8 | 43.2 | 69.5 | 22.5 |
| $\mathrm{RoBERTa}_{\mathrm{LARGE}}$ | (Q, A) | ✗ | 49.8 | 45.8 | 72.0 | 25.2 |
| $\mathrm{RoBERTa}_{\mathrm{LARGE}}$ | (C, Q, A) | ✗ | 62.6 | 55.6 | 75.5 | 40.0 |
| $\mathrm{RoBERTa}_{\mathrm{LARGE}}$ | (C, Q, A) | ✓ | 68.0 | 65.1 | 78.9 | 54.3 |
| Graduate Students | (C, Q, A) | - | - | 63.0 | 57.1 | 67.2 |
| Ceiling Performance | (C, Q, A) | - | - | 100 | 100 | 100 |
Table 7: Accuracy (%) of models and human performance. The Input column indicates whether the context (C), question (Q) and answer options (A) are fed to the model. The RACE column indicates whether the model is first fine-tuned on RACE before training on ReClor.

dataset. Transformer-based pre-trained models achieve relatively good performance, close to that of graduate students. However, we find that these models only perform well on the EASY set, with around $75\%$ accuracy, showing that they have an outstanding ability to capture the biases of the dataset; they perform poorly on the HARD set, with only around $30\%$ accuracy. In contrast, humans can still keep good performance on the HARD set. We notice a difference in the testing accuracy of graduate students on the EASY and HARD sets, but this could be due to the small number of students who participated in the experiments. Therefore, we say humans perform relatively consistently on both biased and non-biased data.

It is noticed that if the models are first trained on RACE and then fine-tuned on ReClor, they obtain significant improvement, especially on the HARD set. The overall performance of $\mathrm{RoBERTa}_{\mathrm{LARGE}}$ is even better than that of graduate students. A similar phenomenon is also observed on the DREAM dataset (Sun et al., 2019) by Jin et al. (2019), which shows the potential of transfer learning for reasoning tasks. However, even after fine-tuning on RACE, the best performance of these strong baselines on the HARD set is around $50\%$, still lower than that of graduate students and far from the ceiling performance.

We also experiment with different input settings. Compared with the input setting of answer options only (A), the setting of questions and answer options (Q, A) does not bring significant improvement. This may be because some questions, e.g., Which one of the following is an assumption required by the argument?, Which one of the following, if true, most strengthens the argument?
recur across questions of the same reasoning type and thus offer little additional information. Further adding the context brings a significant boost, showing the high informativeness of the context.

We further analyze model performance with respect to the different question types of logical reasoning. Some results are shown in Figure 4, and the full results are shown in Figures 5, 6 and 7 in Appendix E. The three models $\mathrm{BERT}_{\mathrm{LARGE}}$, $\mathrm{XLNet}_{\mathrm{LARGE}}$ and $\mathrm{RoBERTa}_{\mathrm{LARGE}}$ perform well on most types. On the HARD set, they perform poorly on certain types such as STRENGTHEN, WEAKEN and ROLE, which require extensive logical reasoning, but relatively better on more straightforward types such as CONCLUSION/MAIN POINT and MATCH STRUCTURES. For the result of transfer learning, we analyze $\mathrm{XLNet}_{\mathrm{LARGE}}$ in detail. Though its overall performance is significantly boosted after first fine-tuning on RACE, the histograms at the bottom of Figure 4 show that on the EASY set, the accuracy of the model fine-tuned on RACE is similar to that of the model without it for most question types, while on the HARD set significant improvement is observed on some question types, such as CONCLUSION/MAIN POINT and MOST STRONGLY SUPPORTED. This may be because these types require somewhat less logical reasoning than the others, and similar question types may also be found in the RACE dataset. Thus, pre-training on RACE helps enhance the ability of logical reasoning, especially for relatively simple reasoning types, but more methods are still needed to further enhance this ability, especially for relatively complex reasoning types.
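The per-type, per-split accuracy analysis above can be reproduced with a small grouping routine. The record format (question type, EASY/HARD membership, correctness) and the toy predictions below are hypothetical, not drawn from the released evaluation code.

```python
from collections import defaultdict

def accuracy_by_type(records):
    """Aggregate (question_type, split, correct) records into
    per-type accuracy on the EASY and HARD testing sets."""
    tally = defaultdict(lambda: [0, 0])  # (type, split) -> [n_correct, n_total]
    for qtype, split, correct in records:
        bucket = tally[(qtype, split)]
        bucket[0] += int(correct)
        bucket[1] += 1
    return {key: c / n for key, (c, n) in tally.items()}

# Hypothetical model predictions for two question types.
records = [
    ("Strengthen", "EASY", True), ("Strengthen", "HARD", False),
    ("Strengthen", "HARD", False), ("Conclusion/Main Point", "HARD", True),
]
acc = accuracy_by_type(records)
```

A breakdown like `acc[("Strengthen", "HARD")]` then gives the numbers plotted in the per-type histograms.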
![](images/94eb3ba1341f7f38877c49cf4094509979a605bef6ab53b12123dac525cec337.jpg)

![](images/b5b3634620f0cbbf61ade377e1282fd4353fb8c0155aa76dd7e90743be51d166.jpg)

![](images/0794324d613abe273f9c00acfcadb066e45095fc73819dc658ba38b19c3f206c.jpg)

Figure 4: Performance of models on the EASY (left) and HARD (right) testing sets. $\mathrm{XLNet}_{\mathrm{LARGE}}$ Fine-Tune means the model is first fine-tuned on RACE before training on ReClor.

![](images/a8b76409ed33b40bad0e461ab47c0128d99222f4da096fae5a6fa586178b4306.jpg)

# 5 CONCLUSION

In this paper, we introduce ReClor, a reading comprehension dataset requiring logical reasoning, with the aim of pushing research progress on logical reasoning in NLP forward from sentence-level to passage-level and from simple logical reasoning to multiple complicated types. We propose to identify biased data points and split the testing set into EASY and HARD sets for biased and non-biased data respectively. We further empirically study the different behaviors of state-of-the-art models on these two testing sets, and find that recent powerful transformer-based pre-trained language models have an excellent ability to exploit the biases in the dataset but have difficulty in understanding and reasoning on the non-biased data, with performance close to or only slightly better than random guessing. These results show there is still a long way to go to equip deep learning models with real logical reasoning abilities. We hope this work inspires future research to adopt a similar split technique and evaluation scheme when reporting model performance. We also show that by first fine-tuning on the large-scale dataset RACE and then fine-tuning on ReClor, the models obtain significant improvement, showing the potential of transfer learning to solve reasoning tasks.
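The biased-point identification and EASY/HARD split summarized above can be sketched as follows. The criterion used here, marking a question as biased when every option-only model answers it correctly, and the toy prediction dictionaries are illustrative assumptions rather than the paper's exact procedure.

```python
def split_easy_hard(gold, option_only_preds):
    """Split question ids into EASY (biased) and HARD (non-biased) sets.

    gold: {question_id: correct_option}
    option_only_preds: list of {question_id: predicted_option}, one dict
    per model trained with answer options only (no context or question).
    A question every option-only model gets right is treated as biased.
    """
    easy, hard = [], []
    for qid, answer in gold.items():
        if all(preds[qid] == answer for preds in option_only_preds):
            easy.append(qid)
        else:
            hard.append(qid)
    return easy, hard

# Hypothetical gold answers and two option-only models' predictions.
gold = {"q1": "A", "q2": "C", "q3": "B"}
preds = [{"q1": "A", "q2": "C", "q3": "D"},
         {"q1": "A", "q2": "B", "q3": "B"}]
easy, hard = split_easy_hard(gold, preds)
```

Only q1 is answered correctly by both option-only models, so only q1 is flagged as biased (EASY).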
# ACKNOWLEDGMENTS

We would like to thank the anonymous reviewers for their insightful comments and suggestions, and Rishabh Jain from Georgia Tech for helping build the leaderboard of ReClor on EvalAI. Jiashi Feng was partially supported by NUS IDS R-263-000-C67-646, ECRA R-263-000-C87-133, MOE Tier-II R-263-000-D17-112 and AI.SG R-263-000-D97-490. Weihao Yu and Zihang Jiang would like to thank the TFRC program for the support of computational resources.

# REFERENCES

Common crawl. http://commoncrawl.org, 2019.
Khan Academy. https://www.khanacademy.org/test-prep/lsat/lsat-lessons/logical-reasoning/a/logical-reasoning--article--question-type-catalog, 2019. Accessed Sept. 16, 2019.
Johan Bos and Katja Markert. Recognising textual entailment with logical inference. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pp. 628-635. Association for Computational Linguistics, 2005.
Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 632-642, 2015.
Michael Bugert, Yevgeniy Puzikov, Andreas Rücklé, Judith Eckle-Kohler, Teresa Martin, Eugenio Martínez-Cámara, Daniil Sorokin, Maxime Peyrard, and Iryna Gurevych. Lsdsem 2017: Exploring data generation methods for the story cloze test. In Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics, pp. 56-61, 2017.
Zheng Cai, Lifu Tu, and Kevin Gimpel. Pay attention to the ending: Strong neural baselines for the roc story cloze task. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 616-622, 2017.
Jamie Callan, Mark Hoy, Changkuk Yoo, and Le Zhao. Clueweb09 data set, 2009.
+Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457, 2018. +Cleo Condoravdi, Dick Crouch, Valeria De Paiva, Reinhard Stolle, and Daniel G Bobrow. Entailment, intensionality and text understanding. In Proceedings of the HLT-NAACL 2003 workshop on Text meaning, pp. 38-45, 2003. +Law School Admission Council. https://www.lsac.org/lsat/taking-lsat/test-format/logical-reasoning, 2019a. Accessed Sept. 16, 2019. +Law School Admission Council. https://www.lsac.org/lsat/taking-lsat/test-format/logical-reasoning/logical-reasoning-sample-questions, 2019b. Accessed Sept. 16, 2019. +Ido Dagan, Oren Glickman, and Bernardo Magnini. The pascal recognising textual entailment challenge. In Machine Learning Challenges Workshop, pp. 177-190. Springer, 2005. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171-4186, 2019. +Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. Drop: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proceedings of NAACL-HLT, pp. 2368-2378, 2019. +Yaroslav Fyodorov, Yoad Winter, and Nissim Francez. A natural logic inference system. In Proceedings of the 2nd Workshop on Inference in Computational Semantics (ICoS-2). CiteSeer, 2000. + +Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R Bowman, and Noah A Smith. Annotation artifacts in natural language inference data. arXiv preprint arXiv:1803.02324, 2018. +Ivan Habernal, Henning Wachsmuth, Iryna Gurevych, and Benno Stein. 
The argument reasoning comprehension task: Identification and reconstruction of implicit warrants. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 1930-1940, 2018. +Jeremy Howard and Sebastian Ruder. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 328-339, 2018. +Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Cosmos qa: Machine reading comprehension with contextual commonsense reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2391-2401, 2019. +Di Jin, Shuyang Gao, Jiun-Yu Kao, Tagyoung Chung, and Dilek Hakkani-tur. Mmm: Multi-stage multi-task learning for multi-choice reading comprehension. arXiv preprint arXiv:1910.00458, 2019. +Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pp. 427-431. Association for Computational Linguistics, April 2017. +Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 252-262, 2018. +Tomáš Kočisky, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. The narrativeqa reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317-328, 2018. 
+Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. Race: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683, 2017. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019. +Bill MacCartney and Christopher D Manning. An extended model of natural logic. In Proceedings of the eighth international conference on computational semantics, pp. 140-156. Association for Computational Linguistics, 2009. +Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. arXiv preprint arXiv:1809.02789, 2018. +Sewon Min, Minjoon Seo, and Hannaneh Hajishirzi. Question answering through transfer learning from large fine-grained supervision data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 510-517, 2017. +Timothy Niven and Hung-Yu Kao. Probing neural network comprehension of natural language arguments. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 4658-4664, 2019. +Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. English gigaword fifth edition, 2011. + +Jeffrey Pennington, Richard Socher, and Christopher Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pp. 1532-1543, 2014. +Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pp. 180-191, 2018. +Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 
Improving language understanding by generative pre-training. URL https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf, 2018.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 2019.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.
Pranav Rajpurkar, Robin Jia, and Percy Liang. Know what you don't know: Unanswerable questions for squad. arXiv preprint arXiv:1806.03822, 2018.
Matthew Richardson, Christopher JC Burges, and Erin Renshaw. Mctest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 193-203, 2013.
Alvaro Rodrigo, Anselmo Penas, Yusuke Miyao, Eduard H Hovy, and Noriko Kando. Overview of clef qa entrance exams task 2015. In CLEF (Working Notes), 2015.
Roy Schwartz, Maarten Sap, Ioannis Konstas, Leila Zilles, Yejin Choi, and Noah A Smith. Story cloze task: Uw nlp system. In Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics, pp. 52-55, 2017.
Hideyuki Shibuki, Kotaro Sakamoto, Yoshinobu Kano, Teruko Mitamura, Madoka Ishioroshi, Kelly Y Itakura, Di Wang, Tatsunori Mori, and Noriko Kando. Overview of the ntcir-11 qa-lab task. In Ntcir, 2014.
Saku Sugawara and Akiko Aizawa. An analysis of prerequisite skills for reading comprehension. In Proceedings of the Workshop on Uphill Battles in Language Processing: Scaling Early Achievements to Robust Methods, pp. 1-5, 2016.
Saku Sugawara, Kentaro Inui, Satoshi Sekine, and Akiko Aizawa. What makes reading comprehension questions easier? In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 4208-4219, 2018.
+Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. Dream: A challenge data set and models for dialogue-based reading comprehension. Transactions of the Association for Computational Linguistics, 7:217-231, 2019. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998-6008, 2017. +Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. Glue: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 353-355, 2018. +Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. Constructing datasets for multi-hop reading comprehension across documents. Transactions of the Association for Computational Linguistics, 6:287-302, 2018. +Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 1112–1122, 2018. + +Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016. +Deshraj Yadav, Rishabh Jain, Harsh Agrawal, Prithvijit Chattopadhyay, Taranjeet Singh, Akash Jain, Shiv Baran Singh, Stefan Lee, and Dhruv Batra. Evalai: Towards better evaluation systems for ai agents. arXiv preprint arXiv:1902.03570, 2019. +Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. 
arXiv preprint arXiv:1906.08237, 2019.
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830, 2019.
Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. Record: Bridging the gap between human and machine commonsense reading comprehension. arXiv preprint arXiv:1810.12885, 2018.
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE international conference on computer vision, pp. 19-27, 2015.

# A BASELINE MODELS

fastText. FastText (Joulin et al., 2017) models sentences as a bag of n-grams and predicts the probability of each answer being correct independently. We choose the answer with the highest score as the prediction in the multiple-choice setting.

LSTM sentence encoder. A two-layer bi-LSTM is randomly initialized as a sentence encoder with GloVe word embeddings (Pennington et al., 2014). With a span of text as input, the hidden states of the second layer are max-pooled and then fed into a fully-connected layer to compute the output score.

GPT and GPT-2. GPT (Radford et al., 2018) and GPT-2 (Radford et al., 2019) are both transformer (Vaswani et al., 2017) based models pre-trained with an unsupervised method using a standard language modeling objective. GPT is pre-trained on BooksCorpus; GPT-2 is pre-trained on a larger dataset called WebText. Here we use the smallest model proposed in Radford et al. (2019) as our GPT-2 baseline. To fine-tune on ReClor, the final hidden vector corresponding to the last input token (`_classify_`) is used as the aggregate representation, followed by an extra fully connected layer to compute the score.

BERT.
BERT (Devlin et al., 2019) is also a transformer (Vaswani et al., 2017) based model, trained on BooksCorpus (Zhu et al., 2015) and English Wikipedia with two unsupervised tasks, i.e., Masked LM (MLM) and Next Sentence Prediction (NSP). During fine-tuning, the final hidden vector corresponding to the first input token (`[CLS]`) is used as the aggregate representation, followed by two extra fully connected layers to compute the score.

XLNet. XLNet (Yang et al., 2019) is trained with Permutation Language Modeling and without NSP. In addition, besides the BooksCorpus and English Wikipedia used in BERT, it uses Giga5 (Parker et al., 2011), ClueWeb 2012-B (extended from Callan et al. (2009)), and Common Crawl (com, 2019) for pre-training. We use the final hidden vector corresponding to the last input token as the aggregate representation and introduce two fully connected layers to predict the score.

RoBERTa. RoBERTa (Liu et al., 2019) improves the pre-training procedure of BERT by training the model longer with bigger batches over more data, removing the NSP objective, etc. Two extra fully connected layers are added to transform the final hidden vector of the first input token (`<s>`) into the score.

The input format of different models is shown in Table 8.
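The option-scoring scheme shared by these baselines (one aggregate hidden vector per option, fully connected layers mapping it to a scalar score, and a softmax over the four options) can be sketched with NumPy. The two-layer head with a tanh activation and all dimensions below are illustrative assumptions, not the exact architecture of any baseline.

```python
import numpy as np

def predict_option(option_vectors, w1, b1, w2, b2):
    """Score each answer option from its aggregate representation
    (e.g. the [CLS] hidden vector) via two fully connected layers,
    then softmax over the four options to pick the predicted answer."""
    scores = np.array([np.tanh(h @ w1 + b1) @ w2 + b2 for h in option_vectors])
    probs = np.exp(scores - scores.max())  # numerically stable softmax
    probs /= probs.sum()
    return probs, int(np.argmax(probs))

# Random stand-ins for encoder outputs and head weights.
rng = np.random.default_rng(0)
dim, hidden = 16, 8
option_vectors = rng.normal(size=(4, dim))  # one vector per answer option
w1, b1 = rng.normal(size=(dim, hidden)), np.zeros(hidden)
w2, b2 = rng.normal(size=hidden), 0.0
probs, pred = predict_option(option_vectors, w1, b1, w2, b2)
```

Training then amounts to cross-entropy on `probs` against the correct option index, with the encoder and the head fine-tuned jointly.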
| Model | Input Format |
| --- | --- |
| GPT (Radford et al., 2018) | `_start_ Context _delimiter_ Question \|\| Option _classify_` |
| GPT-2 (Radford et al., 2019) | `_start_ Context _delimiter_ Question \|\| Option _classify_` |
| BERT (Devlin et al., 2019) | `[CLS] Context [SEP] Question \|\| Option [SEP] [PAD]...` |
| XLNet (Yang et al., 2019) | `<pad>... Context <sep> Question \|\| Option <sep> <cls>` |
| RoBERTa (Liu et al., 2019) | `<s> Context </s></s> Question \|\| Option </s><pad>...` |
Table 8: Input formats of different models. Context, Question and Option represent the token sequences of the context, question and option respectively, and `||` denotes concatenation.

# B IMPLEMENTATION DETAILS

All models are optimized with Adam. For fastText, we use its Python library by converting ReClor to the required form and keep the default hyperparameter settings. For Bi-LSTM, we use a two-layer bidirectional LSTM with GloVe 300d word embeddings (Pennington et al., 2014) followed by max-pooling and a fully-connected layer. We train the model for 100 epochs using a batch size of 64 and a learning rate of 0.1, with a learning rate decay of 0.5 applied every 10 epochs. For the pre-trained models, we modify the code of the Transformers library of Hugging Face to implement them on ReClor. We use a batch size of 24 and fine-tune for 10 epochs. The maximum input sequence length for all models is 256. The detailed hyperparameters are shown in Table 9.
| Hyperparam | GPT | GPT-2 | BERT-BASE | BERT-LARGE | XLNet-BASE | XLNet-LARGE | RoBERTa-BASE | RoBERTa-LARGE |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Learning Rate | 6.25e-5 | 6.25e-5 | 2e-5 | 2e-5 | 2e-5 | 2e-5 | 1e-5 | 1e-5 |
| Batch Size | 24 | 24 | 24 | 24 | 24 | 24 | 24 | 24 |
| Max Seq Length | 256 | 256 | 256 | 256 | 256 | 256 | 256 | 256 |
| Learning Rate Decay | Linear | Linear | Linear | Linear | Linear | Linear | Linear | Linear |
| Number of Epochs | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 |
| Warm-up Proportion | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 |
| Weight Decay | 0.01 | 0.01 | 0.0 | 0.0 | 0.01 | 0.01 | 0.01 | 0.01 |
| Adam Epsilon | 1e-8 | 1e-8 | 1e-6 | 1e-6 | 1e-6 | 1e-6 | 1e-6 | 1e-6 |
| Adam Betas | (0.9, 0.999) | (0.9, 0.999) | (0.9, 0.999) | (0.9, 0.999) | (0.9, 0.999) | (0.9, 0.999) | (0.9, 0.98) | (0.9, 0.98) |
| Clip Grad Norm | None | None | None | None | None | None | None | None |
Table 9: Hyperparameters for fine-tuning pre-trained language models on ReClor

# C EXAMPLES

# Type: Necessary Assumptions

Definition: identify the claim that must be true or is required in order for the argument to work

# Context:

Slash-and-burn agriculture involves burning several acres of forest, leaving vegetable ash that provides ample fertilizer for three or four years of bountiful crops. On the cleared land nutrients leach out of the soil, however, and the land becomes too poor to support agriculture. New land is then cleared by burning and the process starts again. Since most farming in the tropics uses this method, forests in this region will eventually be permanently eradicated.

Question: The argument depends on the assumption that

# Options:

A. forests in the tropics do not regenerate well enough to restore themselves once they have been cleared by the slash-and-burn method
B. some other methods of agriculture are not as destructive to the environment in tropical regions as the slash-and-burn method is
C. forests in the tropics are naturally deficient in nutrients that are needed to support the growth of plants that are not native to those regions
D. slash-and-burn agriculture is particularly suitable for farming in tropical areas

Answer: A

Table 10: The definition and an example of the logical reasoning type - Necessary Assumptions
Type: Sufficient Assumptions +Definition: identify a sufficient assumption, that is, an assumption that, if added to the argument, would make it logically valid
Context:
Geologist: A new method for forecasting earthquakes has reliably predicted several earthquakes. Unfortunately, this method can predict only that the magnitude of an earthquake will fall somewhere within a range of two and a half points on the Richter scale. Thus, since a difference of two and a half points can be the difference between a marginally perceptible shaking and a quake that causes considerable damage, the new method is unlikely to be useful.
Question: Which one of the following, if assumed, enables the geologist's conclusion to be properly inferred?
Options:
A. An earthquake-forecasting method is unlikely to be useful unless its predictions always differentiate earthquakes that are barely noticeable from ones that result in substantial destruction.
B. Several well-established methods for forecasting earthquakes can predict within much narrower ranges than two and a half points on the Richter scale.
C. Even if an earthquake-forecasting method makes predictions within a very narrow range on the Richter scale, this method is not likely to be useful unless its predictions are reliable.
D. An earthquake-forecasting method has not been shown to be useful until it has been used to reliably predict a large number of earthquakes.
Answer: A
+ +Table 11: The definition and an example of the logical reasoning type - Sufficient Assumptions + +
Type: Strengthen +Definition: identify information that would strengthen an argument
Context: +Financial success does not guarantee happiness. This claim is not mere proverbial wisdom but a fact verified by statistics. In a recently concluded survey, only one-third of the respondents who claimed to have achieved financial success reported that they were happy. +Question: Which one of the following, if true, most strongly supports the conclusion drawn from the survey results? +Options: +A. Most of the respondents who reported they were unhappy were in fact happy. +B. The respondents who reported financial success were, for the most part, financially successful. +C. Many of the respondents who claimed not to have achieved financial success reported that they were happy five years ago. +D. Many of the respondents who failed to report financial success were in fact financially successful. +Answer: B
+ +Table 12: The definition and an example of the logical reasoning type - Strengthen + +
Type: Weaken +Definition: identify information that would weaken an argument
Context: +“DNA fingerprinting” is a recently-introduced biochemical procedure that uses a pattern derived from a person's genetic material to match a suspect's genetic material against that of a specimen from a crime scene. Proponents have claimed astronomically high odds against obtaining a match by chance alone. These odds are based on an assumption that there is independence between the different characteristics represented by a single pattern. +Question: Which one of the following, if true, casts the most doubt on the claim of the proponents of DNA fingerprinting? +Options: +A. The skill required of laboratory technicians performing the DNA fingerprinting procedure is not extraordinary. +B. There is a generally accepted theoretical basis for interpreting the patterns produced by the procedure. +C. In the whole population there are various different subgroups, within each of which certain sets of genetic characteristics are shared. +D. In the investigation of certain genetic diseases, the techniques used in DNA fingerprinting have traced the transmission of the diseases among the living members of very large families. +Answer: C
+ +Table 13: The definition and an example of the logical reasoning type - Weaken + +
Type: Evaluation +Definition: identify information that would be useful to know to evaluate an argument
Context: +George: Some scientists say that global warming will occur because people are releasing large amounts of carbon dioxide into the atmosphere by burning trees and fossil fuels. We can see, though, that the predicted warming is occurring already. In the middle of last winter, we had a month of springlike weather in our area, and this fall, because of unusually mild temperatures, the leaves on our town's trees were three weeks late in turning color. +Question: Which one of the following would it be most relevant to investigate in evaluating the conclusion of George's argument? +Options: +A. whether air pollution is causing some trees in the area to lose their leaves +B. what proportion of global emissions of carbon dioxide is due to the burning of trees by humans +C. whether unusually warm weather is occurring elsewhere on the globe more frequently than before +D. when leaves on the trees in the town usually change color +Answer: C
+ +Table 14: The definition and an example of the logical reasoning type - Evaluation + +# Type: Implication + +Definition: identify something that follows logically from a set of premises + +# Context: + +To be horrific, a monster must be threatening. Whether or not it presents psychological, moral or social dangers, or triggers enduring infantile fears, if a monster is physically dangerous then it is threatening. In fact, even a physically benign monster is horrific if it inspires revulsion. + +Question: Which one of the following logically follows from the statements above? + +# Options: + +A. Any horror-story monster that is threatening is also horrific. +B. If a monster triggers infantile fears but is not physically dangerous, then it is not horrific. +C. All monsters that are not physically dangerous, but that are psychologically dangerous and inspire revulsion, are threatening. +D. If a monster is both horrific and psychologically threatening, then it does not inspire revulsion. +Answer: C + +Table 15: The definition and an example of the logical reasoning type - Implication + +
Type: Conclusion/Main Point +Definition: identify the conclusion/main point of a line of reasoning
Context: +Whether or not one can rightfully call a person's faithfulness a virtue depends in part on the object of that person's faithfulness. Virtues are by definition praiseworthy, which is why no one considers resentment virtuous, even though it is in fact a kind of faithfulness – faithfulness to hatreds or animosities. +Question: Which one of the following most accurately expresses the overall conclusion drawn in the argument? +Options: +A. The object of a person's faithfulness partially determines whether or not that faithfulness is virtuous. +B. Virtuous behavior is praiseworthy by definition. +C. Resentment should not be considered a virtuous emotion. +D. Behavior that emerges from hatred or animosity cannot be called virtuous. +Answer: A
+ +Table 16: The definition and an example of the logical reasoning type - Conclusion/Main Point + +
Type: Most Strongly Supported +Definition: find the choice that is most strongly supported by a stimulus
Context:
After a nuclear power plant accident, researchers found radioactive isotopes of iodine, tellurium, and cesium, but no heavy isotopes, in the atmosphere downwind. This material came either from spent fuel rods or from the plant's core. Spent fuel rods never contain significant quantities of tellurium isotopes. Radioactive material ejected into the atmosphere directly from the core would include heavy isotopes. After the accident, steam, which may have been in contact with the core, was released from the plant. The core contains iodine, tellurium, and cesium isotopes, which are easily dissolved by steam.
Question:
Of the following statements, which one is most strongly supported by the information above?
Options:
A. The nuclear power plant's spent fuel rods were not damaged.
B. Spent fuel rods do not contain heavy isotopes in significant quantities.
C. The researchers found some radioactive material from spent fuel rods as well as some material that was ejected into the atmosphere directly from the plant's core.
D. The radioactive material detected by the researchers was carried into the atmosphere by the steam that was released from the plant.
Answer: D
+ +Table 17: The definition and an example of the logical reasoning type - Most Strongly Supported + +
Type: Explain or Resolve +Definition: identify information that would explain or resolve a situation
Context: +To reduce the mosquito population in a resort area, hundreds of trees were planted that bear fruit attractive to birds. Over the years, as the trees matured, they attracted a variety of bird species and greatly increased the summer bird population in the area. As expected, the birds ate many mosquitoes. However, the planting of the fruit trees had the very opposite of its intended effect. +Question: +Which one of the following, if true, most helps to explain the apparently paradoxical result? +Options: +A. Most of the species of birds that were attracted by the trees that were planted did not eat mosquitoes. +B. Increases and decreases in mosquito populations tend to follow a cyclical pattern. +C. The species of birds that were attracted in the greatest number by the fruit of the trees that were planted did not eat mosquitoes. +D. The birds attracted to the area by the trees ate many more insects that prey on mosquitoes than they did mosquitoes. +Answer: D
+ +Table 18: The definition and an example of the logical reasoning type - Explain or Resolve + +
Type: Principle +Definition: identify the principle, or find a situation that conforms to a principle, or match the principles
Context:
Buying elaborate screen savers – programs that put moving images on a computer monitor to prevent damage – can cost a company far more in employee time than it saves in electricity and monitor protection. Employees cannot resist spending time playing with screen savers that flash interesting graphics across their screens.
Question:
Which one of the following most closely conforms to the principle illustrated above?
Options:
A. An electronic keyboard may be cheaper to buy than a piano but more expensive to repair.
B. An energy-efficient insulation system may cost more up front but will ultimately save money over the life of the house.
C. The time that it takes to have a pizza delivered may be longer than it takes to cook a complete dinner.
D. A complicated hotel security system may cost more in customer goodwill than it saves in losses by theft.
Answer: D
+ +Table 19: The definition and an example of the logical reasoning type - Principle + +
Type: Dispute +Definition: identify or infer an issue in dispute
Context: +Raphaela: Forcing people to help others is morally wrong. Therefore, no government has the right to redistribute resources via taxation. Anyone who wants can help others voluntarily. Edward: Governments do have that right, insofar as they give people the freedom to leave and hence not to live under their authority. +Question: +Raphaela and Edward disagree about the truth of which one of the following? +Options: +A. Any government that forces people to help others should permit emigration. +B. Any government that permits emigration has the right to redistribute resources via taxation. +C. Any government that redistributes resources via taxation forces people to help others. +D. Every government should allow people to help others voluntarily. +Answer: B
+ +Table 20: The definition and an example of the logical reasoning type - Dispute + +
Type: Technique +Definition: identify the technique used in the reasoning of an argument
Context: +Joanna: The only way for a company to be successful, after emerging from bankruptcy, is to produce the same goods or services that it did before going bankrupt. It is futile for such a company to try to learn a whole new business. Ruth: Wrong. The Kelton Company was a major mining operation that went into bankruptcy. On emerging from bankruptcy, Kelton turned its mines into landfills and is presently a highly successful waste-management concern. +Question: +Ruth uses which one of the following argumentative techniques in countering Joanna's argument? +Options: +A. She undermines a claim by showing that it rests on an ambiguity. +B. She offers an alternative explanation for a phenomenon. +C. She presents a counterexample to a claim. +D. She establishes a conclusion by excluding the only plausible alternative to that conclusion. +Answer: C
+ +Table 21: The definition and an example of the logical reasoning type - Technique + +
Type: Role +Definition: describe the individual role that a statement is playing in a larger argument
Context: +The position that punishment should be proportional to how serious the offense is but that repeat offenders should receive harsher punishments than first-time offenders is unsustainable. It implies that considerations as remote as what an offender did years ago are relevant to the seriousness of an offense. If such remote considerations were relevant, almost every other consideration would be too. But this would make determining the seriousness of an offense so difficult that it would be impossible to apply the proportionality principle. +Question: +The statement that considerations as remote as what an offender did years ago are relevant to the seriousness of an offense plays which one of the following roles in the argument? +Options: +A. It is an allegedly untenable consequence of a view rejected in the argument's overall conclusion. +B. It is a statement the argument provides grounds to accept and from which the overall conclusion is inferred. +C. It is the overall conclusion in favor of which the argument offers evidence. +D. It is a premise offered in support of an intermediate conclusion of the argument. +Answer: A
+ +Table 22: The definition and an example of the logical reasoning type - Role + +
Type: Identify a Flaw +Definition: identify a flaw in an argument's reasoning
Context: +The tidal range at a particular location is the difference in height between high tide and low tide. Tidal studies have shown that one of the greatest tidal ranges in the world is found in the Bay of Fundy and reaches more than seventeen meters. Since the only forces involved in inducing the tides are the sun's and moon's gravity, the magnitudes of tidal ranges also must be explained entirely by gravitational forces. +Question: +Which one of the following most accurately describes a flaw in the reasoning above? +Options: +A. It does not differentiate between the tidal effect of the sun and the tidal effect of the moon. +B. It fails to consider that the size of a tidal range could be affected by the conditions in which gravitational forces act. +C. It presumes, without providing warrant, that most activity within the world's oceans is a result of an interplay of gravitational forces. +D. It gives only one example of a tidal range. +Answer: B
+ +Table 23: The definition and an example of the logical reasoning type - Identify a Flaw + +
Type: Match Flaws +Definition: find a choice containing an argument that exhibits the same flaws as the passage's argument
Context: +The museum's night security guard maintains that the thieves who stole the portrait did not enter the museum at any point at or above ground level. Therefore, the thieves must have gained access to the museum from below ground level. +Question: +The flawed pattern of reasoning in the argument above is most similar to that in which one of the following? +Options: +A. As had generally been expected, not all questionnaires were sent in by the official deadline. It follows that plans must have been made for the processing of questionnaires received late. +B. The store's competitors claim that the store, in selling off the shirts at those prices, neither made any profit nor broke even. Consequently, the store's customers must have been able to buy shirts there at less than the store's cost. +C. The product label establishes that this insecticide is safe for both humans and pets. Therefore, the insecticide must also be safe for such wild mammals as deer and rabbits. +D. If the census is to be believed, the percentage of men who are married is higher than the percentage of women who are married. Thus, the census must show a higher number of men than of women overall. +Answer: B
+ +Table 24: The definition and an example of the logical reasoning type - Match Flaws + +
Type: Match the Structure +Definition: match the structure of an argument in a choice to the structure of the argument in the passage
Context: +It is an absurd idea that whatever artistic endeavor the government refuses to support it does not allow, as one can see by rephrasing the statement to read: No one is allowed to create art without a government subsidy. +Question: +The pattern of reasoning in which one of the following is most similar to that in the argument above? +Options: +A. The notion that every scientist who has been supported by a government grant will be successful is absurd, as one can see by rewording it:No scientist is allowed to do research without a government grant. +B. The notion that every scientist who is supported by a government grant will be successful is absurd, as one can see by rewording it:No scientist lacking governmental support will be successful. +C. The claim that any driver who is not arrested does not break the law is absurd, as one can see by rewording it: Every driver who gets arrested has broken the law. +D. The claim that any driver who is not arrested does not break the law is absurd, as one can see by rewording it: Every driver who breaks the law gets arrested. +Answer: D
+ +Table 25: The definition and an example of the logical reasoning type - Match the Structure + +
Type: Others +Definition: other types of questions that are not covered by the categories above
Context: +PhishCo runs a number of farms in the arid province of Nufa, depending largely on irrigation. Now, as part of a plan to efficiently increase the farms' total production, it plans to drill down to an aquifer containing warm, slightly salty water that will be used to raise fish in ponds. The water from the ponds will later be used to supplement piped-in irrigation water for PhishCo's vegetable fields, and the ponds and accompanying vegetation should help reduce the heat in the area of the farms. +Question: +Which of the following would, if true, most strongly suggest that the plan, if implemented, would increase the overall efficiency of PhishCo's farms? +Options: +A. Organic waste from fish in the pond water will help to fertilize fields where it is used for irrigation. +B. Fish raised on PhishCo's farms are likely to be saleable in the nearest urban areas. +C. Ponds will be located on low-lying land now partially occupied by grain crops. +D. The government of Nufa will help to arrange loan financing to partially cover the costs of drilling. +Answer: A
+ +Table 26: The definition and an example of the logical reasoning type - Others + +D CONSISTENCY OF DIFFERENT MODELS + +
|  | GPT | GPT-2 | $\mathrm{BERT}_{\mathrm{BASE}}$ | $\mathrm{XLNet}_{\mathrm{BASE}}$ | $\mathrm{RoBERTa}_{\mathrm{BASE}}$ |
| --- | --- | --- | --- | --- | --- |
| GPT | 245 | 164 | 152 | 142 | 116 |
| GPT-2 |  | 238 | 151 | 144 | 123 |
| $\mathrm{BERT}_{\mathrm{BASE}}$ |  |  | 234 | 138 | 124 |
| $\mathrm{XLNet}_{\mathrm{BASE}}$ |  |  |  | 225 | 125 |
| $\mathrm{RoBERTa}_{\mathrm{BASE}}$ |  |  |  |  | 200 |
+ +Table 27: Overlap of each pair of models after intersection among 4 random seeds. + +![](images/e033bd3a65c7fd0f61f96ade401dc4dc0de9feb5a718478ade3b5e0fc193b175.jpg) +E RESULTS WITH RESPECT TO DIFFERENT QUESTION TYPES +Figure 5: Accuracy of all baseline models on overall testing set + +![](images/5a817b118038a31be7abff1132e616e11e1d069545bc9b0c40eeb9f26c7bb844.jpg) +Figure 6: Accuracy of all baseline models on EASY set of testing set + +![](images/2aa756b598de563d0f5013e8cfa6d56e61cb841e9c250b92fe7eeffb2462fe53.jpg) +Figure 7: Accuracy of all baseline models on HARD set of testing set + +![](images/673e7efe4f4a3106f81219b184fa27be738bfffb38ee6bf43eaab71a79ea1291.jpg) + +![](images/9ae05e524d454e5145bfa35cd7b5f3815acb61ea5b8f47b0216ca27dfb787e0e.jpg) + +![](images/354802a57187569447058b71a7a3e37a7fc4f8ccadc05a575794611232aa6508.jpg) +Figure 8: Performance of $\mathrm{BERT}_{\mathrm{LARGE}}$ (top) and RoBERTaLARGE (bottom) on EASY (left) and HARD (right) testing sets. + +![](images/0b7e82a764092b2ee10568389ef1bf3115784f8e4675e63a9662bc1ba4bab79a.jpg) \ No newline at end of file diff --git a/reclorareadingcomprehensiondatasetrequiringlogicalreasoning/images.zip b/reclorareadingcomprehensiondatasetrequiringlogicalreasoning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..ec81e83b21ada1b393a2ba5dda34600d38d6b515 --- /dev/null +++ b/reclorareadingcomprehensiondatasetrequiringlogicalreasoning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c6d7dbd32cc177b3a45d4778b69359e5698cca5b677e5c3b44733b12756f0d04 +size 3261307 diff --git a/reclorareadingcomprehensiondatasetrequiringlogicalreasoning/layout.json b/reclorareadingcomprehensiondatasetrequiringlogicalreasoning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..666b0551a34befa8dd3e96307f4c7b17d2f8808c --- /dev/null +++ b/reclorareadingcomprehensiondatasetrequiringlogicalreasoning/layout.json @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:680c6cb849c8e1589ea8f50c6adc3e177cc0254dba5bc9d275bc4a3285671d88 +size 497396 diff --git a/recurrentneuralcircuitsforcontourdetection/9e1ea1e6-817b-42db-bc4d-a7adb4bf552f_content_list.json b/recurrentneuralcircuitsforcontourdetection/9e1ea1e6-817b-42db-bc4d-a7adb4bf552f_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..03fc57a24d87bd1bc8d71cc384fd8a1e13823232 --- /dev/null +++ b/recurrentneuralcircuitsforcontourdetection/9e1ea1e6-817b-42db-bc4d-a7adb4bf552f_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9372dbfdced7a12d76a9f19d0b796f6a73bdf1e3514d47c8dd9ad68992c9f56a +size 127704 diff --git a/recurrentneuralcircuitsforcontourdetection/9e1ea1e6-817b-42db-bc4d-a7adb4bf552f_model.json b/recurrentneuralcircuitsforcontourdetection/9e1ea1e6-817b-42db-bc4d-a7adb4bf552f_model.json new file mode 100644 index 0000000000000000000000000000000000000000..85e35bff2b01df4ab4985e7d3d9e83757af777bb --- /dev/null +++ b/recurrentneuralcircuitsforcontourdetection/9e1ea1e6-817b-42db-bc4d-a7adb4bf552f_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d9da5421f7fa18ab5462648d2f7756937e2adda9f5369524afc161da31660999 +size 150615 diff --git a/recurrentneuralcircuitsforcontourdetection/9e1ea1e6-817b-42db-bc4d-a7adb4bf552f_origin.pdf b/recurrentneuralcircuitsforcontourdetection/9e1ea1e6-817b-42db-bc4d-a7adb4bf552f_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..64b1cd619ee6b55ad44027d82d1975fb2c840e96 --- /dev/null +++ b/recurrentneuralcircuitsforcontourdetection/9e1ea1e6-817b-42db-bc4d-a7adb4bf552f_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ddf56abcfc960980498d459e38ea58fc2b265ea80e6c87071489eb4e0cde54ab +size 11951037 diff --git a/recurrentneuralcircuitsforcontourdetection/full.md b/recurrentneuralcircuitsforcontourdetection/full.md new file mode 100644 index 
0000000000000000000000000000000000000000..ce27940ccfac5f4dc69e80d7bba12c6cf7297763 --- /dev/null +++ b/recurrentneuralcircuitsforcontourdetection/full.md @@ -0,0 +1,483 @@ +# RECURRENT NEURAL CIRCUITS FOR CONTOUR DETECTION + +Drew Linsley†, Junkyung Kim†‡, Alekh Ashok & Thomas Serre + +Department of Cognitive, Linguistic and Psychological Sciences Brown University + +Providence, RI 02912, USA + +{drew_linsley, alekh_ashok, thomas_serre}@brown.edu + +junkyung@google.com + +# ABSTRACT + +We introduce a deep recurrent neural network architecture that approximates known visual cortical circuits (Mély et al., 2018). We show that this architecture, which we refer to as the $\gamma$ -Net, learns to solve contour detection tasks with better sample efficiency than state-of-the-art feedforward networks, while also exhibiting a classic perceptual illusion, known as the orientation-tilt illusion. Correcting this illusion significantly reduces $\gamma$ -Net contour detection accuracy by driving it to prefer low-level edges over high-level object boundary contours. Overall, our study suggests that the orientation-tilt illusion is a byproduct of neural circuits that help biological visual systems achieve robust and efficient contour detection, and that incorporating such circuits in artificial neural networks can improve computer vision. + +# 1 INTRODUCTION + +An open debate since the inception of vision science concerns why we experience visual illusions. Consider the class of "contextual" illusions, where the perceived qualities of an image region, such as its orientation or color, are biased by the qualities of surrounding image regions. A well-studied contextual illusion is the orientation-tilt illusion depicted in Fig. 1a, where perception of the central grating's orientation is influenced by the orientation of the surrounding grating (O'Toole & Wenderoth, 1977). When the two orientations are similar, the central grating appears tilted slightly away from the surround (Fig. 
1a, top). When the two orientations are dissimilar, the central grating appears tilted slightly towards the surround (Fig. 1a, bottom). Is the contextual bias of the orientation-tilt illusion a bug of biology or a byproduct of optimized neural computations? + +Over the past 50 years, there have been a number of neural circuit mechanisms proposed to explain individual contextual illusions (reviewed in Mély et al., 2018). Recently, Mély et al. (2018) proposed a cortical circuit, constrained by physiology of primate visual cortex (V1), that offers a unified explanation for contextual illusions across visual domains – from the orientation-tilt illusion to color induction. These illusions arise in the circuit from recurrent interactions between neural populations with receptive fields that tile visual space, leading to contextual (center/surround) effects. For the orientation-tilt illusion, neural populations encoding the surrounding grating can either suppress or facilitate the activity of neural populations encoding the central grating, leading to repulsion vs. attraction, respectively. These surround neural populations compete to influence encodings of the central grating: suppression predominates when center/surround are similar, and facilitation predominates when center/surround are dissimilar. + +The neural circuit of Mély et al. (2018) explains how contextual illusions might emerge, but it does not explain why. One possibility is that contextual illusions like the orientation-tilt illusion are “bugs”: vestiges of evolution or biological constraints on the neural hardware. Another possibility is that contextual illusions are the by-product of efficient neural routines for scene segmentation (Keemink & van Rossum, 2016; Mély et al., 2018). Here, we provide computational evidence for the latter possibility and demonstrate that the orientation-tilt illusion reflects neural strategies optimized for object contour detection.
+ +Contributions We introduce the $\gamma$ -Net, a trainable and hierarchical extension of the neural circuit of Mély et al. (2018), which explains contextual illusions. (i) The $\gamma$ -Net is more sample efficient than state-of-the-art convolutional architectures on two separate contour detection tasks. (ii) Similar to humans but not state-of-the-art contour detection models, the $\gamma$ -Net exhibits an orientation-tilt illusion after being optimized for contour detection. This illusion emerges from its preference for high-level object-boundary contours over low-level edges, indicating that neural circuits involved in contextual illusions also support sample-efficient solutions to contour detection tasks. + +# 2 RELATED WORK + +Modeling the visual system Convolutional neural networks (CNNs) are often considered the de facto "standard model" of vision. CNNs and their extensions represent the state of the art for most computer vision applications with performance approaching – and sometimes exceeding – human observers on certain visual recognition tasks (He et al., 2016; Lee et al., 2017; Phillips et al., 2018). CNNs also provide the best fit to rapid neural responses in the visual cortex (see Kriegeskorte 2015; Yamins & DiCarlo 2016 for reviews). Nevertheless, multiple lines of evidence suggest that biological vision is still far more robust and versatile than CNNs (see Serre, 2019, for a recent review). CNNs suffer from occlusions and clutter (Fyall et al., 2017; Rosenfeld et al., 2018; Tang et al., 2018). They are also sample inefficient at learning visual relations (Kim et al., 2018) and solving simple grouping tasks (Linsley et al., 2018c). State-of-the-art CNNs require massive datasets to reach their impressive accuracy (Lake et al., 2015) and their ability to generalize beyond training data is limited (Geirhos et al., 2018; Recht et al., 2018). 
+ +Cortical feedback contributes to the robustness of biological vision (Hochstein & Ahissar, 2002; Wyatte et al., 2014; Kafaligonul et al., 2015). Feedforward projections in the visual system are almost always matched by feedback projections (Felleman & Van Essen, 1991), and feedback has been implicated in visual "routines" that cannot be implemented through purely feedforward vision, such as incremental grouping or filling-in (O'Reilly et al., 2013; Roelfsema, 2006). There is also a growing body of work demonstrating the potential of recurrent neural networks (RNNs) to account for neural recordings (Fyall et al., 2017; Klink et al., 2017; Siegel et al., 2015; Tang et al., 2018; Nayebi et al., 2018; Kar et al., 2019; Kietzmann et al., 2019). + +Feedback for computer vision In contrast to CNNs, which build processing depth through a cascade of filtering and pooling stages with unique weights, RNNs process stimuli with filtering stages that reuse weights over "timesteps" of recurrence. On each discrete processing timestep, an RNN updates its hidden state through a nonlinear combination of an input and its hidden state from the previous timestep. RNNs have been extended from their roots in sequence processing (e.g., Mozer 1992) to computer vision by computing the activity of RNN units through convolutional kernels. The common interpretation of these convolutional-RNNs is that the input to each layer functions as a (fixed) feedforward drive, which is combined with layer-specific feedback from an evolving hidden state to dynamically adjust layer activity (Linsley et al., 2018c; George et al., 2017; Lotter et al., 2016; Wen et al., 2018; Liao & Poggio, 2016; Spoerer et al., 2017; Nayebi et al., 2018; Tang et al., 2018). In the current work, we are motivated by a similar convolutional-RNN, the horizontal gated recurrent unit (hGRU, Linsley et al.
2018a), which approximates the recurrent neural circuit model of (Mély et al., 2018) for explaining contextual illusions. + +# 3 RECURRENT NEURAL MODELS + +We begin by reviewing the dynamical neural circuit of Mély et al. (2018). This model explains contextual illusions by simulating interactions between cortical hypercolumns tiling the visual field (where hypercolumns describe a set of neurons encoding features for multiple visual domains at a single retinotopic position). In the model, hypercolumns are indexed by their 2D coordinate $(x,y)$ and feature channels $k$ . Units in hypercolumns encode idealized responses for a visual domain (e.g., neural responses from the orientation domain were used to simulate an orientation-tilt illusion; + +![](images/8546aa0d5d0828f11dda20ae000f116ec89863a009c27a5e86ffaa73bdcd17a2.jpg) +Figure 1: The orientation tilt-illusion (O'Toole & Wenderoth, 1977) is a contextual illusion where a central grating's perceived orientation is influenced by a surround grating's orientation. (a) When a central grating has a similar orientation as its surround, it is judged as tilting away from the surround (repulsion). When the two gratings have dissimilar orientations, the central grating is judged as tilting towards the surround (attraction). (b) We extend the recurrent circuit proposed by Mély et al. (2018) to explain this and other contextual illusions into a hierarchical model that learns horizontal (within a layer) and top-down (between layer) interactions between units. The circuit simulates dynamical suppressive $(\mathbf{H}_{xyk}^{(S)})$ and facilitative $(\mathbf{H}_{xyk}^{(F)})$ interactions between units in a layer $\ell$ , which receives feedforward drive from a center pathway encoding feature $k$ (e.g., edges oriented at $0^{\circ}$ or $22.5^{\circ}$ ) at position $(x, y)$ in an image. Blocks depict different layers, and arrowed connections denote top-down feedback. 
(c) A deep network schematic of the circuit diagram in (b), which forms the basis of the $\gamma$ -Net introduced here. Horizontal and top-down connections are implemented with feedback gated recurrent units (fGRUs). Image encodings pass through these blocks on every timestep, from bottom-up (left path) to top-down (right path), and predictions are read out from the fGRU closest to image resolution on the final timestep. This motif can be stacked to create a hierarchical model. + +![](images/a4c70486388c05ee13e494bc2323aeaf4c61b8c268c27b12eac5f039e6d388a9.jpg) +Fig. 1b). Dynamics of a single unit at $xyk$ obey the following equations (we bold activity tensors to distinguish them from learned kernels and parameters): + +![](images/cf6e2de9c75d5ad28e2cd44f92d52f6aac01e53a5273d13d6ec1ac83b45a7b24.jpg) + +$$
\eta \dot{H}_{xyk}^{(S)} + \epsilon^{2} H_{xyk}^{(S)} = \left[ \xi Z_{xyk} - \left(\alpha H_{xyk}^{(F)} + \mu\right) C_{xyk}^{(S)} \right]_{+} \quad \#\ \text{Stage 1: Recurrent suppression of } \mathbf{Z}
$$

$$
\tau \dot{H}_{xyk}^{(F)} + \sigma^{2} H_{xyk}^{(F)} = \left[ \nu C_{xyk}^{(F)} \right]_{+}, \quad \#\ \text{Stage 2: Recurrent facilitation of } \mathbf{H}^{(S)}
$$

where

$$
C_{xyk}^{(S)} = \left(W^{S} * \mathbf{H}^{(F)}\right)_{xyk} \quad \#\ \text{Compute suppression interactions}
$$

$$
C_{xyk}^{(F)} = \left(W^{F} * \mathbf{H}^{(S)}\right)_{xyk}. \quad \#\ \text{Compute facilitation interactions}
$$

Circuit activities consist of a feedforward drive, recurrent suppression, and recurrent facilitation, respectively denoted as $\mathbf{Z}$ , $\mathbf{H}^{(S)}$ , $\mathbf{H}^{(F)} \in \mathbb{R}^{X \times Y \times K}$ ( $X$ is width, $Y$ is height of the tensor, and $K$ is its feature channels)*.
The circuit takes its "feedforward" input $\mathbf{Z}$ from hypercolumns (e.g., orientation encodings from hypercolumn units), and introduces recurrent suppressive and facilitatory interactions between units, $\mathbf{C}^{(S)}$ , $\mathbf{C}^{(F)} \in \mathbb{R}^{X \times Y \times K}$ (Fig. 1b). These interactions are implemented with separate kernels for suppression and facilitation, $W^{S}$ , $W^{F} \in \mathbb{R}^{E \times E \times K \times K}$ , where $E$ is the spatial extent of connections on a single timestep (connectivity in this model is constrained by primate physiology). + +These interactions are implemented through convolutions, allowing them to serially spread over timesteps of processing to connect units positioned at different spatial locations. The circuit outputs $\mathbf{H}^{(F)}$ after reaching steady state. + +The circuit model of Mély et al. (2018) depends on competition between $\mathbf{H}^{(S)}$ and $\mathbf{H}^{(F)}$ to explain the orientation-tilt illusion. Competition is implemented by (i) computing suppression vs. facilitation in separate stages, and (ii) having non-negative activities, which enforces these functionally distinct processing stages. With these constraints in the circuit model, the strength of recurrent suppression – but not facilitation – multiplicatively increases with the net recurrent output. For the orientation-tilt illusion, suppression predominates when center and surround gratings have similar orientations. This causes encodings of the surround grating to “repulse” encodings of the center grating. On the other hand, facilitation predominates (causing “attraction”) when center and surround gratings have dissimilar orientations because it is additive and not directly scaled by the circuit output. + +Parameters controlling the circuit's integration, suppression/facilitation, and patterns of horizontal connections between units are tuned by hand. 
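The steady-state behavior described above can be approximated numerically. The following is a minimal NumPy sketch (not the authors' code) that Euler-integrates the two stages; the scalar parameter values are assumptions, and simple $K \times K$ channel-mixing matrices stand in for the paper's $E \times E \times K \times K$ spatial kernels:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def simulate_circuit(Z, W_S, W_F, steps=100, dt=0.1,
                     xi=1.0, alpha=1.0, mu=0.5, nu=1.0,
                     eta=1.0, eps=1.0, tau=1.0, sigma=1.0):
    """Euler-integrate the two-stage dynamics toward steady state.

    Z: feedforward drive, shape (X, Y, K).
    W_S, W_F: suppression/facilitation interaction kernels, simplified
    here to (K, K) channel-mixing matrices applied at each position.
    All scalar parameters are illustrative, not the hand-tuned values.
    """
    H_S = np.zeros_like(Z)  # recurrent suppression state H^(S)
    H_F = np.zeros_like(Z)  # recurrent facilitation state H^(F)
    for _ in range(steps):
        C_S = H_F @ W_S  # suppression interactions, computed from H^(F)
        C_F = H_S @ W_F  # facilitation interactions, computed from H^(S)
        # Stage 1: recurrent suppression of Z
        dH_S = (relu(xi * Z - (alpha * H_F + mu) * C_S) - eps**2 * H_S) / eta
        # Stage 2: recurrent facilitation of H^(S)
        dH_F = (relu(nu * C_F) - sigma**2 * H_F) / tau
        # Euler step, clamped to keep activities non-negative as in the circuit
        H_S = np.maximum(H_S + dt * dH_S, 0.0)
        H_F = np.maximum(H_F + dt * dH_F, 0.0)
    return H_F  # circuit output once (approximately) at steady state
```

A larger `steps` (or smaller `dt`) tightens the steady-state approximation; the original circuit instead integrates until convergence.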
Linear and multiplicative suppression (i.e., shunting inhibition) are controlled by scalars $\mu$ and $\alpha$ , feedforward drive is modulated by the scalar $\xi$ , and linear facilitation is controlled by the scalar $\nu$ . Circuit time constants are scalars denoted by $\eta$ , $\epsilon$ , $\tau$ and $\sigma$ . All activities are non-negative and both stages are linearly rectified (ReLU) $[\cdot]_{+} = \max(\cdot, 0)$ . + +Feedback gated recurrent units Linsley et al. (2018a) developed a version of this circuit for computer vision applications, called the hGRU. In their formulation, gradient descent (rather than the hand-tuning used for the original circuit) fits the model's connectivity and parameters to image datasets. The hGRU was designed to learn a difficult synthetic incremental grouping task, and a single layer of the hGRU learned long-range spatial dependencies that CNNs with orders-of-magnitude more weights could not. The hGRU replaced the circuit's time constants with dynamic gates, converted the recurrent state $\mathbf{H}^{(S)}$ for suppression into an instantaneous activity, and introduced a term for quadratic facilitation. The hGRU also relaxed biological constraints from the original circuit, including the assumption of non-negativity, which enforced competition between recurrent suppression and facilitation (e.g., guaranteeing that Stage 1 in the circuit model describes suppression of $Z_{xyk}$ ). + +We extend the hGRU formulation in two important ways. First, like Mély et al. (2018), we introduce non-negativity. This constraint was critical for Mély et al. (2018) to explain contextual illusions, and as we describe below, was also important for our model. Second, we extend the circuit into a hierarchical model that can learn complex contour detection tasks. Recent neurophysiological work indicates that contextual effects emerge from both horizontal and top-down feedback (Chettih & Harvey, 2019).
Motivated by this, we develop versions of the circuit to simulate horizontal connections between units within a layer, and top-down connections between units in different layers. + +We call our module the feedback gated recurrent unit (fGRU). We describe the evolution of fGRU recurrent units in $\mathbf{H} \in \mathbb{R}^{X \times Y \times K}$ , which are influenced by non-negative feedforward encodings $\mathbf{Z} \in \mathbb{R}^{X \times Y \times K}$ (e.g., a convolutional layer's response to a stimulus) over discrete timesteps $\cdot [t]$ :

Stage 1:

$$
\mathbf{G}^{S} = \operatorname{sigmoid}\left(U^{S} * \mathbf{H}[t-1]\right) \quad \#\ \text{Compute channel-wise selection}
$$

$$
\mathbf{C}^{S} = W^{S} * (\mathbf{H}[t-1] \odot \mathbf{G}^{S}) \quad \#\ \text{Compute suppression interactions}
$$

$$
\mathbf{S} = \left[ \mathbf{Z} - \left[ (\alpha \mathbf{H}[t-1] + \mu)\, \mathbf{C}^{S} \right]_{+} \right]_{+}, \quad \#\ \text{Suppression of } \mathbf{Z}
$$

Stage 2:

$$
\mathbf{G}^{F} = \operatorname{sigmoid}\left(U^{F} * \mathbf{S}\right) \quad \#\ \text{Compute channel-wise recurrent updates}
$$

$$
\mathbf{C}^{F} = W^{F} * \mathbf{S} \quad \#\ \text{Compute facilitation interactions}
$$

$$
\tilde{\mathbf{H}} = \left[ \nu (\mathbf{C}^{F} + \mathbf{S}) + \omega (\mathbf{C}^{F} * \mathbf{S}) \right]_{+} \quad \#\ \text{Facilitation of } \mathbf{S}
$$

$$
\mathbf{H}[t] = \left(1 - \mathbf{G}^{F}\right) \odot \mathbf{H}[t-1] + \mathbf{G}^{F} \odot \tilde{\mathbf{H}}. \quad \#\ \text{Update recurrent state}
$$

Like the original circuit, the fGRU has separate stages for suppression (S) and facilitation (H).
In the first stage, the feedforward encodings $\mathbf{Z}$ are suppressed by non-negative interactions between units in $\mathbf{H}[t - 1]$ (an fGRU hidden state from the previous timestep). Suppressive interactions are computed with the kernel $W^{S} \in \mathbb{R}^{E \times E \times K \times K}$ , where $E$ describes the spatial extent of horizontal connections on a single timestep. This kernel is convolved with a gated version of the persistent hidden state $\mathbf{H}[t - 1]$ . The gate activity $\mathbf{G}^{S}$ is computed by applying a sigmoid nonlinearity to a convolution of the kernel $U^{S} \in \mathbb{R}^{1 \times 1 \times K \times K}$ with $\mathbf{H}[t - 1]$ , which transforms its activity into the range [0, 1]. Additive and multiplicative forms of suppression are controlled by the parameters $\mu, \alpha \in \mathbb{R}^{K}$ , respectively. + +In the second stage, additive and multiplicative facilitation is applied to the instantaneous activity $\mathbf{S}$ . The kernel $W^{F} \in \mathbb{R}^{E \times E \times K \times K}$ controls facilitation interactions. Additive and multiplicative forms of facilitation are scaled by the parameters $\nu, \omega \in \mathbb{R}^{K}$ , respectively. A gate activity is also computed during this stage to update the persistent recurrent activity $\mathbf{H}$ . The gate activity $\mathbf{G}^{F}$ is computed by applying a sigmoid to a convolution of the kernel $U^{F} \in \mathbb{R}^{1 \times 1 \times K \times K}$ with $\mathbf{S}$ . This gate updates $\mathbf{H}[t]$ by interpolating $\mathbf{H}[t - 1]$ with the candidate activity $\tilde{\mathbf{H}}$ . After every timestep of processing, $\mathbf{H}[t]$ is taken as the fGRU output activity. As detailed in the following section, the fGRU output hidden state is either passed to the next convolutional layer (Fig. 1c, fGRU $(\ell) \to \mathrm{conv}(\ell + 1)$ ), or used to compute top-down connections (Fig. 1c, fGRU $(\ell + 1) \to \mathrm{fGRU}(\ell)$ ).
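The two stages above can be summarized compactly in code. Below is a minimal NumPy sketch of a single fGRU timestep (an illustration, not the released implementation): the convolutional kernels $W$ and $U$ are simplified to $K \times K$ channel-mixing matrices, the per-channel parameters to scalars, and the quadratic facilitation term is taken elementwise.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(x, 0.0)

def fgru_step(Z, H_prev, W_S, W_F, U_S, U_F,
              alpha=1.0, mu=0.5, nu=1.0, omega=1.0):
    """One fGRU timestep on activities of shape (X, Y, K).

    Kernels are simplified to (K, K) channel-mixing matrices; the paper
    uses spatial convolutions (E x E x K x K for W, 1 x 1 x K x K for U),
    and scalar gains stand in for the per-channel alpha, mu, nu, omega.
    """
    # Stage 1: gated suppression of the feedforward drive Z
    G_S = sigmoid(H_prev @ U_S)              # channel-wise selection gate
    C_S = (H_prev * G_S) @ W_S               # suppression interactions
    S = relu(Z - relu((alpha * H_prev + mu) * C_S))
    # Stage 2: facilitation of S, then gated update of the hidden state
    G_F = sigmoid(S @ U_F)                   # channel-wise update gate
    C_F = S @ W_F                            # facilitation interactions
    H_tilde = relu(nu * (C_F + S) + omega * (C_F * S))
    return (1.0 - G_F) * H_prev + G_F * H_tilde
```

With `H_prev` initialized to zeros, Stage 1 passes `Z` through unchanged on the first timestep; suppression and facilitation only take effect once the hidden state is non-zero.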
The fGRU has different configurations for learning horizontal connections between units within a layer or top-down connections between layers (Fig. 1b). These two configurations stem from changing the activities used for an fGRU's feedforward encodings and recurrent hidden state. "Horizontal connections" between units within a layer are learned by setting the feedforward encodings $\mathbf{Z}$ to the activity of a preceding convolutional layer, and setting the hidden state $\mathbf{H}$ to a persistent activity initialized as zeros (Fig. 1c, $\mathrm{conv}^{(\ell)}\rightarrow \mathrm{fGRU}^{(\ell)}$). "Top-down connections" between layers are learned by setting the fGRU feedforward encodings $\mathbf{Z}$ to the persistent hidden state $\mathbf{H}^{(\ell)}$ of an fGRU at layer $\ell$ in a hierarchical model, and the hidden state $\mathbf{H}$ to the persistent activity $\mathbf{H}^{(\ell +1)}$ of an fGRU one level up in the model hierarchy (Fig. 1c, $\mathrm{fGRU}^{(\ell +1)}\to \mathrm{fGRU}^{(\ell)}$). The functional interpretation of the top-down fGRU is that it first suppresses activity in the lower layer using the higher layer's recurrent horizontal activities, and then applies a kernel to the residue for facilitation, which allows for computations like interpolation, sharpening, or "filling in". Note that an fGRU for top-down connections does not have a persistent state of its own (it mixes high- and low-level persistent states), whereas an fGRU for horizontal connections does.

$\gamma$-Net Our main objective is to test how a model with the capacity for contextual illusions performs on natural image analysis. We do this by incorporating fGRUs into leading feedforward architectures for contour detection tasks, augmenting their feedforward processing with modules for learning feedback from horizontal and top-down connections (Fig. 1c).
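The horizontal and top-down fGRU configurations described above can be sketched as one timestep of processing. This toy reduction assumes all layers share one spatial resolution (the pooling, upsampling, and feature-registration steps of the real model are omitted) and replaces the learned fGRU with a fixed stand-in gate; `fgru` and `gamma_net_timestep` are illustrative names.

```python
import numpy as np

def fgru(Z, H):
    """Stand-in for the learned fGRU update: a fixed gated mix of
    feedforward encodings Z and hidden state H (the real module
    applies learned suppression and facilitation kernels)."""
    return 0.5 * H + 0.5 * np.maximum(Z, 0.0)

def gamma_net_timestep(conv_feats, H_states):
    """One timestep: a bottom-up pass of horizontal fGRUs pairing conv
    encodings with persistent states, then a top-down pass in which an
    fGRU treats the lower state as Z and the higher state as H.
    conv_feats / H_states: lists of (X, Y, K) arrays, low to high."""
    L = len(H_states)
    for l in range(L):                # horizontal: conv(l) -> fGRU(l)
        H_states[l] = fgru(Z=conv_feats[l], H=H_states[l])
    for l in range(L - 2, -1, -1):   # top-down: fGRU(l+1) -> fGRU(l)
        H_states[l] = fgru(Z=H_states[l], H=H_states[l + 1])
    return H_states
```

Only the horizontal updates read and write persistent states; the top-down step mixes two existing states, matching the note above that top-down fGRUs carry no persistent state of their own.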
We refer to the resulting hierarchical models as $\gamma$-Nets because information flows in a loop that resembles a $\gamma$: image encodings make a full bottom-up to top-down cycle through the architecture on every timestep, until dense predictions are read out from the lowest-level recurrent layer of the network (thus, information flows in at the bottom of the hierarchy, loops through the network, and flows out from the bottom of the hierarchy). In our experiments we convert leading architectures for two contour detection problems into $\gamma$-Nets: a VGG16 for BSDS500 (He et al., 2019), and a U-Net for detection of cell membranes in serial electron microscopy images (Lee et al., 2017). See Appendix A for an algorithmic description of the $\gamma$-Net.

# 4 CONTOUR DETECTION EXPERIMENTS

Overview We evaluated $\gamma$-Net performance on two contour detection tasks: object contour detection in natural images (BSDS500 dataset; Arbeláez et al., 2011) and cell membrane detection in serial electron microscopy (SEM) images of mouse cortex (Kasthuri et al., 2015) and mouse retina (Ding et al., 2016). Different $\gamma$-Net configurations were used on each task, with each building on the leading architecture for its respective dataset. All $\gamma$-Nets use 8 timesteps of recurrence and instance normalization (normalization controls vanishing gradients in RNN training; Ulyanov et al., 2016; Cooijmans et al., 2017; see Appendix A for details). The $\gamma$-Nets were trained with Tensorflow and NVIDIA Titan RTX GPUs using single-image batches and the Adam optimizer (Kingma & Ba, 2014; dataset-specific learning rates are detailed below).
Models were trained with early stopping, which terminated training if the validation loss did not drop for 50 straight epochs. The weights with the best validation-set performance were used for testing.

![](images/9e6bd6c96c7ffd838670add9f8bccf9eaad7ef2c74d3f9f9ca614c0cd8cfdc7b.jpg)
Figure 2: Object contour detection in BSDS500 images. (a) The $\gamma$-Net is on par with humans and the state-of-the-art for contour detection (BDCN; He et al. 2019) when trained on the entire training dataset with augmentations. In this regime, it also outperforms the published F1 ODS of all other approaches to BSDS500 (LPCB: Deng et al. 2018; RCF: Liu et al. 2019; CED: Wang et al. 2019; DB: Kokkinos 2015; HED: Xie & Tu 2017; and OEF: Hallman & Fowlkes 2015). The $\gamma$-Net outperforms the BDCN when trained on $5\%$, $10\%$, or $100\%$ of the dataset. Performance is reported as F1 ODS (Arbeláez et al., 2011). (b) BDCN and $\gamma$-Net predictions after training on the different proportions of BSDS500 images. (c) The evolution of $\gamma$-Net predictions across timesteps of processing. Predictions from a $\gamma$-Net trained on $100\%$ of BSDS are depicted: its initially coarse detections are refined over processing timesteps to select figural object contours.

Model evaluation We evaluated models in two ways.
First, we validated them against state-of-the-art models for each contour dataset using standard benchmarks. As discussed below, we verified that our implementations of these state-of-the-art models matched published performance. Second, we tested sample efficiency after training on subsets of the contour datasets without augmentations. Sample efficiency compares the inductive biases of different architectures, and is critical for understanding how the capacity for exhibiting contextual illusions influences performance. We report model "wall time" in Appendix A; however, the focus of our work is on sample efficiency rather than the hardware/software-level optimizations that influence wall time. Model performance is evaluated as the F-measure at the Optimal Dataset Scale across images after non-maximum suppression post-processing (F1 ODS; Arbeláez et al., 2011), as is standard for contour detection tasks.

# 4.1 OBJECT CONTOUR DETECTION IN NATURAL IMAGES

Dataset We trained models for object contour detection on the BSDS500 dataset (Arbeláez et al., 2011). The dataset contains object-contour annotations for 500 natural images, which are split into train (200), validation (100), and test (200) sets.

Architecture details The leading approach to BSDS500 is the Bi-Directional Cascade Network (BDCN; He et al. 2019), which places multi-layer readout modules at every processing block in an ILSVRC12-pretrained VGG16, and optimizes a loss that balances contributions from each of these readouts to achieve better scale tolerance in its final predictions. All leading deep learning approaches to BSDS500 begin with a VGG16 pretrained on ILSVRC12 object recognition (He et al., 2019).

![](images/6a3c960b2cdb0abba00593481006d849228e629dc3f43027b888bfee8b70c839.jpg)
Figure 3: Membrane prediction in serial electron microscopy (SEM) images of neural tissue. (a) The $\gamma$-Net outperforms a state-of-the-art U-Net (Lee et al., 2017) for membrane detection when trained on SEM image datasets of mouse visual cortex (SNEMI3D) and retina (Ding et al., 2016). Performance is F1 ODS. (b) Network predictions after training on different proportions of each dataset. (c) The evolution of $\gamma$-Net predictions across timesteps of processing after training on $100\%$ of the datasets. $\gamma$-Net learns to iteratively suppress contours belonging to internal cell features, such as organelles, which should not be annotated as contours for the purpose of neural tissue reconstruction.

Our $\gamma$-Net for BSDS500 begins with the same ILSVRC12-pretrained VGG16 as the BDCN. fGRUs were introduced for learning horizontal (conv2_2, conv3_3, conv4_3, conv5_3) and top-down connections (conv5_3 $\rightarrow$ conv4_3, conv4_3 $\rightarrow$ conv3_3, and conv3_3 $\rightarrow$ conv2_2). To pass top-down activities between layers, higher-level activities were resized to match lower-level ones, then passed through two layers of $1 \times 1$ convolutions with linear rectification, which registered feature representations from higher-to-lower layers.
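A minimal sketch of this resize-and-register step, assuming nearest-neighbor resizing and rectification after each $1 \times 1$ convolution (the text does not pin down either choice); all function and parameter names are illustrative.

```python
import numpy as np

def resize_nearest(x, out_h, out_w):
    """Nearest-neighbor upsampling of an (H, W, K) activity map."""
    h, w, _ = x.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return x[rows][:, cols]

def register_topdown(higher, lower_shape, W1, W2):
    """Resize a higher-level activity to a lower layer's spatial size,
    then apply two 1x1 convolutions with linear rectification to map
    its features into the lower layer's representation."""
    out_h, out_w, _ = lower_shape
    x = resize_nearest(higher, out_h, out_w)
    x = np.maximum(np.einsum('hwk,kj->hwj', x, W1), 0.0)  # 1x1 conv + ReLU
    x = np.maximum(np.einsum('hwk,kj->hwj', x, W2), 0.0)  # 1x1 conv + ReLU
    return x
```

The registered activity then serves as the hidden-state input to a top-down fGRU at the lower layer.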
The $\gamma$-Net was trained with learning rates of $3e^{-4}$ on its randomly initialized fGRU weights and $1e^{-5}$ on its VGG-initialized weights. Training time and parameter counts for this $\gamma$-Net and the BDCN are given in Appendix Table S1.

In contrast to the BDCN (and other recent approaches to BSDS) with multiple read-outs and engineered loss functions, we take $\gamma$-Net predictions as a linear transformation of the lowest fGRU layer in its feature hierarchy, and optimize the model with binary cross entropy between per-pixel predictions and labels (following the method of Xie & Tu, 2017). This approach works because the $\gamma$-Net uses feedback to merge high- and low-level image feature representations at the bottom of its feature hierarchy, resembling classic "V1 scratchpad" hypotheses for computation in visual cortex (Gilbert & Sigman, 2007; Lee & Mumford, 2003). We compared $\gamma$-Nets with a BDCN implementation released by the authors, which was trained using the routine described in He et al. $(2019)^{\dagger}$.

Results We validated the $\gamma$-Net against the BDCN after training on a full and augmented BSDS training set (Xie & Tu, 2017). The $\gamma$-Net performed similarly in F1 ODS (0.802) to the BDCN (0.806) and humans (0.803), and outperformed all other approaches to BSDS (Fig. 2a; Deng et al. 2018; Xie & Tu 2017; Hallman & Fowlkes 2015; Kokkinos 2015; Wang et al. 2019; Liu et al. 2019).

Our hypothesis is that contextual illusions reflect routines for efficient scene analysis, and that the capacity for exhibiting such illusions improves model sample efficiency. Consistent with this hypothesis, the $\gamma$-Net was up to an order of magnitude more efficient than the BDCN. A $\gamma$-Net trained on $5\%$ of BSDS performs on par with a BDCN trained on $10\%$ of the BSDS, and a $\gamma$-Net trained on $10\%$ of the BSDS performs on par with a BDCN trained on $100\%$ of BSDS.
Unlike the BDCN, the $\gamma$-Net trained on $10\%$ of BSDS outperformed the state of the art for non-deep-learning-based models (Hallman & Fowlkes, 2015), and nearly matched the performance of the popular HED trained with augmentations (Xie & Tu, 2017). We also evaluated lesioned versions of the $\gamma$-Net to measure the importance of its horizontal/top-down connections, recurrence, non-negativity, and different specifications of its fGRU recurrent modules for detecting contours in BSDS500 (Fig. S4).

We examined the recurrent feedback strategies learned by the $\gamma$-Net for object contour detection by visualizing its performance on every timestep of processing. This was done by passing its activity at a timestep through the final linear readout layer. The $\gamma$-Net iteratively refines its initially coarse contour predictions. For example, the top row of Fig. 2c shows that the $\gamma$-Net selectively enhances the boundaries around the runners' bodies while suppressing the feature activities created by the crowd. In the next row of predictions, salient zebra stripes are gradually suppressed in favor of body contours (see Fig. S5 for $\gamma$-Net prediction dynamics and its tendency towards steady-state solutions).

# 4.2 CELL MEMBRANE DETECTION

Datasets "Connectomics" involves extracting the wiring diagrams of neurons from serial electron microscope (SEM) imaging data, and is an important step towards understanding the algorithms of brains (Briggman & Bock, 2012). CNNs can automate this procedure by segmenting neuron membranes in SEM images. Large-scale challenges like SNEMI3D (Kasthuri et al., 2015), which contains annotated images of mouse cortex, have helped drive progress towards automation. Here, we test models on membrane detection in SNEMI3D and a separate SEM dataset of mouse retina ("Ding" from Ding et al. 2016).
We split both datasets into training (80 images for SNEMI3D and 307 images for Ding) and test sets (20 images for SNEMI3D and 77 images for Ding). Next, we generated versions of each training dataset with $100\%$, $10\%$, or $5\%$ of the images, as well as versions of the full datasets augmented with random left-right and up-down flips ($A+100\%$).

Architecture details The state of the art on SNEMI3D is a U-Net variant (Ronneberger et al., 2015), which uses a different depth and number of feature maps at every layer, and introduces features like residual connections (Lee et al., 2017). We developed a $\gamma$-Net for cell membrane segmentation which resembled this U-Net variant (Lee et al., 2017; see Appendix B for details). These $\gamma$-Nets were trained from a random initialization with a learning rate of $1e^{-2}$ to minimize class-balanced per-pixel binary cross-entropy, and compared to the U-Net from Lee et al. (2017). Training time and parameter counts for these models are given in Appendix Table S2.

Results The $\gamma$-Net and U-Net of Lee et al. (2017) performed similarly when trained on fully augmented versions of both the SNEMI3D and Ding datasets (Fig. 3a, $A+100\%$). However, $\gamma$-Nets were consistently more sample efficient than U-Nets in every reduced-dataset condition (Fig. 3b).

We visualized the recurrent membrane detection strategies of $\gamma$-Nets trained on $100\%$ of both datasets. Membrane predictions were obtained by passing neural activity at every timestep through the final linear readout. The $\gamma$-Net prediction timecourse indicates that it learns a complex visual strategy for membrane detection: it gathers a coarse "gist" of membranes in the first timestep of processing, and iteratively refines these predictions by enhancing cell boundaries and clearing out spurious contours of elements like cell organelles (Fig. 3c).

# 5 BUGS OR BYPRODUCTS OF OPTIMIZED NEURAL COMPUTATIONS?
Orientation-tilt illusion Like the neural circuit of Mély et al. (2018), fGRU modules are designed with an asymmetry in their ability to suppress and facilitate feedforward input (see Section 3). This potentially gives $\gamma$-Nets (which contain fGRU modules) the capacity to exhibit similar contextual illusions as humans. Here, we tested whether a $\gamma$-Net trained on contour detection in natural images exhibits an orientation-tilt illusion.

![](images/e15a2458ea6d61558cf4c772f544b61b07410233db686a88f4440a1ffadef8c5.jpg)
Figure 4: Optimizing for contour detection produces an orientation-tilt illusion in the $\gamma$-Net. The orientation-tilt illusion (O'Toole & Wenderoth, 1977) describes how perception of the center grating's orientation is repulsed from the surround when the two are in similar orientations (e.g., $\approx 30^{\circ}$), and attracted to the surround when the two are in dissimilar (but not orthogonal) orientations (e.g., $\approx 60^{\circ}$). We test for the orientation-tilt illusion in models trained on BSDS500 contour detection. Model weights were fixed and new layers were trained to decode the orientation of grating stimuli of a single orientation. These models were tested on grating stimuli in which surround orientations were systematically varied w.r.t. the center (exemplars depicted in the left panel). The $\gamma$-Net but not the BDCN had an orientation-tilt illusion. Gray curves depict a fourth-order polynomial fit.

We tested for this illusion by training orientation decoders on the outputs of models trained on the full BSDS500 dataset. These decoders were trained on 100K grating images, in which the center and surround orientations were the same (Fig. S2a). These images were sampled from all orientations and spatial frequencies.
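The grating stimuli and the decoder's orientation read-out can be sketched together in NumPy. Two details are assumed for illustration: gratings are rendered as a single fixed-phase sinusoid with a hard circular center/surround boundary, and, because orientation is 180°-periodic, the decoder targets are taken to be $\sin 2\theta$ and $\cos 2\theta$, recovered with a halved `atan2` (the text says only "sine and cosine of grating orientation"). All names are hypothetical.

```python
import numpy as np

def tilt_stimulus(size, radius, theta_center, theta_surround, freq=0.2):
    """Sinusoidal grating whose central disk has orientation theta_center
    and whose surround has orientation theta_surround (radians)."""
    c = (size - 1) / 2.0
    y, x = np.mgrid[0:size, 0:size] - c
    def grating(theta):
        return np.sin(2 * np.pi * freq * (x * np.cos(theta) + y * np.sin(theta)))
    center = (x ** 2 + y ** 2) <= radius ** 2
    return np.where(center, grating(theta_center), grating(theta_surround))

def decode_orientation(sin2t, cos2t):
    """Recover an orientation in [0, pi) from predicted sin(2*theta) and
    cos(2*theta); the doubled angle handles 180-degree periodicity."""
    return (np.arctan2(sin2t, cos2t) / 2.0) % np.pi

def tilt_effect(pred_theta, true_center):
    """Signed angular error of the decoded center orientation, wrapped to
    [-pi/2, pi/2); repulsion and attraction appear as systematic nonzero
    errors as a function of the center-surround orientation offset."""
    return (pred_theta - true_center + np.pi / 2) % np.pi - np.pi / 2
```

Sweeping the surround orientation while holding the center fixed, and plotting `tilt_effect` against the offset, is the measurement underlying Fig. 4.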
The decoders had two $1 \times 1$ convolution layers and an intervening linear rectification to map model outputs into the sine and cosine of grating orientation. Both the $\gamma$-Net and BDCN achieved nearly perfect performance on a held-out validation set of gratings.

We tested these models on 1K grating stimuli generated with different center-surround grating orientations (following the method of O'Toole & Wenderoth 1977; Fig. S2b), and recorded model predictions for the center pixel in these images (detailed in Appendix C). Surprisingly, $\gamma$-Net encodings of these test images exhibited a similar tilt illusion to that found in human perceptual data (Fig. 4b). There was repulsion when the central and surround gratings had similar orientations, and attraction when these gratings were dissimilar. This illusory phenomenon cannot be explained by accidental factors such as aliasing between the center and the surround, which would predict the opposite pattern, suggesting that the illusion emerges from the model's strategy for contour detection. In contrast, the BDCN, which relies only on feedforward processing, did not exhibit the effect (Fig. 4b). We also tested lesioned $\gamma$-Nets for the orientation-tilt illusion, but only the full $\gamma$-Net and a version lesioned to have only (spatially broad) horizontal connections replicated the entire illusion (though this lesioned model performed worse on contour detection than the full model; Fig. S4). These findings are consistent with the work of Mély et al. (2018), who used spatially broad horizontal connections to model contextual illusions, as well as the more recent neurophysiological work of Chettih & Harvey (2019), who explained contextual effects through both horizontal and top-down interactions.

Correcting the orientation-tilt illusion What visual strategies does the orientation-tilt illusion reflect?
![](images/6bae304f6807444bc5e582c24735d1a0ff50796f43e49ef4da16fc3426c9207d.jpg)
Figure 5: Contour detection performance of the $\gamma$-Net depends on an orientation-tilt illusion. (a) F1 ODS scores on BSDS500 test images (200 total) from $\gamma$-Nets after correcting an orientation-tilt illusion ("illusion-corrected") or not ("domain-transfer control"). The domain-transfer control $\gamma$-Net was trained to decode the orientation of single-grating stimuli (blue), and the illusion-corrected $\gamma$-Net was trained to decode the orientation of the central grating in illusory grating stimuli (red). Readouts for decoding orientation were fixed, and $\gamma$-Net weights were allowed to change during training. Per-image F1 ODS was significantly greater for the domain-transfer control $\gamma$-Net than the illusion-corrected $\gamma$-Net. (b) The illusion-corrected $\gamma$-Net was biased towards low-level contours, whereas the domain-transfer control $\gamma$-Net was biased towards contours on object boundaries.

We investigated this question by taking the $\gamma$-Net and fixed orientation decoder described above, and training them further to decode the central grating orientation of tilt-illusion stimuli (Fig. 5a, "illusion-corrected" in red). Importantly, $\gamma$-Net weights were optimized during this training, but the orientation decoder was not. Thus, improving performance for decoding the orientation of these illusory stimuli comes at the expense of changing $\gamma$-Net weights that were
As a control, another $\gamma$ -Net was trained with the same routine to decode the orientation of full-image gratings, for which there is no illusion (Fig. 5a, "domain-transfer control" in blue; see Fig. S6 for training performance of both models). Both models were tested on the BSDS500 test set. + +Correcting the orientation-tilt illusion of a $\gamma$ -Net significantly hurts its object contour detection performance (Fig. 5a; 1-sample $T$ -test of the per-image ODS F1 difference between models, $T(199) = 13.570$ , $p < 0.001$ ). The illusion reflects $\gamma$ -Net strategies for selecting object-boundaries rather than low-level contours (Fig. 5b; Fig. S7 for more examples). + +# 6 CONCLUSION + +Why do we experience visual illusions? Our experiments indicate that one representative contextual illusion, the orientation-tilt illusion, is a consequence of neural strategies for efficient scene segmentation. We directly tested whether this contextual illusion is a bug or a byproduct of optimized neural computations using the $\gamma$ -net: a dense prediction model with recurrent dynamics inspired by neural circuits in visual cortex. On separate contour detection tasks, the $\gamma$ -Net performed on par with state-of-the-art models when trained in typical regimes with full augmented datasets, but was far more efficient than these models when trained on sample-limited versions of the same datasets. At the same time, the $\gamma$ -Net exhibited an orientation-tilt illusion which biased it towards high-level object-boundary contours over low-level edges, and its performance was reduced when it was trained to correct its illusion. + +While $\gamma$ -Nets are more sample efficient than leading feedforward models for contour detection, they also take much more "wall-time" to train than these feedforward models. 
Learning algorithms and GPU optimizations for RNNs lag behind their feedforward counterparts in computational efficiency, raising the need for more efficient approaches to training hierarchical RNNs like $\gamma$-Nets in order to unlock their full potential.

More generally, our work demonstrates a novel synergy between artificial vision and vision neuroscience: we demonstrated that circuit-level insights from biology can improve the sample efficiency of deep learning models. The neural circuit that inspired the fGRU module explained biological illusions in color, motion, and depth processing (Mély et al., 2018), and we suspect that $\gamma$-Nets will have similar success in learning sample-efficient strategies – and exhibiting contextual illusions – in these domains.

# REFERENCES

P. Arbeláez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 33(5):898-916, May 2011.
K. L. Briggman and D. D. Bock. Volume electron microscopy for neuronal circuit reconstruction. Curr. Opin. Neurobiol., 22(1):154-161, February 2012.
S. N. Chettih and C. D. Harvey. Single-neuron perturbations reveal feature-specific competition in V1. Nature, 567(7748):334-340, March 2019.
T. Cooijmans, N. Ballas, C. Laurent, Ç. Gülçehre, and A. Courville. Recurrent batch normalization. In International Conference on Learning Representations, 2017.
R. Deng, C. Shen, S. Liu, H. Wang, and X. Liu. Learning to predict crisp boundaries. In Computer Vision - ECCV 2018, pp. 570-586. Springer International Publishing, 2018.
H. Ding, R. G. Smith, A. Poleg-Polsky, J. S. Diamond, and K. L. Briggman. Species-specific wiring for direction selectivity in the mammalian retina. Nature, 535(7610):105-110, July 2016.
D. J. Felleman and D. C. Van Essen. Distributed hierarchical processing in the primate cerebral cortex. Cereb. Cortex, 1(1):1-47, 1991.
A. M. Fyall, Y. El-Shamayleh, H. Choi, E. Shea-Brown, and A. Pasupathy.
Dynamic representation of partially occluded objects in primate prefrontal and visual cortex. eLife, 6, September 2017.
R. Geirhos, C. R. M. Temme, J. Rauber, H. H. Schütt, M. Bethge, and F. A. Wichmann. Generalisation in humans and deep neural networks. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 7549-7561. Curran Associates, Inc., 2018.
D. George, W. Lehrach, K. Kansky, M. Lázaro-Gredilla, C. Laan, B. Marthi, X. Lou, Z. Meng, Y. Liu, H. Wang, A. Lavin, and D. S. Phoenix. A generative vision model that trains with high data efficiency and breaks text-based CAPTCHAs. Science, 358(6368), December 2017.
C. D. Gilbert and M. Sigman. Brain states: top-down influences in sensory processing. Neuron, 54(5):677-696, June 2007.
R. H. Hahnloser, R. Sarpeshkar, M. A. Mahowald, R. J. Douglas, and H. S. Seung. Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature, 405(6789):947-951, June 2000.
S. Hallman and C. C. Fowlkes. Oriented edge forests for boundary detection. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1732-1740, June 2015.
J. He, S. Zhang, M. Yang, Y. Shan, and T. Huang. Bi-Directional cascade network for perceptual edge detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3828-3837, 2019.
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
J. C. Heck and F. M. Salem. Simplified minimal gated unit variations for recurrent neural networks. In 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS), pp. 1593-1596, August 2017.
S. Hochstein and M. Ahissar. View from the top: Hierarchies and reverse hierarchies in the visual system. Neuron, 36(5):791-804, 2002.
M.
Januszewski and V. Jain. Segmentation-Enhanced CycleGAN. February 2019.
H. Kafaligonul, B. G. Breitmeyer, and H. Öğmen. Feedforward and feedback processes in vision. Front. Psychol., 6:279, March 2015.
K. Kar, J. Kubilius, K. Schmidt, E. B. Issa, and J. J. DiCarlo. Evidence that recurrent circuits are critical to the ventral stream's execution of core object recognition behavior. Nat. Neurosci., April 2019.
N. Kasthuri, K. J. Hayworth, D. R. Berger, R. L. Schalek, J. A. Conchello, S. Knowles-Barley, D. Lee, A. Vázquez-Reina, V. Kaynig, T. R. Jones, M. Roberts, J. L. Morgan, J. C. Tapia, H. S. Seung, W. G. Roncal, J. T. Vogelstein, R. Burns, D. L. Sussman, C. E. Priebe, H. Pfister, and J. W. Lichtman. Saturated reconstruction of a volume of neocortex. Cell, 162(3):648-661, July 2015.
S. W. Keemink and M. C. W. van Rossum. A unified account of tilt illusions, association fields, and contour detection based on elastica. Vision Res., 126:164-173, September 2016.
T. C. Kietzmann, C. J. Spoerer, L. Sorensen, and others. Recurrence required to capture the dynamic computations of the human ventral visual stream. arXiv preprint, 2019.
J. K. Kim, M. Ricci, and T. Serre. Not-So-CLEVR: learning same-different relations strains feedforward neural networks. Interface Focus theme issue on "Understanding images in biological and computer vision", 2018.
D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
P. C. Klink, B. Dagnino, M.-A. Gariel-Mathis, and P. R. Roelfsema. Distinct feedforward and feedback effects of microstimulation in visual cortex reveal neural mechanisms of texture segregation. Neuron, June 2017.
I. Kokkinos. Pushing the boundaries of boundary detection using deep learning. arXiv preprint arXiv:1511.07386, 2015.
N. Kriegeskorte. Deep neural networks: A new framework for modeling biological vision and brain information processing. Annu Rev Vis Sci, 1:417-446, November 2015.
B. M. Lake, R.
Salakhutdinov, and J. B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, December 2015.
K. Lee, J. Zung, P. Li, V. Jain, and H. Sebastian Seung. Superhuman accuracy on the SNEMI3D connectomics challenge. In Neural Information Processing Systems, 2017.
T. S. Lee and D. Mumford. Hierarchical bayesian inference in the visual cortex. J. Opt. Soc. Am. A Opt. Image Sci. Vis., 20(7):1434-1448, July 2003.
Q. Liao and T. Poggio. Bridging the gaps between residual learning, recurrent neural networks and visual cortex. April 2016.
D. Linsley, J. K. Kim, V. Veerabadran, C. Windolf, and T. Serre. Learning long-range spatial dependencies with horizontal gated recurrent units. In Neural Information Processing Systems (NIPS), 2018a.
D. Linsley, J. Kim, D. Berson, and T. Serre. Robust neural circuit reconstruction from serial electron microscopy with convolutional recurrent networks. November 2018b.
D. Linsley, J. Kim, V. Veerabadran, C. Windolf, and T. Serre. Learning long-range spatial dependencies with horizontal gated recurrent units. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 152-164. Curran Associates, Inc., 2018c.
D. Linsley, D. Shiebler, S. Eberhardt, and T. Serre. Learning what and where to attend with humans in the loop. In International Conference on Learning Representations, 2019.
Y. Liu, M.-M. Cheng, X. Hu, J.-W. Bian, L. Zhang, X. Bai, and J. Tang. Richer convolutional features for edge detection. IEEE Trans. Pattern Anal. Mach. Intell., 41(8):1939-1946, August 2019.
W. Lotter, G. Kreiman, and D. Cox. Deep predictive coding networks for video prediction and unsupervised learning. May 2016.
D. A. Mély, D. Linsley, and T. Serre. Complementary surrounds explain diverse contextual phenomena across visual modalities. Psychol. Rev., 2018.
M. C. Mozer.
Induction of multiscale temporal structure. In J. E. Moody, S. J. Hanson, and R. P. Lippmann (eds.), Advances in Neural Information Processing Systems 4, pp. 275-282. Morgan-Kaufmann, 1992.
A. Nayebi, D. Bear, J. Kubilius, K. Kar, S. Ganguli, D. Sussillo, J. J. DiCarlo, and D. L. Yamins. Task-Driven convolutional recurrent models of the visual system. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 5295-5306. Curran Associates, Inc., 2018.
J. Nunez-Iglesias, R. Kennedy, T. Parag, J. Shi, and D. B. Chklovskii. Machine learning of hierarchical clustering to segment 2D and 3D images. PLoS One, 8(8):e71715, August 2013.
R. C. O'Reilly, D. Wyatte, S. Herd, B. Mingus, and D. J. Jilk. Recurrent processing during object recognition. Front. Psychol., 4(April):1-14, 2013.
B. O'Toole and P. Wenderoth. The tilt illusion: repulsion and attraction effects in the oblique meridian. Vision Res., 17(3):367-374, 1977.
P. J. Phillips, A. N. Yates, Y. Hu, C. A. Hahn, E. Noyes, K. Jackson, J. G. Cavazos, G. Jeckeln, R. Ranjan, S. Sankaranarayanan, J.-C. Chen, C. D. Castillo, R. Chellappa, D. White, and A. J. O'Toole. Face recognition accuracy of forensic examiners, superrecognizers, and face recognition algorithms. Proc. Natl. Acad. Sci. U. S. A., 115(24):6171-6176, June 2018.
B. Recht, R. Roelofs, L. Schmidt, and V. Shankar. Do CIFAR-10 classifiers generalize to CIFAR-10? June 2018.
P. R. Roelfsema. Cortical algorithms for perceptual grouping. Annu. Rev. Neurosci., 29:203-227, January 2006.
O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015, pp. 234-241. Springer International Publishing, 2015.
A. Rosenfeld, R. Zemel, and J. K. Tsotsos. The elephant in the room. August 2018.
T. Serre. Deep learning: The good, the bad, and the ugly.
Annu Rev Vis Sci, 5:399-426, September 2019.
M. Siegel, T. J. Buschman, and E. K. Miller. Cortical information flow during flexible sensorimotor decisions. Science, 348(6241):1352-1355, June 2015.
C. J. Spoerer, P. McClure, and N. Kriegeskorte. Recurrent convolutional neural networks: A better model of biological object recognition. Front. Psychol., 8:1551, September 2017.
C. Tallec and Y. Ollivier. Can recurrent neural networks warp time? In International Conference on Learning Representations, 2018.
H. Tang, M. Schrimpf, W. Lotter, C. Moerman, A. Paredes, J. Ortega Caro, W. Hardesty, D. Cox, and G. Kreiman. Recurrent computations for visual pattern completion. Proc. Natl. Acad. Sci. U. S. A., 115(35):8835-8840, August 2018.
D. Ulyanov, A. Vedaldi, and V. Lempitsky. Instance normalization: The missing ingredient for fast stylization. July 2016.
E. Vorontsov, C. Trabelsi, S. Kadoury, and C. Pal. On orthogonality and learning recurrent networks with long term dependencies. January 2017.
Y. Wang, X. Zhao, Y. Li, and K. Huang. Deep crisp boundaries: From boundaries to higher-level tasks. IEEE Trans. Image Process., 28(3):1285-1298, March 2019.
H. Wen, K. Han, J. Shi, Y. Zhang, E. Culurciello, and Z. Liu. Deep predictive coding network for object recognition. February 2018.
D. Wyatte, D. J. Jilk, and R. C. O'Reilly. Early recurrent feedback facilitates visual object recognition under challenging conditions. Front. Psychol., 5:674, July 2014.
S. Xie and Z. Tu. Holistically-nested edge detection. Int. J. Comput. Vis., 125(1-3):3-18, December 2017.
D. L. K. Yamins and J. J. DiCarlo. Using goal-driven deep learning models to understand sensory cortex. Nat. Neurosci., 19(3):356-365, February 2016.

# A $\gamma$-NET

Algorithm 1 A generic $\gamma$-Net architecture. See Section 3 in the main text for the treatment.
Require: Image batch $\mathbf{I}$
1: $\mathbf{Z}^{0} \gets \mathbf{I}$
2: $\mathbf{H}^{\ell}[0] \gets \mathbf{0}$ for all $\ell$
3: for $t = 1$ to $T$ do
4: for $\ell = 0$ to $L$ do
5: $\mathbf{Z}^{\ell} = \mathrm{ReLU}(\mathrm{Conv}(\mathbf{Z}^{\ell}))$
6: $\mathbf{H}^{\ell}[t] = \mathrm{fGRU}_{H}(Z = \mathbf{Z}^{\ell}, H = \mathbf{H}^{\ell}[t - 1])$
7: if $\ell < L$ then
8: $\mathbf{Z}^{\ell + 1} = \mathrm{Maxpool}(\mathbf{H}^{\ell}[t])$
9: for $\ell = L$ to 0 do
10: if $\ell < L$ then
11: $\mathbf{H}^{\ell}[t] = \mathrm{fGRU}_{\mathrm{TD}}(Z = \mathbf{H}^{\ell}[t], H = \mathrm{ReLU}(\mathrm{Conv}(\mathrm{Upsample}(\mathbf{H}^{\ell + 1}[t]))))$
12: return $\mathbf{H}^{0}[t]$ (the hidden state of layer 0)

Connectomics The current standard for computer vision applications in connectomics is to train and test on separate partitions of the same tissue volume (Linsley et al., 2018b; Januszewski & Jain, 2019). This makes it difficult to develop new model architectures without overfitting to any particular dataset. For this reason, we first tuned our connectomics $\gamma$-Net and its hyperparameters on a synthetic dataset of cell images (data not shown).

In our experiments on synthetic data, we noted monotonically improved performance with increasing timesteps, which motivated our choice of building these models with as many timesteps as could fit into GPU memory (we carried this concept over to the design of our $\gamma$-Nets for BSDS). Thus, we settled on 8 timesteps for the $\gamma$-Nets. We also compared our use of fGRU modules for learning recurrent connections against the classic LSTM and GRU recurrent modules, and found that the $\gamma$-Net was far more effective on small datasets, which we take as evidence that its recurrent application of suppression separately from facilitation is a better inductive bias for learning contour tasks (see Hahnloser et al.
2000 for a theoretical discussion of how these operations can amount to a digital selection of task-relevant features through inhibition, followed by an analog amplification of the residuals through excitation).

We found that $\gamma$-Net cell membrane detection was improved when every bottom-up unit (from a typical convolution) was given a hidden state. As with gated recurrent architectures, these gates enable gradients to effectively skip timesteps of processing where they would pathologically decay. We do this by converting every convolutional layer (except the first and last) into a "minimal gated unit" (Heck & Salem, 2017). This conversion introduced two additional kernels to each convolutional layer, $W^{F}, W^{H} \in \mathbb{R}^{1 \times 1 \times K \times K}$, where the former was responsible for selecting channels from a persistent activity $\mathbf{H} \in \mathbb{R}^{X \times Y \times K}$ for processing on a given timestep and for updating the persistent activity. The latter kernel transformed a modulated version of the hidden state $\mathbf{H}$. This transformed hidden state was combined with a vanilla convolutional feedforward encoding, $\mathbf{Z} \in \mathbb{R}^{X \times Y \times K}$ (see Eq. 1 for the treatment). Weights in these layers were initialized with random orthogonal initializations, which aid the training of recurrent networks (Vorontsov et al., 2017).

$$
\begin{array}{l}
\mathbf{F} = \sigma(\mathbf{Z} + W^{F} * \mathbf{H}[t-1] + \mathbf{b}_{F}) \\
\mathbf{H}[t] = \mathbf{F} \odot \mathbf{H}[t-1] + (1 - \mathbf{F}) \odot \operatorname{ELU}\left(\mathbf{Z} + W^{H} * (\mathbf{F} \odot \mathbf{H}[t-1]) + \mathbf{b}_{H}\right) \tag{1}
\end{array}
$$

fGRU Here we describe additional details of the fGRU. fGRU kernels for computing suppressive and facilitative interactions have symmetric weights between channels, similar to the original circuit of Mély et al. (2018).
This means that the weight $W_{x_0 + \Delta x, y_0 + \Delta y, k_1, k_2}$ is equal to the weight $W_{x_0 + \Delta x, y_0 + \Delta y, k_2, k_1}$, where $x_0$ and $y_0$ denote the kernel center. This constraint means that there are nearly half as many learnable connections as in a normal convolutional kernel. In our experiments, this constraint improved performance.

BSDS (20M parameters) $\gamma$-Net

| Layer | Operation | Output shape |
| --- | --- | --- |
| conv-1-down | conv 3 × 3 / 1 | 320 × 480 × 64 |
| | conv 3 × 3 / 1 | 320 × 480 × 64 |
| | maxpool 2 × 2 / 2 | 160 × 240 × 64 |
| conv-2-down | conv 3 × 3 / 1 | 160 × 240 × 128 |
| | conv 3 × 3 / 1 | 160 × 240 × 128 |
| | fGRU-horizontal 3 × 3 / 1 | 160 × 240 × 128 |
| | maxpool 2 × 2 / 2 | 80 × 120 × 128 |
| conv-3-down | conv 3 × 3 / 1 | 80 × 120 × 256 |
| | conv 3 × 3 / 1 | 80 × 120 × 256 |
| | conv 3 × 3 / 1 | 80 × 120 × 256 |
| | fGRU-horizontal 3 × 3 / 1 | 80 × 120 × 256 |
| | maxpool 2 × 2 / 2 | 40 × 60 × 256 |
| conv-4-down | conv 3 × 3 / 1 | 40 × 60 × 512 |
| | conv 3 × 3 / 1 | 40 × 60 × 512 |
| | conv 3 × 3 / 1 | 40 × 60 × 512 |
| | fGRU-horizontal 3 × 3 / 1 | 40 × 60 × 512 |
| | maxpool 2 × 2 / 2 | 20 × 30 × 512 |
| conv-5-down | conv 3 × 3 / 1 | 20 × 30 × 512 |
| | conv 3 × 3 / 1 | 20 × 30 × 512 |
| | conv 3 × 3 / 1 | 20 × 30 × 512 |
| | fGRU-horizontal 3 × 3 / 1 | 20 × 30 × 512 |
| conv-4-up | instance-norm | 20 × 30 × 512 |
| | bilinear-resize | 40 × 60 × 512 |
| | conv 1 × 1 / 1 | 40 × 60 × 8 |
| | conv 1 × 1 / 1 | 40 × 60 × 512 |
| | fGRU-top-down 1 × 1 / 1 | 40 × 60 × 512 |
| conv-3-up | instance-norm | 40 × 60 × 512 |
| | bilinear-resize | 80 × 120 × 512 |
| | conv 1 × 1 / 1 | 80 × 120 × 16 |
| | conv 1 × 1 / 1 | 80 × 120 × 256 |
| | fGRU-top-down 1 × 1 / 1 | 80 × 120 × 256 |
| conv-2-up | instance-norm | 80 × 120 × 256 |
| | bilinear-resize | 160 × 240 × 256 |
| | conv 1 × 1 / 1 | 160 × 240 × 64 |
| | conv 1 × 1 / 1 | 160 × 240 × 128 |
| | fGRU-top-down 1 × 1 / 1 | 160 × 240 × 128 |
| Readout | instance-norm | 160 × 240 × 128 |
| | bilinear-resize | 320 × 480 × 128 |
| | conv 1 × 1 / 1 | 320 × 480 × 1 |

Table S1: $\gamma$-Net architecture for contour detection in BSDS natural images. For comparison, the BDCN, which is the state of the art on BSDS, contains $\approx 16.3$M parameters. When training on an NVIDIA GeForce RTX, this $\gamma$-Net takes 1.8 seconds per image, whereas the BDCN takes 0.1 seconds per image. "Down" refers to down-sampling layers, "up" refers to up-sampling layers, and "readout" maps model activities into per-pixel decisions. Kernels are described as kernel-height $\times$ kernel-width / stride. All convolutional layers except for the Readout use non-linearities. All non-linearities in this network are linear rectifications. Model predictions come from the fGRU hidden state of conv-2-down, which is resized to match the input image resolution and passed to the linear per-pixel readout.

Connectomics $\gamma$-Net (450K parameters)
| Layer | Operation | Output shape |
| --- | --- | --- |
| conv-1-down | conv 3 × 3 / 1 | 384 × 384 × 24 |
| | conv 3 × 3 / 1 | 384 × 384 × 24 |
| | fGRU-horizontal 9 × 9 / 1 | 384 × 384 × 24 |
| | maxpool 2 × 2 / 2 | 192 × 192 × 24 |
| conv-2-down | conv 3 × 3 / 1 | 192 × 192 × 28 |
| | fGRU-horizontal 7 × 7 / 1 | 192 × 192 × 28 |
| | maxpool 2 × 2 / 2 | 96 × 96 × 28 |
| conv-3-down | conv 3 × 3 / 1 | 96 × 96 × 36 |
| | fGRU-horizontal 5 × 5 / 1 | 96 × 96 × 36 |
| | maxpool 2 × 2 / 2 | 48 × 48 × 36 |
| conv-4-down | conv 3 × 3 / 1 | 48 × 48 × 48 |
| | fGRU-horizontal 3 × 3 / 1 | 48 × 48 × 48 |
| | maxpool 2 × 2 / 2 | 24 × 24 × 48 |
| conv-5-down | conv 3 × 3 / 1 | 24 × 24 × 64 |
| | fGRU-horizontal 1 × 1 / 1 | 24 × 24 × 64 |
| conv-4-up | transpose-conv 4 × 4 / 2 | 48 × 48 × 48 |
| | conv 3 × 3 / 1 | 48 × 48 × 48 |
| | instance-norm | 48 × 48 × 48 |
| | fGRU-top-down 1 × 1 / 1 | 48 × 48 × 48 |
| conv-3-up | transpose-conv 4 × 4 / 2 | 96 × 96 × 36 |
| | conv 3 × 3 / 1 | 96 × 96 × 36 |
| | instance-norm | 96 × 96 × 36 |
| | fGRU-top-down 1 × 1 / 1 | 96 × 96 × 36 |
| conv-2-up | transpose-conv 4 × 4 / 2 | 192 × 192 × 28 |
| | conv 3 × 3 / 1 | 192 × 192 × 28 |
| | instance-norm | 192 × 192 × 28 |
| | fGRU-top-down 1 × 1 / 1 | 192 × 192 × 28 |
| conv-1-up | transpose-conv 4 × 4 / 2 | 384 × 384 × 24 |
| | conv 3 × 3 / 1 | 384 × 384 × 24 |
| | instance-norm | 384 × 384 × 24 |
| | fGRU-top-down 1 × 1 / 1 | 384 × 384 × 24 |
| Readout | instance-norm | 384 × 384 × 24 |
| | conv 5 × 5 / 1 | 384 × 384 × 24 |

Table S2: $\gamma$-Net architecture for cell membrane detection in SEM images. A 2D version of the U-Net of Lee et al. (2017), which is the state of the art on SNEMI3D, contains $\approx 600$K parameters. When training on an NVIDIA GeForce RTX, this $\gamma$-Net takes 0.7 seconds per image, whereas the U-Net takes 0.06 seconds per image. "Down" refers to down-sampling layers, "up" refers to up-sampling layers, and "readout" maps model activities into per-pixel decisions. Kernels are described as kernel-height $\times$ kernel-width / stride. All fGRU non-linearities are linear rectifications, and all convolutional non-linearities are exponential linear units (ELU), as in Lee et al. (2017). All convolutional layers except for the Readout use non-linearities. Model predictions come from the fGRU hidden state of conv-1-down, which is passed to the linear readout.

While optimizing $\gamma$-Nets on synthetic cell image datasets, we found that a small modification of the fGRU input gate offered a modest improvement in performance. We realized that the input gate in the fGRU is conceptually similar to recently developed modules for feedforward self-attention in deep neural networks, specifically the global-and-local attention modules of Linsley et al. (2019), in which a non-linear transformation of a layer's activity is used to modulate the original activity. Here, we took inspiration from global-and-local attention and introduced an additional gate into the fGRU, resulting in the following modification of the main equations.
Stage 1:

$$
\mathbf{A}^{S} = U^{A} * \mathbf{H}[t-1]
$$

Compute channel-wise selection

$$
\mathbf{M}^{S} = U^{M} * \mathbf{H}[t-1]
$$

Compute spatial selection

$$
\mathbf{G}^{S} = \operatorname{sigmoid}\left(IN\left(\mathbf{A}^{S} \odot \mathbf{M}^{S*}\right)\right)
$$

Compute suppression gate

$$
\mathbf{C}^{S} = IN\left(W^{S} * (\mathbf{H}[t-1] \odot \mathbf{G}^{S})\right)
$$

Compute suppression interactions

$$
\mathbf{S} = \left[\mathbf{Z} - \left[(\alpha \mathbf{H}[t-1] + \mu)\,\mathbf{C}^{S}\right]_{+}\right]_{+},
$$

Additive and multiplicative suppression of $\mathbf{Z}$

Stage 2:

$$
\mathbf{G}^{F} = \operatorname{sigmoid}(IN(U^{F} * \mathbf{S}))
$$

Compute channel-wise recurrent updates

$$
\mathbf{C}^{F} = IN(W^{F} * \mathbf{S})
$$

Compute facilitation interactions

$$
\tilde{\mathbf{H}} = \left[\nu(\mathbf{C}^{F} + \mathbf{S}) + \omega(\mathbf{C}^{F} * \mathbf{S})\right]_{+}
$$

Additive and multiplicative facilitation of $\mathbf{S}$

$$
\mathbf{H}[t] = \left(1 - \mathbf{G}^{F}\right) \odot \mathbf{H}[t-1] + \mathbf{G}^{F} \odot \tilde{\mathbf{H}}
$$

Update recurrent state

$$
\text{where } IN(\mathbf{r}; \delta, \nu) = \nu + \delta \odot \frac{\mathbf{r} - \widehat{\mathbb{E}}[\mathbf{r}]}{\sqrt{\operatorname{Var}[\mathbf{r}] + \eta}}.
$$

This yields the global input gate activity $\mathbf{A}^S \in \mathbb{R}^{X \times Y \times K}$ and the local input gate activity $\mathbf{M}^{S} \in \mathbb{R}^{X \times Y \times 1}$, which are computed as filter responses between the previous hidden state $\mathbf{H}[t-1]$ and the global gate kernel $U^A \in \mathbb{R}^{1 \times 1 \times K \times K}$ and the local gate kernel $U^M \in \mathbb{R}^{3 \times 3 \times K \times 1}$, respectively.
Note that the latter filter learns a mapping into 1 dimension, and its output is therefore first tiled into $K$ dimensions, yielding $\mathbf{M}^{S*}$, before elementwise multiplication with $\mathbf{A}^S$. All results in the main text use this implementation.

Following Linsley et al. (2018a), we incorporated normalizations into the fGRU. Let $\mathbf{r} \in \mathbb{R}^d$ denote the vector of layer activations to be normalized. We chose instance normalization (Ulyanov et al., 2016) since it is independent of batch size, which was 1 for $\gamma$-Nets in our experiments. Instance normalization introduces two $d$-dimensional learned parameters, $\delta, \nu \in \mathbb{R}^d$, which control the scale and bias of normalized activities and are shared across timesteps of processing. In contrast, means and variances are computed on every timestep, since fGRU activities are not i.i.d. across timesteps. Elementwise multiplication is denoted by $\odot$, and $\eta$ is a regularization hyperparameter.

Learnable gates, such as those in the fGRU, are helpful for training RNNs, but other heuristics are also important for optimizing performance. We use several of these with $\gamma$-Nets, such as Chronos initialization of fGRU gate biases (Tallec & Ollivier, 2018) and random orthogonal initialization of kernels (Vorontsov et al., 2017). We initialized the learnable scale parameter $\delta$ of fGRU normalizations to 0.1, since values near 0 optimize the dynamic range of gradients passing through its sigmoidal gates (Cooijmans et al., 2017). Similarly, fGRU parameters for learning additive suppression/facilitation ($\mu$, $\nu$) were initialized to 0, and parameters for learning multiplicative inhibition/excitation ($\alpha$, $\omega$) were initialized to 0.1. Finally, when implementing top-down connections, we incorporated an extra skip connection.
The activity of layer $\ell$ was added to the fGRU-computed top-down interactions between layer $\ell$ and layer $\ell + 1$ . This additional skip connection improved the stability of training. + +![](images/f21efaa376745ae72b59f973ae7365650ce3822d704042ce6b1ab8d77ac20b08.jpg) +Volume + +![](images/4bbd82ad2eb7b077a23d43fe8a9cc05830d914a527b42f843a2849cfb53e0d42.jpg) +Neurites + +![](images/f0cf753b4cc65782a859bd50fd2ed4b95185e99fed3fafa8e761eefd48cae13c.jpg) +Membrane + +![](images/d28730d18158924661223c295cfc10b25f45f95b6ee019080571034e1576d8c7.jpg) +Segmentations +Figure S1: We trained the reference 3D U-Net from (Lee et al., 2017) on the SNEMI3D dataset to validate the implementation. Segmentations here are derived by watershedding and agglomeration with GALA (Nunez-Iglesias et al., 2013), resulting in "superhuman" ARAND (evaluated according to the SNEMI3D standard; lower is better) of 0.04, which is below the reported human-performance threshold of 0.06 and on par with the published result (see Table 1 in Lee et al. 2017, mean affinity agglomeration). + +# B MEMBRANE PREDICTION MODELS + +Our reference model for membrane prediction is the 3D U-Net of (Lee et al., 2017). This architecture consists of four encoder blocks (multiple convolutions and skip connections, pooling and subsampling), followed by four decoder blocks (transpose convolution and convolution). This U-Net uses spatial pooling between each of its encoder blocks to downsample the input, and transpose convolutions between each of its decoder blocks to upsample intermediate activities. We validated our implementation of this U-Net following the author's training routine, and were able to replicate their reported "superhuman" performance in cell segmentation on SNEMI3D (Fig. S1). + +The $\gamma$ -Net for connectomics resembles the U-Net architecture of Lee et al. (2017). 
This $\gamma$-Net replaces the blocks of convolutions and skip connections of that model with a single layer of convolution followed by an fGRU (as in the high-level diagram of Fig. 1c, $\mathrm{Conv}^{(\ell)} \rightarrow \mathrm{fGRU}^{(\ell)}$). In the encoder pathway, fGRUs store horizontal interactions between spatially neighboring units of the preceding convolutional layer in their hidden states. In the decoder pathway, the $\gamma$-Net introduces fGRUs that learn top-down connections between layers, connecting recurrent units from higher feature-processing layers to lower ones.

| Name | Tissue | Imaging | Resolution | Voxels (X/Y/Z/Volumes) |
| --- | --- | --- | --- | --- |
| SNEMI3D | Mouse cortex | mbSEM | 6 × 6 × 29 nm | 1024 × 1024 × 100 × 1 |
| Ding | Mouse retina | SBEM | 13.2 × 13.2 × 26 nm | 384 × 384 × 384 × 1 |

Table S3: SEM image volumes used in membrane prediction. SNEMI3D images and annotations are publicly available (Kasthuri et al., 2015), whereas the Ding dataset is a volume from Ding et al. (2016) that we annotated.

![](images/4d9f86fe229f0127920d76c698a78050af81d21201725de9463fa1ff90a38aa5.jpg)
Figure S2: Examples of tilt-illusion stimuli. (a) For training images, we sample over a range of sizes and wavelengths to generate single oriented grating patches. (b) Test images are obtained by sampling a full range of surround orientations, while fixing all other parameters, such as the size and frequency of the gratings as well as the orientation of the center gratings (at 45 degrees).

Key to the approach of Lee et al. (2017) is their use of a large set of random data augmentations applied to SEM image volumes, which simulate common noise and errors in SEM imaging: (i) misalignment between consecutive $z$-locations in each input image volume; (ii) partially or fully missing sections of the input image volumes; and (iii) blurring of portions of the image volume. Augmentations that simulated these types of noise, as well as random flips over the $xyz$-plane, rotations by $90^{\circ}$, and brightness and contrast perturbations, were applied to volumes following the settings of Lee et al. (2017). The model was trained using Adam (Kingma & Ba, 2014) and the learning-rate schedule of Lee et al. (2017), in which the optimizer step size was halved when validation loss stopped decreasing (up to four times). Training involved single-SEM-volume batches of $160 \times 160 \times 18$ (X/Y/Z), normalized to [0, 1]. As in Lee et al. (2017), models were trained to predict nearest-neighbor voxel affinities, as well as 3 other mid- to long-range voxel distances. Only nearest-neighbor affinities were used at test time.
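The plateau-halving schedule described above can be sketched as follows. This is a minimal illustration, not the authors' code; the `patience` criterion (how many non-improving validation checks count as a plateau) is an assumption, since the text only states that the step size was halved when validation loss stopped decreasing, up to four times.

```python
class HalvingSchedule:
    """Halve the learning rate when validation loss stops decreasing,
    up to a maximum number of halvings (four, per the text above).
    `patience` is an assumed hyperparameter: the number of consecutive
    non-improving validation checks that counts as a plateau."""

    def __init__(self, lr, max_halvings=4, patience=1):
        self.lr = lr
        self.max_halvings = max_halvings
        self.patience = patience
        self.best = float("inf")
        self.bad_evals = 0
        self.halvings = 0

    def step(self, val_loss):
        # Called once per validation check; returns the current lr.
        if val_loss < self.best:
            self.best = val_loss
            self.bad_evals = 0
        else:
            self.bad_evals += 1
            if self.bad_evals >= self.patience and self.halvings < self.max_halvings:
                self.lr *= 0.5
                self.halvings += 1
                self.bad_evals = 0
        return self.lr
```

With an initial step size of 1e-3 and a run of non-improving validation losses, the schedule bottoms out at 1e-3 / 16 after its four allowed halvings and then stays fixed.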
# C ORIENTATION-TILT ILLUSION IMAGE DATASET

Models were tested for a tilt illusion by first training on grating images of a single orientation, then testing on images in which a center grating had the same/different orientation as a surround grating. Each image in the training dataset consisted of a circular patch of oriented grating on a gray canvas of size $500 \times 500$ pixels. To ensure that the decoder successfully decoded orientation information from model activities, the training dataset incorporated a wide variety of grating stimuli with 4 randomly sampled image parameters: $r$, $\lambda$, $\theta$, and $\phi$. $r$ denotes the radius of the circle in which the oriented grating is rendered, and was sampled from a uniform distribution on the interval between 80 and 240 pixels; $\lambda$ specifies the wavelength of the grating pattern, and was sampled from a uniform distribution on the interval between 30 and 90 pixels; $\theta$ specifies the orientation of the gratings, and was uniformly sampled from all possible orientations; $\phi$ denotes the phase offset of the oriented gratings, and was also uniformly sampled from all possible values. The models' BSDS-trained weights were fixed and readout layers were trained to decode orientation at the center of each image (procedure described in the main text).

![](images/aa5bbb21dd65af23642e42adfdb121d5afe0646fa4936dfaf2ccbd487085480f.jpg)
(a) BDCN (BSDS)

![](images/4bd9705c5f872a7e29f4ac0a2b1fb5b36643ce74d0b3f865989463c99ce35556.jpg)

![](images/ff61aaa4387e64aa22fe675ecdec3e8d3bd87ff1ae3aa77d136b32c7c8641300.jpg)

![](images/8758c938e2fe492a4bc09ccc13de029821b96772c96781b323b90c1ba0cfdaae.jpg)
(b) U-Net (Connectomics)

Figure S3: Searching over learning rates did not rescue models from overfitting on small BSDS datasets. (a) Training and validation losses for training on different-sized subsets of BSDS500. (b) Performance after training with three different learning rates on the $5\%$ split. There is little difference in best validation performance between the three learning rates. (c) The full training and validation loss curves for the BDCN trained on $5\%$ of BSDS. The model overfits immediately. The model also overfit on the other dataset sizes, but because there was more data, this happened later in training.

This setup allowed us to tease apart the effects of the surround on the representation of orientation in the center by introducing separate surround regions in each test image, filled with gratings of the same/different orientations as the center (Fig. S2b). Each test image was generated with one additional parameter, $\Delta \theta$, which specified the orientation difference of the surround gratings with respect to the center orientation, $\theta$, and was sampled from a uniform distribution on the interval between $-90$ and $+90$ degrees. The radius of the surround grating is denoted by $r$ and was sampled from the same uniform distribution used for the training dataset. Center gratings were then rendered in a circle whose radius is one half that of the surround gratings.
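The stimulus construction described above can be sketched with NumPy. The function and its defaults are illustrative assumptions (the text specifies the canvas size, the parameter ranges, and that the center radius is half the surround radius, but not the exact grating profile or orientation convention), so treat this as a sketch rather than the authors' generator.

```python
import numpy as np


def grating(size=500, r=160, lam=60.0, theta=45.0, phi=0.0,
            surround_dtheta=None, bg=0.5):
    """Render a circular oriented-grating patch on a gray canvas.

    Training stimuli: a single disc of radius r with orientation theta.
    Test stimuli (surround_dtheta given): a surround disc of radius r
    with orientation theta + surround_dtheta, overlaid with a center
    disc of radius r / 2 at orientation theta. The sinusoidal profile
    and the orientation convention are illustrative assumptions."""
    y, x = np.mgrid[:size, :size] - size / 2.0
    dist = np.sqrt(x ** 2 + y ** 2)

    def oriented(angle_deg):
        a = np.deg2rad(angle_deg)
        # sinusoid with wavelength lam along the direction (cos a, sin a)
        return 0.5 + 0.5 * np.sin(2 * np.pi * (x * np.cos(a) + y * np.sin(a)) / lam + phi)

    img = np.full((size, size), bg)
    if surround_dtheta is None:
        img[dist <= r] = oriented(theta)[dist <= r]
    else:
        img[dist <= r] = oriented(theta + surround_dtheta)[dist <= r]
        img[dist <= r / 2] = oriented(theta)[dist <= r / 2]
    return img
```

Sampling `r` from U(80, 240), `lam` from U(30, 90), and `theta`/`phi` uniformly would reproduce the training distribution described above; passing `surround_dtheta` drawn from U(-90, 90) gives the test stimuli.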
Both horizontal-only models captured the repulsive regime of the orientation-tilt illusion when center and surround gratings were similar, but only the version with larger kernels also showed an attractive regime, when the center and surround orientations were dissimilar. Bottom row: A comparison of $\gamma$ -Nets with no recurrence (i.e., one timestep of processing), no constraint for non-negativity (see fGRU formulation in main text), no parameters for additive feedback ( $\mu$ and $\kappa$ ), and no parameters for multiplicative feedback ( $\gamma$ and $\omega$ ). Once again, the full $\gamma$ -Net outperformed each of these models. While none of these models showed the full orientation-tilt illusion, the $\gamma$ -Net without non-negativity constraints and the $\gamma$ -Net without additive feedback showed the repulsive regime of the orientation-tilt illusion. + +![](images/250b6eb24cddd88d75fb0f2422a9c087c29482bbba4ef770a6d0c410c7d6f0ea.jpg) + +![](images/d917a4c7e7c6ce11c5a8ca1b60b40c91603bcaff06b0c09fcd1543fa8502b519.jpg) + +![](images/1b2d3d272eb5a6182649c05428ec89ec828aff7084992ddea3ce50f65f9dc59d.jpg) + +![](images/813914f63507464efe13e5bbc88691465d4181fcd746eab7713c87604a5dcd92.jpg) + +![](images/1c40d4948c18dfa0432b41407b5637c7f6261679b055bd1395a65070a5708c5f.jpg) +Figure S5: $\gamma$ -Nets trained for contour detection learn to approach a steady-state solution. The processing timecourse of $\gamma$ -Net predictions on representative images from the BSDS500 test set. The L2 norm of per-pixel differences between predictions on consecutive timesteps (i.e., timestep 2 - timestep 1) approaches 0, indicating that the model converges towards steady state by the end of its processing timecourse. + +![](images/a8afc31d954168df757352d83033353fe8dea967d29c8235e0fd59cdd77d68e3.jpg) + +![](images/d6c5e9ddea30bc301267899839f5b2e3e74100ec48d68566200c3fd021e022e1.jpg) +Figure S6: Performance of $\gamma$ -Nets during experiments to correct an orientation-tilt illusion. 
The illusion-corrected model was trained to have veridical representations of the central grating in tilt-illusion stimuli. To control for potential detrimental effects of the training procedure per se, a control model ("domain-transfer control") was trained to decode orientations of single-grating stimuli. (a) Training causes the contour-detection performance of both models to drop. However, the illusion-corrected model's performance drops significantly more than the biased model's (see main text for hypothesis testing). The losses for both models converge towards 0 across training, indicating that both learned to decode the central orientations of their stimuli. (b) Contour-detection examples for the biased and bias-corrected models across steps of this training procedure.

![](images/9a217d1cdf1109c82ebfe61530d9ee5193ad542d7cf11acb75e417dfc215c7e8.jpg)

![](images/76c7c467c57c0f030e41c29c62dd812650fa779c24fee8f2e52db9daded604e9.jpg)

![](images/b12365775bb374f60e2ed03a56fcf40fd860f51ced0b3d879fe35f1751944b1d.jpg)
Figure S7: Differences in contour predictions for the illusion-corrected and domain-transfer control $\gamma$-Nets on BSDS500.
\ No newline at end of file diff --git a/recurrentneuralcircuitsforcontourdetection/images.zip b/recurrentneuralcircuitsforcontourdetection/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..c8b79030bb4d9cde60e72448d020038ac80378e9 --- /dev/null +++ b/recurrentneuralcircuitsforcontourdetection/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b9d5966a4b75a6c70ec9341692eabb5377816f340e4e01f5ef87af9409ea3916 +size 1498944 diff --git a/recurrentneuralcircuitsforcontourdetection/layout.json b/recurrentneuralcircuitsforcontourdetection/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..45a1a601b3a81aa41aa014808a7291277458f0cd --- /dev/null +++ b/recurrentneuralcircuitsforcontourdetection/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:24476a56f1cef88be1f449217a8a4f8f4dd57031d5a61d535e6758e86b88ef12 +size 742964 diff --git a/reducingtransformerdepthondemandwithstructureddropout/1c2abfdf-944b-4fa5-8a3f-7586904a5c4a_content_list.json b/reducingtransformerdepthondemandwithstructureddropout/1c2abfdf-944b-4fa5-8a3f-7586904a5c4a_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..7df029ee8e8c8c4ed0fb3bac5c49796d1af4f3b3 --- /dev/null +++ b/reducingtransformerdepthondemandwithstructureddropout/1c2abfdf-944b-4fa5-8a3f-7586904a5c4a_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b5818380980d3ca3ab088f9214a0492bf06d21ffd8646ac0720b9f6296a6a51a +size 89195 diff --git a/reducingtransformerdepthondemandwithstructureddropout/1c2abfdf-944b-4fa5-8a3f-7586904a5c4a_model.json b/reducingtransformerdepthondemandwithstructureddropout/1c2abfdf-944b-4fa5-8a3f-7586904a5c4a_model.json new file mode 100644 index 0000000000000000000000000000000000000000..34ba1cfcd6ad1eebd7384297674b98f016fa5d18 --- /dev/null +++ 
b/reducingtransformerdepthondemandwithstructureddropout/1c2abfdf-944b-4fa5-8a3f-7586904a5c4a_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f8f3513f14c8f5b13afae310f8748f9617f2cf9f36b3d9f88dd6be3f9df77da3
+size 111283
diff --git a/reducingtransformerdepthondemandwithstructureddropout/1c2abfdf-944b-4fa5-8a3f-7586904a5c4a_origin.pdf b/reducingtransformerdepthondemandwithstructureddropout/1c2abfdf-944b-4fa5-8a3f-7586904a5c4a_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..758df57784154908a688293cc4e4f52fb08bf1d7
--- /dev/null
+++ b/reducingtransformerdepthondemandwithstructureddropout/1c2abfdf-944b-4fa5-8a3f-7586904a5c4a_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:43aa216ee136004b4d9e78bad3af790d2f9e04664f5db19cdb6906ad07937908
+size 590795
diff --git a/reducingtransformerdepthondemandwithstructureddropout/full.md b/reducingtransformerdepthondemandwithstructureddropout/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..eb0def1dc5d8debb0e3d6d80ae35d19d57a89d2c
--- /dev/null
+++ b/reducingtransformerdepthondemandwithstructureddropout/full.md

# REDUCING TRANSFORMER DEPTH ON DEMAND WITH STRUCTURED DROPOUT

Angela Fan

Facebook AI Research/LORIA

angelafan@fb.com

Edouard Grave

Facebook AI Research

egrave@fb.com

Armand Joulin

Facebook AI Research

ajoulin@fb.com

# ABSTRACT

Overparameterized transformer networks have obtained state of the art results in various natural language processing tasks, such as machine translation, language modeling, and question answering. These models contain hundreds of millions of parameters, necessitating a large amount of computation and making them prone to overfitting. In this work, we explore LayerDrop, a form of structured dropout, which has a regularization effect during training and allows for efficient pruning at inference time.
In particular, we show that it is possible to select sub-networks of any depth from one large network without having to finetune them and with limited impact on performance. We demonstrate the effectiveness of our approach by improving the state of the art on machine translation, language modeling, summarization, question answering, and language understanding benchmarks. Moreover, we show that our approach leads to small BERT-like models of higher quality compared to training from scratch or using distillation. + +# 1 INTRODUCTION + +Transformer architectures (Vaswani et al., 2017) have become the dominant architecture in natural language processing, with state-of-the-art performance across a variety of tasks, including machine translation (Vaswani et al., 2017; Ott et al., 2018), language modeling (Dai et al., 2019; Baevski & Auli, 2018) and sentence representation (Devlin et al., 2018; Yang et al., 2019). Each of its layers contains millions of parameters accessed during the forward pass, making it computationally demanding in terms of memory and latency during both training and inference. In an ideal situation, we would be able to extract sub-networks — automatically and without finetuning — from this over-parameterized network, for any given memory or latency constraint, while maintaining good performance. In contrast, standard pruning or distillation methods follow a strategy that often includes a finetuning or retraining step, and the process must be repeated for each desired depth. + +In this work, we propose a novel approach to extract any sub-network without a post-hoc pruning process from over-parameterized networks. The core of our method is to sample small sub-networks from the larger model during training by randomly dropping model weights as in Dropout (Hinton et al., 2012) or DropConnect (Wan et al., 2013). This has the advantage of making the network robust to subsequent pruning. 
If well-chosen groups of weights are dropped simultaneously, the resulting small sub-networks can be very efficient. In particular, we drop entire layers to extract shallow models at inference time. Previous work (Huang et al., 2016) has shown that dropping layers during training can regularize and reduce the training time of very deep convolutional networks. In contrast, we focus on pruning. As illustrated in Figure 1, an advantage of our layer dropping technique, or LayerDrop, is that from one single deep model, we can extract shallow sub-networks of any desired depth on demand at inference time.

We validate our findings on a variety of competitive benchmarks, namely WMT14 English-German for machine translation, WikiText-103 (Merity et al., 2016) for language modeling, CNN-Dailymail (Hermann et al., 2015) for abstractive summarization, ELI5 (Fan et al., 2019) for long form question answering, and several natural language understanding tasks (Wang et al., 2019a) for sentence representation. Our approach achieves state of the art on most of these benchmarks as a result of the regularization effect, which stabilizes the training of larger and deeper networks. We also show that we can prune Transformer architectures to much smaller models while maintaining competitive performance, outperforming specific model reduction strategies dedicated to BERT (Devlin et al., 2018; Sanh, 2019) as well as training smaller models from scratch.

![](images/fe77900d21af09e9edf11ea87c953b1978f8e19bdc41b3a53556a3aae128f267.jpg)
Figure 1: LayerDrop (right) randomly drops layers at training time. At test time, this allows for sub-network selection to any desired depth, as the network has been trained to be robust to pruning. In contrast to standard approaches that must re-train a new model from scratch for each model size (left), our method trains only one network from which multiple shallow models can be extracted.
Overall, applying LayerDrop to Transformer networks provides the following key advantages:

- LayerDrop regularizes very deep Transformers and stabilizes their training, leading to state-of-the-art performance across a variety of benchmarks.
- Small and efficient models of any depth can be extracted automatically at test time from a single large pre-trained model, without the need for finetuning.
- LayerDrop is as simple to implement as dropout.

# 2 RELATED WORK

Our approach is a form of Dropout (Srivastava et al., 2014) applied to model weights instead of activations, as in DropConnect (Wan et al., 2013). Different from DropConnect, we drop groups of weights to induce group redundancy, creating models suited for pruning to shallow, efficient models at inference time. Gomez et al. (2018) propose a targeted Dropout and DropConnect, where they learn the drop rate of the weights to match a targeted pruning scheme. Instead, we adapt the masks to the structures that we are interested in pruning. Closer to our work, the Stochastic Depth approach of Huang et al. (2016) drops layers randomly during training. As opposed to our work, they are interested in accelerating the training of very deep ResNets (He et al., 2016), so their dropping schedule is adapted to this goal. Concurrently with this work, Pham et al. (2019) applied Stochastic Depth to train very deep Transformers for speech and showed the benefits of its regularization effect.

More generally, our method is a form of structured pruning (Liu et al., 2018b). As opposed to weight pruning (LeCun et al., 1990), structured pruning removes coherent groups of weights to preserve the original structure of the network. Structured pruning has been used in some NLP applications, such as machine translation (See et al., 2016), text classification (Joulin et al., 2016) and language modeling (Murray & Chiang, 2015).
However, it has been more widely adopted in computer vision and applied to convolutional networks to remove filters (Li et al., 2016; Wen et al., 2016), channels (He et al., 2017), or residual blocks (Huang et al., 2018; Huang & Wang, 2018). Similar to Mittal et al. (2018), we take advantage of the plasticity of neural networks to learn models that are resilient to random pruning or skipped connections (Wang et al., 2018; Wu et al., 2018; Liu et al., 2018a), rather than learning the pruning itself. We refer the reader to Liu et al. (2018b) for an exhaustive study of these approaches and their evaluation in the context of convolutional networks.

Reducing the memory footprint of Transformer architectures, and of BERT in particular, is an active subject of research. Several works have compressed BERT as a post-processing step using different forms of distillation (Turc et al., 2019; Tang et al., 2019; Shulga, 2019; Sanh, 2019). Similarly, various papers have shown evidence that Transformers are over-parameterized, and in particular that most self-attention heads can be dropped at test time (Michel et al., 2019; Voita et al., 2019). Different
A Transformer is a stack of layers composed of two sub-layers: multi-head self-attention followed by a feedforward sub-layer. The multi-head self-attention sub-layer consists of multiple attention heads applied in parallel. Each attention head takes a matrix $\mathbf{X}$ where each row represents an element of the input sequence and updates their representations by gathering information from their context using an Attention mechanism (Bahdanau et al., 2014): + +$$ +\mathbf {Y} = \operatorname {S o f t m a x} (\mathbf {X} ^ {T} \mathbf {K} (\mathbf {Q X} + \mathbf {P})) \mathbf {V X}, +$$ + +where $\mathbf{K},\mathbf{V},\mathbf{Q}$ and $\mathbf{P}$ are matrices of parameters. The outputs of the heads are then concatenated along the time step into a sequence of vectors. + +The second sub-layer then applies a fully connected feedforward network to each element of this sequence independently, $\mathrm{FFN}(\mathbf{x}) = \mathbf{U}\mathrm{ReLU}(\mathbf{V}\mathbf{x})$ , where $\mathbf{V}$ and $\mathbf{U}$ are matrices of parameters. Each sub-layer is followed by a AddNorm operation that is a residual connection (He et al., 2016) and a layer normalization (Ba et al., 2016). + +# 3.2 TRAINING TRANSFORMERS WITH RANDOM STRUCTURED PRUNING + +We present a regularization approach that makes Transformers robust to subsequent structured pruning at inference time. We focus in particular on the case where the targeted structure is a layer. + +# 3.2.1 RANDOMLY DROPPING STRUCTURES AT TRAINING TIME + +Regularizing networks to be robust to pruning can be achieved by randomly removing weights during its training as in DropConnect (Wan et al., 2013). In this approach, each weight is dropped independently following a Bernoulli distribution associated with a parameter $p > 0$ that controls the drop rate. 
This is equivalent to a pointwise multiplication of the weight matrix $\mathbf{W}$ with a randomly sampled $\{0,1\}$ mask matrix $\mathbf{M}$:

$$
\mathbf{W}_{d} = \mathbf{M} \odot \mathbf{W}.
$$

DropConnect is a form of random unstructured pruning that leads to smaller, but not necessarily more efficient, models. We propose to add structure to this mechanism to target model efficiency.

Random Structured Dropout. The weights of a Transformer network belong to multiple overlapping structures, such as heads, FFN matrices, or layers. Dropping weights using groups that follow some of these inherent structures potentially leads to a significant reduction of the inference time. This is equivalent to constraining the mask $\mathbf{M}$ to be constant over some predefined groups of weights. More precisely, given a set $\mathcal{G}$ of predefined groups of weights, the $\{0,1\}$ mask matrix $\mathbf{M}$ is randomly sampled over groups instead of weights:

$$
\forall i,\ \mathbf{M}[i] \in \{0, 1\}, \quad \text{and} \quad \forall G \in \mathcal{G},\ \forall (i, j) \in G,\ \mathbf{M}[i] = \mathbf{M}[j].
$$

This structured dropout formulation is general and can be applied to any overlapping groups of weights, whether heads, FFN matrices, or layers. Nonetheless, not all of the structures in a Transformer lead to the same benefits when dropped. For example, dropping attention heads does not reduce runtime as they are usually computed in parallel. For simplicity, we focus on dropping layers, and we name this structured pruning LayerDrop. This is inspired by the Stochastic Depth approach of Huang et al. (2016) used to train very deep ResNets (He et al., 2015).

# 3.2.2 PRUNING AT INFERENCE TIME

Selecting Layers to Prune. Training with LayerDrop makes the network more robust to predicting with missing layers. However, LayerDrop does not explicitly provide a way to select which groups to prune.
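The layer-level case of the structured dropout above, one Bernoulli draw per layer during training, can be sketched in a few lines of pure Python. This is our own toy illustration, not the paper's fairseq implementation; `layerdrop_forward` and the stand-in layers are hypothetical names:

```python
import random

def layerdrop_forward(x, layers, p, training=True, rng=random):
    """Apply a stack of layers, skipping each one with probability p
    during training (LayerDrop). A skipped layer is bypassed entirely,
    as if the residual connection passed x straight through."""
    for layer in layers:
        if training and rng.random() < p:
            continue  # drop this layer: the whole structured group is masked
        x = layer(x)
    return x

# Toy stand-ins for Transformer blocks: each "layer" just adds 1,
# so the output counts how many layers were actually executed.
layers = [lambda v: v + 1 for _ in range(16)]

random.seed(0)
out_train = layerdrop_forward(0, layers, p=0.2)                 # some layers skipped
out_eval = layerdrop_forward(0, layers, p=0.2, training=False)  # all 16 layers run
```

During training, the expected number of active layers is $N(1 - p)$; at inference time, calling with `training=False` recovers the full network, or a pruned list of layers can be passed instead.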
We consider several different pruning strategies, described below:

- Every Other: A straightforward strategy is to simply drop every other layer. Pruning with a rate $p$ means dropping the layers at a depth $d$ such that $d \equiv 0 \ (\bmod\ \lfloor \frac{1}{p} \rfloor)$. This strategy is intuitive and leads to balanced networks.
- Search on Valid: Another possibility is to compute various combinations of layers to form shallower networks using the validation set, then select the best performing one for test. This is straightforward but computationally intensive and can lead to overfitting on the validation set.
- Data Driven Pruning: Finally, we propose data driven pruning, where we learn the drop rate of each layer. Given a target drop rate $p$, we learn an individual drop rate $p_d$ for the layer at depth $d$ such that the average rate over layers is equal to $p$. More precisely, we parameterize $p_d$ as a non-linear function of the activation of its layer and apply a softmax. At inference time, we forward only the fixed top-k highest scoring layers based on the softmax output (i.e., the chosen layers do not depend on the input features).

In practice, we observe that the Every Other strategy works surprisingly well across many tasks and configurations. Search on Valid and Data Driven Pruning offer only marginal gains. Note that we do not further finetune any of the pruned networks (see Appendix for an analysis of finetuning).

Setting the drop rate for optimal pruning. There is a straightforward relationship between the drop rate of groups and the average pruning level that the network should be resilient to. Assuming $N$ groups and a fixed drop rate $p$, the average number of groups used by the network during training is $N(1 - p)$.
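A minimal sketch of the Every Other rule and of this group-count relationship (our own illustration; the function names are hypothetical, not from the paper's codebase):

```python
import math

def every_other_pruning(n_layers, p):
    """Indices of layers kept by Every Other pruning at rate p:
    drop every layer at depth d with d % floor(1/p) == 0."""
    step = math.floor(1 / p)
    return [d for d in range(n_layers) if d % step != 0]

def optimal_drop_rate(r, n_groups):
    """Training-time drop rate p* = 1 - r/N that targets keeping r of N groups."""
    return 1 - r / n_groups

kept = every_other_pruning(16, 0.5)  # keeps the 8 odd-depth layers
rate = optimal_drop_rate(8, 16)      # 0.5, matching the pruning above
```

For example, pruning a 16 layer model down to 8 layers corresponds to a training drop rate of 0.5, which is why a higher LayerDrop rate is recommended when very small inference-time models are the target.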
As a consequence, to target a pruning size of $r$ groups, the optimal drop rate is:

$$
p^{*} = 1 - \frac{r}{N}.
$$

In practice, we observe that networks are more robust to pruning than their expected ratio, but higher pruning rates lead to better performance for smaller models. We use a LayerDrop rate of $p = 0.2$ for all our experiments, but we recommend $p = 0.5$ to target very small inference time models.

# 4 EXPERIMENTAL SETUP

We apply our method to a variety of sequence modeling tasks: neural machine translation, language modeling, summarization, long form question answering, and various natural language understanding tasks. Our models are implemented in PyTorch using fairseq-py (Ott et al., 2019). Additional implementation and training details with hyperparameter settings are in the Appendix.

Neural Machine Translation. We experiment on the WMT English-German machine translation benchmark using the Transformer Big architecture. We use the dataset of 4.5M en-de sentence pairs from WMT16 (Vaswani et al., 2017) for training, newstest2013 for validation, and newstest2014 for test. We optimize the dropout value within the range $\{0.1, 0.2, 0.5\}$ on the validation set and set the LayerDrop rate $p$ to 0.2. For generation, we average the last 10 checkpoints, set the length penalty to 0.6 and the beam size to 8, following the settings suggested in Wu et al. (2019a), and measure case-sensitive tokenized BLEU. We apply compound splitting, as used in Vaswani et al. (2017).

Language Modeling. We experiment on the Wikitext-103 language modeling benchmark (Merity et al., 2016), which contains 100M tokens and a large vocabulary of 260K words. We adopt the 16 layer Transformer used in Baevski & Auli (2018). We set the LayerDrop rate $p$ to 0.2 and tune the standard dropout parameter in $\{0.1, 0.2, 0.3\}$ on the validation set. We report test set perplexity (PPL).
| Model | Enc Layers | Dec Layers | BLEU |
| --- | --- | --- | --- |
| Transformer (Vaswani et al., 2017) | 6 | 6 | 28.4 |
| Transformer (Ott et al., 2018) | 6 | 6 | 29.3 |
| DynamicConv (Wu et al., 2019a) | 7 | 6 | 29.7 |
| Transformer (Ott et al., 2018) + LayerDrop | 6 | 6 | 29.6 |
| Transformer (Ott et al., 2018) + LayerDrop | 12 | 6 | 30.2 |

Table 1: Results on WMT en-de Machine Translation (newstest2014 test set).
| Model | Layers | Params | PPL |
| --- | --- | --- | --- |
| Adaptive Inputs (Baevski & Auli, 2018) | 16 | 247M | 18.7 |
| Transformer XL Large (Dai et al., 2019) | 18 | 257M | 18.3 |
| Adaptive Inputs + LayerDrop | 16 | 247M | 18.3 |
| Adaptive Inputs + LayerDrop | 40 | 423M | 17.7 |
Table 2: Results on the Wikitext-103 language modeling benchmark (test set).

Summarization. We adopt the Transformer base architecture and training schedule from Edunov et al. (2019) and experiment on the CNN-Dailymail multi-sentence summarization benchmark. The training data contains over 280K full-text news articles paired with multi-sentence summaries (Hermann et al., 2015; See et al., 2017). We tune the generation length in the range $\{40, 50, 60\}$ and use 3-gram blocking. We set the LayerDrop rate $p$ to 0.2. We evaluate using ROUGE (Lin, 2004).

Long Form Question Answering. We consider the Long Form Question Answering dataset ELI5 of Fan et al. (2019), which consists of 272K question answer pairs from the subreddit Explain Like I'm Five, along with extracted supporting documents from web search. We follow the Transformer Big architecture and training procedure of Fan et al. (2019). We generate long answers using beam search with beam size 5 and apply 3-gram blocking (Fan et al., 2017). We evaluate with ROUGE.

Sentence Representation Pre-training. We train base and large BERT (Devlin et al., 2018) models following the open-source implementation of Liu et al. (2019). We use two datasets: Bookscorpus + Wiki from Liu et al. (2019) and the larger combination of Bookscorpus + OpenWebText + CC-News + Stories (Liu et al., 2019). We evaluate the pretrained models on various natural language understanding tasks. Specifically, we evaluate accuracy on MRPC (Dolan & Brockett, 2005), QNLI (Rajpurkar et al., 2016), MNLI (Williams et al., 2018), and SST2 (Socher et al., 2013).

# 5 RESULTS

# 5.1 LAYERDROP AS A REGULARIZER

Language Modeling. In Table 2, we show the impact of LayerDrop on the performance of a Transformer network trained in the setting of Adaptive Inputs (Baevski & Auli, 2018). Adding LayerDrop to a 16 layer Transformer improves the performance by 0.4 perplexity, matching the state-of-the-art results of Transformer-XL.
Our 40 layer Transformer with LayerDrop further improves the state of the art by 0.6 points. Very deep Transformers are typically hard to train because of instability and memory usage, and they are prone to overfitting on a small dataset like Wikitext-103. LayerDrop regularizes the network, reduces the memory usage, and increases training stability as fewer layers are active at each forward pass. These results confirm that this type of approach can be used to efficiently train very deep networks, as shown in Huang et al. (2016) for convolutional networks.

Sequence to sequence modeling. Similarly, as shown in Table 1 and Table 3, applying LayerDrop to Transformers on text generation tasks such as neural machine translation, summarization, and long form question answering also boosts performance for all tasks. In these experiments, we take the Transformer architectures that are state-of-the-art and train them with LayerDrop.

| Model | Enc | Dec | ROUGE-1 | ROUGE-2 | ROUGE-L |
| --- | --- | --- | --- | --- | --- |
| Abstractive Summarization | | | | | |
| Transformer (Edunov et al., 2019) | 6 | 6 | 40.1 | 17.6 | 36.8 |
| Transformer + LayerDrop | 6 | 6 | 40.5 | 17.9 | 37.1 |
| Transformer + LayerDrop | 6 | 8 | 41.1 | 18.1 | 37.5 |
| Long Form Question Answering | | | | | |
| Transformer Multitask (Fan et al., 2019) | 6 | 6 | 28.9 | 5.4 | 23.1 |
| Transformer Multitask + LayerDrop | 6 | 6 | 29.4 | 5.5 | 23.4 |

Table 3: Results for CNN-Dailymail Summarization and ELI5 QA (test set).

| Data | Layers | Model | MNLI-m | MRPC | QNLI | SST2 |
| --- | --- | --- | --- | --- | --- | --- |
| Books + Wiki | 24 | RoBERTa | 89.0 | 90.2 | 93.9 | 95.3 |
| | 24 | RoBERTa + LayerDrop | 89.2 | 90.2 | 94.2 | 95.4 |
| + more data | 24 | RoBERTa | 90.2 | 90.9 | 94.7 | 96.4 |
| | 24 | RoBERTa + LayerDrop | 90.1 | 91.0 | 94.7 | 96.8 |
| | 48 | RoBERTa + LayerDrop | 90.4 | 90.9 | 94.8 | 96.9 |

Table 4: Results on various NLU tasks for RoBERTa Large trained for 500K updates (dev set).

In neural machine translation on newstest2014, our 12 encoder layer Transformer model with LayerDrop further improves the state of the art, reaching 30.2 BLEU. In comparison, a standard Transformer trained without LayerDrop diverges with 12 encoder layers. This is a known problem, and techniques such as improved initialization could be used to maintain stability (Junczys-Dowmunt, 2019; Zhang et al., 2019; Wang et al., 2019b; Wu et al., 2019b), but are out of the scope of this work. Similar results are seen in summarization.

Bi-Directional Pre-training. In a second set of experiments, we look at the impact of LayerDrop on pre-training for sentence representation models and subsequent finetuning on multiple natural language understanding tasks. We compare our models to a variant of BERT for sentence representations, called RoBERTa (Liu et al., 2019), and analyze the results of finetuning for data adaptation on MNLI, MRPC, QNLI, and SST2. We apply LayerDrop during both pre-training and finetuning.

We compare the performance of the large architecture on the Bookscorpus + Wiki dataset used in BERT. We also analyze the performance of training on the additional data used in RoBERTa as well as pre-training for even longer. Comparing fixed model size and training data, LayerDrop can improve the performance of RoBERTa on several tasks. LayerDrop can further be used to both enable and stabilize the training (Huang et al., 2016) of models double the size for even stronger performance.

# 5.2 PRUNING TRANSFORMER LAYERS TO ON-DEMAND DEPTH WITH LAYERDROP

Pruning Generation Tasks. In Figure 2, we investigate the impact of the number of pruned decoder layers on the performance of a Transformer for language modeling, neural machine translation, and summarization.
We compare three different settings: standard Transformer models trained without LayerDrop but subsequently pruned, standard Transformer models trained from scratch to each desired depth, and lastly our approach: pruning layers of a Transformer trained with LayerDrop. Our model is trained once with the maximum number of layers and then pruned to the desired depth, without any finetuning in the shallower configuration. Our approach outperforms small models trained from scratch, showing that LayerDrop leads to more accurate small models at a whole range of depths. Further, training with LayerDrop does not incur the computational cost of retraining a new model for each desired depth. For completeness, dropping layers of a deep Transformer trained without LayerDrop performs poorly, as it was not trained to be robust to missing layers.

![](images/56c0d0ac88a2e9634b5b688534f4f4b7b0014a0dfe03cd9141a858176409717a.jpg)
Figure 2: Performance as a function of pruning on various generation tasks (test set), compared to training smaller models from scratch and pruning a Transformer baseline trained without LayerDrop. Pruning networks with LayerDrop performs strongly compared to these alternatives.

Pruning BERT-like Models. In Figure 3 (left), we compare pruning Transformers trained with LayerDrop to different approaches used to create smaller, shallower models. We compare to BERT base and RoBERTa base trained from scratch with 6 and 3 layers, as well as recent work on distillation, called DistilBERT (Sanh, 2019). We analyze both BERT and RoBERTa models, as the vocabulary is not the same due to differences in subword tokenization, which affects performance.

| | MNLI | SST2 |
| --- | --- | --- |
| 6 Layers (50% Pruned) | | |
| RoBERTa | 82.3 | 92.1 |
| + LayerDrop | 82.9 | 92.5 |
| + more data | 84.1 | 93.2 |
| 3 Layers (75% Pruned) | | |
| RoBERTa | 78.1 | 90.3 |
| + LayerDrop | 78.6 | 90.5 |
| + more data | 82.2 | 92.0 |

![](images/8a2dfc207f17b16f73983408d62eee40ee8431df7aa03384b5abe2a231213142.jpg)
Figure 3: (left) Performance as a function of pruning on MNLI and SST2, compared to BERT and RoBERTa trained from scratch and DistilBERT. Pruning one network trained with LayerDrop (blue) outperforms alternatives that require a new network for each point. (right) Performance when training on more data shows even stronger results on MNLI and SST2 for pruned models.

DistilBERT occasionally performs worse than BERT of the same size trained from scratch, which confirms the findings of Liu et al. (2018b) about the performance of pruned models compared to training small models from scratch. Our approach, however, obtains results better than BERT and RoBERTa trained from scratch. Further, our method does not need any post-processing: we simply prune every other layer of our RoBERTa model that has been pre-trained with LayerDrop and finetune the small models on each of the downstream tasks, following standard procedure. When training with additional data, shown in Figure 3 (right), even stronger performance can be achieved.

# 6 ABLATION STUDIES

Comparison of Structured Dropout. Figure 4 (left) contrasts various forms of structured dropout: dropping attention heads, FFN matrices, and entire Transformer layers. Dropping heads alone is worse than dropping entire sub-layers or layers. It also offers no advantage in terms of running time, as attention heads are computed in parallel for computational efficiency. We observe no large differences between dropping sub-layers and layers, possibly because we are working with relatively shallow networks.
In theory, dropping sub-layers should perform better, and we expect this to be the case with very deep Transformers. We experiment with overlapping structured groups, such as heads + layers and heads + sub-layers, and find that the beneficial effects can be advantageously combined. We focus on layers for simplicity, as dropping more structures introduces more parameters to tune.

![](images/abfb1a409d3d44bc3439c9417725e1977600e09248ee8172363598afd93f08b7.jpg)
Figure 4: (left) Impact of various structured dropouts on Wikitext-103 Valid. Dropping layers is straightforward and has strong performance. (right) Comparison of pruning strategies on Wikitext-103 Valid. Marginal gains can be achieved, but dropping every other layer is hard to beat.

![](images/5bf1cfbadab4c4a381aa64dead87ee2529398eff0052759b27c05a7322948278.jpg)
Figure 5: Relative importance of specific layers (Wikitext-103 Valid). The full network is pruned into various 8 layer sub-network configurations, and the average perplexity when pruning layer $n$ is displayed above.

![](images/682b65c042c6041d9f7bd2b45b2700d4e0f4d9756c360c37f85fa3c3157c811c.jpg)
Figure 6: Effect of training LayerDrop on inference-time pruning (Wikitext-103 Valid). Training with larger LayerDrop is beneficial for significant pruning.

Comparison of Various Pruning Strategies. Figure 4 (right) contrasts various approaches to sub-selecting model layers at inference time. The predominant method used in this paper, the straightforward strategy of selecting every other layer, is tough to beat. We find that only marginal improvement can be gained by searching over the validation set for the best set of 8 layers to use and by learning which layers to drop. In contrast, dropping chunks of consecutive layers is harmful.
Namely, removing the first half or last half of a model is particularly harmful, as the model does not have the ability to process the input or project to the full vocabulary to predict the subsequent word.

Choosing which Layers to Prune. Not all layers are equally important. In an experiment on Wikitext-103, we pruned selections of 8 layers at random. Figure 5 displays the perplexity when a given layer is removed, averaging results from 20 pruned models per layer. The input and output layers of a network are the most important, as they process the input and project to the output vocabulary.

Relationship between LayerDrop at Training Time and Pruning at Inference Time. Figure 6 displays the relationship between the training-time LayerDrop rate and the performance of a pruned network at test time. If significant depth reduction is desired, training with a larger LayerDrop rate is beneficial, as this equalizes the train and test time settings. An analysis for BERT is in the Appendix.

# 7 CONCLUSION

Structured dropout regularizes neural networks to be more robust to applying structured pruning at inference time. We focus on the setting where structures are layers, enabling the pruning of shallow and efficient models of any desired depth. In a variety of text generation and pre-training tasks, we show that LayerDrop enables and stabilizes the training of substantially deeper networks and simultaneously allows for the extraction of models of various depths with strong performance.

# REFERENCES

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.

Alexei Baevski and Michael Auli. Adaptive input representations for neural language modeling. arXiv preprint arXiv:1809.10853, 2018.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Gonçalo M Correia, Vlad Niculae, and André FT Martins.
Adaptively sparse transformers. arXiv preprint arXiv:1909.00015, 2019.

Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. Transformer-XL: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860, 2019.

Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. In Proc. of ICML, 2017.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.

William B Dolan and Chris Brockett. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the International Workshop on Paraphrasing, 2005.

Sergey Edunov, Alexei Baevski, and Michael Auli. Pre-trained language model representations for language generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4052-4059, 2019.

Angela Fan, David Grangier, and Michael Auli. Controllable abstractive summarization. arXiv, abs/1711.05217, 2017.

Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. ELI5: Long form question answering. arXiv preprint arXiv:1907.09190, 2019.

Aidan N Gomez, Ivan Zhang, Kevin Swersky, Yarin Gal, and Geoffrey E Hinton. Targeted dropout. 2018.

Edouard Grave, Armand Joulin, Moustapha Cisse, David Grangier, and Hervé Jégou. Efficient softmax approximation for GPUs. arXiv, abs/1609.04309, 2016.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proc. of CVPR, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
Yihui He, Xiangyu Zhang, and Jian Sun. Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1389-1397, 2017.

Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Proc. of NIPS, 2015.

Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.

Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q Weinberger. Deep networks with stochastic depth. In European conference on computer vision, pp. 646-661. Springer, 2016.

Gao Huang, Shichen Liu, Laurens Van der Maaten, and Kilian Q Weinberger. CondenseNet: An efficient DenseNet using learned group convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2752-2761, 2018.

Zehao Huang and Naiyan Wang. Data-driven sparse structure selection for deep neural networks. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 304-320, 2018.

Yacine Jernite, Edouard Grave, Armand Joulin, and Tomas Mikolov. Variable computation in recurrent neural networks. arXiv preprint arXiv:1611.06188, 2016.

Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hervé Jégou, and Tomas Mikolov. FastText.zip: Compressing text classification models. arXiv preprint arXiv:1612.03651, 2016.

Marcin Junczys-Dowmunt. Microsoft Translator at WMT 2019: Towards large-scale document-level neural machine translation. arXiv preprint arXiv:1907.06170, 2019.

Guillaume Lample and Alexis Conneau. Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291, 2019.

Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. In Advances in neural information processing systems, pp. 598-605, 1990.
Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710, 2016.

Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Workshop on Text Summarization Branches Out, 2004.

Liyuan Liu, Xiang Ren, Jingbo Shang, Jian Peng, and Jiawei Han. Efficient contextualized representation: Language model pruning for sequence labeling. arXiv preprint arXiv:1804.07827, 2018a.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019.

Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. Rethinking the value of network pruning. arXiv preprint arXiv:1810.05270, 2018b.

Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.

Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv, abs/1609.07843, 2016.

Paul Michel, Omer Levy, and Graham Neubig. Are sixteen heads really better than one? arXiv preprint arXiv:1905.10650, 2019.

Deepak Mittal, Shweta Bhardwaj, Mitesh M Khapra, and Balaraman Ravindran. Recovering from random pruning: On the plasticity of deep convolutional neural networks. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 848-857. IEEE, 2018.

Kenton Murray and David Chiang. Auto-sizing neural networks: With applications to n-gram language models. arXiv preprint arXiv:1508.05051, 2015.

Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. Scaling neural machine translation. In Proc. of WMT, 2018.

Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations, 2019.
+Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. How to construct deep recurrent neural networks. In Proceedings of the Second International Conference on Learning Representations (ICLR 2014), 2014. +Ngoc-Quan Pham, Thai-Son Nguyen, Jan Niehues, Markus Müller, and Alex Waibel. Very deep self-attention networks for end-to-end speech recognition. arXiv preprint arXiv:1904.13377, 2019. +Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019. +Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of EMNLP, pp. 2383-2392. Association for Computational Linguistics, 2016. +Victor Sanh. Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT. https://medium.com/huggingface/distilbert-8cf3380435b5, 2019. +Abigail See, Minh-Thang Luong, and Christopher D Manning. Compression of neural machine translation models via pruning. arXiv preprint arXiv:1606.09274, 2016. +Abigail See, Peter J Liu, and Christopher D Manning. Get to the point: Summarization with pointer-generator networks. In Proc. of ACL, 2017. +Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909, 2015. +Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In Proc. of ACL, 2016. +Dima Shulga. Distilling BERT: How to achieve BERT performance using logistic regression. towardsdatascience.com, 2019. +Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP, pp. 1631-1642, 2013. +Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov.
Dropout: A simple way to prevent neural networks from overfitting. JMLR, 15(1):1929-1958, 2014. +Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, and Armand Joulin. Adaptive attention span in transformers. arXiv preprint arXiv:1905.07799, 2019. +Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In International conference on machine learning, pp. 1139-1147, 2013. +Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. Distilling task-specific knowledge from BERT into simple neural networks. arXiv preprint arXiv:1903.12136, 2019. +Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Well-read students learn better: The impact of student initialization on knowledge distillation. arXiv preprint arXiv:1908.08962, 2019. + +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998-6008, 2017. +Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. arXiv preprint arXiv:1905.09418, 2019. +Li Wan, Matthew Zeiler, Sixin Zhang, Yann Le Cun, and Rob Fergus. Regularization of neural networks using DropConnect. In International conference on machine learning, pp. 1058-1066, 2013. +Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of ICLR, 2019a. +Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F Wong, and Lidia S Chao. Learning deep transformer models for machine translation. arXiv preprint arXiv:1906.01787, 2019b. +Xin Wang, Fisher Yu, Zi-Yi Dou, Trevor Darrell, and Joseph E Gonzalez.
SkipNet: Learning dynamic routing in convolutional networks. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 409-424, 2018. +Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. In Advances in neural information processing systems, pp. 2074-2082, 2016. +Adina Williams, Nikita Nangia, and Samuel R. Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of NAACL-HLT, 2018. +Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, and Michael Auli. Pay less attention with lightweight and dynamic convolutions. In International Conference on Learning Representations, 2019a. URL https://arxiv.org/abs/1901.10430. +Lijun Wu, Yiren Wang, Yingce Xia, Fei Tian, Fei Gao, Tao Qin, Jianhuang Lai, and Tie-Yan Liu. Depth growing for neural machine translation. arXiv preprint arXiv:1907.01968, 2019b. +Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S Davis, Kristen Grauman, and Rogerio Feris. BlockDrop: Dynamic inference paths in residual networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8817-8826, 2018. +Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237, 2019. +Hongyi Zhang, Yann N Dauphin, and Tengyu Ma. Fixup initialization: Residual learning without normalization. arXiv preprint arXiv:1901.09321, 2019. + +# A APPENDIX + +# A.1 ADDITIONAL IMPLEMENTATION DETAILS + +# A.1.1 NEURAL MACHINE TRANSLATION + +WMT en-de: We model a 32K joint byte-pair encoding vocabulary (Sennrich et al., 2015). We train using the cosine learning rate schedule (Loshchilov & Hutter, 2016) from Wu et al. (2019a) with label smoothing 0.1. We train on 8 GPUs for a total training time of 66k seconds. + +IWSLT de-en: The dataset consists of 160K training pairs, fully lowercased.
We model a 10K joint BPE vocabulary and generate with beam size 4. We do not average checkpoints. Following Wu et al. (2019a), we use the Transformer base architecture with 6 encoder layers and 6 decoder layers. As the dataset is small, we decrease the overall model size and instead use the following parameters: FFN size 1024, hidden dimension 512, and 4 attention heads. We train on 1 GPU. + +Pruning: We apply the Every Other Layer strategy to the decoder and do not finetune. + +# A.1.2 LANGUAGE MODELING + +Training: To handle the large vocabulary of Wikitext-103, we follow Dauphin et al. (2017) and Baevski & Auli (2018) in using adaptive softmax (Grave et al., 2016) and adaptive input for computational efficiency. For both input and output embeddings, we use dimension size 1024 and three adaptive bands: 20K, 40K, and 200K. We use a cosine learning rate schedule (Baevski & Auli, 2018; Loshchilov & Hutter, 2016) and train with Nesterov's accelerated gradient (Sutskever et al., 2013). We set the momentum to 0.99 and renormalize gradients if the norm exceeds 0.1 (Pascanu et al., 2014). During training, we partition the data into blocks of contiguous tokens that ignore document boundaries. At test time, we respect sentence boundaries. We train on 8 GPU for total training time of 216k seconds. + +Pruning: We apply the Every Other Layer strategy and do not finetune. + +# A.1.3 SUMMARIZATION + +Data: We use the full text (non-anonymized) version of CNN-Dailymail introduced by See et al. (2017). Following Fan et al. (2017), we truncate articles to 400 tokens and model a joint byte-pair vocabulary of 32K types (Sennrich et al., 2016). + +Training: We train using Adam with a cosine learning rate schedule, warming up for 10K steps. We optimize dropout in the range $\{0.2, 0.3\}$ on the validation set and set LayerDrop to 0.2. We train on 1 GPU. + +Pruning: We apply the Every Other Layer strategy to the decoder and do not finetune. 
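The "Every Other Layer" pruning used throughout these experiments can be sketched in a few lines. This is a hedged sketch: the appendix only states that alternating layers are removed to halve the depth, so which parity of layers is kept is an assumption here, and `prune_every_other` is an illustrative name, not the paper's code.

```python
# Hedged sketch of "Every Other Layer" pruning: at inference time, simply
# keep alternate layers of the trained stack. Which parity is kept
# (even vs. odd indices) is an assumption; the text only says every
# other layer is removed, halving the depth.

def prune_every_other(layers, keep_even=True):
    """Return the sub-stack of layers kept at inference time."""
    offset = 0 if keep_even else 1
    return [layer for i, layer in enumerate(layers) if i % 2 == offset]

decoder = [f"layer_{i}" for i in range(6)]   # e.g. a 6-layer decoder stack
print(prune_every_other(decoder))            # -> ['layer_0', 'layer_2', 'layer_4']
```

Because the model was trained with LayerDrop, the remaining layers have already seen forward passes in which their neighbors were absent, which is why this pruning works without finetuning.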
+ +# A.1.4 LONG FORM QUESTION ANSWERING + +Training: We compare to the full multi-task setting of Fan et al. (2019), where data augmentation and multi-tasking is done at training time to increase the data available. We train on 8 GPU. + +Generation: We set the minimum length to 150 tokens and the maximum length to 200. + +# A.1.5 BI-DIRECTIONAL PRE-TRAINING + +Training: The base architecture is a 12 layer model with embedding size 768 and FFN size 3072. The large architecture consists of 24 layers with embedding size 1024 and FFN size 4096. For both settings, we follow Liu et al. (2019) in using the subword tokenization scheme from Radford et al. (2019), which uses bytes as subword units. This eliminates unknown tokens. Note this produces a different vocabulary size than BERT (Devlin et al., 2018), meaning models of the same depth do not have the same number of parameters. We train with large batches of size 8192 and maintain this batch size using gradient accumulation. We do not use next sentence prediction (Lample & Conneau, 2019). We optimize with Adam with a polynomial decay learning rate schedule. For + +
| Hyperparameter | Base | Large |
| --- | --- | --- |
| Number of Layers | 12 | 24 |
| Hidden Size | 768 | 1024 |
| FFN Size | 3072 | 4096 |
| Attention Heads | 12 | 16 |
| LayerDrop | 0.2 | 0.2 |
| Warmup Steps | 24k | 30k |
| Peak Learning Rate | 6e-4 | 4e-4 |
| Batch Size | 8192 | 8192 |
+ +Table 5: Hyperparameters for RoBERTa Pretraining + +
| Model | BLEU |
| --- | --- |
| Transformer (Wu et al., 2019a) | 34.4 |
| Dynamic Conv (Wu et al., 2019a) | 35.2 |
| Transformer + LayerDrop | 34.5 |
+ +Table 6: BLEU for IWSLT (test set). + +BERT-Base, we use 32 GPUs (total training time 171k seconds) and for BERT-Large, we use 128 GPUs. For the RoBERTa data setting with more data, we use 512 GPUs to train BERT-Large. + +Finetuning: During finetuning, we perform a hyperparameter search over three learning rate options (1e-5, 2e-5, 3e-5) and batch sizes (16 or 32 sentences). The other parameters are set following Liu et al. (2019). We do single-task finetuning, meaning we only tune on the data provided for the given natural language understanding task. We do not perform ensembling. When finetuning models trained with LayerDrop, we apply LayerDrop during finetuning as well. + +Training smaller models: We train the 6 and 3 layer RoBERTa models following the same settings, but with the smaller number of layers and without LayerDrop. We finetune with the same sweep parameters. The 6 and 3 layer BERT model results are taken from Devlin et al. (2018). + +Training larger models: We train the 48 layer RoBERTa model with 0.5 LayerDrop, so only 24 layers on average are active during a forward pass. + +Pruning: When pruning RoBERTa models, we use the Every Other Layer strategy and finetune without LayerDrop for the smaller models. + +# A.2 ADDITIONAL RESULTS + +IWSLT Table 6 displays results on the IWSLT de-en dataset. We see a small improvement, likely because the network is small and already heavily regularized with dropout, attention dropout, and weight decay. The Transformer is not the state-of-the-art architecture here, and a large gap remains between the Transformer and the DynamicConv model proposed by Wu et al. (2019a). + +Pruning BERT Models The numerical values corresponding to the pruned 6 and 3 layer RoBERTa + LayerDrop models are shown in Table 7. + +# A.3 ADDITIONAL ANALYSIS + +Impact of LayerDrop on training time. Figure 7 shows the increase in training speed when training with increasingly large quantities of LayerDrop.
The words per second were computed on 8 V100 GPUs with 32GB of memory, without 16-bit floating point, for a 16 layer model trained on Wikitext-103. Assuming a fixed layer size, LayerDrop removes layers randomly at training time, which increases training speed by almost 2x when half of the layers are dropped.
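The training-time behavior behind this speedup can be sketched in a few lines. This is a minimal stand-in, not the fairseq implementation; the assumption made here is that a dropped layer is replaced by the identity through the residual path, and the toy `layers` below are placeholders, not real Transformer layers.

```python
import random

def forward_with_layerdrop(x, layers, p_drop, training=True):
    """Hedged sketch of LayerDrop: during training each layer is skipped
    independently with probability p_drop; at inference all layers run.
    Each element of `layers` is a callable returning the residual branch."""
    for layer in layers:
        if training and random.random() < p_drop:
            continue            # layer skipped: identity via the residual path
        x = x + layer(x)        # residual connection kept from the architecture
    return x

# With p_drop = 0.5, roughly half the layers run per forward pass, which is
# where the near-2x training speedup comes from.
random.seed(0)
layers = [lambda x: 1 for _ in range(16)]   # toy "layers" that each add 1
out = forward_with_layerdrop(0, layers, p_drop=0.5)
print(out)   # counts how many layers actually ran this pass
```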
| Model | Dataset | Layers | MNLI-m | MRPC | QNLI | SST-2 |
| --- | --- | --- | --- | --- | --- | --- |
| BERT | Books + Wiki | 6 | 81.9 | 84.8 | - | 91.3 |
| DistilBERT (Sanh, 2019) | Books + Wiki | 6 | 81.6 | 82.4 | 85.5 | 92.7 |
| RoBERTa | Books + Wiki | 6 | 82.3 | 82.5 | 89.7 | 92.1 |
| RoBERTa + LayerDrop | Books + Wiki | 6 | 82.9 | 85.3 | 89.4 | 92.5 |
| RoBERTa + LayerDrop | + more data | 6 | 84.1 | 86.1 | 89.5 | 93.2 |
| BERT | Books + Wiki | 3 | 77.9 | 79.8 | - | 88.4 |
| RoBERTa | Books + Wiki | 3 | 78.1 | 79.4 | 86.2 | 90.3 |
| RoBERTa + LayerDrop | Books + Wiki | 3 | 78.6 | 75.1 | 86.0 | 90.5 |
| RoBERTa + LayerDrop | + more data | 3 | 82.2 | 79.4 | 88.6 | 92.0 |
+ +![](images/9319273796bd788a77fe07fee5493757007ea7ba47636992f96c0f4201699c8e.jpg) +Figure 7: Effect of LayerDrop on Training Time + +Table 7: Comparison of BERT base, with and without distillation, against our RoBERTa base trained with LayerDrop. Our models are pruned before finetuning on each individual task. The BERT numbers are taken from Devlin et al. (2018).
| Model | Valid PPL |
| --- | --- |
| Pruned w/ LayerDrop | 20.78 |
| + Finetune | 20.56 |
+ +Table 8: Impact of additional finetuning on a 16 layer language model pruned to 8 layers. + +![](images/10c81b986624cb3f89801758ce5880c421b1c0469b906bcdff4d63168013deaa.jpg) +Figure 8: Effect of Train LayerDrop on Inference-time Pruning on MNLI, SST2, and QNLI + +![](images/9f9d40912e8582430a7184fb61f0c4ca14b2c181ab954a5672ab074f576cd6f5.jpg) + +![](images/f528053d43f8d321050ed749afb6988b2b3334f49892b18345273f5735aca58f.jpg) + +![](images/1eeba5351593ec11823eb83a4a10021cabc23827859a63f0687b41e53e997fbf.jpg) + +BERT: Relationship between LayerDrop at Training Time and Pruning at Inference Time Similar to the analysis on language modeling, we find that training with larger quantities of LayerDrop allows for more aggressive pruning at inference time on various natural language understanding tasks. However, as these tasks involve a finetuning step on the downstream task after pre-training, the effect is less straightforward. Results are shown in Figure 8. + +Impact of Finetuning. LayerDrop allows models to be pruned to the desired depth at test time. Apart from finetuning for data adaptation on the GLUE tasks, we do not finetune our smaller models on any of the other tasks considered in this work. As shown in Table 8, finetuning the pruned models yields only a marginal improvement. Further, the finetuning parameters depend on the depth of the model at test time and are difficult to optimize.
| LayerDrop | Dropout | Valid PPL |
| --- | --- | --- |
| 0.5 | 0.1 | 19.03 |
| 0.5 | 0.2 | 19.22 |
| 0.5 | 0.3 | 19.31 |
| 0.5 | 0.4 | 19.62 |
| 0.5 | 0.5 | 19.95 |
+ +Table 9: Performance Varying Dropout with Fixed LayerDrop on a 16 layer language model trained on Wikitext-103 (Valid). + +
| Structured Dropout | Valid PPL |
| --- | --- |
| Half FFN | 29.6 |
| Baseline | 28.3 |
| Head | 28.1 |
| Sublayer | 19.9 |
| Head + Sublayer | 19.8 |
| Layer | 19.7 |
| Head + Layer | 19.7 |
+ +Table 11: Performance Varying Structured Dropout and Pruning to an 8 layer language model trained on Wikitext-103 (Valid). Pruning is done by removing every other layer to half the model size. + +
| Model | Valid PPL |
| --- | --- |
| Adaptive Input* | 18.4 |
| Random LayerDrop 0.2 | 18.2 |
| Linear LayerDrop to 0.3 | 18.6 |
| Linear LayerDrop to 0.5 | 18.5 |
| Linear LayerDrop to 0.8 | 18.9 |
+ +Table 10: Random vs. Linear Decay LayerDrop on a 16 layer language model trained on Wikitext-103 (Valid). * result is from Baevski & Auli (2018). + +Effect of Varying Standard Dropout. LayerDrop adds a strong regularization effect to neural network training. We examine the importance of tuning the standard dropout parameter when training with LayerDrop. In Table 9, we show the performance when LayerDrop is fixed and standard dropout is varied. When training with LayerDrop, the quantity of standard dropout can be reduced. + +LayerDrop Schedule: Random or Linear. In Table 10, we compare the random structured dropping of layers to the linear decay schedule proposed by Huang et al. (2016). The linear decay schedule does not improve performance over random dropping, which is more straightforward to implement. + +Impact of Types of Structured Dropout when Pruning. Figure 4 (left) contrasts the performance of various forms of structured dropout, such as dropping attention heads, Transformer sub-layers (attention or FFN), portions of FFN matrices, and entire Transformer layers. It examines these results when evaluating the full-depth model on language modeling and shows that, in general, different types of structured dropout can improve performance. + +In Table 11, we examine the effect of the type of training-time structured dropout on inference-time pruning performance. The trend shown in Figure 4 is consistent with pruning performance: Half FFN dropout performs slightly worse, but other forms of structured dropout are beneficial.
\ No newline at end of file diff --git a/reducingtransformerdepthondemandwithstructureddropout/images.zip b/reducingtransformerdepthondemandwithstructureddropout/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..aa19df2e0ea736cdb2ff260de57e488d85dc97d0 --- /dev/null +++ b/reducingtransformerdepthondemandwithstructureddropout/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a3ba87638242af64cae91f5ec69f8b30259670a809d5c36875788596c644fcb9 +size 588202 diff --git a/reducingtransformerdepthondemandwithstructureddropout/layout.json b/reducingtransformerdepthondemandwithstructureddropout/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..db8f6f4d902f7910645d6b44d005196300f3636e --- /dev/null +++ b/reducingtransformerdepthondemandwithstructureddropout/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d007ad53d2e693f63a6ada9d975283dfb4bdfbcef15a2e3b78be44bb0f70a84e +size 413215 diff --git a/reformertheefficienttransformer/03b1143d-2161-4439-9449-d2e54d75c122_content_list.json b/reformertheefficienttransformer/03b1143d-2161-4439-9449-d2e54d75c122_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..bdc2ca4247380d755a4929cad3b7fbd91326beac --- /dev/null +++ b/reformertheefficienttransformer/03b1143d-2161-4439-9449-d2e54d75c122_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5822f823e072100eef2213bfabceea8ebd5cb061b42355cbfaf0c0638aec4440 +size 74069 diff --git a/reformertheefficienttransformer/03b1143d-2161-4439-9449-d2e54d75c122_model.json b/reformertheefficienttransformer/03b1143d-2161-4439-9449-d2e54d75c122_model.json new file mode 100644 index 0000000000000000000000000000000000000000..b540f30fe96c38991ab626e4a353d486be0db0fe --- /dev/null +++ b/reformertheefficienttransformer/03b1143d-2161-4439-9449-d2e54d75c122_model.json @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:85bfd27f59309200f255055f2f8fc83d6564c531f0dd0da220331e3c0eaf3462 +size 87832 diff --git a/reformertheefficienttransformer/03b1143d-2161-4439-9449-d2e54d75c122_origin.pdf b/reformertheefficienttransformer/03b1143d-2161-4439-9449-d2e54d75c122_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..bebc8208723d89836ab28fdae011f1a3390ba83c --- /dev/null +++ b/reformertheefficienttransformer/03b1143d-2161-4439-9449-d2e54d75c122_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:10b0d50c0c24a8e3f28445ea1002fd495c9ec640e6502d49e28277a278321235 +size 623413 diff --git a/reformertheefficienttransformer/full.md b/reformertheefficienttransformer/full.md new file mode 100644 index 0000000000000000000000000000000000000000..c8e1e076dd9199e805c01b3f437fada1cde72864 --- /dev/null +++ b/reformertheefficienttransformer/full.md @@ -0,0 +1,304 @@ +# REFORMER: THE EFFICIENT TRANSFORMER + +Nikita Kitaev* + +U.C. Berkeley & Google Research + +kitaev@cs.berkeley.edu + +Łukasz Kaiser* + +Google Research + +{lukaszkaiser,levskaya}@google.com + +Anselm Levskaya + +Google Research + +# ABSTRACT + +Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from $\mathrm{O}(L^2)$ to $\mathrm{O}(L\log L)$ , where $L$ is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of $N$ times, where $N$ is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences. 
+ +# 1 INTRODUCTION + +The Transformer architecture (Vaswani et al., 2017) is widely used in natural language processing and yields state-of-the-art results on a number of tasks. To obtain these results, researchers have resorted to training ever larger Transformer models. The number of parameters exceeds 0.5B per layer in the largest configuration reported in (Shazeer et al., 2018) while the number of layers goes up to 64 in (Al-Rfou et al., 2018). Transformer models are also used on increasingly long sequences. Up to 11 thousand tokens of text in a single example were processed in (Liu et al., 2018) and when processing other modalities, like music (Huang et al., 2018) and images (Parmar et al., 2018), even longer sequences are commonplace. These large-scale long-sequence models yield great results but strain resources to the point where some argue that this trend is breaking NLP research1. Many large Transformer models can only realistically be trained in large industrial research laboratories and such models trained with model parallelism cannot even be fine-tuned on a single GPU as their memory requirements demand a multi-accelerator hardware setup even for a single training step. + +Do large Transformer models fundamentally require such huge resources or are they simply inefficient? Consider the following calculation: the 0.5B parameters used in the largest reported Transformer layer account for 2GB of memory. Activations for 64K tokens with embedding size 1024 and batch size 8 account for $64\mathrm{K} \times 1\mathrm{K} \times 8 = 0.5\mathrm{B}$ floats, requiring another 2GB of memory. If our memory use was only per-layer, then we should fairly easily fit a large Transformer even on sequences of length 64K on a single accelerator. Further, the whole corpus used to train BERT only requires 17GB to store. Why is it then that we cannot even fine-tune these models on single machines? 
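The back-of-envelope numbers in this calculation can be checked with a few lines of Python. All figures are the paper's own (fp32 = 4 bytes per float; GB here means 10^9 bytes):

```python
# Reproducing the memory estimate from the text.
BYTES_PER_FLOAT = 4  # fp32

# 0.5B parameters in the largest reported Transformer layer.
param_bytes = int(0.5e9) * BYTES_PER_FLOAT

# Activations: 64K tokens x embedding size 1024 x batch size 8.
tokens, d_model, batch = 64 * 1024, 1024, 8
activation_floats = tokens * d_model * batch            # ~0.5B floats
activation_bytes = activation_floats * BYTES_PER_FLOAT

print(f"parameters:  {param_bytes / 1e9:.1f} GB")       # ~2 GB
print(f"activations: {activation_bytes / 1e9:.1f} GB")  # ~2 GB
```

So a single layer's weights plus the input activations indeed fit in about 4GB, which is why the per-layer view alone cannot explain the observed memory pressure.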
+ +The above estimate covers only per-layer memory and input activation cost and does not take into account the following major sources of memory use in the Transformer. + +- Memory in a model with $N$ layers is $N$-times larger than in a single-layer model because activations need to be stored for back-propagation. +- Since the depth $d_{ff}$ of intermediate feed-forward layers is often much larger than the depth $d_{model}$ of attention activations, it accounts for a large fraction of memory use. +- Attention on sequences of length $L$ is $\mathrm{O}(L^2)$ in both computational and memory complexity, so even a single sequence of 64K tokens can exhaust accelerator memory. + +We introduce the Reformer model, which solves these problems using the following techniques: + +- Reversible layers, first introduced in Gomez et al. (2017), enable storing only a single copy of activations in the whole model, so the $N$ factor disappears. +- Splitting activations inside feed-forward layers and processing them in chunks removes the $d_{ff}$ factor and saves memory inside feed-forward layers. +- Approximate attention computation based on locality-sensitive hashing replaces the $\mathrm{O}(L^2)$ factor in attention layers with $\mathrm{O}(L\log L)$ and so allows operating on long sequences. + +We study these techniques and show that they have negligible impact on the training process compared to the standard Transformer. Splitting activations in fact only affects the implementation; it is numerically identical to the layers used in the Transformer. Applying reversible residuals instead of the standard ones does change the model but has a negligible effect on training in all configurations we experimented with. Finally, locality-sensitive hashing in attention is a more substantial change that can influence the training dynamics, depending on the number of concurrent hashes used.
We study this parameter and find a value which is both efficient to use and yields results very close to full attention. + +We experiment on a synthetic task, a text task (enwik8) with sequences of length 64K and an image generation task (imagenet-64 generation) with sequences of length 12K. In all cases we show that Reformer matches the results obtained with the full Transformer but runs much faster, especially on the text task, and with orders of magnitude better memory efficiency. + +# 2 LOCALITY-SENSITIVE HASHING ATTENTION + +Dot-product attention. The standard attention used in the Transformer is the scaled dot-product attention (Vaswani et al., 2017). The input consists of queries and keys of dimension $d_{k}$, and values of dimension $d_{v}$. The dot products of the query with all keys are computed, scaled by $\sqrt{d_k}$, and a softmax function is applied to obtain the weights on the values. In practice, the attention function on a set of queries is computed simultaneously, packed together into a matrix $Q$. Assuming the keys and values are also packed together into matrices $K$ and $V$, the matrix of outputs is defined as: + +$$ +\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V \tag{1} +$$ + +Multi-head attention. In the Transformer, instead of performing a single attention function with $d_{model}$-dimensional keys, values and queries, one linearly projects the queries, keys and values $h$ times with different, learned linear projections to $d_k$, $d_k$ and $d_v$ dimensions, respectively. Attention is applied to each of these projected versions of queries, keys and values in parallel, yielding $d_v$-dimensional output values. These are concatenated and once again projected, resulting in the final values. This mechanism is known as multi-head attention.
To calculate the memory use of the attention mechanism, let us focus on the attention computation from Equation 1. Let us assume that Q, K and V all have the shape [batch_size, length, $d_{model}$]. The main issue is the term $QK^T$, which has the shape [batch_size, length, length]. In the experimental section we train a model on sequences of length $64K$ – in this case, even at a batch size of 1, this is a $64K \times 64K$ matrix, which in 32-bit floats would take 16GB of memory. This is impractical and has hindered the use of the Transformer for long sequences. But it is important to note that the $QK^T$ matrix does not need to be fully materialized in memory. The attention can indeed be computed for each query $q_i$ separately, only calculating $\mathrm{softmax}(\frac{q_iK^T}{\sqrt{d_k}})V$ once in memory, and then re-computing it on the backward pass when needed for gradients. This way of computing attention may be less efficient but it only uses memory proportional to length. We use this memory-efficient implementation of attention to run the full-attention baselines presented in the experimental section. + +Where do Q, K, V come from? The multi-head attention described above operates on keys, queries and values, but usually we are only given a single tensor of activations A of the shape [batch_size, length, $d_{model}$] – e.g., coming from embedding the tokens in a sentence into vectors. + +![](images/1531e5b8ff848be21dd68aeb5485fc87f981bbc4e3d06ed95d6a5cf3e1ea192e.jpg) +Figure 1: An angular locality sensitive hash uses random rotations of spherically projected points to establish buckets by an argmax over signed axes projections. In this highly simplified 2D depiction, two points $x$ and $y$ are unlikely to share the same hash buckets (above) for the three different angular hashes unless their spherical projections are close to one another (below).
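The per-query formulation can be illustrated with a NumPy sketch (toy sizes, single head, no batching; a sketch rather than the paper's implementation). It shows that computing attention one query row at a time matches the full computation while never materializing the $L \times L$ score matrix:

```python
import numpy as np

def attention_full(q, k, v):
    """Standard attention: materializes the full (L, L) score matrix."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)               # (L, L) -- the memory bottleneck
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def attention_per_query(q, k, v):
    """Memory-efficient variant: one (L,) score row at a time."""
    d_k = q.shape[-1]
    out = np.empty_like(v)
    for i in range(q.shape[0]):
        s = q[i] @ k.T / np.sqrt(d_k)             # (L,), never (L, L)
        w = np.exp(s - s.max())
        out[i] = (w / w.sum()) @ v
    return out

rng = np.random.default_rng(0)
L, d = 16, 8
q, k, v = rng.normal(size=(3, L, d))
assert np.allclose(attention_full(q, k, v), attention_per_query(q, k, v))
```

As the text notes, the per-row version trades compute (re-computation on the backward pass) for memory linear in the sequence length.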
+ +To build Q, K and V from A, the Transformer uses 3 different linear layers projecting A into Q, K and V with different parameters. For models with LSH attention, we want queries and keys (Q and K) to be identical. This is easily achieved by using the same linear layer to go from A to Q and K, and a separate one for V. We call a model that behaves like this a shared-QK Transformer. It turns out that sharing QK does not affect the performance of the Transformer, even if we additionally normalize the length of the keys K, as we show in the experimental Section 5. + +Hashing attention. For the LSH attention, we start with two tensors, $\mathrm{Q} = \mathrm{K}$ and $\mathrm{V}$ of the shape [batch_size, length, $d_{\text{model}}$]. We keep the multi-head mechanism intact and focus on the attention computation from Equation 1. As already mentioned, the main issue is the term $QK^T$, which has the shape [batch_size, length, length]. But note that we are actually only interested in $\text{softmax}(QK^T)$. Since softmax is dominated by the largest elements, for each query $q_i$ we only need to focus on the keys in $\mathbf{K}$ that are closest to $q_i$. For example, if $\mathbf{K}$ is of length 64K, for each $q_i$ we could consider only a small subset of, say, the 32 or 64 closest keys. That is much more efficient, but how can we find the nearest neighbors among the keys? + +Locality sensitive hashing. The problem of finding nearest neighbors quickly in high-dimensional spaces can be solved by locality-sensitive hashing (LSH). A hashing scheme that assigns each vector $x$ to a hash $h(x)$ is called locality-sensitive if nearby vectors get the same hash with high probability and distant ones do not. In our case, we actually only require that nearby vectors get the same hash with high probability and that hash buckets are of similar size with high probability. + +We achieve this by employing random projections as follows (see Figure 1).
To get $b$ hashes, we first fix a random matrix $R$ of size $[d_k, b/2]$. We then define $h(x) = \arg \max([xR; -xR])$ where $[u; v]$ denotes the concatenation of two vectors. This method is a known LSH scheme (Andoni et al., 2015) and is easy to implement and apply to batches of vectors. + +LSH attention. Knowing our LSH scheme and the general idea of hashing attention, we will now formalize the LSH attention we use in this paper. We first rewrite the equation for normal attention, (1), for a single query position $i$ at a time: + +$$ +o_i = \sum_{j \in \mathcal{P}_i} \exp\left(q_i \cdot k_j - z(i, \mathcal{P}_i)\right) v_j \quad \text{where } \mathcal{P}_i = \{j : i \geq j\} \tag{2} +$$ + +We introduce the notation $\mathcal{P}_i$ to represent the set that the query at position $i$ attends to, and $z$ to denote the partition function (i.e. the normalizing term in the softmax). For clarity, we also omit scaling by $\sqrt{d_k}$. + +For batching purposes we typically perform attention over a larger set $\widetilde{\mathcal{P}}_i = \{0,1,\dots ,l\} \supseteq \mathcal{P}_i$ while masking out elements not in $\mathcal{P}_i$: + +$$ +o_i = \sum_{j \in \widetilde{\mathcal{P}}_i} \exp\left(q_i \cdot k_j - m(j, \mathcal{P}_i) - z(i, \mathcal{P}_i)\right) v_j \quad \text{where } m(j, \mathcal{P}_i) = \begin{cases} \infty & \text{if } j \notin \mathcal{P}_i \\ 0 & \text{otherwise} \end{cases} \tag{3} +$$ + +![](images/d8db596c5b76d29a4745034b08c46f37c5f2307dbff244b9ccd5319789866f90.jpg) +Figure 2: Simplified depiction of LSH Attention showing the hash-bucketing, sorting, and chunking steps and the resulting causal attentions. (a-d) Attention matrices for these varieties of attention.
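The hashing scheme $h(x) = \arg\max([xR; -xR])$ is easy to sketch in NumPy (toy dimensions; a sketch, not the paper's implementation). A useful property visible in the code: the hash is invariant to positive rescaling of $x$, which is what makes it an angular (direction-only) hash:

```python
import numpy as np

def lsh_hash(x, R):
    """Angular LSH from the text: h(x) = argmax([xR; -xR]).
    x: (n, d_k) vectors; R: (d_k, b/2) random matrix -> bucket ids in [0, b)."""
    xR = x @ R
    return np.argmax(np.concatenate([xR, -xR], axis=-1), axis=-1)

rng = np.random.default_rng(0)
d_k, n_buckets = 8, 16                 # n_buckets = b, so R has b/2 columns
R = rng.normal(size=(d_k, n_buckets // 2))

x = rng.normal(size=(4, d_k))
buckets = lsh_hash(x, R)

# Positive rescaling leaves the argmax -- and hence the bucket -- unchanged.
assert (lsh_hash(3.0 * x, R) == buckets).all()
print(buckets)
```

Nearby vectors land in the same bucket only with high probability, not with certainty, which is exactly why the multi-round variant described later repeats the hash with independent matrices.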
+ +![](images/ca5686db31bde8a9f26c54f6e8f9370a8456cb776592af3c40e06ff2521fc052.jpg) + +![](images/55ce100abc20441e6dee08e5039b45b283e6432b98d0d4f3baecb9c80d36946b.jpg) + +![](images/f874caefd32393d631a7b77ae4245115f95b938f91d7d0a2eb8bed1d547f2348.jpg) + +![](images/49aa7dcca0e839bac59bceb0664dbd70f7414edb9fdf01482cff31b9f62d3c6f.jpg) + +Now we turn to LSH attention, which we can think of in terms of restricting the set $\mathcal{P}_i$ of target items a query position $i$ can attend to, by only allowing attention within a single hash bucket. + +$$ +\mathcal {P} _ {i} = \{j: h \left(q _ {i}\right) = h \left(k _ {j}\right) \} \tag {4} +$$ + +Figure 2(a-b) shows a schematic comparison of full-attention with a hashed variant. Part (a) depicts that the attention matrix for full attention is typically sparse, but the computation does not take advantage of this sparsity. In (b), the queries and keys have been sorted according to their hash bucket. Since similar items fall in the same bucket with high probability, the full attention pattern can be approximated by only allowing attention within each bucket. + +Hash buckets in this formulation tend to be uneven in size, which makes it difficult to batch across buckets. Moreover, the number of queries and the number of keys within a bucket may be unequal – in fact, it is possible for a bucket to contain many queries but no keys. To alleviate these issues, we first ensure that $h(k_{j}) = h(q_{j})$ by setting $k_{j} = \frac{q_{j}}{\|q_{j}\|}$ . Next, we sort the queries by bucket number and, within each bucket, by sequence position; this defines a permutation where $i \mapsto s_i$ after sorting. In the sorted attention matrix, pairs from the same bucket will cluster near the diagonal (as depicted in Figure 2c). We can follow a batching approach where chunks of $m$ consecutive queries (after sorting) attend to each other, and one chunk back (Figure 2d). 
Following our earlier notation, this corresponds to setting:

$$
\widetilde{\mathcal{P}}_i = \left\{j: \left\lfloor \frac{s_i}{m} \right\rfloor - 1 \leq \left\lfloor \frac{s_j}{m} \right\rfloor \leq \left\lfloor \frac{s_i}{m} \right\rfloor \right\} \tag{5}
$$

If $\max_i |\mathcal{P}_i| < m$ , then $\mathcal{P}_i \subseteq \widetilde{\mathcal{P}}_i$ . In practice we set $m = \frac{2l}{n_{\text{buckets}}}$ (where $l$ is the sequence length). The average bucket size is $\frac{l}{n_{\text{buckets}}}$ , and we assume that the probability of a bucket growing to twice that size is sufficiently low. The overall process of LSH attention is summarized in Figure 2.

Multi-round LSH attention. With hashing, there is always a small probability that similar items nevertheless fall in different buckets. This probability can be reduced by doing multiple rounds of hashing with $n_{\text{rounds}}$ distinct hash functions $\{h^{(1)}, h^{(2)}, \ldots\}$ , such that:

$$
\mathcal{P}_i = \bigcup_{r=1}^{n_{\text{rounds}}} \mathcal{P}_i^{(r)} \quad \text{where } \mathcal{P}_i^{(r)} = \left\{j: h^{(r)}(q_i) = h^{(r)}(q_j)\right\} \tag{6}
$$

The multi-round case essentially involves performing LSH attention $n_{\text{rounds}}$ times in parallel; the details of the procedure are described in Appendix A.

Causal masking for shared-QK attention. In a Transformer decoder, masking (denoted by $m(j,\mathcal{P}_i)$ in Equation 3) is used to prevent positions from attending into the future. To implement masking in LSH attention, we associate every query/key vector with a position index, re-order the position indices using the same permutations used to sort the query/key vectors, and then use a comparison operation to compute the mask.

Table 1: Memory and time complexity of attention variants.
We write $l$ for length, $b$ for batch size, ${n}_{h}$ for the number of heads, ${n}_{c}$ for the number of LSH chunks, ${n}_{r}$ for the number of hash repetitions. + +
| Attention Type | Memory Complexity | Time Complexity |
| --- | --- | --- |
| Scaled Dot-Product | $\max(bn_h ld_k, bn_h l^2)$ | $\max(bn_h ld_k, bn_h l^2)$ |
| Memory-Efficient | $\max(bn_h ld_k, bn_h l)$ | $\max(bn_h ld_k, bn_h l^2)$ |
| LSH Attention | $\max(bn_h ld_k, bn_h n_r l(4l/n_c)^2)$ | $\max(bn_h ld_k, bn_h n_r l(4l/n_c)^2)$ |
Table 2: Accuracies on the duplication task of a 1-layer Transformer model with full attention and with locality-sensitive hashing attention using different numbers of parallel hashes.
| Train \ Eval | Full Attention | LSH-8 | LSH-4 | LSH-2 | LSH-1 |
| --- | --- | --- | --- | --- | --- |
| Full Attention | 100% | 94.8% | 92.5% | 76.9% | 52.5% |
| LSH-4 | 0.8% | 100% | 99.9% | 99.4% | 91.9% |
| LSH-2 | 0.8% | 100% | 99.9% | 98.1% | 86.8% |
| LSH-1 | 0.8% | 99.9% | 99.6% | 94.8% | 77.9% |
+ +While attention to the future is not allowed, typical implementations of the Transformer do allow a position to attend to itself. Such behavior is undesirable in a shared-QK formulation because the dot-product of a query vector with itself will almost always be greater than the dot product of a query vector with a vector at another position. We therefore modify the masking to forbid a token from attending to itself, except in situations where a token has no other valid attention targets (e.g. the first token in a sequence). + +# 2.1 ANALYSIS ON A SYNTHETIC TASK + +To verify the performance of LSH attention and study its behavior, we start with the following synthetic task: duplicate a sequence of symbols. In this task, each training and testing example has the form $0w0w$ where $w \in \{1, \dots, N\}^*$ is a sequence of symbols ranging from 1 to $N$ (we use $N = 127$ in our experiments). An example with the word $w$ of length 3 is given below. + +
Example: 0 19 113 72 0 19 113 72
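A data generator for this duplication task takes only a few lines of NumPy (a sketch; the function and argument names are ours, not from the paper's code):

```python
import numpy as np

def duplication_example(w_len=511, n_symbols=127, seed=None):
    """One 0w0w example: w has symbols in {1, ..., n_symbols}.
    Defaults match the paper's setting: total input length 2*(w_len+1) = 1024."""
    rng = np.random.default_rng(seed)
    w = rng.integers(1, n_symbols + 1, size=w_len)
    x = np.concatenate([[0], w, [0], w])
    loss_mask = np.arange(len(x)) >= len(x) // 2   # score only the second half
    return x, loss_mask
```

The `loss_mask` reflects the evaluation protocol described below: only the second half of the input is predictable, so loss and accuracy are computed there.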
To study LSH attention, we train a language model on examples of the above form where each $w$ is of length 511 (so the whole input $0w0w$ is of length 1024). As this is a language modeling task, we always predict the next symbol given all the previous ones, but we mask the loss and accuracy to only consider positions in the second half of the input, i.e., those that can actually be predicted.

The above task can be solved perfectly (to accuracy $100\%$ and loss 0) by a 1-layer Transformer model. Note, though, that it requires non-local attention lookups, so it cannot be solved by any model relying on sparse attention with a limited span. To make it easy and fast to train but similar to models used in NLP, we use a 1-layer Transformer with $d_{model} = d_{ff} = 256$ and 4 heads. We train it for 150K steps in 4 different settings: with full attention, and with LSH attention with $n_{rounds} = 1$ , $n_{rounds} = 2$ , and $n_{rounds} = 4$ .

From the results summarized in Table 2 we see that a model trained with full attention can be immediately used with LSH attention, but at some loss of accuracy. When trained from scratch with LSH attention, the model trained with 4 hashes achieves almost perfect accuracy as well. Interestingly, the accuracy becomes perfect when evaluated with 8 hashes, and goes down when evaluated with 2 or 1 hashes. Models trained with fewer hashes show worse results, but even the model trained with just 1 hash performs almost perfectly when evaluated with 8 hashes.

# 3 REVERSIBLE TRANSFORMER

As the above section shows, the complexity of attention can be reduced from quadratic in the sequence length to linear, provided an approximation is acceptable. But it is clear from Table 1 that each field starts with a $b \cdot n_h \cdot l$ term: the $b \cdot n_h \cdot l \cdot d_k$ (or, alternatively, $b \cdot l \cdot d_{model}$ ) cost cannot be avoided.
Indeed, the activations before each layer are already of size $b \cdot l \cdot d_{model}$ , so the memory use of the whole model with $n_l$ layers is at least $b \cdot l \cdot d_{model} \cdot n_l$ . Even worse: inside the feed-forward layers of the Transformer this goes up to $b \cdot l \cdot d_{ff} \cdot n_l$ . In a big Transformer it is usual to set $d_{ff} = 4K$ and $n_l = 16$ , so with $l = 64K$ this would again use an impractical 16GB of memory.

In this section, we show how to reduce this cost by first dealing with the $n_l$ part of the term using reversible layers and then showing how chunking can allow us to handle the $d_{ff}$ problem. The effects of each of these approaches on memory and time complexity are summarized in Table 3.

RevNets. Reversible residual networks were introduced by Gomez et al. (2017), who showed that they can replace ResNets for image classification. The main idea is to allow the activations at any given layer to be recovered from the activations at the following layer, using only the model parameters. Rather than having to checkpoint intermediate values for use in the backward pass, layers can be reversed one-by-one as back-propagation proceeds from the output of the network to its input. Whereas a normal residual layer performs a function $x \mapsto y$ of the form $y = x + F(x)$ , operating on a single input and producing a single output, a reversible layer works on pairs of inputs/outputs, $(x_1, x_2) \mapsto (y_1, y_2)$ , and follows the equations:

$$
y_1 = x_1 + F(x_2) \quad y_2 = x_2 + G(y_1) \tag{7}
$$

A layer can be reversed by subtracting (rather than adding) the residuals:

$$
x_2 = y_2 - G(y_1) \quad x_1 = y_1 - F(x_2) \tag{8}
$$

Reversible Transformer. We apply the RevNet idea to the Transformer by combining the attention and feed-forward layers inside the revnet block.
In the notation above, $F$ becomes an attention layer while $G$ becomes the feed-forward layer. Note that Layer Normalization (Ba et al., 2016) is moved inside the residual blocks.

$$
Y_1 = X_1 + \operatorname{Attention}(X_2) \quad Y_2 = X_2 + \operatorname{FeedForward}(Y_1) \tag{9}
$$

The reversible Transformer does not need to store activations in each layer and so gets rid of the $n_l$ term. In Section 5 we show that it performs the same as the normal Transformer when using the same number of parameters; we achieve this by having both $x_1$ and $x_2$ have size $d_{model}$ .

Chunking. While reversibility covers the $n_l$ term, the thicker layers can still use a lot of memory. The feed-forward layer in particular can use intermediate vectors of dimensionality $d_{ff} = 4K$ or higher. However, computations in feed-forward layers are completely independent across positions in a sequence, so the computation can be split into $c$ chunks:

$$
Y_2 = \left[Y_2^{(1)}; \dots; Y_2^{(c)}\right] = \left[X_2^{(1)} + \operatorname{FeedForward}(Y_1^{(1)}); \dots; X_2^{(c)} + \operatorname{FeedForward}(Y_1^{(c)})\right] \tag{10}
$$

This layer is typically batched by performing operations for all positions in parallel, but operating on one chunk at a time can reduce memory. The reverse computation in (8) and the backward pass are also chunked. In addition to the feed-forward layers, for models with large vocabulary (more than $d_{model}$ word types) we also chunk the log-probabilities at the output and calculate the loss for sections of the sequence at a time.

Chunking, large batches and parameter reuse. With chunking and reversible layers the memory we use for activations in the whole network is independent of the number of layers.
The same is not true for parameters, though: their number grows with the number of layers. We remedy this by swapping layer parameters to and from CPU memory while a layer is not computing. In a standard Transformer this would be inefficient because memory transfer to CPU is slow; in the Reformer, however, the batch size multiplied by the sequence length is much larger, so the amount of compute done with the parameters amortizes the cost of their transfer.

Table 3: Memory and time complexity of Transformer variants. We write $d_{model}$ and $d_{ff}$ for the model and feed-forward dimensions and assume $d_{ff} \geq d_{model}$ ; $b$ stands for batch size, $l$ for length, $n_l$ for the number of layers. We assume $n_c = l/32$ , so $4l/n_c = 128$ , and we write $c = 128^2$ .
| Model Type | Memory Complexity | Time Complexity |
| --- | --- | --- |
| Transformer | $\max(bld_{ff}, bn_h l^2)n_l$ | $(bld_{ff} + bn_h l^2)n_l$ |
| Reversible Transformer | $\max(bld_{ff}, bn_h l^2)$ | $(bld_{ff} + bn_h l^2)n_l$ |
| Chunked Reversible Transformer | $\max(bld_{model}, bn_h l^2)$ | $(bld_{ff} + bn_h l^2)n_l$ |
| LSH Transformer | $\max(bld_{ff}, bn_h ln_r c)n_l$ | $(bld_{ff} + bn_h n_r lc)n_l$ |
| Reformer | $\max(bld_{model}, bn_h ln_r c)$ | $(bld_{ff} + bn_h n_r lc)n_l$ |
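To make the reversibility and chunking ideas concrete, here is a toy NumPy sketch (all names are ours; `F` and `G` stand in for the attention and feed-forward blocks): the forward pass of Eq. (7), exact input reconstruction per Eq. (8), and position-wise chunking of `G` per Eq. (10).

```python
import numpy as np

rng = np.random.default_rng(0)
W_f, W_g = rng.normal(size=(2, 8, 8)) * 0.1
F = lambda x: np.tanh(x @ W_f)      # stand-in for Attention(.)
G = lambda x: np.tanh(x @ W_g)      # stand-in for FeedForward(.)

def rev_forward(x1, x2):
    # Eq. (7): y1 = x1 + F(x2), y2 = x2 + G(y1)
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def rev_inverse(y1, y2):
    # Eq. (8): recover inputs from outputs, so no activations need storing
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

def G_chunked(x, c):
    # Eq. (10): G acts per position, so splitting along positions is exact
    return np.concatenate([G(part) for part in np.array_split(x, c)])
```

Running the inverse on the forward outputs recovers the inputs to numerical precision, which is the whole point: the backward pass can reconstruct activations layer by layer instead of storing them.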
+ +# 4 RELATED WORK + +The Transformer model introduced in (Vaswani et al., 2017) has been used widely in natural language tasks and further extended to model diverse data such as music scores (Huang et al., 2018), and images (Parmar et al., 2018; Ramachandran et al., 2019). Most notably, this model class has been applied successfully in the self-supervised training of extremely large language models (Devlin et al., 2018; Radford et al., 2019). + +Given the enormous computational requirements of state of the art sequence models, there has been increasing interest in finding methods to reduce the memory footprint and computational requirements of Transformer models. In addition to standard methods such as precision reduction and gradient checkpointing (Sohoni et al., 2019), more efficient versions of the Transformer model's self-attention mechanism (Sukhbaatar et al., 2019a;b) have also recently been explored. + +In particular, leveraging sparsity in the attention layers has proved fruitful. OpenAI introduced the sparse Transformer (Child et al., 2019) which exploits a factorized sparse representation of attention. Using product-key attention to increase the key space has also been used to reduce memory requirements in the feed-forward layers with no loss in performance (Lample et al., 2019). + +Locality-sensitive hashing (LSH) has, to our knowledge, not been directly applied to Transformer attention layers before. But previous work using external memory with neural networks has dealt with memories of large sizes. The original implementation of memory networks (Weston et al., 2014) and later work on scaling it (Bordes et al., 2015; Chandar et al., 2016) used memory with size in the millions. The cost of doing so is that the memory must be fixed prior to training. Moreover, since during the beginning of training the model is unlikely to query the memory correctly, strong supervision is used to encourage the model to query memory locations that are useful. 
These hints are either given as additional supervision by the task or determined heuristically as in Hill et al. (2015). The requirement that the memory be fixed prior to training was removed in Santoro et al. (2016) at the cost of memory size, and later alleviated by Rae et al. (2016). The last paper considered memory lookups with approximate nearest neighbors, including both LSH and random kd-trees, but only for lookups in external memory.

# 5 EXPERIMENTS

In this section we present experimental results demonstrating the techniques described above. We analyze the techniques one-by-one to make clear which combinations have impact on performance. We start by showing that reversible layers and shared query-key spaces do not impact performance, then proceed to analyze hashing attention and finally the full Reformer model.

We ran our experiments on the imagenet64 and enwik8-64K tasks, where the latter is a variant of enwik8 that is chunked into subsequences of $2^{16} = 64K$ tokens. We use 3-layer models for our ablations so as to make it tractable to compare with the regular Transformer, which has high memory usage and performs full $O(l^2)$ attention. All experiments have $d_{model} = 1024$ , $d_{ff} = 4096$ , $n_{heads} = 8$ , and a total batch size of 8 sequences. We used the Adafactor optimizer (Shazeer & Stern, 2018) for training these models. We also evaluate on the WMT 2014 English-to-German translation task, following the hyperparameters of Vaswani et al. (2017). Training for all experiments

![](images/7297961e091e865731d82461f0096d58c94d43d975d46a924675b3e55203b4bc.jpg)

![](images/4d2e7ac93bd699b725df42cb13c80d1a61e37e2c24d4e464f5a045deb67bb5f1.jpg)

![](images/395b936052fa4387c72774452e08e6b6356e842dcbb47a1d2ff8b377456124bf.jpg)
Figure 3: Effect of shared query-key space (left) and reversibility (right) on performance on enwik8 and imagenet64 training. The curves show bits per dim on held-out data.
![](images/da871c8ade9da2abbb0f7edc69a26c9f4d790f8e2d39ec83c1c8dddf0e73bb0f.jpg)

Table 4: BLEU scores on newstest2014 for WMT English-German (En-De). We additionally report detokenized BLEU scores as computed by sacreBLEU (Post, 2018).
| Model | BLEU | sacreBLEU (uncased) | sacreBLEU (cased) |
| --- | --- | --- | --- |
| Vaswani et al. (2017), base model | 27.3 | | |
| Vaswani et al. (2017), big | 28.4 | | |
| Ott et al. (2018), big | 29.3 | | |
| Reversible Transformer (base, 100K steps) | 27.6 | 27.4 | 26.9 |
| Reversible Transformer (base, 500K steps, no weight sharing) | 28.0 | 27.9 | 27.4 |
| Reversible Transformer (big, 300K steps, no weight sharing) | 29.1 | 28.9 | 28.4 |
was parallelized across 8 devices (8 GPUs or 8 TPU v3 cores). Code for training our models is made publicly available.

Effect of sharing QK. We first consider the effect of shared-QK attention on a regular Transformer model. Shared-QK attention sets $k_{j} = \frac{q_{j}}{\|q_{j}\|}$ and prevents tokens from attending to themselves (except when no other context is available). In the left part of Figure 3, we plot perplexity curves for both regular and shared-QK attention. A shared query-key space does not perform worse than regular attention; in fact, for enwik8 it appears to train slightly faster. In other words, we are not sacrificing accuracy by switching to shared-QK attention.

Effect of reversible layers. In the two plots on the right in Figure 3, we compare a regular Transformer per Vaswani et al. (2017) with the reversible one described in Section 3. The two models have identical parameter counts, and the learning curves likewise appear to be nearly the same. These results show that the memory savings in the reversible Transformer do not come at the expense of accuracy.

Reversible layers in machine translation. We also evaluate reversible layers in the context of an encoder-decoder Transformer model for machine translation from English to German. We start by making both the encoder and the decoder fully reversible in the Transformer-base architecture, and

![](images/7736ac8e603c99db39552882b707a3f02dd55d59f77c9168ee92ae1a323ea538.jpg)
Figure 4: LSH attention performance as a function of hashing rounds on imagenet64.

![](images/56f3c5ae38754b95c6af92c9623d1c9e6e33992ad16a495a1e16f0df7aa9b535.jpg)
Figure 5: Left: LSH attention performance as a function of number of layers on enwik8. Right: Speed of attention evaluation as a function of input length for full- and LSH-attention.

![](images/53d3418baf095a3b08f7480057121b89f8f63176de4601b2b2594b5609abbab7.jpg)

see that the resulting model performs comparably to Vaswani et al.
(2017) when trained for 100K steps. We also evaluate training for a greater number of steps and with a larger model. Reformer models are very memory-efficient, so for the latter two experiments we do not need to save memory by sharing embedding and output projection weight matrices throughout the model. Results are shown in Table 4. We do not apply LSH attention in this setting because examples are single sentences, and sentences tend to be relatively short. Our typical LSH attention configuration uses chunks of 128 tokens after hashing and sorting, whereas the examples in the WMT14 test set are all shorter than 128 tokens. + +LSH attention in Transformer. LSH attention is an approximation for full attention that, as evidenced in Figure 4, becomes more accurate as the number of hashes increases. At $n_{rounds} = 8$ , it already almost matches full attention. The computational cost of a model grows with the number of hashes, so this hyperparameter can be adjusted depending on the available compute budget. Additionally, as in Table 2, the number of hashes can be increased at evaluation time to produce more accurate results. On the right half of Figure 5, we plot the speed of different attention types vs. the sequence length, while holding the total number of tokens fixed. We see that while regular attention becomes slower at longer sequence length, LSH attention speed remains flat. + +Large Reformer models. To verify that the Reformer can indeed fit large models on a single core and train fast on long sequences, we train up to 20-layer big Reformers on enwik8 and imagenet64. As can be seen in Figure 5, these models fit into memory and train. We were not able to train Transformer baselines in this case as they are too slow and memory-hungry, but we see clear improvement with the number of layers. A 12-layer model on enwik8 trained for 20K steps with a dropout rate of 0.1 achieves 1.19 bits/dim on the test set. 
We also trained a 12-layer Reformer model for longer with further tuning and improvements, reaching 1.05 bits/dim on the enwik8 test set.

# 6 CONCLUSION

Reformer combines the modeling capacity of a Transformer with an architecture that can be executed efficiently on long sequences and with small memory use, even for models with a large number of layers. We believe that this will help large, richly-parameterized Transformer models become more widespread and accessible. Also, the ability to handle long sequences opens the way for the use of the Reformer on many generative tasks. In addition to generating very long coherent text, the Reformer can bring the power of Transformer models to other domains like time-series forecasting, music, image and video generation.

# REFERENCES

Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, and Llion Jones. Character-level language modeling with deeper self-attention. CoRR, abs/1808.04444, 2018. URL http://arxiv.org/abs/1808.04444.
Alexandr Andoni, Piotr Indyk, Thijs Laarhoven, Ilya P. Razenshteyn, and Ludwig Schmidt. Practical and optimal LSH for angular distance. CoRR, abs/1509.02897, 2015. URL http://arxiv.org/abs/1509.02897.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016. URL http://arxiv.org/abs/1607.06450.
Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. CoRR, abs/1506.02075, 2015. URL http://arxiv.org/abs/1506.02075.
Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, and Yoshua Bengio. Hierarchical memory networks. arXiv preprint arXiv:1605.07427, 2016.
Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. URL https://openai.com/blog/sparse-transformers, 2019.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.
BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018. URL http://arxiv.org/abs/1810.04805. +Aidan N Gomez, Mengye Ren, Raquel Urtasun, and Roger B Grosse. The reversible residual network: Backpropagation without storing activations. In Advances in neural information processing systems, pp. 2214-2224, 2017. +Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Reading children's books with explicit memory representations. CoRR, abs/1511.02301, 2015. URL http://arxiv.org/abs/1511.02301. +Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit, Noam Shazeer, Curtis Hawthorne, Andrew M Dai, Matthew D Hoffman, and Douglas Eck. Music transformer: Generating music with long-term structure. arXiv preprint arXiv:1809.04281, 2018. +Guillaume Lample, Alexandre Sablayrolles, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. Large memory layers with product keys. CoRR, abs/1907.05242, 2019. URL http://arxiv.org/abs/1907.05242. +Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. Generating wikipedia by summarizing long sequences. CoRR, abs/1801.10198, 2018. URL http://arxiv.org/abs/1801.10198. +Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pp. 1-9, Brussels, Belgium, October 2018. Association for Computational Linguistics. doi: 10.18653/v1/W18-6301. URL https://www.aclweb.org/anthology/W18-6301. + +Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, and Alexander Ku. Image transformer. CoRR, abs/1802.05751, 2018. URL http://arxiv.org/abs/1802.05751. +Matt Post. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pp. 186-191, Belgium, Brussels, October 2018. Association for Computational Linguistics. 
URL https://www.aclweb.org/anthology/W18-6319.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019.
Jack W Rae, Jonathan J Hunt, Tim Harley, Ivo Danihelka, Andrew Senior, Greg Wayne, Alex Graves, and Timothy P Lillicrap. Scaling memory-augmented neural networks with sparse reads and writes. In Advances in Neural Information Processing Systems (NIPS), 2016.
Prajit Ramachandran, Niki Parmar, Ashish Vaswani, Irwan Bello, Anselm Levskaya, and Jonathon Shlens. Stand-alone self-attention in vision models. CoRR, abs/1906.05909, 2019. URL http://arxiv.org/abs/1906.05909.
Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy P. Lillicrap. One-shot learning with memory-augmented neural networks. CoRR, abs/1605.06065, 2016. URL http://arxiv.org/abs/1605.06065.
Noam Shazeer and Mitchell Stern. Adafactor: Adaptive learning rates with sublinear memory cost. CoRR, abs/1804.04235, 2018. URL http://arxiv.org/abs/1804.04235.
Noam Shazeer, Youlong Cheng, Niki Parmar, Dustin Tran, Ashish Vaswani, Penporn Koanantakool, Peter Hawkins, HyoukJoong Lee, Mingsheng Hong, Cliff Young, Ryan Sepassi, and Blake Hechtman. Mesh-TensorFlow: Deep learning for supercomputers. CoRR, abs/1811.02084, 2018. URL http://arxiv.org/abs/1811.02084.
Nimit Sharad Sohoni, Christopher Richard Aberger, Megan Leszczynski, Jian Zhang, and Christopher Ré. Low-memory neural network training: A technical report. CoRR, abs/1904.10631, 2019. URL http://arxiv.org/abs/1904.10631.
Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, and Armand Joulin. Adaptive attention span in transformers. CoRR, abs/1905.07799, 2019a. URL http://arxiv.org/abs/1905.07799.
Sainbayar Sukhbaatar, Edouard Grave, Guillaume Lample, Hervé Jégou, and Armand Joulin. Augmenting self-attention with persistent memory. CoRR, abs/1907.01470, 2019b. URL http://arxiv.org/abs/1907.01470.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. CoRR, abs/1706.03762, 2017. URL http://arxiv.org/abs/1706.03762.
Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. CoRR, abs/1410.3916, 2014. URL http://arxiv.org/abs/1410.3916.

# A MULTI-ROUND LSH ATTENTION

In this section we describe in more detail the multi-hash version of our LSH attention mechanism. We first repeat Equation (3) from the main text, which describes a general formulation of attention with sparsity:

$$
o_i = \sum_{j \in \widetilde{\mathcal{P}}_i} \exp\left(q_i \cdot k_j - m(j, \mathcal{P}_i) - z(i, \mathcal{P}_i)\right) v_j \quad \text{where } m(j, \mathcal{P}_i) = \begin{cases} \infty & \text{if } j \notin \mathcal{P}_i \\ 0 & \text{otherwise} \end{cases} \tag{3}
$$

In the multi-round case, a query position $i$ can attend to key positions $\mathcal{P}_i$ as defined in (6), which we also repeat here:

$$
\mathcal{P}_i = \bigcup_{r=1}^{n_{\text{rounds}}} \mathcal{P}_i^{(r)} \quad \text{where } \mathcal{P}_i^{(r)} = \left\{j: h^{(r)}(q_i) = h^{(r)}(q_j)\right\} \tag{6}
$$

For batching purposes, attention is performed on chunks of sorted queries/keys:

$$
\widetilde{\mathcal{P}}_i^{(r)} = \left\{j: \left\lfloor \frac{s_i^{(r)}}{m} \right\rfloor - 1 \leq \left\lfloor \frac{s_j^{(r)}}{m} \right\rfloor \leq \left\lfloor \frac{s_i^{(r)}}{m} \right\rfloor \right\} \tag{11}
$$

Combining (3) and (6) gives:

$$
\begin{aligned}
o_i &= \sum_{j \in \widetilde{\mathcal{P}}_i} \exp\left(q_i \cdot k_j - m(j, \mathcal{P}_i) - z(i, \mathcal{P}_i)\right) v_j && (12) \\
&= \sum_{r=1}^{n_{\text{rounds}}} \exp\left(z(i, \mathcal{P}_i^{(r)}) - z(i, \mathcal{P}_i)\right) \sum_{j \in \widetilde{\mathcal{P}}_i^{(r)}} \frac{1}{N_{i,j}} \exp\left(q_i \cdot k_j - m(j, \mathcal{P}_i^{(r)}) - z(i, \mathcal{P}_i^{(r)})\right) v_j && (13) \\
&= \sum_{r=1}^{n_{\text{rounds}}} \exp\left(z(i, \mathcal{P}_i^{(r)}) - z(i, \mathcal{P}_i)\right) o_i^{(r)} && (14)
\end{aligned}
$$

$$
o_i^{(r)} = \sum_{j \in \widetilde{\mathcal{P}}_i^{(r)}} \exp\left(q_i \cdot k_j - m_{i,j}^{(r)} - z(i, \mathcal{P}_i^{(r)})\right) v_j \tag{15}
$$

$$
\text{where } N_{i,j} = \left|\left\{r': j \in \mathcal{P}_i^{(r')}\right\}\right| \quad \text{and} \quad m_{i,j}^{(r)} = \begin{cases} \infty & \text{if } j \notin \mathcal{P}_i^{(r)} \\ 10^5 & \text{if } i = j \\ \log N_{i,j} & \text{otherwise} \end{cases} \tag{16}
$$

Each round of LSH attention produces a vector $o_i^{(r)}$ that can be computed independently from other rounds, except for the inclusion of a term $N_{i,j}$ to avoid double-counting elements when constructing the union of the $\mathcal{P}_i^{(r)}$ sets. In our implementation we fold the $N_{i,j}$ factor into the masking term $m_{i,j}^{(r)}$ .

We also modify $m_{i,j}^{(r)}$ to introduce a special case for $i = j$ . This case is added because causal masking in a standard Transformer allows position $i$ to attend to itself, which is not desirable in a shared-QK formulation. We set the mask to a large but finite value to disallow attention-in-place, except in the situation where a token has no other valid attention targets. For example, the first token in a sequence attends only to itself, because no prior context is available.
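As a sanity check on the algebra above, the following sketch (ours; dense rather than chunked, and omitting the causal and $i = j$ special cases) combines per-round outputs with the weights from Eq. (14) and the $1/N_{i,j}$ correction from Eq. (16), and reproduces softmax attention over the union of buckets:

```python
import numpy as np

def log_z(s, mask):
    """z(i, S): log partition of row i over the allowed set S."""
    x = np.where(mask, s, -np.inf)
    m = x.max(axis=-1, keepdims=True)
    return m + np.log(np.exp(x - m).sum(axis=-1, keepdims=True))

def multi_round_attention(q, k, v, bucket_rounds):
    s = q @ k.T
    P_r = [b[:, None] == b[None, :] for b in bucket_rounds]  # P_i^{(r)} as masks
    N = np.sum(P_r, axis=0)            # N_{i,j}: rounds in which i and j collide
    union = np.any(P_r, axis=0)        # P_i
    z = log_z(s, union)
    out = np.zeros_like(v)
    for allowed in P_r:
        z_r = log_z(s, allowed)
        # o_i^{(r)} with 1/N_{i,j} folded in as a log N_{i,j} mask term
        w_r = np.where(allowed, np.exp(s - np.log(N.clip(min=1)) - z_r), 0.0)
        out += np.exp(z_r - z) * (w_r @ v)   # per-round weight exp(z_r - z)
    return out

def reference_attention(q, k, v, allowed):
    s = np.where(allowed, q @ k.T, -np.inf)
    w = np.exp(s - s.max(axis=-1, keepdims=True))
    return (w / w.sum(axis=-1, keepdims=True)) @ v
```

Since each $j$ in the union is counted once per colliding round, dividing by $N_{i,j}$ and reweighting by $\exp(z(i,\mathcal{P}_i^{(r)}) - z(i,\mathcal{P}_i))$ makes the round-wise sum telescope back to a single softmax over $\mathcal{P}_i$.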
\ No newline at end of file diff --git a/reformertheefficienttransformer/images.zip b/reformertheefficienttransformer/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..3a4892f87ad1bc63b7b4ff954b0fb221426e4730 --- /dev/null +++ b/reformertheefficienttransformer/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:69d472d60f8492009f0e839c67185006140668ea81037ab2281ef8e509fd990e +size 520909 diff --git a/reformertheefficienttransformer/layout.json b/reformertheefficienttransformer/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..dd9c684083f75f282855837e681a83a32b9f0b1b --- /dev/null +++ b/reformertheefficienttransformer/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ff409a6ae689c650c65d9546ad8919706c26e5b2cb61bf6475938d6f010434ea +size 409586 diff --git a/regularizingactivationsinneuralnetworksviadistributionmatchingwiththewassersteinmetric/6f757651-f26e-4028-9e21-fa728c70a386_content_list.json b/regularizingactivationsinneuralnetworksviadistributionmatchingwiththewassersteinmetric/6f757651-f26e-4028-9e21-fa728c70a386_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..39523bc57ab3cda632fe151f56d0edd1447c8564 --- /dev/null +++ b/regularizingactivationsinneuralnetworksviadistributionmatchingwiththewassersteinmetric/6f757651-f26e-4028-9e21-fa728c70a386_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9d2657a2a4682f78e58e413b2785e3f955da9f792b186e59c74c9e6e54ba5a08 +size 75531 diff --git a/regularizingactivationsinneuralnetworksviadistributionmatchingwiththewassersteinmetric/6f757651-f26e-4028-9e21-fa728c70a386_model.json b/regularizingactivationsinneuralnetworksviadistributionmatchingwiththewassersteinmetric/6f757651-f26e-4028-9e21-fa728c70a386_model.json new file mode 100644 index 0000000000000000000000000000000000000000..3b9aff623519c2300a502fe872c7dd85325f5057 --- 
/dev/null +++ b/regularizingactivationsinneuralnetworksviadistributionmatchingwiththewassersteinmetric/6f757651-f26e-4028-9e21-fa728c70a386_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6d5bec1198e4668a91c3b4c0bab6cba3ef98cc71d928e9d50d6e114ddc61f89e +size 97721 diff --git a/regularizingactivationsinneuralnetworksviadistributionmatchingwiththewassersteinmetric/6f757651-f26e-4028-9e21-fa728c70a386_origin.pdf b/regularizingactivationsinneuralnetworksviadistributionmatchingwiththewassersteinmetric/6f757651-f26e-4028-9e21-fa728c70a386_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..c14b2f2ce646d4bbcabe510a0985dad701ed0cea --- /dev/null +++ b/regularizingactivationsinneuralnetworksviadistributionmatchingwiththewassersteinmetric/6f757651-f26e-4028-9e21-fa728c70a386_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bb66e130bc991d0242b5c34da6da633b334627ebfeceabe5cf47ecdbad5fe1c9 +size 759994 diff --git a/regularizingactivationsinneuralnetworksviadistributionmatchingwiththewassersteinmetric/full.md b/regularizingactivationsinneuralnetworksviadistributionmatchingwiththewassersteinmetric/full.md new file mode 100644 index 0000000000000000000000000000000000000000..05d94682542896e4c929ca3dbd8c990d3d422409 --- /dev/null +++ b/regularizingactivationsinneuralnetworksviadistributionmatchingwiththewassersteinmetric/full.md @@ -0,0 +1,311 @@ +# REGULARIZING ACTIVATIONS IN NEURAL NETWORKS VIA DISTRIBUTION MATCHING WITH THE WASSERSTEIN METRIC

Taejong Joo

ESTsoft

Republic of Korea

tjoo@estsoft.com

Donggu Kang

ESTsoft

Republic of Korea

emppurity@gmail.com

Byunghoon Kim

Hanyang University

Republic of Korea

byungkim@hanyang.ac.kr

# ABSTRACT

Regularization and normalization have become indispensable components in training deep neural networks, resulting in faster training and improved generalization performance.
We propose the projected error function regularization loss (PER) that encourages activations to follow the standard normal distribution. PER randomly projects activations onto a one-dimensional space and computes the regularization loss in the projected space. PER is similar to the Pseudo-Huber loss in the projected space, thus taking advantage of both $L^1$ and $L^2$ regularization losses. In addition, PER can capture interactions between hidden units through projection vectors drawn from the unit sphere. By doing so, PER minimizes an upper bound of the Wasserstein distance of order one between the empirical distribution of activations and the standard normal distribution. To the best of the authors' knowledge, this is the first work to regularize activations via distribution matching in the probability distribution space. We evaluate the proposed method on the image classification task and the word-level language modeling task.

# 1 INTRODUCTION

Training of deep neural networks is challenging due to the vanishing and exploding gradient problem (Hochreiter, 1998; Glorot & Bengio, 2010), the presence of many flat regions and saddle points (Shalev-Shwartz et al., 2017), and the shattered gradient problem (Balduzzi et al., 2017). To remedy these issues, various methods for controlling hidden activations have been proposed, such as normalization (Ioffe & Szegedy, 2015; Huang et al., 2018), regularization (Littwin & Wolf, 2018), initialization (Mishkin & Matas, 2016; Zhang et al., 2019), and architecture design (He et al., 2016).

Among various techniques for controlling activations, one well-known and successful path is controlling their first and second moments. Since the 1990s, it has been known that neural network training benefits from normalizing input statistics so that samples have zero mean and identity covariance matrix (LeCun et al., 1998; Schraudolph, 1998).
This idea motivated batch normalization (BN), which treats hidden activations as the input to the next layer and normalizes the scale and shift of the activations (Ioffe & Szegedy, 2015).

Recent works show the effectiveness of different sample statistics of activations for normalization and regularization. Deecke et al. (2019) and Kalayeh & Shah (2019) normalize activations to several modes with different scales and translations. Variance constancy loss (VCL) implicitly normalizes the fourth moment by minimizing the variance of sample variances, which enables adaptive mode separation or collapse based on their prior probabilities (Littwin & Wolf, 2018). BN has also been extended to whiten activations (Huang et al., 2018; 2019) and to normalize a general order of central moment in the sense of the $L^p$ norm, including $L^1$ and $L^\infty$ (Liao et al., 2016; Hoffer et al., 2018).

In this paper, we propose the projected error function regularization (PER) that regularizes activations in the Wasserstein probability distribution space. Specifically, PER pushes the distribution of activations to be close to the standard normal distribution. PER shares a similar strategy with previous approaches in that it dictates the ideal distribution of activations. Previous approaches, however, deal with a single or a few sample statistics of activations. On the contrary, PER regularizes the activations

![](images/d94d74db259427eea7ecc0b7167ed9ade8527ddea30991d77d365a3952088541.jpg)
(a)

![](images/9c2bcbb199d7905be3e130f845f1a0ae800bfcc96740920131029fb57c9248c3.jpg)
(b)

![](images/6f35a4c6f780624deee97ff4fedbbfe4ff616c0d5fd437b60f09bfd7ef2bea89.jpg)
(c)
Figure 1: Limitation of statistics in terms of representing the probability distribution. In all subplots, $x$ has zero mean and unit variance and $y \sim \mathcal{N}(0,1)$. In (a), $(x,y) \sim \mathcal{N}(0,I)$. In (b), $x \sim \mathcal{N}(0,1)$ but correlated with $y$. In (c), $x$ follows a skewed distribution.
In (d), $x$ follows a bi-modal distribution. Standardization cannot differentiate (a)-(d) and whitening cannot differentiate (a), (c), and (d). + +![](images/b9825d5ee328ea215a890a4e676b9a775968eae1b5495a7a50bd5a04238afd43.jpg) +(d) + +by matching the probability distributions, which considers different statistics simultaneously, e.g., all orders of moments and correlation between hidden units. The extensive experiments on multiple challenging tasks show the effectiveness of PER. + +# 2 RELATED WORKS + +Many modern deep learning architectures employ BN as an essential building block for better performance and stable training even though its theoretical aspects of regularization and optimization are still actively investigated (Santurkar et al., 2018; Kohler et al., 2018; Bjorck et al., 2018; Yang et al., 2019). Several studies have applied the idea of BN that normalizes activations via the sample mean and the sample variance to a wide range of domains such as recurrent neural network (Lei Ba et al., 2016) and small batch size training (Wu & He, 2018). + +Huang et al. (2018; 2019) propose normalization techniques whitening the activation of each layer. This additional constraint on the statistical relationship between activations improves the generalization performance of residual networks compared to BN. Although the correlation between activations are not explicitly considered, dropout prevents activations from being activated at the same time, called co-adaptation, by randomly dropping the activations (Srivastava et al., 2014), the weights (Wan et al., 2013), and the spatially connected activations (Ghiasi et al., 2018). + +Considering BN as the normalization in the $L^2$ space, several works extend BN to other spaces, i.e., other norms. Streaming normalization (Liao et al., 2016) explores the normalization of a different order of central moment with $L^p$ norm for general $p$ . Similarly, Hoffer et al. 
(2018) explore $L^1$ and $L^\infty$ normalization, which enables low-precision computation. Littwin & Wolf (2018) propose a regularization loss that reduces the variance of sample variances of activations, which is closely related to the fourth moment.

The idea of controlling activations via their statistical characteristics has also motivated initialization methods. Examples include balancing the variances of each layer (Glorot & Bengio, 2010; He et al., 2015), bounding the scale of activations and gradients (Mishkin & Matas, 2016; Balduzzi et al., 2017; Gehring et al., 2017; Zhang et al., 2019), and norm preservation (Saxe et al., 2013). Although the desired initial state may not be maintained during training, experimental results show that these methods can stabilize the learning process as well.

Recently, the Wasserstein metric has gained much popularity in a wide range of deep learning applications, owing to nice properties such as being a metric in a probability distribution space without requiring common supports of the two distributions. For instance, it has been successfully applied to multilabeled classification (Frogner et al., 2015), gradient flow of policy updates in reinforcement learning (Zhang et al., 2018), training of generative models (Arjovsky et al., 2017; Gulrajani et al., 2017; Kolouri et al., 2019), and capturing long-term semantic structure in sequence-to-sequence language models (Chen et al., 2019).

While statistics such as mean and (co)variance are useful summaries of a probability distribution, they cannot fully represent the underlying structure of the distribution (Fig. 1). Therefore, regularizing or normalizing activations to follow a target distribution via statistics can be ineffective in some cases. For instance, normalizing activations via a single mean and variance, as in BN and decorrelated BN (Huang et al., 2018), can be inadequate for learning multimodal distributions (Bilen & Vedaldi, 2017; Deecke et al., 2019).
This limitation motivates us to investigate a more general way of regularizing the distribution of activations. Instead of controlling activations via statistics, we define the target distribution and then minimize the Wasserstein distance between the activation distribution and the target distribution. + +# 3 PROJECTED ERROR FUNCTION REGULARIZATION + +We consider a neural network with $L$ layers each of which has $d_{l}$ hidden units in layer $l$ . Let $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^n$ be $n$ training samples which are assumed to be i.i.d. samples drawn from a probability distribution $P_{\mathbf{x},\mathbf{y}}$ . In this paper, we consider the optimization by stochastic gradient descent with mini-batch of $b$ samples randomly drawn from $\mathcal{D}$ at each training iteration. For $i$ -th element of the samples, the neural network recursively computes: + +$$ +\boldsymbol {h} _ {i} ^ {l} = \phi \left(\boldsymbol {W} ^ {l} \boldsymbol {h} _ {i} ^ {l - 1} + \boldsymbol {b} ^ {l}\right) \tag {1} +$$ + +where $\pmb{h}_i^0 = \pmb{x}_i \in \mathbb{R}^{d_0}$ , $\pmb{h}_i^l \in \mathbb{R}^{d_l}$ is an activation in layer $l$ , and $\phi$ is an activation function. In the case of recurrent neural networks (RNNs), the recursive relationship takes the form of: + +$$ +\boldsymbol {h} _ {t _ {i}} ^ {l} = \phi \left(\boldsymbol {W} _ {\text {r e c}} ^ {l} \boldsymbol {h} _ {t - 1 _ {i}} ^ {l} + \boldsymbol {W} _ {\text {i n}} ^ {l} \boldsymbol {h} _ {t _ {i}} ^ {l - 1} + \boldsymbol {b} ^ {l}\right) \tag {2} +$$ + +where $\pmb{h}_{t_i}^l$ is an activation in layer $l$ at time $t$ and $\pmb{h}_{0_i}^l$ is an initial state. Without loss of generality, we focus on activations in layer $l$ of feed-forward networks and the mini-batch of samples $\{(x_{i},y_{i})\}_{i = 1}^{b}$ . 
Throughout this paper, we let $f^l$ be the function obtained by composing the recurrence in equation 1 up to layer $l$, i.e., $\pmb{h}_i^l = f^l(\pmb{x}_i)$, and $f_j^l$ be the $j$-th output of $f^l$.

This paper proposes a new regularization loss, called projected error function regularization (PER), that encourages activations to follow the standard normal distribution. Specifically, PER directly matches the distribution of activations to the target distribution via the Wasserstein metric. Let $\mu \in \mathcal{P}(\mathbb{R}^{d_l})$ be the Gaussian measure defined as $\mu(\mathbb{A}) = \frac{1}{(2\pi)^{d_l / 2}}\int_{\mathbb{A}}\exp\left(-\frac{1}{2}\|\boldsymbol{x}\|^2\right)d\boldsymbol{x}$ and $\nu_{\mathbf{h}^l} = \frac{1}{b}\sum_i\delta_{\mathbf{h}_i^l}\in \mathcal{P}(\mathbb{R}^{d_l})$ be the empirical measure of hidden activations, where $\delta_{\mathbf{h}_i^l}$ is the Dirac unit mass at $\pmb{h}_i^l$. Then, the Wasserstein metric of order $p$ between $\mu$ and $\nu_{\mathbf{h}^l}$ is defined by:

$$
W_p(\mu, \nu_{\mathbf{h}^l}) = \left(\inf_{\pi \in \Pi(\mu, \nu_{\mathbf{h}^l})} \int_{\mathbb{R}^{d_l} \times \mathbb{R}^{d_l}} d^p(\boldsymbol{x}, \boldsymbol{y}) \, \pi(d\boldsymbol{x}, d\boldsymbol{y})\right)^{1/p} \tag{3}
$$

where $\Pi(\mu, \nu_{\mathbf{h}^l})$ is the set of all joint probability measures on $\mathbb{R}^{d_l}\times \mathbb{R}^{d_l}$ whose first and second marginals are $\mu$ and $\nu_{\mathbf{h}^l}$, respectively.

Because direct computation of equation 3 is intractable, we consider the sliced Wasserstein distance (Rabin et al., 2011), which approximates the Wasserstein distance by projecting the high-dimensional distributions onto $\mathbb{R}$ (Fig. 2). It has been proved that the sliced Wasserstein and the Wasserstein distances are equivalent metrics (Santambrogio, 2015; Bonnotte, 2013).
The sliced Wasserstein distance of order one between $\mu$ and $\nu_{\mathbf{h}^l}$ can be formulated as:

$$
SW_1\left(\mu, \nu_{\mathbf{h}^l}\right) = \int_{\mathbb{S}^{d_l-1}} W_1\left(\mu_{\boldsymbol{\theta}}, \nu_{\mathbf{h}_{\boldsymbol{\theta}}^l}\right) d\lambda(\boldsymbol{\theta}) = \int_{\mathbb{S}^{d_l-1}} \int_{-\infty}^{\infty} \left| F_{\mu_{\boldsymbol{\theta}}}(x) - \frac{1}{b} \sum_{i=1}^{b} 1_{\langle \mathbf{h}_i^l, \boldsymbol{\theta} \rangle \leq x} \right| dx \, d\lambda(\boldsymbol{\theta}) \tag{4}
$$

where $\mathbb{S}^{d_l - 1}$ is the unit sphere in $\mathbb{R}^{d_l}$, $\mu_{\boldsymbol{\theta}}$ and $\nu_{\mathbf{h}_{\boldsymbol{\theta}}^l}$ denote the measures projected onto the direction $\boldsymbol{\theta}$, $\lambda$ is the uniform measure on $\mathbb{S}^{d_l - 1}$, and $F_{\mu_{\boldsymbol{\theta}}}(x)$ is the cumulative distribution function of $\mu_{\boldsymbol{\theta}}$. Herein, equation 4 can be evaluated by sorting $\{\langle \pmb{h}_i^l, \pmb{\theta}\rangle\}_i$ for each angle $\pmb{\theta}$.

While we can directly use the sliced Wasserstein distance in equation 4 as a regularization loss, it has a computational dependency on the batch dimension due to the sorting. This dependency between samples may not be desirable in distributed and large-batch training, which has become increasingly prevalent in recent years. For this reason, we remove the dependency by applying the

![](images/778b7476d21e6534ca01abeb82355592661836c92626dd59e847339226e8fa0a.jpg)
Figure 2: Illustration of minimization of the sliced Wasserstein distance between the current distribution and the target distribution. Note that it only concerns a distance in the projected dimension.
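As a concrete illustration of how equation 4 is evaluated by sorting, the following NumPy sketch estimates the sliced 1-Wasserstein distance between two equal-size empirical measures (the function and variable names are ours, not from the paper's code):

```python
import numpy as np

def sliced_w1(x, y, n_slices=256, rng=None):
    """Monte Carlo sliced 1-Wasserstein distance between two empirical
    measures x and y, each of shape (n, d)."""
    rng = np.random.default_rng(0) if rng is None else rng
    v = rng.standard_normal((n_slices, x.shape[1]))
    theta = v / np.linalg.norm(v, axis=1, keepdims=True)  # uniform directions on S^{d-1}
    px = np.sort(x @ theta.T, axis=0)                     # projections, sorted per slice
    py = np.sort(y @ theta.T, axis=0)
    # for equal-size samples, W1 in 1D is the mean absolute difference
    # between the sorted projections; average over the random slices
    return np.abs(px - py).mean()
```

Note the sort couples all samples in the batch, which is exactly the dependency the Minkowski-inequality bound below removes.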
# Algorithm 1 Backward pass under PER

Input The number of Monte Carlo evaluations $s$, an activation for the $i$-th sample $h_i$, the gradient of the loss $\nabla_{h_i}\mathcal{L}$, a regularization coefficient $\lambda$

1: $\pmb{g} \gets \mathbf{0}$
2: for $k \gets 1$ to $s$ do
3: Sample $\pmb{v} \sim \mathcal{N}(\pmb{0}, \pmb{I})$
4: $\pmb{\theta} \gets \pmb{v} / \| \pmb{v} \|_2$
5: Project $h_i^\prime \leftarrow \langle h_i,\pmb {\theta}\rangle$
6: $g_{k}\gets \mathrm{erf}\left(h_{i}^{\prime} / \sqrt{2}\right)$
7: $\pmb{g} \gets \pmb{g} + g_{k}\pmb{\theta} / s$
8: end for
9: return $\nabla_{\pmb{h}_i}\mathcal{L} + \lambda \pmb{g}$

Minkowski inequality to equation 4, and obtain the regularization loss $\mathcal{L}_{per}(\nu_{\mathbf{h}^l})$:

$$
\begin{array}{l} SW_1(\mu, \nu_{\mathbf{h}^l}) \leq \int_{\mathbb{S}^{d_l-1}} \int_{-\infty}^{\infty} \frac{1}{b} \sum_{i=1}^{b} \left| F_{\mu_{\boldsymbol{\theta}}}(x) - 1_{\langle \boldsymbol{h}_i^l, \boldsymbol{\theta} \rangle \leq x} \right| dx \, d\lambda(\boldsymbol{\theta}) \\ = \frac{1}{b} \sum_{i=1}^{b} \int_{\mathbb{S}^{d_l-1}} \left(\langle \boldsymbol{h}_i^l, \boldsymbol{\theta} \rangle \operatorname{erf}\left(\frac{\langle \boldsymbol{h}_i^l, \boldsymbol{\theta} \rangle}{\sqrt{2}}\right) + \sqrt{\frac{2}{\pi}} \exp\left(-\frac{\langle \boldsymbol{h}_i^l, \boldsymbol{\theta} \rangle^2}{2}\right)\right) d\lambda(\boldsymbol{\theta}) = \mathcal{L}_{per}\left(\nu_{\mathbf{h}^l}\right) \tag{5} \\ \end{array}
$$

whose gradient with respect to $h_i^l$ is:

$$
\nabla_{\boldsymbol{h}_i^l} \mathcal{L}_{per}\left(\nu_{\mathbf{h}^l}\right) = \frac{1}{b} \mathbb{E}_{\boldsymbol{\theta} \sim U\left(\mathbb{S}^{d_l - 1}\right)} \left[ \operatorname{erf}\left(\left\langle \boldsymbol{\theta}, \boldsymbol{h}_i^l / \sqrt{2} \right\rangle\right)
\boldsymbol{\theta} \right] \tag{6}
$$

where $U(\mathbb{S}^{d_l - 1})$ is the uniform distribution on $\mathbb{S}^{d_l - 1}$. In this paper, the expectation over $U(\mathbb{S}^{d_l - 1})$ is approximated by the Monte Carlo method with $s$ samples. Therefore, PER results in a simple modification of the backward pass, as in Alg. 1.

Encouraging activations to follow the standard normal distribution can be motivated by the natural gradient (Amari, 1998). The natural gradient is the steepest descent direction in a Riemannian manifold, and it is also the direction that maximizes the probability of not increasing generalization error (Roux et al., 2008). The natural gradient is obtained by multiplying the inverse Fisher information matrix with the gradient. In Raiko et al. (2012) and Desjardins et al. (2015), under the assumption that the forward and backward passes are independent and that activations in different layers are independent, the Fisher information matrix is a block diagonal matrix, each block of which is given by:

$$
\boldsymbol{F}_l = \mathbb{E}_{(\boldsymbol{x}, \boldsymbol{y}) \sim P_{\mathbf{x},\mathbf{y}}} \left[ \frac{\partial \mathcal{L}}{\partial \operatorname{vec}\left(\boldsymbol{W}^l\right)} \frac{\partial \mathcal{L}}{\partial \operatorname{vec}\left(\boldsymbol{W}^l\right)}^T \right] = \mathbb{E}_{\mathbf{x}} \left[ \boldsymbol{h}^{l-1} \left(\boldsymbol{h}^{l-1}\right)^T \right] \mathbb{E}_{(\mathbf{x},\mathbf{y})} \left[ \frac{\partial \mathcal{L}}{\partial \boldsymbol{a}^l} \frac{\partial \mathcal{L}}{\partial \boldsymbol{a}^l}^T \right] \tag{7}
$$

where $\operatorname{vec}(\boldsymbol{W}^l)$ is the vectorized $\boldsymbol{W}^l$, $\boldsymbol{h}^{l-1} = f^{l-1}(\boldsymbol{x})$, and $\boldsymbol{a}^l = \boldsymbol{W}^l f^{l-1}(\boldsymbol{x}) + \boldsymbol{b}^l$ for $\boldsymbol{x} \sim P_{\mathbf{x}}$.
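The loss of equation 5 and the Monte Carlo gradient of equation 6 (the backward pass of Alg. 1) can be sketched in NumPy as follows; the function and variable names are ours, not from the paper's code:

```python
import math

import numpy as np

erf = np.vectorize(math.erf)  # elementwise error function

def per_loss_and_grad(h, n_slices=256, rng=None):
    """Monte Carlo estimate of the PER loss (eq. 5) and its gradient
    (eq. 6) for a mini-batch of activations h with shape (b, d)."""
    rng = np.random.default_rng(0) if rng is None else rng
    b, d = h.shape
    v = rng.standard_normal((n_slices, d))
    theta = v / np.linalg.norm(v, axis=1, keepdims=True)   # uniform on S^{d-1}
    p = h @ theta.T                                        # <h_i, theta_k>, shape (b, s)
    # closed-form W1 between a projected point mass and N(0, 1)
    w1 = p * erf(p / math.sqrt(2)) + math.sqrt(2 / math.pi) * np.exp(-p**2 / 2)
    loss = w1.mean()                                       # average over batch and slices
    grad = erf(p / math.sqrt(2)) @ theta / (b * n_slices)  # eq. 6, shape (b, d)
    return loss, grad
```

At $\pmb{h} = \mathbf{0}$ the loss equals $\sqrt{2/\pi}$ and the gradient vanishes, consistent with equation 5; Alg. 1 adds $\lambda$ times the per-sample gradient (without the $1/b$ factor) to the task gradient.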
Since computing the inverse Fisher information matrix is too expensive to perform at every iteration, previous studies have put effort into developing reparametrization techniques, activation functions, and

![](images/edb030f2e630fbd7129f747357990646be30b9946d3083b293e5c537d755465d.jpg)
Figure 3: Illustration of PER and its gradient in $\mathbb{R}$. Herein, PER is shifted by $c$ so that $\mathcal{L}_{per}(0) - c = 0$. The Huber loss is defined as $h(x) = |x| - 0.5$ for $|x| > 1$ and $h(x) = x^2 / 2$ for $|x| \leq 1$, and the Pseudo-Huber loss is defined as $g(x) = \sqrt{1 + x^2} - 1$.

regularization losses that make $\pmb{F}_l$ close to $\pmb{I}$, thereby making the gradient close to the natural gradient. For instance, making activations have zero mean and unit variance (LeCun et al., 1998; Schraudolph, 1998; Glorot & Bengio, 2010; Raiko et al., 2012; Wiesler et al., 2014) and decorrelating activations (Cogswell et al., 2016; Xiong et al., 2016; Huang et al., 2018) make $\mathbb{E}\left[\boldsymbol{h}^{l-1}(\boldsymbol{h}^{l-1})^T\right] \approx \boldsymbol{I}$, and these techniques result in faster training and improved generalization performance. In this perspective, it is expected that PER will enjoy the same advantages by matching $\nu_{\mathbf{h}^l}$ to $\mathcal{N}(\mathbf{0},\boldsymbol{I})$.

# 3.1 COMPARISON TO CONTROLLING ACTIVATIONS IN $L^p$ SPACE

In this subsection, we theoretically compare PER with existing methods that control activations in $L^p$ space. $L^p(\mathbb{R}^{d_0})$ is the space of measurable functions whose $p$-th power of absolute value is Lebesgue integrable, and the norm of $f \in L^p(\mathbb{R}^{d_0})$ is given by:

$$
\| f \|_p = \left(\int_{\mathbb{R}^{d_0}} | f(\boldsymbol{x}) |^p \, dP_{\mathbf{x}}(\boldsymbol{x})\right)^{1/p} < \infty \tag{8}
$$

where $P_{\mathbf{x}}$ is the unknown probability distribution generating the training samples $\{\pmb{x}_i\}_{i=1}^n$.
Since we have no access to $P_{\mathbf{x}}$, it is approximated by the empirical measure of mini-batch samples.

The $L^p$ norm is widely used in the literature for regularization and normalization of neural networks. For instance, activation norm regularization (Merity et al., 2017a) penalizes the $L^2$ norm of activations. As another example, BN and its $p$-th order generalization use the $L^p$ norm such that the norm of the centralized activation, or pre-activation, is bounded:

$$
\psi\left(h_{ij}^l\right) = \gamma_j^l \xi\left(h_{ij}^l\right) + \beta_j^l, \quad \xi\left(h_{ij}^l\right) = \frac{h_{ij}^l - \bar{\mu}_j}{\left(\sum_k \frac{1}{b} \left| h_{kj}^l - \bar{\mu}_j \right|^p\right)^{1/p}} \tag{9}
$$

where $h_{ij}^{l}$ is the $j$-th unit of $\pmb{h}_i^l$, $\bar{\mu}_j = \frac{1}{b}\sum_k h_{kj}^l$ is the sample mean, $\beta_j^l$ is a learnable shift parameter, and $\gamma_j^l$ is a learnable scale parameter. Herein, we have $\| \xi \circ f_j^l\|_p = 1$ for any unit $j$ and any empirical measure, and thus $\| \psi \|_p \leq \| \gamma_j^l \xi \circ f_j^l\|_p + \| \beta_j^l\|_p = |\gamma_j^l| + |\beta_j^l|$.

PER differs from $L^p$ norm-based approaches in two aspects. First, PER can be considered an $L^p$ norm with adaptive order in the projected space because it is very similar to the Pseudo-Huber loss in one-dimensional space (Fig. 3). Herein, the Pseudo-Huber loss is a smooth approximation of the Huber loss (Huber, 1964). Therefore, PER smoothly changes its behavior between the $L^1$ and $L^2$ norms, making the regularization loss sensitive to small values and insensitive to outliers with large values. In contrast, the previous approaches use a predetermined order $p$, which makes the norm change insensitively in the near-zero region when $p \leq 1$ or explode in the large-value region when $p > 1$.
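The Pseudo-Huber-like behavior can be checked numerically from the one-dimensional integrand of equation 5, shifted by the constant $\sqrt{2/\pi}$ so that it vanishes at zero as in Fig. 3 (the function name is ours):

```python
import math

def per_1d(x):
    """Shifted 1-D PER integrand: x*erf(x/sqrt(2)) + sqrt(2/pi)*exp(-x^2/2) - sqrt(2/pi)."""
    c = math.sqrt(2 / math.pi)
    return x * math.erf(x / math.sqrt(2)) + c * math.exp(-x * x / 2) - c

# Near zero the loss is quadratic with curvature sqrt(2/pi) (L2-like),
# while for large |x| its slope approaches 1 (L1-like):
print(per_1d(0.01) / 0.01**2)      # close to sqrt(2/pi)/2, i.e. about 0.399
print(per_1d(10.0) - per_1d(9.0))  # close to 1.0
```

This smooth transition between the two regimes is what the fixed-order $L^p$ norms in equation 9 cannot provide.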
+ +Second, PER captures the interaction between hidden units by projection vectors, unlike $L^p$ norm. To see this, let $\| f^l\| _p^p = \frac{1}{b}\sum_{i,j}|h_{ij}^l |^p = \frac{1}{b}\sum_{i,j}|\langle h_i^l,e_j\rangle|^p$ where $\{\pmb {e}_j\}_{j = 1}^{d_l}$ is the natural basis of + +Table 1: Top-1 error rates of ResNets on CIFAR-10. Lower is better. All numbers are rounded to two decimal places. Boldface indicates the minimum error. * and ** are results from Zhang et al. (2019) and He et al. (2016), respectively. + +
| Model | Method | Test error |
| --- | --- | --- |
| ResNet-56 | Vanilla | 7.21 |
| ResNet-56 | BN | 6.95 |
| ResNet-56 | PER | **6.72** |
| ResNet-110 | Vanilla | 6.90 (7.24*) |
| ResNet-110 | BN | 6.62 (6.61**) |
| ResNet-110 | PER | **6.19** |
+ +Table 2: Top-1 error rates of 11-layer CNNs on tiny ImageNet. Lower is better. All numbers are rounded to two decimal places. Boldface indicates the minimum error. Numbers in parentheses represent results in Littwin & Wolf (2018). + +
| Method | Test error |
| --- | --- |
| Vanilla | 37.45 (39.22) |
| BN | 39.22 (40.02) |
| VCL | (37.30) |
| PER | **36.74** |
$\mathbb{R}^{d_l}$. That is, the norm computes the regularization loss, or the normalizer, of activations with the natural basis as projection vectors. However, PER uses general projection vectors $\pmb{\theta} \sim U(\mathbb{S}^{d_l - 1})$, capturing the interaction between hidden units when computing the regularization loss. These two differences make PER a more delicate criterion for regularizing activations in deep neural networks than the $L^p$ norm, as we will show in the next section.

# 4 EXPERIMENTS

This section illustrates the effectiveness of PER through experiments on different benchmark tasks with various datasets and architectures. We compare PER with BN, which normalizes the first and second moments, and VCL, which regularizes the fourth moment. PER is also compared with $L^1$ and $L^2$ activation norm regularizations, which behave similarly in some regions of the projected space. We then analyze the computational complexity of PER and its impact on the distribution of activations. Throughout all experiments, we use 256 slices and the same regularization coefficient for the regularization losses computed in each layer.

# 4.1 IMAGE CLASSIFICATION IN CIFAR-10, CIFAR-100, AND TINY IMAGENET

We evaluate PER on the image classification task on CIFAR (Krizhevsky et al., 2009) and a subset of ImageNet (Russakovsky et al., 2015) called tiny ImageNet. We first evaluate PER with ResNet (He et al., 2016) on CIFAR-10 and compare it with BN and a vanilla network initialized by fixup initialization (Zhang et al., 2019). We match the experimental details of training under BN with He et al. (2016) and under PER and vanilla with Zhang et al. (2019), and we obtain performances similar to those presented in the papers. Herein, we search the regularization coefficient over $\{3\mathrm{e}{-}4, 1\mathrm{e}{-}4, 3\mathrm{e}{-}5, 1\mathrm{e}{-}5\}$. Table 1 presents results of CIFAR-10 experiments with ResNet-56 and ResNet-110.
PER outperforms BN as well as vanilla networks in both architectures. In particular, PER improves the test errors by $0.49\%$ and $0.71\%$ over the vanilla ResNet-56 and ResNet-110 without BN, respectively.

We also performed experiments on an 11-layer convolutional neural network (11-layer CNN) examined in VCL (Littwin & Wolf, 2018). This architecture was originally proposed in Clevert et al. (2016). Following Littwin & Wolf (2018), we perform experiments on 11-layer CNNs with ELU, ReLU, and Leaky ReLU activations, and match the experimental details of Littwin & Wolf (2018), except that we used a 10x smaller learning rate for bias parameters and an additional scalar bias after ReLU and Leaky ReLU based on Zhang et al. (2019). By doing so, we obtain results similar to those presented in Littwin & Wolf (2018). Again, the search space of the regularization coefficient is $\{3\mathrm{e}{-}4, 1\mathrm{e}{-}4, 3\mathrm{e}{-}5, 1\mathrm{e}{-}5\}$. For ReLU and Leaky ReLU on CIFAR-100, however, we additionally search $\{3\mathrm{e}{-}6, 1\mathrm{e}{-}6, 3\mathrm{e}{-}7, 1\mathrm{e}{-}7\}$ because training with PER diverged in these settings. As shown in Table 3, PER shows the best performance in four out of six experiments. In the other cases, PER gives performance comparable to BN or VCL, within $0.16\%$ of the best performance.

Following Littwin & Wolf (2018), PER is also evaluated on tiny ImageNet. In this experiment, the number of convolutional filters in each layer is doubled. Due to limited time and resources, we

Table 3: Top-1 error rates of 11-layer CNNs on CIFAR-10 and CIFAR-100. Lower is better. All numbers are rounded to two decimal places. Boldface indicates the minimum error. Numbers in parentheses represent results in Littwin & Wolf (2018).
| Activation | Method | CIFAR-10 | CIFAR-100 |
| --- | --- | --- | --- |
| ReLU | Vanilla | 8.43 (8.36) | 29.45 (32.80) |
| ReLU | BN | 7.53 (7.78) | **29.13** (29.10) |
| ReLU | VCL | 7.80 (7.80) | 30.30 (30.30) |
| ReLU | PER | **7.21** | 29.29 |
| Leaky ReLU | Vanilla | 6.73 (6.70) | 26.50 (26.80) |
| Leaky ReLU | BN | 6.38 (7.08) | 26.83 (27.20) |
| Leaky ReLU | VCL | 6.45 (6.45) | 26.30 (26.30) |
| Leaky ReLU | PER | **6.29** | **25.50** |
| ELU | Vanilla | 6.74 (6.98) | 27.53 (28.70) |
| ELU | BN | 6.69 (6.63) | 26.60 (26.90) |
| ELU | VCL | **6.26** (6.15) | 25.86 (25.60) |
| ELU | PER | 6.42 | **25.73** |
conduct experiments only with ELU, which gives good performance for PER, BN, and VCL on CIFAR. As shown in Table 2, PER is also effective with the larger model on the larger image classification dataset.

# 4.2 LANGUAGE MODELING IN PTB AND WIKITEXT2

We evaluate PER on the word-level language modeling task on PTB (Mikolov et al., 2010) and WikiText2 (Merity et al., 2017b). We apply PER to an LSTM with two layers of 650 hidden units, with and without reuse embedding (RE), proposed in Inan et al. (2017) and Press & Wolf (2016), and variational dropout (VD), proposed in Gal & Ghahramani (2016). We used the same configurations as Merity et al. (2017a) but failed to reproduce the reported results. In particular, when we rescaled the gradient when its norm exceeded 10, we observed divergence or poor performance (almost 2x perplexity compared to the published result). Therefore, we rescale gradients with norm over 0.25 instead of 10, based on the default hyperparameter of the PyTorch word-level language model, which is also mentioned in Merity et al. (2017a). We also train the networks for 60 epochs instead of 80 epochs, since validation perplexity does not improve after 60 epochs in most cases. In this task, PER is compared with recurrent BN (RBN; Cooijmans et al., 2017) because BN is not directly applicable to LSTM. We also compare PER with $L^1$ and $L^2$ activation norm regularizations. Herein, the search space of regularization coefficients of PER, $L^1$ regularization, and $L^2$ regularization is $\{3\mathrm{e}{-}4, 1\mathrm{e}{-}4, 3\mathrm{e}{-}5\}$. For $L^1$ and $L^2$ penalties on PTB, we search additional coefficients over $\{1\mathrm{e}{-}5, 3\mathrm{e}{-}6, 1\mathrm{e}{-}6, 3\mathrm{e}{-}7, 1\mathrm{e}{-}7\}$ because the searched coefficients seemed to constrain the capacity.

We list in Table 4 the perplexities of the methods on PTB and WikiText2.
While all regularization techniques show regularization effects by improving test perplexity, PER gives the best test perplexity except for LSTM and RE-VD-LSTM on the PTB dataset, where PER is the second-best method. We also note that naively applying RBN often reduces performance. For instance, RBN increases the test perplexity of VD-LSTM by about 5 on both PTB and WikiText2.

# 4.3 ANALYSIS

In this subsection, we analyze the computational complexity of PER and its impact on closeness to the standard normal distribution in the 11-layer CNN.

Table 4: Validation and test perplexities on PTB and WikiText2. Lower is better. All numbers are rounded to one decimal place. Boldface indicates minimum perplexity.
| Model | Method | PTB Valid | PTB Test | WikiText2 Valid | WikiText2 Test |
| --- | --- | --- | --- | --- | --- |
| LSTM | Vanilla | 123.2 | 122.0 | 138.9 | 132.7 |
| LSTM | \( L^1 \) penalty | 119.6 | **114.1** | 137.7 | 130.0 |
| LSTM | \( L^2 \) penalty | 120.5 | 115.2 | 136.0 | 131.1 |
| LSTM | RBN | **118.2** | 115.1 | 156.2 | 148.3 |
| LSTM | PER | 118.5 | 114.5 | **134.2** | **129.6** |
| RE-LSTM | Vanilla | 114.1 | 112.2 | 129.2 | 123.2 |
| RE-LSTM | \( L^1 \) penalty | 112.2 | 108.5 | 128.6 | 122.7 |
| RE-LSTM | \( L^2 \) penalty | 116.6 | **108.2** | 126.5 | 123.3 |
| RE-LSTM | RBN | 113.6 | 110.4 | 138.1 | 131.6 |
| RE-LSTM | PER | **110.0** | 108.5 | **123.2** | **117.4** |
| VD-LSTM | Vanilla | 84.9 | 81.1 | 99.6 | 94.5 |
| VD-LSTM | \( L^1 \) penalty | 84.9 | 81.5 | 98.2 | 92.9 |
| VD-LSTM | \( L^2 \) penalty | 84.5 | 81.2 | 98.8 | 94.2 |
| VD-LSTM | RBN | 89.7 | 86.4 | 104.3 | 99.4 |
| VD-LSTM | PER | **84.1** | **80.7** | **98.1** | **92.6** |
| RE-VD-LSTM | Vanilla | 78.9 | 75.7 | 91.4 | 86.4 |
| RE-VD-LSTM | \( L^1 \) penalty | 78.3 | 75.1 | 90.5 | 86.1 |
| RE-VD-LSTM | \( L^2 \) penalty | 79.2 | 75.8 | **90.3** | 86.1 |
| RE-VD-LSTM | RBN | 83.7 | 80.5 | 95.5 | 90.5 |
| RE-VD-LSTM | PER | **78.1** | **74.9** | 90.6 | **85.9** |
# 4.3.1 COMPUTATIONAL COMPLEXITY

PER has no additional parameters. In contrast, BN and VCL require additional parameters for each channel, and for each location and channel, in every layer, respectively; that is, about 2.5K and 350K additional parameters are introduced by BN and VCL in the 11-layer CNN, respectively. In terms of time complexity, PER has a complexity of $O(bd_{l}s)$ for the projection operation in each layer $l$. On the other hand, BN and VCL have $O(bd_{l})$ complexities. In our benchmarking, each training iteration takes 0.071 seconds for a vanilla network, 0.083 seconds for BN, 0.087 seconds for VCL, and 0.093 seconds for PER on a single NVIDIA Titan X. Even though PER requires slightly more training time than BN and VCL, this disadvantage is mitigated by the fact that PER is computed only during training and introduces no additional parameters.

# 4.3.2 CLOSENESS TO THE STANDARD NORMAL DISTRIBUTION

To examine the effect of PER on closeness to $\mathcal{N}(\mathbf{0},\mathbf{I})$, we analyze the distribution of activations in the 11-layer CNN from different perspectives. We first analyze the distribution of a single activation $h_j^l$ for some unit $j$ and layer $l$ (Fig. 4). We observe that changes in the probability distributions between two consecutive epochs are small under BN because BN bounds the $L^2$ norm of activations via learned parameters. On the contrary, activation distributions under vanilla and PER fluctuate between two consecutive epochs. However, PER prevents variance explosion and pushes the mean towards zero. As shown in Fig. 4, while the variances of $\nu_{h_j^6}$ under both PER and vanilla are very high at the beginning of training, the variance keeps moving towards one under PER during training. Similarly, PER recovers the biased means of $\nu_{h_j^3}$ and $\nu_{h_j^9}$ at the early stage of learning.
To precisely evaluate closeness to the standard normal distribution, we also analyze $SW_{1}(\mathcal{N}(\mathbf{0},\mathbf{I}),\nu_{\mathbf{h}^{l}})$ at each epoch (Fig. 5). Herein, the sliced Wasserstein distance is computed by approximating the Gaussian measure with the empirical measure of samples drawn from $\mathcal{N}(\mathbf{0},\mathbf{I})$, as in Rabin et al. (2011). Similar to the previous result, while BN with $\beta_j^l = 0$ and $\gamma_j^l = 1$ at the initial state gives small $SW_{1}(\mathcal{N}(\mathbf{0},\mathbf{I}),\nu_{\mathbf{h}^{l}})$ in the early stage of training, PER can also effectively control

![](images/7193f95b87f438f32f82c6a9f326d07d89cda7fa303c58719fd50859084e990.jpg)
Figure 4: Evolution of distributions of $\nu_{h_i^3}$, $\nu_{h_j^6}$, and $\nu_{h_k^9}$ for fixed randomly drawn $i, j, k$ on the training set. (a)-(c) show the (0.25, 0.5, 0.75) quantiles under PER, vanilla, and BN. (d) and (e) show the sample mean and the sample variance of activations. Variance is clipped at 5 for better visualization.

![](images/3c068c9c138e56c8c52981c8962877d20721b3543036bd5d5ba7878983dfbf21.jpg)
Figure 5: Closeness to $\mathcal{N}(0, I)$ in the Wasserstein probability distribution space.

![](images/b543488b958b4581b63eb43633b2be4d7a742873253601e32d0938376efecb94.jpg)

![](images/c388685eba93ed1ccbe62fd4dccc145691a848250d2b99e7e38bcf66a660ea4b.jpg)

the distribution without such normalization. This confirms that PER prevents the distribution of activations from drifting away from the target distribution.

# 5 CONCLUSION

We proposed a regularization loss that minimizes an upper bound of the 1-Wasserstein distance between the standard normal distribution and the distribution of activations. In image classification and language modeling experiments, PER gives marginal but consistent improvements over methods based on sample statistics (BN and VCL) as well as $L^1$ and $L^2$ activation regularization methods.
The analysis of changes in the distribution of activations during training verifies that PER can stabilize the probability distribution of activations without normalization. Considering that the regularization loss can be easily applied to a wide range of tasks without changing architectures or training strategies, unlike BN, we believe that the results indicate the valuable potential of regularizing networks in the probability distribution space as a future direction of research.

The idea of regularizing activations with a metric in probability distribution space can be extended to many useful applications. For instance, one can utilize a task-specific prior when determining the target distribution, e.g., the Laplace distribution for inducing sparse activations. The empirical distribution of activations computed by a pretrained network can also be used as a target distribution to prevent catastrophic forgetting. In this case, the activation distribution can be regularized so that it does not drift away from the activation distribution learned in the previous task, unlike previous approaches that constrain the changes in the $L^2$ function space of logits (Benjamin et al., 2019).

# ACKNOWLEDGMENTS

We would like to thank Min-Gwan Seo, Dong-Hyun Lee, Dongmin Shin, and anonymous reviewers for the discussions and suggestions.

# REFERENCES

Shun-Ichi Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251-276, 1998.
Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In International Conference on Machine Learning, 2017.
David Balduzzi, Marcus Frean, Lennox Leary, JP Lewis, Kurt Wan-Duo Ma, and Brian McWilliams. The shattered gradients problem: If resnets are the answer, then what is the question? In International Conference on Machine Learning, 2017.
Ari S Benjamin, David Rolnick, and Konrad Kording. Measuring and regularizing networks in function space.
In International Conference on Learning Representations, 2019. +Hakan Bilen and Andrea Vedaldi. Universal representations: The missing link between faces, text, planktons, and cat breeds. arXiv preprint arXiv:1701.07275, 2017. +Nils Bjorck, Carla P Gomes, Bart Selman, and Kilian Q Weinberger. Understanding batch normalization. In Advances in Neural Information Processing Systems, 2018. +Nicolas Bonnotte. Unidimensional and Evolution Methods for Optimal Transportation. PhD thesis, Paris 11, 2013. +Liqun Chen, Yizhe Zhang, Ruiyi Zhang, Chenyang Tao, Zhe Gan, Haichao Zhang, Bai Li, Dinghan Shen, Changyou Chen, and Lawrence Carin. Improving sequence-to-sequence learning via optimal transport. In International Conference on Learning Representations, 2019. +Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). In International Conference of Learning Representations, 2016. +Michael Cogswell, Faruk Ahmed, Ross Girshick, Larry Zitnick, and Dhruv Batra. Reducing overfitting in deep networks by decorrelating representations. In International Conference on Learning Representations, 2016. +Tim Cooijmans, Nicolas Ballas, César Laurent, Caglar Gülçehre, and Aaron Courville. Recurrent batch normalization. In International Conference on Learning Representations, 2017. +Lucas Deecke, Iain Murray, and Hakan Bilen. Mode normalization. In International Conference on Learning Representations, 2019. +Guillaume Desjardins, Karen Simonyan, Razvan Pascanu, et al. Natural neural networks. In Advances in Neural Information Processing Systems, 2015. +Charlie Frogner, Chiyuan Zhang, Hossein Mobahi, Mauricio Araya, and Tomaso A Poggio. Learning with a Wasserstein loss. In Advances in Neural Information Processing Systems, 2015. + +Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. In Advances in Neural Information Processing Systems, 2016. 
+Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. Convolutional sequence to sequence learning. In International Conference on Machine Learning, 2017. +Golnaz Ghiasi, Tsung-Yi Lin, and Quoc V Le. Dropblock: A regularization method for convolutional networks. In Advances in Neural Information Processing Systems, 2018. +Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Artificial Intelligence and Statistics, 2010. +Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of Wasserstein gans. In Advances in Neural Information Processing Systems, 2017. +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In IEEE International Conference on Computer Vision, 2015. +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2016. +Sepp Hochreiter. The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 6(02): 107-116, 1998. +Elad Hoffer, Ron Banner, Itay Golan, and Daniel Soudry. Norm matters: Efficient and accurate normalization schemes in deep networks. In Advances in Neural Information Processing Systems, 2018. +Lei Huang, Dawei Yang, Bo Lang, and Jia Deng. Decorrelated batch normalization. In IEEE Conference on Computer Vision and Pattern Recognition, 2018. +Lei Huang, Yi Zhou, Fan Zhu, Li Liu, and Ling Shao. Iterative normalization: Beyond standardization towards efficient whitening. In IEEE Conference on Computer Vision and Pattern Recognition, 2019. +Peter J Huber. Robust estimation of a location parameter. The Annals of Mathematical Statistics, pp. 73-101, 1964. +Hakan Inan, Khashayar Khosravi, and Richard Socher. 
Tying word vectors and word classifiers: A loss framework for language modeling. In International Conference on Learning Representations, 2017. +Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, 2015. +Mahdi M Kalayeh and Mubarak Shah. Training faster by separating modes of variation in batch-normalized models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019. +Jonas Kohler, Hadi Daneshmand, Aurelien Lucchi, Ming Zhou, Klaus Neymeyr, and Thomas Hofmann. Towards a theoretical understanding of batch normalization. arXiv preprint arXiv:1805.10694, 2018. +Soheil Kolouri, Phillip E. Pope, Charles E. Martin, and Gustavo K. Rohde. Sliced Wasserstein auto-encoders. In International Conference on Learning Representations, 2019. +Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, 2009. +Yann LeCun, Leon Bottou, Genevieve B Orr, and Klaus-Robert Müller. Efficient backprop. In Neural Networks: Tricks of the Trade, pp. 9-50. 1998. + +Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016. +Qianli Liao, Kenji Kawaguchi, and Tomaso Poggio. Streaming normalization: Towards simpler and more biologically-plausible normalizations for online and recurrent learning. arXiv preprint arXiv:1610.06160, 2016. +Etai Littwin and Lior Wolf. Regularizing by the variance of the activations' sample-variances. In Advances in Neural Information Processing Systems, 2018. +Stephen Merity, Bryan McCann, and Richard Socher. Revisiting activation regularization for language rnns. In International Conference on Machine Learning, 2017a. +Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. In International Conference on Learning Representations, 2017b. 
+Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan Černocký, and Sanjeev Khudanpur. Recurrent neural network based language model. In Annual Conference of the International Speech Communication Association, 2010. +Dmytro Mishkin and Jiri Matas. All you need is a good init. In International Conference on Learning Representations, 2016. +Ofir Press and Lior Wolf. Using the output embedding to improve language models. arXiv preprint arXiv:1608.05859, 2016. +Julien Rabin, Gabriel Peyré, Julie Delon, and Marc Bernot. Wasserstein barycenter and its application to texture mixing. In International Conference on Scale Space and Variational Methods in Computer Vision, 2011. +Tapani Raiko, Harri Valpola, and Yann LeCun. Deep learning made easier by linear transformations in perceptrons. In Artificial Intelligence and Statistics, 2012. +Nicolas L Roux, Pierre-Antoine Manzagol, and Yoshua Bengio. Topmoumoute online natural gradient algorithm. In Advances in Neural Information Processing Systems, 2008. +Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115 (3):211-252, 2015. +Filippo Santambrogio. Optimal transport for applied mathematicians. Birkhäuser, NY, 55:58-63, 2015. +Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, and Aleksander Madry. How does batch normalization help optimization? In Advances in Neural Information Processing Systems, 2018. +Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013. +Nicol Schraudolph. Accelerated gradient descent by factor-centering decomposition. Technical report, 1998. +Shai Shalev-Shwartz, Ohad Shamir, and Shaked Shammah. Failures of gradient-based deep learning. 
In International Conference on Machine Learning, 2017. +Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958, 2014. +Li Wan, Matthew Zeiler, Sixin Zhang, Yann Le Cun, and Rob Fergus. Regularization of neural networks using dropconnect. In International Conference on Machine Learning, 2013. + +Simon Wiesler, Alexander Richard, Ralf Schlüter, and Hermann Ney. Mean-normalized stochastic gradient for large-scale deep learning. In IEEE International Conference on Acoustics, Speech and Signal Processing, 2014. +Yuxin Wu and Kaiming He. Group normalization. In European Conference on Computer Vision, 2018. +Wei Xiong, Bo Du, Lefei Zhang, Ruimin Hu, and Dacheng Tao. Regularizing deep convolutional neural networks with a structured decorrelation constraint. In IEEE International Conference on Data Mining, 2016. +Greg Yang, Jeffrey Pennington, Vinay Rao, Jascha Sohl-Dickstein, and Samuel S Schoenholz. A mean field theory of batch normalization. In International Conference on Learning Representations, 2019. +Hongyi Zhang, Yann N Dauphin, and Tengyu Ma. Fixup initialization: Residual learning without normalization. In International Conference on Learning Representations, 2019. +Ruiyi Zhang, Changyou Chen, Chunyuan Li, and Lawrence Carin. Policy optimization as Wasserstein gradient flows. In International Conference on Machine Learning, 2018. 
# REINFORCED ACTIVE LEARNING FOR IMAGE SEGMENTATION

Arantxa Casanova*

École Polytechnique de Montréal

Mila, Quebec Artificial Intelligence Institute

ElementAI

Pedro O. Pinheiro

ElementAI

Negar Rostamzadeh

ElementAI

Christopher J. Pal

École Polytechnique de Montréal

Mila, Quebec Artificial Intelligence Institute

ElementAI

# ABSTRACT

Learning-based approaches for semantic segmentation have two inherent challenges. First, acquiring pixel-wise labels is expensive and time-consuming.
Second, realistic segmentation datasets are highly unbalanced: some categories are much more abundant than others, biasing the performance to the most represented ones. In this paper, we are interested in focusing human labelling effort on a small subset of a larger pool of data, minimizing this effort while maximizing the performance of a segmentation model on a hold-out set. We present a new active learning strategy for semantic segmentation based on deep reinforcement learning (RL). An agent learns a policy to select a subset of small informative image regions – as opposed to entire images – to be labeled, from a pool of unlabeled data. The region selection decision is made based on the predictions and uncertainties of the segmentation model being trained. Our method proposes a new modification of the deep Q-network (DQN) formulation for active learning, adapting it to the large-scale nature of semantic segmentation problems. We test the proof of concept on CamVid and provide results on the large-scale Cityscapes dataset. On Cityscapes, our deep RL region-based DQN approach requires roughly $30\%$ less additional labeled data than our most competitive baseline to reach the same performance. Moreover, we find that our method asks for more labels of under-represented categories compared to the baselines, improving their performance and helping to mitigate class imbalance.

# 1 INTRODUCTION

Semantic segmentation, the task of labelling an image pixel-by-pixel with the category it belongs to, is critical for a variety of applications such as autonomous driving (Müller et al., 2018; Wang & Pan, 2018), robot manipulation (Schwarz et al., 2018), embodied question answering (Yu et al., 2019) and biomedical image analysis (Ronneberger et al., 2015).
Methods based on convolutional neural networks (LeCun et al., 1998) have achieved excellent results on large-scale supervised semantic segmentation, in which we assume pixel-level annotations are available (Farabet et al., 2013; Pinheiro & Collobert, 2014; Long et al., 2015). For such models to work, however, they need a large amount of pixel-level annotations that may require costly human labor (Cordts et al., 2016; Bearman et al., 2016).

Current semantic segmentation datasets have pixel-wise annotations for each image. This standard approach has two important issues: (i) pixel-level labelling is extremely time consuming. For example, annotation and quality control required more than 1.5h per image (on average) on Cityscapes (Cordts et al., 2016), a popular dataset used for benchmarking semantic segmentation methods. (ii) Class imbalance in the data is typically extreme. Certain categories (such as 'building' or 'sky') can appear

![](images/f6fbd874dcc7ef708e2c09ecda0c9e5ea0891e42079f5d3581cec87746cbc945.jpg)
Figure 1: (Left) Input image from the Cityscapes dataset (Cordts et al., 2016), with the regions selected by our method to be labeled. (Right) Retrieved ground truth annotation for the selected regions. Our method focuses on small objects and under-represented classes, such as bicycles, pedestrians and poles. Best viewed in color.

two orders of magnitude more frequently than others (e.g. 'pedestrian' or 'bicycle'). This can lead to undesired biases and performance properties for learned models.

This is especially relevant when we want to collect annotated data with a human in the loop to create a new dataset or to add more labeled data to an existing one. We can tackle the aforementioned problems by selecting, in an efficient and effective way, which regions of the images should be labeled next.
Active learning (AL) is a well-established field that studies precisely this: selecting the most informative samples to label so that a learning algorithm will perform better with less data than a non-selective approach, such as labelling the entire collection of data. Active learning methods can be roughly divided into two groups: (i) methods that combine different manually-designed AL strategies (Roy & McCallum, 2001; Osugi et al., 2005; Gal et al., 2017; Baram et al., 2004; Chu & Lin, 2016; Hsu & Lin, 2015; Ebert et al., 2012; Long & Hua, 2015) and (ii) data-driven AL approaches (Bachman et al., 2017; Fang et al., 2017; Konyushkova et al., 2017; Woodward & Finn, 2016; Ravi & Larochelle, 2018; Konyushkova et al., 2018), which learn which samples are most informative for training a model using information from the model itself. Although label acquisition for semantic segmentation is more costly and time consuming than for image classification, there has been considerably less work on active learning for semantic segmentation (Dutt Jain & Grauman, 2016; Mackowiak et al., 2018; Vezhnevets et al., 2012; Konyushkova et al., 2015; Gorriz et al., 2017; Yang et al., 2017), and existing methods focus on hand-crafted strategies.
We aim at learning a policy from the data that finds the most informative regions in a set of unlabeled images and asks for their labels, such that a segmentation network can achieve high-quality performance with a minimum number of labeled pixels. Selecting regions, instead of entire images, allows the algorithm to focus on the most relevant parts of the images, as shown in Figure 1. Although class imbalance in segmentation datasets has been previously addressed in (Badrinarayanan et al., 2017; Chan et al., 2019; Sudre et al., 2017), among others, these works try to solve a problem that arises from the data collection process. We show that our proposed method can help mitigate the problem at its source, i.e. in the data annotation itself. Because our method maximizes the mean IoU per class, it indirectly learns to ask for more labels of regions with under-represented classes, compared to the baselines. Moreover, we propose and explore a batch-mode active learning approach that uses an adapted DQN to efficiently choose batches of regions for labelling at each step.
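Mean IoU per class, the metric being maximized above, can be computed from per-class intersections and unions of the predicted and ground-truth label maps. A minimal sketch (illustrative, not the authors' implementation), which ignores classes absent from both maps:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection-over-Union over the classes present in the
    prediction or the ground truth. pred and target are integer
    label arrays of identical shape."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```

Because the average is taken over classes rather than pixels, under-represented classes contribute as much to the score as frequent ones, which is why maximizing this metric pushes the agent towards labelling rare classes.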
Our main contributions can be summarized as follows: (i) we learn an RL-based acquisition function for region-based active learning for segmentation, (ii) we formulate our active learning framework with a batch-mode DQN, which labels multiple regions in parallel at each active learning iteration (a more efficient strategy for large-scale datasets that is compatible with standard mini-batch gradient descent), and (iii) we test the proof of concept

on the CamVid (Brostow et al., 2008) dataset and provide results on the Cityscapes (Cordts et al., 2016) dataset, beating BALD (Gal et al., 2017), a recent state-of-the-art technique, as well as a widely used entropy-based selection criterion and uniform sampling baselines.

# 2 RELATED WORK

Active learning. Traditional active learning techniques focus on estimating sample informativeness using hand-crafted heuristics derived from sample uncertainty: employing entropy (Shannon, 1948), query-by-committee (Dagan & Engelson, 1995; Shannon, 1948; Freund et al., 1993), maximizing the error reduction (Roy & McCallum, 2001), disagreement between experts (Dagan & Engelson, 1995; Freund et al., 1993) or Bayesian methods that need to estimate the posterior distribution (Houlsby et al., 2011a; Gal et al., 2017). Some approaches combine different techniques to improve AL performance, for instance by relying on exploration-exploitation trade-offs (Osugi et al., 2005), on a bandit formulation (Baram et al., 2004; Chu & Lin, 2016; Hsu & Lin, 2015) or on reinforcement learning (Ebert et al., 2012; Long & Hua, 2015). However, these methods are still limited in the sense that they combine hand-crafted strategies instead of learning new ones. More recent active learning methods rely on an acquisition function that estimates sample informativeness with a learned metric. Konyushkova et al. (2017) estimate the error reduction of labelling a particular sample, choosing the ones that maximize the error reduction. Wang et al.
(2017) introduce a cost-effective approach that also uses confident predictions as pseudo ground truth labels.

AL with reinforcement learning. Recently, reinforcement learning has gained attention as a method to learn a labelling policy that directly maximizes the learning algorithm's performance. For instance, Liu et al. (2018); Bachman et al. (2017) leverage expert knowledge from oracle policies to learn a labelling policy, and Pang et al. (2018); Padmakumar et al. (2018) rely on policy gradient methods to learn the acquisition function. In a different approach, some methods gather all labeled data in one big step. In Contardo et al. (2017), all samples are chosen in one step with a bi-directional RNN for the task of one-shot learning. Sener & Savarese (2018) propose to select a batch of representative samples that maximize the coverage of the entire unlabeled set. However, the bounded core-set loss they use tends to perform worse when the number of classes grows.

More similar to our approach, some prior works propose to learn the acquisition function with a Deep Q-Network (DQN) (Mnih et al., 2013) formulation. These works have examined both stream-based active learning (Fang et al., 2017; Woodward & Finn, 2016), where unlabeled samples are provided one by one and the decision is whether to label each one or not, and pool-based active learning (Konyushkova et al., 2018), where all the unlabeled data is provided beforehand and the decision on which samples to choose is taken later. The work of Konyushkova et al. (2018) is the closest to ours. Similar to them, our method also leverages the benefits of Q-learning (Watkins & Dayan, 1992) to tackle pool-based AL. Contrary to them, we deal with a much more complex problem: semantic segmentation versus simple classification on the UCI repository (Dua & Graff, 2017). The large-scale nature of the problem requires us to use a very different definition of actions, states and rewards.
Moreover, we need to adapt the DQN formulation to make the problem computationally feasible.

AL for semantic segmentation. Active learning for semantic segmentation has been relatively less explored than other tasks, potentially due to its large-scale nature. For instance, Dutt Jain & Grauman (2016) combine metrics (defined on hand-crafted heuristics) that encourage the diversity and representativeness of labeled samples. Some methods rely on unsupervised superpixel-based over-segmentation (Vezhnevets et al., 2012; Konyushkova et al., 2015) and thus depend highly on the quality of the superpixel segmentation. Others focus on foreground-background segmentation of biomedical images (Gorriz et al., 2017; Yang et al., 2017), also using hand-crafted heuristics. Settles et al. (2008); Vijayanarasimhan & Grauman (2009); Mackowiak et al. (2018) focus on cost-effective approaches, proposing manually-designed acquisition functions based on the cost of labeling images or regions of images. However, this information is not always given, restricting their applicability.

Mackowiak et al. (2018) focus on cost-effective approaches, where the cost of labeling an image is not considered equal for all images. Similar to our work, they use a region-based approach to cope with the large number of samples in a segmentation dataset. Contrary to us, their labelling strategy is based on manually defined heuristics, limiting the expressiveness of the acquisition function. To the best of our knowledge, our work is the first to apply a data-driven, RL-based approach to the problem of active learning for semantic segmentation.

# 3 METHOD

We are interested in selecting a small number of regions $^1$ (cropped from images in the original dataset) from a large unlabeled set to maximize the performance of a segmentation network $f$, parameterized by $\theta$. This process is done iteratively until a given budget $B$ of labeled samples is achieved.
At each iteration $t$ , a query network $\pi$ , parameterized by $\phi$ , selects $K$ regions to be labeled by an oracle from a large unlabeled set $\mathcal{U}_t$ . These samples are added to the labeled set $\mathcal{L}_t$ , that is used to train the segmentation network $f$ . The performance is measured with a standard semantic segmentation metric, Intersection-over-Union (IoU). + +We cast the AL problem within a Markov decision process (MDP) formulation, inspired by other work such as (Padmakumar et al., 2018; Fang et al., 2017; Bachman et al., 2017; Pang et al., 2018; Konyushkova et al., 2018). We model the query network $\pi$ as a reinforcement learning agent, specifically a deep Q-network (Mnih et al., 2013). This data-driven approach allows the model to learn selection strategies based solely on prior AL experience. Our formulation differs from other approaches by the task we address, the definitions of states, actions and rewards, and the reinforcement learning algorithm we use to find the optimal policy. + +# 3.1 ACTIVE LEARNING WITH REINFORCEMENT LEARNING FOR SEGMENTATION + +In our setting, we use four different data splits. To train $\pi$ , we define a subset of labeled data $\mathcal{D}_T$ to play the active learning game for several episodes and learn a good acquisition function that maximizes performance with a budget of $B$ regions. The query network is evaluated on a different split $\mathcal{D}_V$ . We use a separate subset $\mathcal{D}_R$ to obtain the reward signal by evaluating the segmentation network on it. The set $\mathcal{D}_S(|\mathcal{D}_S| \ll |\mathcal{D}_T|)$ is used to construct the state representation. + +The MDP is defined with the sequence of transitions $\{(s_t, a_t, r_{t+1}, s_{t+1})\}$ . For every state $s_t \in S$ (function of the segmentation network at timestep $t$ ), the agent can perform actions $a_t \in \mathcal{A}$ to choose which samples from $\mathcal{U}_t$ to annotate. 
The action $a_t = \{a_t^k\}_{k=1}^K$, composed of $K$ sub-actions, is a function of the segmentation network, the labeled set and the unlabeled set. Each sub-action asks for a specific region to be labeled. The agent then receives a reward $r_{t+1}$ based on the improvement in mean IoU per class after training the segmentation network with the selected samples. Note that states and actions do not depend on the specific architecture of the segmentation network. We are interested in finding a policy that selects samples that maximize the segmentation performance. We use a deep Q-network (Mnih et al., 2013) and samples from an experience buffer $\mathcal{E}$ to train the query network $\pi$.

Each episode $e$ lasts a total of $T$ steps. We start by setting the segmentation network $f$ to a set of initial weights $\theta_0$ and with no annotated data, i.e., $\mathcal{L}_0 = \emptyset$ and $\mathcal{U}_0 = \mathcal{D}_T$. At each iteration $t$, the following steps are done:

1. The state $s_t$ is computed as a function of $f_t$ and $\mathcal{D}_S$.
2. A restricted action space is built with $K$ pools $\mathcal{P}_t^k$ of $N$ regions each, sampled uniformly from the unlabeled set $\mathcal{U}_t$. For each region in each pool, we compute its sub-action representation $a_t^{k,n}$.
3. The query agent selects $K$ sub-actions $\{a_t^k\}_{k=1}^K$ with an $\epsilon$-greedy policy. Each sub-action $a_t^k$ is defined as selecting one region $x_k$ (out of $N$) to annotate from a pool $\mathcal{P}_t^k$.
4. An oracle labels the regions and the sets are updated: $\mathcal{L}_{t + 1} = \mathcal{L}_t\cup \{(x_k,y_k)\}_{k = 1}^K$ and $\mathcal{U}_{t + 1} = \mathcal{U}_t\setminus \{x_k\}_{k = 1}^K.$
5. The segmentation network $f_{t + 1}$ is trained for one iteration on the recently added regions $\{x_k\}_{k = 1}^K$.
6. The agent receives the reward $r_{t + 1}$ as the difference in performance between $f_{t + 1}$ and $f_{t}$ on $\mathcal{D}_R$.
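The numbered steps above can be summarized as the following control-flow sketch. The segmentation update, reward computation, and Q-network selection are passed in as stand-in callables (all names hypothetical), so this illustrates only the episode structure, not the actual networks:

```python
import random

def run_episode(regions, K, N, budget, train_step, reward_fn, q_select, seed=0):
    """One active-learning episode: at each step, sample K pools of N
    unlabeled regions, let the query agent pick one region per pool,
    label and add those regions to the labeled set, train the segmentation
    model for one iteration, and record the reward, until the budget of
    labeled regions is met."""
    rng = random.Random(seed)
    unlabeled = list(regions)   # U_0 = D_T
    labeled, rewards = [], []   # L_0 = empty set
    while len(labeled) < budget and len(unlabeled) >= N:
        picked = []
        for _ in range(K):      # K pools -> K sub-actions
            if len(unlabeled) < N:
                break
            pool = rng.sample(unlabeled, N)
            x = q_select(pool)  # epsilon-greedy choice in the real agent
            unlabeled.remove(x)  # remove now so pools never repeat a pick
            picked.append(x)
        labeled.extend(picked)  # oracle labels the K selected regions
        train_step(picked)      # one update of f on the new regions
        rewards.append(reward_fn())  # mean-IoU improvement on D_R
    return labeled, rewards
```

Removing each picked region from the unlabeled set before sampling the next pool keeps the $K$ sub-actions of one step disjoint, mirroring the set updates in step 4.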
We consider the termination of each episode when the budget $B$ of labeled regions is met, i.e., $|\mathcal{L}_t| = B$ . Once the episode is terminated, we restart the weights of the segmentation network $f$ to the initial weights $\theta_0$ , set $\mathcal{L}_0 = \emptyset$ and $\mathcal{U}_0 = \mathcal{D}_T$ , and restart the episode. We train the query policy $\pi$ by simulating several episodes and updating its weights at each + +![](images/05a8907d6c77788178cd860ae7a26e350c12d7a3dff7810cea668f167ec79d22.jpg) +Figure 2: The query network $\pi$ is trained during several episodes $e$ with MDP transitions $\{(s_t, a_t, r_{t+1}, s_{t+1})\}$ . 1) The state $s_t$ is computed as a function of segmentation network $f$ and state set $\mathcal{D}_S$ . 2) $K$ unlabeled pools $\mathcal{P}_t^k$ are sampled uniformly from the unlabeled set $\mathcal{U}_t$ . The representation of their possible sub-actions are computed using $f$ , labeled set $\mathcal{L}_t$ and unlabeled set $\mathcal{U}_t$ . 3) Query network $\pi$ selects action $a_t$ , composed of $K$ sub-actions $a_t^k$ . Each of them is chosen from its corresponding pool. 4) Selected regions are labeled and added to $\mathcal{L}_t$ (and removed from $\mathcal{U}_t$ ). 5) Segmentation network $f$ is trained with those new labeled samples. 6) Reward $r_{t+1}$ is obtained from $\mathcal{D}_R$ . This loop continues until a budget $B$ of labeled regions is achieved. + +timestep by sampling transitions $\{(s_t, a_t, r_{t+1}, s_{t+1})\}$ from the experience replay buffer $\mathcal{E}$ . More details in Section 3.2. + +State representation. We would like to use the state of the segmentation network $f$ as the MDP state. Unfortunately, it is not straightforward to embed $f$ into a state representation. Following Konyushkova et al. (2017), we represent the state space $S$ with the help of a set-aside set $\mathcal{D}_S$ . 
We use a small subset of data from the train set, making sure it contains a significant representation of all classes. We consider it a representative subset of the dataset, and assume that any improvement in segmentation performance on $\mathcal{D}_S$ will translate into an improvement over the full dataset. We use the predictions of the segmentation network $f_t$ on $\mathcal{D}_S$ to create a global representation of the state $s_t$ (step 1 in Figure 2).

We need a compact representation to avoid intensive memory usage due to the pixel-wise predictions. The samples in $\mathcal{D}_S$ are split into patches, and compact feature vectors are computed for all of them. Each region is then encoded by the concatenation of two sets of features: one based on the class predictions of $f_{t}$ and the other on its prediction uncertainty, measured as the Shannon entropy (Shannon, 1948). The first set of features (i) is a normalized count of the number of pixels predicted as each category. This feature encodes the segmentation prediction on a given patch while discarding the spatial information, which is less important for small patches. For the second set of features, we measure the uncertainty of the predictor with the entropy over the probability of predicted classes. For each region, we compute the entropy at each pixel location to obtain a spatial entropy map. To compress this representation, we apply min, average and max pooling to the entropy map to obtain downsampled feature maps. The second set of features (ii) is then obtained by flattening these entropy features and concatenating them.

Finally, the state $s_t$ is represented by the ensemble of the feature representations of all regions in $\mathcal{D}_S$. Figure A.1a illustrates how $s_t$ is computed from each region.

Action representation. In our setting, taking an action means asking for the pixel-wise annotation of an unlabeled region.
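A minimal sketch of the per-region features described above (normalized class counts plus pooled entropy maps), which are also reused in the sub-action representation below. The region size, number of classes and pooling window are illustrative assumptions.

```python
import numpy as np

# Per-region features: (i) normalized class counts of the predicted labels,
# and (ii) min/avg/max-pooled entropy maps, flattened and concatenated.
# Shapes and pooling size are illustrative assumptions.

def region_features(probs, num_classes, pool=4):
    """probs: (C, H, W) softmax output of the segmentation net on one region."""
    pred = probs.argmax(axis=0)
    counts = np.bincount(pred.ravel(), minlength=num_classes)
    class_feat = counts / counts.sum()                     # (i) class histogram

    entropy = -(probs * np.log(probs + 1e-8)).sum(axis=0)  # per-pixel entropy
    H, W = entropy.shape
    tiles = entropy[:H - H % pool, :W - W % pool].reshape(
        H // pool, pool, W // pool, pool)
    pooled = [tiles.min(axis=(1, 3)), tiles.mean(axis=(1, 3)),
              tiles.max(axis=(1, 3))]                      # (ii) pooled entropy
    entropy_feat = np.concatenate([p.ravel() for p in pooled])
    return np.concatenate([class_feat, entropy_feat])

rng = np.random.default_rng(0)
logits = rng.normal(size=(11, 16, 16))                     # toy 16x16 region
probs = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
feat = region_features(probs, num_classes=11)
```

For an 11-class, 16×16 toy region with 4×4 pooling, this yields an 11 + 3·16 = 59-dimensional feature vector per region.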
Due to the large-scale nature of semantic segmentation, it would be prohibitively expensive to compute features for every region in the unlabeled set at each step. Instead, at each step $t$, we approximate the whole unlabeled set by sampling $K$ pools of unlabeled regions $\mathcal{P}_t^k$, each containing $N$ uniformly sampled regions. For each region, we compute its sub-action representation $a_{t}^{k,n}$ (step 2 in Figure 2).

Each sub-action $a_{t}^{k,n}$ is a concatenation of four different features: the entropy and class distribution features (as in the state representation), a measure of similarity between the region $x_{k}$ and the labeled set, and another between the region and the unlabeled set. The intuition is that the query network can learn to build a more class-balanced labeled set while still taking representative samples from the unlabeled set. This could help mitigate the severe class imbalance of segmentation datasets and improve overall performance.

For each candidate region $x$ in a pool $\mathcal{P}_t^k$, we compute the KL divergence between the class distribution of the prediction map of region $x$ (estimated as normalized counts of predicted pixels in each category) and the class distribution of each labeled and unlabeled region (using the ground-truth annotations and network predictions, respectively). For the labeled set, we compute a KL divergence score between the class distribution of each labeled region and that of region $x$. All these KL divergences could be summarized by taking their maximum or their sum. However, to obtain more informative features, we compute a normalized histogram of the KL divergence scores, resulting in a distribution of similarities. As an example, if we were to sum all the scores, having half of the labeled regions with a KL divergence of zero and the other half with a value $c$ would be equivalent to having all labeled regions with a KL divergence of $c / 2$.
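A minimal sketch of this histogram-of-KL-scores summary; the number of bins, the clipping value and the smoothing constant are illustrative assumptions.

```python
import numpy as np

# Summarize how similar a candidate region's predicted class distribution is
# to a reference set (labeled or unlabeled regions) as a normalized histogram
# of KL divergence scores, rather than a single sum or maximum.

def kl(p, q, eps=1e-8):
    p, q = p + eps, q + eps          # smoothing to avoid log(0)
    return float(np.sum(p * np.log(p / q)))

def kl_histogram(candidate, reference_dists, bins=8, max_kl=4.0):
    # Clip scores so the histogram support is bounded (illustrative choice).
    scores = [min(kl(candidate, d), max_kl) for d in reference_dists]
    hist, _ = np.histogram(scores, bins=bins, range=(0.0, max_kl))
    return hist / max(hist.sum(), 1)  # distribution of similarities

rng = np.random.default_rng(1)
cand = rng.dirichlet(np.ones(11))                  # candidate's class histogram
labeled_dists = [rng.dirichlet(np.ones(11)) for _ in range(50)]
feat = kl_histogram(cand, labeled_dists)
```

The same computation against the unlabeled set gives the second distribution of similarities.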
The second case could be more informative, since it means that no labeled region shares the class distribution of $x$; the histogram preserves this distinction, whereas the sum does not. For the unlabeled set, we follow the same procedure, resulting in another distribution of KL divergences. Both of them are concatenated and added to the action representation. Figure A.1b illustrates how we represent each possible action in a pool.

Based on early experimentation, learning the state and action representations directly with a CNN does not provide strong enough features for the reinforcement learning framework to converge to a good solution. An ablation study on the state and action components can be found in Appendix E.1.

# 3.2 BATCH MODE DQN

The desired query agent should follow an optimal policy, which maps each state to the action that maximizes the expected sum of future rewards. We rely on a DQN (Mnih et al., 2013), parameterized by $\phi$, to find such a policy.

We train our DQN with a labeled set $\mathcal{D}_T$ and compute the rewards on a held-out split $\mathcal{D}_R$. As mentioned above, the query agent in our method selects $K$ regions before transitioning to the next state. We assume that each region is selected independently, as in the case where $K$ annotators label one region each in parallel. The action $a_t$ is thus composed of $K$ independent sub-actions $\{a_t^k\}_{k=1}^K$, each with a restricted action space, avoiding the combinatorial explosion of the action space. To ease computation and avoid selecting repeated regions in the same time step, we restrict each sub-action $a_t^k$ to select a region $x_k$ in $\mathcal{P}_t^k$, defined as:

$$
a _ {t} ^ {k} = \underset {a _ {t} ^ {k, n} \in \mathcal {P} _ {t} ^ {k}} {\operatorname {a r g m a x}} Q \left(s _ {t}, a _ {t} ^ {k, n}; \phi\right), \tag {1}
$$

for each sub-action $k \in \{1, \dots, K\}$ taken at timestep $t$.

The network is trained by optimizing a loss based on the temporal difference (TD) error (Sutton, 1988).
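The restricted selection of Eq. (1) reduces each sub-action to an argmax over its own pool. A minimal sketch, with the Q-network replaced by a hypothetical linear scorer:

```python
import numpy as np

# Restricted sub-action selection (Eq. 1): within each of the K pools, pick
# the candidate with the highest Q-value. The Q-network is stubbed with a
# random linear scorer; only the selection rule is the point.

rng = np.random.default_rng(0)
state_dim, action_dim, K, N = 16, 12, 4, 10
w_s, w_a = rng.normal(size=state_dim), rng.normal(size=action_dim)

def q_value(state, action_feat):
    # Hypothetical stand-in for Q(s, a; phi).
    return float(state @ w_s + action_feat @ w_a)

state = rng.normal(size=state_dim)
pools = rng.normal(size=(K, N, action_dim))   # K pools of N sub-action features

chosen = [int(np.argmax([q_value(state, a) for a in pool])) for pool in pools]
```

Each entry of `chosen` is the index of the selected region within its pool, so one region is queried per pool per step.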
The loss is defined as the expectation over decomposed transitions $\mathcal{T}_k = \{(s_t,a_t^k,r_{t + 1}^k,s_{t + 1})\}$, obtained from the standard transitions $\{(s_t,a_t,r_{t + 1},s_{t + 1})\}$ by approximating $r_{t + 1}^k\approx r_{t + 1}$:

$$
\mathbb {E} _ {\mathcal {T} _ {k} \sim \mathcal {E}} \left[ \left(y _ {t} ^ {k} - Q \left(s _ {t}, a _ {t} ^ {k}; \phi\right)\right) ^ {2} \right], \tag {2}
$$

where $\mathcal{E}$ is the experience replay buffer and $y_{t}^{k}$ the TD target for each sub-action.

To stabilize training, we use a target network with weights $\phi^{\prime}$ and the double DQN (Van Hasselt et al., 2016) formulation. Action selection and evaluation are decoupled: the action is selected with the target network and evaluated with the query network. The TD target for each sub-action is:

$$
y _ {t} ^ {k} = r _ {t + 1} + \gamma Q \left(s _ {t + 1}, \underset {a _ {t + 1} ^ {k, n} \in \mathcal {P} _ {t + 1} ^ {k}} {\operatorname {a r g m a x}} Q \left(s _ {t + 1}, a _ {t + 1} ^ {k, n}; \phi^ {\prime}\right); \phi\right), \tag {3}
$$

where $\gamma$ is a discount factor.

This formulation is valid under the approximation that the sub-actions are independent of each other, conditioned on the state. We observed that increasing the number of sub-actions $K$ per step eases computation and does not hinder segmentation performance. We provide an ablation study on the effect of $K$ in Appendix E.3.

# 4 EXPERIMENTS

We start this section by describing the datasets we use to evaluate our method, the experimental setup, and the baselines. We evaluate the algorithm on CamVid as a proof of concept and show large-scale results on Cityscapes.
# 4.1 EXPERIMENTAL SETUP

Although active learning is meant for settings with unlabeled data and a human in the loop who labels the selected regions, we test our approach on fully labeled datasets, where it is easy to mask out the labels of part of the data and reveal them when the active learning algorithm selects them.

CamVid (Brostow et al., 2008). This dataset consists of street scene images with a resolution of $360 \times 480$ and 11 categories. It has 370, 104 and 234 images for the train, validation and test sets, respectively. We split the train set with uniform sampling into 110 labeled images (from which we take 10 images to represent the state set $\mathcal{D}_S$ and use the rest for $\mathcal{D}_T$) and 260 images to build $\mathcal{D}_V$, where we evaluate and compare our acquisition function to the baselines. The state set is chosen to be representative of $\mathcal{D}_T$ by restricting the sampling of $\mathcal{D}_S$ to have a class distribution similar to that of $\mathcal{D}_T$. Each image is split into 24 regions of dimension $80 \times 90$. We use the dataset's validation set for $\mathcal{D}_R$. We report the final segmentation results on the test set. In our experiments, we chose $K = 24$ regions per step. Our model is quite robust to the number of regions selected at each time step (see Appendix E.3).

Cityscapes (Cordts et al., 2016). This dataset is also composed of real street scene views, with an image resolution of $2048 \times 1024$ and 19 semantic categories. The train set with fine-grained segmentation labels has 2975 images, and the validation set has 500 images. We uniformly sampled 360 labeled images from the train set. Of these, 10 images represent $\mathcal{D}_S$, 150 build $\mathcal{D}_T$ and 200 build $\mathcal{D}_R$, where we get our rewards. The remaining 2615 images of the train set are used for $\mathcal{D}_V$, as if they were unlabeled. We report results on the validation set (the test set is not publicly available).
Each image is split into 128 regions of dimension $128 \times 128$. We chose $K = 256$ regions per step.

Implementation details. The split $\mathcal{D}_R$ is used to compute the rewards for the DQN and also for hyperparameter selection; hyperparameters are chosen according to the best setup for both the baselines and our method. We report the average and standard deviation over 5 different runs (5 random seeds). As data augmentation, we use random horizontal flips and random crops of $224 \times 224$. For more details, please refer to Appendix B in the supplementary material.

Evaluation. The query network $\pi$ is trained on $\mathcal{D}_T$ with a small, fixed budget (0.5k regions for CamVid and 4k regions for Cityscapes) to encourage picking regions that boost performance in a heavily data-scarce regime. The learned acquisition function, as well as the baselines, is evaluated on $\mathcal{D}_V$, where we ask for labels until the budget is met, for different budgets. Note that the baselines do not have any learnable component.

Once the budget is reached, we train the segmentation network $f$ with $\mathcal{L}_T$ until convergence (with early stopping on $\mathcal{D}_R$). For a fair comparison, the segmentation network of every method has been pre-trained (initial $f$ weights $\theta_0$) on the GTA dataset (Richter et al., 2016), a synthetic dataset where large amounts of labeled data can be obtained without human effort, and on $\mathcal{D}_T$ (where we had labels to train the DQN). We evaluate the final segmentation performance (measured in mean IoU) on the test set of CamVid and on the validation set of Cityscapes.

# 4.2 RESULTS

![](images/70694ddf51db30d95c50fcf9a2950b7ce55870c4e16b48778febf6a30b55d0cb.jpg)
(a) Active learning in CamVid

![](images/68757bbf35c5504da892517526531ae9be541a6259cbee48fe3ad5a218dbff49.jpg)
(b) Active learning in Cityscapes

Results in CamVid.
We compare our results against three distinct baselines: (i) $\mathbf{U}$ is uniform random sampling of the regions to label at each step out of all possible regions in the pool; (ii) $\mathbf{H}$ is an uncertainty sampling method that selects the regions with maximum cumulative pixel-wise Shannon entropy; (iii) $\mathbf{B}$ picks the regions with maximum cumulative pixel-wise BALD (Houlsby et al., 2011b; Gal et al., 2017) score. We use 20 iterations of MC-Dropout (Gal & Ghahramani, 2016) (instead of 100, as in Gal et al. (2017)) for computational reasons; in preliminary experiments, we did not observe any improvement beyond 20 iterations. In CamVid, we use a pool size of 10 for our method, $\mathbf{H}$ and $\mathbf{B}$, and 50 for $\mathbf{U}$. In Cityscapes, we have access to more data, so we use pool sizes of 500, 200, 200 and 100 for $\mathbf{U}$, $\mathbf{H}$, $\mathbf{B}$ and our method, respectively. Pool sizes were selected according to the best validation mean IoU.

Figure 4a shows results on CamVid for different budget sizes. Our method outperforms the baselines for every fixed budget, except for $1.5\mathrm{k}$ regions, where we achieve similar performance to $\mathbf{H}$. We argue that the dataset has a small number of images, and selecting 1.5k regions already reaches past $98\%$ of maximum performance, where the differences between our method and $\mathbf{H}$ are negligible. Surprisingly, $\mathbf{B}$ is worse than $\mathbf{U}$, especially for small budgets, where training with the newly acquired labels does not provide any additional information: it quickly overfits to the training data, yielding a worse result than with the initial weights. In general, all results have high variance due to the low-data regime we operate in. In Appendix E.2 we show the advantages of labeling small regions instead of full images.
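The two uncertainty baselines score a region as follows; a minimal sketch where the $T$ stochastic forward passes (as produced by MC-Dropout) are replaced by random softmax outputs.

```python
import numpy as np

# H scores a region by cumulative pixel-wise predictive entropy; B scores it
# by cumulative pixel-wise BALD (mutual information), estimated from T
# stochastic forward passes. The random "passes" here are stand-ins.

def entropy(p, axis=0, eps=1e-8):
    return -(p * np.log(p + eps)).sum(axis=axis)

def h_score(mean_probs):                       # (C, H, W) mean prediction
    return float(entropy(mean_probs).sum())

def bald_score(mc_probs):                      # (T, C, H, W) MC samples
    mean_probs = mc_probs.mean(axis=0)
    expected_entropy = entropy(mc_probs, axis=1).mean(axis=0)
    # BALD = H[E p] - E[H p]  (non-negative by concavity of entropy)
    return float((entropy(mean_probs) - expected_entropy).sum())

rng = np.random.default_rng(0)
logits = rng.normal(size=(20, 11, 8, 8))       # T=20 passes, 11 classes
mc = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
h, b = h_score(mc.mean(axis=0)), bald_score(mc)
```

Both baselines then rank the candidate regions in a pool by their score and pick the maximum.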
![](images/9b0078004290a0d3021dd7cd73a2be39b219cd7a49622ea6a52552f826ee8ae0.jpg)
Figure 4: Performance of several methods with increasing active learning budget, expressed as the number of $128 \times 128$ pixel regions labeled and the percentage of additional labeled data. All methods have been pretrained on GTAV and a small subset of their target datasets. The budget indicates the additional number of regions labeled (and the percentage of unlabeled data used). The dashed line represents $96\%$ of the total performance achieved by the segmentation network with fully supervised training (having access to all labels). We report the mean and standard deviation of 5 runs.

Figure 3: Entropy of class distributions obtained from pixels of selected regions.

Results in Cityscapes. Figure 4b shows results on Cityscapes for different budgets. Here, we also observe that our method outperforms the baselines at all budget points. Labeling 20k regions, corresponding to only $6\%$ of the total pixels (in addition to the labeled data in $\mathcal{D}_T$), we obtain a performance of $64.5\%$ mean IoU. This is $96\%$ of the performance of the segmentation network if it had access to all labeled pixels. To reach the same performance, $\mathbf{H}$ requires an additional 6k labeled regions (around $30\%$ more pixels, equivalent to an extra 45 images). In this larger dataset, $\mathbf{B}$ performs better than random, suggesting that for the task of segmentation, $\mathbf{B}$ might start to show its benefits only for considerably large budgets. Table 1 shows the per-class IoU for the evaluated methods (with a fixed budget). Our method works especially well for under-represented classes, such as Person, Motorcycle or Bicycle, among others. Indeed, our method selects more pixels belonging to under-represented classes than the baselines. Note that this is a side effect of directly optimizing for the mean IoU and defining class-aware representations for states and actions.
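Mean IoU, the metric the reward is based on, can be computed per class from a confusion matrix; a minimal sketch with a toy 3-class example:

```python
import numpy as np

# Per-class IoU from a confusion matrix, and its mean over classes (mIoU).
# Classes absent from both prediction and target are ignored via NaN.

def mean_iou(pred, target, num_classes):
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(conf, (target.ravel(), pred.ravel()), 1)  # rows: target, cols: pred
    tp = np.diag(conf).astype(float)
    union = conf.sum(axis=0) + conf.sum(axis=1) - tp    # pred + target - overlap
    iou = np.where(union > 0, tp / np.maximum(union, 1), np.nan)
    return float(np.nanmean(iou)), iou

pred = np.array([[0, 0, 1], [1, 1, 2]])
target = np.array([[0, 1, 1], [1, 1, 2]])
miou, per_class = mean_iou(pred, target, num_classes=3)  # per-class: 0.5, 0.75, 1.0
```

The reward at each step (Section 3.1) is the change of this quantity on $\mathcal{D}_R$ after one training iteration.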
Figure 3 shows the entropy of the class distribution of the selected pixels in the final labeled set (for a budget of 12k regions) on Cityscapes. Higher entropy means a class distribution closer to uniform, and our method
| Method | Road | Sidewalk | Building | Wall | Fence | Pole | Traffic Light | Traffic Sign | Vegetation | Terrain |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| U | 96.67 | 76.63 | 88.48 | 33.89 | 36.00 | 52.80 | 54.27 | 60.84 | 90.27 | 52.34 |
| H | 95.60 | 72.08 | 88.06 | 35.30 | 44.59 | 52.43 | 53.70 | 61.38 | 90.08 | 51.87 |
| B | 95.25 | 69.37 | 88.75 | 32.28 | 44.36 | 53.81 | 58.84 | 64.79 | 90.27 | 50.51 |
| Ours | 96.19 | 74.24 | 88.46 | 33.56 | 42.28 | 53.28 | 57.18 | 63.61 | 90.20 | 51.84 |

| Method | Sky | Person | Rider | Car | Truck | Bus | Train | Motorcycle | Bicycle | mIoU |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| U | 92.57 | 69.66 | 31.82 | 90.13 | 27.04 | 43.41 | 23.30 | 32.98 | 63.64 | 58.78 |
| H | 88.27 | 72.69 | 40.85 | 90.46 | 42.40 | 58.88 | 33.63 | 43.17 | 68.08 | 62.29 |
| B | 93.33 | 71.16 | 39.08 | 88.38 | 34.23 | 43.41 | 30.35 | 37.37 | 66.67 | 60.64 |
| Ours | 91.32 | 73.30 | 45.22 | 90.91 | 42.14 | 58.84 | 35.97 | 45.14 | 69.35 | 63.32 |
Table 1: Per category IoU and mean IoU [%] on the Cityscapes validation set, for a budget of 12k regions. For clarity, only the mean of 5 runs is reported. Results with standard deviations are in Table C.1.

has the highest entropy. Appendix C shows the distribution from which the entropy is computed, and Appendix D presents some qualitative results, showing what each method decides to label for some images.

# 5 CONCLUSION

We propose a data-driven, region-based method for active learning for semantic segmentation, based on reinforcement learning. The goal is to alleviate the costly process of obtaining pixel-wise labels with a human in the loop. We propose a new modification of the DQN formulation to learn the acquisition function, adapted to the large-scale nature of semantic segmentation. This provides a computationally efficient solution that uses less labeled data than competitive baselines, while achieving the same performance. Moreover, by directly optimizing for the per-class mean IoU and defining class-aware representations for states and actions, our method asks for more labels of under-represented classes than the baselines. This improves performance and helps mitigate class imbalance. As future work, we highlight the possibility of designing a better region definition, which could help improve the overall results, and of adding domain adaptation for the learnt policy, to transfer it between datasets.

# ACKNOWLEDGEMENTS

We thank NSERC and PROMPT. We would also like to thank the team at ElementAI for supporting this research and providing useful feedback.

# REFERENCES

Philip Bachman, Alessandro Sordoni, and Adam Trischler. Learning algorithms for active learning. In ICML, 2017.
Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. PAMI, 2017.
Yoram Baram, Ran El-Yaniv, and Kobi Luz. Online choice of active learning algorithms. JMLR, 2004.
Amy Bearman, Olga Russakovsky, Vittorio Ferrari, and Li Fei-Fei. What's the point: Semantic segmentation with point supervision. In ECCV, 2016.
Gabriel J. Brostow, Jamie Shotton, Julien Fauqueur, and Roberto Cipolla. Segmentation and recognition using structure from motion point clouds. In ECCV, 2008.
Robin Chan, Matthias Rottmann, Fabian Hüger, Peter Schlicht, and Hanno Gottschalk. Application of decision rules for handling class imbalance in semantic segmentation. arXiv preprint arXiv:1901.08394, 2019.
Hong-Min Chu and Hsuan-Tien Lin. Can active learning experience be transferred? In ICDM, 2016.

Gabriella Contardo, Ludovic Denoyer, and Thierry Artières. A meta-learning approach to one-step active-learning. In International Workshop on Automatic Selection, Configuration and Composition of Machine Learning Algorithms. CEUR, 2017.
Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The Cityscapes dataset for semantic urban scene understanding. In CVPR, 2016.
Ido Dagan and Sean P. Engelson. Committee-based sampling for training probabilistic classifiers. In ICML, 1995.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml.
Suyog Dutt Jain and Kristen Grauman. Active image segmentation propagation. In CVPR, 2016.
Sandra Ebert, Mario Fritz, and Bernt Schiele. RALF: A reinforced active learning formulation for object class recognition. In CVPR, 2012.
Meng Fang, Yuan Li, and Trevor Cohn. Learning how to active learn: A deep reinforcement learning approach. In EMNLP, 2017.
Clement Farabet, Camille Couprie, Laurent Najman, and Yann LeCun. Learning hierarchical features for scene labeling. PAMI, 2013.
Yoav Freund, H. Sebastian Seung, Eli Shamir, and Naftali Tishby.
Information, prediction, and query by committee. In NIPS, 1993.
Yarin Gal, Riashat Islam, and Zoubin Ghahramani. Deep Bayesian active learning with image data. In ICML, 2017.
Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In ICML, 2016.
Marc Gorriz, Axel Carlier, Emmanuel Faure, and Xavier Giro-i-Nieto. Cost-effective active learning for melanoma segmentation. ML4H: Machine Learning for Health Workshop, NIPS, 2017.
Ji He, Mari Ostendorf, Xiaodong He, Jianshu Chen, Jianfeng Gao, Lihong Li, and Li Deng. Deep reinforcement learning with a combinatorial action space for predicting popular reddit threads. In EMNLP, 2016.
Neil Houlsby, Ferenc Huszár, Zoubin Ghahramani, and Máté Lengyel. Bayesian active learning for classification and preference learning. arXiv preprint arXiv:1112.5745, 2011a.
Neil Houlsby, Ferenc Huszár, Zoubin Ghahramani, and Máté Lengyel. Bayesian active learning for classification and preference learning. arXiv, 2011b.
Wei-Ning Hsu and Hsuan-Tien Lin. Active learning by learning. In AAAI, 2015.
Alexander Kirillov, Ross B. Girshick, Kaiming He, and Piotr Dollár. Panoptic feature pyramid networks. In CVPR, 2019.
Ksenia Konyushkova, Raphael Sznitman, and Pascal Fua. Introducing geometry in active learning for image segmentation. In ICCV, 2015.
Ksenia Konyushkova, Raphael Sznitman, and Pascal Fua. Learning active learning from data. In NIPS, 2017.
Ksenia Konyushkova, Raphael Sznitman, and Pascal Fua. Discovering general-purpose active learning strategies. arXiv preprint arXiv:1810.04114, 2018.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.

Tsung-Yi Lin, Piotr Dollár, Ross B. Girshick, Kaiming He, Bharath Hariharan, and Serge J. Belongie. Feature pyramid networks for object detection. In CVPR, 2017.
Ming Liu, Wray Buntine, and Gholamreza Haffari.
Learning how to actively learn: A deep imitation learning approach. In ACL, 2018.
Chengjiang Long and Gang Hua. Multi-class multi-annotator active learning with robust Gaussian process for visual recognition. In ICCV, 2015.
Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
Radek Mackowiak, Philip Lenz, Omair Ghori, Ferran Diego, Oliver Lange, and Carsten Rother. CEREALS: Cost-effective region-based active learning for semantic segmentation. In BMVC, 2018.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. NIPS Deep Learning Workshop, 2013.
Matthias Müller, Alexey Dosovitskiy, Bernard Ghanem, and Vladlen Koltun. Driving policy transfer via modularity and abstraction. In CoRL, 2018.
Thomas Osugi, Deng Kim, and Stephen Scott. Balancing exploration and exploitation: A new algorithm for active machine learning. In ICDM, 2005.
Aishwarya Padmakumar, Peter Stone, and Raymond Mooney. Learning a policy for opportunistic active learning. In EMNLP, 2018.
Kunkun Pang, Mingzhi Dong, Yang Wu, and Timothy Hospedales. Meta-learning transferable active learning policies by deep reinforcement learning. arXiv preprint arXiv:1806.04798, 2018.
Pedro O. Pinheiro and Ronan Collobert. Recurrent convolutional neural networks for scene labeling. In ICML, 2014.
Sachin Ravi and Hugo Larochelle. Meta-learning for batch mode active learning. In ICLR Workshop, 2018.
Stephan R. Richter, Vibhav Vineet, Stefan Roth, and Vladlen Koltun. Playing for data: Ground truth from computer games. In ECCV, 2016.
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015.
Nicholas Roy and Andrew McCallum. Toward optimal active learning through Monte Carlo estimation of error reduction. In ICML, 2001.
Max Schwarz, Anton Milan, Arul Selvam Periyasamy, and Sven Behnke. RGB-D object detection and semantic segmentation for autonomous manipulation in clutter. IJRR, 2018.
Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-set approach. In ICLR, 2018.
Burr Settles, Mark Craven, and Lewis Friedland. Active learning with real annotation costs. In NIPS, 2008.
Claude Elwood Shannon. A mathematical theory of communication. Bell System Technical Journal, 1948.
Carole H. Sudre, Wenqi Li, Tom Vercauteren, Sebastien Ourselin, and M. Jorge Cardoso. Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Springer, 2017.
Richard S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 1988.

Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double Q-learning. In AAAI, 2016.
Alexander Vezhnevets, Joachim M. Buhmann, and Vittorio Ferrari. Active learning for semantic segmentation with expected change. In CVPR, 2012.
Sudheendra Vijayanarasimhan and Kristen Grauman. What's it going to cost you?: Predicting effort vs. informativeness for multi-label image annotations. In CVPR, 2009.
Keze Wang, Dongyu Zhang, Ya Li, Ruimao Zhang, and Liang Lin. Cost-effective active learning for deep image classification. IEEE Transactions on Circuits and Systems for Video Technology, 2017.
Wenfu Wang and Zhijie Pan. DSNet for real-time driving scene semantic segmentation. arXiv preprint arXiv:1812.07049, 2018.
Christopher J. C. H. Watkins and Peter Dayan. Q-learning. Machine Learning, 1992.
Mark Woodward and Chelsea Finn. Active one-shot learning. NIPS Deep RL Workshop, 2016.
Lin Yang, Yizhe Zhang, Jianxu Chen, Siyuan Zhang, and Danny Z. Chen. Suggestive annotation: A deep active learning framework for biomedical image segmentation. In MICCAI, 2017.
+Licheng Yu, Xinlei Chen, Georgia Gkioxari, Mohit Bansal, Tamara L. Berg, and Dhruv Batra. Multi-target embodied question answering. In CVPR, 2019. + +# A STATE AND ACTION REPRESENTATION DETAILS + +In this section, we provide illustrations that show more details on how the state and action are built. Figure A.1a shows how to build the state representation and Figure A.1b how to compute the action representation of a particular region. + +![](images/e0eb92821f1e43d5819d567c095eb36ac2f747ea2a82347f26ab3e42f9ff1457.jpg) +(a) State representation + +![](images/c81b414d43b98384706a9107f5ce62479a1a4efbce64957d26a2dd198770d738.jpg) +(b) Action representation +Figure A.1: (a) Each region $x_{i}$ in $\mathcal{D}_S$ is represented as a concatenation of two features, one based on entropy and the other on class predictions. The final state $s_t$ is the concatenation of the features for all regions. (b) Each region $x_{k}$ in pool $\mathcal{P}_k$ is represented as a concatenation of four features: entropy-based features, class predictions and two KL divergence distributions, comparing each region $x_{k}$ with the labeled and unlabeled set. + +# B EXTENDED EXPERIMENTAL SETUP + +The segmentation network $f$ is an adaptation of feature pyramid network (Lin et al., 2017) for semantic segmentation (similar to the segmentation branch of (Kirillov et al., 2019)), with a ResNet-50 backbone (He et al., 2016), pretrained on ImageNet (Deng et al., 2009). The network is pretrained on the full train set of a large-scale synthetic dataset, GTAV (Richter et al., 2016), therefore not requiring much human labelling effort. Moreover, this dataset has the advantage of possessing the same categories as real datasets we experiment with. + +The query network $\pi$ , depicted in Figure B.1, is composed of two paths, one to compute state features and another to compute action features, fusing them at the end. 
Each layer is composed of Batch Normalization, a ReLU activation and a fully-connected layer. The state path and action path consist of 4 and 3 layers, respectively, with a final layer that fuses them to obtain the global features; these are gated with a sigmoid, controlled by the KL distance distributions in the action representation. The weights are updated at each step of the active learning loop, by sampling batches of 16 experience tuples from an experience replay buffer of size 600 for CamVid and 3200 for Cityscapes.

We train both networks with stochastic gradient descent (SGD) with momentum. We use the same learning rate for both the segmentation and query networks: $10^{-4}$ for Cityscapes and $10^{-3}$ for CamVid. Weight decay is set to $10^{-4}$ for the segmentation network and $10^{-3}$ for the query network. We use a training batch size of 32 for CamVid and 16 for Cityscapes.

![](images/cb0ef0ba775c535f72221aaed523b22143f7371b3184f4ae4b9aedee4a3ba589.jpg)
Figure B.1: The DQN takes a state representation $s_t$ and an action representation for a possible action (labeling region $x_k$ in an unlabeled pool $\mathcal{P}_t^k$). $N_F$ is the number of state and action features (class distributions and entropy-based features), and $N_{SIM}$ the number of features for the KL divergence distributions. Features are computed for both representations separately, with layers composed of Batch Normalization, ReLU activations and fully connected layers. Both feature vectors are flattened and concatenated, and a final linear layer produces a score as a single scalar. The Q-values are computed as the gated score, where the gate is controlled by a feature representation from the KL distance distributions of the action representation.

# C CLASS FREQUENCIES AND PERFORMANCE PER CLASS

We show in Figure C.1 a more detailed plot of the class frequencies of the regions that each method chooses for labeling.
As the entropy of the class distributions in Figure 3 shows, our method picks more regions containing under-represented classes. In particular, it asks for labels of more Person, Rider, Train, Motorcycle and Bicycle pixels.

We observe that the B baseline picks more than $50\%$ of its pixels from only 3 classes that are over-represented or have a medium representation: Building, Vegetation and Sky. This could explain why its performance is worse than that of the H baseline.

Moreover, in Table C.1, we extend Table 1 by adding the standard deviation for each result.

# D QUALITATIVE RESULTS

We compare our method qualitatively with the baselines in Figure D.1. Baseline $\mathbf{U}$ asks for random patches. Our method tends to pick more regions with under-represented classes and small objects. For instance, in the first image on the left, our method asks for several regions of a Train, which has almost no samples in the training data. In the second image, it focuses on Person, Bicycle and Pole regions. In the third image, it asks for labels of the traffic lights and a pedestrian on a bicycle. Baselines $\mathbf{B}$ and $\mathbf{H}$ select some of those relevant regions, but miss many of them.

# E ABLATION STUDIES

In this section, we provide an ablation study on the state and action representation, the effect of labeling small regions versus full images, and a comparison of selecting different numbers of regions per step.

![](images/6851a662bdf520cc313386c9d504eb39f55f2a5a9ad4de15fffc8a282c7c36fc.jpg)
Figure C.1: Class frequencies $[\%]$ in Cityscapes for the regions selected for labeling after the active learning acquisition, for different methods. "Data split" frequencies refer to the proportion of classes in the unlabeled data split, where we reveal the masks for the purpose of showing the underlying class frequencies. This is the split where all methods perform active learning, in the setting where we mask out the labels $(\mathcal{D}_V)$. Budget used: 12k regions.
For ease of visualization, we only plot the mean of 5 runs. The Void label represents all pixels to which no label is assigned.
| Method | Road | Sidewalk | Building | Wall | Fence |
|---|---|---|---|---|---|
| U | 96.67 ± 0.09 | 76.63 ± 0.51 | 88.48 ± 0.12 | 33.89 ± 1.11 | 36.00 ± 2.12 |
| H (Shannon, 1948) | 95.60 ± 0.33 | 72.08 ± 1.28 | 88.06 ± 0.42 | 35.30 ± 1.73 | 44.59 ± 1.62 |
| B (Gal et al., 2017) | 95.25 ± 0.28 | 69.37 ± 0.94 | 88.75 ± 0.18 | 32.28 ± 0.88 | 44.36 ± 1.12 |
| Ours | 96.19 ± 0.23 | 74.24 ± 1.50 | 88.46 ± 0.23 | 33.56 ± 2.30 | 42.28 ± 1.40 |

| Method | Pole | Traffic Light | Traffic Sign | Vegetation | Terrain |
|---|---|---|---|---|---|
| U | 52.80 ± 0.41 | 54.27 ± 1.34 | 60.84 ± 0.99 | 90.27 ± 0.14 | 52.34 ± 1.38 |
| H (Shannon, 1948) | 52.43 ± 0.31 | 53.70 ± 1.48 | 61.38 ± 0.81 | 90.08 ± 0.16 | 51.87 ± 0.79 |
| B (Gal et al., 2017) | 53.81 ± 0.30 | 58.84 ± 0.50 | 64.79 ± 0.34 | 90.27 ± 0.15 | 50.51 ± 0.94 |
| Ours | 53.28 ± 0.51 | 57.18 ± 1.92 | 63.61 ± 1.47 | 90.20 ± 0.26 | 51.84 ± 1.62 |

| Method | Sky | Person | Rider | Car | Truck |
|---|---|---|---|---|---|
| U | 92.57 ± 0.30 | 69.66 ± 0.62 | 31.82 ± 2.66 | 90.13 ± 0.01 | 27.04 ± 2.16 |
| H (Shannon, 1948) | 88.27 ± 3.26 | 72.69 ± 0.53 | 40.85 ± 1.85 | 90.46 ± 0.38 | 42.40 ± 1.96 |
| B (Gal et al., 2017) | 93.33 ± 0.24 | 71.16 ± 0.47 | 39.08 ± 1.26 | 88.38 ± 0.29 | 34.23 ± 1.24 |
| Ours | 91.32 ± 1.06 | 73.30 ± 0.43 | 45.22 ± 2.75 | 90.91 ± 0.23 | 42.14 ± 1.41 |

| Method | Bus | Train | Motorcycle | Bicycle | mIoU |
|---|---|---|---|---|---|
| U | 43.41 ± 2.80 | 23.30 ± 2.52 | 32.98 ± 3.81 | 63.64 ± 0.33 | 58.78 ± 0.29 |
| H (Shannon, 1948) | 58.88 ± 2.97 | 33.63 ± 4.76 | 43.17 ± 1.37 | 68.08 ± 0.38 | 62.29 ± 0.55 |
| B (Gal et al., 2017) | 43.41 ± 4.34 | 30.35 ± 3.18 | 37.37 ± 0.79 | 66.67 ± 0.67 | 60.64 ± 0.49 |
| Ours | 58.84 ± 4.15 | 35.97 ± 3.50 | 45.14 ± 2.34 | 69.35 ± 0.90 | 63.32 ± 0.93 |
Table C.1: Per-category IoU and mean IoU [%] on the Cityscapes validation set, for a budget of 12k regions. Both the mean and standard deviation of 5 runs are reported.

# E.1 STATE AND ACTION REPRESENTATION

Here, we analyze the incremental effect of our design choices for the state and action representation on Cityscapes. We use 3 pooling operations – min, average, max – to compress the entropy map of each region and use it in the state and action representation. In addition, KL divergences are added to the latter. As seen in Table E.1, using only the max-pooled entropy map (Ours - 1H), the performance is slightly worse than H. When we combine the information of the 3 pooled entropy maps (Ours - 3H), we outperform the H baseline. Moreover, when we add the two distributions of KL divergences to our action representation (Ours - 3H + KL): between each candidate region to label and the labeled set, and between the region and the unlabeled set, we further increase the performance, obtaining our best state and action representations.

![](images/ce0184f89901ad8d1d6fb2a8bdf3c9a4e024cc023fe81d0b2f8ee3551bccc73f.jpg)
Figure D.1: Qualitative results on Cityscapes after running the active learning algorithm with a budget of 2k regions. The first row consists of input images; the second shows what $\mathbf{U}$ picks; the third, $\mathbf{B}$; the fourth, $\mathbf{H}$; and the last row shows what our method picks. Best viewed in color.
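As a minimal numpy sketch of the Section E.1 features (pooled entropy maps plus the two KL terms): the grid size, the helper names, and the use of the region's mean predicted class distribution for the KL terms are our illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def entropy_map(probs):
    """Per-pixel Shannon entropy from an (H, W, C) softmax probability map."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=-1)

def pool(x, grid):
    """Cut a 2-D map into a grid x grid block layout and min/avg/max-pool it."""
    h, w = x.shape
    b = x[: h - h % grid, : w - w % grid].reshape(grid, h // grid, grid, w // grid)
    return np.stack([b.min(axis=(1, 3)), b.mean(axis=(1, 3)), b.max(axis=(1, 3))])

def kl(p, q):
    """KL divergence between two discrete class distributions."""
    return float(np.sum(p * np.log((p + 1e-12) / (q + 1e-12))))

def region_representation(probs, labeled_dist, unlabeled_dist, grid=4):
    """Pooled entropy maps ('3H') plus the two KL features ('+KL')."""
    pooled = pool(entropy_map(probs), grid).ravel()
    cls = probs.mean(axis=(0, 1))   # region's mean predicted class distribution
    return np.concatenate([pooled, [kl(cls, labeled_dist), kl(cls, unlabeled_dist)]])

# toy region: 32x32 pixels, 19 Cityscapes classes, uniform predictions
C = 19
probs = np.full((32, 32, C), 1.0 / C)
feat = region_representation(probs, np.full(C, 1.0 / C), np.full(C, 1.0 / C))
```

With uniform predictions every pooled entry equals the maximum entropy $\log 19$ and both KL features are zero, which makes the sketch easy to sanity-check.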
| State | Pool size 20 | Pool size 100 | Pool size 200 |
|---|---|---|---|
| U | 54.62 ± 0.60 | 54.92 ± 0.59 | 55.15 ± 0.64 |
| H (Shannon, 1948) | 57.41 ± 0.17 | 57.55 ± 0.60 | 57.48 ± 0.96 |
| B (Gal et al., 2017) | 56.73 ± 0.20 | 56.99 ± 0.32 | 56.44 ± 0.77 |
| Ours - 1H | 56.89 ± 1.22 | 57.29 ± 0.67 | 57.62 ± 0.96 |
| Ours - 3H | 57.65 ± 0.74 | 58.10 ± 1.16 | 57.65 ± 1.30 |
| Ours - 3H+KL | 57.67 ± 0.92 | 58.95 ± 0.59 | 59.18 ± 0.62 |
Table E.1: Contribution to the validation mean IoU performance $[\%]$ on the Cityscapes dataset, for a budget of 4k, of each of the components of our state representation, compared to the baselines. The mean and standard deviation of 5 runs are reported.

# E.2 REGION VS. FULL IMAGE ANNOTATION

In this subsection, we analyze the effect of asking for labels in regions instead of full images, and the effect of the number of regions per step. We compare the validation IoU when asking for pixel-wise labels for entire images versus pixel-wise labels for small regions. In the first case, we ask for one image at each step and, in the latter, we ask for 24 regions per step (pixel-wise, equivalent to one image). As shown in Table E.2, asking for entire-image labels yields similar performance for all methods, resembling the performance of Uniform when asking for region labels. This indicates that, in order to select more informative samples, it is useful to split the images into patches (crops) so that regions containing only over-represented classes of the dataset can be disregarded.

# E.3 INFLUENCE OF STEP REGIONS

Empirically, our selector network is quite robust to the number of regions per step, as seen in Table E.3. We therefore select 24 regions for CamVid, the setting that yielded the best results. This is also more efficient to train than taking one region per step.
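Splitting an image into a fixed grid of non-overlapping regions, as discussed above, can be sketched as follows. The 4×6 layout is our assumption for illustration; the text only specifies 24 square regions that are pixel-wise equivalent to one image.

```python
import numpy as np

def split_into_regions(image, rows=4, cols=6):
    """Split an (H, W, C) image into rows * cols non-overlapping regions.
    4 x 6 = 24 regions is pixel-wise equivalent to labeling the full image."""
    h, w = image.shape[:2]
    rh, rw = h // rows, w // cols
    return [image[r * rh:(r + 1) * rh, c * rw:(c + 1) * rw]
            for r in range(rows) for c in range(cols)]

# toy CamVid-sized image: 360 x 480 x 3 -> 24 regions of 90 x 80 pixels
regions = split_into_regions(np.zeros((360, 480, 3)))
```

The annotation cost is the same as for a full image, but the acquisition function can now keep or discard each 90×80 crop independently.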
| | U | H | B | Ours |
|---|---|---|---|---|
| Full im. | 69.64 ± 0.33 | 69.46 ± 0.15 | 69.66 ± 0.21 | 69.44 ± 0.22 |
| 24 R | 70.35 ± 0.71 | 70.40 ± 0.65 | 70.63 ± 0.77 | 71.85 ± 0.68 |
Table E.2: Comparison between labeling a full image and labeling 24 non-overlapping square regions (pixel-wise, equivalent to a full image), for different methods. Performance is measured as validation mean IoU $[\%]$ on the CamVid dataset, for a budget of $0.5\mathrm{k}$. In the first row ("Full im."), one entire image is labeled at each step (region size equal to the size of the image); in the second row ("24 R"), 24 regions are labeled at each step. The pool size was selected as the one that performed best out of 10, 20, 50 and 100. Results are reported as the mean and standard deviation of 5 runs.
| Regions per step | Val IoU [%] |
|---|---|
| 1 | 71.10 ± 0.75 |
| 12 | 70.93 ± 0.70 |
| 24 | 71.85 ± 0.68 |
| 36 | 71.24 ± 0.49 |
| 48 | 71.25 ± 1.17 |
| 72 | 71.20 ± 0.53 |
Table E.3: Results of varying the number of regions labeled at each step by our method. Performance is measured as validation mean IoU $[\%]$ on the CamVid dataset, for a budget of 0.5k. Results are reported as the mean and standard deviation of 5 runs.
# REINFORCED GENETIC ALGORITHM LEARNING FOR OPTIMIZING COMPUTATION GRAPHS

Aditya Paliwal *

Google Research

adipal@google.com

Felix Gimeno

DeepMind

fgimeno@google.com

Vinod Nair

DeepMind

vinair@google.com

Yujia Li

DeepMind

yujiali@google.com

Miles Lubin

Google Research

mlubin@google.com

Pushmeet Kohli
+DeepMind + +pushmeet@google.com + +Oriol Vinyals + +DeepMind + +vinyals@google.com + +# ABSTRACT + +We present a deep reinforcement learning approach to minimizing the execution cost of neural network computation graphs in an optimizing compiler. Unlike earlier learning-based works that require training the optimizer on the same graph to be optimized, we propose a learning approach that trains an optimizer offline and then generalizes to previously unseen graphs without further training. This allows our approach to produce high-quality execution decisions on real-world TensorFlow graphs in seconds instead of hours. We consider two optimization tasks for computation graphs: minimizing running time and peak memory usage. In comparison to an extensive set of baselines, our approach achieves significant improvements over classical and other learning-based methods on these two tasks. + +# 1 INTRODUCTION + +Deep Learning frameworks such as MXNet (Chen et al., 2015), PyTorch (Paszke et al., 2017), and TensorFlow (TensorFlow Authors, 2016a) represent neural network models as computation graphs. Efficiently executing such graphs requires optimizing discrete decisions about how to map the computations in a graph onto hardware so as to minimize a relevant cost metric (e.g., running time, peak memory). Given that execution efficiency is critical for the success of neural networks, there is growing interest in the use of optimizing static compilers for neural network computation graphs, such as Glow (Rotem et al., 2018), MLIR (MLIR Authors, 2018), TVM (Chen et al., 2018a), and XLA (XLA team, 2017). + +Here we consider the model parallelism setting where a computation graph can be executed using multiple devices in parallel. Nodes of the graph are computational tasks, and directed edges denote dependencies between them. We consider jointly optimizing over placement, i.e., which nodes are executed on which devices, and schedule, i.e., the node execution order on each device. 
These decisions are typically made in either one or two passes in the compiler. We consider two different objectives: 1) minimize running time, subject to not exceeding device memory limits, and 2) minimize peak memory usage. In the optimization literature, such problems are studied under the class of task scheduling, which is known to be NP-hard in typical settings (Sinnen, 2007; Kwok & Ahmad, 1999). + +As scheduling and placement are just a few of the many complex decisions made in a compiler, it is essential in a production setting that a solution 1) produce solutions of acceptable quality fast, even on large graphs (e.g., thousands of nodes) and decision spaces, and 2) handle diverse graphs from various types of applications, neural network architectures, and users. In this work we consider + +![](images/c3037734dd5fd388e5c825b1e0cef2e8275079744497843052bda276bf99e948.jpg) +Fig. 1: Overview of our approach. The Biased Random Key Genetic Algorithm (BRKGA) is used to optimize execution decisions for a computation graph (e.g., placement and scheduling of nodes) with respect to a cost metric (e.g., running time, peak memory) computed using the performance model. BRKGA requires proposal distributions for each node in the graph to generate candidate solutions in its search loop. The default choice is agnostic to the input graph: uniform distribution over $[0,1]$ at all nodes. We use a graph neural network policy to predict node-specific non-uniform proposal distribution choices (parameterized as beta distributions over $[0,1]$ ). BRKGA is then run with those choices and outputs the best solution found by its iteration limit. By controlling the non-uniformity of the distributions, the policy directs how BRKGA's search effort is allocated such that a better solution can be found with the same search budget. + +learning an optimizer that satisfies these requirements. 
Crucially, we aim to learn an optimizer that generalizes to a broad set of previously unseen computation graphs, without the need for training on such graphs, thus allowing it to be fast at test time. + +Previous works on learning to optimize model parallelism decisions (Mirhoseini et al., 2017; 2018; Addanki et al., 2019) have not considered generalization to a broad set of graphs nor joint optimization of placement and scheduling. In Mirhoseini et al. (2017; 2018), learning is done from scratch for each computation graph and for placement decisions only, requiring hours (e.g., 12 to 27 hours per graph). This is too slow to be broadly useful in a general-purpose production compiler. We propose an approach that takes only seconds to optimize similar graphs. In concurrent work to ours, Addanki et al. (2019) shows generalization to unseen graphs, but they are generated artificially by architecture search for a single learning task and dataset. In contrast, we collect real user-defined graphs spanning a broad set of tasks, architectures, and datasets. In addition, both Mirhoseini et al. (2017; 2018) and Addanki et al. (2019) consider only placement decisions and rely on TensorFlow's dynamic scheduler; they do not address the static compiler setting where it is natural to jointly optimize scheduling and placement. + +The key idea of our approach (Figure 1) is to learn a neural network that, conditioned on the input graph to be optimized, directs an existing optimization algorithm's search such that it finds a better solution in the same search budget. We choose the Biased Random-Key Genetic Algorithm (BRKGA (Gonçalves & Resende, 2011)) as the optimization algorithm after an extensive evaluation of several choices showed that it gives by far the best speed-vs-quality trade-off for our application. 
BRKGA produces good solutions in just a few seconds even for real-world TensorFlow graphs with thousands of nodes, and we use learning to improve the solution quality significantly at similar speed. We train a graph neural network (Battaglia et al., 2018) to take a computation graph as input and output node-specific proposal distributions to use in the mutant generation step of BRKGA's inner loop. BRKGA is then run to completion with those input-dependent distribution choices, instead of input-agnostic default choices, to compute execution decisions. The distributions are predicted at each node, resulting in a high-dimensional prediction problem. There is no explicit supervision available, so we use the objective value as a reward signal in a contextual bandit approach with REINFORCE (Williams, 1992). Our approach, "Reinforced Genetic Algorithm Learning" (REGAL), uses the network's ability to generalize to new graphs to significantly improve the solution quality of the genetic algorithm for the same objective evaluation budget. + +We follow the static compiler approach of constructing a coarse static cost model to evaluate execution decisions and optimizing them with respect to it, as done in (Addanki et al., 2018; Jia et al., 2018). This is in contrast to evaluating the cost by executing the computation graph on hardware (Mirhoseini + +et al., 2017; 2018). A computationally cheap cost model enables fast optimization. It is also better suited for distributed training of RL policies since a cost model is cheap to replicate in parallel actors, while hardware environments are not. Our cost model corresponds to classical NP-hard scheduling problems, so optimizing it is difficult. In this paper we focus fully on learning to optimize this cost model, leaving integration with a compiler for future work. 
+ +We structure the neural network's task as predicting proposal distributions to use in the search over execution decisions, rather than the decisions themselves directly. Empirically we have found the direct prediction approach to be too slow at inference time for our application and generalizes poorly. Our approach potentially allows the network to learn a more abstract policy not directly tied to detailed decisions that are specific to particular graphs, which may generalize better to new graphs. It can also make the learning task easier as the search may succeed even with sub-optimal proposal distribution predictions, thus smoothening the reward function and allowing the network to incrementally learn better proposals. The node-specific proposal distribution choices provide a rich set of knobs for the network to flexibly direct the search. Combining learning with a search algorithm has been shown to be successful (e.g., (Silver et al., 2017; 2018)), and our work can be seen as an instance of the same high-level idea. + +This paper makes several contributions: + +- We are the first to demonstrate learning a policy for jointly optimizing placement and scheduling that generalizes to a broad set of real-world TensorFlow graphs. REGAL significantly outperforms all baseline algorithms on two separate tasks of minimizing runtime and peak memory usage (section 5.3) on datasets constructed from 372 unique real-world TensorFlow graphs, the largest dataset of its kind in the literature and at least an order of magnitude larger than the ones in previous works (Mirhoseini et al., 2017; 2018; Chen et al., 2018b; Addanki et al., 2018; 2019). +- We use a graph neural network to predict mutant sampling distributions of a genetic algorithm, specifically BRKGA, for the input graph to be optimized. This directs BRKGA's search in an input-dependent way, improving solution quality for the same search budget. 
+- We compare extensively to classical optimization algorithms, such as enumerative search, local search, genetic search, and other heuristics, and analyze room-for-improvement in the objective value available to be captured via learning. Both are missing in previous works. + +# 2 RELATED WORK + +Learning to optimize computation graphs: AutoTVM (Chen et al., 2018b) applies learning to the very different problem of optimizing low-level implementations of operators in a tensor program, while we focus on optimizing higher-level decisions such as placement and scheduling of ops. Mao et al. (2019) use graph neural nets and RL to learn a scheduling policy for data processing jobs on clusters. These works are conceptually similar to ours in their use of learning, applied to a different domain. + +Learning for combinatorial optimization: Our work is an instance of applying learning for combinatorial optimization (Bengio et al., 2018). Previous works on learning graph combinatorial optimization algorithms (e.g., Li et al. (2018); Khalil et al. (2017)) have focused on problems such as Minimum Vertex Cover, Maximum Clique, Maximum Independent Set, etc. The task scheduling problem we consider is significantly different in that the objective value is a more complex function on node-level decisions. Also, we focus on large-scale, real-world TensorFlow graphs, while e.g., Khalil et al. (2017) uses small-scale, synthetic graph distributions. + +Learning a proposal distribution for stochastic search: Bunel et al. (2017) learns a policy for predicting instance-dependent proposal distributions to be used in the stochastic optimizer STOKE (Schkufza et al., 2013) for superoptimizing programs. However, it uses handcrafted instance features and shows results on relatively simple, small programs. In contrast, we automatically learn the instance representations and show results on real-world graphs. 
An earlier work by Paige & Wood (2016) similarly learns a neural network to predict input-dependent proposal distributions for sequential Monte Carlo search for inference in a graphical model. + +Optimization without learning: Parallel task scheduling (Sinnen, 2007; Kwok & Ahmad, 1999) is a classical problem for scheduling ops in a computational graph to minimize runtime. Learning is not + +traditionally a part of the approaches proposed in this literature. Mayer et al. (2017) studies greedy task scheduling approaches for TensorFlow. Jia et al. (2018) develops a simulation-based optimizer for deep learning computation graphs that uses a larger decision space by combining data, model, and attribute parallelism. Our approach can potentially be extended to such larger decisions spaces to achieve even bigger improvements in execution cost. + +# 3 BACKGROUND + +Figure 1 shows an overview of our approach. Given an input graph to optimize, instead of applying BRKGA directly with the default uniform distribution at all nodes, a graph neural network predicts beta distribution choices at each node. BRKGA is run with these choices to optimize placement and scheduling decisions with respect to the objective defined by the performance model. We first explain the performance model and BRKGA in this section, and the learning component in the next. + +# 3.1 PERFORMANCE MODEL + +A computation graph has a set of ops to run. Each op produces zero or more tensors and requires zero or more tensors as input. The runtime of each op is known and fixed (e.g., given by a simulator as in Jia et al. (2018)). The memory use of each tensor is known (an assumption that holds in static compilers like XLA). We assume a collection of $d$ homogeneous devices that have separate local memory and can run at most one op at a time. An op can run only when its input tensors are present in the local memory. Tensors can be transferred across devices by synchronous (blocking) transfers. 
Tensors are freed from local memory after all local consumers have run. + +In this setting, we consider the problem of finding an assignment of ops to devices and an overall schedule such that each op is run once with the objectives of (1) minimizing the peak local memory use across devices (e.g., to find a feasible way to run a large computation graph), or (2) minimizing the runtime subject to a constraint on the peak memory used on any device. + +The performance model does not consider rematerialization of tensors, fragmentation when computing memory use, and asynchronous transfers between devices. Despite these simplifications, the model yields slight variants of problems that are known to be NP-hard (Eyraud-Dubois et al., 2015) and therefore remains a challenging setting in which to study how to learn an optimizer. See section A.4 for more details of the model. + +# 3.2 BIASED RANDOM-KEY GENETIC ALGORITHM + +Biased random-key genetic algorithm (BRKGA) is a meta-heuristic framework that has been successful in a wide array of applications for solving hard combinatorial optimization problems (Gonçalves & Resende, 2011). In BRKGA, chromosomes in a population are encoded as $n$ -dimensional vectors with entries in [0, 1] for some fixed $n$ . This random-key encoding decouples the application from the genetic algorithm, specifically the crossover and mutant generation procedures (Bean, 1994). + +The BRKGA variant we use is specified by (1) a fitness evaluation function $f:[0,1]^n\to \mathbb{R}$ , (2) scalar integer parameters $\pi$ , $\pi_e$ , and $\pi_c$ representing the population size, number of elites, and number of children, resp., (3) an elite bias $\rho \in [0.5,1.0)$ , and (4) a mutant generation distribution $\mathcal{D}$ over $[0,1]^n$ . The procedure aims to find a chromosome that maximizes $f$ . + +The initial population (a collection of $\pi$ chromosomes) is created by sampling from $\mathcal{D}$ . 
(Known good solutions may also be used to initialize a population.) One evolution step is completed as follows.

1. Sort the chromosomes in order of decreasing fitness using $f$ . Denote the first $\pi_e$ chromosomes as elites and the remaining chromosomes as nonelites.
2. Construct the next generation from three different sources of chromosomes: (a) Copy the elite chromosomes unmodified from the last generation. (b) For each of the $\pi_c$ new children, select two parent chromosomes uniformly at random, one from the nonelites and one from the elites. Apply the crossover procedure (described below) to generate a new chromosome given the two parents. (c) Generate the remaining $\pi - \pi_e - \pi_c$ chromosomes, the mutants, by sampling from $\mathcal{D}$ .

We continue the evolution procedure for a fixed number of evaluations, i.e., calls to $f$ . Given an elite and nonelite chromosome $\mathbf{a}, \mathbf{b} \in [0,1]^n$ (resp.), the crossover procedure produces a child chromosome $\mathbf{c}$ by independently combining entries from the parents. Specifically, for each index $i \in 1, \ldots, n$ independently, let $c_i = a_i$ with probability $\rho$ and $c_i = b_i$ with probability $1 - \rho$ .

Our use of BRKGA is standard except for the mutant-sampling distribution $\mathcal{D}$ , which is usually fixed to the uniform distribution. We generalize BRKGA for instance-specific learning by sampling from $n$ independent beta distributions, whose parameters can vary by index. The beta family flexibly admits non-uniform distribution choices and also subsumes the uniform choice, since $\mathrm{Beta}(1,1)$ is the uniform distribution on $[0,1]$.

Given a computation graph, let $d$ be the number of devices, $o$ the number of ops, and $t$ the number of tensors. We define the chromosome encoding a scheduling solution to have three distinct parts: (1) $o \times d$ entries specifying op-to-device affinities; (2) $o$ entries specifying scheduling priorities for each op; (3) $t \times d$ entries specifying tensor-to-device priorities for transfers that may be needed.
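The evolution step and the index-wise beta mutant sampling described above can be sketched as follows; the toy fitness function and all parameter values are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve(pop, fit, alpha, beta, pi_e=4, pi_c=10, rho=0.7):
    """One BRKGA generation. pop: (pi, n) random keys in [0, 1];
    alpha, beta: (n,) per-index beta parameters for mutant sampling."""
    pi, n = pop.shape
    order = np.argsort([-fit(c) for c in pop])          # decreasing fitness
    elites, nonelites = pop[order[:pi_e]], pop[order[pi_e:]]
    children = []
    for _ in range(pi_c):
        a = elites[rng.integers(len(elites))]           # elite parent
        b = nonelites[rng.integers(len(nonelites))]     # nonelite parent
        children.append(np.where(rng.random(n) < rho, a, b))  # biased crossover
    mutants = rng.beta(alpha, beta, size=(pi - pi_e - pi_c, n))
    return np.vstack([elites, *children, mutants])

# toy fitness: maximize -||c - 0.25||^2 over 10-dimensional chromosomes
fit = lambda c: -np.sum((c - 0.25) ** 2)
pop = rng.random((20, 10))
init_best = max(pop, key=fit)
for _ in range(50):
    pop = evolve(pop, fit, alpha=np.full(10, 2.0), beta=np.full(10, 2.0))
best = max(pop, key=fit)
```

Because elites are copied unmodified, the best fitness in the population is non-decreasing across generations; the learned policy's only entry point is the per-index `(alpha, beta)` pair of the mutant distribution.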
Given a chromosome, op placements are picked by maximum affinity. Transfer ops are created as implied by the placements. We then obtain a schedule by performing a topological sort over the ops given their tensor dependencies, breaking ties by using the corresponding node priorities. Once a schedule is constructed, the performance model is used to evaluate its peak memory and/or runtime. When enforcing a memory constraint, the fitness of a schedule is encoded such that all memory-feasible schedules have better fitness than infeasible schedules. An example is provided in section A.5. + +# 4 REGAL + +# 4.1 CONTEXTUAL BANDIT FORMULATION + +We train a contextual bandit policy that predicts beta distribution choices for each of the nodes of a computation graph to be used by BRKGA to optimize it. For each round in the bandit setting, first the context is observed by drawing a computation graph $G$ as an i.i.d. sample from a distribution $\mathcal{D}$ (e.g., a distribution over TensorFlow graphs). $G$ has a set of nodes $V$ and a set of edges $E$ , with features associated with the nodes and edges. A policy $p(\boldsymbol{a}|G)$ is applied to make a set of decisions at each node. These decisions, denoted $\boldsymbol{a}_v$ for each $v \in V$ , across all nodes form one action $\boldsymbol{a} = \{\boldsymbol{a}_{v \in V}\}$ . One decision in $\boldsymbol{a}$ corresponds to playing one arm of the bandit, and specifying the entire $\boldsymbol{a}$ corresponds to playing several arms together in a single round. This can be viewed as a combinatorial multi-armed bandit problem (Chen et al., 2013). + +The action $\pmb{a}$ specifies all the node-specific beta distributions BRKGA needs to optimize placement and scheduling decisions for $G$ . To enable a policy over discrete choices, we quantize the mean and variance parameters of the beta distribution. The environment then runs BRKGA with those distribution choices with a fixed iteration limit. 
The final objective value is used to compute the reward. To make the reward values comparable across different graphs, we divide the objective value $o_{a}(G)$ achieved on a graph $G$ with action $\pmb{a}$ by the objective value $o_{s}(G)$ achieved by standard BRKGA using uniform distributions. Since we want to minimize the objective (e.g., runtime or peak memory), we define the reward as $r(\pmb{a}, G) = -\frac{o_{a}(G)}{o_{s}(G)}$ . So a reward $> -1$ corresponds to an action that achieves a better objective value than standard BRKGA on a graph. + +We maximize the expected reward $L = \mathbb{E}_G[\sum_a p(\boldsymbol{a}|G)r(\boldsymbol{a},G)]$ , where $\mathbb{E}_G$ is an expectation over graphs in our training set. Learning is done by REINFORCE (Williams, 1992). We added a scalar baseline $b(G)$ to reduce the variance of the gradient estimates. + +# 4.2 GRAPH NEURAL NETWORK POLICY + +From computation graphs, we derive multigraphs with attributed nodes and directed edges. Denote a multigraph $G = (V, E)$ . In our setup, the nodes $V$ correspond 1:1 to the ops. An edge $e \in E$ exists from $u$ to $v$ for each tensor that op $v$ requires that is produced by op $u$ . As a tensor can be required by multiple ops, the correspondence from edges to tensors may be many to one. Each node $v \in V$ and edge $e \in E$ has an attribute vector $x_v$ and $x_e$ . The attributes contain respective features, e.g., sizes of the tensors. + +We learn a model that predicts good mutant sampling distributions for BRKGA given this multigraph. Each node has $d + 1$ independent beta distributions, corresponding to device affinities and scheduling + +priorities, whose parameters are represented as a vector $\mathbf{a}_v$ . These are the model's actions in RL terms, and our model specifies a distribution over actions $\mathbf{a} = \{\mathbf{a}_v\}_{v \in V}$ for each graph, $p(\mathbf{a}|G)$ . Note the action space is different from graph to graph. 
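The normalized reward from Section 4.1 can be written down directly; the two objective values would come from running BRKGA with the predicted and the uniform proposal distributions (the numbers below are purely illustrative).

```python
def reward(obj_with_policy: float, obj_with_uniform: float) -> float:
    # r(a, G) = -o_a(G) / o_s(G); a reward > -1 means the learned proposal
    # distributions beat standard (uniform) BRKGA on this graph
    return -obj_with_policy / obj_with_uniform

# e.g. predicted distributions reach runtime 80 (arbitrary units)
# where uniform BRKGA reaches 100 on the same graph
r = reward(80.0, 100.0)   # -0.8, better than the -1.0 of standard BRKGA
```

Dividing by the uniform-BRKGA objective keeps rewards on a comparable scale across graphs whose raw runtimes or peak-memory values differ by orders of magnitude.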
We use Graph Neural Networks (GNNs) (Scarselli et al., 2009; Li et al., 2015; Gilmer et al., 2017; Battaglia et al., 2018) to learn representations for computation graphs. Given a (multi)graph $G$ , a GNN computes representation vectors $h_v$ for each node through an iterative message passing process as follows:

$$
\begin{array}{c}
\boldsymbol{h}_v^{(0)} = \mathrm{MLP}_n(\boldsymbol{x}_v), \quad \boldsymbol{h}_e = \mathrm{MLP}_e(\boldsymbol{x}_e) \\
\boldsymbol{m}_e^{(t)} = \mathrm{MLP}_{\mathrm{msg}}\left(\left[\boldsymbol{h}_{e_s}^{(t)}, \boldsymbol{h}_{e_t}^{(t)}, \boldsymbol{h}_e\right]\right), \quad \boldsymbol{m}_e^{(t)\prime} = \mathrm{MLP}_{\mathrm{msg}}^{\prime}\left(\left[\boldsymbol{h}_{e_s}^{(t)}, \boldsymbol{h}_{e_t}^{(t)}, \boldsymbol{h}_e\right]\right) \tag{1} \\
\boldsymbol{h}_v^{(t+1)} = \mathrm{MLP}_{\mathrm{node}}\left(\left[\boldsymbol{h}_v^{(t)}, \sum_{e: e_t = v} \boldsymbol{m}_e^{(t)} + \sum_{e: e_s = v} \boldsymbol{m}_e^{(t)\prime}\right]\right)
\end{array}
$$

where $e_s$ is the source node of edge $e$ and $e_t$ is the target node. In our formulation, $\mathrm{MLP}_n$ and $\mathrm{MLP}_e$ are multilayer perceptrons (MLPs) that encode node and edge attributes, $\mathrm{MLP}_{\mathrm{msg}}$ and $\mathrm{MLP}_{\mathrm{msg}}'$ compute messages along the edges in the edge direction $(\boldsymbol{m}_e^{(t)})$ and the opposite direction $(\boldsymbol{m}_e^{(t)\prime})$ , $\mathrm{MLP}_{\mathrm{node}}$ updates node representations, and $[\cdot]$ represents flat vector concatenation. After $T$ rounds of message passing, the representation for each node $h_v = h_v^{(T)}$ will contain information from the $T$ -hop neighborhood around $v$ in the graph.
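A minimal numpy sketch of this message-passing scheme, with single-layer stand-ins for the MLPs; the hidden size, the node/edge feature dimensions, and the 6-way action quantization are arbitrary illustrative choices, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(d_in, d_out, linear=False):
    """One random layer (ReLU unless linear) as a stand-in for a trained MLP."""
    W = rng.normal(0.0, 0.1, (d_in, d_out))
    return (lambda x: x @ W) if linear else (lambda x: np.maximum(x @ W, 0.0))

D = 8                                        # hidden size, arbitrary here
enc_n, enc_e = mlp(4, D), mlp(3, D)          # MLP_n, MLP_e (toy feature dims)
msg_f, msg_b = mlp(3 * D, D), mlp(3 * D, D)  # MLP_msg and MLP'_msg
upd = mlp(2 * D, D)                          # MLP_node
act = mlp(D, 6, linear=True)                 # MLP_a -> logits over quantized actions

def gnn(x_nodes, x_edges, edges, T=3):
    """edges: list of (source, target) index pairs; parallel edges allowed."""
    h, he = enc_n(x_nodes), enc_e(x_edges)        # h_v^(0), h_e
    for _ in range(T):
        m = np.zeros_like(h)
        for k, (s, t) in enumerate(edges):
            cat = np.concatenate([h[s], h[t], he[k]])
            m[t] += msg_f(cat)                    # message along the edge
            m[s] += msg_b(cat)                    # message in the reverse direction
        h = upd(np.concatenate([h, m], axis=1))   # h_v^(t+1)
    return act(h)                                 # conditionally independent per-node logits

# tiny 3-op chain 0 -> 1 -> 2 with a duplicated edge (multigraph)
logits = gnn(rng.random((3, 4)), rng.random((3, 3)), [(0, 1), (1, 2), (0, 1)])
```

Each row of `logits` parameterizes one node's softmax over quantized beta-distribution choices, mirroring the per-node factorization of $p(\boldsymbol{a}|G)$.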
Given the $h_v$'s, we produce the $a_v$'s through conditionally independent predictions, where the prediction for one node $v$ does not depend on the predictions of other nodes given the computed representations:

$$
p(\boldsymbol{a}|G) = \prod_v p\left(\boldsymbol{a}_v | G\right) = \prod_v p\left(\boldsymbol{a}_v | \mathrm{MLP}_a\left(\boldsymbol{h}_v\right)\right). \tag{2}
$$

$\mathrm{MLP}_a$ is shared across all nodes for predicting the parameters of the output distributions. In our experiments, we quantize the continuous beta distribution parameters and use a discrete action space. The outputs are therefore categorical, and we use the MLP to compute the logits of the corresponding softmax distributions. More details are included in section A.6. The baseline is computed using a separate GNN: after obtaining the node representations $\pmb{h}_v$, we aggregate across nodes and compute $b(G) = \mathrm{MLP}_b\left(\frac{1}{|V|}\sum_v\mathrm{MLP}_g(\pmb{h}_v)\right)$.

# 5 EXPERIMENTAL RESULTS

# 5.1 TASKS AND DATASETS

We consider two tasks, minimizing peak memory and minimizing running time, both on two homogeneous devices with 16 GiB of memory each and synchronous tensor transfers with zero cost (zero latency and infinite bandwidth). We train a separate neural network for each task-dataset pair for the case of two devices.

We have collected a dataset of 372 topologically-unique real-world TensorFlow graphs by mining machine learning jobs on Google's internal compute cluster (see A.1.2). These jobs come from a wide range of production and research use cases. The dataset is split into {train, valid, test} sets containing $\{60\%, 20\%, 20\%\}$ of the graphs, respectively. These sets are disjoint with respect to graph topology, so at test time the policy needs to generalize to new topologies.
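A topology-disjoint split like the one above can be sketched as follows. This is our illustrative sketch, not the paper's pipeline: `topo_key` is a hypothetical function returning a hashable topology signature, and all variants sharing a signature are kept in the same split.

```python
import random
from collections import defaultdict

def topology_disjoint_split(graphs, topo_key, fractions=(0.6, 0.2, 0.2), seed=0):
    """Split graphs into train/valid/test so no topology appears in two sets."""
    groups = defaultdict(list)
    for g in graphs:
        groups[topo_key(g)].append(g)  # variants of one topology stay together
    keys = sorted(groups)
    random.Random(seed).shuffle(keys)
    n = len(keys)
    cut1 = int(fractions[0] * n)
    cut2 = cut1 + int(fractions[1] * n)
    parts = (keys[:cut1], keys[cut1:cut2], keys[cut2:])
    return tuple([g for k in part for g in groups[k]] for part in parts)
```

Splitting by topology signature rather than by individual graph is what makes the test-time generalization claim meaningful: no test topology is ever seen during training.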
We augment the dataset by applying multiplicative noise to tensor sizes and op running times to create several variants per graph. Even though variants of the same graph share the same topology, they represent different optimization problem instances. We create separate datasets for minimizing runtime and peak memory. The TF runtime dataset has 16329 training, 5470 validation, and 5266 test graphs. The TF peak memory dataset has 22400 training, 7400 validation, and 7400 test graphs.

For reproducibility, we have released a synthetic dataset of computation graphs with 10000 training, 1000 validation, and 1000 test cases. The graph topologies are generated from several classical random graph models, and the op running times and tensor sizes are sampled from Gaussian distributions (see A.1.4). On this dataset we minimize running time without a memory constraint (e.g., on two homogeneous devices with infinite memory).

# 5.2 BASELINES

Graph Partitioning + Depth First Search (GP+DFS): Combines a graph partitioning (GP) baseline for device placement to minimize communication across devices and a Depth-First Search heuristic similar to the one implemented in XLA (TensorFlow Authors, 2016b) to compute per-device schedules given placements. This is representative of the XLA compiler's solution for model parallelism.

Local Search: The method starts with a random placement and schedule and greedily improves it by moving an op across devices or changing an op's order in the current schedule.

Graph-As-Sequence Model (GAS): Like Mirhoseini et al. (2017; 2018), we convert the graph into a sequence using a topological sort and apply a recurrent neural network to predict node-level distributions to be used by BRKGA. This comparison measures the usefulness of graph structure for learning.

BRKGA $XK$: Run BRKGA with a limit of $X$ thousand fitness evaluations, using uniform sampling distributions and default hyperparameters consistent with Gonçalves & Resende (2011).
This comparison measures the performance of the default version of BRKGA.

Tuned BRKGA: Apply grid search to BRKGA's hyperparameters on the training set and pick the best. This represents how well BRKGA performs when customized to the distribution of computation graphs, but without instance-dependent customization.

Instance-dependent Random Search (IDRS): Same as REGAL, but with BRKGA replaced by random search. This is done by running BRKGA for only one generation using the proposal distributions computed by the neural network.

Additionally, we use a Constraint Programming (CP) approach with the CP-SAT solver of Google OR-Tools (Google, 2019) to establish a provably global optimum for each computation graph optimization problem instance by running for up to 24 hours. As an enumerative algorithm, it is generally not competitive when run for only seconds.

For a fair comparison, we fix the number of performance model evaluations allowed per graph to be the same across algorithms. (Except $\mathrm{GP + DFS}$, which does not allow fixing it.) Given typical TensorFlow graph sizes and compiler running time constraints, we estimate that a budget of 5000 evaluations is feasible in practice, so we use that in the experiments.

Learning to directly predict a solution: We explored two more approaches for training a graph neural network to predict placement and scheduling solutions directly, without BRKGA. We used supervised learning to train a network to predict BRKGA's solutions. The best accuracy was achieved by predicting the solution autoregressively, one variable at a time conditioned on previously predicted variables. We also used RL to learn a policy with IMPALA (Espeholt et al., 2018) to optimize the objective value by incrementally predicting the solution one variable at a time and, once complete, iteratively improving it with a learned local search policy.
The inference cost for both approaches is quadratic in the number of nodes (the graph net is applied a linear number of times, each with linear cost), while REGAL's inference cost is linear, making them orders of magnitude slower than REGAL at test time. An evaluation on a test set of small graphs showed that neither approach improves on BRKGA 5K. Improving the scalability and generalization of these approaches is left as future work, and we do not present their results here.

# 5.3 COMPARISON TO BASELINE ALGORITHMS

We use two metrics to compare algorithms. 1) Average percent improvement over BRKGA 5K: for a given graph, compute the percent improvement in the objective achieved by an algorithm relative to BRKGA with its evaluation limit set to 5000. BRKGA 5K is a natural reference for measuring the effect of learning approaches that predict proposal distributions for it. 2) Average percent gap from best known solution: compute the best known objective value among all the algorithms (found by CP-SAT if it finishes within the time limit), then compute the percent difference between an algorithm's solution and this best known value. We report averages over test set graphs. Training set results are similar and reported in section A.3.

Table 1 compares REGAL to other algorithms on the two TensorFlow test sets and the synthetic dataset. REGAL outperforms all the baselines on all three tasks. It gives $1.9 \times$ and $4.4 \times$ bigger improvements

Table 1: Comparison of methods on the TensorFlow and Synthetic test sets. Results are averages over test set graphs. Higher is better for % Improvement over BRKGA 5K, and lower is better for % Gap from best known. Note: CP-SAT, an enumerative algorithm, is run for up to 24 hours only to establish provably global optima (if possible) for evaluation purposes.
| Algorithm | TF Runtime: % Improv. over BRKGA 5K | TF Runtime: % Gap from best known | TF Peak Memory: % Improv. over BRKGA 5K | TF Peak Memory: % Gap from best known | Synthetic Runtime: % Improv. over BRKGA 5K | Synthetic Runtime: % Gap from best known |
|---|---|---|---|---|---|---|
| CP-SAT 24hr | 15.85% | 1.00% | -1.48% | 8.06% | 19.50% | 0.00% |
| GP + DFS | -37.32% | 66.98% | -6.51% | 14.77% | -55.8% | 93.66% |
| Local Search | -1.66% | 22.63% | 0.63% | 7.24% | 0.08% | 24.60% |
| BRKGA 5K | 0.00% | 20.19% | 0.00% | 7.98% | 0.00% | 24.63% |
| Tuned BRKGA | 3.20% | 16.40% | 0.80% | 7.11% | 3.11% | 20.76% |
| GAS | 3.79% | 15.24% | 0.16% | 7.67% | 0.80% | 23.48% |
| IDRS | -6.87% | 28.60% | -3.16% | 12.39% | -12.12% | 39.72% |
| REGAL | 7.09% | 11.04% | 3.56% | 4.44% | 4.81% | 18.57% |
than the next best algorithm on runtime and peak memory minimization tasks, respectively. The percent improvement over $\mathrm{GP}+\mathrm{DFS}$ is $44.4\%$ and $10.1\%$ for runtime and peak memory, respectively. REGAL reduces the average percent gap from the best known solution by about $1.8\times$ with respect to BRKGA 5K on both TensorFlow test sets, and by about $6\times$ and $3.3\times$ with respect to $\mathrm{GP}+\mathrm{DFS}$ on the TensorFlow Runtime and Peak Memory test sets, respectively. (For an XLA user, $\mathrm{GP}+\mathrm{DFS}$ is the current, albeit weak, state-of-the-art algorithm.) The synthetic test set shows similar results. The learned policy successfully generalizes to previously unseen graphs, to the extent that a large fraction of the estimated room for improvement over BRKGA 5K is captured using the same evaluation limit.

To further test the limits of generalization of the policies learned with REGAL, we evaluate them on XLA graphs from a production compiler team's internal performance benchmark. XLA uses a different set of ops from TensorFlow, and the benchmark graphs on average have about an order of magnitude more nodes and edges than the TensorFlow graphs in our training set, so this is a difficult generalization challenge. REGAL achieves $0.58\%$ average runtime improvement over BRKGA 5K on 94 graphs, and $3.74\%$ average peak memory improvement on 32 graphs. It is promising that any improvements are possible at all despite training only on TensorFlow graphs, and this points to the possibility of bigger improvements from training directly on XLA graphs.

Optimizer running times: BRKGA 5K takes on average 0.89 seconds on the TensorFlow Peak Memory test set to optimize a computation graph, while REGAL takes 1.04 seconds. (The times are similar on the Runtime test set.) Instead of taking hours to compute a solution per graph (e.g., Mirhoseini et al.
(2017; 2018)), REGAL produces solutions in orders of magnitude less time, while still being better than all the baselines.

# 5.4 COMPARING REGAL VS. BRKGA

Figure 2 shows histograms of the percent improvements in runtime (left) and peak memory (right) achieved by REGAL over BRKGA 5K on the test sets. Green bars correspond to graphs on which REGAL improved over BRKGA 5K, while red bars correspond to graphs on which REGAL was worse. (Ties have been omitted for clarity.) REGAL matches or beats BRKGA 5K on $87.4\%$ of the runtime test set, and $88.9\%$ of the peak memory test set. The highest improvement is $26.0\%$ for runtime and $54.3\%$ for peak memory, while the worst regression is $24.0\%$ for runtime and $17.9\%$ for peak memory.

To assess whether the improvements provided by REGAL's policy generalize to evaluation limits other than the one for which it was trained, we varied the evaluation limit used by both BRKGA and REGAL at test time. The results are shown in Figure 3. REGAL's performance improves with more evaluations, confirming that the policy generalizes to higher evaluation limits. In other words, there exist node-level choices for the distributions used in BRKGA that perform well regardless of the evaluation limit, and REGAL learns to predict those choices. This is particularly useful in cases where the actual evaluation limit is known only at test time, so that the same policy can be

![](images/829c77c0fa3df1ab183ee58ea5f64b85494e94d7736790ad1cc6793b6fce1f24.jpg)
Fig. 2: Histogram of percent improvements in objective value on the TensorFlow runtime (left) and peak memory (right) datasets for test graphs on which REGAL is better (green) and worse (red) than BRKGA. (Ties are omitted from the figure for clarity but are included in the histogram percentage calculation.)
![](images/e169680ab1d1b5f0fe926a146e10e362f4d72cdf126d3b6f91720b4c9b5b7ca5.jpg)

![](images/9edd18c543309ed7c36abb3f620cfcaf39e487756c0a6561a090b0d1d6df2423.jpg)
Fig. 3: Average percent improvement over BRKGA 5K given by REGAL and BRKGA on the TensorFlow test set for running time (left) and peak memory (right) as the evaluation limit is increased.

![](images/3ab3e870828c4d24fd219c19aa7807da6a35cc330e7adf3645c831543ddab879.jpg)

applied without re-training. Interestingly, even with 50K evaluations, BRKGA is not able to match REGAL's performance with just 5K evaluations!

# 5.5 GRAPH-DEPENDENT POLICY

The RL agent's actions are instance-dependent. The agent that performs best on the TF Runtime dataset has a choice of 16 different node placement actions for each node in a graph. For each graph in the TF Runtime test set, we compute the entropy of the distribution of the node placement actions taken by the agent and plot a histogram of these entropies in Figure 4(a). The mean of this distribution is 1.71 nats, which implies that the actions are neither uniform random nor constant, and vary from graph to graph.

Furthermore, the agent's performance generally improves with more graph message passing iterations $T$. Figure 4(b) shows the peak validation reward reached within a hyperparameter sweep for each $T$ on the TF runtime optimization task. Models that utilize the GNN with message passing ($T > 0$) reach higher performance than $T = 0$ (i.e., ignoring the graph structure).
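The per-graph statistic behind Figure 4(a) can be computed as follows. This is a sketch under our own assumptions: `actions` is a hypothetical list of the discrete placement actions the agent took on one graph, and the entropy is that of the empirical action distribution, in nats.

```python
import math
from collections import Counter

def action_entropy_nats(actions):
    """Entropy (in nats) of the empirical distribution of per-node actions on one graph."""
    counts = Counter(actions)
    total = len(actions)
    return -sum((c / total) * math.log(c / total) for c in counts.values())
```

A constant policy gives entropy 0, and a uniform policy over the 16 placement actions gives $\ln 16 \approx 2.77$ nats, so the reported mean of 1.71 nats sits strictly between the two extremes.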
# 6 CONCLUSIONS AND FUTURE WORK

By training a graph neural network policy to predict graph-conditional node-level distributions for BRKGA, REGAL successfully generalizes to new graphs, significantly outperforms all baselines in solution quality, and computes solutions in about one second on average per TensorFlow test set

![](images/d35c59725deea425c9734d12663d690cd300fef7c48198b09c73c73d11c6fc14.jpg)
(a)

![](images/4783fc8293fbb2fbf9f88c6011a8b63018b27e4883ccb91ce48a1d97611871aa.jpg)
(b)
Fig. 4: The agent learns to utilize the graph structure. (a) The TF runtime agent picks a diverse set of actions. This plot shows the histogram of the entropy of the agent's actions across graphs in the dataset. (b) This plot shows the best validation reward achieved within a sweep of hyperparameters for each number of graph message passing rounds $T$. Performance generally improves as $T$ increases, and models with $T > 0$ perform better than $T = 0$, which does not utilize the structure.

graph. REGAL's speed and generalization make it a strong choice for use in a production compiler that needs to handle a diverse set of graphs under a limited time budget.

We foresee several extensions. Integrating REGAL into a neural network compiler would allow us to evaluate the end-to-end gains from better placement and scheduling decisions. To further improve REGAL's own performance, one could use a Mixture of Experts architecture: given the diversity of graphs, a mixture model can train specialized sub-models on different types of graphs (e.g., convolutional networks, recurrent networks, etc.). Another direction is to replace BRKGA with alternatives, e.g., combining learned neural policies with local search.
+ +# ACKNOWLEDGMENTS + +The authors would like to thank Ross Anderson, David Applegate, and Peter Hawkins for a significant amount of infrastructure on which this work builds, including the BRKGA and CP baselines, and Peter Ma, HyoukJoong Lee, and Peter Dolan for help with data collection. + +# REFERENCES + +Ravichandra Addanki, Shaileshh Venkatakrishnan, Shreyan Gupta, Hongzi Mao, and Mohammad Alizadeh. Placeto: Efficient progressive device placement optimization. In Workshop on ML for Systems at NeurIPS 2018, 2018. +Ravichandra Addanki, Shaileshh Bojja Venkatakrishnan, Shreyan Gupta, Hongzi Mao, and Mohammad Alizadeh. Placeto: Learning generalizable device placement algorithms for distributed machine learning. In Proceedings of NeurIPS '19, 2019. +Albert-László Barabási and Réka Albert. Emergence of scaling in random networks. science, 286 (5439):509-512, 1999. +Peter W. Battaglia, Jessica B. Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Flores Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, Caglar Gulçehre, Francis Song, Andrew J. Ballard, Justin Gilmer, George E. Dahl, Ashish Vaswani, Kelsey Allen, Charles Nash, Victoria Langston, Chris Dyer, Nicolas Heess, Daan Wierstra, Pushmeet Kohli, Matthew Botvinick, Oriol Vinyals, Yujia Li, and Razvan Pascanu. Relational inductive biases, deep learning, and graph networks. CoRR, 2018. URL http://arxiv.org/abs/1806.01261. +James C. Bean. Genetic algorithms and random keys for sequencing and optimization. ORSA Journal on Computing, 6(2):154-160, 1994. doi: 10.1287/ijoc.6.2.154. URL https://doi.org/10.1287/ijoc.6.2.154. + +Yoshua Bengio, Andrea Lodi, and Antoine Prouvost. Machine learning for combinatorial optimization: a methodological tour d'horizon. CoRR, abs/1811.06128, 2018. URL http://arxiv.org/abs/1811.06128. +Rudy Bunel, Alban Desmaison, M. Pawan Kumar, Philip H. S. Torr, and Pushmeet Kohli. Learning to superoptimize programs. 
In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=r1rz6U5lg. +Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. Mxnet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv preprint arXiv:1512.01274, 2015. +Tianqi Chen, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Meghan Cowan, Haichen Shen, Leyuan Wang, Yuwei Hu, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy. Tvm: An automated end-to-end optimizing compiler for deep learning. In OSDI 2018, 4 2018a. URL https://arxiv.org/abs/1802.04799. +Tianqi Chen, Lianmin Zheng, Eddie Yan, Ziheng Jiang, Thierry Moreau, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy. Learning to optimize tensor programs. In Neural Information Processing Systems 2018, 5 2018b. URL https://arxiv.org/abs/1805.08166. +Wei Chen, Yajun Wang, and Yang Yuan. Combinatorial multi-armed bandit: General framework and applications. In Proceedings of the 30th International Conference on Machine Learning, volume 28 of Proceedings of Machine Learning Research, pp. 151–159, Atlanta, Georgia, USA, 17–19 Jun 2013. PMLR. URL http://proceedings.mlr.press/v28/chen13a.html. +Paul Erdos and Alfred Rényi. On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci, 5(1):17-60, 1960. +Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymir Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, et al. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. In Proceedings of the International Conference on Machine Learning (ICML), 2018. +Lionel Eyraud-Dubois, Loris Marchal, Oliver Sinnen, and Frédéric Vivien. Parallel scheduling of task trees with limited memory. ACM Trans. Parallel Comput., 2(2):13:1-13:37, June 2015. ISSN 2329-4949. doi: 10.1145/2779052. URL http://doi.acm.org/10.1145/2779052. 
+Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. arXiv preprint arXiv:1704.01212, 2017. +Jose Fernando Gonçalves and Maurizio G. Resende. Biased random-key genetic algorithms for combinatorial optimization. Journal of Heuristics, 17(5):487-525, October 2011. ISSN 1381-1231. doi: 10.1007/s10732-010-9143-1. URL http://dx.doi.org/10.1007/s10732-010-9143-1. +Google. CP-SAT solver. https://developers.google.com/optimization/cp/cpSolver, 2019. [Online; accessed 21-January-2019]. +Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735-1780, 1997. +Paul W Holland, Kathryn Blackmond Laskey, and Samuel Leinhardt. Stochastic blockmodels: First steps. Social networks, 5(2):109-137, 1983. +Zhihao Jia, Matei Zaharia, and Alex Aiken. Beyond data and model parallelism for deep neural networks. CoRR, 2018. URL http://arxiv.org/abs/1807.05358. +G. Karypis and V. Kumar. A fast and high quality multilevel scheme for partitioning irregular graphs. SIAM Journal on Scientific Computing, 20(1):359-392, 1998. doi: 10.1137/S1064827595287997. +B. W. Kernighan and S. Lin. An efficient heuristic procedure for partitioning graphs. The Bell System Technical Journal, 49(2):291-307, Feb 1970. ISSN 0005-8580. doi: 10.1002/j.1538-7305.1970.tb01770.x. + +Elias Khalil, Hanjun Dai, Yuyu Zhang, Bistra Dilkina, and Le Song. Learning combinatorial optimization algorithms over graphs. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 6348-6358. 2017. +Yu-Kwong Kwok and Ishfaq Ahmad. Static scheduling algorithms for allocating directed task graphs to multiprocessors. ACM Comput. Surv., 31(4):406-471, December 1999. ISSN 0360-0300. +Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493, 2015. 
+Zhuwen Li, Qifeng Chen, and Vladlen Koltun. Combinatorial optimization with graph convolutional networks and guided tree search. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 539-548. Curran Associates, Inc., 2018. URL http://papers.nips.cc/paper/7335-combinatorial-optimization-with-graph-convolutional-networks-and-guided-tree-search.pdf. +Hongzi Mao, Malte Schwarzkopf, Shaileshh Bojja Venkatakrishnan, Zili Meng, and Mohammad Alizadeh. Learning scheduling algorithms for data processing clusters. In Proceedings of the ACM Special Interest Group on Data Communication, SIGCOMM '19, pp. 270-288, New York, NY, USA, 2019. ACM. ISBN 978-1-4503-5956-6. doi: 10.1145/3341302.3342080. URL http://doi.acm.org/10.1145/3341302.3342080. +Ruben Mayer, Christian Mayer, and Larissa Laich. The tensorflow partitioning and scheduling problem: It's the critical path! In Proceedings of the 1st Workshop on Distributed Infrastructures for Deep Learning, DIDL '17, pp. 1-6, New York, NY, USA, 2017. ACM. ISBN 978-1-4503-5169-0. doi: 10.1145/3154842.3154843. URL http://doi.acm.org/10.1145/3154842.3154843. +Azalia Mirhoseini, Hieu Pham, Quoc V. Le, Benoit Steiner, Rasmus Larsen, Yuefeng Zhou, Naveen Kumar, Mohammad Norouzi, Samy Bengio, and Jeff Dean. Device placement optimization with reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pp. 2430-2439, 2017. URL http://proceedings.mlr.press/v70/mirhoseini17a.html. +Azalia Mirhoseini, Anna Goldie, Hieu Pham, Benoit Steiner, Quoc V. Le, and Jeff Dean. A hierarchical model for device placement. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=Hkc-TeZ0W. +MLIR Authors. Multi-level intermediate representation. https://github.com/tensorflow/mlir, 2018. Accessed: 2019-05-22. +Brooks Paige and Frank Wood. 
Inference networks for sequential monte carlo in graphical models. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, 2016. +Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning, pp. 1310-1318, 2013. +Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017. +Nadav Rotem, Jordan Fix, Saleem Abdulrasool, Summer Deng, Roman Dzhabarov, James Hegeman, Roman Levenstein, Bert Maher, Nadathur Satish, Jakob Olesen, Jongsoo Park, Artem Rakhov, and Misha Smelyanskiy. Glow: Graph lowering compiler techniques for neural networks. CoRR, 2018. URL http://arxiv.org/abs/1805.00907. +Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80, 2009. +Eric Schkufza, Rahul Sharma, and Alex Aiken. Stochastic superoptimization. 2013. + +David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, and Demis Hassabis. Mastering the game of go without human knowledge. Nature, 550, 2017. +David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis. A general reinforcement learning algorithm that masters chess, shogi, and go through self-play. Science, 362(6419):1140-1144, 2018. ISSN 0036-8075. doi: 10.1126/science.aar6404. URL https://science.sciencemag.org/content/362/6419/1140. +O. Sinnen. Task Scheduling for Parallel Systems. 
Wiley Series on Parallel and Distributed Computing. Wiley, 2007. ISBN 9780470121160.
TensorFlow Authors. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. CoRR, abs/1603.04467, 2016a. URL http://arxiv.org/abs/1603.04467.
TensorFlow Authors. hlo_memory_scheduler. https://github.com/tensorflow/tensorflow/blob/4bfa2359152e9d106c2c20e9fff67643c8578c81/tensorflow/compiler/xla/service/hlo_memory_scheduler.h#L53, 2016b. Accessed: 2019-01-25.
Duncan J Watts and Steven H Strogatz. Collective dynamics of 'small-world' networks. Nature, 393(6684):440, 1998.
Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Mach. Learn., 8(3-4):229-256, 1992.
XLA team. XLA - TensorFlow, compiled. Post in the Google Developers Blog. http://web.archive.org/web/20170308172654/https://developers.googleblog.com/2017/03/xla-tensorflow-compiled.html, 2017.
Yuan Yu, Martin Abadi, Paul Barham, Eugene Brevdo, Mike Burrows, Andy Davis, Jeff Dean, Sanjay Ghemawat, Tim Harley, Peter Hawkins, Michael Isard, Manjunath Kudlur, Rajat Monga, Derek Gordon Murray, and Xiaoqiang Zheng. Dynamic control flow in large-scale machine learning. CoRR, 2018. URL http://arxiv.org/abs/1805.01772.

![](images/e94ab54164fa0c38292a19127e6dc1b59f2cbedcd234120e98eeb95b42b955a7.jpg)
Fig. 5: Histograms of number of nodes (left) and edges (right) for the different datasets. The $y$-axis shows the percentage of graphs.

![](images/1b7f26c50b7f08b4d06c46ea73ad79937d76fd6c44259e5d222b0223c55e4b39.jpg)

![](images/f11f8aff760e43fd61f1b4460b8f67b47f6aca15a09d7488079fc6ae7dc5c2b0.jpg)
Fig. 6: Histogram of diameters of the graphs in the TF Runtime and TF Memory datasets.

# A APPENDIX

# A.1 DATASETS

# A.1.1 DATASET STATISTICS

Figures 5 and 6 give statistics for the number of nodes and edges, and the diameters, of the graphs in the datasets. The broad range of graph sizes indicates the diversity of the datasets.
# A.1.2 TENSORFLOW DATASET

We collected a dataset by mining TensorFlow jobs running in a shared production cluster and extracting computation graphs in the MetaGraphDef format. As many computation graphs were repeated due to device/machine/job replicas, we de-duplicate the dataset by graph topology (specifically, by node in-degree sequence). We have not applied any other kind of filtering to restrict the dataset (e.g., by architecture, input modality, learning task, or dataset). Since the graphs were collected from a large and diverse set of production and research use cases across input modalities, learning types, and datasets, we strongly believe our dataset is representative of a broad, real-world distribution of TensorFlow graphs.

Computational costs for these computation graphs are simulated with an in-house simulator (based on Grappler) that outputs profiled memory and running time information in the CostGraphDef format. The simulator's TensorFlow op coverage did not include custom kernels or complicated control flow (Yu et al., 2018) such as cycles (e.g., tf.while_loop).

The train-validation-test split is made by selecting successive sets of 5 graphs from a list of graphs sorted by number of nodes, and splitting each set 3-1-1 across the three sets, respectively. This ensures that the distribution of the number of nodes is similar across the three sets.

For each graph in the train/validation/test sets, we make 99 copies of it and multiply each tensor size and each op running time cost by a uniformly sampled number in the interval (0.5, 1.5) (one sample per tensor size per copy plus one sample per TF op per copy). The modified copies are added back to the respective set so that the graph topologies in train/validation/test do not overlap.

Graphs with no relevant cost information or no room for improvement (e.g., a single-node graph, or a chain in the runtime minimization task) are filtered out.
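The de-duplication and augmentation steps above can be sketched as follows. This is an illustrative sketch using a hypothetical in-memory representation, a `(num_nodes, edge_list)` pair plus a cost dict, rather than the pipeline's actual CostGraphDef handling:

```python
import random

def indegree_signature(num_nodes, edges):
    """Topology signature for de-duplication: the sorted node in-degree sequence.

    Note this is a coarse signature (as in the paper): distinct topologies can
    share a signature, so de-duplication is conservative.
    """
    indeg = [0] * num_nodes
    for _, target in edges:
        indeg[target] += 1
    return tuple(sorted(indeg))

def deduplicate(graphs):
    """Keep one graph per in-degree signature; each graph is (num_nodes, edge_list)."""
    seen, kept = set(), []
    for num_nodes, edges in graphs:
        sig = indegree_signature(num_nodes, edges)
        if sig not in seen:
            seen.add(sig)
            kept.append((num_nodes, edges))
    return kept

def augment(costs, num_copies=99, lo=0.5, hi=1.5, seed=0):
    """Make noisy copies: scale each tensor size and each op time by an
    independent Uniform(lo, hi) sample, one sample per quantity per copy."""
    rnd = random.Random(seed)
    return [
        {
            "tensor_sizes": [s * rnd.uniform(lo, hi) for s in costs["tensor_sizes"]],
            "op_times": [t * rnd.uniform(lo, hi) for t in costs["op_times"]],
        }
        for _ in range(num_copies)
    ]
```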
This results in two different datasets for the two objective functions, one for runtime and one for peak memory.

The encoding is described in A.2.

# A.1.3 XLA DATASET

We also collected a dataset by extracting CostGraphDefs during the XLA compilation of several benchmark graphs, and extracted the largest control-flow-free subgraph of each.

After de-duplication, 94 graphs remained.

The encoding is described in A.2.

# A.1.4 SYNTHETIC DATASET

We sample synthetic graphs from a set of classic random graph models, including the Erdos-Renyi model (Erdos & Renyi, 1960), the Barabasi-Albert model (Barabasi & Albert, 1999), the Watts-Strogatz model (Watts & Strogatz, 1998), and the stochastic block model (Holland et al., 1983). The parameters we used for each of the random graph models are listed in Table 2.
| Model Type | Parameters |
|---|---|
| Erdos-Renyi | Edge probability $p = 0.05$. |
| Barabasi-Albert | Each node connected to $m = 2$ previous nodes. |
| Watts-Strogatz | Each node connected to $k = 4$ neighbors initially, with probability $p = 0.3$ to swap an edge with a random edge. |
| Stochastic Block Model | Number of blocks $k = 4$, within-block edge probability $p = 0.3$, cross-block edge probability $q = 0.01$. |
Table 2: Parameters for the random graph models used to generate synthetic graphs.

The number of nodes in a graph is sampled uniformly from the range [50, 200]. Note that all of the above random graph models generate undirected graphs. To convert the graphs into directed acyclic graphs, for each graph we sample a random ordering of nodes $\pi$, and then orient each edge $(i,j)$ such that $\pi(i) < \pi(j)$, i.e., node $i$ comes before node $j$ in the ordering $\pi$. After setting all the edge directions, we locate all the head nodes that have no predecessors and all the tail nodes that have no successors, then create a source node and a sink node, with edges from the source node to all the head nodes and from all the tail nodes to the sink node. Real TensorFlow graphs all contain one source and one sink node. Examples of synthetic graphs generated from the 4 random graph models are shown in Figure 7.

![](images/a393c2988e4a3c6ed03d935dcbbb696017f0a5e15008aa10ce2323de448b49c6.jpg)
Erdos-Renyi

![](images/5b84482995e808592bdb0c2b47b2ab410b7acb0ed9388d46d604b5372928896d.jpg)
Barabasi-Albert

![](images/a23e8235ef075310b0f0a4172bda95882bbfbf62ebf9ee7d6a323d22f487ccb7.jpg)
Watts-Strogatz

![](images/35e511ec83c0f5e9a7654fe318d187f9bd2f4cecc4ae2df44e7f895c89b7395a.jpg)
Stochastic Block Model
Fig. 7: Example synthetic graphs.

Each edge $(i,j)$ in the graph represents either a control dependency (op $j$ can run only when op $i$ is finished) or a data dependency (op $j$ can run only when some output tensor(s) produced by op $i$ are available). We assign a probability of 0.1, 0.8, and 0.1 for each op to produce 0, 1, or 2 output tensors. When op $i$ produces 0 output tensors, any other op $j$ that depends on it can do so only through a control dependency.
Otherwise, op $j$ has a probability of 0.2 of making the dependency a control dependency, and a probability of 0.8 of making it a data dependency, in which case an output tensor is picked according to a uniform distribution.

We fill in the memory cost for each tensor by sampling from a Gaussian distribution with mean 50 and standard deviation 10. The time cost for each op is computed as the sum of all the input and output tensor memory costs plus a random noise that is a fraction $r$ of the total memory cost, with $r$ sampled from a Gaussian with mean 0 and standard deviation 0.1. The source and sink nodes do not have any memory or time costs.

To make sure the generated synthetic graphs are interesting, we apply an additional filtering step by running BRKGA 1K and BRKGA 10K, and keeping only the graphs whose runtime improved by at least $18\%$. This gives us a dataset of graphs that on average improve runtime by $20\%$ from running BRKGA 1K to 10K.

The synthetic data in CostGraphDef format is available at https://github.com/deepmind/deepmind-research/tree/master/regal. The encoding is described in A.2.

# A.2 DATA ENCODING

We consider "control dependencies" as being tensors of size zero.

Each triple (op producer, tensor, op consumer) in the CostGraphDef is encoded as a separate edge. Each of these edges $e$ is associated with three features denoted by $x_{e}$: the size of the tensor associated with the edge, a one-hot feature indicating whether the edge is a control edge, and the normalized index of the hyperedge to which the edge belongs.

This means that the graph neural network's input is a directed graph with multiple edges. (There are alternative encodings, such as a bipartite graph where both TF ops and TF tensors are nodes, and edges exist only between ops and tensors when the op consumes or produces the tensor.)

Each node $v$ in the computation graph is associated with node features $x_{v}$.
There are 11 node features in total, which can be grouped into three categories: memory-based, runtime-based (not used in the peak memory task) and BRKGA-based (task-dependent).

As memory-based node features, we use the sum of input tensor sizes, the sum of output tensor sizes, the extra internal memory of the TensorFlow op, and a one-hot indicator of whether the op is the one that uses the greatest memory in the graph.

As runtime-based node features, we use the sum of direct predecessor nodes' running times, the sum of direct successor nodes' running times, the running time cost of the op itself, and a one-hot indicator of whether the op is the one with the greatest runtime cost in the graph.

As BRKGA-based node features, we have a node aggregation (the expectation of the placement per device and the schedule order for each node) of the chromosomes found by BRKGA (minimizing peak memory for the peak memory dataset and minimizing runtime for the runtime dataset) running for 400 evaluations with uniform random distributions. To make comparisons fair, REGAL with $K$ fitness evaluations means 400 evaluations to compute features, and $K - 400$ fitness evaluations for BRKGA using the instance-specific distributions.

For each graph, all node and edge features relating to memory size are normalized by the greatest memory size in that graph, and all features relating to runtime are normalized by the greatest op runtime cost.

To break symmetry and remove a single degree of freedom without loss of performance, we fix the placement of the node with the highest memory (for the memory task) or runtime (for the runtime task) to the first device.

![](images/2fc20c88c76a97698db17f2daff737d14bc38abf2521edd201edae474a1d29b2.jpg)
Fig. 8: Average reward achieved on a minibatch of training graphs during training for runtime minimization (left) and peak memory minimization (right).

![](images/bd7bb480f12bb453dbb94189786a30848765c06e0ba95aa3721b4753783f904b.jpg)

# A.3 TRAINING SET RESULTS

Figure 8 shows the reward curves on the training set for runtime minimization (left) and peak memory minimization (right). Each point in the curves is the average reward achieved on a minibatch of training graphs at the corresponding training step. The graph neural network policy is trained using TensorFlow on 10 cores of a CPU machine (no GPUs were used), with multi-threading used by BRKGA for evaluating chromosomes and by TensorFlow. A training run takes approximately 2-3 days to complete. The final average percent improvement over BRKGA 5K on the training set is similar to that of the test set for both tasks: $7.25\%$ on the training set vs. $7.09\%$ on the test set for runtime minimization, and $4.36\%$ on the training set vs. $3.56\%$ on the test set for peak memory minimization. The small gap between train and test results shows that the policy is able to generalize successfully to unseen graphs at test time.

# A.4 PERFORMANCE MODEL

The scheduling problem is specified by a set of devices $D$ and a computation graph. The computation graph has the list of ops $j \in N$ and tensors $\tau \in T$, the tensors produced by each op $I(j) \subseteq T$, the tensors consumed by each op $C(j) \subseteq T$, the memory used by each tensor $m_{\tau}$, and the execution time of each op $r_j$. A tensor is produced by exactly one op but can be consumed by many.

Solutions to the scheduling problem are constructed as follows. A placement is an assignment $pl: N \to D$ from ops to devices. Given a placement we define $\tilde{N}_{pl}$ as the set of ops $N$ extended with synchronous inter-device transfer operations.
A transfer operation consumes a tensor on the device where it was created and produces it on a device where it is needed. + +Given a placement $pl$ , a schedule is a total ordering $s: \tilde{N} \to \{1, 2, \dots, |\tilde{N}|\}$ on ops in $\tilde{N}_{pl}$ . We say that op $j$ runs at simulation time step $s(j)$ . We model the schedule execution as follows. At each simulation time step $t \in \{1, 2, \dots, |\tilde{N}|\}$ , each device $d$ has a list $l_{d,t}$ of tensors currently in memory. A tensor is added to the list when produced by an op that runs on the device or by a transfer op that receives the tensor on the device. A tensor is removed immediately after all of its consumers on the device have run. A schedule is valid if for each op $j$ , all the input tensors are available on the corresponding device at simulation time step $s(j)$ . See Section A.9 for an example schedule. + +The memory used on a device at simulation time step $t$ is the sum of the memory used by each tensor that is in memory, i.e., $\sum_{\tau \in l_{d,t}} m_{\tau}$ . The peak memory of a schedule is the maximum value of the memory used at any time and on any device. + +The runtime of a schedule is computed by stepping through the simulation time steps in order and accounting for the execution time of each op on each device. Synchronous transfers block until both the sender and the receiver have completed their preceding tasks, and their execution time depends on a known bandwidth between devices. + +# A.5 BRKGA CHROMOSOME ENCODING + +Let the input graph $G$ contain $o$ ops and $t$ tensors which must be placed over $d$ devices. Then, the BRKGA chromosome for this graph is a vector $c \in [0,1]^{o \times d + o + t \times d}$ composed of the following parts + +![](images/9730db12032c8e06d391ebc9d5c8b8496186ce3c943d9528b6cdc1feb166096b.jpg) +Fig. 9: A computation graph and an example of a chromosome encoding. + +1. 
The first $o \times d$ entries in $c$ represent the node-device affinities, one value for each (node, device). Each node is assigned to the device for which it has the highest value in the chromosome. +2. The next $o$ entries represent the node scheduling priorities. A valid schedule of the computation graph is obtained by performing a topological sort over the nodes of the graph, breaking ties using the node scheduling priorities. Nodes with higher priority are scheduled first. +3. The final $t \times d$ entries represent the tensor transfer priorities, one entry for each (tensor, device) pair. These priorities determine the order in which tensors are transferred across devices. + +An example of a chromosome encoding is shown in Figure 9 for a graph with $o = 3$ nodes, $t = 3$ tensors and $d = 2$ devices. As per the example, nodes 1 and 3 are placed on device 1 while node 2 is placed on device 2. The scheduling order over the nodes is 1, 2, 3. Since nodes 1 and 2 are placed on different devices, tensors A and C must be transferred from device 1, where they are produced, to their consumer, node 2, which is on device 2. As per the tensor transfer priorities, tensor C is transferred before tensor A since tensor C has a higher priority to get transferred to device 2. + +# A.6 ACTION DEQUANTIZATION + +Each real number in a BRKGA chromosome is sampled from its own Beta distribution, which is parameterized by two real numbers $\alpha$ and $\beta$ . To be more precise, if we denote the chromosome by $c \in \mathbb{R}^L$ , then $c_i \sim \mathcal{D}(\alpha_i, \beta_i) \forall 1 \leq i \leq L$ where $\mathcal{D}(\alpha_i, \beta_i)$ is a Beta distribution with parameters $\alpha_i$ and $\beta_i$ . To be able to run BRKGA, REGAL must propose the values for $\alpha_i$ and $\beta_i$ for each $i$ . + +As described in A.5, the BRKGA chromosome consists of three parts. 
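The placement and scheduling parts of this decoding can be sketched as follows. This is an illustrative reading of A.5 rather than the actual implementation; the function and variable names are ours, and `preds` is assumed to give each node's predecessor list:

```python
import heapq

def decode_chromosome(c, num_ops, num_devices, preds):
    """Decode the first two parts of a BRKGA chromosome (A.5 sketch):
    node-device affinities -> placement via a per-node argmax, then the
    node priorities break ties in a topological sort (higher priority
    is scheduled first)."""
    o, d = num_ops, num_devices
    # Part 1: each node goes to the device with its highest affinity value.
    placement = [max(range(d), key=lambda dev: c[i * d + dev]) for i in range(o)]
    # Part 2: scheduling priorities.
    prio = c[o * d:o * d + o]
    indeg = [len(preds[i]) for i in range(o)]
    succs = [[] for _ in range(o)]
    for j in range(o):
        for i in preds[j]:
            succs[i].append(j)
    # Kahn's algorithm; a max-heap on priority picks among ready nodes.
    ready = [(-prio[i], i) for i in range(o) if indeg[i] == 0]
    heapq.heapify(ready)
    schedule = []
    while ready:
        _, i = heapq.heappop(ready)
        schedule.append(i)
        for j in succs[i]:
            indeg[j] -= 1
            if indeg[j] == 0:
                heapq.heappush(ready, (-prio[j], j))
    return placement, schedule
```

The tensor-transfer part of the chromosome would be decoded analogously, ordering pending transfers by their priorities.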
In REGAL, we optimize the RL agents over choices of $\alpha_{i}$ and $\beta_{i}$ for the first two parts of the chromosome, i.e., the parts of the chromosome corresponding to the node placement and scheduling decisions. The Beta distribution parameters for the tensor transfer priorities are fixed to $\alpha_{i} = \beta_{i} = 1$, which corresponds to the uniform distribution. Thus, for a graph with $o$ ops and $d$ devices, the RL agent must propose $(d + 1)\times 2$ values for each of the $o$ ops in the graph.

To make the learning task easier, rather than directly predicting values of $\alpha$ and $\beta$, we quantize the output space of the RL agent's actions such that each action uniquely maps to a Beta distribution. This mapping is done as follows:

- For each node in the graph $1 \leq i \leq o$, and for each entry $1 \leq j \leq (d + 1)$ corresponding to node $i$ in the BRKGA chromosome, the agent performs the set of actions

$$
\boldsymbol{a}_{i} = \left\{m_{i,1}, v_{i,1}, m_{i,2}, v_{i,2}, \dots, m_{i,j}, v_{i,j}, \dots, m_{i,d+1}, v_{i,d+1} \right\}.
$$

- $m_{ij}, v_{ij} \in \{0,1,\dots,k - 1\}$, where $k$ is some fixed constant greater than 1. These represent the quantized mean $\mu_{ij}$ and variance $\sigma_{ij}^2$ of the Beta distribution, which are given by:

$$
\mu_{ij} = \frac{m_{ij} + 1}{k + 1}, \quad \sigma_{ij}^{2} = \mu_{ij} (1 - \mu_{ij}) \frac{v_{ij} + 1}{k + 1}
$$

- $\mu_{ij}$ and $\sigma_{ij}^2$ can be mapped to the $\alpha_{ij}$ and $\beta_{ij}$ of a Beta distribution as follows:

$$
\beta_{ij} = \mu_{ij} \frac{(1 - \mu_{ij})^{2}}{\sigma_{ij}^{2}} - 1 + \mu_{ij}, \quad \alpha_{ij} = \beta_{ij} \frac{\mu_{ij}}{1 - \mu_{ij}}
$$

- The values $m_{ij}$ and $v_{ij}$ are sampled from a Categorical distribution whose logits are determined by $\mathrm{MLP}_a(\boldsymbol{h}_i)[k*(j-1):k*j]$.
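The mapping above from quantized actions $(m, v)$ to Beta parameters can be written out directly. The following is a small sketch (the function name is ours), which is consistent with the Beta identities $\mu = \alpha/(\alpha+\beta)$ and $\sigma^2 = \mu(1-\mu)/(\alpha+\beta+1)$:

```python
def dequantize_beta(m, v, k):
    """Map quantized indices m, v in {0, ..., k-1} to the (alpha, beta)
    parameters of a Beta distribution, following the formulas above."""
    mu = (m + 1) / (k + 1)                   # quantized mean
    var = mu * (1 - mu) * (v + 1) / (k + 1)  # quantized variance
    beta = mu * (1 - mu) ** 2 / var - 1 + mu
    alpha = beta * mu / (1 - mu)
    return alpha, beta
```

Note that the variance is scaled by $(v+1)/(k+1) < 1$, which keeps it strictly below the upper bound $\mu(1-\mu)$ for any distribution on $[0, 1]$, so the resulting $\alpha$ and $\beta$ are always positive.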
We use a similar quantization strategy for the BRKGA crossover probabilities. For every crossover probability, we sample an integer $c \in \{0, 1, \dots, k - 1\}$ from a Categorical distribution for some fixed integer constant $k$; the dequantized crossover probability is given by $0.5 \left(1 + \frac{c + 1}{k}\right)$.

# A.7 EXTRA MODEL DETAILS

MLPs Multi-layer perceptrons, or multi-layer fully connected neural networks, are models that map input vectors to output vectors through layers of linear transformations and nonlinear activation functions:

$$
\boldsymbol{h} = \operatorname{MLP}(\boldsymbol{x}) = \boldsymbol{W}_{l} \sigma_{l-1} \left(\dots \sigma_{2} \left(\boldsymbol{W}_{2} \sigma_{1} \left(\boldsymbol{W}_{1} \boldsymbol{x} + \boldsymbol{b}_{1}\right) + \boldsymbol{b}_{2}\right) \dots \right) + \boldsymbol{b}_{l}, \tag{3}
$$

where $\boldsymbol{x}$ is an input vector, $(\boldsymbol{W}_i, \boldsymbol{b}_i)$ are the parameters for the $i$th layer, and $\boldsymbol{h}$ is the output vector. $\sigma$ is a nonlinear scalar function applied element-wise to the input vectors. Typical choices include the logistic sigmoid function $\sigma(x) = \frac{1}{1 + e^{-x}}$, the tanh function $\sigma(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}$, and the ReLU function $\sigma(x) = \max\{0, x\}$.

RNNs and LSTMs Recurrent neural networks (RNNs) are standard models for sequential data. A typical RNN maintains a recurrent memory $c_{t}$ that is updated recursively from the input at each step $t$:

$$
\boldsymbol{c}_{t} = \operatorname{RNNCell}\left(\boldsymbol{c}_{t-1}, \boldsymbol{x}_{t}\right), \tag{4}
$$

where $\boldsymbol{x}_t$ is the input at step $t$.
The simplest RNN cell has the following form:

$$
\boldsymbol{c}_{t} = \sigma(\boldsymbol{W} [\boldsymbol{c}_{t-1}, \boldsymbol{x}_{t}] + \boldsymbol{b}), \tag{5}
$$

where $\boldsymbol{W}, \boldsymbol{b}$ are the parameters and $\sigma$ is a nonlinearity.

Long short-term memory (LSTM) models are a type of RNN that uses explicit gating to control access to the memory. LSTMs distinguish the memory $c_{t}$ and the output of the LSTM $h_{t}$ as two sets of vectors, and compute the update at step $t$ as

$$
\boldsymbol{i} = \operatorname{sigmoid}\left(\boldsymbol{W}_{i} \left[ \boldsymbol{h}_{t-1}, \boldsymbol{x}_{t} \right] + \boldsymbol{b}_{i}\right) \tag{6}
$$

$$
\boldsymbol{f} = \operatorname{sigmoid}\left(\boldsymbol{W}_{f} \left[ \boldsymbol{h}_{t-1}, \boldsymbol{x}_{t} \right] + \boldsymbol{b}_{f}\right) \tag{7}
$$

$$
\boldsymbol{o} = \operatorname{sigmoid}\left(\boldsymbol{W}_{o} \left[ \boldsymbol{h}_{t-1}, \boldsymbol{x}_{t} \right] + \boldsymbol{b}_{o}\right) \tag{8}
$$

$$
\boldsymbol{g} = \tanh \left(\boldsymbol{W}_{g} \left[ \boldsymbol{h}_{t-1}, \boldsymbol{x}_{t} \right] + \boldsymbol{b}_{g}\right) \tag{9}
$$

$$
\boldsymbol{c}_{t} = \boldsymbol{f} \odot \boldsymbol{c}_{t-1} + \boldsymbol{i} \odot \boldsymbol{g} \tag{10}
$$

$$
\boldsymbol{h}_{t} = \boldsymbol{o} \odot \tanh(\boldsymbol{c}_{t}). \tag{11}
$$

Here $i, f, o$ are the input, forget and output gates and $\odot$ is element-wise multiplication. The carefully designed memory access control through gating makes LSTMs better at modeling long-term dependencies.

# A.8 AUTOREGRESSIVE PREDICTION MODELS

We can use an autoregressive model to capture structure in the outputs. Given the node representations $\boldsymbol{h}_v$ for each of the nodes from the GNN, we can utilize an ordering of the nodes, e.g.
from a topological sort, treat the node representations as a sequence, and then use an LSTM (Hochreiter & Schmidhuber, 1997) to predict the outputs $\boldsymbol{y}_v$ sequentially.

We tried this approach but found that using an LSTM on top of the $h_v$'s to predict the $y_v$'s did not perform as well as the conditionally independent model. Possible reasons are: (1) the autoregressive approach relies on a sequential ordering of the nodes, and this ordering might be neither reliable nor consistent across graphs; (2) the number of nodes in the computation graphs can be large, and learning recurrent models on long sequences is known to be challenging (Pascanu et al., 2013); (3) the noisy training signal in our REINFORCE-based training setup makes this model even more difficult to train.

# A.9 PLACEMENT AND SCHEDULING EXAMPLE

Figure 10 illustrates a computation graph, a valid schedule, and how we account for which tensors are in memory at a given time under the model presented in Sec. 3.1.

![](images/175aea61249f9f6018d4bf856afdb10f781e0014fa4740d5f4aaf0e631fadd63.jpg)
Fig. 10: An example computation graph and execution schedule across two devices. Op 3 is assigned to device 2 while all others are assigned to device 1.
| Op | In mem. (dev 1) | In mem. (dev 2) |
| --- | --- | --- |
| Run 1 | A, B | |
| Transfer B | A, B | B |
| Run 2 | A, C | B |
| Run 3 | C | B, D |
| Run 4 | C, E | D |
| Transfer D | E, D | D |
| Run 5 | D, E | |
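The bookkeeping in the table above can be reproduced with a short simulation of the memory model from A.4. This is an illustrative sketch (the data structures and names are ours, and inter-device transfers would be modeled as extra ops that consume on one device and produce on another), not the actual implementation:

```python
def peak_memory(schedule, placement, produces, consumes, sizes):
    """Step through ops in schedule order; a tensor occupies memory on a
    device from when it is produced until all of its consumers on that
    device have run. Returns the peak over all devices and time steps."""
    remaining = {}  # (tensor, device) -> number of consumers yet to run
    for op in schedule:
        for t in consumes[op]:
            key = (t, placement[op])
            remaining[key] = remaining.get(key, 0) + 1
    in_mem = {d: {} for d in set(placement.values())}
    peak = 0
    for op in schedule:
        d = placement[op]
        for t in produces[op]:
            in_mem[d][t] = sizes[t]
        # Inputs and outputs are resident simultaneously while op runs.
        peak = max(peak, max(sum(m.values()) for m in in_mem.values()))
        for t in consumes[op]:
            remaining[(t, d)] -= 1
            if remaining[(t, d)] == 0:  # last consumer on this device
                in_mem[d].pop(t, None)
    return peak
```

For example, a three-op chain on one device where op 3 holds A, B and its own output C in memory at once yields a peak equal to the sum of those three tensor sizes.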
# A.10 BASELINES

Here we provide supplementary information about our baselines.

- CP SAT: The multi-device peak memory minimization problem is formulated for the CP solver using a model of the operation execution order, tensor lifetimes, and cumulative constraints to evaluate peak memory usage. The solver is guaranteed to find the globally optimal solution given sufficient time.
- Graph partition + DFS: The graph partitioning objective is to minimize data transferred across devices. We use the modified implementation of the Kernighan-Lin algorithm (Kernighan & Lin, 1970) used by XLA for device placement in some settings. This implementation is generally slower than heuristics implemented in popular libraries like METIS (Karypis & Kumar, 1998), although it tends to find better quality solutions.
- Local Search: The initial schedule is a topological sort order of the ops, and the initial placement selects devices uniformly at random. A local move either changes the device assignment of an op, or changes the op's position in the current schedule. The hyperparameters (e.g., number of random restarts) are set to the values that perform best on a sample of 10,000 graphs in the training set, as found by grid search.
- Tuned BRKGA: The following hyperparameters of BRKGA are tuned using grid search: the beta distribution parameters (two scalars), and the number of chromosomes, elites, mutants, and populations. The grid search tries 648 hyperparameter settings and picks the best one as evaluated on 10,000 training set graphs.
- REGAL: The performance of REGAL is stochastic, both because the actions are stochastic and because BRKGA itself depends on random samples for mutation and crossover. We estimated the standard deviation of the percent improvement statistics with respect to these sources of randomness to be below $0.1\%$, which is small compared to the differences we observe. Hence we have omitted the error bars from Figure 3.
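As a sketch of how such a local search could look, the following greedy hill-climb iterates over single moves, where `cost` is assumed to evaluate a (placement, schedule) pair. This is hypothetical helper code, not the baseline's implementation: the actual move set, acceptance rule, and random restarts may differ, and precedence-validity checking of schedule swaps is omitted for brevity:

```python
import random

def local_search(cost, num_ops, num_devices, num_steps=1000, seed=0):
    """Greedy local search: start from a random placement and the given
    op order; propose one move at a time (reassign an op's device, or
    swap two adjacent ops) and keep it if the cost does not worsen."""
    rng = random.Random(seed)
    placement = [rng.randrange(num_devices) for _ in range(num_ops)]
    schedule = list(range(num_ops))  # assumed to be a topological order
    best = cost(placement, schedule)
    for _ in range(num_steps):
        if rng.random() < 0.5:
            i = rng.randrange(num_ops)
            old = placement[i]
            placement[i] = rng.randrange(num_devices)
            new = cost(placement, schedule)
            if new > best:
                placement[i] = old  # undo worsening move
            else:
                best = new
        else:
            i = rng.randrange(num_ops - 1)
            schedule[i], schedule[i + 1] = schedule[i + 1], schedule[i]
            new = cost(placement, schedule)
            if new > best:
                schedule[i], schedule[i + 1] = schedule[i + 1], schedule[i]
            else:
                best = new
    return placement, schedule, best
```

Accepting equal-cost moves lets the search drift across plateaus; random restarts would wrap this loop and keep the best result.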
# A.11 RUNNING TIME COMPARISON FOR ALGORITHMS

Table 3 shows the average running times of the various algorithms on the TensorFlow test set and the XLA dataset, as measured on an Intel Xeon E5-1650 3.60GHz machine. The times are averaged over the unaugmented graphs in the test set. REGAL provides both fast running time and high solution quality. For a slightly higher running time than BRKGA 5K, REGAL improves the solution quality significantly. Almost all of the added running time is due to the extra cost of sampling beta distributions by REGAL compared to uniform distributions by BRKGA. This can be seen from the nearly identical running times of REGAL and Tuned BRKGA, which also uses beta distributions,

Table 3: Average running times for all methods (time cost of running the algorithms, not to be confused with solution quality).
| Algorithm | TF Peak Memory test |
| --- | --- |
| CP SAT | ~2 hours |
| GP + DFS | 144 sec |
| Local Search | 122 sec |
| BRKGA 5K | 0.89 sec |
| Tuned BRKGA | 1.04 sec |
| GAS | 1.04 sec |
| REGAL | 1.04 sec |
+ +Table 4: Performance of REGAL on peak memory with various subsets of actions. + +
| Placement | Scheduling | Valid | Test |
| --- | --- | --- | --- |
| Yes | No | -0.4% | -0.2% |
| No | Yes | 4.4% | 3.65% |
| Yes | Yes | 4.67% | 3.56% |
but without the neural network policy. The local search heuristic runs slowly because it was not implemented efficiently, e.g., with incremental evaluations of the objective; we show its timing results for completeness only.

# A.12 ABLATION ANALYSIS OF AGENT ACTION TYPES

REGAL can train a policy to generate any subset of the following actions for BRKGA: 1) actions for node placement priorities, and 2) actions for node scheduling priorities. We train REGAL with various subsets of these actions and compare their performance against each other in Table 4.

We observe that on the validation set, REGAL performs best when it learns actions for both placement and scheduling, compared to just scheduling or placement alone.

# A.13 ANALYSIS OF THE STRUCTURE OF PLACEMENT AND SCHEDULING DECISIONS

Our best model trained on the TF peak memory dataset is capable of generating 16 different kinds of actions for node placement decisions and 4 different kinds of actions for node scheduling decisions. Each of these actions determines the shape of the Beta distributions from which we sample the node-device affinities and the node scheduling priorities. In this section we attempt to gain insights into the structure of these actions.

We divide our node placement decisions into three categories:

- Actions that give a node a higher probability to be placed on device 1
- Actions that give a node a higher probability to be placed on device 2
- Actions that give equal preference to the two devices.

Similarly, we divide our node scheduling decisions into two categories:

- Actions that give nodes a "high" scheduling priority.
- Actions that give nodes a "low" scheduling priority.

Finally, we aggregate the average relative memory consumption of all nodes that were assigned the same set of actions, where the memory consumption of a node is defined as the sum of the memory uses of all its input and output tensors.
The relative memory usage is the memory usage normalized by the largest memory usage of a node in the graph.

![](images/789ea66b5084b26091c4bce8508fdd43cce3d7d103fa5679462734050a656de4.jpg)
Fig. 11: The agent picks a diverse set of actions. These plots show the frequency of actions chosen (in percent, left) and the average relative weights of the nodes that the actions are applied to (right).

We plot this data in Figure 11. On the right, each cell represents the average relative memory consumption of the nodes that were assigned a particular placement and scheduling decision (darker cells indicate nodes with higher average relative memory consumption). On the left, each cell represents the frequency of the nodes that were assigned a particular placement and scheduling decision (darker cells indicate higher frequency). From these we can make the following observations:

- On average, nodes with higher normalized memory consumption are assigned lower scheduling priorities.
- Most of the nodes with the highest relative memory consumption have no affinity for either of the two devices.
- For nodes with the lowest relative memory consumption, most have an affinity to be placed on device 2, while a smaller but still significant number prefer device 1.

This implies that the node placement strategy is more complicated than a trivial "place lighter nodes on device 2" strategy, and that REGAL's actions depend non-trivially on the input.

# A.14 HYPERPARAMETERS OF THE BEST AGENT FOR PEAK MEMORY TF

The graph neural network had a state size of 32 for each node and edge, 16 propagations, and all networks $\mathrm{MLP}_n$, $\mathrm{MLP}_e$, $\mathrm{MLP}_{\mathrm{node}}$, $\mathrm{MLP}_{\mathrm{msg}}$, $\mathrm{MLP}_{\mathrm{msg}}'$ being two layers of size 32; the aggregation used was mean pooling.
For faster training, the reward on the training set was computed with 1000 fitness evaluations for REGAL and BRKGA (4600 for REGAL and 5000 for BRKGA for the validation and test sets). Training lasted 100,000 gradient steps, with each step using a mini-batch of size 4 and gradient clipping by $L_{2}$ norm with value 10. The baseline mean squared error term's contribution to the overall loss was weighted by 0.0001. The optimizer was Adam with $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$, and learning rate 0.0001. The number of devices (for the memory model) was 2.

# A.15 HYPERPARAMETERS OF THE BEST AGENT FOR RUNTIME TF

The graph neural network had a state size of 32 for each node and edge, 16 residual graph propagations, and all networks $\mathrm{MLP}_n$, $\mathrm{MLP}_e$, $\mathrm{MLP}_{\mathrm{node}}$, $\mathrm{MLP}_{\mathrm{msg}}$, $\mathrm{MLP}_{\mathrm{msg}}'$ being two layers of size 32; the aggregation used was sum. Training lasted 100,000 gradient steps, with each step using a mini-batch of size 4 and gradient clipping by $L_{2}$ norm with value 10. The baseline mean squared error term's contribution to the overall loss was weighted by 0.0001. The optimizer was Adam with $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$, and learning rate 0.0001. We used $k = 16$ for scheduling and $k = 2$ for placement ($k$ being the quantization level defined in A.6).

![](images/c2522d6f7a620e20d84b22765d17a21aa3dcb56e5520e4a285a5f8abef981daa.jpg)
Fig. 12: A box plot of rewards on the TF Runtime test set by unique graph topology. There are 100 graphs for each topology, 99 of which are generated by data augmentation. A reward greater than -1 implies that REGAL finds a better solution than BRKGA. Box plots visualize the 25th, 50th, and 75th percentiles and the range of a set of points.

# A.16 HYPERPARAMETERS OF THE BEST AGENT FOR RUNTIME SYNTHETIC

Same as A.15 but with 2 graph propagations, GRU node updates, and mean aggregation.
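One round of the message passing used by these graph neural networks can be sketched as follows. This is a minimal, framework-free sketch with names of our own choosing; in the actual model the message and update functions would be the learned $\mathrm{MLP}_{\mathrm{msg}}$ and the GRU update, and the aggregation shown here is the mean:

```python
def propagate(h, edges, msg_fn, update_fn):
    """One graph propagation step: every edge (u, v) sends msg_fn(h[u])
    to v; each node is then updated from the mean of its incoming
    messages. `h` maps node -> feature vector (a list of floats)."""
    incoming = {v: [] for v in h}
    for u, v in edges:
        incoming[v].append(msg_fn(h[u]))
    new_h = {}
    for v, msgs in incoming.items():
        if msgs:
            mean = [sum(vals) / len(msgs) for vals in zip(*msgs)]
        else:
            mean = [0.0] * len(h[v])  # node with no incoming edges
        new_h[v] = update_fn(h[v], mean)
    return new_h
```

Stacking this step 2 or 16 times (the propagation counts above) lets information flow across correspondingly longer paths in the computation graph.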
# A.17 PERFORMANCE BY GRAPH TOPOLOGY

Figure 12 shows the distribution of rewards on the TF Runtime test set broken down by graph topology. As described in Section 5.1, for each unique topology we augment the dataset by perturbing the tensor sizes and op running times. This generates a distribution of 100 rewards per topology. The variance for a fixed topology is typically relatively small.
# REINFORCEMENT LEARNING BASED GRAPH-TO-SEQUENCE MODEL FOR NATURAL QUESTION GENERATION

Yu Chen

Department of Computer Science
Rensselaer Polytechnic Institute
cheny39@rpi.edu

Lingfei Wu*

IBM Research
lwu@email.wm.edu

Mohammed J. Zaki

Department of Computer Science
Rensselaer Polytechnic Institute
zaki@cs.rpi.edu

# ABSTRACT

Natural question generation (QG) aims to generate questions from a passage and an answer. Previous works on QG either (i) ignore the rich structure information hidden in text, (ii) solely rely on cross-entropy loss that leads to issues like exposure bias and inconsistency between train/test measurement, or (iii) fail to fully exploit the answer information. To address these limitations, in this paper, we propose a reinforcement learning (RL) based graph-to-sequence (Graph2Seq) model for QG. Our model consists of a Graph2Seq generator with a novel Bidirectional Gated Graph Neural Network based encoder to embed the passage, and a hybrid evaluator with a mixed objective combining both cross-entropy and RL losses to ensure the generation of syntactically and semantically valid text. We also introduce an effective Deep Alignment Network for incorporating the answer information into the passage at both the word and contextual levels. Our model is end-to-end trainable and achieves new state-of-the-art scores, outperforming existing methods by a significant margin on the standard SQuAD benchmark.
+ +# 1 INTRODUCTION + +Natural question generation (QG) has many useful applications such as improving the question answering task (Chen et al., 2017; 2019a) by providing more training data (Tang et al., 2017; Yuan et al., 2017), generating practice exercises and assessments for educational purposes (Heilman & Smith, 2010; Danon & Last, 2017), and helping dialog systems to kick-start and continue a conversation with human users (Mostafazadeh et al., 2016). While many existing works focus on QG from images (Fan et al., 2018; Li et al., 2018) or knowledge bases (Serban et al., 2016; Elsahar et al., 2018), in this work, we focus on QG from text. + +Conventional methods (Mostow & Chen, 2009; Heilman & Smith, 2010; Heilman, 2011) for QG rely on heuristic rules or hand-crafted templates, leading to the issues of low generalizability and scalability. Recent attempts have been focused on exploiting Neural Network (NN) based approaches that do not require manually-designed rules and are end-to-end trainable. Encouraged by the huge success of neural machine translation, these approaches formulate the QG task as a sequence-to-sequence (Seq2Seq) learning problem. Specifically, attention-based Seq2Seq models (Bahdanau et al., 2014; Luong et al., 2015) and their enhanced versions with copy (Vinyals et al., 2015; Gu et al., 2016) and coverage (Tu et al., 2016) mechanisms have been widely applied and show promising results on this task (Du et al., 2017; Zhou et al., 2017; Song et al., 2018a; Kumar et al., 2018a). However, these methods typically ignore the hidden structural information associated with a word + +sequence such as the syntactic parsing tree. Failing to utilize the rich text structure information beyond the simple word sequence may limit the effectiveness of these models for QG. 
It has been observed that in general, cross-entropy based sequence training has several limitations like exposure bias and inconsistency between train/test measurement (Ranzato et al., 2015; Wu et al., 2016). As a result, models trained this way do not always produce the best results on discrete evaluation metrics on sequence generation tasks such as text summarization (Paulus et al., 2017) or question generation (Song et al., 2017). To cope with these issues, some recent QG approaches (Song et al., 2017; Kumar et al., 2018b) directly optimize evaluation metrics using Reinforcement Learning (RL) (Williams, 1992). However, existing approaches usually only employ evaluation metrics like BLEU and ROUGE-L as rewards for RL training. More importantly, they fail to exploit other important metrics such as syntactic and semantic constraints for guiding high-quality text generation.

Early works on neural QG did not take into account the answer information when generating a question. Recent works have started to explore various means of utilizing the answer information. When question generation is guided by the semantics of an answer, the resulting questions become more relevant and readable. Conceptually, there are three different ways to incorporate the answer information: simply marking the answer location in the passage (Zhou et al., 2017; Zhao et al., 2018; Liu et al., 2019), using complex passage-answer matching strategies (Song et al., 2017), or separating answers from passages when applying a Seq2Seq model (Kim et al., 2018; Sun et al., 2018). However, they neglect potential semantic relations between passage words and answer words, and thus fail to explicitly model the global interactions among them in the embedding space.
To address the aforementioned issues, in this paper, we present a novel reinforcement learning based generator-evaluator architecture that aims to: i) make full use of rich hidden structure information beyond the simple word sequence; ii) generate syntactically and semantically valid text while maintaining the consistency of train/test measurement; iii) model explicitly the global interactions of semantic relationships between passage and answer at both word-level and contextual-level.

In particular, to achieve the first goal, we explore two different means to either construct a syntax-based static graph or a semantics-aware dynamic graph from the text sequence, as well as its rich hidden structure information. Then, we design a graph-to-sequence (Graph2Seq) model based generator that encodes the graph representation of a text passage and decodes a question sequence using a Recurrent Neural Network (RNN). Our Graph2Seq model is based on a novel bidirectional gated graph neural network, which extends the gated graph neural network (Li et al., 2015) by considering both incoming and outgoing edges, and fusing them during the graph embedding learning.

To achieve the second goal, we design a hybrid evaluator which is trained by optimizing a mixed objective function that combines both cross-entropy and RL loss. We use not only discrete evaluation metrics like BLEU, but also semantic metrics like word mover's distance (Kusner et al., 2015) to encourage both syntactically and semantically valid text generation. To achieve the third goal, we propose a novel Deep Alignment Network (DAN) for effectively incorporating answer information into the passage at multiple granularity levels.

Our main contributions are as follows:

- We propose a novel RL-based Graph2Seq model for natural question generation. To the best of our knowledge, we are the first to introduce the Graph2Seq architecture for QG.
- We explore both static and dynamic ways of constructing graphs from text and are the first to systematically investigate their performance impacts on a GNN encoder.
- The proposed model is end-to-end trainable, achieves new state-of-the-art scores, and outperforms existing methods by a significant margin on the standard SQuAD benchmark for QG. Our human evaluation study also corroborates that the questions generated by our model are more natural (semantically and syntactically) compared to other baselines.

# 2 AN RL-BASED GENERATOR-EVALUATOR ARCHITECTURE

In this section, we define the question generation task, and then present our RL-based Graph2Seq model for question generation. We first motivate the design, and then present the details of each component as shown in Fig. 1.

![](images/899a4d805d3d035b836a8c66c82652c1d46495b5858c18f30de655cf4ee02fda.jpg)
Figure 1: Overall architecture of the proposed model. Best viewed in color.

# 2.1 PROBLEM FORMULATION

The goal of question generation is to generate natural language questions based on a given form of data, such as knowledge base triples or tables (Bao et al., 2018), sentences (Du et al., 2017; Song et al., 2018a), or images (Li et al., 2018), where the generated questions need to be answerable from the input data. In this paper, we focus on QG from a given text passage, along with a target answer.

We assume that a text passage is a collection of word tokens $X^{p} = \{x_{1}^{p}, x_{2}^{p}, \dots, x_{N}^{p}\}$ , and a target answer is also a collection of word tokens $X^{a} = \{x_{1}^{a}, x_{2}^{a}, \dots, x_{L}^{a}\}$ . The task of natural question generation is to generate the best natural language question consisting of a sequence of word tokens $\hat{Y} = \{y_{1}, y_{2}, \dots, y_{T}\}$ which maximizes the conditional likelihood $\hat{Y} = \arg \max_{Y} P(Y|X^{p}, X^{a})$ . Here $N$, $L$, and $T$ are the lengths of the passage, answer and question, respectively.
We focus on the problem setting where we have a set of passages (with answers) paired with target questions, from which the mapping is learned; existing QG approaches (Du et al., 2017; Song et al., 2018a; Zhao et al., 2018; Kim et al., 2018) make a similar assumption.

# 2.2 DEEP ALIGNMENT NETWORK

Answer information is crucial for generating relevant and high-quality questions from a passage. Unlike previous methods that neglect potential semantic relations between passage and answer words, we explicitly model the global interactions among them in the embedding space. To this end, we propose a novel Deep Alignment Network (DAN) component for effectively incorporating answer information into the passage at multiple granularity levels. Specifically, we perform attention-based soft-alignment at the word-level, as well as at the contextual-level, so that multiple levels of alignments can help learn hierarchical representations.

![](images/cee18573c508d590daa6b2053bacf274402f87f5f75ca6a1ebca9d2fe792d019.jpg)
Figure 2: The attention-based soft-alignment mechanism.

Let $\mathbf{X}^p\in \mathbb{R}^{F\times N}$ and $\widetilde{\mathbf{X}}^p\in \mathbb{R}^{\widetilde{F}_p\times N}$ denote two embeddings associated with passage text. Similarly, let $\mathbf{X}^a\in \mathbb{R}^{F\times L}$ and $\widetilde{\mathbf{X}}^a\in \mathbb{R}^{\widetilde{F}_a\times L}$ denote two embeddings associated with answer text. Conceptually, as shown in Fig.
2, the soft-alignment mechanism consists of three steps: i) compute the attention score $\beta_{i,j}$ for each pair of passage word $x_i^p$ and answer word $x_j^a$ ; ii) multiply the attention matrix $\beta$ with the answer embeddings $\widetilde{\mathbf{X}}^a$ to obtain the aligned answer embeddings $\mathbf{H}^p$ for the passage; iii) concatenate the resulting aligned answer embeddings $\mathbf{H}^p$ with the passage embeddings $\widetilde{\mathbf{X}}^p$ to get the final passage embeddings $\widetilde{\mathbf{H}}^p\in \mathbb{R}^{(\widetilde{F}_p + \widetilde{F}_a)\times N}$ .

Formally, we define our soft-alignment function as follows:

$$
\widetilde{\mathbf{H}}^{p} = \operatorname{Align}\left(\mathbf{X}^{p}, \mathbf{X}^{a}, \widetilde{\mathbf{X}}^{p}, \widetilde{\mathbf{X}}^{a}\right) = \operatorname{CAT}\left(\widetilde{\mathbf{X}}^{p}; \mathbf{H}^{p}\right) = \operatorname{CAT}\left(\widetilde{\mathbf{X}}^{p}; \widetilde{\mathbf{X}}^{a}\boldsymbol{\beta}^{T}\right) \tag{1}
$$

where the matrix $\widetilde{\mathbf{H}}^p$ is the final passage embedding, the function CAT is a simple concatenation operation, and $\boldsymbol{\beta}$ is an $N\times L$ attention score matrix, computed by

$$
\boldsymbol{\beta} \propto \exp\left(\operatorname{ReLU}\left(\mathbf{W}\mathbf{X}^{p}\right)^{T}\operatorname{ReLU}\left(\mathbf{W}\mathbf{X}^{a}\right)\right) \tag{2}
$$

where $\mathbf{W} \in \mathbb{R}^{d \times F}$ is a trainable weight matrix, with $d$ being the hidden state size, and ReLU is the rectified linear unit (Nair & Hinton, 2010). After introducing the general soft-alignment mechanism, we next introduce how we perform soft-alignment at both the word-level and the contextual-level.
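To make the alignment concrete, here is a minimal numpy sketch of Eqs. (1)-(2). This is our own illustration, not the released implementation: the names `align`, `relu`, and the toy shapes are invented, and the proportionality in Eq. (2) is realized as a row-wise softmax over answer words.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def align(Xp, Xa, Xp_t, Xa_t, W):
    """Soft-alignment of Eqs. (1)-(2): returns CAT(Xp_t; Xa_t @ beta.T).

    Xp (F, N), Xa (F, L): embeddings used to score passage/answer word pairs.
    Xp_t (Fp, N), Xa_t (Fa, L): embeddings that get aligned and concatenated.
    """
    scores = relu(W @ Xp).T @ relu(W @ Xa)       # (N, L) unnormalized scores (Eq. 2)
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    beta = np.exp(scores)
    beta /= beta.sum(axis=1, keepdims=True)      # normalize over answer words
    Hp = Xa_t @ beta.T                           # (Fa, N) aligned answer embeddings
    return np.concatenate([Xp_t, Hp], axis=0)    # CAT(Xp_t; Hp), shape (Fp + Fa, N)

# Toy example: N=5 passage words, L=3 answer words.
rng = np.random.default_rng(0)
F, Fp, Fa, d, N, L = 4, 6, 6, 8, 5, 3
out = align(rng.normal(size=(F, N)), rng.normal(size=(F, L)),
            rng.normal(size=(Fp, N)), rng.normal(size=(Fa, L)),
            rng.normal(size=(d, F)))
assert out.shape == (Fp + Fa, N)
```

Each passage word ends up with its original features plus an attention-weighted summary of the answer words, which is exactly what DAN concatenates at each granularity level.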
# 2.2.1 WORD-LEVEL ALIGNMENT

In the word-level alignment stage, we first perform a soft-alignment between the passage and the answer based only on their pretrained GloVe embeddings and compute the final passage embeddings by $\tilde{\mathbf{H}}^p = \mathrm{Align}(\mathbf{G}^p,\mathbf{G}^a,[\mathbf{G}^p;\mathbf{B}^p;\mathbf{L}^p],\mathbf{G}^a)$ , where $\mathbf{G}^p,\mathbf{B}^p$ , and $\mathbf{L}^p$ are the corresponding GloVe embedding (Pennington et al., 2014), BERT embedding (Devlin et al., 2018), and linguistic feature (i.e., case, NER and POS) embedding of the passage text, respectively. Then a bidirectional LSTM (Hochreiter & Schmidhuber, 1997) is applied to the final passage embeddings $\tilde{\mathbf{H}}^p = \{\tilde{\mathbf{h}}_i^p\}_{i=1}^N$ to obtain the contextualized passage embeddings $\bar{\mathbf{H}}^p \in \mathbb{R}^{\bar{F} \times N}$ .

On the other hand, for the answer text $\mathbf{X}^a$ , we simply concatenate its GloVe embedding $\mathbf{G}^a$ and its BERT embedding $\mathbf{B}^a$ to obtain its word embedding matrix $\mathbf{H}^a \in \mathbb{R}^{d' \times L}$ . Another BiLSTM is then applied to the concatenated answer embedding sequence to obtain the contextualized answer embeddings $\bar{\mathbf{H}}^a \in \mathbb{R}^{\bar{F} \times L}$ .

# 2.2.2 CONTEXTUAL-LEVEL ALIGNMENT

In the contextual-level alignment stage, we perform another soft-alignment based on the contextualized passage and answer embeddings. Similarly, we compute the aligned answer embedding, and concatenate it with the contextualized passage embedding to obtain the final passage embedding matrix Align([G $^p$ ; B $^p$ ; $\bar{\mathbf{H}}^p$ ], [G $^a$ ; B $^a$ ; $\bar{\mathbf{H}}^a$ ], $\bar{\mathbf{H}}^p$ , $\bar{\mathbf{H}}^a$ ). Finally, we apply another BiLSTM to the above concatenated embedding to get the final $F \times N$ passage embedding matrix $\mathbf{X}$ .
# 2.3 BIDIRECTIONAL GRAPH-TO-SEQUENCE GENERATOR

While RNNs are good at capturing local dependencies among consecutive words in text, GNNs have been shown to better utilize the rich hidden text structure information such as syntactic parsing (Xu et al., 2018b) or semantic parsing (Song et al., 2018b), and can model the global interactions (relations) among sequence words to further improve the representations. Therefore, unlike most of the existing methods that rely on RNNs to encode the input passage, we first construct a passage graph $\mathcal{G}$ from text where each passage word is treated as a graph node, and then employ a novel Graph2Seq model to encode the passage graph (and answer), and to decode the question sequence.

# 2.3.1 PASSAGE GRAPH CONSTRUCTION

Existing GNNs assume a graph structured input and directly consume it for computing the corresponding node embeddings. However, we need to construct a graph from the text. Although there are early attempts on constructing a graph from a sentence (Xu et al., 2018b), there is no clear answer as to the best way of representing text as a graph. We explore both static and dynamic graph construction approaches, and systematically investigate the performance differences between these two methods in the experimental section.

Syntax-based static graph construction: We construct a directed and unweighted passage graph based on dependency parsing. For each sentence in a passage, we first get its dependency parse tree. We then connect neighboring dependency parse trees by connecting those nodes that are at a sentence boundary and next to each other in text.

Semantics-aware dynamic graph construction: We dynamically build a directed and weighted graph to model semantic relationships among passage words. We make the process of building such a graph depend not only on the passage, but also on the answer.
The graph construction procedure consists of three steps: i) we compute a dense adjacency matrix $\mathbf{A}$ for the passage graph by applying self-attention to the word-level passage embeddings $\tilde{\mathbf{H}}^p$ ; ii) a kNN-style graph sparsification strategy (Chen et al., 2019c) is adopted to obtain a sparse adjacency matrix $\bar{\mathbf{A}}$ , where we only keep the $K$ nearest neighbors (including the node itself) as well as the associated attention scores (i.e., the remaining attention scores are masked off) for each node; and iii) inspired by BiLSTM over LSTM, we also compute two normalized adjacency matrices $\mathbf{A}^{\dashv}$ and $\mathbf{A}^{\vdash}$ according to the incoming and outgoing directions, by applying the softmax operation on the resulting sparse adjacency matrix $\bar{\mathbf{A}}$ and its transpose, respectively.

$$
\mathbf{A} = \operatorname{ReLU}\left(\mathbf{U}\tilde{\mathbf{H}}^{p}\right)^{T}\operatorname{ReLU}\left(\mathbf{U}\tilde{\mathbf{H}}^{p}\right), \quad \bar{\mathbf{A}} = \operatorname{kNN}(\mathbf{A}), \quad \mathbf{A}^{\dashv}, \mathbf{A}^{\vdash} = \operatorname{softmax}\left(\left\{\bar{\mathbf{A}}, \bar{\mathbf{A}}^{T}\right\}\right) \tag{3}
$$

where $\mathbf{U}$ is a $d\times (\widetilde{F}_p + \widetilde{F}_a)$ trainable weight matrix. Note that the supervision signal is able to back-propagate through the graph sparsification operation as the $K$ nearest attention scores are kept.

# 2.3.2 BIDIRECTIONAL GATED GRAPH NEURAL NETWORKS

To effectively learn the graph embeddings from the constructed text graph, we propose a novel Bidirectional Gated Graph Neural Network (BiGGNN) which extends Gated Graph Sequence Neural Networks (Li et al., 2015) by learning node embeddings from both incoming and outgoing edges in an interleaved fashion when processing the directed passage graph.
A similar idea has also been exploited in Xu et al. (2018a), which extended another popular variant of GNNs, GraphSAGE (Hamilton et al., 2017). However, one of the key differences between our BiGGNN and their bidirectional GraphSAGE is that we fuse the intermediate node embeddings from both incoming and outgoing directions in every iteration, whereas their model simply learns the node embeddings of each direction independently and concatenates them in the final step.

In BiGGNN, node embeddings are initialized to the passage embeddings $\mathbf{X}$ returned by DAN. The same set of network parameters is shared at every hop of computation. At each computation hop, for every node in the graph, we apply an aggregation function which takes as input a set of incoming (or outgoing) neighboring node vectors and outputs a backward (or forward) aggregation vector. For the syntax-based static graph, we use a mean aggregator for simplicity although other operators such as max or attention (Veličković et al., 2017) could also be employed,

$$
\mathbf{h}_{\mathcal{N}_{\dashv(v)}}^{k} = \operatorname{MEAN}\left(\left\{\mathbf{h}_{v}^{k-1}\right\} \cup \left\{\mathbf{h}_{u}^{k-1}, \forall u \in \mathcal{N}_{\dashv(v)}\right\}\right)
$$

$$
\mathbf{h}_{\mathcal{N}_{\vdash(v)}}^{k} = \operatorname{MEAN}\left(\left\{\mathbf{h}_{v}^{k-1}\right\} \cup \left\{\mathbf{h}_{u}^{k-1}, \forall u \in \mathcal{N}_{\vdash(v)}\right\}\right) \tag{4}
$$

For the semantics-aware dynamic graph, we compute a weighted average for aggregation where the weights come from the normalized adjacency matrices $\mathbf{A}^{\dashv}$ and $\mathbf{A}^{\vdash}$ , defined as,

$$
\mathbf{h}_{\mathcal{N}_{\dashv(v)}}^{k} = \sum_{\forall u \in \mathcal{N}_{\dashv(v)}} \mathbf{a}_{v,u}^{\dashv}\, \mathbf{h}_{u}^{k-1}, \quad \mathbf{h}_{\mathcal{N}_{\vdash(v)}}^{k} = \sum_{\forall u \in \mathcal{N}_{\vdash(v)}} \mathbf{a}_{v,u}^{\vdash}\, \mathbf{h}_{u}^{k-1} \tag{5}
$$

While Xu et al. (2018a) learn separate node embeddings for both directions independently, we opt to fuse information aggregated in the two directions at each hop, which we find works better in general.

$$
\mathbf{h}_{\mathcal{N}(v)}^{k} = \operatorname{Fuse}\left(\mathbf{h}_{\mathcal{N}_{\dashv(v)}}^{k}, \mathbf{h}_{\mathcal{N}_{\vdash(v)}}^{k}\right) \tag{6}
$$

We design the fusion function as a gated sum of the two information sources,

$$
\operatorname{Fuse}(\mathbf{a}, \mathbf{b}) = \mathbf{z} \odot \mathbf{a} + (1 - \mathbf{z}) \odot \mathbf{b}, \quad \mathbf{z} = \sigma\left(\mathbf{W}_{z}[\mathbf{a}; \mathbf{b}; \mathbf{a} \odot \mathbf{b}; \mathbf{a} - \mathbf{b}] + \mathbf{b}_{z}\right) \tag{7}
$$

where $\odot$ is the component-wise multiplication, $\sigma$ is a sigmoid function, and $\mathbf{z}$ is a gating vector.

Finally, a Gated Recurrent Unit (GRU) (Cho et al., 2014) is used to update the node embeddings by incorporating the aggregation information.

$$
\mathbf{h}_{v}^{k} = \operatorname{GRU}\left(\mathbf{h}_{v}^{k-1}, \mathbf{h}_{\mathcal{N}(v)}^{k}\right) \tag{8}
$$

After $n$ hops of GNN computation, where $n$ is a hyperparameter, we obtain the final state embedding $\mathbf{h}_v^n$ for node $v$ . To compute the graph-level embedding, we first apply a linear projection to the node embeddings, and then apply max-pooling over all node embeddings to get a $d$ -dim vector $\mathbf{h}^{\mathcal{G}}$ .

# 2.3.3 RNN DECODER

On the decoder side, we adopt the same model architecture as other state-of-the-art Seq2Seq models where an attention-based (Bahdanau et al., 2014; Luong et al., 2015) LSTM decoder with copy (Vinyals et al., 2015; Gu et al., 2016) and coverage mechanisms (Tu et al., 2016) is employed.
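Before turning to the decoder details, one hop of the dynamic-graph encoder (Eq. (3) combined with Eqs. (5)-(7)) can be sketched in numpy. This is our own simplified illustration: the helper names are invented, the kNN masking uses an explicit loop for clarity, and the GRU update of Eq. (8) is replaced by simply returning the fused vector.

```python
import numpy as np

def row_softmax(M):
    # exp(-inf) evaluates to 0, so masked entries receive zero weight
    M = M - M.max(axis=1, keepdims=True)
    E = np.exp(M)
    return E / E.sum(axis=1, keepdims=True)

def normalized_adjacencies(A, k):
    """Eq. (3): kNN-sparsify a dense score matrix A (N, N), then normalize
    per direction. Returns (A_in, A_out) for incoming/outgoing edges."""
    A_bar = np.full_like(A, -np.inf)
    for i in range(A.shape[0]):               # keep the k largest scores per row
        top = np.argsort(A[i])[-k:]
        A_bar[i, top] = A[i, top]
        A_bar[i, i] = A[i, i]                 # always keep the node itself
    return row_softmax(A_bar), row_softmax(A_bar.T)

def biggnn_hop(H, A_in, A_out, Wz, bz):
    """Eqs. (5)-(7): weighted aggregation in both directions + gated fusion.
    H is (N, d); a full implementation would feed the result into a GRU (Eq. 8)."""
    h_in, h_out = A_in @ H, A_out @ H                 # Eq. (5), both directions
    feats = np.concatenate([h_in, h_out, h_in * h_out, h_in - h_out], axis=1)
    z = 1.0 / (1.0 + np.exp(-(feats @ Wz + bz)))      # gating vector z (Eq. 7)
    return z * h_in + (1.0 - z) * h_out               # Fuse(a, b) (Eqs. 6-7)

rng = np.random.default_rng(1)
N, d = 6, 4
A_in, A_out = normalized_adjacencies(rng.normal(size=(N, N)), k=3)
H1 = biggnn_hop(rng.normal(size=(N, d)), A_in, A_out,
                rng.normal(size=(4 * d, d)), np.zeros(d))
assert H1.shape == (N, d)
```

Stacking this hop $n$ times with shared parameters, plus the GRU state update, gives the BiGGNN encoder described above.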
The decoder takes the graph-level embedding $\mathbf{h}^{\mathcal{G}}$ followed by two separate fully-connected layers as initial hidden states (i.e., $\mathbf{c}_0$ and $\mathbf{s}_0$ ) and the node embeddings $\{\mathbf{h}_v^n, \forall v \in \mathcal{G}\}$ as the attention memory, and generates the output sequence one word at a time. The particular decoder used in this work closely follows (See et al., 2017). We refer the readers to Appendix A for more details.

# 2.4 HYBRID EVALUATOR

It has been observed that optimizing cross-entropy based training objectives for sequence learning does not always produce the best results on discrete evaluation metrics (Ranzato et al., 2015; Wu et al., 2016; Paulus et al., 2017). Major limitations of this strategy include exposure bias and evaluation discrepancy between training and testing. To tackle these issues, some recent QG approaches (Song et al., 2017; Kumar et al., 2018b) directly optimize evaluation metrics using REINFORCE. We go a step further and use a mixed objective function with both syntactic and semantic constraints for guiding text generation. In particular, we present a hybrid evaluator with a mixed objective function that combines both cross-entropy loss and RL loss in order to ensure the generation of syntactically and semantically valid text.

For the RL part, we employ the self-critical sequence training (SCST) algorithm (Rennie et al., 2017) to directly optimize the evaluation metrics. SCST is an efficient REINFORCE algorithm that utilizes the output of its own test-time inference algorithm to normalize the rewards it experiences.
In SCST, at each training iteration, the model generates two output sequences: the sampled output $Y^{s}$ , produced by multinomial sampling, that is, each word $y_{t}^{s}$ is sampled according to the likelihood $P(y_{t}|X,y_{< t})$ predicted by the generator, and the baseline output $\hat{Y}$ , obtained by greedy search, that is, by maximizing the output probability distribution at each decoding step. We define $r(Y)$ as the reward of an output sequence $Y$ , computed by comparing it to the corresponding ground-truth sequence $Y^{*}$ with some reward metrics. The loss function is defined as:

$$
\mathcal{L}_{rl} = \left(r(\hat{Y}) - r\left(Y^{s}\right)\right) \sum_{t} \log P\left(y_{t}^{s} \mid X, y_{<t}^{s}\right) \tag{9}
$$

As we can see, if the sampled output has a higher reward than the baseline one, we maximize its likelihood, and vice versa.

One of the key factors for RL is to pick the proper reward function. To take syntactic and semantic constraints into account, we consider the following metrics as our reward functions:

Evaluation metric as reward function: We use one of our evaluation metrics, BLEU-4, as our reward function $f_{\mathrm{eval}}$ , which lets us directly optimize the model towards the evaluation metrics.

Semantic metric as reward function: One drawback of some evaluation metrics like BLEU is that they do not measure meaning, but only reward systems that have exact n-gram matches in the reference system. To make our reward function more effective and robust, we additionally use word mover's distance (WMD) as a semantic reward function $f_{\mathrm{sem}}$ . WMD is the state-of-the-art approach for measuring the dissimilarity between two sentences based on word embeddings (Kusner et al., 2015). Following Gong et al. (2019), we take the negative of the WMD distance between a generated sequence and the ground-truth sequence and divide it by the sequence length as its semantic score.
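Under these definitions, the SCST loss of Eq. (9) is essentially a one-liner. The sketch below uses our own naming and treats the reward $r(\cdot)$ (e.g., a BLEU-4 score) as a precomputed float:

```python
import numpy as np

def scst_loss(r_greedy, r_sampled, logprobs_sampled):
    """Eq. (9): L_rl = (r(Y_hat) - r(Y^s)) * sum_t log P(y_t^s | X, y_<t^s).

    Minimizing this loss raises the sampled sequence's likelihood whenever it
    out-rewards the greedy baseline (the coefficient is negative), and lowers
    it otherwise -- the greedy rollout acts as the REINFORCE baseline.
    """
    return (r_greedy - r_sampled) * float(np.sum(logprobs_sampled))

# Sampled output beats the greedy baseline, so the gradient of this loss
# pushes the sampled sequence's log-probabilities upward.
loss = scst_loss(r_greedy=0.5, r_sampled=0.8, logprobs_sampled=[-1.0, -2.0])
```

In practice `logprobs_sampled` would come from the decoder's per-step distributions, and the subtraction happens per training batch.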
We define the final reward function as $r(Y) = f_{\mathrm{eval}}(Y, Y^*) + \alpha f_{\mathrm{sem}}(Y, Y^*)$ where $\alpha$ is a scalar.

# 2.5 TRAINING AND TESTING

We train our model in two stages. In the first stage, we train the model using the regular cross-entropy loss, defined as,

$$
\mathcal{L}_{lm} = \sum_{t} -\log P\left(y_{t}^{*} \mid X, y_{<t}^{*}\right) + \lambda \operatorname{covloss}_{t} \tag{10}
$$

where $y_{t}^{*}$ is the word at the $t$ -th position of the ground-truth output sequence and $\operatorname{covloss}_t$ is the coverage loss defined as $\sum_{i} \min(a_{i}^{t}, c_{i}^{t})$ , with $a_{i}^{t}$ being the $i$ -th element of the attention vector over the input sequence at time step $t$ . Scheduled teacher forcing (Bengio et al., 2015) is adopted to alleviate the exposure bias problem. In the second stage, we fine-tune the model by optimizing a mixed objective function combining both cross-entropy loss and RL loss, defined as,

$$
\mathcal{L} = \gamma \mathcal{L}_{rl} + (1 - \gamma) \mathcal{L}_{lm} \tag{11}
$$

where $\gamma$ is a scaling factor controlling the trade-off between cross-entropy loss and RL loss. During the testing phase, we use beam search to generate final predictions.

# 3 EXPERIMENTS

We evaluate our proposed model against state-of-the-art methods on the SQuAD dataset (Rajpurkar et al., 2016). Our full models have two variants, $\mathrm{G2S}_{\text{sta}} + \mathrm{BERT} + \mathrm{RL}$ and $\mathrm{G2S}_{\text{dyn}} + \mathrm{BERT} + \mathrm{RL}$ , which adopt static and dynamic graph construction, respectively. For model settings and sensitivity analysis, please refer to Appendix B and C. The implementation of our model is publicly available at https://github.com/hugochan/RL-based-Graph2Seq-for-NQG.
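The reward combination, coverage term, and two-stage objective of Secs. 2.4-2.5 can be summarized in a short sketch (our own naming; the $\alpha$ and $\gamma$ defaults here are purely illustrative, the actual model settings are in the appendices):

```python
import numpy as np

def mixed_reward(f_eval, f_sem, alpha=0.1):
    """r(Y) = f_eval(Y, Y*) + alpha * f_sem(Y, Y*); alpha value is illustrative."""
    return f_eval + alpha * f_sem

def coverage_loss(a_t, c_t):
    """covloss_t = sum_i min(a_i^t, c_i^t), the coverage penalty from Eq. (10)."""
    return float(np.minimum(a_t, c_t).sum())

def mixed_objective(loss_rl, loss_lm, gamma=0.99):
    """Eq. (11): L = gamma * L_rl + (1 - gamma) * L_lm; gamma is illustrative."""
    return gamma * loss_rl + (1.0 - gamma) * loss_lm

# Stage 1 trains on the cross-entropy loss alone (Eq. 10);
# stage 2 fine-tunes on the mixed objective:
L = mixed_objective(loss_rl=0.9, loss_lm=2.5, gamma=0.99)
```

Setting $\gamma$ close to 1 makes the fine-tuning stage dominated by the RL term while the small cross-entropy weight keeps the generator anchored to fluent language.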
# 3.1 BASELINE METHODS

We compare against the following baselines in our experiments: i) Transformer (Vaswani et al., 2017), ii) SeqCopyNet (Zhou et al., 2018), iii) NQG++ (Zhou et al., 2017), iv) MPQG+R (Song et al., 2017), v) AFPQA (Sun et al., 2018), vi) s2sa-at-mp-gsa (Zhao et al., 2018), vii) ASs2s (Kim et al., 2018), and viii) CGC-QG (Liu et al., 2019). Detailed descriptions of the baselines are provided in Appendix D. Experiments on baselines followed by * are conducted using released code. Results of other baselines are taken from the corresponding papers, with unreported metrics marked as "-".

# 3.2 DATA AND METRICS

SQuAD contains more than 100K questions posed by crowd workers on 536 Wikipedia articles. Since the test set of the original SQuAD is not publicly available, the accessible parts $(\approx 90\%)$ are used as the entire dataset in our experiments. For a fair comparison with previous methods, we evaluated our model on both data split-1 (Song et al., 2018a) that contains 75,500/17,934/11,805 (train/development/test) examples and data split-2 (Zhou et al., 2017) that contains 86,635/8,965/8,964 examples.

Following previous works, we use BLEU-4 (Papineni et al., 2002), METEOR (Banerjee & Lavie, 2005), ROUGE-L (Lin, 2004) and Q-BLEU1 (Nema & Khapra, 2018) as our evaluation metrics. Initially, BLEU-4 and METEOR were designed for evaluating machine translation systems and ROUGE-L was designed for evaluating text summarization systems. Recently, Q-BLEU1 was designed for better evaluating question generation systems, and was shown to correlate significantly better with human judgments than existing metrics.

Besides automatic evaluation, we also conduct a human evaluation study on split-2. We ask human evaluators to rate generated questions from a set of anonymized competing systems based on whether they are syntactically correct, semantically correct and relevant to the passage.
The rating scale is from 1 to 5, on each of the three categories. Evaluation scores from all evaluators are collected and averaged as final scores. Further details on human evaluation can be found in Appendix E.

Table 1: Automatic evaluation results on the SQuAD test set.
| Methods | BLEU-4 (split-1) | METEOR (split-1) | ROUGE-L (split-1) | Q-BLEU1 (split-1) | BLEU-4 (split-2) | METEOR (split-2) | ROUGE-L (split-2) | Q-BLEU1 (split-2) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Transformer | 2.56 | 8.98 | 26.01 | 16.70 | 3.09 | 9.68 | 28.86 | 20.10 |
| SeqCopyNet | - | - | - | - | 13.02 | - | 44.00 | - |
| NQG++ | - | - | - | - | 13.29 | - | - | - |
| MPQG+R* | 14.39 | 18.99 | 42.46 | 52.00 | 14.71 | 18.93 | 42.60 | 50.30 |
| AFPQA | - | - | - | - | 15.64 | - | - | - |
| s2sa-at-mp-gsa | 15.32 | 19.29 | 43.91 | - | 15.82 | 19.67 | 44.24 | - |
| ASs2s | 16.20 | 19.92 | 43.96 | - | 16.17 | - | - | - |
| CGC-QG | - | - | - | - | 17.55 | 21.24 | 44.53 | - |
| G2Sdyn+BERT+RL | 17.55 | 21.42 | 45.59 | 55.40 | 18.06 | 21.53 | 45.91 | 55.00 |
| G2Ssta+BERT+RL | 17.94 | 21.76 | 46.02 | 55.60 | 18.30 | 21.70 | 45.98 | 55.20 |
Table 2: Human evaluation results (± standard deviation) on the SQuAD split-2 test set. The rating scale is from 1 to 5 (higher scores indicate better results).
| Methods | Syntactically correct | Semantically correct | Relevant |
| --- | --- | --- | --- |
| MPQG+R* | 4.34 (0.15) | 4.01 (0.23) | 3.21 (0.31) |
| G2Ssta+BERT+RL | 4.41 (0.09) | 4.31 (0.12) | 3.79 (0.45) |
| Ground-truth | 4.74 (0.14) | 4.74 (0.19) | 4.25 (0.38) |
# 3.3 EXPERIMENTAL RESULTS AND HUMAN EVALUATION

Table 1 shows the automatic evaluation results comparing our proposed models against other state-of-the-art baseline methods. First of all, we can see that both of our full models $\mathrm{G2S}_{sta} + \mathrm{BERT} + \mathrm{RL}$ and $\mathrm{G2S}_{dyn} + \mathrm{BERT} + \mathrm{RL}$ achieve new state-of-the-art scores on both data splits and consistently outperform previous methods by a significant margin. This highlights that our RL-based Graph2Seq model, together with the deep alignment network, successfully addresses the three issues we highlighted in Sec. 1. Between these two variants, $\mathrm{G2S}_{sta} + \mathrm{BERT} + \mathrm{RL}$ outperforms $\mathrm{G2S}_{dyn} + \mathrm{BERT} + \mathrm{RL}$ on all the metrics. Also, unlike the baseline methods, our model does not rely on any hand-crafted rules or ad-hoc strategies, and is fully end-to-end trainable.

As shown in Table 2, we conducted a human evaluation study to assess the quality of the questions generated by our model, the baseline method MPQG+R, and the ground-truth data in terms of syntax, semantics and relevance metrics. We can see that our best performing model achieves good results even compared to the ground-truth, and outperforms the strong baseline method MPQG+R. Our error analysis shows that the main syntactic errors are repeated or unknown words in the generated questions. Further, the slightly lower quality on semantics also impacts the relevance.

# 3.4 ABLATION STUDY

Table 3: Ablation study on the SQuAD split-2 test set.
| Methods | BLEU-4 | Methods | BLEU-4 |
| --- | --- | --- | --- |
| G2Sdyn+BERT+RL | 18.06 | G2Sdyn | 16.81 |
| G2Ssta+BERT+RL | 18.30 | G2Ssta | 16.96 |
| G2Ssta+BERT-fixed+RL | 18.20 | G2Sdyn w/o DAN | 12.58 |
| G2Sdyn+BERT | 17.56 | G2Ssta w/o DAN | 12.62 |
| G2Ssta+BERT | 18.02 | G2Ssta w/o BiGGNN, w/ Seq2Seq | 16.14 |
| G2Ssta+BERT-fixed | 17.86 | G2Ssta w/o BiGGNN, w/ GCN | 14.47 |
| G2Sdyn+RL | 17.18 | G2Ssta w/ GGNN-forward | 16.53 |
| G2Ssta+RL | 17.49 | G2Ssta w/ GGNN-backward | 16.75 |
As shown in Table 3, we perform an ablation study to systematically assess the impact of different model components (e.g., BERT, RL, DAN, and BiGGNN) for the two proposed full model variants (static vs. dynamic) on the SQuAD split-2 test set. It confirms our finding that syntax-based static graph construction $(\mathrm{G2S}_{sta} + \mathrm{BERT} + \mathrm{RL})$ performs better than semantics-aware dynamic graph construction $(\mathrm{G2S}_{dyn} + \mathrm{BERT} + \mathrm{RL})$ in almost every setting. However, it may be too early to conclude which one is the method of choice for QG. On the one hand, an advantage of static graph construction is that useful domain knowledge can be hard-coded into the graph, which can greatly benefit the downstream task. However, it might suffer when prior knowledge for a specific domain is lacking. On the other hand, dynamic graph construction does not need any prior knowledge about the hidden structure of text, and relies only on the attention matrix to capture this structural information, which provides an easy way to achieve decent performance. One interesting direction is to explore effective ways of combining both static and dynamic graphs.

By turning off the Deep Alignment Network (DAN), the BLEU-4 score of $\mathrm{G2S}_{\text{sta}}$ (similarly for $\mathrm{G2S}_{\text{dyn}}$ ) dramatically drops from $16.96\%$ to $12.62\%$ , which indicates the importance of answer information for QG and shows the effectiveness of DAN. This can also be verified by comparing the performance between the DAN-enhanced Seq2Seq model (16.14 BLEU-4 score) and other carefully designed answer-aware Seq2Seq baselines such as NQG++ (13.29 BLEU-4 score), MPQG+R (14.71 BLEU-4 score) and AFPQA (15.82 BLEU-4 score). Further experiments demonstrate that both word-level ( $\mathrm{G2S}_{\text{sta}}$ w/ DAN-word only) and contextual-level ( $\mathrm{G2S}_{\text{sta}}$ w/ DAN-contextual only) answer alignments in DAN are helpful.
We can see the advantages of Graph2Seq learning over Seq2Seq learning on this task by comparing the performance between $\mathrm{G2S}_{\text{sta}}$ and Seq2Seq. Compared to Seq2Seq based QG methods that completely ignore hidden structure information in the passage, our Graph2Seq based method is aware of more hidden structure information, such as semantic similarity between any pair of words that are not directly connected, or syntactic relationships between two words captured in a dependency parse tree. In our experiments, we also observe that doing both forward and backward message passing in the GNN encoder is beneficial. Surprisingly, using GCN (Kipf & Welling, 2016) as the graph encoder (and converting the input graph to an undirected graph) does not provide good performance. In addition, fine-tuning the model using REINFORCE can further improve the model performance in all settings (i.e., w/ and w/o BERT), which shows the benefits of directly optimizing the evaluation metrics. Besides, we find that the pretrained BERT embedding has a considerable impact on the performance, and fine-tuning the BERT embedding further improves the performance, which demonstrates the power of large-scale pretrained language models.

# 3.5 CASE STUDY

Table 4: Generated questions on SQuAD split-2 test set. Target answers are underlined.

Passage: for the successful execution of a project, effective planning is essential.
Gold: what is essential for the successful execution of a project?
$\mathbf{G2S}_{sta}$ w/o BiGGNN (Seq2Seq): what type of planning is essential for the project?
$\mathbf{G2S}_{sta}$ w/o DAN: what type of planning is essential for the successful execution of a project?
$\mathbf{G2S}_{sta}$ : what is essential for the successful execution of a project?
$\mathbf{G2S}_{sta} + \mathbf{BERT}$ : what is essential for the successful execution of a project?
+$\mathbf{G2S}_{sta} + \mathbf{BERT} + \mathbf{RL}$ : what is essential for the successful execution of a project?
$\mathbf{G2S}_{dyn} + \mathbf{BERT} + \mathbf{RL}$ : what is essential for the successful execution of a project?

Passage: the church operates three hundred sixty schools and institutions overseas.

Gold: how many schools and institutions does the church operate overseas?

$\mathbf{G2S}_{sta}$ w/o BiGGNN (Seq2Seq): how many schools does the church have?

$\mathbf{G2S}_{sta}$ w/o DAN: how many schools does the church have?

$\mathbf{G2S}_{sta}$ : how many schools and institutions does the church have?

$\mathbf{G2S}_{sta} + \mathbf{BERT}$ : how many schools and institutions does the church have?

$\mathbf{G2S}_{sta} + \mathbf{BERT} + \mathbf{RL}$ : how many schools and institutions does the church operate?

$\mathbf{G2S}_{dyn} + \mathbf{BERT} + \mathbf{RL}$ : how many schools does the church operate?

In Table 4, we further show a few examples that illustrate the quality of generated text given a passage under different ablated systems. As we can see, incorporating answer information helps the model identify the answer type of the question to be generated, and thus makes the generated questions more relevant and specific. Also, we find our Graph2Seq model can generate more complete and valid questions compared to the Seq2Seq baseline. We think this is because a Graph2Seq model is able to exploit the rich text structure information better than a Seq2Seq model. Lastly, it shows that fine-tuning the model using REINFORCE can improve the quality of the generated questions.

# 4 RELATED WORK

# 4.1 NATURAL QUESTION GENERATION

Early works (Mostow & Chen, 2009; Heilman & Smith, 2010) for QG focused on rule-based approaches that rely on heuristic rules or hand-crafted templates, with low generalizability and scalability. Recent attempts have focused on NN-based approaches that do not require manually-designed rules and are end-to-end trainable.
Existing NN-based approaches (Du et al., 2017; Yao et al.; Zhou et al., 2018) rely on the Seq2Seq model with attention, copy or coverage mechanisms. In addition, various ways (Zhou et al., 2017; Song et al., 2017; Zhao et al., 2018) have been proposed to utilize the target answer for guiding the question generation. Some recent approaches (Song et al., 2017; Kumar et al., 2018b) aim at directly optimizing evaluation metrics using REINFORCE. Concurrent works have explored tackling the QG task with various semantics-enhanced rewards (Zhang & Bansal, 2019) or large-scale pretrained language models (Dong et al., 2019).

However, the existing approaches for QG suffer from several limitations: they (i) ignore the rich structure information hidden in text, (ii) solely rely on a cross-entropy loss, which leads to issues like exposure bias and inconsistency between train/test measurement, and (iii) fail to fully exploit the answer information. To address these limitations, we propose an RL-based Graph2Seq model augmented with a deep alignment network to effectively tackle the QG task. To the best of our knowledge, we are the first to introduce the Graph2Seq architecture to solve the question generation task.
While a high-quality graph structure is crucial for the performance of GNN-based approaches, most existing works use syntax-based static graph structures when applied to textual data. Very recently, researchers have started exploring methods to automatically construct a graph of visual objects (Norcliffe-Brown et al., 2018) or words (Liu et al., 2018; Chen et al., 2019c;b) when applying GNNs to non-graph structured data. To the best of our knowledge, we are the first to systematically investigate the performance difference between syntax-aware static graph construction and semantics-aware dynamic graph construction in the context of question generation.

# 5 CONCLUSION

We proposed a novel RL-based Graph2Seq model for QG, where answer information is utilized by an effective Deep Alignment Network and a novel bidirectional GNN processes the directed passage graph. On the SQuAD dataset, our method outperforms existing methods by a significant margin and achieves new state-of-the-art results. Future directions include investigating more effective ways of automatically learning graph structures from text and exploiting Graph2Seq models for question generation from structured data like knowledge graphs or tables.

# ACKNOWLEDGMENTS

This work is supported by IBM Research AI through the IBM AI Horizons Network. We thank the human evaluators who evaluated our system. We also thank the anonymous reviewers for their constructive feedback.

# REFERENCES

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
Satanjeev Banerjee and Alon Lavie. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pp. 65-72, 2005.
+Junwei Bao, Duyu Tang, Nan Duan, Zhao Yan, Yuanhua Lv, Ming Zhou, and Tiejun Zhao. Table-to-text: Describing table region with natural language. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. +Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Sima'an. Graph convolutional encoders for syntax-aware neural machine translation. arXiv preprint arXiv:1704.04675, 2017. +Daniel Beck, Gholamreza Haffari, and Trevor Cohn. Graph-to-sequence learning using gated graph neural networks. arXiv preprint arXiv:1806.09835, 2018. +Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 1171-1179, 2015. +Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. Reading wikipedia to answer open-domain questions. arXiv preprint arXiv:1704.00051, 2017. +Yu Chen, Lingfei Wu, and Mohammed J Zaki. Bidirectional attentive memory networks for question answering over knowledge bases. arXiv preprint arXiv:1903.02188, 2019a. +Yu Chen, Lingfei Wu, and Mohammed J Zaki. Deep iterative and adaptive learning for graph neural networks. arXiv preprint arXiv:1912.07832, 2019b. +Yu Chen, Lingfei Wu, and Mohammed J Zaki. Graphflow: Exploiting conversation flow with graph neural networks for conversational machine comprehension. arXiv preprint arXiv:1908.00059, 2019c. +Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In EMNLP, pp. 1724-1734, 2014. +Guy Danon and Mark Last. A syntactic approach to domain-specific automatic question generation. arXiv preprint arXiv:1712.09827, 2017. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. 
arXiv preprint arXiv:1810.04805, 2018. +Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. Unified language model pre-training for natural language understanding and generation. In Advances in Neural Information Processing Systems, pp. 13042-13054, 2019. +Xinya Du, Junru Shao, and Claire Cardie. Learning to ask: Neural question generation for reading comprehension. arXiv preprint arXiv:1705.00106, 2017. +Hady Elsahar, Christophe Gravier, and Frederique Laforest. Zero-shot question generation from knowledge graphs for unseen predicates and entity types. arXiv preprint arXiv:1802.06842, 2018. +Zhihao Fan, Zhongyu Wei, Siyuan Wang, Yang Liu, and Xuanjing Huang. A reinforcement learning framework for natural question generation using bi-discriminators. In Proceedings of the 27th International Conference on Computational Linguistics, pp. 1763-1774, 2018. +Yuyang Gao, Lingfei Wu, Houman Homayoun, and Liang Zhao. Dyngraph2seq: Dynamic-graph-to-sequence interpretable learning for health stage prediction in online health forums. arXiv preprint arXiv:1908.08497, 2019. + +Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1263-1272. JMLR.org, 2017. +Hongyu Gong, Suma Bhat, Lingfei Wu, Jinjun Xiong, and Wen-mei Hwu. Reinforcement learning based text style transfer without parallel training corpus. arXiv preprint arXiv:1903.10671, 2019. +Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. Incorporating copying mechanism in sequence-to-sequence learning. arXiv preprint arXiv:1603.06393, 2016. +Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pp. 1024-1034, 2017. +Michael Heilman. Automatic factual question generation from text. 2011. 
+Michael Heilman and Noah A Smith. Good question! statistical ranking for question generation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pp. 609-617. Association for Computational Linguistics, 2010. +Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735-1780, 1997. +Yanghoon Kim, Hwanhee Lee, Joongbo Shin, and Kyomin Jung. Improving neural question generation using answer separation. arXiv preprint arXiv:1809.02393, 2018. +Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. +Durk P Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameterization trick. In Advances in Neural Information Processing Systems, pp. 2575-2583, 2015. +Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016. +Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. OpenNMT: Open-source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pp. 67-72, Vancouver, Canada, July 2017. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/P17-4012. +Vishwajeet Kumar, Kireeti Boorla, Yogesh Meena, Ganesh Ramakrishnan, and Yuan-Fang Li. Automating reading comprehension by generating question and answer pairs. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp. 335-348. Springer, 2018a. +Vishwajeet Kumar, Ganesh Ramakrishnan, and Yuan-Fang Li. A framework for automatic question generation from text using deep reinforcement learning. arXiv preprint arXiv:1808.04961, 2018b. +Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. From word embeddings to document distances. In International Conference on Machine Learning, pp. 957-966, 2015. 
+Yikang Li, Nan Duan, Bolei Zhou, Xiao Chu, Wanli Ouyang, Xiaogang Wang, and Ming Zhou. Visual question generation as dual task of visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6116-6124, 2018. +Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493, 2015. +Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out, 2004. +Bang Liu, Mingjun Zhao, Di Niu, Kunfeng Lai, Yancheng He, Haojie Wei, and Yu Xu. Learning to generate questions by learning what not to generate. arXiv preprint arXiv:1902.10418, 2019. +Pengfei Liu, Shuaichen Chang, Xuanjing Huang, Jian Tang, and Jackie Chi Kit Cheung. Contextualized non-local neural networks for sequence learning. arXiv preprint arXiv:1811.08600, 2018. + +Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025, 2015. +Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Margaret Mitchell, Xiaodong He, and Lucy Vanderwende. Generating natural questions about an image. arXiv preprint arXiv:1603.06059, 2016. +Jack Mostow and Wei Chen. Generating instruction automatically for the reading strategy of self-questioning. 2009. +Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pp. 807-814, 2010. +Preksha Nema and Mitesh M Khapra. Towards a better metric for evaluating question generation systems. arXiv preprint arXiv:1808.10192, 2018. +Will Norcliffe-Brown, Stathis Vafeias, and Sarah Parisot. Learning conditioned graph structures for interpretable visual question answering. In Advances in Neural Information Processing Systems, pp. 8344-8353, 2018. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 
Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pp. 311-318. Association for Computational Linguistics, 2002. +Romain Paulus, Caiming Xiong, and Richard Socher. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304, 2017. +Jeffrey Pennington, Richard Socher, and Christopher Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pp. 1532-1543, 2014. +Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016. +Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015. +Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. Self-critical sequence training for image captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7008-7024, 2017. +Abigail See, Peter J Liu, and Christopher D Manning. Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368, 2017. +Iulian Vlad Serban, Alberto García-Durán, Caglar Gulcehre, Sungjin Ahn, Sarath Chandar, Aaron Courville, and Yoshua Bengio. Generating factoid questions with recurrent neural networks: The 30m factoid question-answer corpus. arXiv preprint arXiv:1603.06807, 2016. +Linfeng Song, Zhiguo Wang, and Wael Hamza. A unified query-based generative model for question generation and question answering. arXiv preprint arXiv:1709.01058, 2017. +Linfeng Song, Zhiguo Wang, Wael Hamza, Yue Zhang, and Daniel Gildea. Leveraging context information for natural question generation. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pp. 569-574, 2018a. +Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. A graph-to-sequence model for amr-to-text generation. arXiv preprint arXiv:1805.02473, 2018b. +Xingwu Sun, Jing Liu, Yajuan Lyu, Wei He, Yanjun Ma, and Shi Wang. Answer-focused and position-aware neural question generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 3930-3939, 2018. +Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104-3112, 2014. + +Duyu Tang, Nan Duan, Tao Qin, Zhao Yan, and Ming Zhou. Question answering and question generation as dual tasks. arXiv preprint arXiv:1706.02027, 2017. +Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. Modeling coverage for neural machine translation. arXiv preprint arXiv:1601.04811, 2016. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998-6008, 2017. +Petar Velicković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017. +Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Advances in Neural Information Processing Systems, pp. 2692-2700, 2015. +Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229-256, 1992. +Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 
Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016. +Kun Xu, Lingfei Wu, Zhiguo Wang, and Vadim Sheinin. Graph2seq: Graph to sequence learning with attention-based neural networks. arXiv preprint arXiv:1804.00823, 2018a. +Kun Xu, Lingfei Wu, Zhiguo Wang, Mo Yu, Liwei Chen, and Vadim Sheinin. Exploiting rich syntactic information for semantic parsing with graph-to-sequence model. arXiv preprint arXiv:1808.07624, 2018b. +Kun Xu, Lingfei Wu, Zhiguo Wang, Mo Yu, Liwei Chen, and Vadim Sheinin. SQL-to-text generation with graph-to-sequence model. arXiv preprint arXiv:1809.05255, 2018c. +Kaichun Yao, Libo Zhang, Tiejian Luo, Lili Tao, and Yanjun Wu. Teaching machines to ask questions. +Xingdi Yuan, Tong Wang, Caglar Gulcehre, Alessandro Sordoni, Philip Bachman, Sandeep Subramanian, Saizheng Zhang, and Adam Trischler. Machine comprehension by text-to-text neural question generation. arXiv preprint arXiv:1705.02012, 2017. +Shiyue Zhang and Mohit Bansal. Addressing semantic drift in question generation for semi-supervised question answering. arXiv preprint arXiv:1909.06356, 2019. +Yao Zhao, Xiaochuan Ni, Yuanyuan Ding, and Qifa Ke. Paragraph-level neural question generation with maxout pointer and gated self-attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 3901–3910, 2018. +Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, and Ming Zhou. Neural question generation from text: A preliminary study. In National CCF Conference on Natural Language Processing and Chinese Computing, pp. 662-671. Springer, 2017. +Qingyu Zhou, Nan Yang, Furu Wei, and Ming Zhou. Sequential copying networks. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. 
+ +# A DETAILS ON THE RNN DECODER + +At each decoding step $t$ , an attention mechanism learns to attend to the most relevant words in the input sequence, and computes a context vector $\mathbf{h}_t^*$ based on the current decoding state $\mathbf{s}_t$ , the current coverage vector $\mathbf{c}^t$ and the attention memory. In addition, the generation probability $p_{\mathrm{gen}} \in [0,1]$ is calculated from the context vector $\mathbf{h}_t^*$ , the decoder state $\mathbf{s}_t$ and the decoder input $y_{t-1}$ . Next, $p_{\mathrm{gen}}$ is used as a soft switch to choose between generating a word from the vocabulary, or copying a word from the input sequence. We dynamically maintain an extended vocabulary which is the union of the usual vocabulary and all words appearing in a batch of source examples (i.e., passages and answers). Finally, in order to encourage the decoder to utilize the diverse components of the input sequence, a coverage mechanism is applied. At each step, we maintain a coverage vector $\mathbf{c}^t$ , which is the sum of attention distributions over all previous decoder time steps. A coverage loss is also computed to penalize repeatedly attending to the same locations of the input sequence. + +# B MODEL SETTINGS + +We keep and fix the 300-dim GloVe vectors for the most frequent 70,000 words in the training set. We compute the 1024-dim BERT embeddings on the fly for each word in text using a (trainable) weighted sum of all BERT layer outputs. The embedding sizes of case, POS and NER tags are set to 3, 12 and 8, respectively. We set the hidden state size of BiLSTM to 150 so that the concatenated state size for both directions is 300. The size of all other hidden layers is set to 300. We apply a variational dropout (Kingma et al., 2015) rate of 0.4 after word embedding layers and 0.3 after RNN layers. We set the neighborhood size to 10 for dynamic graph construction. The number of GNN hops is set to 3. 
During training, in each epoch, we set the initial teacher forcing probability to 0.75 and exponentially decay it as $0.75 * 0.9999^i$, where $i$ is the training step. We set $\alpha$ in the reward function to 0.1, $\gamma$ in the mixed loss function to 0.99, and the coverage loss ratio $\lambda$ to 0.4. We use Adam (Kingma & Ba, 2014) as the optimizer, and the learning rate is set to 0.001 in the pretraining stage and 0.00001 in the fine-tuning stage. We reduce the learning rate by a factor of 0.5 if the validation BLEU-4 score stops improving for three epochs. We stop the training when no improvement is seen for 10 epochs. We clip the gradient at length 10. The batch size is set to 60 and 50 on data split-1 and split-2, respectively. The beam search width is set to 5. All hyperparameters are tuned on the development set.

# C SENSITIVITY ANALYSIS OF HYPERPARAMETERS

![](images/c78fe09966834d7fab433992e15fbb141b9f2e3e912e5f922a6de6cc4f3a4e1d.jpg)
Figure 3: Effect of the number of GNN hops.

To study the effect of the number of GNN hops, we conduct experiments on the $\mathrm{G2S}_{sta}$ model on the SQuAD split-2 data. Fig. 3 shows that our model is not very sensitive to the number of GNN hops and can achieve reasonably good results with various numbers of hops.

Table 5: Ablation study on the SQuAD split-2 test set.
| Methods | BLEU-4 | Methods | BLEU-4 |
| --- | --- | --- | --- |
| G2S$_{dyn}$+BERT+RL | 18.06 | G2S$_{dyn}$ w/o feat | 16.51 |
| G2S$_{sta}$+BERT+RL | 18.30 | G2S$_{sta}$ w/o feat | 16.65 |
| G2S$_{sta}$+BERT-fixed+RL | 18.20 | G2S$_{dyn}$ w/o DAN | 12.58 |
| G2S$_{dyn}$+BERT | 17.56 | G2S$_{sta}$ w/o DAN | 12.62 |
| G2S$_{sta}$+BERT | 18.02 | G2S$_{sta}$ w/ DAN-word only | 15.92 |
| G2S$_{sta}$+BERT-fixed | 17.86 | G2S$_{sta}$ w/ DAN-contextual only | 16.07 |
| G2S$_{dyn}$+RL | 17.18 | G2S$_{sta}$ w/ GGNN-forward | 16.53 |
| G2S$_{sta}$+RL | 17.49 | G2S$_{sta}$ w/ GGNN-backward | 16.75 |
| G2S$_{dyn}$ | 16.81 | G2S$_{sta}$ w/o BiGGNN, w/ Seq2Seq | 16.14 |
| G2S$_{sta}$ | 16.96 | G2S$_{sta}$ w/o BiGGNN, w/ GCN | 14.47 |
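The GGNN-forward, GGNN-backward, and BiGGNN rows in Table 5 differ only in the direction of message passing over the directed passage graph. A toy sketch of one bidirectional hop follows; this is our simplified illustration with mean aggregation and a tanh fusion, whereas the actual model uses a GRU-style gated update.

```python
import numpy as np

def biggnn_hop(h, adj):
    # h: (N, d) node states; adj: (N, N) directed adjacency with
    # adj[i, j] = 1 for an edge i -> j.
    in_deg = np.maximum(adj.sum(axis=0), 1)[:, None]    # normalizer for forward messages
    out_deg = np.maximum(adj.sum(axis=1), 1)[:, None]   # normalizer for backward messages
    fwd = (adj.T @ h) / in_deg     # aggregate neighbors along edge direction
    bwd = (adj @ h) / out_deg      # aggregate neighbors against edge direction
    return np.tanh(h + fwd + bwd)  # fuse both directions with the node state
```

Dropping either the `fwd` or `bwd` term recovers a unidirectional (GGNN-forward or GGNN-backward) hop, which is exactly the contrast the ablation measures.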
+

# D DETAILS ON BASELINE METHODS

Transformer (Vaswani et al., 2017) We included a Transformer-based Seq2Seq model augmented with attention and copy mechanisms. We used the open source implementation provided by the OpenNMT (Klein et al., 2017) library and trained the model from scratch. Surprisingly, this baseline performed very poorly on the benchmarks even though we conducted a moderate hyperparameter search and trained the model for a large number of epochs. We suspect this might be partially because this method is very sensitive to hyperparameters, as reported by Klein et al. (2017), and probably data-hungry on this task. We conjecture that better performance might be expected by extensively searching the hyperparameters and using a pretrained transformer model.

SeqCopyNet (Zhou et al., 2018) proposed an extension to the copy mechanism which learns to copy not only single words but also sequences from the input sentence.

$\mathbf{NQG{+}{+}}$ (Zhou et al., 2017) proposed an attention-based Seq2Seq model equipped with a copy mechanism and a feature-rich encoder to encode answer position, POS and NER tag information.

$\mathbf{MPQG} + \mathbf{R}$ (Song et al., 2017) proposed an RL-based Seq2Seq model with a multi-perspective matching encoder to incorporate answer information. Copy and coverage mechanisms are applied.

AFPQA (Sun et al., 2018) consists of an answer-focused component which generates an interrogative word matching the answer type, and a position-aware component which is aware of the position of the context words when generating a question by modeling the relative distance between the context words and the answer.

s2sa-at-mp-gsa (Zhao et al., 2018) proposed a model which contains a gated attention encoder and a maxout pointer decoder to tackle the challenges of processing long input sequences. For fair comparison, we report the results of the sentence-level version of their model to match our settings.
+

ASs2s (Kim et al., 2018) proposed an answer-separated Seq2Seq model which treats the passage and the answer separately.

CGC-QG (Liu et al., 2019) proposed a multi-task learning framework to guide the model to learn the accurate boundaries between copying and generation.

# E DETAILS ON HUMAN EVALUATION

We conducted a small-scale (i.e., 50 random examples per system) human evaluation on the split-2 data. We asked 5 human evaluators to give feedback on the quality of questions generated by a set of anonymized competing systems. In each example, given a triple containing a source passage, a target answer and an anonymized system output, they were asked to rate the quality of the output by answering the following three questions: i) is this generated question syntactically correct? ii) is this generated question semantically correct? and iii) is this generated question relevant to the passage? For each evaluation question, the rating scale is from 1 to 5, where a higher score means better quality (i.e., 1: Poor, 2: Marginal, 3: Acceptable, 4: Good, 5: Excellent). Responses from all evaluators were collected and averaged.

# F MORE RESULTS ON ABLATION STUDY

We performed a comprehensive ablation study to systematically assess the impact of different model components (e.g., BERT, RL, DAN, BiGGNN, FEAT, DAN-word, and DAN-contextual) for the two proposed full model variants (static vs. dynamic) on the SQuAD split-2 test set. Our experimental results confirm that every component in our proposed model contributes to the overall performance.
# REINFORCEMENT LEARNING WITH COMPETITIVE ENSEMBLES OF INFORMATION-CONSTRAINED PRIMITIVES

Anirudh Goyal1, Shagun Sodhani2, Jonathan Binas1, Xue Bin Peng3, Sergey Levine3, Yoshua Bengio1

# ABSTRACT

Reinforcement learning agents that operate in diverse and complex environments can benefit from the structured decomposition of their behavior.
Often, this is addressed in the context of hierarchical reinforcement learning, where the aim is to decompose a policy into lower-level primitives or options, and a higher-level meta-policy that triggers the appropriate behaviors for a given situation. However, the meta-policy must still produce appropriate decisions in all the states. In this work, we propose a policy design that decomposes into primitives, similarly to hierarchical reinforcement learning, but without an explicit high-level meta-policy. Instead, each primitive can decide for itself whether it wishes to act in the current state. We use an information-theoretic mechanism for enabling this decentralized decision: each primitive chooses how much information it needs about the current state to make a decision, and the primitive that requests the most information about the current state acts in the environment. Regularizing the primitives to use as little information as possible leads to natural competition and specialization. We experimentally demonstrate that this policy architecture improves over both flat and hierarchical policies in terms of generalization.

# 1 INTRODUCTION

Learning policies that generalize to new environments or tasks is a fundamental challenge in reinforcement learning. While deep reinforcement learning has enabled training powerful policies, which outperform humans on specific, well-defined tasks (Mnih et al., 2015), their performance often diminishes when the properties of the environment or the task change to regimes not encountered during training.
A hypothesis hinting at the reasons for this discrepancy is that the world is inherently compositional, such that its features can be described by compositions of small sets of primitive mechanisms (Parascandolo et al., 2017). Since humans seem to benefit from learning skills and learning to combine skills, it might be a useful inductive bias for learning models as well.

This is addressed to some extent by hierarchical reinforcement learning (HRL) methods, which focus on learning representations at multiple spatial and temporal scales, thus enabling better exploration strategies and improved generalization performance (Dayan & Hinton, 1993; Sutton et al., 1999b; Dietterich, 2000; Kulkarni et al., 2016). However, hierarchical approaches rely on some form of learned high-level controller, which decides when to activate different components in the hierarchy. While low-level sub-policies can specialize to smaller portions of the state space, the top-level controller (or master policy) needs to know how to deal with any given state. That is, it should provide optimal behavior for the entire accessible state space. As the master policy is trained on a particular state distribution, learning it in a way that generalizes to new environments effectively becomes the bottleneck for such approaches (Sasha Vezhnevets et al., 2017; Andreas et al., 2017).

![](images/bfd103f5adf79369597ce87c37e046a591db77dba22c445657cd84f31efc89be.jpg)
Figure 1: Illustration of our model (Left): An intrinsic competition mechanism, based on the amount of information each primitive requests, is used to select a primitive to be active for a given input. Each primitive focuses on distinct features of the environment; in this case, one policy focuses on boxes, a second one on gates, and the third one on spheres. Right: The primitive-selection mechanism of our model. The primitive with the most information acts in the environment and gets the reward.
![](images/0c3271e3045cae7439bfb1602094f013f74afba90987c2c785780baeb8a7a867.jpg)

We argue, and empirically show, that in order to achieve better generalization, the interaction between the low-level primitives and the selection thereof should itself be performed without requiring a single centralized network that understands the entire state space. We, therefore, propose a decentralized approach as an alternative to standard HRL, where we only learn a set of low-level primitives without learning an explicit high-level controller. In particular, we construct a factorized representation of the policy by learning simple primitive policies, which focus on distinct regions of the state space. Rather than being gated by a single meta-policy, the primitives directly compete with one another to determine which one should be active at any given time, based on the degree to which their state encoders "recognize" the current state input. While, technically, the competition between primitives implicitly realizes a global selection mechanism, we consider our model decentralized in the sense that individual primitives can function on their own, and can be combined in new ways, without relying on an explicit high-level controller.

We frame the problem as one of information transfer between the current state and a dynamically selected primitive policy. Each policy can, by itself, decide to request information about the current state, and the amount of information requested is used to determine which primitive acts in the current state. Since the amount of state information that a single primitive can access is limited, each primitive is encouraged to use its resources wisely. Constraining the amount of accessible information in this way naturally leads to a decentralized competition and decision mechanism where individual primitives specialize in smaller regions of the state space.
We formalize this information-driven objective based on the variational information bottleneck. The resulting set of competing primitives achieves both a meaningful factorization of the policy and an effective decision mechanism for which primitives to use. Importantly, not relying on a centralized meta-policy enables the individual primitive mechanisms to be recombined in a plug-and-play fashion, and the primitives can be transferred seamlessly to new environments.

Contributions: In summary, the contributions of our work are as follows: (1) We propose a method for learning and operating a set of functional primitives in a decentralized way, without requiring an explicit high-level meta-controller to select the active primitives (see Fig. 1 for illustration). (2) We introduce an information-theoretic objective, the effects of which are twofold: a) it leads to the specialization of individual primitives to distinct regions of the state space, and b) it enables a competition mechanism, which is used to select active primitives in a decentralized manner. (3) We demonstrate the superior transfer learning performance of our model, which is due to the flexibility of the proposed framework regarding the dynamic addition, removal, and recombination of primitives. Decentralized primitives can be successfully transferred to larger or previously unseen environments, and outperform models with an explicit meta-controller for primitive selection.

# 2 PRELIMINARIES

We consider a Markov decision process (MDP) defined by the tuple $(S, \mathcal{A}, P, r, \gamma)$, where the state space $S$ and the action space $\mathcal{A}$ may be discrete or continuous. The environment emits a bounded reward $r: S \times \mathcal{A} \to [r_{min}, r_{max}]$ on each transition, and $\gamma \in [0,1)$ is the discount factor. $\pi(\cdot \mid s)$ denotes a policy over the actions given the current state $s$.
$R(\pi) = \mathbb{E}_{\pi}[\sum_{t} \gamma^{t} r(s_{t})]$ denotes the expected total return when an agent follows the policy $\pi$. The standard objective in reinforcement learning is to maximize the expected total return $R(\pi)$. We use the concept of the information bottleneck (Tishby et al., 2000) to learn compressed representations. The information bottleneck objective is formalized as minimizing the mutual information of a bottleneck representation layer with the input while maximizing its mutual information with the corresponding output. This type of input compression has been shown to improve generalization (Achille & Soatto, 2016; Alemi et al., 2016).

# 3 INFORMATION-THEORETIC LEARNING OF DISTINCT PRIMITIVES

Our goal is to learn a policy, composed of multiple primitive sub-policies, to maximize the expected reward over $T$-step interactions for a distribution of tasks. Simple primitives which focus on solving a part of the given task (and not the complete task) should generalize more effectively, as they can be applied to similar aspects of different tasks (subtasks) even if the overall objectives of the tasks are drastically different. Learning primitives in this way can also be viewed as learning a factorized representation of a policy, which is composed of several independent policies.

Our proposed approach consists of three mechanisms: 1) a mechanism for restricting a particular primitive to a subset of the state space; 2) a competition mechanism between primitives to select the most effective primitive for a given state; 3) a regularization mechanism to improve the generalization performance of the policy as a whole. We consider experiments with both fixed and variable sets of primitives and show that our method allows for primitives to be added or removed during training, or recombined in new ways. Each primitive is represented by a differentiable, parameterized function approximator, such as a neural network.
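The information cost at the heart of the first mechanism can be made concrete. For a diagonal Gaussian encoder output and a unit Gaussian prior (the prior used in the experiments described below), the bottleneck penalty has a closed form. The following is a minimal plain-Python sketch of that quantity; it is our illustration, not the authors' code, and the function name is ours:

```python
import math

def gaussian_kl_to_unit_prior(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ).

    This is the per-primitive "information cost" paid for accessing the
    state, when the encoder outputs a diagonal Gaussian and the prior
    is a unit Gaussian.
    """
    return 0.5 * sum(
        math.exp(lv) + m * m - 1.0 - lv for m, lv in zip(mu, log_var)
    )

# An encoder that ignores the state (mean 0, log-variance 0, i.e. the
# prior itself) pays no information cost ...
print(gaussian_kl_to_unit_prior([0.0, 0.0], [0.0, 0.0]))  # 0.0
# ... while an encoder whose output depends on the state pays more.
print(gaussian_kl_to_unit_prior([1.0, -2.0], [0.0, 0.0]))  # 2.5
```

The penalty grows as the encoder's output distribution moves away from the prior, which is exactly what makes "requesting information about the state" costly in the objective that follows.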
# 3.1 PRIMITIVES WITH AN INFORMATION BOTTLENECK

To encourage each primitive to encode information from a particular part of the state space, we limit the amount of information each primitive can access from the state. In particular, each primitive has an information bottleneck with respect to the input state, preventing it from using all the information from the state.

We define the overall policy as a mixture of primitives,

$$
\pi(a \mid s) = \sum_{k} c_{k} \pi^{k}(a \mid s),
$$

where $\pi^k(a \mid s)$ denotes the $k^{\mathrm{th}}$ primitive and $c_{k} = \delta_{kk^{\prime}}$ for $k^{\prime}\sim p(k^{\prime}\mid s)$. We denote the probability of selecting the $k^{\mathrm{th}}$ primitive as $\alpha_{k}(s)\coloneqq p(k\mid s)$.

Rather than learning an explicit model for $p(k \mid s)$, however, we impose an information-based mechanism for selecting primitives, wherein we limit the amount of information each primitive can contain and select the ones that request the most information about the state. To implement an information bottleneck, we design each of the $K$ primitives to be composed of an encoder $p_{\mathrm{enc}}(z_k \mid s)$ and a decoder $p_{\mathrm{dec}}(a \mid z_k)$, together forming the primitive policy,

$$
\pi_{\theta}^{k}(a \mid s) = \int_{z} p_{\mathrm{enc}}(z_{k} \mid s)\, p_{\mathrm{dec}}(a \mid z_{k})\, \mathrm{d}z_{k}.
$$

The encoder output $z_{k}$ is meant to represent the information about the current state $s$ that an individual primitive $k$ believes is important to access in order to perform well. The decoder takes this encoded information and produces a distribution over the actions $a$. Following the variational information bottleneck objective (Alemi et al., 2016), we penalize the KL divergence of $p_{\mathrm{enc}}(z_k \mid s)$ and a prior $p(z)$,

$$
\mathcal{L}_{k} = \mathrm{D}_{\mathrm{KL}}\left(p_{\mathrm{enc}}(z_{k} \mid s) \,\|\, p(z)\right).
\tag{1}
$$

In other words, a primitive pays an "information cost" proportional to $\mathcal{L}_k$ for accessing the information about the current state.

In the experiments below, we fix the prior to be a unit Gaussian. In the general case, we can learn the prior as well and include its parameters in $\theta$. The information bottleneck encourages each primitive to limit its knowledge about the current state, but it will not prevent multiple primitives from specializing to similar parts of the state space. To mitigate this redundancy, and to make individual primitives focus on different regions of the state space, we introduce an information-based competition mechanism to encourage diversity among the primitives.

# 3.2 COMPETING INFORMATION-CONSTRAINED PRIMITIVES

We can use the information measure from equation 1 to define a selection mechanism for the primitives without having to learn a centralized meta-policy. The intuition is that the information content of an individual primitive encodes its effectiveness in a given state $s$, such that the primitive with the highest value $\mathcal{L}_k$ should be activated in that particular state.

In particular, we set $\alpha_{k} = Z^{-1}\exp(\beta \mathcal{L}_{k})$ to obtain a distribution over $k$ as a function of the information content, activating the primitives with the highest information content. Here, $Z = \sum_{k}\exp(\beta \mathcal{L}_{k})$ is a normalization constant. This mechanism enables competition between primitives, leading them to focus on parts of the state space that they "understand" well and letting others act in other parts.

Trading reward and information. To perform proper credit assignment, the environment reward is distributed to primitives according to their participation in the global decision, i.e. the reward $r_k$ given to the $k^{th}$ primitive is weighted by its selection coefficient, such that $r_k = \alpha_k r$, with $r = \sum_k r_k$.
Hence, a primitive can potentially get a higher reward when deciding to act, but it also pays a higher price for accessing more information about the current state. The information bottleneck and the competition mechanism, when combined with the overall reward maximization objective, will lead to specialization of individual primitives to distinct regions in the state space. That is, each primitive should specialize in a part of the state space that it can reliably associate rewards with. Since the entire ensemble still needs to understand all of the state space for the given task, different primitives need to encode and focus on different parts of the state space.

# 3.3 REGULARIZING PRIMITIVE SELECTION

The objective described above will optimize the expected return while minimizing the information content of individual primitives. This is not sufficient, however, as it might lead to highly unbalanced outcomes: some primitives might be more active initially and learn to become even more active, completely disabling other primitives.

Thus, in addition to minimizing each primitive's absolute information content, we need to normalize their activity with respect to each other. To do so, we penalize their information content in proportion to their activation by adding a regularization term of the form

$$
\mathcal{L}_{\mathrm{reg}} = \sum_{k} \alpha_{k} \mathcal{L}_{k}. \tag{2}
$$

Note that this can be rewritten (see Appendix A) as $\mathcal{L}_{\mathrm{reg}} = -H(\alpha) + \mathrm{LSE}(\mathcal{L}_1,\ldots,\mathcal{L}_K)$, where $H(\alpha)$ is the entropy of $\alpha$, and LSE is the LogSumExp function, $\mathrm{LSE}(x) = \log(\sum_{j}e^{x_{j}})$. Thus, minimizing $\mathcal{L}_{\mathrm{reg}}$ increases the entropy of $\alpha$, leading to a diverse set of primitive selections, in turn ensuring that different combinations of the primitives are used.
Similarly, LSE approximates the maximum of its arguments, $\mathrm{LSE}(x)\approx \max_j x_j$, and therefore penalizes the dominating $\mathcal{L}_k$ terms, thus equalizing their magnitudes.

# 3.4 OBJECTIVE AND ALGORITHM SUMMARY

Our overall objective function consists of 3 terms:

1. The expected return from the standard RL objective, $R(\pi)$, which is distributed to the primitives according to their participation;
2. The individual bottleneck terms, $\mathcal{L}_k$ for $k = 1,\dots,K$, leading the individual primitives to focus on specific parts of the state space;
3. The regularization term applied to the combined model, $\mathcal{L}_{\mathrm{reg}}$.

The overall objective for the $k^{th}$ primitive thus takes the form:

$$
J_{k}(\theta) \equiv \mathbb{E}_{\pi_{\theta}}\left[r_{k}\right] - \beta_{\mathrm{ind}} \mathcal{L}_{k} - \beta_{\mathrm{reg}} \mathcal{L}_{\mathrm{reg}}, \tag{3}
$$

where $\mathbb{E}_{\pi_{\theta}}$ denotes an expectation over the state trajectories generated by the agent's policy, $r_k = \alpha_k r$ is the reward given to the $k$th primitive, and $\beta_{\mathrm{ind}}, \beta_{\mathrm{reg}}$ are the parameters controlling the impact of the respective terms.

Implementation: In our experiments, the encoders $p_{\mathrm{enc}}(z_k \mid s)$ and decoders $p_{\mathrm{dec}}(a \mid z_k)$ (see Fig. 1) are represented by neural networks, the parameters of which we denote by $\theta$. Actions are sampled through each primitive at every step. While our approach is compatible with any RL method, we maximize $J(\theta)$ computed on-policy from the sampled trajectories using a score function estimator (Williams, 1992; Sutton et al., 1999a), specifically A2C (Mnih et al., 2016), unless otherwise noted. Every experimental result reported has been averaged over 5 random seeds. Our model introduces 2 extra hyper-parameters, $\beta_{\mathrm{ind}}$ and $\beta_{\mathrm{reg}}$.
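As a self-contained illustration of how the pieces of Section 3 fit together, the following plain-Python sketch (ours, not the authors' implementation; the cost values are invented and $\beta = 1$) computes the competition weights $\alpha_k$ from hypothetical information costs, splits a reward $r$ into $r_k = \alpha_k r$, and numerically checks the $-H(\alpha) + \mathrm{LSE}$ identity for $\mathcal{L}_{\mathrm{reg}}$:

```python
import math

def softmax(xs):
    m = max(xs)  # shift for numerical stability
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical per-primitive information costs L_k (Eq. 1) for one state.
info_costs = [0.3, 1.2, 0.7]

# Competition (Sec. 3.2): alpha_k proportional to exp(beta * L_k), beta = 1.
alpha = softmax(info_costs)
assert alpha[1] == max(alpha)  # the costliest primitive wins the competition

# Credit assignment: the environment reward is split as r_k = alpha_k * r.
r = 1.0
rewards = [a * r for a in alpha]
assert abs(sum(rewards) - r) < 1e-12

# Regularizer (Eq. 2) and the identity L_reg = -H(alpha) + LSE(L_1, ..., L_K),
# which is exact for beta = 1 (see Appendix A of the paper).
l_reg = sum(a * l for a, l in zip(alpha, info_costs))
entropy = -sum(a * math.log(a) for a in alpha)
lse = math.log(sum(math.exp(l) for l in info_costs))
assert abs(l_reg - (-entropy + lse)) < 1e-12
```

In the actual model, the $\mathcal{L}_k$ values come from the encoders' KL terms and gradients flow through them, but the selection and regularization arithmetic follows this pattern.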
# 4 RELATED WORK

There is a wide variety of hierarchical reinforcement learning approaches (Sutton et al., 1998; Dayan & Hinton, 1993; Dietterich, 2000). One of the most widely applied HRL frameworks is the Options framework (Sutton et al., 1999b). An option can be thought of as an action that extends over multiple timesteps, thus providing the notion of temporal abstraction or subroutines in an MDP. Each option has its own policy (which is followed if the option is selected) and a termination condition (to stop the execution of that option). Many strategies have been proposed for discovering options using task-specific hierarchies, such as pre-defined sub-goals (Heess et al., 2017), hand-designed features (Florensa et al., 2017), or diversity-promoting priors (Daniel et al., 2012; Eysenbach et al., 2018). These approaches do not generalize well to new tasks. Bacon et al. (2017) proposed an approach to learn options in an end-to-end manner by parameterizing the intra-option policy as well as the policy and termination condition for all the options. Eigen-options (Machado et al., 2017) use the eigenvalues of the Laplacian (for the transition graph induced by the MDP) to derive an intrinsic reward for discovering options as well as learning an intra-option policy.
Our work is also related to the Neural Module Network family of architectures (Andreas et al., 2017; Johnson et al., 2017; Rosenbaum et al., 2019) where the idea is to learn modules that can perform some useful computation like solving a subtask and a controller that can learn to combine these modules for solving novel tasks. More recently, Wu et al. (2019) proposed a framework for using diverse suboptimal world models to learn primitive policies. The key difference between our approach and all the works mentioned above is that we learn functional primitives without requiring any explicit high-level meta-controller or master policy. + +# 5 EXPERIMENTAL RESULTS + +In this section, we briefly outline the tasks that we used to evaluate our proposed method and direct the reader to the appendix for the complete details of each task along with the hyperparameters used for the model. We designed experiments to address the following questions: a) Learning primitives – Can an ensemble of primitives be learned over a distribution of tasks? b) Transfer Learning using primitives – Can the learned primitives be transferred to unseen/unsolvable sparse environments? c) Comparison to centralized methods – How does our method compare to approaches where the primitives are trained using an explicit meta-controller, in a centralized way? + +![](images/95e3de79f60c9f5535a0586ae62f4013a3573fbbe67aa693ce075b462da700c9.jpg) +Figure 2: Snapshots of motions learned by the policy. Top: Reference motion clip. Middle: Simulated character imitating the reference motion. Bottom: Probability of selecting each primitive. + +Baselines. We compare our proposed method to the following baselines: a) Option Critic (Bacon et al., 2017) - We extended the author's implementation of the Option Critic architecture and experimented with multiple variations in terms of hyperparameters and state/goal encoding. 
None of these yielded reasonable performance in partially observed tasks, so we omit it from the results. b) MLSH (Meta-Learning Shared Hierarchy) (Frans et al., 2017) - This method uses meta-learning to learn sub-policies that are shared across tasks, along with learning a task-specific high-level master. It also requires a phase-wise training schedule between the master and the sub-policies to stabilize training. We use the MLSH implementation provided by the authors. c) Transfer A2C: In this method, we first learn a single policy on one task and then transfer the policy to another task, followed by fine-tuning on the second task.

# 5.1 LEARNING ENSEMBLES OF FUNCTIONAL PRIMITIVES

We evaluate our approach on a number of RL environments to demonstrate that we can indeed learn sets of primitive policies focusing on different aspects of a task and collectively solving it.

![](images/8acce273b19b9fe9ef7a3e8d9c7b0402e79dc4010e44fafc89e9e254b0f17ff5.jpg)
Figure 3: Convergence of four primitives on Four Room Maze: Left: We trained four primitives on the Four Room Maze task, where the goal was sampled from one of two fixed goals. We see that the proposed algorithm is able to learn four primitives. Right: We transfer the learned primitives to the scenario where the goal is sampled from one of four possible goals. The checkpointed model is run on 100 different episodes (after a fixed number of steps/updates), and the normalized frequency of activation of the different primitives is plotted.

![](images/2ceabe60ae1503d53c4a7e411a9afc2670febd88ce7f84f5a5d11c829b5a0573.jpg)

Four Room Maze: We consider the Four-rooms gridworld environment (Sutton et al., 1999c) where the agent has to navigate its way through a grid of four interconnected rooms to reach a goal position within the grid. We consider the scenario where the starting position of the agent is fixed, but the goal is sampled from a discrete set. Fig.
3 shows that the proposed algorithm can learn four primitives. Refer to Appendix F for more details.

Motion Imitation. To evaluate the proposed method in terms of scalability, we present a series of tasks from the motion imitation domain, showing that we can use a set of distinct primitives for imitation learning. In these tasks, we train a simulated 2D biped character to perform a variety of highly dynamic skills by imitating motion capture clips recorded from human actors. Each mocap clip is represented by a target state trajectory $\tau^{*} = \{s_{0}^{*}, s_{1}^{*}, \dots, s_{T}^{*}\}$, where $s_{t}^{*}$ denotes the target state at timestep $t$. The input to the policy is augmented with a goal $g_{t} = \{s_{t+1}^{*}, s_{t+2}^{*}\}$, which specifies the target states for the next two timesteps. Both the state $s_{t}$ and goal $g_{t}$ are then processed by the encoder $p_{\mathrm{enc}}(z_{t} \mid s_{t}, g_{t})$. The repertoire of skills consists of 8 clips depicting different types of walks, runs, jumps, and flips. The motion imitation approach closely follows Peng et al. (2018). To analyze the specialization of the various primitives, we computed 2D embeddings of the states and goals in which each primitive is active, and of the actions proposed by the primitives. Fig. 4 illustrates the embeddings computed with t-SNE (van der Maaten & Hinton, 2008). The embeddings show distinct clusters for the primitives, suggesting a degree of specialization of each primitive to certain states, goals, and actions.

# 5.2 MULTI-TASK TRAINING

We evaluate our model in a partially-observable 2D multi-task environment called Minigrid, similar to the one introduced in (Chevalier-Boisvert et al., 2018). The environment is a two-dimensional grid with a single agent, impassable walls, and many objects scattered in the environment. The agent is provided with a natural language string that specifies the task that the agent needs to complete.
The setup is partially observable, and the agent only gets a small, egocentric view of the grid (along with the natural language task description). We consider three tasks here: the Pickup task (A), where the agent is required to pick up an object specified by the goal string; the Unlock task (B), where the agent needs to unlock the door (there could be multiple keys in the environment, and the agent needs to use the key which matches the color of the door); and the UnlockPickup task (C), where the agent first needs to unlock a door that leads to another room. In this room, the agent needs to find and pick up the object specified by the goal string. Additional implementation details of the environment are provided in appendix D. Details on the agent model can be found in appendix D.3.

![](images/46782985ac7f3e41cd84d9d5ae24ae225267cc819cc92db4e2e88265c9ff79bb.jpg)
Figure 4: Embeddings visualizing the states (S) and goals (G) in which each primitive is active, and the actions (A) proposed by the primitives for the motion imitation tasks. A total of four primitives are trained. The primitives produce distinct clusters.

![](images/f1232b734741fcefe4105a71ab5af786894430d949ebd53a7ca8a6a016cebf32.jpg)

![](images/9c88b556643402ca9b72b857299819c17da220b1a78afb0e50969e196eda0ea5.jpg)

We train agents with varying numbers of primitives on various tasks – concurrently, as well as in transfer settings. The different experiments are summarized in Figs. 5 and 7. An advantage of the multi-task setting is that it allows for quantitative interpretability as to when and which primitives are being used. The results indicate that a system composed of multiple primitives generalizes more easily to a new task, as compared to a single policy. We further demonstrate that several primitives can be combined dynamically and that the individual primitives respond to stimuli from new environments when trained on related environments.
# 5.3 DO LEARNED PRIMITIVES HELP IN TRANSFER LEARNING?

We evaluate our approach in settings where adaptation to changes in the task is vital. The argument in favor of modularity is that it enables better knowledge transfer between related tasks. Naturally, the transfer is easier when the tasks are closely related, as the model will only need to learn how to compose the already-learned primitives. In general, however, it is difficult to determine how closely related two tasks are, and the inductive bias of modularity could even be harmful if the two tasks are very different. In such cases, we could add new primitives (which would need to be learned) and still obtain a sample-efficient transfer, as some part of the task structure would already have been captured by the pretrained primitives. This approach can be extended towards adding primitives during training, providing a seamless way to combine knowledge about different tasks to solve more complex tasks. We investigate here the transfer properties of a primitive trained in one environment and transferred to a different one. Results are shown in Fig. 5.

![](images/16daf9183af0e8c1a7e023841870597a30cbb87f69be8bb8cc04b70d4dcb460f.jpg)
Figure 5: Multitask training. Each panel corresponds to a different training setup, where different tasks are denoted A, B, C, ..., and a rectangle with $n$ circles corresponds to an agent composed of $n$ primitives trained on the respective tasks. Top row: activation of primitives for agents trained on single tasks. Bottom row: Retrain: Two primitives are trained on task A and transferred to task B. The results (success rates) indicate that the multi-primitive model is substantially more sample efficient than the baseline (transfer A2C). Copy and Combine: More primitives are added to the model over time in a plug-and-play fashion (two primitives are trained on task A; the model is extended with a copy of the two primitives; the resulting four-primitive model is trained on task B). This is more sample efficient than other strong baselines, such as (Frans et al., 2017; Bacon et al., 2017). Zero-Shot Generalization: A set of primitives is trained on task C, and zero-shot generalization to tasks A and B is evaluated. The primitives learn a form of spatial decomposition which allows them to be active in both target tasks, A and B. The checkpointed model is run on 100 different episodes, and the normalized frequency of activation of the different primitives is plotted.

Continuous control for ant maze. We evaluate the transfer performance of pretrained primitives on the cross maze environment (Haarnoja et al., 2018). Here, a quadrupedal ant robot must walk to different goals along different paths (see Appendix G for details). The goal is randomly chosen from a set of available goals at the start of each episode. We pretrain a policy (see model details in Appendix G.1) with a motion reward in an environment which does not have any walls (similar to Haarnoja et al. (2018)), and then transfer the policy to the second task where the ant has to navigate to a random goal chosen from one of the 3 (or 10) available goal options. For our model, we make four copies of the pretrained policies and then finetune the model using the pretrained policies as primitives. We compare to both MLSH (Frans et al., 2017) and option-critic (Bacon et al., 2017). All these baselines have been pretrained in the same manner. As evident from Fig. 7, our method outperforms the other approaches. The fact that the initial policies successfully adapt to the transfer environment underlines the flexibility of our approach.

Zero Shot Generalization: The purpose of this experiment is to show that the model consisting of multiple primitives is somewhat able to decompose task C into its subtasks, A and B.
The better this decomposition, the better the model should transfer to the individual subtasks. In order to test this, we trained a set of 4 primitives on task C and then evaluated them (without finetuning) on tasks A and B. We note that the ensemble is able to solve the transfer tasks, A and B, successfully $72\%$ of the time, while a monolithic policy's success rate is $38\%$. This further shows that the primitives learn meaningful decompositions.

Continual Learning: 4 Rooms Scenario. We consider a continual learning scenario where we train two primitives in a two-goal setup, in which the goal position is selected randomly from one of the two positions at the start of the episode. The primitives are then transferred (and finetuned) on four-goal

![](images/2d3a188b29334a9e746928c8d0469e49b7616d5ea5a75ac0df0fa1806e1ed267.jpg)
Figure 6: Continual Learning Scenario: The plot on the left shows that the primitives remain activated. The solid green line shows the boundary between the tasks. The plot on the right shows the number of samples required by our model and the transfer baseline model across different tasks. We observe that the proposed model takes fewer steps than the baseline (an A2C policy trained in a similar way), and the gap in terms of the number of samples keeps increasing as tasks become harder. The checkpointed model is run on 100 different episodes (after a fixed number of steps/updates), and the normalized frequency of activation of the different primitives is plotted.

![](images/7fa0f94ecbabed43999e9a10b6c1fac00f496d7647fcdcd74d161582e8a4e550.jpg)

![](images/5673884c704831bf03ae09b44a97758edec03cebdabef9cf4507025061155554.jpg)
Figure 7: Left: Multitask setup, where we show that we are able to train eight primitives when training on a mixture of four tasks in the Minigrid environment. Here, the $x$-axis denotes the number of frames (timesteps). Right: Success rates of the different methods on the Ant Maze tasks.
Success rate is measured as the number of times the ant is able to reach the goal (based on 500 sampled trajectories). + +![](images/2754d67894b7f9ef4c2bf010bc6bae7c6111603d45157c4149d84f2b006fb653.jpg) + +
| Method | 3 goals | 10 goals |
| --- | --- | --- |
| Flat Policy (PPO) | 11 ± 5% | 4 ± 2% |
| Option critic | 18 ± 10% | 7 ± 3% |
| MLSH | 32 ± 3% | 5 ± 3% |
| Explicit high level policy | 21 ± 5% | 11 ± 2% |
| Proposed method | 68 ± 3% | 40 ± 3% |
positions, then transferred (and finetuned) on eight-goal positions. The results are shown in Fig. 6. The proposed method achieves better sample efficiency compared to training a single monolithic policy.

# 6 SUMMARY AND DISCUSSION

We present a framework for learning an ensemble of primitive policies that can collectively solve tasks without learning an explicit master policy. Rather than relying on a centralized, learned meta-controller, the selection of active primitives is implemented through an information-theoretic mechanism. The learned primitives can be flexibly recombined to solve more complex tasks. Our experiments show that, on a partially observed "Minigrid" task and a continuous control "Ant Maze" walking task, our method can enable better transfer than flat policies and hierarchical RL baselines, including the Meta-learning Shared Hierarchies model and the Option-Critic framework. On Minigrid, we show how primitives trained with our method can transfer much more successfully to new tasks. On the Ant Maze, we show that primitives initialized from a pretrained walking controller can learn to walk to different goals in a stochastic, multi-modal environment with nearly twice the success rate of a more conventional hierarchical RL approach, which uses the same pretraining but a centralized high-level policy. The proposed framework could be very attractive for continual learning settings, where one could add more primitive policies over time. Thereby, the already learned primitives would keep their focus on particular aspects of the task, and newly added ones could specialize on novel aspects.

# 7 ACKNOWLEDGEMENTS

The authors acknowledge the important role played by their colleagues at Mila throughout the duration of this work. AG would like to thank Greg Wayne, Mike Mozer, Matthew Botvinick, and Bernhard Scholkopf for very useful discussions.
The authors would also like to thank Nasim Rahaman, Samarth Sinha, Nithin Vasisth, Hugo Larochelle, Jordan Hoffman, Ankesh Anand, and Michael Chang for feedback on the draft. The authors are grateful to NSERC, CIFAR, Google, Samsung, Nuance, IBM, Canada Research Chairs, the Canada Graduate Scholarship Program, and Nvidia for funding, and to Compute Canada for computing resources. We are very grateful to Google for the Google Cloud credits used in this project.

# REFERENCES

Alessandro Achille and Stefano Soatto. Information dropout: Learning optimal representations through noise. CoRR, abs/1611.01353, 2016. URL http://arxiv.org/abs/1611.01353.

Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, and Kevin Murphy. Deep variational information bottleneck. CoRR, abs/1612.00410, 2016. URL http://arxiv.org/abs/1612.00410.

Jacob Andreas, Dan Klein, and Sergey Levine. Modular multitask reinforcement learning with policy sketches. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pp. 166-175. JMLR.org, 2017.

Pierre-Luc Bacon, Jean Harb, and Doina Precup. The option-critic architecture. In AAAI, pp. 1726-1734, 2017.

Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym, 2016.

Maxime Chevalier-Boisvert, Lucas Willems, and Suman Pal. Minimalistic gridworld environment for OpenAI Gym. https://github.com/maximecb/gym-minigrid, 2018.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

Christian Daniel, Gerhard Neumann, and Jan Peters. Hierarchical relative entropy policy search. In Artificial Intelligence and Statistics, pp. 273-281, 2012.

Peter Dayan and Geoffrey E. Hinton. Feudal reinforcement learning. In Advances in Neural Information Processing Systems, pp. 271-278, 1993.
Thomas G. Dietterich. Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research, 13:227-303, 2000.

Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine. Diversity is all you need: Learning skills without a reward function. arXiv preprint arXiv:1802.06070, 2018.

Carlos Florensa, Yan Duan, and Pieter Abbeel. Stochastic neural networks for hierarchical reinforcement learning. arXiv preprint arXiv:1704.03012, 2017.

Kevin Frans, Jonathan Ho, Xi Chen, Pieter Abbeel, and John Schulman. Meta learning shared hierarchies. arXiv preprint arXiv:1710.09767, 2017.

Tuomas Haarnoja, Kristian Hartikainen, Pieter Abbeel, and Sergey Levine. Latent space policies for hierarchical reinforcement learning. arXiv preprint arXiv:1804.02808, 2018.

Nicolas Heess, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, Ali Eslami, Martin Riedmiller, et al. Emergence of locomotion behaviours in rich environments. arXiv preprint arXiv:1707.02286, 2017.

Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Judy Hoffman, Li Fei-Fei, C. Lawrence Zitnick, and Ross Girshick. Inferring and executing programs for visual reasoning. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2989-2998, 2017.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Tejas D. Kulkarni, Karthik Narasimhan, Ardavan Saeedi, and Josh Tenenbaum. Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. In Advances in Neural Information Processing Systems, pp. 3675-3683, 2016.

Libin Liu and Jessica Hodgins. Learning to schedule control fragments for physics-based characters using deep Q-learning. ACM Transactions on Graphics, 36(3), 2017.
Marlos C. Machado, Marc G. Bellemare, and Michael Bowling. A Laplacian framework for option discovery in reinforcement learning. arXiv preprint arXiv:1703.00956, 2017.

Josh Merel, Arun Ahuja, Vu Pham, Saran Tunyasuvunakool, Siqi Liu, Dhruva Tirumala, Nicolas Heess, and Greg Wayne. Hierarchical visuomotor control of humanoids. In International Conference on Learning Representations, 2019a. URL https://openreview.net/forum?id=BJfYvo09Y7.

Josh Merel, Leonard Hasenclever, Alexandre Galashov, Arun Ahuja, Vu Pham, Greg Wayne, Yee Whye Teh, and Nicolas Heess. Neural probabilistic motor primitives for humanoid control. In International Conference on Learning Representations, 2019b. URL https://openreview.net/forum?id=BJ16TjRcY7.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015.

Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pp. 1928-1937, 2016.

Giambattista Parascandolo, Niki Kilbertus, Mateo Rojas-Carulla, and Bernhard Schölkopf. Learning independent causal mechanisms. arXiv preprint arXiv:1712.00961, 2017.

Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. In NIPS Autodiff Workshop, 2017.

Xue Bin Peng, Glen Berseth, KangKang Yin, and Michiel van de Panne. DeepLoco: Dynamic locomotion skills using hierarchical deep reinforcement learning. ACM Trans. Graph., 36(4):41:1-41:13, July 2017. ISSN 0730-0301. doi: 10.1145/3072959.3073602. URL http://doi.acm.org/10.1145/3072959.3073602.
Xue Bin Peng, Pieter Abbeel, Sergey Levine, and Michiel van de Panne. DeepMimic: Example-guided deep reinforcement learning of physics-based character skills. ACM Trans. Graph., 37(4):143:1-143:14, July 2018. ISSN 0730-0301. doi: 10.1145/3197517.3201311. URL http://doi.acm.org/10.1145/3197517.3201311.

Clemens Rosenbaum, Ignacio Cases, Matthew Riemer, and Tim Klinger. Routing networks and the challenges of modular and compositional computation. arXiv preprint arXiv:1904.12774, 2019.

Alexander Sasha Vezhnevets, Simon Osindero, Tom Schaul, Nicolas Heess, Max Jaderberg, David Silver, and Koray Kavukcuoglu. FeUdal networks for hierarchical reinforcement learning. arXiv preprint arXiv:1703.01161, 2017.

John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.

Richard S. Sutton, Andrew G. Barto, et al. Reinforcement learning: An introduction. MIT Press, 1998.

Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In Proceedings of the 12th International Conference on Neural Information Processing Systems, NIPS'99, pp. 1057-1063, Cambridge, MA, USA, 1999a. MIT Press. URL http://dl.acm.org/citation.cfm?id=3009657.3009806.

Richard S. Sutton, Doina Precup, and Satinder Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1-2):181-211, 1999b.

Richard S. Sutton, Doina Precup, and Satinder Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1-2):181-211, 1999c.

Tijmen Tieleman and Geoffrey Hinton.
Lecture 6.5-RMSProp, COURSERA: Neural networks for machine learning. University of Toronto, Technical Report, 2012.

Naftali Tishby, Fernando C. N. Pereira, and William Bialek. The information bottleneck method. CoRR, physics/0004057, 2000. URL http://arxiv.org/abs/physics/0004057.

Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026-5033. IEEE, 2012.

Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579-2605, 2008. URL http://www.jmlr.org/papers/v9/vandermaaten08a.html.

Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992. ISSN 0885-6125. doi: 10.1007/BF00992696. URL https://doi.org/10.1007/BF00992696.

Bohan Wu, Jayesh K. Gupta, and Mykel J. Kochenderfer. Model primitive hierarchical lifelong reinforcement learning. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, pp. 34-42. International Foundation for Autonomous Agents and Multiagent Systems, 2019.

Yuhuai Wu, Elman Mansimov, Roger B. Grosse, Shun Liao, and Jimmy Ba. Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation. In Advances in Neural Information Processing Systems, pp. 5279-5288, 2017.
# A INTERPRETATION OF THE REGULARIZATION TERM

The regularization term is given by

$$
\mathcal{L}_{\mathrm{reg}} = \sum_{k} \alpha_{k} \mathcal{L}_{k},
$$

where

$$
\alpha_{k} = e^{\mathcal{L}_{k}} \Big/ \sum_{j} e^{\mathcal{L}_{j}},
$$

and thus

$$
\log \alpha_{k} = \mathcal{L}_{k} - \log \sum_{j} e^{\mathcal{L}_{j}},
$$

or

$$
\mathcal{L}_{k} = \log \alpha_{k} + \operatorname{LSE}\left(\mathcal{L}_{1}, \dots, \mathcal{L}_{K}\right),
$$

where $\operatorname{LSE}(\mathcal{L}_1, \dots, \mathcal{L}_K) = \log \sum_j e^{\mathcal{L}_j}$ is independent of $k$.

Plugging this in, and using $\sum_k \alpha_{k} = 1$, we get

$$
\mathcal{L}_{\mathrm{reg}} = \sum_{k} \alpha_{k} \log \alpha_{k} + \operatorname{LSE}\left(\mathcal{L}_{1}, \dots, \mathcal{L}_{K}\right) = -H(\alpha) + \operatorname{LSE}\left(\mathcal{L}_{1}, \dots, \mathcal{L}_{K}\right).
$$

**Information-theoretic interpretation** Notably, $\mathcal{L}_{\mathrm{reg}}$ is also an upper bound on the KL divergence between a mixture of the currently active primitives and the prior,

$$
\mathcal{L}_{\mathrm{reg}} \geq \mathrm{D}_{\mathrm{KL}}\left(\sum_{k} \alpha_{k}\, p_{\mathrm{enc}}\left(Z_{k} \mid S\right) \,\Big\|\, \mathcal{N}(0, 1)\right),
$$

and can thus be regarded as a term limiting the information content of the mixture of all active primitives. This follows from the convexity of the KL divergence, which directly gives

$$
\mathrm{D}_{\mathrm{KL}}\left(\sum_{k} \alpha_{k} f_{k} \,\Big\|\, g\right) \leq \sum_{k} \alpha_{k}\, \mathrm{D}_{\mathrm{KL}}\left(f_{k} \,\|\, g\right).
$$

# B ADDITIONAL RESULTS

# B.1 2D BANDITS ENVIRONMENT

In order to test whether our approach can learn distinct primitives, we use the 2D moving bandits task (introduced in Frans et al. (2017)). In this task, the agent is placed in a 2D world and is shown the position of two randomly placed points.
One of these points is the goal point, but the agent does not know which. We use the sparse reward setup, where the agent receives a reward of 1 if it is within a certain distance of the goal point and 0 at all other times. Each episode lasts for 50 steps, and to get the reward, the learning agent must reach the vicinity of the goal point within those 50 steps. The agent's action space consists of 5 actions: moving in one of the four cardinal directions (up, down, left, right) and staying still.

# B.1.1 RESULTS FOR 2D BANDITS

We want to answer the following questions:

1. Can our proposed approach learn primitives which remain active throughout training?
2. Can our proposed approach learn primitives which can solve the task?

We train two primitives on the 2D bandits task and evaluate the relative frequency of activation of the primitives throughout training. It is important that both primitives remain active: if only one primitive acts most of the time, the effect is the same as training a flat policy. We evaluate the effectiveness of our model by comparing its success rate with a flat A2C baseline. Fig. 8 shows that not only do both primitives remain active throughout training, but our approach also outperforms the baseline.

![](images/5555d51f6aa94fe76c5162062f43482cdc4ceb68d29c95809e441da66d49df68.jpg)
Figure 8: Performance on the 2D bandits task. Left: The comparison of our model (blue curve, decentralized policy) with the baseline (red curve, flat policy) in terms of success rate shows the effectiveness of our proposed approach. Right: Relative frequency of activation of the primitives (normalized to sum up to 1). Both primitives are utilized throughout the training.
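The 2D moving bandits setup described above can be sketched as a minimal environment. This is an illustrative reconstruction, not the authors' code; the grid bounds, step size, and reward radius are assumed values:

```python
import numpy as np

class MovingBandits2D:
    """Minimal sketch of the 2D moving bandits task: the agent observes its
    own (x, y) position plus two candidate points; only one of them (unknown
    to the agent) is the real goal. Step size and reward radius are assumed."""

    # up, down, left, right, stay
    ACTIONS = np.array([[0, 1], [0, -1], [-1, 0], [1, 0], [0, 0]], dtype=float)

    def __init__(self, step_size=0.1, reward_radius=0.25, horizon=50, seed=0):
        self.step_size = step_size
        self.reward_radius = reward_radius
        self.horizon = horizon
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.agent = np.zeros(2)                       # fixed spawn position
        self.points = self.rng.uniform(-1, 1, (2, 2))  # two random candidate points
        self.goal_idx = self.rng.integers(2)           # hidden: which point is the goal
        self.t = 0
        return self._obs()

    def _obs(self):
        # 6-dimensional observation: agent (x, y) + both candidate points
        return np.concatenate([self.agent, self.points.ravel()])

    def step(self, action):
        self.agent = self.agent + self.step_size * self.ACTIONS[action]
        self.t += 1
        # sparse reward: 1 only while within reward_radius of the true goal
        dist = np.linalg.norm(self.agent - self.points[self.goal_idx])
        reward = 1.0 if dist < self.reward_radius else 0.0
        done = self.t >= self.horizon
        return self._obs(), reward, done
```

A policy that walks straight toward the correct point collects reward once inside the radius; a flat policy must infer the goal's identity from reward feedback alone, which is what makes the task a useful probe for distinct primitives.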
![](images/5534bd5060ae064d4c8a371343f38c68084e5c8885908899b3e0f96179c40b6c.jpg)

# B.2 FOUR-ROOMS ENVIRONMENT

We consider the Four-rooms gridworld environment (Sutton et al., 1999c), where the agent has to navigate through a grid of four interconnected rooms to reach a goal position. The agent can perform one of the following four actions: move up, move down, move left, move right. The environment is stochastic: the agent's selected action is executed with probability 2/3, and with probability 1/3 the chosen action is ignored and one of the remaining 3 actions, selected uniformly at random (i.e., each with probability 1/9), is executed instead.

# B.2.1 TASK DISTRIBUTION FOR THE FOUR-ROOM ENVIRONMENT

In the Four-room environment, the agent has to navigate to a goal position which is randomly selected from a set of goal positions. We can use the size of this set of goal positions to define a curriculum of task distributions. Since the environment does not provide any information about the goal state, the larger the goal set, the harder the task, as the goal could now be any element of a larger set. The choice of the set of goal states and the choice of curriculum do not affect the environment dynamics. Specifically, we consider three tasks, Fourroom-v0, Fourroom-v1 and Fourroom-v2, with sets of 2, 4 and 8 goal positions respectively. The set of goal positions for each task is fixed but not known to the learning agent. We expect, and empirically verify, that the Fourroom-v0 environment requires the fewest samples to be learned, followed by the Fourroom-v1 and the Fourroom-v2 environments (Fig. 6 in the paper).

# B.2.2 RESULTS FOR FOUR-ROOMS ENVIRONMENT

We want to answer the following questions:

1. Can our proposed approach learn primitives that remain active when training the agent over a sequence of tasks?
2.
Can our proposed approach be used to improve the sample efficiency of the agent over a sequence of tasks?

To answer these questions, we consider two setups. In the baseline setup, we train a flat A2C policy on Fourrooms-v0 until it achieves a $100\%$ success rate during evaluation. Then we transfer this policy to Fourrooms-v1 and continue to train until it achieves a $100\%$ success rate during evaluation on Fourrooms-v1. We transfer the policy one more time to Fourrooms-v2 and continue to train it until it reaches a $60\%$ success rate. In the last task (Fourrooms-v2), we do not use $100\%$ as the threshold because the models do not achieve a $100\%$ success rate even after training for 10M frames. We use $60\%$ because the baseline models generally converge around this value.

In the second setup, we repeat this exercise of training on one task and transferring to the next with our proposed model. Note that even though our proposed model converges to a value higher than $60\%$ on the last task (Fourrooms-v2), we compare the number of samples required to reach a $60\%$ success rate to provide a fair comparison with the baseline.

# C IMPLEMENTATION DETAILS

In this section, we describe the implementation details that are common to all the models. Other task-specific details are covered in the respective task sections.

1. All the models (proposed as well as the baselines) are implemented in PyTorch 1.1 (Paszke et al., 2017) unless stated otherwise.
2. For Meta-Learning Shared Hierarchies (Frans et al., 2017) and Option-Critic (Bacon et al., 2017), we adapted the authors' implementations for our environments.
3. During evaluation, we use 10 processes in parallel to run 500 episodes and compute the percentage of times the agent solves the task within the prescribed time limit. This metric is referred to as the "success rate".
4. The default time limit is 500 steps for all the tasks unless specified otherwise.
5.
All the feedforward networks are initialized with the orthogonal initialization, where the input tensor is filled with a (semi-)orthogonal matrix.
6. For all the embedding layers, the weights are initialized using the unit-Gaussian distribution.
7. The weights and biases for all the GRU models are initialized using the uniform distribution $U(-\sqrt{k}, \sqrt{k})$ where $k = \frac{1}{\text{hidden\_size}}$.
8. During training, we perform 64 rollouts in parallel to collect 5-step trajectories.
9. The $\beta_{ind}$ and $\beta_{reg}$ parameters are both selected from the set $\{0.001, 0.005, 0.009\}$ by performing validation.

In section D.4.2, we explain all the components of the model architecture along with the implementation details in the context of the MiniGrid environment. For the subsequent environments, we describe only those components and implementation details that differ from their counterparts in the MiniGrid setup.

# D MINIGRID ENVIRONMENT

We use the MiniGrid environment (Chevalier-Boisvert et al., 2018), an open-source gridworld environment package. It provides a family of customizable reinforcement learning environments that are compatible with the OpenAI Gym framework (Brockman et al., 2016). Since the environments can be easily extended and modified, it is straightforward to control the complexity of the task (e.g., the size of the grid, the number of rooms, or the number of objects in the grid). Such flexibility is very useful when experimenting with curriculum learning or testing for generalization.

# D.1 THE WORLD

In MiniGrid, the world (the environment for the learning agent) is a rectangular grid of size, say, $M \times N$. Each tile in the grid contains either zero or one object. The possible object types are wall, floor, lava, door, key, ball, box and goal.
Each object has an associated string (which denotes the object type) and an associated discrete color (red, green, blue, purple, yellow or grey). By default, walls are always grey and goal squares are always green. Certain objects have special effects. For example, a key can unlock a door of the same color.

# D.1.1 REWARD FUNCTION

We consider the sparse reward setup, where the agent gets a reward of 1 only if it completes the task and 0 at all other time steps. We also apply a time limit of 500 steps on all the tasks, i.e., the agent must complete the task within 500 steps. A task is terminated either when the agent solves the task or when the time limit is reached, whichever happens first.

# D.1.2 ACTION SPACE

The agent can perform one of the following seven actions per timestep: turn left, turn right, move forward, pick up an object, drop the object being carried, toggle, done (optional action).

The agent can use the turn left and turn right actions to rotate and face one of the 4 possible directions (north, south, east, west). The move forward action moves the agent from its current tile onto the tile it is facing, provided there is nothing on that tile or the tile contains an open door. The toggle action enables the agent to interact with other objects in the world. For example, the agent can use the toggle action to open a door if it is right in front of the door and carries the key of the matching color.

# D.1.3 OBSERVATION SPACE

The MiniGrid environment provides partial and egocentric observations. For all our experiments, the agent sees a square of $4 \times 4$ tiles in the direction it is facing. The view includes the tile on which the agent is standing. The observations are provided as a tensor of shape $4 \times 4 \times 3$. Note, however, that this tensor does not represent RGB images. The first two dimensions index the tiles in the view, and the third dimension holds three integer values.
The first integer value describes the type of the object, the second describes the color of the object, and the third describes whether a door is open or closed. The benefit of this encoding over an RGB encoding is that it is more space-efficient and enables faster training. For human viewing, a fully observable RGB image view of the environments is also provided, and we use that view for the examples in the paper.

Additionally, the environment provides a natural language description of the goal. An example goal description is: "Unlock the door and pick up the red ball". The learning agent and the environment use a shared vocabulary where different words are assigned numbers, and the environment provides a number-encoded goal description along with each observation. Since different instructions can have different lengths, the environment pads the goal description with $\langle unk \rangle$ tokens to make the sequence length uniform. When encoding the instruction, the agent ignores the padded sub-sequence.

# D.2 TASKS IN MINIGRID ENVIRONMENT

![](images/10a0726eb344713e12befa8d9c0b2086329c9de809d122c21c363975cc9213d8.jpg)
Figure 9: RGB view of the Pickup environment.

![](images/8a4930dc345c6c3a8e3833d69c5ae2859cd8467cbceb22fc9c5e270103393485.jpg)
Figure 10: RGB view of the Unlock environment.

![](images/921036b6740777a7d887fb7977cedabc0da19e80b28b0c65e7d02146b4b8e188.jpg)
Figure 11: RGB view of the UnlockPickup environment.

We consider the following tasks in the MiniGrid environment:

1. Pickup: In the Pickup task, the agent spawns at an arbitrary position in an $8 \times 8$ grid (Fig. 9). It is provided with a natural language goal description of the form "go pickup a yellow box". The agent has to navigate to the object referred to in the goal description and pick it up.
2.
Unlock: In the Unlock task, the agent spawns at an arbitrary position in a two-room grid environment. Each room is an $8 \times 8$ square (Fig. 10). It is provided with a natural language goal description of the form "open the door". The agent has to find the key whose color matches the door, navigate to that key, and use it to open the door.
3. UnlockPickup: This task is a union of the Unlock and the Pickup tasks. The agent spawns at an arbitrary position in a two-room grid environment. Each room is an $8 \times 8$ square (Fig. 11). It is provided with a natural language goal description of the form "open the door and pick up the yellow box". The agent has to find the key whose color matches the door, navigate to that key, use it to open the door, enter the other room, and pick up the object mentioned in the goal description.

# D.3 MODEL ARCHITECTURE

# D.3.1 TRAINING SETUP

Consider an agent training on any task in the MiniGrid suite of environments. At the beginning of an episode, the learning agent spawns at a random position. At each step, the environment provides observations in two modalities: a $4 \times 4 \times 3$ tensor $x_{t}$ (an egocentric view of the state of the environment) and a variable-length goal description $g$. We describe the design of the learning agent in terms of an encoder-decoder architecture.

# D.3.2 ENCODER ARCHITECTURE

The agent's encoder network consists of two models: a CNN+GRU based observation encoder and a GRU (Cho et al., 2014) based goal encoder.

# Observation Encoder:

It is a three-layer CNN with output channel sizes 16, 16 and 32 respectively (with ReLU layers in between) and kernel size $2 \times 2$ for all the layers. The output of the CNN is flattened and fed to a GRU model (referred to as the observation-rnn) with a 128-dimensional hidden state. The output from the observation-rnn represents the encoding of the observation.
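The observation encoder just described can be sketched in PyTorch. The channel sizes (16, 16, 32), $2 \times 2$ kernels, and 128-dimensional GRU hidden state follow the text; the strides, the use of `GRUCell`, and the batching convention are assumptions of this sketch:

```python
import torch
import torch.nn as nn

class ObservationEncoder(nn.Module):
    """Sketch of the CNN+GRU observation encoder: three 2x2 convolutions
    with ReLU, flatten, then a GRU cell with a 128-d hidden state."""

    def __init__(self, hidden_size=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=2), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=2), nn.ReLU(),
        )
        # a 4x4 input shrinks 4 -> 3 -> 2 -> 1 under three 2x2 convs,
        # leaving 32 * 1 * 1 = 32 features for the recurrent step
        self.rnn = nn.GRUCell(32, hidden_size)

    def forward(self, obs, h):
        # obs: (batch, 3, 4, 4) observation tensor; h: (batch, 128) hidden state
        feats = self.cnn(obs).flatten(start_dim=1)  # (batch, 32)
        return self.rnn(feats, h)  # new hidden state = observation encoding

enc = ObservationEncoder()
obs = torch.zeros(1, 3, 4, 4)  # one 4x4x3 MiniGrid observation, channels first
h = torch.zeros(1, 128)
h = enc(obs, h)
```

The recurrent hidden state doubles as the observation encoding, which is why the decoder described later can consume "the current hidden state of the observation-rnn" directly.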
# Goal Encoder:

It comprises an embedding layer followed by a unidirectional GRU model. The dimension of the embedding layer and of the hidden and output layers of the GRU model are all set to 128.

The concatenated output of the observation encoder and the goal encoder is the output of the encoder.

# D.3.3 DECODER

The decoder network comprises the action network and the critic network, both of which are implemented as feedforward networks. We now describe the design of these networks.

# D.3.4 VALUE NETWORK

1. Two-layer feedforward network with the tanh non-linearity.
2. Input: concatenation of $z$ and the current hidden state of the observation-rnn.
3. The input sizes of the first and second layers are 320 and 64 respectively.
4. Produces a scalar output.

# D.4 COMPONENTS SPECIFIC TO THE PROPOSED MODEL

The components described so far are used by both the baselines and our proposed model. We now describe the components that are specific to our proposed model. Our proposed model consists of an ensemble of primitives, and the components below apply to each of those primitives.

# D.4.1 INFORMATION BOTTLENECK

Since we want to control and regularize the amount of information that the encoder encodes, we compute the KL divergence between the output of the action-feature encoder network and a diagonal unit Gaussian distribution. The larger the KL divergence, the more information is encoded relative to the Gaussian prior, and vice versa. We therefore regularize the primitives by minimizing this KL divergence.

# D.4.2 HYPERPARAMETERS

Table 1 lists the different hyperparameters for the MiniGrid tasks.
| Parameter | Value |
| --- | --- |
| Learning Algorithm | A2C (Wu et al., 2017) |
| Optimizer | RMSProp (Tieleman & Hinton, 2012) |
| learning rate | $7 \cdot 10^{-4}$ |
| batch size | 64 |
| discount | 0.99 |
| lambda (for GAE (Schulman et al., 2015)) | 0.95 |
| entropy coefficient | $10^{-2}$ |
| loss coefficient | 0.5 |
| Maximum gradient norm | 0.5 |
Table 1: Hyperparameters

# E 2D BANDITS ENVIRONMENT

# E.0.1 OBSERVATION SPACE

The 2D bandits task provides a 6-dimensional flat observation. The first two dimensions correspond to the $(x,y)$ coordinates of the current position of the agent, and the remaining four dimensions correspond to the $(x,y)$ coordinates of the two randomly chosen points.

# E.1 MODEL ARCHITECTURE

# E.1.1 TRAINING SETUP

Consider an agent training on the 2D bandits task. The learning agent spawns at a fixed position and is randomly assigned two points. At each step, the environmental observation provides the current position of the agent as well as the positions of the two points. We describe the design of the learning agent in terms of an encoder-decoder architecture.

# E.1.2 ENCODER ARCHITECTURE

The agent's encoder network consists of a GRU-based recurrent model (referred to as the observation-rnn) with a hidden state size of 128. The 6-dimensional observation from the environment is the input to the GRU model. The output from the observation-rnn represents the encoding of the observation.

# E.2 HYPERPARAMETERS

The implementation details for the 2D bandits environment are the same as for the MiniGrid environment and are described in detail in section D.4.2. In the table below, we list the values of the task-specific hyperparameters.
| Parameter | Value |
| --- | --- |
| Learning Algorithm | PPO (Schulman et al., 2017) |
| epochs per update (PPO) | 10 |
| Optimizer | Adam (Kingma & Ba, 2014) |
| learning rate | $3 \cdot 10^{-5}$ |
| $\beta_1$ | 0.9 |
| $\beta_2$ | 0.999 |
| batch size | 64 |
| discount | 0.99 |
| entropy coefficient | 0 |
| loss coefficient | 1.0 |
| Maximum gradient norm | 0.5 |
Table 2: Hyperparameters

# F FOUR-ROOMS ENVIRONMENT

# F.1 THE WORLD

In the Four-rooms setup, the world (the environment for the learning agent) is a square grid of size, say, $11 \times 11$. The grid is divided into 4 rooms such that each room is connected with two other rooms via hallways. The layout of the rooms is shown in Fig. 12. The agent spawns at a random position and has to navigate to a goal position within the episode time limit.

# F.1.1 REWARD FUNCTION

We consider the sparse reward setup, where the agent gets a reward of 1 only if it completes the task (and reaches the goal position) and 0 at all other time steps. We also apply a time limit of 300 steps on all the tasks, i.e., the agent must complete the task within 300 steps. A task is terminated either when the agent solves the task or when the time limit is reached, whichever happens first.

# F.1.2 OBSERVATION SPACE

The environment is an $11 \times 11$ grid divided into 4 interconnected rooms. As such, the environment has a total of 104 states (or cells) that can be occupied. These states are mapped to integer identifiers. At any time $t$, the environment observation is a one-hot representation of the identifier corresponding to the state (or cell) the agent currently occupies, i.e., the environment returns a vector of zeros with a single entry set to 1, and the index of this entry gives the current position of the agent. The environment does not return any information about the goal state.

![](images/227ffd0788c3e3a7a076bc9e3b457e14a8cc2152bbba32fc4d11268e0dca37cb.jpg)
Figure 12: View of the four-room environment

# F.2 MODEL ARCHITECTURE FOR FOUR-ROOM ENVIRONMENT

# F.2.1 TRAINING SETUP

Consider an agent training on any task in the Four-room suite of environments. At the beginning of an episode, the learning agent spawns at a random position and the environment selects a goal position for the agent.
At each step, the environment provides a one-hot representation of the agent's current position (without any information about the goal state). We describe the design of the learning agent in terms of an encoder-decoder architecture.

# F.3 ENCODER ARCHITECTURE

The agent's encoder network consists of a GRU-based recurrent model (referred to as the observation-rnn) with a hidden state size of 128. The 104-dimensional one-hot input from the environment is fed to the GRU model. The output from the observation-rnn represents the encoding of the observation.

The implementation details for the Four-rooms environment are the same as for the MiniGrid environment and are described in detail in section D.4.2.

# G ANT MAZE ENVIRONMENT

We use the Mujoco-based quadruped ant (Todorov et al., 2012) to evaluate the transfer performance of our approach on the cross maze environment (Haarnoja et al., 2018). The training happens in two phases. In the first phase, we train the ant to walk on a surface using a motion reward and just 1 primitive. In the second phase, we make 4 copies of this trained policy and train the agent to navigate to a goal position in a maze (Figure 13). The goal position is chosen from a set of 3 (or 10) goals. The environment is a continuous control environment, and the agent directly manipulates the movement of joints and limbs.

# G.0.1 OBSERVATION SPACE

In the first phase (training the ant to walk), the observation from the environment is the state-space representation: a real-valued vector that describes the state of the ant in mechanical terms (position, velocity, acceleration, and angles of the joints and limbs). In the second phase (training the ant to navigate the maze), the observation from the environment also contains the location of the goal position along with the mechanical state of the ant.
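The two observation phases can be sketched as follows. `STATE_DIM` and the hypothetical helper names are illustrative assumptions; the real size of the ant's state vector is determined by the Mujoco model:

```python
import numpy as np

# STATE_DIM is an assumed placeholder for the size of the ant's
# proprioceptive state vector (positions, velocities, etc.).
STATE_DIM = 8

def phase1_obs(ant_state):
    # Phase 1 (learning to walk): proprioceptive state only.
    return np.asarray(ant_state, dtype=np.float64)

def phase2_obs(ant_state, goal_xy):
    # Phase 2 (maze navigation): proprioceptive state plus the goal location,
    # so the observation grows by two dimensions.
    return np.concatenate([np.asarray(ant_state, dtype=np.float64),
                           np.asarray(goal_xy, dtype=np.float64)])

s = np.zeros(STATE_DIM)
o1 = phase1_obs(s)
o2 = phase2_obs(s, goal_xy=(1.0, -2.0))
```

Only the input dimensionality changes between the two phases, which is why (as noted below for the encoder) the rest of the architecture can be reused unchanged.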
+

![](images/28119c93f482268a06c616f3a1017eddb716cc24495c54067ce07fbba65f1a32.jpg)
Figure 13: View of the Ant Maze environment with 3 goals

# G.1 MODEL ARCHITECTURE FOR ANT MAZE ENVIRONMENT

# G.1.1 TRAINING SETUP

We describe the design of the learning agent in terms of an encoder-decoder architecture.

# G.1.2 ENCODER ARCHITECTURE

The agent's encoder network consists of a GRU-based recurrent model (referred to as the observation-rnn) with a hidden state size of 128. The real-valued state vector from the environment is fed to the GRU model. The output from the observation-rnn represents the encoding of the observation. Note that between phase 1 and phase 2, only the size of the input to the observation-rnn changes; the encoder architecture remains the same.

# G.1.3 DECODER

The decoder network comprises the action network and the critic network. Both networks are implemented as feedforward networks. The design of these networks is very similar to that of the decoder model for the MiniGrid environment as described in section D.3.3, with just one difference: in this case, the action space is continuous, so the action-feature decoder network produces the mean and log-standard-deviation for a diagonal Gaussian policy. This is used to sample a real-valued action to execute in the environment.
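The diagonal Gaussian action head described above can be sketched as follows. The feature and action dimensions, weight scales, and the log-std clipping range are illustrative assumptions, not values from the paper:

```python
import numpy as np

def diag_gaussian_action(features, W_mu, b_mu, W_ls, b_ls, rng):
    """Sample a continuous action from a diagonal Gaussian policy head that
    outputs a per-dimension mean and log-standard-deviation (a sketch)."""
    mu = features @ W_mu + b_mu
    log_std = np.clip(features @ W_ls + b_ls, -5.0, 2.0)  # keep std in a sane range
    std = np.exp(log_std)
    action = mu + std * rng.standard_normal(mu.shape)     # reparameterized sample
    # Log-density of the diagonal Gaussian, summed over action dimensions.
    log_prob = np.sum(
        -0.5 * ((action - mu) / std) ** 2 - log_std - 0.5 * np.log(2.0 * np.pi)
    )
    return action, log_prob

rng = np.random.default_rng(0)
feat_dim, act_dim = 128, 8            # hypothetical encoder/action sizes
W_mu = rng.normal(0, 0.01, (feat_dim, act_dim)); b_mu = np.zeros(act_dim)
W_ls = rng.normal(0, 0.01, (feat_dim, act_dim)); b_ls = -np.ones(act_dim)
features = rng.standard_normal(feat_dim)
action, logp = diag_gaussian_action(features, W_mu, b_mu, W_ls, b_ls, rng)
print(action.shape)                   # (8,)
```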
\ No newline at end of file diff --git a/reinforcementlearningwithcompetitiveensemblesofinformationconstrainedprimitives/images.zip b/reinforcementlearningwithcompetitiveensemblesofinformationconstrainedprimitives/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..ac53f4bcfa084cd65f2f843658532d6fcd68c022 --- /dev/null +++ b/reinforcementlearningwithcompetitiveensemblesofinformationconstrainedprimitives/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:373a24a9d9641267b2e7eb4205bd7796e94bba915187bd0fa292c1b5044e65dc +size 602930 diff --git a/reinforcementlearningwithcompetitiveensemblesofinformationconstrainedprimitives/layout.json b/reinforcementlearningwithcompetitiveensemblesofinformationconstrainedprimitives/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..b88d0cea273c8c7c82544b88ecbd67a6a06e1996 --- /dev/null +++ b/reinforcementlearningwithcompetitiveensemblesofinformationconstrainedprimitives/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e007b095a73e8f71adc9211a90cd95669ffe03fcf9a12b0cbcdf2c21212f4506 +size 581666 diff --git a/relationalstatespacemodelforstochasticmultiobjectsystems/735147b4-422e-4ac2-a2cf-7815196dcfe1_content_list.json b/relationalstatespacemodelforstochasticmultiobjectsystems/735147b4-422e-4ac2-a2cf-7815196dcfe1_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..844f3ca2d45c0d5a8f0d94bb17beac151e54cdba --- /dev/null +++ b/relationalstatespacemodelforstochasticmultiobjectsystems/735147b4-422e-4ac2-a2cf-7815196dcfe1_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:92a6b53da112ce5ec4f877deb3e2aeb9ac1b71f79fb983bdd074dc885eac9b47 +size 111045 diff --git a/relationalstatespacemodelforstochasticmultiobjectsystems/735147b4-422e-4ac2-a2cf-7815196dcfe1_model.json 
b/relationalstatespacemodelforstochasticmultiobjectsystems/735147b4-422e-4ac2-a2cf-7815196dcfe1_model.json new file mode 100644 index 0000000000000000000000000000000000000000..bbb66c83193e83094c46445dbafd97a8214afd69 --- /dev/null +++ b/relationalstatespacemodelforstochasticmultiobjectsystems/735147b4-422e-4ac2-a2cf-7815196dcfe1_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:66ca143d0e1df003b4e78adcc08b4a15e950922c5ce24c5bf62fb1bff63ad029 +size 136610 diff --git a/relationalstatespacemodelforstochasticmultiobjectsystems/735147b4-422e-4ac2-a2cf-7815196dcfe1_origin.pdf b/relationalstatespacemodelforstochasticmultiobjectsystems/735147b4-422e-4ac2-a2cf-7815196dcfe1_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..57e1e3df5c7abb4101b50e25e11cc004f5e1828b --- /dev/null +++ b/relationalstatespacemodelforstochasticmultiobjectsystems/735147b4-422e-4ac2-a2cf-7815196dcfe1_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:294c2c3c5cf6aa2e0c49ad518d51434e99a848eaf74e3131691044a8d11b8e36 +size 997064 diff --git a/relationalstatespacemodelforstochasticmultiobjectsystems/full.md b/relationalstatespacemodelforstochasticmultiobjectsystems/full.md new file mode 100644 index 0000000000000000000000000000000000000000..9a3ade9d3b676bebebdf3bfbe7d2277c1eba54bd --- /dev/null +++ b/relationalstatespacemodelforstochasticmultiobjectsystems/full.md @@ -0,0 +1,406 @@ +# RELATIONAL STATE-SPACE MODEL FOR STOCHASTIC MULTI-OBJECT SYSTEMS + +Fan Yang†, Ling Chen*†, Fan Zhou†, Yusong Gao‡, Wei Cao‡ + +†College of Computer Science and Technology, Zhejiang University, Hangzhou, China + +$\ddagger$ Alibaba Group, Hangzhou, China + +{fanyang01,lingchen,fanzhou}@zju.edu.cn + +{jianchuan.gys,mingsong.cw}@alibaba-inc.com + +# ABSTRACT + +Real-world dynamical systems often consist of multiple stochastic subsystems that interact with each other. 
Modeling and forecasting the behavior of such dynamics is generally not easy, due to the inherent hardness of understanding the complicated interactions and evolutions of their constituents. This paper introduces the relational state-space model (R-SSM), a sequential hierarchical latent variable model that makes use of graph neural networks (GNNs) to simulate the joint state transitions of multiple correlated objects. By letting GNNs cooperate with the SSM, R-SSM provides a flexible way to incorporate relational information into the modeling of multi-object dynamics. We further suggest augmenting the model with normalizing flows instantiated for vertex-indexed random variables and propose two auxiliary contrastive objectives to facilitate the learning. The utility of R-SSM is empirically evaluated on synthetic and real time series datasets.

# 1 INTRODUCTION

Many real-world dynamical systems can be decomposed into smaller interacting subsystems if we take a fine-grained view. For example, the trajectories of coupled particles are co-determined by per-particle physical properties (e.g., mass and velocity) and their physical interactions (e.g., gravity); traffic flow can be viewed as the coevolution of a large number of vehicle dynamics. Models that are able to better capture the complex behavior of such multi-object systems are of wide interest to various communities, e.g., physics, ecology, biology, geoscience, and finance.

State-space models (SSMs) are a wide class of sequential latent variable models (LVMs) that serve as workhorses for the analysis of dynamical systems and sequence data. Although SSMs are traditionally designed under the guidance of domain-specific knowledge or tractability considerations, recently introduced deep SSMs (Fraccaro, 2018) use neural networks (NNs) to parameterize flexible state transitions and emissions, achieving much higher expressivity.
To develop deep SSMs for multi-object systems, graph neural networks (GNNs) emerge as a promising choice, as they have been shown to be fundamental NN building blocks that can impose relational inductive bias explicitly and model complex interactions effectively (Battaglia et al., 2018).

Recent works that advocate GNNs for modeling multi-object dynamics mostly make use of GNNs in an autoregressive (AR) fashion. AR models based on recurrent (G)NNs can be viewed as special instantiations of SSMs in which the state transitions are restricted to being deterministic (Fraccaro, 2018, Section 4.2). Despite their simplicity, it has been pointed out that their modeling capability is bottlenecked by the deterministic state transitions (Chung et al., 2015; Fraccaro et al., 2016) and the oversimplified observation distributions (Yang et al., 2018).

In this study, we make the following contributions: (i) We propose the relational state-space model (R-SSM), a novel hierarchical deep SSM that simulates the stochastic state transitions of interacting objects with GNNs, extending GNN-based dynamics modeling to challenging stochastic multi-object systems. (ii) We suggest using the graph normalizing flow (GNF) to construct expressive joint state distributions for R-SSM, further enhancing its ability to capture the joint evolutions of correlated stochastic subsystems. (iii) We develop a structured posterior approximation to learn R-SSM using variational inference and introduce two auxiliary training objectives to facilitate the learning.

Our experiments on synthetic and real-world time series datasets show that R-SSM achieves competitive test likelihood and good prediction performance in comparison to GNN-based AR models and other sequential LVMs. The remainder of this paper is organized as follows: Section 2 briefly reviews necessary preliminaries. Section 3 introduces R-SSM formally and presents the methods to learn R-SSM from observations.
Related work is summarized in Section 4 and experimental evaluation is presented in Section 5. We conclude the paper in Section 6. + +# 2 PRELIMINARIES + +In this work, an attributed directed graph is given by a 4-tuple: $\mathcal{G} = (\mathcal{V},\mathcal{E},\mathbf{V},\mathbf{E})$ , where $\mathcal{V} = [N]\coloneqq \{1,\dots ,N\}$ is the set of vertices, $\mathcal{E}\subseteq [N]\times [N]$ is the set of edges, $\mathbf{V}\in \mathbb{R}^{N\times d_v}$ is a matrix of static vertex attributes, and $\mathbf{E}\in \mathbb{R}^{N\times N\times d_e}$ is a sparse tensor storing the static edge attributes. The set of direct predecessors of vertex $i$ is notated as $\mathcal{N}_i^{-} = \{p|(p,i)\in \mathcal{E}\}$ . We use the notation $\mathbf{x}_i$ to refer to the $i$ -th row of matrix $\mathbf{X}$ and write $\mathbf{x}_{ij}$ to indicate the $(i,j)$ -th entry of tensor $\mathbf{X}$ (if the corresponding matrix or tensor appears in the context). For sequences, we write $\mathbf{x}_{\leq t} = \mathbf{x}_{1:t}\coloneqq (\mathbf{x}_1,\ldots ,\mathbf{x}_t)$ and switch to $\mathbf{x}_t^{(i)}$ for referring to the $i$ -th row of matrix $\mathbf{X}_t$ . + +# 2.1 GRAPH NEURAL NETWORKS + +GNNs are a class of neural networks developed to process graph-structured data and support relational reasoning. Here we focus on vertex-centric GNNs that iteratively update the vertex representations of a graph $\mathcal{G}$ while being equivariant (Maron et al., 2019) under vertex relabeling. Let $\mathbf{H} \in \mathbb{R}^{N \times d}$ be a matrix of vertex representations, in which the $i$ -th row $\mathbf{h}_i \in \mathbb{R}^d$ is the vectorized representation attached to vertex $i$ . 
Conditioning on the static graph structure and attributes given by $\mathcal{G}$, a GNN just takes the vertex representations $\mathbf{H}$ along with some graph-level context $\mathbf{g} \in \mathbb{R}^{d_g}$ as input and returns new vertex representations $\mathbf{H}' \in \mathbb{R}^{N \times d'}$ as output, i.e., $\mathbf{H}' = \mathrm{GNN}(\mathcal{G}, \mathbf{g}, \mathbf{H})$.

When updating the representation of vertex $i$ from $\mathbf{h}_i$ to $\mathbf{h}_i'$, a GNN takes the representations of other nearby vertices into consideration. Popular GNN variants achieve this through a multi-round message passing paradigm, in which the vertices repeatedly send messages to their neighbors, aggregate the messages they received, and update their own representations accordingly. Formally, the operations performed by a basic block of a message-passing GNN are defined as follows:

$$
\forall (j, i) \in \mathcal{E}: \quad \mathcal{M}_{j \rightarrow i} = \text{MESSAGE}(\mathbf{g}, \mathbf{v}_j, \mathbf{v}_i, \mathbf{e}_{ji}, \mathbf{h}_j, \mathbf{h}_i) \tag{1}
$$

$$
\forall i \in \mathcal{V}: \quad \mathcal{A}_i = \text{AGGREGATE}\left(\left\{\mathcal{M}_{p \rightarrow i}\right\}_{p \in \mathcal{N}_i^{-}}\right) \tag{2}
$$

$$
\forall i \in \mathcal{V}: \quad \mathbf{h}_i' = \text{COMBINE}(\mathbf{g}, \mathbf{v}_i, \mathbf{h}_i, \mathcal{A}_i) \tag{3}
$$

Throughout this work, we implement Equations (1) and (2) by adopting a multi-head attention mechanism similar to Vaswani et al. (2017) and Veličković et al. (2018). For Equation (3), we use either an RNN cell or a residual block (He et al., 2016), depending on whether the inputs to the GNN are RNN states or not. We write such a block as $\mathbf{H}' = \mathrm{MHA}(\mathcal{G},\mathbf{g},\mathbf{H})$ and give its detailed implementation in the Appendix.
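As a concrete illustration of Equations (1)-(3), here is a minimal NumPy sketch with deliberately simple choices: a one-layer MLP message, mean aggregation, and a residual combine. The paper's actual block is attention-based, so this is only a schematic instance of the message-passing template, with all dimensions and weights illustrative:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def message_passing_block(edges, V, E, H, g, params):
    """One simplified message-passing block following Eqs. (1)-(3):
    MESSAGE = one-layer MLP, AGGREGATE = mean, COMBINE = residual update."""
    N, d = H.shape
    W_msg, W_comb = params
    agg = np.zeros((N, d))
    cnt = np.zeros(N)
    for (j, i) in edges:                                 # Eq. (1): per-edge messages
        m = relu(np.concatenate([g, V[j], V[i], E[(j, i)], H[j], H[i]]) @ W_msg)
        agg[i] += m                                      # Eq. (2): accumulate ...
        cnt[i] += 1
    agg /= np.maximum(cnt, 1.0)[:, None]                 # ... then mean-aggregate
    H_new = np.empty_like(H)
    for i in range(N):                                   # Eq. (3): combine
        H_new[i] = H[i] + np.tanh(np.concatenate([g, V[i], H[i], agg[i]]) @ W_comb)
    return H_new

rng = np.random.default_rng(0)
N, d, dv, de, dg = 4, 8, 3, 2, 5                         # illustrative sizes
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]                 # a directed cycle
V = rng.standard_normal((N, dv))                         # vertex attributes
E = {e: rng.standard_normal(de) for e in edges}          # edge attributes
H = rng.standard_normal((N, d))                          # vertex representations
g = rng.standard_normal(dg)                              # graph-level context
W_msg = rng.normal(0, 0.1, (dg + 2 * dv + de + 2 * d, d))
W_comb = rng.normal(0, 0.1, (dg + dv + 2 * d, d))
H2 = message_passing_block(edges, V, E, H, g, (W_msg, W_comb))
print(H2.shape)   # (4, 8)
```

Because every vertex applies the same MESSAGE/AGGREGATE/COMBINE functions, the update is equivariant under vertex relabeling, which is the property emphasized above.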
A GNN simply stacks $L$ separately-parameterized MHA blocks and iteratively computes $\mathbf{H} \eqqcolon \mathbf{H}^{(0)},\ldots,\mathbf{H}^{(L)} \eqqcolon \mathbf{H}'$ , in which $\mathbf{H}^{(l)} = \mathrm{MHA}(\mathcal{G},\mathbf{g},\mathbf{H}^{(l-1)})$ for $l = 1,\ldots,L$ . We write this construction as $\mathbf{H}' = \mathrm{GNN}(\mathcal{G},\mathbf{g},\mathbf{H})$ and treat it as a black box to avoid notational clutter. + +# 2.2 STATE-SPACE MODELS + +State-space models are widely applied to analyze dynamical systems whose true states are not directly observable. Formally, an SSM assumes the dynamical system follows a latent state process $\{\mathbf{z}_t\}_{t\geq 1}$ , which possibly depends on exogenous inputs $\{\mathbf{u}_t\}_{t\geq 1}$ . Parameterized by some (unknown) static parameter $\theta$ , the latent state process is characterized by an initial density $\mathbf{z}_1 \sim \pi_\theta (\cdot |\mathbf{u}_1)$ and a transition density $\mathbf{z}_{t + 1} \sim f_{\theta}(\cdot |\mathbf{z}_{\leq t},\mathbf{u}_{\leq t + 1})$ . Moreover, at each time step, some noisy measurements of the latent state are observed through an observation density: + +![](images/e0b01c6f33266eba297cc1b0a5933078fc69c1fa99db569875ffee59eab5188f.jpg) +(a) Generative model. + +![](images/54f1f4ff9999f85a25f0aeefd29f92f12109f6e44c08994cd79935a351173829.jpg) +(b) Collapsed generative model. +Figure 1: Graphical structures of R-SSM. Diamonds represent deterministic states and circles represent random variables. To be concise, the dependencies on the graph $\mathcal{G}$ and exogenous inputs $\mathbf{U}_{1:T}$ are not shown. (b) is the result of collapsing all deterministic states in (a) and writing $\mathcal{Z}_t = (\mathbf{z}_t^g, \mathbf{Z}_t)$ . In (c), solid lines represent the computation shared with the generative model and dashed lines represent additional computation for inference. 
+

![](images/cd788ce70040b381e7c20f656384942f3b0737df8424ee61927b1012facb6bed.jpg)
(c) Inference model.

$\mathbf{x}_t \sim g_\theta(\cdot|\mathbf{z}_{\leq t}, \mathbf{u}_{\leq t})$. The joint density of $\mathbf{x}_{1:T}$ and $\mathbf{z}_{1:T}$ factors as: $p(\mathbf{x}_{1:T}, \mathbf{z}_{1:T}|\mathbf{u}_{1:T}) = \pi_\theta(\mathbf{z}_1|\mathbf{u}_1)\prod_{t=2}^{T} f_\theta(\mathbf{z}_t|\mathbf{z}_{<t},\mathbf{u}_{\leq t})\prod_{t=1}^{T} g_\theta(\mathbf{x}_t|\mathbf{z}_{\leq t},\mathbf{u}_{\leq t})$. Then we define two auxiliary CPC objectives:

$$
\mathcal{L}_1^{\mathrm{aux}} = \mathbb{E}\left[\sum_{t=1}^{T-1}\sum_{i=1}^{N} \log \frac{\lambda_{\psi,1}(\hat{\mathbf{z}}_t^{(i)}, \mathbf{c}_t^{(i)})}{\sum_{\mathbf{c} \in \Omega_{t,i}} \lambda_{\psi,1}(\hat{\mathbf{z}}_t^{(i)}, \mathbf{c})}\right], \quad \mathcal{L}_2^{\mathrm{aux}} = \mathbb{E}\left[\sum_{t=1}^{T-1}\sum_{i=1}^{N} \log \frac{\lambda_{\psi,2}(\hat{\mathbf{h}}_t^{(i)}, \mathbf{c}_t^{(i)})}{\sum_{\mathbf{c} \in \Omega_{t,i}} \lambda_{\psi,2}(\hat{\mathbf{h}}_t^{(i)}, \mathbf{c})}\right]
$$

where $\hat{\mathbf{z}}_t^{(i)} = [\mathbf{z}_t^g,\mathbf{z}_t^{(i)}]$, $\hat{\mathbf{h}}_t^{(i)} = \mathrm{MLP}_{\psi}(\sum_{j\neq i\wedge j\in \mathcal{N}_i^-}\mathbf{h}_t^{(j)})$, and $\Omega_{t,i}$ is a set that contains $\mathbf{c}_t^{(i)}$ and some negative samples. The expectation is over negative samples and the latent states sampled from the filtering distributions. The positive score functions $\lambda_{\psi,1}$ and $\lambda_{\psi,2}$ are specified to be simple log-bilinear models.

Intuitively, $\mathcal{L}_1^{\mathrm{aux}}$ encourages the latent states to encode useful information that helps distinguish the future summaries from negative samples. $\mathcal{L}_2^{\mathrm{aux}}$ encourages the deterministic states to reflect the interaction effects, as it contrastingly predicts the future summary of vertex $i$ based on the states of $i$'s neighbors only.
The negative samples are selected from the future summaries of other vertices within the minibatch. The final objective to maximize is $\mathcal{L} = \mathcal{L}_K^{\mathrm{SMC}} + \beta_1\mathcal{L}_1^{\mathrm{aux}} + \beta_2\mathcal{L}_2^{\mathrm{aux}}$, in which $\beta_{1}\geq 0$ and $\beta_{2}\geq 0$ are tunable hyperparameters. The procedure to estimate this objective is described in Appendix A.4.

# 4 RELATED WORK

GNN-based dynamics modeling. GNNs (Scarselli et al., 2009; Duvenaud et al., 2015; Li et al., 2016; Defferrard et al., 2016; Gilmer et al., 2017; Hamilton et al., 2017; Veličković et al., 2018; Xu et al., 2019; Maron et al., 2019) provide a promising framework to learn on graph-structured data and impose relational inductive bias in learning models. We refer the reader to Battaglia et al. (2018) for a recent review. GNNs (or neural message passing modules) are the core components of recently developed neural physics simulators (Battaglia et al., 2016; Watters et al., 2017; Chang et al., 2017; Janner et al., 2019; Sanchez-Gonzalez et al., 2018; Mrowca et al., 2018; Li et al., 2019) and spatiotemporal or multi-agent dynamics models (Alahi et al., 2016; Hoshen, 2017; Li et al., 2018; Zhang et al., 2018; Tacchetti et al., 2019; Chen et al., 2020). In these works, GNNs usually act autoregressively or are integrated into the sequence-to-sequence (seq2seq) framework (Sutskever et al., 2014). Besides, recently they have been combined with generative adversarial networks (Goodfellow et al., 2014) and normalizing flows for multi-agent forecasting (Gupta et al., 2018; Kosaraju et al., 2019; Rhinehart et al., 2019). R-SSM differs from all these works by introducing structured latent variables to represent the uncertainty in state transition and estimation.

GNNs in sequential LVMs.
A few recent works have combined GNNs with a sequential latent variable model, including R-NEM (van Steenkiste et al., 2018), NRI (Kipf et al., 2018), SQAIR (Kosiorek et al., 2018), VGRNN (Hajiramezanali et al., 2019), MFP (Tang & Salakhutdinov, 2019), and Graph VRNN (Sun et al., 2019; Yeh et al., 2019). The latent variables in R-NEM and NRI are discrete and represent membership relations and types of edges, respectively. In contrast, the latent variables in our model are continuous and represent the states of objects. SQAIR is also a deep SSM for multi-object dynamics, but the GNN is only used in its inference network. VGRNN is focused on modeling the topological evolution of dynamical graphs. MFP employs a conditional VAE architecture, in which the per-agent discrete latent variables are shared by all time steps. The work most relevant to ours is Graph VRNN, in which the hidden states of per-agent VRNNs interact through GNNs. Our work mainly differs from it by introducing a global latent state process to make the model hierarchical and exploring the use of normalizing flows as well as the auxiliary contrastive objectives. More subtle differences are discussed in Section 5.2. + +Deep LVMs for sequential data. There has been growing interest in developing latent variable models for sequential data with neural networks as their building blocks, among which the works most relevant to ours are stochastic RNNs and deep SSMs. Many works have proposed incorporating stochastic latent variables into vanilla RNNs to equip them with the ability to express more complex data distributions (Bayer & Osendorfer, 2014; Chung et al., 2015; Fraccaro et al., 2016; Goyal et al., 2017; Ke et al., 2019) or, from another perspective, developing deep SSMs by parameterizing flexible transition and emission distributions using neural networks (Krishnan et al., 2017; Fraccaro et al., 2017; Buesing et al., 2018; Zheng et al., 2017; Hafner et al., 2019). 
Approximate inference and parameter estimation methods for nonlinear SSMs have been extensively studied in the literature (Doucet & Johansen, 2009; Andrieu et al., 2010; Kantas et al., 2015; Gu et al., 2015; Karl et al., 2016; Marino et al., 2018; Gregor et al., 2019; Hirt & Dellaportas, 2019). We choose VSMC (Maddison et al., 2017; Naesseth et al., 2018; Le et al., 2018) as it combines the powers of VI and SMC. The posterior collapse problem is commonly addressed by KL annealing, which does not work with VSMC. The idea of using auxiliary costs to train deep SSMs has been explored in Z-forcing (Goyal et al., 2017; Ke et al., 2019), which predicts the future summaries directly rather than contrastingly. As a result, the backward RNN in Z-forcing may degenerate easily. + +# 5 EXPERIMENTS + +We implement R-SSM using the TensorFlow Probability library (Dillon et al., 2017). The experiments are organized as follows: In Section 5.1, we sample a toy dataset from a simple stochastic multi-object model and validate that R-SSM can fit it well while AR models and non-relational models may struggle. In Section 5.2, R-SSM is compared with state-of-the-art sequential LVMs for multi-agent modeling on a basketball gameplay dataset, and the effectiveness of GNF is tested through ablation studies. Finally, in Section 5.3, the prediction performance of R-SSM is compared with strong GNN-based seq2seq baselines on a road traffic dataset. Due to the space constraint, the detailed model architecture and hyperparameter settings for each dataset are given in the Appendix. Below, all values reported with error bars are averaged over 3 or 5 runs. + +# 5.1 SYNTHETIC TOY DATASET + +First we construct a simple toy dataset to illustrate the capability of R-SSM. 
Each example in this dataset is generated by the following procedure:

$$
\begin{array}{l}
\mathcal{G} \sim \operatorname{SBM}(N, K, p_0, p_1), \quad \mathbf{v}_i \sim \operatorname{Normal}(\mathbf{0}, \mathbf{I}), \quad z_0^{(i)} \sim \operatorname{Normal}(0, 1) \\
\tilde{z}_t^{(i)} = \boldsymbol{\eta}^{\top}\mathbf{v}_i + \alpha_1 \sum_{j \in \mathcal{N}_i} z_{t-1}^{(j)} / |\mathcal{N}_i| + \alpha_2 z_{t-1}^{(i)} \qquad\qquad \tag{6} \\
z_t^{(i)} \sim \operatorname{Normal}\big(\cos(\tilde{z}_t^{(i)}), \sigma_z^2\big), \quad x_t^{(i)} \sim \operatorname{Normal}\big(\tanh(\varepsilon z_t^{(i)}), \sigma_x^2\big)
\end{array}
$$

Table 2: Test log-likelihood and rollout quality comparisons on the basketball gameplay dataset (offensive players only).
| Model | $\mathcal{L}_{1000}^{\mathrm{SMC}}$ | ELBO | Speed | Distance | OOB |
| --- | --- | --- | --- | --- | --- |
| VRNN | – | 2360 | 0.89 | 43.78 | 33.78 |
| MI-VRNN | – | 2362 | 0.79 | 38.92 | 15.52 |
| R-SSM | 2459.8 ± .3 | 2372.3 ± .8 | 0.83 ± .01 | 40.75 ± .15 | 1.84 ± .16 |
| $+\mathcal{L}_2^{\mathrm{aux}}$ | 2463.3 ± .4 | 2380.2 ± .6 | 0.82 ± .01 | 40.36 ± .23 | 2.17 ± .09 |
| +GNF (4) | 2483.2 ± .3 | 2381.6 ± .4 | 0.80 ± .00 | 39.37 ± .35 | 2.06 ± .15 |
| +GNF (8) | 2501.6 ± .2 | 2382.1 ± .4 | 0.79 ± .00 | 39.14 ± .29 | 2.12 ± .10 |
| Ground Truth | – | – | 0.77 | 37.78 | 2.21 |
+

for $i = 1,\dots ,N$ and $t = 1,\ldots ,T$. Here SBM is short for the symmetric stochastic block model, in which each vertex $i$ belongs to exactly one of the $K$ communities, and two vertices $i$ and $j$ are connected with probability $p_0$ if they are in the same community, and with probability $p_1$ otherwise. A vertex-specific covariate vector $\mathbf{v}_i\in \mathbb{R}^{d_v}$ is attached to each vertex $i$, and by Equation (6), the state of each vertex $i$ can be affected by its neighbors $\mathcal{N}_i$. Choosing the parameters $d_v = 4$, $N = 36$, $K = 3$, $p_0 = 1/3$, $p_1 = 1/18$, $T = 80$, $\alpha_{1} = 5.0$, $\alpha_{2} = -1.5$, $\boldsymbol{\eta} = [-1.5,0.4,2.0, - 0.9]^{\top}$, $\sigma_x = \sigma_z = 0.05$, and $\varepsilon = 2.5$, we generate 10K examples for training, validation, and test, respectively. A typical example is visualized in the Appendix.

Despite the simple generating process, the resulting dataset is highly challenging for common models to fit. To show this, we compare R-SSM with several baselines, including (a) VAR: Fitting a first-order vector autoregression model for each example; (b) VRNN: A variational RNN (Chung et al., 2015) shared by all examples; (c) GNN-AR: A variant of the recurrent decoder of NRI (Kipf et al., 2018), which is exactly a GNN-based AR model when given the ground-truth graph. VAR and VRNN are given access to the observations $\{x_{1:T}^{(i)}\}_{i=1}^{N}$ only, while GNN-AR and R-SSM are additionally given access to the graph structure $(\mathcal{V}, \mathcal{E})$ (but not the vertex covariates). GNF is not used in R-SSM because the true joint transition distribution is factorized over vertices.

For each model, we calculate three metrics: (1) LL: Average log-likelihood (or its lower bound) of test examples; (2) MSE: Average mean squared one-step prediction error given the first 75 time steps of each test example; (3) CP: Average coverage probability of a $90\%$ one-step prediction interval.
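The generating procedure of Equation (6) with the parameters listed above can be written out directly. A NumPy sketch (the guard against isolated vertices is an implementation assumption; the paper does not specify how empty neighborhoods are handled):

```python
import numpy as np

def generate_example(N=36, K=3, p0=1/3, p1=1/18, T=80,
                     alpha1=5.0, alpha2=-1.5,
                     eta=(-1.5, 0.4, 2.0, -0.9),
                     sigma_z=0.05, sigma_x=0.05, eps=2.5, seed=0):
    """Sample one example from the toy generating process (Eq. 6)."""
    rng = np.random.default_rng(seed)
    eta = np.asarray(eta)
    # Symmetric SBM: edge probability p0 within a community, p1 across.
    comm = rng.integers(0, K, size=N)
    P = np.where(comm[:, None] == comm[None, :], p0, p1)
    adj = rng.random((N, N)) < P
    adj = np.triu(adj, 1)
    A = (adj | adj.T).astype(float)            # undirected, no self-loops
    V = rng.standard_normal((N, len(eta)))     # vertex covariates v_i
    z = rng.standard_normal(N)                 # z_0
    X = np.zeros((T, N))
    deg = np.maximum(A.sum(axis=1), 1.0)       # guard against isolated vertices
    for t in range(T):
        z_tilde = V @ eta + alpha1 * (A @ z) / deg + alpha2 * z   # Eq. (6)
        z = rng.normal(np.cos(z_tilde), sigma_z)
        X[t] = rng.normal(np.tanh(eps * z), sigma_x)
    return A, V, X

A, V, X = generate_example()
print(X.shape)   # (80, 36)
```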
For nonanalytic models, point predictions and prediction intervals are computed using 1000 Monte Carlo samples. The results are reported in Table 1. + +Table 1: Test log-likelihood and prediction performance comparisons on the synthetic toy dataset. + +
| Model | LL | MSE | CP |
| --- | --- | --- | --- |
| VAR | -366 | 0.679 ± .000 | 0.750 ± .000 |
| VRNN | ≥ -2641 | 0.501 ± .003 | 0.931 ± .002 |
| GNN-AR | -94 | 0.286 ± .002 | 0.806 ± .004 |
| R-SSM | ≥ 2583 | 0.029 ± .001 | 0.883 ± .002 |
| $+\mathcal{L}_2^{\mathrm{aux}}$ | ≥ 2647 | 0.024 ± .001 | 0.897 ± .001 |
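The MSE and CP metrics described above are computed from Monte Carlo one-step forecast samples. A minimal sketch; the simulated "forecaster" below is purely illustrative, not one of the paper's models:

```python
import numpy as np

def one_step_metrics(samples, truth, level=0.90):
    """MSE of the Monte Carlo point prediction and coverage probability (CP)
    of a central prediction interval.
    samples: (num_mc, N) forecast draws; truth: (N,) realized values."""
    point = samples.mean(axis=0)                    # point prediction
    mse = np.mean((point - truth) ** 2)
    alpha = (1.0 - level) / 2.0
    lo = np.quantile(samples, alpha, axis=0)        # interval lower bound
    hi = np.quantile(samples, 1.0 - alpha, axis=0)  # interval upper bound
    cp = np.mean((truth >= lo) & (truth <= hi))     # fraction of dims covered
    return mse, cp

rng = np.random.default_rng(0)
mu = rng.standard_normal(36)                        # hidden predictive mean
truth = mu + rng.standard_normal(36)                # one realized future
samples = mu + rng.standard_normal((1000, 36))      # Monte Carlo forecasts
mse, cp = one_step_metrics(samples, truth)
print(round(mse, 2), round(cp, 2))
```

For a well-calibrated forecaster, CP should sit near the nominal 0.90 level, which is how Table 1's CP column is read.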
+

The generating process involves latent factors and nonlinearities, so VAR performs poorly as expected. VRNN largely underfits the data and struggles to generalize, which may be caused by the different topologies underlying the examples. In contrast, GNN-AR and R-SSM generalize well as expected, while R-SSM achieves much higher test log-likelihood and produces good one-step probabilistic predictions. This toy case illustrates the generalization ability of GNNs and suggests the importance of latent variables for capturing the uncertainty in stochastic multi-object systems. We also observed that without $\mathcal{L}_1^{\mathrm{aux}}$ the training dynamics easily get stuck in posterior collapse at the very early stage, and adding $\mathcal{L}_2^{\mathrm{aux}}$ helps improve the test likelihood.

# 5.2 BASKETBALL GAMEPLAY

In basketball gameplay, the trajectories of players and the ball are highly correlated and demonstrate rich, dynamic interactions. Here we compare R-SSM with a state-of-the-art hierarchical sequential LVM for multi-agent trajectories (Zhan et al., 2019), in which the per-agent VRNNs are coordinated

Table 3: Test log-likelihood comparison on the basketball gameplay dataset (offensive players plus the ball).
| Model | LL | $\mathcal{L}_{1000}^{\mathrm{SMC}}$ |
| --- | --- | --- |
| *Yeh et al. (2019)* | | |
| GRNN | 2264 | – |
| VRNN | > 2750 | – |
| GVRNN | > 2832 | – |
| R-SSM $+\mathcal{L}_2^{\mathrm{aux}}$ | > 2761 ± 1 | 2805 ± 0 |
| +GNF (8) | > 2783 ± 1 | 2826 ± 0 |
Table 4: Forecast MAE comparison on the METR-LA dataset. $h$ is the number of steps predicted into the future. The $\mathbf{X}_{t - h}$ baseline outputs $\mathbf{X}_{t - h}$ to predict $\mathbf{X}_{t}$.
| Model | h=3 | h=6 | h=12 |
| --- | --- | --- | --- |
| $\mathbf{X}_{t-h}$ | 3.97 | 4.99 | 6.65 |
| DCRNN | 2.77 | 3.15 | 3.60 |
| GaAN | 2.71 | 3.12 | 3.64 |
| R-SSM | 2.67 ± .00 | 3.14 ± .01 | 3.72 ± .02 |
| CP | 0.896 ± .001 | 0.891 ± .001 | 0.883 ± .002 |
by a global "macro intent" model. We denote it as MI-VRNN. The dataset includes 107,146 training examples and 13,845 test examples, each of which contains the 2D trajectories of ten players and the ball recorded at 6Hz for 50 time steps. Following their settings, we use the trajectories of the offensive team only and preprocess the data in exactly the same way to make the results directly comparable. The complete graph of players is used as the input to R-SSM.

Several ablation studies are performed to verify the utility of the proposed ideas. In Table 2, we report test likelihood bounds and the rollout quality evaluated with three heuristic statistics: average speed (feet/step), average distance traveled (feet), and the percentage of out-of-bound (OOB) time steps. The VRNN baseline developed by Zhan et al. (2019) is also included for comparison. Note that the VSMC bound $\mathcal{L}_{1000}^{\mathrm{SMC}}$ is a tighter log-likelihood approximation than the ELBO (which is equivalent to $\mathcal{L}_1^{\mathrm{SMC}}$). The rollout statistics of R-SSMs are calculated from 150K 50-step rollouts with 10 burn-in steps. Several selected rollouts are visualized in the Appendix.

As illustrated in Table 2, all R-SSMs outperform the baselines in terms of average test log-likelihood. Again, we observed that adding $\mathcal{L}_1^{\mathrm{aux}}$ is necessary for training R-SSM successfully on this dataset. Training with the proposed auxiliary loss $\mathcal{L}_2^{\mathrm{aux}}$ and adding GNFs do improve the results. R-SSM with 8 GNFs (4 in prior, 4 in proposal) achieves higher likelihood than R-SSM with 4 GNFs, indicating that increasing the expressivity of joint state distributions helps fit the data better. As for the rollout quality, the OOB rate of the rollouts sampled from our model matches the ground truth significantly better, while the other two statistics are comparable to the MI-VRNN baseline.
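The three rollout statistics can be computed directly from sampled trajectories. A minimal sketch, assuming positions in feet and a standard 94 x 50 ft court extent (the exact preprocessing bounds are an assumption, as is the random-walk trajectory used for illustration):

```python
import numpy as np

def rollout_stats(traj, court=(0.0, 94.0, 0.0, 50.0)):
    """Heuristic rollout-quality statistics: average speed (feet/step),
    average distance traveled per agent (feet), and out-of-bounds (OOB) rate.
    traj: (T, num_agents, 2) array of 2D positions."""
    steps = np.linalg.norm(np.diff(traj, axis=0), axis=-1)   # (T-1, agents)
    avg_speed = steps.mean()
    avg_distance = steps.sum(axis=0).mean()                  # mean over agents
    x, y = traj[..., 0], traj[..., 1]
    x0, x1, y0, y1 = court
    oob = ((x < x0) | (x > x1) | (y < y0) | (y > y1)).mean() # fraction OOB
    return avg_speed, avg_distance, oob

T, agents = 50, 5
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(0, 0.5, (T, agents, 2)), axis=0) + np.array([47.0, 25.0])
speed, dist, oob = rollout_stats(traj)
print(round(speed, 2), round(dist, 1), round(oob, 3))
```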
+

In Table 3, we also provide preliminary results for the setting that additionally includes the trajectory of the ball. This enables us to compare with the results reported by Yeh et al. (2019) for Graph VRNN (GVRNN). The complete graph of the ball and players, which serves as input to R-SSM, is annotated with two node types (player or ball) and three edge types (player-to-ball, ball-to-player, or player-to-player). R-SSM achieves competitive test likelihood, and adding GNFs helps improve the performance. We point out that several noticeable design choices of GVRNN may help it outperform R-SSM: (i) GVRNN uses a GNN-based observation model, while R-SSM uses a simple factorized observation model. (ii) GVRNN encodes $\mathbf{X}_{1:t - 1}$ into $\mathbf{H}_t$ and thus enables the prior of $\mathbf{Z}_t$ to depend on past observations, which is not the case in R-SSM. (iii) GVRNN uses several implementation tricks, e.g., predicting the changes in observations only ($\mathbf{X}_t = \mathbf{X}_{t - 1} + \Delta \mathbf{X}_t$) and passing raw observations as additional input to GNNs. We would like to investigate the effect of these interesting differences in future work.

# 5.3 ROAD TRAFFIC

Traffic speed forecasting on road networks is an important but challenging task, as the traffic dynamics exhibit complex spatiotemporal interactions. In this subsection, we demonstrate that R-SSM is comparable to the state-of-the-art GNN-based seq2seq baselines on a real-world traffic dataset. The METR-LA dataset (Li et al., 2018) contains 4 months of 1D traffic speed measurements that were recorded via 207 sensors and aggregated into 5-minute windows. For this dataset, all conditional inputs $\mathcal{G} = (\mathcal{V},\mathcal{E},\mathbf{V},\mathbf{E})$ and $\mathbf{U}_{1:T}$ are provided to R-SSM, in which $\mathcal{E}$ is constructed by connecting two sensors if their road network distance is below a threshold, $\mathbf{V}$ stores the geographic
For this dataset, all conditional inputs $\mathcal{G} = (\mathcal{V},\mathcal{E},\mathbf{V},\mathbf{E})$ and $\mathbf{U}_{1:T}$ are provided to R-SSM, in which $\mathcal{E}$ is constructed by connecting two sensors if their road network distance is below a threshold, $\mathbf{V}$ stores the geographic + +positions and learnable embeddings of sensors, $\mathbf{E}$ stores the road network distances of edges, and $\mathbf{U}_{1:T}$ provides the time information (hour-of-day and day-of-week). We impute the missing values for training and exclude them from evaluation. GNF is not used because of GPU memory limitation. Following the settings in Li et al. (2018), we train our model on small time windows spanning 2 hours and use a 7:1:2 split for training, validation, and test. + +The comparison of mean absolute forecast errors (MAE) is reported in Table 4. The three forecast horizons correspond to 15, 30, and 60 minutes. We give point predictions by taking the element-wise median of 2K Monte Carlo forecasts. Compared with DCRNN (Li et al., 2018) and GaAN (Zhang et al., 2018), R-SSM delivers comparable short-term forecasts but slightly worse long-term forecasts. + +We argue that the results are admissible because: (i) By using MAE loss and scheduled sampling, the DCRNN and GaAN baselines are trained on the multi-step objective that they are later evaluated on, making them hard to beat. (ii) Some stochastic systems are inherently unpredictable beyond a few steps due to the process noise, e.g., the toy model in Section 5.1. In such case, multi-step MAE may not be a reasonable metric, and probabilistic forecasts may be preferred. The average coverage probabilities (CP) of $90\%$ prediction intervals reported in Table 4 indicate that R-SSM provides good uncertainty estimates. (iii) Improving the multi-step prediction ability of deep SSMs is still an open problem with a few recent attempts (Ke et al., 2019; Hafner et al., 2019). We would like to explore it in future work. 
# 6 CONCLUSIONS

In this work, we present a deep hierarchical state-space model in which the state transitions of correlated objects are coordinated by graph neural networks. To effectively learn the model from observation data, we develop a structured posterior approximation and propose two auxiliary contrastive prediction tasks to help the learning. We further introduce the graph normalizing flow to enhance the expressiveness of the joint transition density and the posterior approximation. The experiments show that our model can outperform or match the state of the art on several time series modeling tasks. Directions for future work include testing the model on high-dimensional observations, extending the model to directly learn from visual data, and including discrete latent variables in the model.

# ACKNOWLEDGMENTS

This work was supported by the National Key Research and Development Program of China (No. 2018YFB0505000) and the Alibaba-Zhejiang University Joint Institute of Frontier Technologies. Fan Yang would like to thank Qingchen Yu for her helpful feedback on early drafts of this paper.

# REFERENCES

Alexandre Alahi, Kratarth Goel, Vignesh Ramanathan, Alexandre Robicquet, Li Fei-Fei, and Silvio Savarese. Social LSTM: Human trajectory prediction in crowded spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
Christophe Andrieu, Arnaud Doucet, and Roman Holenstein. Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(3):269-342, 2010.
Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, and koray kavukcuoglu. Interaction networks for learning about objects, relations and physics. In Advances in Neural Information Processing Systems 29. 2016.
+Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018. +Justin Bayer and Christian Osendorfer. Learning stochastic recurrent networks. arXiv preprint arXiv:1411.7610, 2014. + +Lars Buesing, Theophane Weber, Sebastien Racaniere, SM Eslami, Danilo Rezende, David P Reichert, Fabio Viola, Frederic Besse, Karol Gregor, Demis Hassabis, et al. Learning and querying fast generative models for reinforcement learning. arXiv preprint arXiv:1802.03006, 2018. +Michael Chang, Tomer Ullman, Antonio Torralba, and Joshua Tenenbaum. A compositional object-based approach to learning physical dynamics. In International Conference on Learning Representations. 2017. +Weiqi Chen, Ling Chen, Yu Xie, Wei Cao, Yusong Gao, and Xiaojie Feng. Multi-range attentive bicomponent graph convolutional network for traffic forecasting. In AAAI Conference on Artificial Intelligence, 2020. +Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in Neural Information Processing Systems 28, 2015. +Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems 29. 2016. +Joshua V Dillon, Ian Langmore, Dustin Tran, Eugene Brevdo, Srinivas Vasudevan, Dave Moore, Brian Patton, Alex Alemi, Matt Hoffman, and Rif A Saurous. Tensorflow distributions. arXiv preprint arXiv:1711.10604, 2017. +Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. In International Conference on Learning Representations, 2017. +Arnaud Doucet and Adam M Johansen. A tutorial on particle filtering and smoothing: Fifteen years later. 
Handbook of Nonlinear Filtering, 2009.
David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alan Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems 28. 2015.
Marco Fraccaro. Deep Latent Variable Models for Sequential Data. PhD thesis, 2018.
Marco Fraccaro, Søren Kaae Sønderby, Ulrich Paquet, and Ole Winther. Sequential neural models with stochastic layers. In Advances in Neural Information Processing Systems 29, 2016.
Marco Fraccaro, Simon Kamronn, Ulrich Paquet, and Ole Winther. A disentangled recognition and nonlinear dynamics model for unsupervised learning. In Advances in Neural Information Processing Systems 30, 2017.
Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In Proceedings of the International Conference on Machine Learning, 2017.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems 27. 2014.
Anirudh Goyal, Alessandro Sordoni, Marc-Alexandre Côté, Nan Rosemary Ke, and Yoshua Bengio. Z-forcing: Training stochastic recurrent networks. In Advances in Neural Information Processing Systems 30, 2017.
Karol Gregor, George Papamakarios, Frederic Besse, Lars Buesing, and Theophane Weber. Temporal difference variational auto-encoder. In International Conference on Learning Representations, 2019.
Shixiang (Shane) Gu, Zoubin Ghahramani, and Richard E Turner. Neural adaptive sequential monte carlo. In Advances in Neural Information Processing Systems 28. 2015.
Agrim Gupta, Justin Johnson, Li Fei-Fei, Silvio Savarese, and Alexandre Alahi. Social GAN: Socially acceptable trajectories with generative adversarial networks.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018. + +Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning latent dynamics for planning from pixels. In Proceedings of the International Conference on Machine Learning, 2019. +Ehsan Hajiramezanali, Arman Hasanzadeh, Nick Duffield, Krishna R Narayanan, Mingyuan Zhou, and Xiaoning Qian. Variational graph recurrent neural networks. In Advances in Neural Information Processing Systems 32. 2019. +Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems 30. 2017. +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016. +Marcel Hirt and Petros Dellaportas. Scalable bayesian learning for state space models using variational inference with smc samplers. In Proceedings of the International Conference on Artificial Intelligence and Statistics, 2019. +Emiel Hoogeboom, Rianne van den Berg, and Max Welling. Emerging convolutions for generative normalizing flows. In Proceedings of the International Conference on Machine Learning, 2019. +Yedid Hoshen. Vain: Attentional multi-agent predictive modeling. In Advances in Neural Information Processing Systems 30. 2017. +Michael Janner, Sergey Levine, William T. Freeman, Joshua B. Tenenbaum, Chelsea Finn, and Jiajun Wu. Reasoning about physical interactions with object-centric models. In International Conference on Learning Representations, 2019. +Nikolas Kantas, Arnaud Doucet, Sumeetpal S Singh, Jan Maciejowski, Nicolas Chopin, et al. On particle methods for parameter estimation in state-space models. Statistical Science, 30(3):328-351, 2015. +Maximilian Karl, Maximilian Soelch, Justin Bayer, and Patrick van der Smagt. 
Deep variational bayes filters: Unsupervised learning of state space models from raw data. In International Conference on Learning Representations. 2016. +Nan Rosemary Ke, Amanpreet Singh, Ahmed Touati, Anirudh Goyal, Yoshua Bengio, Devi Parikh, and Dhruv Batra. Modeling the long term future in model-based reinforcement learning. In International Conference on Learning Representations, 2019. +Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In International Conference on Learning Representations. 2014. +Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems 31. 2018. +Durk P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved variational inference with inverse autoregressive flow. In Advances in Neural Information Processing Systems 29. 2016. +Thomas Kipf, Ethan Fetaya, Kuan-Chieh Wang, Max Welling, and Richard Zemel. Neural relational inference for interacting systems. In Proceedings of the International Conference on Machine Learning, 2018. +Vineet Kosaraju, Amir Sadeghian, Roberto Martín-Martín, Ian Reid, S. Hamid Rezatofighi, and Silvio Savarese. Social-bigat: Multimodal trajectory forecasting using bicycle-gan and graph attention networks. In Advances in Neural Information Processing Systems 32. 2019. +Adam Kosiorek, Hyunjik Kim, Yee Whye Teh, and Ingmar Posner. Sequential attend, infer, repeat: Generative modelling of moving objects. In Advances in Neural Information Processing Systems 31. 2018. + +Rahul G Krishnan, Uri Shalit, and David Sontag. Structured inference networks for nonlinear state space models. In AAAI Conference on Artificial Intelligence, 2017. +Tuan Anh Le, Maximilian Igl, Tom Rainforth, Tom Jin, and Frank Wood. Auto-encoding sequential monte carlo. In International Conference on Learning Representations, 2018. +Yaguang Li, Rose Yu, Cyrus Shahabi, and Yan Liu. 
Diffusion convolutional recurrent neural network: Data-driven traffic forecasting. In International Conference on Learning Representations, 2018. +Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. International Conference on Learning Representations, 2016. +Yunzhu Li, Jiajun Wu, Russ Tedrake, Joshua B. Tenenbaum, and Antonio Torralba. Learning particle dynamics for manipulating rigid bodies, deformable objects, and fluids. In International Conference on Learning Representations, 2019. +Jenny Liu, Aviral Kumar, Jimmy Ba, Jamie Kiros, and Kevin Swersky. Graph normalizing flows. In Advances in Neural Information Processing Systems 32. 2019. +Chris J Maddison, John Lawson, George Tucker, Nicolas Heess, Mohammad Norouzi, Andriy Mnih, Arnaud Doucet, and Yee Teh. Filtering variational objectives. In Advances in Neural Information Processing Systems 30. 2017. +Joseph Marino, Milan Cvitkovic, and Yisong Yue. A general method for amortizing variational filtering. In Advances in Neural Information Processing Systems 31. 2018. +Haggai Maron, Heli Ben-Hamu, Nadav Shamir, and Yaron Lipman. Invariant and equivariant graph networks. In International Conference on Learning Representations, 2019. +Damian Mrowca, Chengxu Zhuang, Elias Wang, Nick Haber, Li F Fei-Fei, Josh Tenenbaum, and Daniel L Yamins. Flexible neural representation for physics prediction. In Advances in Neural Information Processing Systems 31. 2018. +C. A. Naesseth, F. Lindsten, and T. B. Schon. High-dimensional filtering using nested sequential monte carlo. IEEE Transactions on Signal Processing, 67(16):4177-4188, 2019. +Christian Naesseth, Scott Linderman, Rajesh Ranganath, and David Blei. Variational sequential monte carlo. In Proceedings of the International Conference on Artificial Intelligence and Statistics, 2018. +Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. 
arXiv preprint arXiv:1807.03748, 2018. +George Papamakarios, Theo Pavlakou, and Iain Murray. Masked autoregressive flow for density estimation. In Advances in Neural Information Processing Systems 30. 2017. +Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In Proceedings of the International Conference on Machine Learning, 2015. +Nicholas Rhinehart, Rowan McAllister, Kris Kitani, and Sergey Levine. Precog: Prediction conditioned on goals in visual multi-agent settings. In The IEEE International Conference on Computer Vision, 2019. +Alvaro Sanchez-Gonzalez, Nicolas Heess, Jost Tobias Springenberg, Josh Merel, Martin Riedmiller, Raia Hadsell, and Peter Battaglia. Graph networks as learnable physics engines for inference and control. In Proceedings of the International Conference on Machine Learning, 2018. +Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80, 2009. +Chen Sun, Per Karlsson, Jiajun Wu, Joshua B Tenenbaum, and Kevin Murphy. Stochastic prediction of multi-agent interactions from partial observations. In International Conference on Learning Representations, 2019. + +Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27. 2014. +Andrea Tacchetti, H. Francis Song, Pedro A. M. Mediano, Vinicius Zambaldi, János Kramár, Neil C. Rabinowitz, Thore Graepel, Matthew Botvinick, and Peter W. Battaglia. Relational forward models for multi-agent learning. In International Conference on Learning Representations, 2019. +Yichuan Charlie Tang and Ruslan Salakhutdinov. Multiple futures prediction. In Advances in Neural Information Processing Systems 32. 2019. +Sjoerd van Steenkiste, Michael Chang, Klaus Greff, and Jürgen Schmidhuber. 
Relational neural expectation maximization: Unsupervised discovery of objects and their interactions. In International Conference on Learning Representations, 2018.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems 30. 2017.
Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In International Conference on Learning Representations, 2018.
Nicholas Watters, Daniel Zoran, Theophane Weber, Peter Battaglia, Razvan Pascanu, and Andrea Tacchetti. Visual interaction networks: Learning a physics simulator from video. In Advances in Neural Information Processing Systems 30. 2017.
Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2019.
Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W. Cohen. Breaking the softmax bottleneck: a high-rank RNN language model. In International Conference on Learning Representations, 2018.
Raymond A. Yeh, Alexander G. Schwing, Jonathan Huang, and Kevin Murphy. Diverse generation for multi-agent sports games. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019.
Eric Zhan, Stephan Zheng, Yisong Yue, Long Sha, and Patrick Lucey. Generating multi-agent trajectories using programmatic weak supervision. In International Conference on Learning Representations, 2019.
Jiani Zhang, Xingjian Shi, Junyuan Xie, Hao Ma, Irwin King, and Dit-Yan Yeung. GaAN: Gated attention networks for learning on large and spatiotemporal graphs. In Conference on Uncertainty in Artificial Intelligence, 2018.
Xun Zheng, Manzil Zaheer, Amr Ahmed, Yuan Wang, Eric P Xing, and Alexander J Smola. State space lstm models with particle mcmc inference. arXiv preprint arXiv:1711.11179, 2017.
# A APPENDIX

# A.1 MHA IMPLEMENTATION

Each attention head $k \in [K]$ is separately parameterized and operates as follows. For each $(j,i) \in \mathcal{E}$, it produces a message vector $\beta_{j \to i}^{k}$ together with an unnormalized scalar weight $\omega_{j \to i}^{k}$. Then, for each $i \in \mathcal{V}$, it aggregates all messages sent to $i$ in a permutation-invariant manner:

$$
\left\{\alpha_{p \rightarrow i}^{k}\right\}_{p \in \mathcal{N}_{i}^{-}} = \operatorname{softmax}\left(\left\{\omega_{p \rightarrow i}^{k}\right\}_{p \in \mathcal{N}_{i}^{-}}\right), \quad \mathbf{a}_{i}^{k} = \sum_{p \in \mathcal{N}_{i}^{-}} \alpha_{p \rightarrow i}^{k} \boldsymbol{\beta}_{p \rightarrow i}^{k}.
$$

Putting all $K$ heads together, it turns out that $\mathcal{M}_{j\rightarrow i} = \{(\omega_{j\rightarrow i}^k,\beta_{j\rightarrow i}^k)\}_{k = 1}^K$ and $\mathcal{A}_i = \{\mathbf{a}_i^k\}_{k = 1}^K$. Specifically, each attention head is parameterized in a query-key-value style:

$$
\begin{array}{l}
\forall i \in \mathcal{V}: \quad \tilde{\mathbf{v}}_{i} = \operatorname{MLP}_{v}(\mathbf{v}_{i}) \in \mathbb{R}^{\tilde{d}_{v}}, \quad \tilde{\mathbf{h}}_{i} = [\mathbf{h}_{i}, \tilde{\mathbf{v}}_{i}, \mathbf{g}] \\
\mathbf{Q} = \tilde{\mathbf{H}}\mathbf{W}_{Q}, \quad \mathbf{A} = \tilde{\mathbf{H}}\mathbf{W}_{A}, \quad \mathbf{C} = \tilde{\mathbf{H}}\mathbf{W}_{C}
\end{array}
$$

$$
\boldsymbol{\beta}_{j \rightarrow i} = \mathbf{c}_{j}, \quad \omega_{j \rightarrow i} = \mathbf{q}_{i}^{\top}\mathbf{a}_{j}/\sqrt{d_{q}} + \operatorname{MLP}_{e}(\mathbf{e}_{ji}) \in \mathbb{R}
$$

for $\tilde{d} = d + \tilde{d}_v + d_g$, $\mathbf{W}_Q \in \mathbb{R}^{\tilde{d} \times d_q}$, $\mathbf{W}_A \in \mathbb{R}^{\tilde{d} \times d_q}$, and $\mathbf{W}_C \in \mathbb{R}^{\tilde{d} \times d_c}$.
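A minimal NumPy sketch of a single attention head following the equations above (the MLPs are collapsed into precomputed inputs, and `edge_bias` stands in for the $\operatorname{MLP}_{e}(\mathbf{e}_{ji})$ term; all names and shapes are illustrative, not the paper's implementation):

```python
import numpy as np

def attention_head(h_tilde, edges, W_Q, W_A, W_C, edge_bias):
    """One attention head over a directed graph.
    h_tilde: (N, d~) concatenated node inputs; edges: iterable of (j, i) pairs;
    edge_bias: dict mapping (j, i) -> scalar edge-feature term."""
    Q, A, C = h_tilde @ W_Q, h_tilde @ W_A, h_tilde @ W_C
    d_q = W_Q.shape[1]
    out = np.zeros((h_tilde.shape[0], W_C.shape[1]))
    for i in range(h_tilde.shape[0]):
        senders = [j for (j, dst) in edges if dst == i]  # in-neighborhood N_i^-
        if not senders:
            continue
        # omega_{j->i}: scaled dot-product score plus edge-feature bias
        w = np.array([Q[i] @ A[j] / np.sqrt(d_q) + edge_bias.get((j, i), 0.0)
                      for j in senders])
        alpha = np.exp(w - w.max())
        alpha /= alpha.sum()  # softmax over incoming edges
        # a_i: attention-weighted sum of message vectors beta_{j->i} = c_j
        out[i] = alpha @ C[senders]
    return out
```

The $K$ heads of the MHA layer would each run this computation with their own weight matrices, and their outputs $\{\mathbf{a}_i^k\}_{k=1}^K$ would be collected per node.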
# A.2 MODEL DETAILS

In this work, the READOUT function is implemented by passing the concatenation of the outputs of a mean aggregator and an element-wise max aggregator through a gated activation unit. The transition densities in the generative model are specified to be:

$$
f_{\theta}^{g}\left(\mathbf{z}_{t}^{g} \mid \mathbf{h}_{t}^{g}\right) = \operatorname{Normal}\left(\cdot \mid \boldsymbol{\mu}_{\theta}^{g}\left(\mathbf{h}_{t}^{g}\right), \boldsymbol{\Sigma}_{\theta}^{g}\left(\mathbf{h}_{t}^{g}\right)\right), \tag{7}
$$

$$
f_{\theta}^{\star}\left(\mathbf{Z}_{t} \mid \mathbf{H}_{t}\right) = \prod_{i = 1}^{N} \operatorname{Normal}\left(\mathbf{z}_{t}^{(i)} \mid \boldsymbol{\mu}_{\theta}^{\star}\left(\mathbf{h}_{t}^{(i)}\right), \boldsymbol{\Sigma}_{\theta}^{\star}\left(\mathbf{h}_{t}^{(i)}\right)\right), \tag{8}
$$

where $\pmb{\mu}_{\theta}^{g}$ and $\pmb{\Sigma}_{\theta}^{g}$ (similarly $\pmb{\mu}_{\theta}^{\star}$ and $\pmb{\Sigma}_{\theta}^{\star}$) are 3-layer MLPs that share their first layer. $\pmb{\Sigma}_{\theta}^{g}$ and $\pmb{\Sigma}_{\theta}^{\star}$ output diagonal covariance matrices using the softplus activation. The proposal densities $r_{\phi}^{g}$ and $r_{\phi}^{\star}$ are specified in a similar way. Then GNFs can be stacked on top of $f_{\theta}^{\star}$ and $r_{\phi}^{\star}$ to make them more expressive.

# A.3 EXPERIMENTS

We use the Adam optimizer with an initial learning rate of 0.001 and gradient clipping at 1.0 for all experiments. The learning rate is annealed following a linear cosine decay. We set $\beta_{1} = \beta_{2} = 1.0$ for the auxiliary losses in all experiments.

Synthetic toy dataset. A typical example in the dataset is visualized in Figure 2. The architectures of the models are specified as follows.

(a) VRNN: Using 128-dimensional latent variables and a two-layer, 512-unit GRU.
(b) GNN-AR: Using a two-layer GNN and a one-layer, 128-unit GRU shared by all nodes.
(c) R-SSM: We let $d_{g} = d_{z} = 8$. All RNNs are specified to be two-layer, 32-unit LSTMs. All MLPs use 64 hidden units. The generative model and the proposal both use a 4-head MHA layer. 4 SMC samples and a batch size of 16 are used in training.

Basketball player movement. We let $d_{g} = d_{z} = 32$. All RNNs are specified to be two-layer, 64-unit LSTMs and all MLPs use 256 hidden units. The generative model uses one 8-head MHA layer and the proposal uses two 8-head MHA layers. Each GNF uses an additional MHA layer shared by the functions $s(\cdot)$ and $t(\cdot)$. 4 SMC samples and a batch size of 64 are used in training. Eight selected rollouts from the trained model are visualized in Figure 3.

Road traffic. We let $d_g = d_z = 8$. A 32-dimensional embedding for each sensor is jointly learned as a part of the vertex attribute. All RNNs are specified to be two-layer, 32-unit LSTMs and all MLPs use 64 hidden units. The generative model and the proposal both use two 8-head MHA layers. 3 SMC samples and a batch size of 16 are used in training.

![](images/9a08bdb8bd8614a5cf14ca5484b9bcf0484f7f655f125fc38eaf77d941882c23.jpg)
Figure 2: An example from the toy dataset $(N = 36, T = 80)$.

![](images/ce24610fdf2a319d3173fde0d899f794d293640825ceac666b539639bc4faa68.jpg)
Figure 3: Selected rollouts from the trained model. Black dots represent the starting points.
# A.4 TRAINING

We optimize the VSMC bound estimated by the following SMC algorithm:

Algorithm 1 Estimate the VSMC bound $\mathcal{L}_K^{\mathrm{SMC}}$
Input: graph $\mathcal{G}$, observations $\mathbf{X}_{1:T}$, exogenous inputs $\mathbf{U}_{1:T}$
Require: generative model $\{f_{\theta}^{g}, f_{\theta}^{\star}, g_{\theta}\}$, proposal $\{r_{\phi}^{g}, r_{\phi}^{\star}\}$, number of particles $K$

for $k = 1\dots K$ do
Simulate $\mathbf{z}_{1,k}^{g}\sim r_{\phi}^{g}(\cdot |\mathcal{G},\mathbf{X}_{1},\mathbf{U}_{1})$
Simulate $\mathbf{Z}_{1,k}\sim r_{\phi}^{\star}(\cdot |\mathbf{z}_{1,k}^{g},\mathcal{G},\mathbf{X}_{1},\mathbf{U}_{1})$
Set $w_{1}^{k} = \frac{f_{\theta}^{g}(\mathbf{z}_{1,k}^{g})f_{\theta}^{\star}(\mathbf{Z}_{1,k}|\mathbf{z}_{1,k}^{g},\mathcal{G},\mathbf{U}_{1})\prod_{i = 1}^{N}g_{\theta}(\mathbf{x}_{1}^{(i)}|\mathbf{z}_{1,k}^{(i)},\mathbf{z}_{1,k}^{g},\dots)}{r_{\phi}^{g}(\mathbf{z}_{1,k}^{g}|\dots)r_{\phi}^{\star}(\mathbf{Z}_{1,k}|\dots)}$
end for
Initialize $\hat{\mathcal{L}}_K^{\mathrm{SMC}} = \log \sum_{k = 1}^K w_1^k /K$

for $t = 2\dots T$ do
$\{\mathbf{z}_{< t,k}^{g},\mathbf{Z}_{< t,k}\}_{k = 1}^{K} = \mathrm{RESAMPLE}(\{\mathbf{z}_{< t,k}^{g},\mathbf{Z}_{< t,k},w_{t - 1}^{k}\}_{k = 1}^{K})$
for $k = 1\dots K$ do
Simulate $\mathbf{z}_{t,k}^{g}\sim r_{\phi}^{g}(\cdot |\mathbf{z}_{< t,k}^{g},\mathbf{Z}_{< t,k},\mathcal{G},\mathbf{X}_{\leq t},\mathbf{U}_{\leq t})$
Simulate $\mathbf{Z}_{t,k}\sim r_{\phi}^{\star}(\cdot |\mathbf{z}_{\leq t,k}^{g},\mathbf{Z}_{< t,k},\mathcal{G},\mathbf{X}_{\leq t},\mathbf{U}_{\leq t})$
Set $w_{t}^{k} = \frac{f_{\theta}^{g}(\mathbf{z}_{t,k}^{g}|...)f_{\theta}^{\star}(\mathbf{Z}_{t,k}|\mathbf{z}_{\leq t,k}^{g},\mathbf{Z}_{< t,k},\dots)\prod_{i = 1}^{N}g_{\theta}(\mathbf{x}_t^{(i)}|\mathbf{z}_{t,k}^{(i)},\mathbf{z}_{\leq t,k}^{g},\mathbf{Z}_{< t,k},\dots)}{r_{\phi}^{g}(\mathbf{z}_{t,k}^{g}|...)r_{\phi}^{\star}(\mathbf{Z}_{t,k}|...)}$
Set $\mathbf{z}_{\leq t,k}^{g} = (\mathbf{z}_{< 
t,k}^{g},\mathbf{z}_{t,k}^{g}),\mathbf{Z}_{\leq t,k} = (\mathbf{Z}_{< t,k},\mathbf{Z}_{t,k})$
end for
Update $\hat{\mathcal{L}}_K^{\mathrm{SMC}} = \hat{\mathcal{L}}_K^{\mathrm{SMC}} + \log \sum_{k = 1}^{K}w_t^k /K$
end for

Output: $\hat{\mathcal{L}}_K^{\mathrm{SMC}}$

When augmenting an image, we set $\hat{m}_i = m_i$ if $m_i > 0.8$ and $\hat{m}_i = 0$ otherwise, and sample magnitude bins from $\mathrm{Categorical}(\mathrm{Normalize}(\hat{m}))$. To update the weights of the sampled transformations, we first sample a magnitude bin $m_i$ for each transformation parameter uniformly at random. The resulting transformations are applied to a labeled example $x$ with label $p$ to obtain an augmented version $\hat{x}$. Then, we measure the extent to which the model's prediction matches the label as $\omega = 1 - \frac{1}{2L} \sum |p_{\mathrm{model}}(y|\hat{x};\theta) - p|$. The weight for each sampled magnitude bin is updated as $m_i = \rho m_i + (1 - \rho)\omega$, where $\rho = 0.99$ is a fixed exponential decay hyperparameter.

# 3.3 PUTTING IT ALL TOGETHER

ReMixMatch's algorithm for processing a batch of labeled and unlabeled examples is shown in Algorithm 1. The main purpose of this algorithm is to produce the collections $\mathcal{X}'$ and $\mathcal{U}'$, consisting of augmented labeled and unlabeled examples with MixUp applied. The labels and label guesses in $\mathcal{X}'$ and $\mathcal{U}'$ are fed into standard cross-entropy loss terms against the model's predictions. Algorithm 1 also outputs $\hat{\mathcal{U}}_1$, which consists of a single heavily-augmented version of each unlabeled image and its label guesses without MixUp applied. $\hat{\mathcal{U}}_1$ is used in two additional loss terms which provide a mild boost in performance in addition to improved stability:

Pre-mixup unlabeled loss We feed the guessed labels and predictions for each example in $\hat{\mathcal{U}}_1$ as-is into a separate cross-entropy loss term.
Rotation loss Recent results have shown that applying ideas from self-supervised learning to SSL can produce strong performance (Gidaris et al., 2018; Zhai et al., 2019). We integrate this idea by rotating each image $u \in \hat{\mathcal{U}}_1$ as $\mathrm{Rotate}(u, r)$, where we sample the rotation angle $r$ uniformly from $r \sim \{0, 90, 180, 270\}$, and then ask the model to predict the rotation amount as a four-class classification problem.

In total, the ReMixMatch loss is

$$
\begin{aligned}
&\sum_{x,p\in\mathcal{X}'} \mathrm{H}(p, p_{\mathrm{model}}(y|x;\theta)) + \lambda_{\mathcal{U}} \sum_{u,q\in\mathcal{U}'} \mathrm{H}(q, p_{\mathrm{model}}(y|u;\theta)) \quad (3) \\
&+ \lambda_{\hat{\mathcal{U}}_1} \sum_{u,q\in\hat{\mathcal{U}}_1} \mathrm{H}(q, p_{\mathrm{model}}(y|u;\theta)) + \lambda_{r} \sum_{u\in\hat{\mathcal{U}}_1} \mathrm{H}(r, p_{\mathrm{model}}(r|\mathrm{Rotate}(u,r);\theta)) \quad (4)
\end{aligned}
$$

Hyperparameters ReMixMatch introduces two new hyperparameters: the weight on the rotation loss $\lambda_{r}$ and the weight on the un-augmented example $\lambda_{\hat{\mathcal{U}}_1}$. In practice, both are fixed to $\lambda_{r} = \lambda_{\hat{\mathcal{U}}_{1}} = 0.5$. ReMixMatch also shares many hyperparameters with MixMatch: the weight for the unlabeled loss $\lambda_{\mathcal{U}}$, the sharpening temperature $T$, the MixUp Beta parameter, and the number of augmentations $K$. All experiments (unless otherwise stated) use $T = 0.5$, $\mathrm{Beta} = 0.75$, and $\lambda_{\mathcal{U}} = 1.5$. We found that using a larger number of augmentations monotonically increases accuracy, and so set $K = 8$ for all experiments (as running with $K$ augmentations increases computation by a factor of $K$).
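The rotation pretext task that feeds the $\lambda_r$ term can be sketched as follows (a hypothetical NumPy helper, not the authors' code; the returned labels index the four rotation classes):

```python
import numpy as np

def make_rotation_batch(images, rng):
    """Self-supervised rotation task: rotate each HxW image by a uniformly
    sampled multiple of 90 degrees and return the 4-class rotation labels."""
    labels = rng.integers(0, 4, size=len(images))  # 0,1,2,3 -> 0,90,180,270 degrees
    rotated = np.stack([np.rot90(img, k=int(r)) for img, r in zip(images, labels)])
    return rotated, labels
```

The model's extra four-way softmax head is then trained with cross-entropy against these labels.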
We train our models using Adam (Kingma & Ba, 2015) with a fixed learning rate of 0.002 and weight decay (Zhang et al., 2018) with a fixed value of 0.02. We take the final model as an exponential moving average over the trained model weights with a decay of 0.999.

# 4 EXPERIMENTS

We now test the efficacy of ReMixMatch on a set of standard semi-supervised learning benchmarks. Unless otherwise noted, all of the experiments performed in this section use the same codebase and model architecture (a Wide ResNet-28-2 (Zagoruyko & Komodakis, 2016) with 1.5 million parameters, as used in (Oliver et al., 2018)).

# 4.1 REALISTIC SSL SETTING

We follow the Realistic Semi-Supervised Learning (Oliver et al., 2018) recommendations for performing SSL evaluations. In particular, as mentioned above, this means we use the same model and training algorithm in the same codebase for all experiments. We compare against VAT (Miyato et al., 2018) and MeanTeacher (Tarvainen & Valpola, 2017), copying the re-implementations over from the MixMatch codebase (Berthelot et al., 2019).

Fully supervised baseline To begin, we train a fully-supervised baseline to measure the highest accuracy we could hope to obtain with our training pipeline. The experiments we perform use the same model and training algorithm, so these baselines are valid for all discussed SSL techniques. On CIFAR-10, we obtain a fully-supervised error rate of $4.25\%$ using weak flip + crop augmentation, which drops to $3.62\%$ using AutoAugment and $3.91\%$ using CTAugment. Similarly, on SVHN we obtain $2.70\%$ error using weak (flip) augmentation and $2.31\%$ and $2.16\%$ using AutoAugment and CTAugment, respectively. While AutoAugment performs slightly better on CIFAR-10 and slightly worse on SVHN compared to CTAugment, it is not our intent to design a better augmentation strategy, just one that can be used without pre-training or hyperparameter tuning.
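The exponential moving average over trained weights mentioned above can be sketched as follows (a minimal NumPy illustration with a hypothetical dict-of-arrays parameter store; not the authors' implementation):

```python
import numpy as np

def ema_update(shadow, params, decay=0.999):
    """One step of an exponential moving average over model weights.
    `shadow` holds the averaged copy used as the final model."""
    for name in params:
        shadow[name] = decay * shadow[name] + (1.0 - decay) * params[name]
    return shadow
```

After each optimizer step, `ema_update` is called with the current parameters; evaluation then uses the `shadow` copy.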
+ +CIFAR-10 Our results on CIFAR-10 are shown in table 1, left. ReMixMatch sets the new state-of-the-art for all numbers of labeled examples. Most importantly, ReMixMatch is $16 \times$ more data efficient than MixMatch (e.g., at 250 labeled examples ReMixMatch has identical accuracy compared to MixMatch at 4,000). + +SVHN Results for SVHN are shown in table 1, right. ReMixMatch reaches state-of-the-art at 250 labeled examples, and within the margin of error for state-of-the-art otherwise. + +
| Method | CIFAR-10 (250 labels) | CIFAR-10 (1000 labels) | CIFAR-10 (4000 labels) | SVHN (250 labels) | SVHN (1000 labels) | SVHN (4000 labels) |
| --- | --- | --- | --- | --- | --- | --- |
| VAT | 36.03±2.82 | 18.64±0.40 | 11.05±0.31 | 8.41±1.01 | 5.98±0.21 | 4.20±0.15 |
| Mean Teacher | 47.32±4.71 | 17.32±4.00 | 10.36±0.25 | 6.45±2.43 | 3.75±0.10 | 3.39±0.11 |
| MixMatch | 11.08±0.87 | 7.75±0.32 | 6.24±0.06 | 3.78±0.26 | 3.27±0.31 | 2.89±0.06 |
| ReMixMatch | 6.27±0.34 | 5.73±0.16 | 5.14±0.04 | 3.10±0.50 | 2.83±0.30 | 2.42±0.09 |
| UDA, reported* | 8.76±0.90 | 5.87±0.13 | 5.29±0.25 | 2.76±0.17 | 2.55±0.09 | 2.47±0.15 |
Table 1: Results on CIFAR-10 and SVHN. * For UDA, due to adaptation difficulties, we report the results from Xie et al. (2019), which are not comparable to our results due to a different network implementation, training procedure, etc. For VAT, Mean Teacher, and MixMatch, we report results using our reimplementation, which makes them directly comparable to ReMixMatch's scores.

# 4.2 STL-10

The STL-10 dataset consists of 5,000 labeled $96 \times 96$ color images drawn from 10 classes and 100,000 unlabeled images drawn from a similar, but not identical, data distribution. The labeled set is partitioned into ten pre-defined folds of 1,000 images each. For efficiency, we only run our analysis on five of these ten folds. When comparing to non-MixMatch results, we do not evaluate under the Realistic SSL (Oliver et al., 2018) setting. Our results are, however, directly comparable to the MixMatch results. Using the same WRN-37-2 network (23.8 million parameters), we reduce the error rate by a factor of two compared to MixMatch.
| Method | Error Rate |
| --- | --- |
| SWWAE | 25.70 |
| CC-GAN | 22.20 |
| MixMatch | 10.18 ± 1.46 |
| ReMixMatch (K=1) | 6.77 ± 1.66 |
| ReMixMatch (K=4) | 6.18 ± 1.24 |
+ +Table 2: STL-10 error rate using 1000-label splits. SWWAE and CCGAN results are from (Zhao et al., 2015) and (Denton et al., 2016). + +
| Ablation | Error Rate | Ablation | Error Rate |
| --- | --- | --- | --- |
| ReMixMatch | 5.94 | No rotation loss | 6.08 |
| With K=1 | 7.32 | No pre-mixup loss | 6.66 |
| With K=2 | 6.74 | No dist. alignment | 7.28 |
| With K=4 | 6.21 | L2 unlabeled loss | 17.28 |
| With K=16 | 5.93 | No strong aug. | 12.51 |
| MixMatch | 11.08 | No weak aug. | 29.36 |
Table 3: Ablation study. Error rates are reported on a single 250-label split from CIFAR-10.

# 4.3 TOWARDS FEW-SHOT LEARNING

We find that ReMixMatch is able to work in extremely low-label settings. By only changing $\lambda_r$ from 0.5 to 2, we can train CIFAR-10 with just four labels per class and SVHN with only 40 labels total. On CIFAR-10 we obtain a median-of-five error rate of $15.08\%$; on SVHN we reach $3.48\%$ error, and on SVHN with the "extra" dataset we reach $2.81\%$ error. Full results are given in Appendix B.

# 4.4 ABLATION STUDY

Because we have made several changes to the existing MixMatch algorithm, here we perform an ablation study, removing one component of ReMixMatch at a time to understand which changes produce the largest accuracy gains. Our ablation results are summarized in Table 3. We find that removing the pre-mixup unlabeled loss, removing distribution alignment, and lowering $K$ all hurt performance by a small amount. Given that distribution alignment improves performance, we were interested to see whether it also had the intended effect of making the marginal distribution of model predictions match the ground-truth marginal class distribution. We measure this directly in Appendix D. Removing the rotation loss reduces accuracy at 250 labels by only 0.14 percentage points, but we find that in the 40-label setting the rotation loss is necessary to prevent collapse. Changing the cross-entropy loss on unlabeled data to an L2 loss as used in MixMatch hurts performance dramatically, as does removing either of the augmentation components. This validates using augmentation anchoring in place of the consistency regularization mechanism of MixMatch.

# 5 CONCLUSION

Progress on semi-supervised learning over the past year has upended many long-held beliefs about classification, namely, that vast quantities of labeled data are necessary.
By introducing augmentation anchoring and distribution alignment to MixMatch, we continue this trend: ReMixMatch reduces the quantity of labeled data needed by a large factor compared to prior work (e.g., beating MixMatch at 4000 labeled examples with only 250 on CIFAR-10, and closely approaching MixMatch at 5000 labeled examples with only 1000 on STL-10). In future work, we are interested in pushing the limited-data regime further to close the gap between few-shot learning and SSL. We also note that in many real-life scenarios, a dataset begins as unlabeled and is incrementally labeled until satisfactory performance is achieved. Our strong empirical results suggest that it will be possible to achieve gains in this "active learning" setting by using ideas from ReMixMatch. Finally, in this paper we present results on widely-studied image benchmarks for ease of comparison. However, the true power of data-efficient learning will come from applying these techniques to real-world problems where obtaining labeled data is expensive or impractical.

# REFERENCES

Philip Bachman, Ouais Alsharif, and Doina Precup. Learning with pseudo-ensembles. In Advances in Neural Information Processing Systems, 2014.
Yoshua Bengio, Olivier Delalleau, and Nicolas Le Roux. Label Propagation and Quadratic Criterion, chapter 11. MIT Press, 2006.
David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin Raffel. Mixmatch: A holistic approach to semi-supervised learning. arXiv preprint arXiv:1905.02249, 2019.
John S. Bridle, Anthony J. R. Heading, and David J. C. MacKay. Unsupervised classifiers, mutual information and phantom targets. In Advances in Neural Information Processing Systems, 1992.
Glenn W. Brier. Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78(1):1-3, 1950.
Olivier Chapelle, Bernhard Scholkopf, and Alexander Zien. Semi-Supervised Learning. MIT Press, 2006.
Ekin D.
Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V. Le. Autoaugment: Learning augmentation policies from data. arXiv preprint arXiv:1805.09501, 2018. +Ekin D. Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V. Le. Randaugment: An automated, randomized data augmentation strategy. arXiv preprint arXiv:XXXX.XXXXX, 2019. +Emily Denton, Sam Gross, and Rob Fergus. Semi-supervised learning with context-conditional generative adversarial networks. arXiv preprint arXiv:1611.06430, 2016. +Alexander Gammerman, Volodya Vovk, and Vladimir Vapnik. Learning by transduction. In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence, 1998. +Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. arXiv preprint arXiv:1803.07728, 2018. +Yves Grandvalet and Yoshua Bengio. Semi-supervised learning by entropy minimization. In Advances in Neural Information Processing Systems, 2005. +Geoffrey Hinton and Drew van Camp. Keeping neural networks simple by minimizing the description length of the weights. In Proceedings of the 6th Annual ACM Conference on Computational Learning Theory, 1993. +Thorsten Joachims. Transductive inference for text classification using support vector machines. In International Conference on Machine Learning, 1999. +Thorsten Joachims. Transductive learning via spectral graph partitioning. In International Conference on Machine Learning, 2003. +Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Third International Conference on Learning Representations, 2015. +Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. In *Fifth International Conference on Learning Representations*, 2017. +Dong-Hyun Lee. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In ICML Workshop on Challenges in Representation Learning, 2013. 
+Sungbin Lim, Ildoo Kim, Taesup Kim, Chiheon Kim, and Sungwoong Kim. Fast autoaugment. arXiv preprint arXiv:1905.00397, 2019. +Bin Liu, Zhirong Wu, Han Hu, and Stephen Lin. Deep metric transfer for label propagation with limited annotated data. arXiv preprint arXiv:1812.08781, 2018. + +Geoffrey J. McLachlan. Iterative reclassification procedure for constructing an asymptotically optimal rule of allocation in discriminant analysis. Journal of the American Statistical Association, 70(350):365-369, 1975. +Takeru Miyato, Shin-ichi Maeda, Shin Ishii, and Masanori Koyama. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE transactions on pattern analysis and machine intelligence, 2018. +Avital Oliver, Augustus Odena, Colin Raffel, Ekin Dogus Cubuk, and Ian Goodfellow. Realistic evaluation of deep semi-supervised learning algorithms. In Advances in Neural Information Processing Systems, pp. 3235-3246, 2018. +Chuck Rosenberg, Martial Hebert, and Henry Schneiderman. Semi-supervised self-training of object detection models. In Proceedings of the Seventh IEEE Workshops on Application of Computer Vision, 2005. +Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. Regularization with stochastic transformations and perturbations for deep semi-supervised learning. In Advances in Neural Information Processing Systems, 2016. +Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in neural information processing systems, 2017. +Vikas Verma, Alex Lamb, Juho Kannala, Yoshua Bengio, and David Lopez-Paz. Interpolation consistency training for semi-supervised learning. arXiv preprint arXiv:1903.03825, 2019. +Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V. Le. Unsupervised data augmentation for consistency training. arXiv preprint arXiv:1904.12848, 2019. +Sergey Zagoruyko and Nikos Komodakis. 
Wide residual networks. In Proceedings of the British Machine Vision Conference (BMVC), 2016.
Xiaohua Zhai, Avital Oliver, Alexander Kolesnikov, and Lucas Beyer. $S^4L$: Self-supervised semi-supervised learning. arXiv preprint arXiv:1905.03670, 2019.
Guodong Zhang, Chaoqi Wang, Bowen Xu, and Roger Grosse. Three mechanisms of weight decay regularization. arXiv preprint arXiv:1810.12281, 2018.
Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017.
Junbo Zhao, Michael Mathieu, Ross Goroshin, and Yann Lecun. Stacked what-where auto-encoders. arXiv preprint arXiv:1506.02351, 2015.

# A PROOF OF EQUATION 2

The proof here closely follows Bridle et al. (1992). We begin with the definition

$$
\mathcal{I}(y; x) = \iint p(y, x) \log \frac{p(y, x)}{p(y)\, p(x)}\, \mathrm{d}y\, \mathrm{d}x \tag{5}
$$

Rewriting terms we obtain

$$
= \int p(x)\, \mathrm{d}x \int p(y \mid x) \log \frac{p(y \mid x)}{p(y)}\, \mathrm{d}y \tag{6}
$$

$$
= \int p(x)\, \mathrm{d}x \int p(y \mid x) \log \frac{p(y \mid x)}{\int p(x)\, p(y \mid x)\, \mathrm{d}x}\, \mathrm{d}y \tag{7}
$$

Then, rewriting both integrals as expectations we obtain

$$
= \mathbb{E}_{x} \left[ \int p(y \mid x) \log \frac{p(y \mid x)}{\mathbb{E}_{x}[p(y \mid x)]}\, \mathrm{d}y \right] \tag{8}
$$

$$
= \mathbb{E}_{x} \left[ \sum_{i=1}^{L} p(y_i \mid x) \log \frac{p(y_i \mid x)}{\mathbb{E}_{x}[p(y_i \mid x)]} \right] \tag{9}
$$

$$
= \mathbb{E}_{x} \left[ \sum_{i=1}^{L} p(y_i \mid x) \log p(y_i \mid x) \right] - \mathbb{E}_{x} \left[ \sum_{i=1}^{L} p(y_i \mid x) \log \mathbb{E}_{x}[p(y_i \mid x)] \right] \tag{10}
$$

$$
= \mathbb{E}_{x} \left[ \sum_{i=1}^{L} p(y_i \mid x) \log p(y_i \mid x) \right] - \sum_{i=1}^{L} \mathbb{E}_{x}[p(y_i \mid x)] \log \mathbb{E}_{x}[p(y_i \mid x)] \tag{11}
$$

$$
= \mathcal{H}\left(\mathbb{E}_{x}\left[p_{\mathrm{model}}(y \mid x; \theta)\right]\right) - \mathbb{E}_{x}\left[\mathcal{H}\left(p_{\mathrm{model}}(y \mid x; \theta)\right)\right] \tag{12}
$$

# B FULL 40 LABEL RESULTS

We now report the full results for running ReMixMatch with just 40 labeled examples. We sort the table by error rate over five different splits (i.e., 40-label subsets) of the training data. High variance is to be expected when choosing so few labeled examples at random.

| Dataset | Split 1 | Split 2 | Split 3 | Split 4 | Split 5 |
| --- | --- | --- | --- | --- | --- |
| CIFAR-10 | 10.88 | 12.65 | 15.08 | 16.78 | 19.49 |
| SVHN | 3.43 | 3.46 | 3.48 | 4.06 | 12.24 |
| SVHN+extra | 2.59 | 2.71 | 2.81 | 3.50 | 15.14 |
Table 4: Sorted error rate of ReMixMatch with 40 labeled examples.

# C TRANSFORMATIONS INCLUDED IN CTAUGMENT

| Transformation | Description | Parameter | Range |
| --- | --- | --- | --- |
| Autocontrast | Maximizes the image contrast by setting the darkest (lightest) pixel to black (white), and then blends with the original image with blending ratio λ. | λ | [0, 1] |
| Blur | | | |
| Brightness | Adjusts the brightness of the image. B = 0 returns a black image, B = 1 returns the original image. | B | [0, 1] |
| Color | Adjusts the color balance of the image like in a TV. C = 0 returns a black & white image, C = 1 returns the original image. | C | [0, 1] |
| Contrast | Controls the contrast of the image. C = 0 returns a gray image, C = 1 returns the original image. | C | [0, 1] |
| Cutout | Sets a random square patch of side-length (L × image width) pixels to gray. | L | [0, 0.5] |
| Equalize | Equalizes the image histogram, and then blends with the original image with blending ratio λ. | λ | [0, 1] |
| Invert | Inverts the pixels of the image, and then blends with the original image with blending ratio λ. | λ | [0, 1] |
| Identity | Returns the original image. | | |
| Posterize | Reduces each pixel to B bits. | B | [1, 8] |
| Rescale | Takes a center crop that is of side-length (L × image width), and rescales to the original image size using method M. | L | [0.5, 1.0] |
| | | M | see caption |
| Rotate | Rotates the image by θ degrees. | θ | [-45, 45] |
| Sharpness | Adjusts the sharpness of the image, where S = 0 returns a blurred image, and S = 1 returns the original image. | S | [0, 1] |
| Shear_x | Shears the image along the horizontal axis with rate R. | R | [-0.3, 0.3] |
| Shear_y | Shears the image along the vertical axis with rate R. | R | [-0.3, 0.3] |
| Smooth | Adjusts the smoothness of the image, where S = 0 returns a maximally smooth image, and S = 1 returns the original image. | S | [0, 1] |
| Solarize | Inverts all pixels above a threshold value of T. | T | [0, 1] |
| Translate_x | Translates the image horizontally by (λ × image width) pixels. | λ | [-0.3, 0.3] |
| Translate_y | Translates the image vertically by (λ × image width) pixels. | λ | [-0.3, 0.3] |
Table 5: The ranges for all of the listed parameters are discretized into 17 equal bins. The only exception is the $M$ parameter of the Rescale transformation, which takes on one of the following six options: anti-alias, bicubic, bilinear, box, hamming, and nearest.

# D MEASURING THE EFFECT OF DISTRIBUTION ALIGNMENT

Recall that the goal of distribution alignment is to encourage the marginal distribution of the model's predictions $\tilde{p}(y)$ to match the true marginal class distribution $p(y)$. To measure whether distribution alignment indeed has this effect, we monitored the KL divergence between $\tilde{p}(y)$ and $p(y)$ over the course of training. We show the KL divergence for a training run on CIFAR-10 with 250 labels, with and without distribution alignment, in Figure 3. Indeed, the KL divergence between $\tilde{p}(y)$ and $p(y)$ is significantly smaller throughout training when distribution alignment is used.

![](images/985ff235b4c5c6975d89f4356ad3406d8477cf04d093961be6994c019a5023a5.jpg)
Figure 3: KL divergence between the marginal distribution of model predictions vs. the true marginal distribution of class labels over the course of training, with and without distribution alignment. This figure corresponds to a training run on CIFAR-10 with 250 labels.

# E CTAUGMENT PARAMETERS EFFECTS

In this section we compare the effects of varying the CTAugment hyper-parameters on CIFAR-10 with 250 labels, using the standard ReMixMatch settings. The exponential weight decay $\rho$ does not affect the results significantly, while depth and threshold have significant effects. The default settings are highlighted in bold in Table 6; they appear to perform well and have been shown to be robust across many datasets in our previous experiments.

| Depth | Threshold | ρ | Error rate |
| --- | --- | --- | --- |
| 1 | 0.80 | 0.99 | 23.90 |
| **2** | **0.80** | **0.99** | **6.25** |
| 3 | 0.80 | 0.99 | 6.36 |
| 2 | 0.50 | 0.99 | 10.51 |
| **2** | **0.80** | **0.99** | **6.25** |
| 2 | 0.90 | 0.99 | 10.80 |
| 2 | 0.95 | 0.99 | 18.47 |
| 2 | 0.80 | 0.9 | 6.15 |
| **2** | **0.80** | **0.99** | **6.25** |
| 2 | 0.80 | 0.999 | 6.02 |
+ +Table 6: Effects of hyper-parameters for CTAugment, the bold results are the default settings used for all experiments. \ No newline at end of file diff --git a/remixmatchsemisupervisedlearningwithdistributionmatchingandaugmentationanchoring/images.zip b/remixmatchsemisupervisedlearningwithdistributionmatchingandaugmentationanchoring/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..6e39068398c990f682e29f6ef1d3eb1a2dd5d0a0 --- /dev/null +++ b/remixmatchsemisupervisedlearningwithdistributionmatchingandaugmentationanchoring/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:81a402771899fc83eb7e444a01ed6dba053fdbfa8fe9910f466efd7a753d4cfe +size 496813 diff --git a/remixmatchsemisupervisedlearningwithdistributionmatchingandaugmentationanchoring/layout.json b/remixmatchsemisupervisedlearningwithdistributionmatchingandaugmentationanchoring/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..648833fb38e557e62c0bf8cbe9173557567be13e --- /dev/null +++ b/remixmatchsemisupervisedlearningwithdistributionmatchingandaugmentationanchoring/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dce7ab59336c0a06ea04f666091e2f59cbad97d73c3d6307c39bd4dfa6f3de48 +size 399103 diff --git a/residualenergybasedmodelsfortextgeneration/734ba10c-75c0-4ade-a3a8-1f810d2b2cdb_content_list.json b/residualenergybasedmodelsfortextgeneration/734ba10c-75c0-4ade-a3a8-1f810d2b2cdb_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..eed8258494d0b3ae3b5ea8c1d4aa1310d60b85ac --- /dev/null +++ b/residualenergybasedmodelsfortextgeneration/734ba10c-75c0-4ade-a3a8-1f810d2b2cdb_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e986156842705e72a539e81ddde9ec77addb3af507bf6029e5e31769a527beb7 +size 110001 diff --git a/residualenergybasedmodelsfortextgeneration/734ba10c-75c0-4ade-a3a8-1f810d2b2cdb_model.json 
b/residualenergybasedmodelsfortextgeneration/734ba10c-75c0-4ade-a3a8-1f810d2b2cdb_model.json new file mode 100644 index 0000000000000000000000000000000000000000..e4b66f3ec83ba9e19224abc7b6eccadc11947a7a --- /dev/null +++ b/residualenergybasedmodelsfortextgeneration/734ba10c-75c0-4ade-a3a8-1f810d2b2cdb_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:af87e514788e511b4e0b059c0ba84164d05c324aaa4a7ad95ec2d1225c377447 +size 129315 diff --git a/residualenergybasedmodelsfortextgeneration/734ba10c-75c0-4ade-a3a8-1f810d2b2cdb_origin.pdf b/residualenergybasedmodelsfortextgeneration/734ba10c-75c0-4ade-a3a8-1f810d2b2cdb_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..84c0824828dec01e2f8ad0cb2d895c0bd98f0172 --- /dev/null +++ b/residualenergybasedmodelsfortextgeneration/734ba10c-75c0-4ade-a3a8-1f810d2b2cdb_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b7bf534f83771f86f0c8ca8a08cbe65f1f1f3cb41bf65fe25344bc69b9cdf549 +size 1046206 diff --git a/residualenergybasedmodelsfortextgeneration/full.md b/residualenergybasedmodelsfortextgeneration/full.md new file mode 100644 index 0000000000000000000000000000000000000000..c3e1cc5185b10b41173d1d1129504db266b2ccee --- /dev/null +++ b/residualenergybasedmodelsfortextgeneration/full.md @@ -0,0 +1,352 @@ +# RESIDUAL ENERGY-BASED MODELS FOR TEXT GENERATION + +Yuntian Deng $^{1}$ , Anton Bakhtin $^{2}$ , Myle Ott $^{2}$ , Arthur Szlam $^{2}$ , Marc'Aurelio Ranzato $^{2}$ + +Harvard University1 + +Facebook AI Research2 + +dengyuntian@seas.harvard.edu {yolo,myleott,aszlam,ranzato}@fb.com + +# ABSTRACT + +Text generation is ubiquitous in many NLP tasks, from summarization, to dialogue and machine translation. The dominant parametric approach is based on locally normalized models which predict one word at a time. While these work remarkably well, they are plagued by exposure bias due to the greedy nature of the generation process. 
In this work, we investigate un-normalized energy-based models (EBMs) which operate not at the token but at the sequence level. In order to make training tractable, we first work in the residual of a pretrained locally normalized language model and second we train using noise contrastive estimation. Furthermore, since the EBM works at the sequence level, we can leverage pretrained bi-directional contextual representations, such as BERT and RoBERTa. Our experiments on two large language modeling datasets show that residual EBMs yield lower perplexity compared to locally normalized baselines. Moreover, generation via importance sampling is very efficient and of higher quality than the baseline models according to human evaluation. + +# 1 INTRODUCTION + +The dominant approach to parametric text generation is based on large neural auto-regressive models (Radford et al., 2019). These models can be trained efficiently via maximum likelihood and they can efficiently generate samples of remarkable quality. Key to their success is local normalization, i.e. they are defined in terms of a product of conditional distributions, one for each token in the sequence. Such distributions are relatively cheap to compute with modern hardware given the limited vocabulary size of common sub-word units like BPE (Sennrich et al., 2015). + +Unfortunately, local normalization also brings some drawbacks. First, the designer of the model needs to specify the order in which tokens are generated. Second, at training time the model is conditioned on ground truth context while at test time it is conditioned on its own generations, a discrepancy referred to as exposure bias (Ranzato et al., 2016). Finally, while heuristics like beam search somewhat help rescore at the sequence level, generation generally lacks long-range coherency because it is produced by the greedy selection of one token at a time without lookahead. 
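To make local normalization concrete: under such a model the log-probability of a whole sequence is just the sum of per-token conditional log-probabilities. A minimal sketch, where `next_token_log_probs` is a hypothetical stand-in for an LM (not from the paper):

```python
import math

def sequence_log_prob(tokens, next_token_log_probs):
    """Locally normalized model: log P(x) = sum_t log P(x_t | x_<t).

    `next_token_log_probs(prefix)` returns a dict mapping each vocabulary
    word to its conditional log-probability given the prefix.
    """
    total = 0.0
    for t, tok in enumerate(tokens):
        total += next_token_log_probs(tuple(tokens[:t]))[tok]
    return total

# Toy "LM" over a 2-word vocabulary that is uniform at every step.
vocab = ("a", "b")
uniform = lambda prefix: {w: math.log(0.5) for w in vocab}
lp = sequence_log_prob(["a", "b", "a"], uniform)  # 3 * log(0.5)
```

Greedy decoding picks the argmax of each conditional in turn, which is exactly the one-token-at-a-time selection without lookahead described above.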
Energy-based models (EBMs) (Hinton, 2002; LeCun et al., 2006; Ranzato et al., 2007) are a more general framework which potentially addresses all these issues, as they do not require any local normalization. They only require the definition of an energy function defined over the whole input sequence. Training aims at shaping the energy function such that regions of high density of training data points have lower energy than elsewhere. In principle, EBMs are ideal for modeling text as they can score the whole input at once, they are not prone to label bias (Bottou, 1991) and they may enable generation of large chunks of text, which should help improve coherency.

However, so far EBMs have had limited application in text generation, because sampling from the model is intractable, and so is maximum likelihood training. The problem is that shaping the energy function is accomplished by updating the model parameters such that the energy is decreased at the training data points (a.k.a. positive examples) and increased at other data points (a.k.a. negative examples). In maximum likelihood training negatives are generated from the model, but in text applications we cannot use gradient-based MCMC methods (Teh et al., 2003; Du & Mordatch, 2019) and Gibbs sampling (Welling et al., 2005) is too slow to be practical. Generating negatives by local perturbations of the ground truth would be efficient but hardly useful for generation purposes, when at test time the model needs to generate from scratch.

Recently, Bakhtin et al. (2019) carefully studied the problem of training a discriminator to distinguish human written text from language model generations. They experimented with different language model and discriminator architectures, training/test time corpora and concluded that the discriminator can generalize rather well to weaker language models when the training/test corpora match. Bakhtin et al.
(2019) found that the learned discriminator is not robust to random perturbations, and argued that the discriminator operates in the "residual" space of the language model.

Concurrently, Grover et al. (2019) proposed a general approach to "de-bias" a generator, by simply training a discriminator and using its output for importance sampling.

In this work, we build upon these two works. First, we formalize the residual interpretation of Bakhtin et al. (2019) and use a generative model of the form:

$$
P _ {\theta} (x) \propto P _ {L M} (x) \exp (- E _ {\theta} (x)) \tag {1}
$$

where $P_{LM}(x)$ is a locally normalized language model which is fixed during training, and $E_{\theta}$ is the energy function parameterized by $\theta$. The resulting model $P_{\theta}(x)$ is globally normalized due to the energy term. Note that the same residual formulation was also used in Rosenfeld et al. (2001); Wang & Ou (2018b); Parshakova et al. (2019).

This formulation has several benefits. First, by incorporating a locally normalized language model, we can leverage recent advancements in locally normalized language modeling. Second, the language model provides a natural proposal distribution for training (Bakhtin et al., 2019), and training can be made efficient by using the conditional noise contrastive estimation objective (Gutmann & Hyvarinen, 2010) as we shall see in §3. Lastly, this formulation enables efficient evaluation and generation via importance sampling (Horvitz & Thompson, 1952; Grover et al., 2019).

In some sense, this last point is perhaps the central contribution of the paper, as it allows estimating the perplexity of the residual EBM, and thus allows these EBMs to be compared in a standard way to other models. Indeed, in §4 we show that our joint model decreases perplexity on two large datasets, when compared to various auto-regressive language model baselines.
Finally, the EBM generations are significantly preferred by humans according to our qualitative evaluation. To the best of our knowledge, this is the first time that an EBM has demonstrated improved generation ability against very strong auto-regressive baselines, both in terms of estimated perplexity and through human evaluation.

# 2 RELATED WORK

Energy-based models have a long history in machine learning (Hopfield, 1982; Hinton, 2002; LeCun et al., 2006; Ranzato et al., 2007). The key challenge of training is mining for good negatives. This can be accomplished explicitly by fantasizing inputs where the energy should be increased or implicitly via global constraints such as sparsity (Ranzato et al., 2007). Methods attempting to maximize the likelihood of the data require sampling from the distribution induced by the model. Unfortunately, gradient-based MCMC approaches like Hybrid Monte Carlo (Teh et al., 2003) and Langevin dynamics (Ranzato et al., 2007; Du & Mordatch, 2019; Xie et al., 2016; 2017; 2019; 2018; Gao et al., 2018; Nijkamp et al., 2019) are not applicable when the input is discrete as in text applications. Other approaches like Gibbs sampling (Hinton, 2002) were applied to binary inputs but do not scale well to large dictionaries once the energy function is a large bidirectional transformer model like the one used in this work. Several variants of auto-encoders have also been investigated for representing and generating text (Bowman et al., 2016; Zhao et al., 2018), but they have not shown significant improvements in terms of perplexity and they have so far been applied to relatively small datasets only.

Our approach appears similar to discriminative reranking approaches used in the parsing and machine translation community (Shen et al., 2004).
However, our approach provides a generative model, and parameters/hyper-parameters are directly tuned to close the gap between the model distribution and the data distribution, rather than relying on surrogate ranking losses. This approach is also related to other sequence-level training objectives (Edunov et al., 2018), with the major difference that in those works training aims at improving the baseline model, but generation at test time is still greedy.

Energy Networks have been used for sequence modeling (Rosenfeld et al., 2001; Wang et al., 2015; 2017; Wang & Ou, 2017; 2018a; Parshakova et al., 2019). In particular, our residual modeling form and the training algorithm are the same as in Wang & Ou (2018b), where they used an LSTM as the generator and a CNN-LSTM as the energy function, and showed significant gains compared to LSTM baselines in speech recognition. Our work builds on these prior works and develops new lower and upper bounds for the log-probability under the joint model, which makes it possible to show that the residual EBM approach gets better perplexity. We also develop an importance sampling scheme used at generation time, which is focused on conditional generation as opposed to rescoring in speech recognition (Wang & Ou, 2018b). The residual EBM formalism makes it very natural to use BERT for language modeling, and we show empirically that this type of approach can outperform modern state-of-the-art language modeling baselines, both in terms of perplexity and through human evaluation.

Generative Adversarial Networks (Goodfellow et al., 2014) also relate to EBMs, except that in EBMs the generator is implicit and negative samples are produced by the discriminator itself. In our work, the pretrained locally normalized language model can be seen as a fixed generator, like in Bakhtin et al. (2019). Azadi et al.
(2018) also share our goal but their generator is not locally normalized and they propose to improve sampling from the generator by using the discriminator for rejection sampling. Similar to our work, Grover et al. (2019) propose to use the discriminator to de-bias the pretrained generator using importance sampling. We adapt this work to the application of text generation. In particular, we adopt the conditional noise contrastive estimation (NCE) objective (Ma & Collins, 2018; Gutmann & Hyvarinen, 2010) for our residual model energy function and then sample from the joint model using importance sampling. We want to note that the same formulation has been proposed in (Wang & Ou, 2018b; Parshakova et al., 2019). While Ma & Collins (2018) used conditional NCE to predict the next word in a sequence, we apply it to produce a whole sequence at once with the pretrained auto-regressive language model as the noise distribution.

# 3 RESIDUAL ENERGY-BASED MODELS

We study the problem of conditional generation of discrete sequences. Given a prefix $x_{1}, \dots, x_{p}$ with $x_{j} \in V$ where $V$ is the vocabulary, we want to model the probabilities of generating a sequence of total length $T > p$. The generative model is:

$$
P _ {\theta} \left(x _ {p + 1}, \dots , x _ {T} \mid x _ {1}, \dots , x _ {p}\right) = \frac {P _ {L M} \left(x _ {p + 1} , \cdots , x _ {T} \mid x _ {1} , \cdots , x _ {p}\right) \exp \left(- E _ {\theta} \left(x _ {1} , \cdots , x _ {T}\right)\right)}{Z _ {\theta} \left(x _ {1} , \cdots , x _ {p}\right)} \tag {2}
$$
We call $P_{\theta}$ the joint model, and $E_{\theta}$ the residual energy function since $P_{LM}$ is fixed throughout training. The goal of training is to learn the parameters of the energy function such that the joint model distribution gets close to the data distribution. For the sake of reducing clutter in the notation, we will drop the conditioning variables in the following discussion. + +# 3.1 TRAINING + +When the partition function is intractable, Maximum Likelihood Estimation (MLE) requires samples from the model distribution, which is usually approximated with Monte Carlo sampling or mean field inference (Hinton, 2012; LeCun et al., 2006) for globally normalized models. Unfortunately, both approaches are too computationally expensive for text applications when using large bidirectional transformer models. For instance, if we were to employ Gibbs sampling exactly, we would need to perform at every position as many forward passes as words in the dictionary to compute each conditional distribution. On large datasets where training locally normalized models on multiple machines already takes days, having such additional overhead means that the model would learn + +from much less data for the same amount of time, and this is seldom a beneficial strategy for learning models that generalize well. Therefore, we do not use either MCMC nor mean field methods, as the latter would introduce additional variational parameters or an inference network which anyway yields an approximation to MLE learning. + +Instead, we train our residual energy function using Noise Contrastive Estimation (NCE) (Gutmann & Hyvarinen, 2010), and more specifically its conditional version (Ma & Collins, 2018). NCE requires two distributions: The model distribution and a noise distribution. In our case, the model distribution is the joint model of Eq. 2, $P_{\theta}$ , while the noise distribution is the pretrained language model, $P_{LM}$ . 
NCE then trains a binary classifier on the difference of log-probability scores of these two models. Since our joint model is the product of the energy function (whose parameters we want to learn) with $P_{LM}$, the difference reduces to: $\log P_{\theta} - \log P_{LM} = -E_{\theta}$. Therefore, under these modeling assumptions of residual learning and noise model, the objective function becomes:

$$
\max \mathbb {E} _ {x _ {+} \sim P _ {d a t a}} \log \frac {1}{1 + \exp \left(E _ {\theta} \left(x _ {+}\right)\right)} + \mathbb {E} _ {x _ {-} \sim P _ {L M}} \log \frac {1}{1 + \exp \left(- E _ {\theta} \left(x _ {-}\right)\right)} \tag {3}
$$

where $x_{+}$ is a positive sequence taken from the human generated training set, and $x_{-}$ is a negative sequence drawn from $P_{LM}$ (for a given ground truth prefix). In other words, training the energy function reduces to training a binary classifier to discriminate between real text and text generated by an auto-regressive language model. The aim of training is to assign energy as negative as possible to real data, and as positive as possible to machine-generated data. Interestingly, the role of positive and negative samples is totally symmetric in this loss function; §5 will discuss the consequences of this.

With the theoretical guarantee of NCE, we can show that the optimum of the above objective is reached at the data distribution, given an infinite amount of data and a model with enough capacity, as also proved in Ma & Collins (2018).

Theorem 1. If $P_{LM}$ has the same support as $P_{data}$, then the objective function in Eq. 3 reaches its maximum at $\log P_{LM}(x) - E_{\theta}(x) = \log P_{data}(x)$, if there exists such $\theta$.

Proof. This theorem directly follows from the proof in Gutmann & Hyvarinen (2010).
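Concretely, the objective in Eq. 3 is a logistic loss on the energies of real and machine-generated sequences. A minimal pure-Python sketch (the energy values below are hypothetical placeholders; in practice they would come from the energy network):

```python
import math

def log_sigmoid(z):
    # Numerically stable log(1 / (1 + exp(-z))).
    return -math.log1p(math.exp(-z)) if z >= 0 else z - math.log1p(math.exp(z))

def nce_objective(pos_energies, neg_energies):
    """Conditional NCE objective of Eq. 3 for the residual energy:
    real text should receive negative energy, LM samples positive energy."""
    pos = sum(log_sigmoid(-e) for e in pos_energies) / len(pos_energies)
    neg = sum(log_sigmoid(e) for e in neg_energies) / len(neg_energies)
    return pos + neg

# Energies that separate real (negative) from generated (positive) data
# score better than uninformative all-zero energies.
good = nce_objective(pos_energies=[-5.0, -4.0], neg_energies=[5.0, 4.0])
chance = nce_objective(pos_energies=[0.0, 0.0], neg_energies=[0.0, 0.0])
```

Maximizing this objective is exactly training the binary classifier described above, with the energy as the (negated) classifier logit.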
Note that at the optimum, $P_{LM}(x)\exp (-E_{\theta}(x))$ is self-normalizing: instead of $P_{\theta}(x)\propto P_{LM}(x)\exp (-E_{\theta}(x))$, we have $P_{\theta}(x) = P_{LM}(x)\exp (-E_{\theta}(x))$. However, we still need to estimate the partition function throughout the rest of this paper, since we cannot guarantee that this optimum can be reached.

# 3.2 EVALUATION

A commonly used protocol for evaluating generative sequence models, especially language models, is perplexity (PPL), which is equal to $2^{-\frac{1}{T - p}\sum_{i = p + 1}^{T}\log_{2}P(x_{i}|x_{i - 1},\dots ,x_{1})}$. PPL can be interpreted as the average number of tokens the model is uncertain of at every time step. Since the log-likelihood required by PPL relies on estimating the partition function $Z_{\theta} = \sum_{x}P_{LM}(x)\exp (-E_{\theta}(x)) = \mathbb{E}_{x\sim P_{LM}}\exp (-E_{\theta}(x))$, we derive two estimators for the log-partition function $\log Z_{\theta}$ based on the work of Nowozin (2018).

Theorem 2. Denote $T_{n}$ as the empirical estimate of $\log \mathbb{E}_{x\sim P_{LM}}\exp (-E(x))$ with $n$ samples $x_{i}\sim P_{LM}(i = 1,\dots ,n)$: $T_{n} = \log \frac{1}{n}\sum_{i = 1}^{n}\exp (-E(x_{i}))$. Then $\forall \epsilon >0, \exists N > 0$ such that $\forall n > N$ we have

$$
\log Z _ {\theta} - \epsilon < \mathbb {E} [ T _ {n} ] < \log Z _ {\theta} < \mathbb {E} [ (2 n - 1) T _ {n} - 2 (n - 1) T _ {n - 1} ] < \log Z _ {\theta} + \epsilon \tag {4}
$$

The proof is given in Appendix A.2.

We can use the above two estimators to estimate lower and upper bounds of the log-partition function, but we want to emphasize that they hold only asymptotically (when $n$ is sufficiently large). We also note that to get lower-variance estimates we use a leave-one-out strategy to estimate $T_{n-1}$. See Nowozin (2018) for implementation details and methods to improve numeric stability.
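The two estimators of Theorem 2, including the leave-one-out estimate of $T_{n-1}$, can be sketched as follows (the energy values are hypothetical placeholders; in the paper the samples $x_i$ would be drawn from $P_{LM}$):

```python
import math

def log_mean_exp(vals):
    # Stable log of the mean of exponentials.
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals) / len(vals))

def log_z_bounds(neg_energies):
    """Estimate lower/upper bounds on log Z_theta from energies E(x_i) of
    samples x_i ~ P_LM: T_n = log (1/n) sum_i exp(-E(x_i)) underestimates
    log Z_theta, while (2n-1) T_n - 2(n-1) T_{n-1} overestimates it
    (asymptotically). T_{n-1} is averaged over leave-one-out subsets."""
    n = len(neg_energies)
    t_n = log_mean_exp([-e for e in neg_energies])
    t_n1 = sum(
        log_mean_exp([-e for j, e in enumerate(neg_energies) if j != i])
        for i in range(n)
    ) / n
    return t_n, (2 * n - 1) * t_n - 2 * (n - 1) * t_n1

lower, upper = log_z_bounds([0.3, -0.1, 0.2, 0.0, -0.4, 0.1])
```

By concavity of the logarithm, the leave-one-out average never exceeds $T_n$, so the upper-bound estimate is always at least the lower-bound estimate on any finite sample.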
Similarly to locally normalized models, we can also factorize the probability of an entire sequence step by step, as $P(x) = \prod_{t=1}^{T} P(x_t \mid x_{<t})$. Without the top-k constraint, as the number of samples goes to infinity, we would recover exact samples from the joint model distribution.

# 4 EXPERIMENTS

In this section, we describe the experimental setup and the results we obtained by using the residual EBM for text generation, both in terms of perplexity and generation quality.

# 4.1 EXPERIMENTAL SETUP

Datasets We consider two datasets: the Toronto Book Corpus (Zhu et al., 2015; Kiros et al., 2015) and CC-News (Bakhtin et al., 2019). The former dataset consists of fiction books in 16 different genres, totaling about half a billion words. The latter is a de-duplicated subset of the English portion of the CommonCrawl news dataset (Nagel, 2016), which totals around 16 billion words. The book corpus is more challenging because its range of style and topics is more diverse than CC-News. Also, the book corpus is 30 times smaller than CC-News and may pose generalization challenges because of its smaller size.

In all our experiments we use a prefix of size 120 tokens and we generate the following 40 tokens; with the notation of Eq. 2, $p = 120$ and $T = 160$. For training the joint models, for efficiency we generated 16/128 samples per prefix for CC-News/Book Corpus offline, and sample uniformly from those samples at training time.

Baselines As the base language model (BASE LM) used to generate negatives for the residual EBM, we consider a transformer language model with 12 layers, $h = 16$, $d_{model} = 1024$, $d_{ff} = 4096$ (we refer to Vaswani et al. (2017) for notations). This is also our first baseline model.

The joint model has as many parameters as the sum of the number of parameters in the base LM and the number of parameters in the energy network. 
To make a fair comparison, we consider two additional baselines that have the same number of parameters as our joint model.

The first baseline is a Residual Auto-regressive Language Model baseline (RALM):

$$
\log P_{RALM}\left(x_{t} \mid x_{< t}\right) = \log P_{LM}\left(x_{t} \mid x_{< t}\right) + \log P_{\phi}\left(x_{t} \mid x_{< t}\right) + \text{const} \tag{6}
$$

where $P_{\phi}$ takes the form of another auto-regressive language model. The parameters of $P_{\phi}$ are trained by exact maximum likelihood training of $P_{RALM}$.

The second baseline is an auto-regressive language model of the same size as our joint model (the sum of the base LM and energy function parameters); we dub this model Big Auto-regressive Language Model (BALM). BALM has 12 layers, $h = 16$, $d_{model} = 1568$, $d_{ff} = 6272$, and is trained by a standard token-level cross-entropy loss.

Residual EBM Architecture We consider two architectures for our residual EBM, both based on transformers (Vaswani et al., 2017; Devlin et al., 2018). The first version uses causal self-attention and is derived from the base LM, a unidirectional transformer (UNIT). It has the same architecture as BASE LM, except that in the final layer we project the mean-pooled hidden states to a scalar energy value. We initialize its parameters with a language model trained on the same dataset.

The second version is instead bi-directional (BIT), and the energy function is computed by projecting the mean-pooled top hidden states down to a single scalar value. We consider three variants: BIT-BASE following the architecture of RoBERTa-Base, BIT-LARGE* following RoBERTa-Large (Liu et al., 2019), and BIT-MED with the same number of parameters as UNIT (such that JOINT BIT-MED has roughly the same number of parameters as BALM). 
We initialize the parameters with a trained BERT, and we use * to mark usage of external data; otherwise, BERT was trained on our training set. Notice how our model can be interpreted as a natural way to finetune large bidirectional pretrained models for the text generation task.

While we expect BIT to yield better results because it can fully leverage context also for intermediate tokens, we also consider UNIT to compare to the RALM baseline, which uses the same architecture and only differs in the way parameters are trained and in the presence of local normalization.

We train our models on 8 DGX nodes, each with 8 Nvidia V100s. To improve training speed, we use mixed precision training. We use the Adam optimizer, with cosine learning rate decay and learning rate warmup. To stabilize training we used gradient norm clipping (Pascanu et al., 2013). Detailed hyper-parameter settings can be found in Appendix A.3.

For generation, we use top-k sampling with $k = 10$ for all human evaluations. We take 10,000 samples from BASE LM for our joint sampling.

# 4.2 RESULTS

| Model (#parameters) | CC-News Val | CC-News Test | Book Corpus Val | Book Corpus Test |
| --- | --- | --- | --- | --- |
| BASE LM (203M) | 18.41 | 17.57 | 16.16 | 18.29 |
| RALM (LM+203M) | 17.01 | 16.17 | 15.71 | 17.85 |
| BALM (408M) | 16.50 | 15.74 | 15.00 | 16.99 |
| JOINT UNIT (LM+203M) | 16.42-16.44 | 15.57-15.58 | 15.12-15.13 | 16.98-17.00 |
| JOINT BIT-BASE (LM+125M) | 15.32-15.35 | 14.61-14.64 | - | - |
| JOINT BIT-BASE* (LM+125M) | 15.11-15.17 | 14.37-14.42 | 14.14-14.16 | 15.72-15.74 |
| JOINT BIT-LARGE* (LM+355M) | 14.59-14.61 | 13.97-14.00 | 13.80-13.83 | 15.33-15.36 |
| BASE LM-24L (203M) | 15.71 | 14.89 | 15.61 | 18.14 |
| RALM-24L (LM-24L+203M) | 15.70 | 14.89 | 15.63 | 18.17 |
| BALM-24L (408M) | 14.58 | 13.92 | 15.20 | 18.24 |
| JOINT UNIT (LM-24L+203M) | 14.59-14.61 | 13.81-13.82 | 15.12-15.16 | 17.46-17.48 |
| JOINT BIT-BASE (LM-24L+125M) | 13.68-13.69 | 13.01-13.03 | - | - |
| JOINT BIT-BASE* (LM-24L+125M) | 13.60-13.62 | 12.93-12.95 | 14.11-14.12 | 16.17-16.18 |
| JOINT BIT-MED (LM-24L+203M) | 12.97-13.01 | 12.38-12.42 | - | - |
| JOINT BIT-LARGE* (LM-24L+355M) | 12.71-12.77 | 12.10-12.16 | 13.30-13.34 | 15.17-15.22 |
Table 1: Validation and test perplexity on CC-News and Toronto Book Corpus. * denotes models initialized with RoBERTa trained on additional data. The joint model perplexity ranges are estimated using 100,000 samples; see Eq. 4. The number of parameters of each model is shown in parentheses.

![](images/2b76dd9cf55c245a237cbca8e541402264c51b7f32d1df309e0ddc55b601c651.jpg)
Figure 1: Perplexity gain of JOINT BIT-MED and JOINT BIT-LARGE* (using BASE LM-24L) at each position relative to BASE LM-24L on the test set of CC-News. At each position, the lower and upper bounds (Eq. 5, estimated using the method in Eq. 4; see §3.2 for more details) are estimated using 20,000 samples. The shorter the horizon (moving to the right), the tighter the estimate, but also the more limited the gains compared to the base LM, as un-normalized models are most useful on longer generations.

Automatic Evaluation Our main result is reported in Table 1, where we compare models in terms of their perplexity. We can see that on both datasets, the residual EBM with causal attention, JOINT UNIT, outperforms the baseline RALM with approximately the same number of parameters. The non-residual baseline BALM performs similarly to JOINT UNIT, which might be due to the limitation that $P_{LM}$ is not trained jointly with the residual model in both JOINT UNIT and RALM. However, by using our EBM approach, we can remove the causal attention mask and use bi-directional models, which achieve better performance than the baselines and JOINT UNIT: without external data, JOINT BIT-BASE reaches higher performance than JOINT UNIT with fewer parameters. By initializing from the state-of-the-art pretrained bi-directional transformers RoBERTa-Base and RoBERTa-Large, JOINT BIT-BASE* and JOINT BIT-LARGE* reach even better performance than JOINT BIT-BASE.
| Model1 (baseline) | Rel. | Model2 (compared model) | Rate | p-value |
| --- | --- | --- | --- | --- |
| BASE LM | ≤ | JOINT UNIT | 52.85% | 0.16 |
| BASE LM | ≤ | JOINT BIT-BASE | 56.25% | 0.015 |
| BASE LM | ≤ | JOINT BIT-LARGE* | 58.93% | 0.00084 |
| BASE LM | < | BALM | 46.77% | 0.88 |
| BALM | < | JOINT UNIT | 50.00% | 0.52 |
| BALM | ≤ | JOINT BIT-BASE | 57.89% | 0.0027 |
| BALM | ≤ | JOINT BIT-LARGE* | 59.89% | 0.00020 |
| BALM-24L | ≤ | JOINT BIT-MED (24L) | 56.23% | 0.015 |
| JOINT BIT-LARGE* (24L) | ≤ | HUMAN | 55.21% | 0.036 |
| BASE LM | ≤ | BALM | 54.85% | 0.050 |
Table 2: Human evaluation results on a subset of 333 sentences from the CC-News test set. The rate is computed as the percentage of sentences where the number of turkers preferring Model1 is strictly less than (denoted with $<$) or not greater than (denoted with $\leq$) those preferring Model2. An attention check is used to drop some votes, so there might exist ties. The p-value is based on a one-sided binomial test.

In the lower part of the table, we show that if we make the big language model baseline BALM deeper (BALM-24L, with 24 layers instead of 12 for the same number of parameters), we attain lower perplexity. However, training the joint model JOINT BIT-BASE on the residual of a deeper language model BASE LM-24L yields even lower perplexity, despite having fewer parameters. By using the same number of parameters as BALM-24L, JOINT BIT-MED further decreases perplexity. Finally, by initializing from RoBERTa-Large, JOINT BIT-LARGE* obtains the best results.

One caveat of our evaluation protocol is that the perplexity bounds are only estimates, which might not reflect the true value, particularly since the number of possible sequences grows exponentially with the number of words that are generated. We therefore break down perplexity per position in the generated sequences as in Eq. 5, and compare the estimated PPLs to the true enumerated PPLs at the last position, as shown in Figure 1. We find that at the final generation step, the estimated bounds agree remarkably well with the exact values, showing that our method at least gets a reasonable PPL estimate at the last generation step, and that JOINT BIT-MED demonstrably outperforms the baselines at the last generation step.

Human Evaluation Better perplexity results do not necessarily imply better generations. Besides, since generation from the residual EBM requires approximations as in Algorithm 1, the limited sample size might induce approximation errors compared to truly sampling from the joint distribution. 
Therefore, we conducted human evaluations to compare generations from the residual EBM model to generations from the baseline language models.

For each prefix, we present one completion from each model, and ask humans to select the one that is a better continuation. More details about the human evaluation can be found in Appendix A.4. The preference rates reported in Table 2 confirm that indeed the generation quality of JOINT BIT-BASE and JOINT BIT-LARGE* is better than that of both language model baselines. Depending on the model variant, our joint model (with bidirectional EBM) is preferred between $56\%$ and almost $60\%$ of the time; interestingly, the preference rate does not change much as we compare against the base LM as opposed to BALM. In fact, humans do not seem to have a strong preference for BALM over the base LM, despite the former scoring two perplexity points lower. Similarly, JOINT UNIT is not strongly preferred over BASE LM despite its lower perplexity score. We surmise that unidirectional scoring functions and auto-regressive models exhibit generation artifacts which are easily detected by humans, and these may overshadow the improvements brought by perplexity gains.

# 4.3 ANALYSES

In this section, we analyze some of the results we obtained. First, we check whether we used a sufficient number of samples in our perplexity estimates. Second, we assess whether the joint model produces fewer repetitions compared to the base language model, and finally we check how well some statistics of the model and data distributions match.

![](images/19efbf34fa47f5639e89c51a23de7cf899e44219c626bca3073949d017c7324a.jpg)
Figure 2: Left: PPL estimation for JOINT BIT-BASE on the CC-News validation set as we vary the number of samples. Right: Percentage of unique n-grams found in real data, samples from the joint model BIT-BASE, and samples from the base language model. The joint sampling is done with 10,000 samples. 
![](images/4151977af5a71984d77491cc259159e531870ef46557cbbf07995013994a2f5a.jpg)

![](images/d15c1c6539b9b04c00ee87bf77d66cce804e63849bc2135e1e9349ac64915c99.jpg)
Figure 3: Density plot of log-probability scores using the base language model (left) or the joint model (right). The red curve corresponds to real samples, the black curve to samples from BASE LM and the green curve to samples from BIT-BASE. The joint model provides a much better fit than the base language model.

![](images/ac7bbf0109f11136cbafa7b6b8ffd6f04820d8ab7add2c8d9121689b51272b.jpg)

Number of samples. In Figure 2, we vary the number of samples we take in order to estimate the PPL upper and lower bounds. Beyond 20,000 samples the upper estimate becomes very stable, although we have to emphasize that these estimates might be biased even though the gap between the lower and upper bound closes as we take more samples.

Repetitions. A typical artifact of auto-regressive language models is their tendency to repeat phrases. It is then interesting to check whether the joint model is able to alleviate this artifact. Fig. 2 shows that indeed the joint model has a slightly higher percentage of unique n-grams compared to the baseline language model for $n = 2,3,4$, although still not as high as the original human-generated text.

A necessary condition for the model to match the data distribution. If the joint model $p_{\theta}$ matches the data distribution $p_d$, then statistics computed on a large population of samples from the two distributions should also match. In particular, Fig. 3 shows the density plots of log-likelihood scores of the baseline language model (left) and the joint model (right) when fed with their own samples versus samples from the test set. We observe that the histogram of samples from the joint model matches the real data distribution more closely: the difference of means in the BASE LM case is 21.64, whereas the difference is 6.20 in the joint approach. 
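The repetition statistic of Figure 2 (right) can be computed per sequence as the fraction of n-grams that are distinct; a minimal sketch (how exactly the statistic is aggregated across samples is a detail we leave out):

```python
def unique_ngram_pct(tokens, n):
    """Percentage of distinct n-grams among all n-grams in a token sequence.

    A lower value means more repetition. Returns 0.0 for sequences
    shorter than n.
    """
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    return 100.0 * len(set(ngrams)) / len(ngrams)
```

For example, the sequence `a b a b` contains three bigrams but only two distinct ones, giving a unique-bigram percentage of about 66.7%.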
# 5 LIMITATIONS

In the previous sections we highlighted the strengths of residual EBMs, namely their simplicity, efficiency both at training and test time, and their improved perplexity scores against strong autoregressive language model baselines. In this section, we comment on their limitations to caution the reader about when these methods are more likely to succeed and to inform other researchers about what future avenues of research may naturally derive from this work.

In order to make training efficient and sidestep costly negative mining using the energy function itself, the current approach uses negatives generated from a pretrained auto-regressive language model. Therefore, our model works as long as the base language model from which we draw samples is strong enough, and as long as the ground truth and other plausible sequences are reachable by the baseline language model.

If the base language model has poor quality, then generation from our joint model is going to be poor as well, as the joint model merely resamples generations from the original language model. Moreover, training is going to be trivial if the base language model is poor, because the residual energy function merely needs to detect trivial generation artifacts from the base language model. In fact, observe that the role of positive and negative samples is symmetric in the loss of Eq. 3. This means that the energy function can choose to minimize the loss by either modeling the true data or the negative samples; since the latter have much simpler structure, it is going to model the negative samples. Therefore, importance sampling amounts to mostly down-weighting the worst samples from the base language model. The consequence of this is that search with a poor base language model is going to be catastrophically inefficient, as we would need to sample an impractically large number of negatives in order to find samples that are reasonably close to the true data manifold. 
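The resampling scheme referred to above (drawing a pool of candidates from the base LM and re-weighting them by $\exp(-E_{\theta})$) can be sketched as self-normalized importance resampling; this is a simplified illustration, not the paper's Algorithm 1 (the top-k constraint and batching are omitted, and all names are illustrative):

```python
import numpy as np

def joint_resample(rng, lm_samples, energies, n_draws=1):
    """Resample base-LM generations toward P_joint(x) ∝ P_LM(x) exp(-E(x)).

    lm_samples: candidate continuations drawn from P_LM
    energies:   E_theta(x) for each candidate (lower = more data-like)
    A poor base LM makes this inefficient: nearly all weight may fall
    on a handful of candidates, so most of the pool is wasted.
    """
    log_w = -np.asarray(energies, dtype=float)  # log importance weights
    w = np.exp(log_w - log_w.max())             # stabilized exponentiation
    p = w / w.sum()                             # self-normalized weights
    idx = rng.choice(len(lm_samples), size=n_draws, p=p)
    return [lm_samples[i] for i in idx]
```

Note that the candidates themselves always come from the base LM; the energy function only redistributes probability mass among them, which is why the joint model inherits the base model's reachable set.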
+ +To summarize, this work makes a rather strong implicit assumption on the quality of the base language model, and it is expected to work well only when this is rather strong. In our application, this assumption is met quite well in practice as large auto-regressive language models trained on large datasets have improved significantly in recent years (Radford et al., 2019). In general however, residual learning always carries liability to its base model. + +# 6 CONCLUSIONS AND FUTURE WORK + +We investigated an EBM trained on the residual of a pretrained autoregressive language model (Wang & Ou, 2018b; Parshakova et al., 2019). The resulting joint model scores sequences holistically, thanks to the energy function. Training is very efficient and consists of a binary classification task between positives from the training set and pregenerated negatives from the fixed language model. Generation is also very efficient as it amounts to resampling from the large set of negatives produced by the base language model. Our estimates show that the resulting model has lower perplexity than the base language model. Finally, this approach may be interpreted as a natural way to finetune a large bidirectional transformer like BERT for text generation applications. + +In the future, we plan to investigate other ways to generate negatives that may strike a better tradeoff between the amount of compute each negative requires and their closeness to the joint model distribution. It would also be interesting to explore other loss functions and the generation of longer pieces of text by using this model auto-regressively at the chunk level, as opposed to the token level. + +# REFERENCES + +Samaneh Azadi, Catherine Olsson, Trevor Darrell, Ian Goodfellow, and Augustus Odena. Discriminator rejection sampling. arXiv preprint arXiv:1810.06758, 2018. +Anton Bakhtin, Sam Gross, Myle Ott, Yuntian Deng, Marc'Aurelio Ranzato, and Arthur Szlam. Real or fake? 
learning to discriminate machine from human generated text. arXiv preprint arXiv:1906.03351, 2019. +Léon Bottou. Une approche théorique de l'apprentissage connexionniste: Applications à la reconnaissance de la parole, 1991. Ph.D. thesis, Université de Paris XI, Orsay, France. +Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. In SIGNLL Conference on Computational Natural Language Learning, 2016. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018. URL http://arxiv.org/abs/1810.04805. + +Yilun Du and Igor Mordatch. Implicit generation and generalization in energy-based models. CoRR, abs/1903.08689, 2019. URL http://arxiv.org/abs/1903.08689. +Sergey Edunov, Myle Ott, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. Classical structured prediction losses for sequence to sequence learning. In North American Chapter of the Association for Computational Linguistics, 2018. +Ruiqi Gao, Yang Lu, Junpei Zhou, Song-Chun Zhu, and Ying Nian Wu. Learning generative convnets via multi-grid modeling and sampling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9155-9164, 2018. +Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014. +Aditya Grover, Jiaming Song, Alekh Agarwal, Kenneth Tran, Ashish Kapoor, Eric Horvitz, and Stefano Ermon. Bias correction of learned generative models using likelihood-free importance weighting. arXiv preprint arXiv:1906.09531, 2019. +Michael Gutmann and Aapo Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 
297-304, 2010.
Geoffrey E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14:1771-1800, 2002.
Geoffrey E. Hinton. A practical guide to training restricted boltzmann machines. In Neural networks: Tricks of the trade, pp. 599-619. Springer, 2012.
Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751, 2019.
John Hopfield. Neural networks and physical systems with emergent collective computational abilities. In National Academy of Sciences of the USA, volume 79, pp. 2554-2558, 1982.
Daniel G. Horvitz and Donovan J. Thompson. A generalization of sampling without replacement from a finite universe. Journal of the American Statistical Association, 1952.
Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Skip-thought vectors. arXiv preprint arXiv:1506.06726, 2015.
Yann LeCun, Sumit Chopra, Raia Hadsell, Marc'Aurelio Ranzato, and Fu-Jie Huang. A tutorial on energy-based learning. Predicting Structured Outputs, 2006. MIT Press.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
Zhuang Ma and Michael Collins. Noise contrastive estimation and negative sampling for conditional models: Consistency and statistical efficiency. In Empirical Methods for Natural Language Processing, 2018.
Sebastian Nagel. CC-News. http://web.archive.org/save/http://commoncrawl.org/2016/10/news-dataset-available/, 2016.
Erik Nijkamp, Mitch Hill, Song-Chun Zhu, and Ying Nian Wu. Learning non-convergent non-persistent short-run mcmc toward energy-based model. In Advances in Neural Information Processing Systems, pp. 5233-5243, 2019.
Sebastian Nowozin. 
Debiasing evidence approximations: On importance-weighted autoencoders and jackknife variational inference. In International Conference on Learning Representations, 2018.
Art B. Owen. Monte Carlo theory, methods and examples. 2013. URL https://statweb.stanford.edu/~owen/mc/. Chapter 9.

Tetiana Parshakova, Jean-Marc Andreoli, and Marc Dymetman. Global autoregressive models for data-efficient sequence learning. In Conference on Computational Natural Language Learning, 2019.
Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In International conference on machine learning, pp. 1310-1318, 2013.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 2019.
Marc'Aurelio Ranzato, Y-Lan Boureau, Sumit Chopra, and Yann LeCun. A unified energy-based framework for unsupervised learning. In 11th International Workshop on Artificial Intelligence and Statistics (AISTATS), 2007.
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. In International Conference on Learning Representations, 2016.
Ronald Rosenfeld, Stanley F. Chen, and Xiaojin Zhu. Whole-sentence exponential language models: a vehicle for linguistic-statistical integration. Computer Speech & Language, 15(1):55-73, 2001.
Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909, 2015.
Libin Shen, Anoop Sarkar, and Franz Josef Och. Discriminative reranking for machine translation. In Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, pp. 177-184, 2004.
Y. W. Teh, M. Welling, S. Osindero, and G. E. Hinton. Energy-based models for sparse overcomplete representations. 
Journal of Machine Learning Research, 4:1235-1260, 2003. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998-6008, 2017. +Bin Wang and Zhijian Ou. Language modeling with neural trans-dimensional random fields. In 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pp. 294-300. IEEE, 2017. +Bin Wang and Zhijian Ou. Improved training of neural trans-dimensional random field language models with dynamic noise-contrastive estimation. In 2018 IEEE Spoken Language Technology Workshop (SLT), pp. 70-76. IEEE, 2018a. +Bin Wang and Zhijian Ou. Learning neural trans-dimensional random field language models with noise-contrastive estimation. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6134–6138. IEEE, 2018b. +Bin Wang, Zhijian Ou, and Zhiqiang Tan. Trans-dimensional random fields for language modeling. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 785-794, 2015. +Bin Wang, Zhijian Ou, and Zhiqiang Tan. Learning trans-dimensional random fields with applications to language modeling. IEEE transactions on pattern analysis and machine intelligence, 40 (4):876-890, 2017. +Max Welling, Michal Rosen-Zvi, and Geoffrey E. Hinton. Exponential family harmoniums with an application to information retrieval. In Neural Information Processing Systems, 2005. +Jianwen Xie, Yang Lu, Song-Chun Zhu, and Yingnian Wu. A theory of generative convnet. In International Conference on Machine Learning, pp. 2635-2644, 2016. +Jianwen Xie, Song-Chun Zhu, and Ying Nian Wu. Synthesizing dynamic patterns by spatial-temporal generative convnet. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 
7093-7101, 2017.

Jianwen Xie, Zilong Zheng, Ruiqi Gao, Wenguan Wang, Song-Chun Zhu, and Ying Nian Wu. Learning descriptor networks for 3d shape synthesis and analysis. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 8629-8638, 2018.
Jianwen Xie, Song-Chun Zhu, and Ying Nian Wu. Learning energy-based spatial-temporal generative convnets for dynamic patterns. IEEE transactions on pattern analysis and machine intelligence, 2019.
Junbo Zhao, Yoon Kim, Kelly Zhang, Alexander M. Rush, and Yann LeCun. Adversarially regularized autoencoders. In International Conference on Machine Learning, 2018.
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In The IEEE International Conference on Computer Vision (ICCV), December 2015.

# A APPENDIX

# A.1 TOP-K AUTO-REGRESSIVE SAMPLING

In this subsection, we factorize the joint model BIT-BASE auto-regressively, and compare it to BASE LM. Since even estimating the per-step probabilities according to Eq. 5 is too computationally expensive, we further approximate it by only considering the top 128 words predicted by BASE LM, where we sample 10,000 completions for each of them to estimate $P(x_{t}|x_{< t})$. Then we take the top 10 entries and re-normalize, and compare it to the top 10 probabilities of BASE LM.

Our initial explorations suggested that the joint model tends to generate fewer repetitions. Therefore we picked a few LM samples where there are repetitions at $x_{t}$, and use the same context $x_{<t}$ to compare $P(x_t \mid x_{<t})$ under the two models; the results are shown in Table 3.

Context $x_{<t}$ (1): "... is aimed at setting common benchmarks for orderly migration practices, thereby reducing irregular flows. The Global Compact contains ten guiding principles, including that migrants cannot be settled by countries with better integration policies and a fair and sustainable development. "For the first time in our history, a legally binding and"

| Model | Rank 0 | Rank 1 | Rank 2 | Rank 3 | Rank 4 |
| --- | --- | --- | --- | --- | --- |
| BASE LM | **binding** (0.39) | legally (0.33) | internationally (0.06) | comprehensive (0.05) | transparent (0.04) |
| BIT-BASE | **binding** (0.18) | legally (0.17) | internationally (0.12) | comprehensive (0.09) | transparent (0.08) |

Context $x_{<t}$ (2): "... companies that land their first-choice candidates 90-100% of the time, 24% of them have "thoroughly defined" their high performer attitudes. By contrast, only 1% of companies that struggle to land their first-choice candidates "thoroughly defined" their high performer attitudes. So it seems pretty clear that companies that land their top-choice candidates are not always as willing and"

| Model | Rank 0 | Rank 1 | Rank 2 | Rank 3 | Rank 4 |
| --- | --- | --- | --- | --- | --- |
| BASE LM | able (0.66) | **willing** (0.09) | eager (0.07) | ready (0.05) | well (0.04) |
| BIT-BASE | able (0.75) | **willing** (0.05) | eager (0.05) | ready (0.04) | well (0.03) |

Context $x_{<t}$ (3): "... it reveals a key skill needed to lead the Fed. "You need to know what you don't know. And you need to be willing to listen when you don't know something," said Karen Dynan, who as an assistant Treasury Secretary in Barack Obama's second administration would regularly meet Fed governors. ⟨EOS⟩ New Delhi Dec 5 The following are mergers under review by India's financial services and"

| Model | Rank 0 | Rank 1 | Rank 2 | Rank 3 | Rank 4 |
| --- | --- | --- | --- | --- | --- |
| BASE LM | banking (0.64) | **financial** (0.10) | insurance (0.09) | technology (0.05) | IT (0.04) |
| BIT-BASE | banking (0.92) | **financial** (0.06) | insurance (0.01) | technology (0.00) | IT (0.00) |

Table 3: Comparison of $P(x_{t}|x_{< t})$ between BASE LM and BIT-BASE on a few examples. Repetitions of a context token are marked in bold. Only the top 5 probabilities are shown.

# A.2 PROOF OF THEOREM 2

Theorem 2. 
Denote $T_{n}$ as the empirical estimate of $\log \mathbb{E}_{x\sim P_{LM}}\exp (-E(x))$ with $n$ samples $x_{i}\sim P_{LM}(i = 1,\dots ,n)$, where $T_{n} = \log \frac{1}{n}\sum_{i = 1}^{n}\exp (-E(x_{i}))$; then $\forall \epsilon >0, \exists N > 0$ such that $\forall n > N$ we have

$$
\log Z_{\theta} - \epsilon < \mathbb{E}[T_{n}] < \log Z_{\theta} < \mathbb{E}[(2n - 1)T_{n} - 2(n - 1)T_{n - 1}] < \log Z_{\theta} + \epsilon \tag{7}
$$

Proof. From Nowozin (2018) Eq. 35, we can write $\mathbb{E}[T_n]$ as

$$
\begin{aligned}
\mathbb{E}[T_{n}] = \log Z_{\theta} &- \frac{\mu_{2}}{2\mu^{2}}\frac{1}{n} + \frac{\mu_{3}}{3\mu^{3}}\frac{1}{n^{2}} - \frac{1}{4\mu^{4}}\left(\frac{3}{n^{2}}\mu_{2}^{2} + \frac{1}{n^{3}}\left(\mu_{4} - 3\mu_{2}^{2}\right)\right) \\
&+ \frac{1}{5\mu^{5}}\left(\frac{10}{n^{3}}\mu_{3}\mu_{2} + \frac{1}{n^{4}}\left(\mu_{5} - 10\mu_{3}\mu_{2}\right)\right) + o(n^{-3})
\end{aligned} \tag{8}
$$

where $\mu = \mathbb{E}_{x\sim P_{LM}}[\exp(-E(x))] = Z_{\theta}$ and $\mu_{k} = \mathbb{E}[(\exp(-E(x)) - \mu)^{k}]$. Equivalently,

$$
\mathbb{E}[T_{n}] = \log Z_{\theta} - \frac{\mu_{2}}{2\mu^{2}}\frac{1}{n} + o\left(n^{-1}\right) \tag{9}
$$

Therefore, $\lim_{n\to \infty}\mathbb{E}[T_n] = \log Z_\theta$. So $\forall \epsilon >0,\exists N_1 > 0$ such that when $n > N_{1}$, $\mathbb{E}[T_n] > \log Z_\theta -\epsilon$. On the other hand, $\lim_{n\to \infty}n(\log Z_{\theta} - \mathbb{E}[T_{n}]) = \lim_{n\to \infty}\frac{\mu_{2}}{2\mu^{2}} +o(1) = \frac{\mu_{2}}{2\mu^{2}} >0$, so $\exists N_{2} > 0$ such that when $n > N_{2}$ we have $\log Z_{\theta} > \mathbb{E}[T_n]$. Up to this point, we have proved that $\log Z_{\theta} - \epsilon < \mathbb{E}[T_n] < \log Z_{\theta}$.

For the other half of the proof, using Eq. 8 we have

$$
\mathbb{E}[T_{n}] = \log Z_{\theta} - \frac{\mu_{2}}{2\mu^{2}}\frac{1}{n} + \frac{c}{n^{2}} + o\left(n^{-2}\right) \tag{10}
$$

where $c$ is a constant. Therefore, $\mathbb{E}[(2n - 1)T_n - 2(n - 1)T_{n - 1}] = (2n - 1)\mathbb{E}[T_n] - 2(n - 1)\mathbb{E}[T_{n - 1}] = \log Z_\theta +\frac{\mu_2}{2\mu^2}\frac{1}{n} +o(n^{-1})$. Therefore $\lim_{n\to \infty}\mathbb{E}[(2n - 1)T_n - 2(n - 1)T_{n - 1}] = \log Z_\theta$; hence $\forall \epsilon >0,\exists N_3 > 0$ such that $\forall n > N_3$, $\mathbb{E}[(2n - 1)T_n - 2(n - 1)T_{n - 1}] < \log Z_\theta +\epsilon$. Furthermore, $\lim_{n\to \infty}n(\mathbb{E}[(2n - 1)T_n - 2(n - 1)T_{n - 1}] - \log Z_\theta) = \lim_{n\to \infty}\frac{\mu_2}{2\mu^2} +o(1) > 0$, so $\exists N_4 > 0$ such that when $n > N_4$ we have $\mathbb{E}[(2n - 1)T_n - 2(n - 1)T_{n - 1}] > \log Z_\theta$.

Putting the above together, $\forall \epsilon > 0$, let $N = \max \{N_1, N_2, N_3, N_4\}$; then $\forall n > N$,

$$
\log Z_{\theta} - \epsilon < \mathbb{E}[T_{n}] < \log Z_{\theta} < \mathbb{E}[(2n - 1)T_{n} - 2(n - 1)T_{n - 1}] < \log Z_{\theta} + \epsilon
$$

# A.3 OPTIMIZATION SETTINGS

| Model | fp16 | batch size | warmup steps | max steps | max lr | max grad norm |
| --- | --- | --- | --- | --- | --- | --- |
| BASE LM | - | 32 | 2,000 | 180,000 | 0.0001 | 10 |
| RALM | - | 64 | 2,000 | 180,000 | 0.0001 | 10 |
| BALM | - | 32 | 2,000 | 180,000 | 0.0001 | 10 |
| JOINT UNIT | + | 64 | 2,000 | 180,000 | 0.0003 | 10 |
| JOINT BIT-BASE | - | 60 | 2,000 | 90,000 | 0.00005 | 0.25 |
| JOINT BIT-BASE* | - | 60 | 2,000 | 90,000 | 0.00005 | 0.25 |
| JOINT BIT-LARGE* | + | 64 | 2,000 | 90,000 | 0.0003 | 10 |
| BASE LM-24L | - | 50 | 2,000 | 90,000 | 0.0003 | 0.25 |
| RALM-24L | - | 28 | 1,000 | 90,000 | 0.00015 | 0.25 |
| BALM-24L | - | 28 | 2,000 | 90,000 | 0.0003 | 0.25 |
| JOINT UNIT (LM-24L) | + | 64 | 2,000 | 180,000 | 0.0003 | 10 |
| JOINT BIT-BASE (LM-24L) | - | 60 | 2,000 | 90,000 | 0.00005 | 0.25 |
| JOINT BIT-BASE* (LM-24L) | - | 60 | 2,000 | 90,000 | 0.00005 | 0.25 |
| JOINT BIT-MED (LM-24L) | - | 32 | 2,000 | 90,000 | 0.00005 | 0.25 |
| JOINT BIT-LARGE* (LM-24L) | - | 20 | 2,000 | 90,000 | 0.00005 | 0.25 |
Table 4: Optimization settings. We use the same settings for CC-News and the Toronto Book Corpus.

The optimization settings are presented in Table 4.

Read each of the three pairs of text below and decide which is a more reasonable extension of the initial words. Note: do not worry if one or both extensions is incomplete.

... 'If you try to tinker with this without the tools that only Congress has, you are as likely to break the cloud as you are to fix it,' he said. Google, which has waged similar battles with the government, and an array of other leading tech companies are supporting Microsoft in the case. Justices Sonia Sotomayor and Ruth Bader Ginsburg suggested the wait-for-Congress approach had some appeal. 'Wouldn't it be wiser just to say, 'Let's leave things as they are—if Congress wants to regulate in this brave new world, it should just give it up,' Ginsburg said, according to a summary of the opinion written for the high court's concurrence. The tech companies have a history of fighting government regulations in court, and have...
$\bigcirc$ ... 'If you try to tinker with this without the tools that only Congress has, you are as likely to break the cloud as you are to fix it,' he said. Google, which has waged similar battles with the government, and an array of other leading tech companies are supporting Microsoft in the case. Justices Sonia Sotomayor and Ruth Bader Ginsburg suggested the wait-for-Congress approach had some appeal. 'Wouldn't it be wiser just to say, 'Let's leave things as they are—if Congress wants to regulate in this brave new world, it should be regulating in this brave new world?',' wrote Sotomayor and Bader Ginsburg. A ruling is due by the end of June. If it's approved by Congress, the court could...

Figure 4: Screenshot of the human evaluation.

# A.4 HUMAN EVALUATION

A screenshot of the human evaluation experiments can be found in Fig. 4.
Every page asks for 4 comparisons, for one of which we know the ground-truth answer. We subsampled 333 sentences from the test set of CC-News and asked 3 Amazon Mechanical Turk workers ("turkers") to vote. We consider one continuation better if it gets more votes. To check the quality of the received ratings, we performed a qualification task beforehand, in which one of the continuations was real text, and we kept the top-half-performing turkers for further evaluation (corresponding to higher than $66.67\%$ accuracy in discriminating real text from LM samples – 26 qualified turkers in total). Then, in the actual experiment, we used one out of every four comparisons as an attention check and dropped responses whenever the turker did not pass the check.

# A.5 QUALITATIVE ANALYSIS

We present generation examples where our approach BIT-BASE outperforms the baseline BALM in Table 5, and where it underperforms in Table 6. The judgment is based on human evaluation, counting only cases in which all three turkers unanimously voted in favor of one model over the other.
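The vote-aggregation rules described above (majority vote for the main comparison; a unanimity filter for the qualitative examples) can be sketched as follows; the helper is illustrative, not the authors' code:

```python
from collections import Counter

def judge(votes, require_unanimous=False):
    """Aggregate 'A'/'B' votes from the three raters of one comparison.

    Returns the majority winner, or None when unanimity is required
    (as for the qualitative examples) and the raters disagree."""
    winner, count = Counter(votes).most_common(1)[0]
    if require_unanimous and count != len(votes):
        return None
    return winner
```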
| # | Prefix | Model | Generation |
| --- | --- | --- | --- |
| 9 | the timing of the donation couldn’t be better, and even went on to say that no one is more relevant than Chance right now. Thank you to Doug and @jewelosco for donating $1,000,000 to The New Chance Arts & Literature Fund, which will ensure more students have access to arts enrichment education! #supportcpiac.twitter.com/MXZtpscU5b — SocialWorks (@SocialWorks_Chi) November 20, 2017 “In the past, CPS hasn’t really gotten a lot of money from non-profits | BASE LM | . And the fact that the money is coming from the government, it makes a big difference,” he said. “We’re not the biggest donor of arts education, so to |
|  |  | BALM | , and it doesn’t even have the kind of resources or funding for the arts or literary programs that we need to do the things that people want to support,” said Chance. |
|  |  | BIT-BASE-WORST | , said Cpl. Chris Bissonnette of the Stony Brook Police Department. “So the reason is, as I think about the impact of these grants and these grants to schools |
|  |  | BIT-BASE | to fund programming. So, I think that it’s a great time for us to do something to support that,” said Chance. “We want to make sure that our programs |
|  |  | GOLD | and charitable arms and organizations because there’s been a past of corrupt activity in the city of Chicago,” Chance said. “And so, when it comes down to it, it |
| 10 | quarter. The penalties are still somewhat of a problem but tight ends Travis Kelce and Demetrius Harris made some impressive grown-man football plays. -It was nice to see running back Kareem Hunt get in the end zone for the first time since Week 3. He must feel good to end the drought. -Kelce was visibly frustrated on the sidelines and rightfully so. The officials seem to be leaning toward Oakland with calls today. Still, Kelce should’ve caught that easy pass that he dropped. -Quarterback Alex Smith has delivered a couple of nice deep balls to | BASE LM | get his hands on the ball this week. He threw two touchdown passes on Thursday. He should get another touchdown as the season goes on. He’s got a good chance to be one of |
|  |  | BALM | the Chiefs and the Chiefs defense has made them pay for it. The offense has done a nice job with the ball, and they’s been a factor. It’s nice to have |
|  |  | BIT-BASE-WORST | tight end Martavis Bryant to get the ball back, but the throw from Smith to Davis Webb is incomplete, which could lead to an incompletion. -The Chiefs are now 5-2 after winning |
|  |  | BIT-BASE | help the Chiefs win this game. His completion of a deep ball to wide receiver Sammy Watkins is a nice sign. -Jalen Ramsey should be back in the lineup today. The Jaguars are going |
|  |  | GOLD | receivers Albert Wilson and Tyreek Hill. The footwork by Wilson on the deep ball was amazing and Hill’s ball-tracking skills were on point as usual. Good things happen when Smith trusts |
| 11 | has an amazing ability to detoxify itself namely by the liver, kidneys, and skin,” she says. Detox programs and diets do sometimes make people feel better, but that happens more often if you’re eating and/or drinking things that make feel like crap (something it doesn’t sound like Kardashian was doing in the first place). And if, somehow, you discover that you have elevated levels of mercury or lead in your body like Kardashian said she had, it’s important to figure out why, Dr. Wider says. “Finding the source is extremely important for any heavy metal, so you | BASE LM | can’t say ‘no or I’ve never seen something of that value, so I’ll try and find it again and see what happens.” So don’t be afraid to find a way to detox or |
|  |  | BALM | want to get the full picture and not just have to look at the whole picture,” she says. “It can be a great idea to talk to your doctor to determine what’s causing your symptoms.” |
|  |  | BIT-BASE-WORST | can get rid of toxins that are found in foods like whole wheat bread, nuts like walnuts, walnuts and walnuts,” she says. “If a source of the metals has elevated levels of |
|  |  | BIT-BASE | can’t say, ‘Oh my God, it’s a lead, but I’m fine.’ But it’s definitely a problem in our bodies,” she says. “And if it’s a problem in |
|  |  | GOLD | can lower your exposure over time,” she says. Meaning, a detox diet probably isn’t going to help you as much as, say, reducing your intake of foods that are high in mercury (like |
Table 5: Example generations when BIT-BASE outperforms BALM according to human evaluation. BIT-BASE-WORST shows the LM sample with the highest energy score.
| # | Prefix | Model | Generation |
| --- | --- | --- | --- |
| 12 | but as the audio only came on halfway through the incident, it did not record his initial words to the girl or her response. It was also dropped in the hallway during their interaction, so it did not record any of what happened once the two left the library. Jones brought her outside to the library's front steps, where he waited for responding officers to arrive. They did not call for an ambulance, according to the lawsuit. Rather, when the officers arrived, they told her to spit blood on the grass instead of the concrete, and to move from the steps to a bench. Some 20 minutes | BASE LM | later, she was dead, according to the lawsuit. "It's not uncommon for an officer to spit blood on an unarmed, vulnerable adult. The use of spit is illegal in a public place." |
|  |  | BALM | later, Jones was arrested for disorderly conduct and released after posting bond, the suit said. She was arrested again on May 2. In the suit, Jones is seeking more than $1 million. A |
|  |  | BIT-BASE-WORST | later, he walked up on the ground, where they began to yell and yell again. When Jones refused, the officers ordered him not to leave or leave any more. Jones then pulled his gun on |
|  |  | BIT-BASE | after the officers' arrival, Jones and the girl left the library and headed to the parking lot of a nearby apartment complex. "This is the type of behavior that is most likely to cause harm to |
|  |  | GOLD | later, J.G's mother arrived and took her daughter to Cleveland Clinic in Lakewood, where she was treated for a dislocated jaw, a head injury, and an injury to her knee. |
| 13 | , Bronson said. "The initiative provides a variety of supports to early childhood programs' children, families and staff. The resources provided through this partnership increase the quality of the participating programs, which benefits the community and impacts our future in such a positive way," Scott said. Visit PNCGrowUpGreat.com/donorschoose. \nHere are Varsity sports headlines for April 13, 2018. Refresh to get the latest as we add news throughout the night as we collect scores: Best of the best in track and field Our Sentinel coverage area top performers lists for girls track and field | BASE LM | at the Varsity Track & Field Invite. \nThe U.S. Army Corps of Engineers has approved $2 billion in construction work along the U.S.-Mexico boundary as a way to |
|  |  | BALM | . Check back frequently for updates. The Sentinel also has a feature on the boys basketball and wrestling teams. Boys golf The Sentinel boys golf and wrestling teams have been one of those teams who has had some |
|  |  | BIT-BASE-WORST | \nLONDON, April 13 (IFR) - The following are some of the main factors expected to affect Spanish corporate bond prices on Wednesday. BAML: Spanish sovereign wealth fund PPV |
|  |  | BIT-BASE | \nA few weeks back, it's been rumored that the HTC Desire was going to be the company's last flagship phone, and now, a new leak has confirmed that it |
|  |  | GOLD | and boys track and field are updated going into the Saturday district meets. The season is heating up with more district and region races coming up next week. Click these links for girls top performers and boys top |
| 14 | leaders meeting November 17-18 in Papua New Guinea as potential Xi-TRump meet dates. If all else fails, Trump and Xi are also expected to meet for a bit at the G20 meeting at the end of November. On the economic calendar next week, the update on jobs and the U.S. trade deficit are the headliners on November 2. Notable earnings reports: Akamai Technologies (NASDAQ:AKAM), Mondelez International (NASDAQ:MDLZ) and Olin Corp. (NYSE:OLN) on October 29; Under Armour (NYSE: | BASE LM | UAA), General Motors (NYSE:GM) on November 4; and Procter & Gamble (NYSE:PG) for October. On the retail front, Lowe's Companies (NYSE:L |
|  |  | BALM | UA) on October 30; CVS Health (NASDAQ:CVS) on November 27; Intel Corporation (NASDAQ:IntC) on October 28; and Verizon Communications (NYSE:V |
|  |  | BIT-BASE-WORST | UAA) and Adidas (OTCPK:AddDYFF; OTCQX:AddDYYFGF; OLYMP), on November 30; and Qualcomm Incorporated (NASDAQ: |
|  |  | BIT-BASE | UAA), Johnson Controls (NYSE:ICI) and Cisco Systems (NASDAQ:CSCO) on November 6. \nA woman who had to have her nose and mouth taped as punishment |
|  |  | GOLD | UAA), eBay (NASDAQ:EBAY), General Electric (NYSE:GE), Coca-Cola (NYSE:KO), Pfizer (NYSE:PFE) and Electronic Arts (NAS |
Table 6: Example generations when BIT-BASE underperforms BALM according to human evaluation. BIT-BASE-WORST shows the LM sample with the highest energy score. \ No newline at end of file diff --git a/residualenergybasedmodelsfortextgeneration/images.zip b/residualenergybasedmodelsfortextgeneration/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..e52c7abcda746e5f9e7010587df74874176c3dc7 --- /dev/null +++ b/residualenergybasedmodelsfortextgeneration/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:63ff083b09b8d0bd97ce106581241e278bf4f35e01b24ad1f74a7898eafd32d0 +size 1227660 diff --git a/residualenergybasedmodelsfortextgeneration/layout.json b/residualenergybasedmodelsfortextgeneration/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..3552a8229242d4b1b1bc6f64c768666cc678582f --- /dev/null +++ b/residualenergybasedmodelsfortextgeneration/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:76f06f30d5c1ab2ad4f9e6ceeef593f2b4b6ef8cee7ba28a641786be891bd867 +size 480312 diff --git a/restrictingtheflowinformationbottlenecksforattribution/164e4c15-913a-4f81-853c-4cc1cac7ef99_content_list.json b/restrictingtheflowinformationbottlenecksforattribution/164e4c15-913a-4f81-853c-4cc1cac7ef99_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..b5a0baeb7961865cc00b9189dfed6e131620dfdf --- /dev/null +++ b/restrictingtheflowinformationbottlenecksforattribution/164e4c15-913a-4f81-853c-4cc1cac7ef99_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ac7848f659b4fa59ef9e32ecd0d49527077428c2f4e61f20b12e58bde168e95b +size 115696 diff --git a/restrictingtheflowinformationbottlenecksforattribution/164e4c15-913a-4f81-853c-4cc1cac7ef99_model.json b/restrictingtheflowinformationbottlenecksforattribution/164e4c15-913a-4f81-853c-4cc1cac7ef99_model.json new file mode 100644 index
0000000000000000000000000000000000000000..5b536a94fe9a4fcab4ec62b741861b869af62bbd --- /dev/null +++ b/restrictingtheflowinformationbottlenecksforattribution/164e4c15-913a-4f81-853c-4cc1cac7ef99_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f6636aba67136c727cd012c0dbddf7546b38166ac0a54043bbe829cb7aef9020 +size 141905 diff --git a/restrictingtheflowinformationbottlenecksforattribution/164e4c15-913a-4f81-853c-4cc1cac7ef99_origin.pdf b/restrictingtheflowinformationbottlenecksforattribution/164e4c15-913a-4f81-853c-4cc1cac7ef99_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f05ad81d8af461643b5bb1b1c70adf35ccc5c876 --- /dev/null +++ b/restrictingtheflowinformationbottlenecksforattribution/164e4c15-913a-4f81-853c-4cc1cac7ef99_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:245726016b65646b9b9b1c4c72c0da669f8dd762bea924b84d264906c2681039 +size 5076556 diff --git a/restrictingtheflowinformationbottlenecksforattribution/full.md b/restrictingtheflowinformationbottlenecksforattribution/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a9078a95c7e3c5d7931edb9aae616ab6a5b0b933 --- /dev/null +++ b/restrictingtheflowinformationbottlenecksforattribution/full.md @@ -0,0 +1,557 @@ +# RESTRICTING THE FLOW: INFORMATION BOTTLENECKS FOR Attribution + +Karl Schulz $^{1*†}$ , Leon Sixt $^{2*}$ , Federico Tombari $^{1}$ , Tim Landgraf $^{2}$ + +* contributed equally † work done at the Freie Universität Berlin + +Technische Universität München1 Freie Universität Berlin2 + +Corresponding authors: karl.schulz@tum.de, leon.sixt@fu-berlin.de + +# ABSTRACT + +Attribution methods provide insights into the decision-making of machine learning models like artificial neural networks. For a given input sample, they assign a relevance score to each individual input variable, such as the pixels of an image. 
In this work, we adopt the information bottleneck concept for attribution. By adding noise to intermediate feature maps, we restrict the flow of information and can quantify (in bits) how much information image regions provide. We compare our method against ten baselines using three different metrics on VGG-16 and ResNet-50, and find that our methods outperform all baselines in five out of six settings. The method's information-theoretic foundation provides an absolute frame of reference for attribution values (bits) and a guarantee that regions scored close to zero are not necessary for the network's decision.

# 1 INTRODUCTION

Deep neural networks have become state of the art in many real-world applications. However, their increasing complexity makes it difficult to explain the model's output. For some applications, such as medical decision making or autonomous driving, model interpretability is an important requirement with legal implications. Attribution methods (Selvaraju et al., 2017; Zeiler & Fergus, 2014; Smilkov et al., 2017) aim to explain the model behavior by assigning a relevance score to each input variable. When applied to images, the relevance scores can be visualized as heatmaps over the input pixel space, thus highlighting salient areas relevant for the network's decision.

For attribution, no ground truth exists. If an attribution heatmap highlights subjectively irrelevant areas, this might correctly reflect the network's unexpected way of processing the data, or the heatmap might be inaccurate (Nie et al., 2018; Viering et al., 2019; Sixt et al., 2019). Given an image of a railway locomotive, the attribution map might highlight the train tracks instead of the train itself. Current attribution methods cannot guarantee that the network is ignoring the low-scored locomotive for the prediction.

We propose a novel attribution method that estimates the amount of information an image region provides to the network's prediction.
We use a variational approximation to upper-bound this estimate and can therefore guarantee that areas with zero bits of information are not necessary for the prediction. Figure 1 shows an exemplary heatmap of our method. Up to 3 bits per pixel are available for regions corresponding to the monkeys' faces, whereas the tree is scored with close to zero bits per pixel. We can thus guarantee that the tree is not necessary for predicting the correct class.

To estimate the amount of information, we adapt the information bottleneck concept (Tishby et al., 2000; Alemi et al., 2017). The bottleneck is inserted into an existing neural network and restricts the information flow by adding noise to the activation maps. Unimportant activations are replaced almost entirely by noise, removing all information for subsequent network layers. We developed two approaches to learn the parameters of the bottleneck – either using a single sample (Per-Sample Bottleneck) or the entire dataset (Readout Bottleneck).

![](images/a5b52c5b1ad811a54159462d6db92ae0e5145d3e2bc7308fa26fd9b80551d6ed.jpg)
Figure 1: Exemplary heatmap of the Per-Sample Bottleneck for the VGG-16.

We evaluate against ten different baselines. First, we calculate the Sensitivity-n metric proposed by Ancona et al. (2018). Second, we quantify how well the object of interest is localized using bounding boxes, and we extend the degradation task proposed by Ancona et al. (2017). On all these metrics, our method outperforms the baselines consistently. Additionally, we test the impact of cascading layer-wise weight randomizations on the attribution heatmaps (Adebayo et al., 2018). For reproducibility, we share our source code and provide an easy-to-use implementation.

We name our method IBA, which stands for Information Bottleneck Attribution. It provides a theoretic upper-bound on the used information while demonstrating strong empirical performance.
Our work improves model interpretability and increases trust in attribution results.

To summarize our contributions:

- We adapt the information bottleneck concept for attribution to estimate the information used by the network. Information theory provides a guarantee that areas scored irrelevant are indeed not necessary for the network's prediction.
- We propose two ways – the Per-Sample and the Readout Bottleneck – to learn the parameters of the information bottleneck.
- We contribute a novel evaluation method for attribution based on bounding boxes, and we extend the metric proposed by Ancona et al. (2017) to provide a single scalar value and improve the metric's comparability between different network architectures.

# 2 RELATED WORK

Attribution is an active research topic. Gradient Maps (Baehrens et al., 2010) and Saliency Maps (Simonyan & Zisserman, 2014) are based on calculating the gradient of the target output neuron w.r.t. the input features. Integrated Gradient (Sundararajan et al., 2017) and SmoothGrad (Smilkov et al., 2017) improve over gradient-based attribution maps by averaging the gradient of multiple inputs, either over brightness level interpolations or in a local neighborhood. Other methods, such as Layerwise Relevance Propagation (LRP) (Bach et al., 2015), Deep Taylor Decomposition (DTD) (Montavon et al., 2017), Guided Backpropagation (GuidedBP) (Springenberg et al., 2014), or DeepLIFT (Shrikumar et al., 2017), modify the propagation rule. PatternAttribution (Kindermans et al., 2018) builds upon DTD by estimating the signal's direction for the backward propagation. Perturbation-based methods are not based on backpropagation and treat the model as a black box. Occlusion (Zeiler & Fergus, 2014) measures the importance as the drop in classification accuracy after replacing individual image patches with zeros. Similarly, blurring image patches has been used to quantify the importance of high-frequency features (Greydanus et al., 2018).
Grad-Cam (Selvaraju et al., 2017) takes the activations of the final convolutional layer to compute relevance scores. The authors also combine their method with GuidedBP: GuidedGrad-CAM. Ribeiro et al. (2016) uses image superpixels to explain deep neural networks. High-level concepts rather than input pixels are scored by TCAV (Kim et al., 2018). Khakzar et al. (2019) prune irrelevant neurons. Similar to our work, Macdonald et al. (2019) uses a rate-distortion perspective, but minimizes the norm of the mask instead of the shared information. To the best of our knowledge, we are the first to estimate the amount of used information for attribution purposes.

Although many attribution methods exist, no standard evaluation benchmark is established. Thus, determining the state of the art is difficult. The performance of attribution methods is highly dependent on the model and dataset used. Often only a purely visual comparison is performed (Smilkov et al., 2017; Springenberg et al., 2014; Montavon et al., 2017; Sundararajan et al., 2017; Bach et al., 2015). The most commonly used benchmark is to degrade input images according to the attribution heatmap and measure the impact on the model output (Kindermans et al., 2018; Samek et al., 2016; Ancona et al., 2017). The Sensitivity-n score (Ancona et al., 2018) is obtained by randomly masking the input image and then measuring the correlation between removed attribution mass and model performance. For the ROAR score (Hooker et al., 2018), the network is trained from scratch on the degraded images. It is computationally expensive but avoids out-of-domain inputs – an inherent problem of masking. Adebayo et al. (2018) proposes a sanity check: if the network's parameters are randomized, the attribution output should change too.

Adding noise to a signal reduces the amount of information (Shannon, 1948). It is a popular way to regularize neural networks (Srivastava et al., 2014; Kingma et al., 2015; Gal et al., 2017).
For regularization, the noise is applied independently of the input, and no attribution maps can be obtained. In Variational Autoencoders (VAEs) (Kingma & Welling, 2013), noise restricts the information capacity of the latent representation. In our work, we construct a similar information bottleneck that can be inserted into an existing network. Deep convolutional neural networks have been augmented with information bottlenecks before to improve generalization and robustness against adversarial examples (Achille & Soatto, 2018; Alemi et al., 2017). Zhmoginov et al. (2019) and Taghanaki et al. (2019) extract salient regions using information bottlenecks. In contrast to our work, they already add the information bottleneck when training the network and do not focus on post-hoc explanations.

# 3 INFORMATION BOTTLENECK FOR ATTRIBUTION

Instead of a backpropagation approach, we quantify the flow of information through the network in the forward pass. Given a pre-trained model, we inject noise into a feature map, which restricts the flow of information through it. We optimize the intensity of the noise to minimize the information flow, while simultaneously maximizing the original model objective, the classification score. The parameters of the original model are not changed.

# 3.1 INFORMATION BOTTLENECK FOR ATTRIBUTION

Generally, the information bottleneck concept (Tishby et al., 2000) describes a limitation of available information. Usually, the labels $Y$ are predicted using all information from the input $X$ . The information bottleneck limits the information available to predict $Y$ by introducing a new random variable $Z$ . The information the new variable $Z$ shares with the labels $Y$ is maximized while minimizing the information $Z$ and $X$ share:

$$
\max I [ Y, Z ] - \beta I [ X, Z ], \tag {1}
$$

where $I[X,Z]$ denotes the mutual information and $\beta$ controls the trade-off between predicting the labels well and using little information of $X$ .
A common way to reduce the amount of information is to add noise (Alemi et al., 2017; Kingma & Welling, 2013).

For attribution, we inject an information bottleneck into a pretrained network. The bottleneck is inserted into a layer that still contains local information; e.g., for the ResNet, the bottleneck is added after conv3_* (after the last conv3 block). Let $R$ denote the intermediate representations at this specific depth of the network, i.e. $R = f_{l}(X)$ where $f_{l}$ is the $l$ -th layer output. We want to reduce information in $R$ by adding noise. As the neural network is already trained, adding noise should preserve the variance of the input to the following layers. Therefore, we also damp the signal $R$ when increasing the noise, effectively replacing the signal partly with noise. In the extreme case, when no signal is transmitted, we replace $R$ completely with noise of the same mean and variance as $R$ . For this purpose, we estimate the mean $\mu_{R}$ and variance $\sigma_{R}^{2}$ of each feature of $R$ empirically. As information bottleneck, we then apply a linear interpolation between signal and noise:

$$
Z = \lambda (X) R + (1 - \lambda (X)) \epsilon , \tag {2}
$$

where $\epsilon \sim \mathcal{N}(\mu_R,\sigma_R^2)$ and $\lambda (X)$ controls the damping of the signal and the addition of the noise. The value of $\lambda$ is a tensor with the same dimensions as $R$ and $\lambda_{i}\in [0,1]$ . Given $\lambda_i(X) = 1$ at the feature map location $i$ , the bottleneck transmits all information as $Z_{i} = R_{i}$ . Whereas if $\lambda_{i} = 0$ , then $Z_{i} = \epsilon$ and all information of $R_{i}$ is lost and replaced with noise. It might be tempting to think that $Z$ from equation 2 has the same mean and variance as $R$ . This is not the case in general, as $\lambda (X)$ and $R$ both depend on $X$ (for more detail, see Appendix E).

In our method, we consider an area relevant if it contains useful information for classification.
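The interpolation of equation 2, for a single scalar activation, can be sketched as follows (stdlib-only; the helper name is ours, not the authors' implementation):

```python
import random

def bottleneck(r, lam, mu_r, sigma_r):
    """Eq. 2 at one feature-map location: damp the signal r and mix in noise
    carrying the empirical statistics (mu_r, sigma_r) of R.

    lam = 1 transmits r unchanged; lam = 0 replaces it entirely by noise."""
    eps = random.gauss(mu_r, sigma_r)
    return lam * r + (1.0 - lam) * eps
```

With `lam = 0` the output is pure noise with the mean and variance of `R`, so no information about `r` survives for subsequent layers.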
We need to estimate how much information $Z$ still contains about $R$ . This quantity is the mutual information $\mathrm{I}[R,Z]$ , which can be written as:

$$
\mathrm {I} [ R, Z ] = \mathbb {E} _ {R} \left[ D _ {\mathrm {K L}} [ P (Z | R) | | P (Z) ] \right], \tag {3}
$$

where $P(Z|R)$ and $P(Z)$ denote the respective probability distributions. We have no analytic expression for $P(Z)$ since it would be necessary to integrate over the feature map, $p(z) = \int_{R} p(z|r)p(r)\mathrm{d}r$ – an intractable integral. It is a common problem that the mutual information cannot be computed exactly but is instead approximated (Poole et al., 2019; Suzuki et al., 2008). We resort to a variational approximation $Q(Z) = \mathcal{N}(\mu_R, \sigma_R^2)$ which assumes that all dimensions of $Z$ are normally distributed and independent – a reasonable assumption, as activations after linear or convolutional layers tend to have a Gaussian distribution (Klambauer et al., 2017). The independence assumption will generally not hold. However, this only overestimates the mutual information, as shown below. Substituting $Q(Z)$ into equation 3, we obtain:

$$
\mathrm {I} [ R, Z ] = \mathbb {E} _ {R} \left[ D _ {\mathrm {K L}} [ P (Z | R) | | Q (Z) ] \right] - D _ {\mathrm {K L}} [ P (Z) | | Q (Z) ]. \tag {4}
$$

The derivation is shown in appendix D and follows Alemi et al. (2017). The first term contains the KL-divergence between two normal distributions, which is easy to evaluate, and we use it to approximate the mutual information.

![](images/e325136224b066581f2bbfc9beb684a8bff2b86900674d324bd82e30eaa4e319.jpg)
Figure 2: Per-Sample Bottleneck: The mask (blue) contains an $\alpha_{i}$ for each $r_i$ in the intermediate feature maps $R$ (green). The parameter $\alpha$ controls how much information is passed to the next layer. The mask $\alpha$ is optimized for each sample individually according to equation 6.
The information loss function $\mathcal{L}_I$ is therefore:

$$
\mathcal {L} _ {I} = \mathbb {E} _ {R} \left[ D _ {\mathrm {K L}} [ P (Z | R) | | Q (Z) ] \right]. \tag {5}
$$

We know that $\mathcal{L}_I$ overestimates the mutual information, i.e. $\mathcal{L}_I \geq \mathrm{I}[R,Z]$ , as the second KL-divergence term $D_{\mathrm{KL}}[P(Z)||Q(Z)]$ is non-negative. If $\mathcal{L}_I$ is zero for an area, we can guarantee that information from this area is not necessary for the network's prediction. Information from this area might still be used when no noise is added.

We aim to keep only the information necessary for correct classification. Thus, the mutual information should be minimal while the classification score should remain high. Let $\mathcal{L}_{CE}$ be the cross-entropy of the classification. Then, we obtain the following optimization problem:

$$
\mathcal {L} = \mathcal {L} _ {C E} + \beta \mathcal {L} _ {I}, \tag {6}
$$

where the parameter $\beta$ controls the relative importance of both objectives. For a small $\beta$ , more bits of information flow; for a larger $\beta$ , fewer. We propose two ways of finding the parameters $\lambda$ – the Per-Sample Bottleneck and the Readout Bottleneck. For the Per-Sample Bottleneck, we optimize $\lambda$ for each image individually, whereas in the Readout Bottleneck, we train a distinct neural network to predict $\lambda$ .

# 3.2 PER-SAMPLE BOTTLENECK

For the Per-Sample Bottleneck, we use the bottleneck formulation described above and optimize $\mathcal{L}$ for individual samples – not for the complete dataset at once. Given a sample $x$ , $\lambda$ is fitted to the sample to reflect important and unimportant regions in the feature space. A diagram of the Per-Sample Bottleneck is shown in Figure 2.

Parameterization The bottleneck parameters $\lambda$ have to be in [0, 1]. To simplify optimization, we parametrize $\lambda = \mathrm{sigmoid}(\alpha)$ .
This parametrization allows $\alpha \in \mathbb{R}^d$ and avoids any clipping of $\lambda$ to [0, 1] during optimization.

Initialization As when training neural networks, the initialization of parameters matters. In the beginning, we want to retain all the information. For all dimensions $i$ , we initialize $\alpha_{i} = 5$ and thus $\lambda_{i}\approx 0.993\Rightarrow Z\approx R$ . At first, the bottleneck has practically no impact on the model performance. It then deviates from this starting point to suppress unimportant regions.

![](images/e2b77695ed4788bb007a864c2ab17b61a26a4c087edd1463e07ebcf94780c116.jpg)
Figure 3: Readout Bottleneck: In the first forward pass (1), feature maps are collected at different depths. The readout network uses a resized version of the feature maps to predict the parameters for the bottleneck layer. In the second forward pass (2), the bottleneck is inserted and noise added. All parameters of the analyzed network are kept fixed.

Optimization We do 10 iterations using the Adam optimizer (Kingma & Ba, 2014) with learning rate 1 to fit the mask $\alpha$ . To stabilize the training, we copy the single sample 10 times and apply different noise to each. In total, we execute the model on 100 samples to create a heatmap, comparable to other methods such as SmoothGrad. After the optimization, the model usually predicts the target with probability close to 1, indicating that only negative evidence was removed.

Measure of information To measure the importance of each feature in $Z$ , we evaluate $D_{\mathrm{KL}}(P(Z|R)||Q(Z))$ per dimension. It shows where the information flows. We obtain a two-dimensional heatmap $m$ by summing over the channel axis: $m_{[h,w]} = \sum_{i=1}^{c} D_{\mathrm{KL}}\big(P(Z_{[i,h,w]}|R_{[i,h,w]})\,||\,Q(Z_{[i,h,w]})\big)$ . As convolutional neural networks preserve the locality in their channel maps, we use bilinear interpolation to resize the map to the image size.
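Both $P(Z|R)$ and $Q(Z)$ are Gaussian at every location, so the per-dimension KL term has a closed form. Under equation 2, $P(Z|R)$ has mean $\lambda r + (1-\lambda)\mu_R$ and standard deviation $(1-\lambda)\sigma_R$; a stdlib sketch (function names are ours, not the authors' code):

```python
import math

def kl_gauss(m1, s1, m0, s0):
    """D_KL( N(m1, s1^2) || N(m0, s0^2) ) in nats."""
    return math.log(s0 / s1) + (s1 ** 2 + (m1 - m0) ** 2) / (2 * s0 ** 2) - 0.5

def bits_at(r, lam, mu, sigma):
    """Information flow at one location: P(Z|R) = N(lam*r + (1-lam)*mu,
    ((1-lam)*sigma)^2) against Q(Z) = N(mu, sigma^2), converted to bits."""
    m1 = lam * r + (1.0 - lam) * mu
    s1 = (1.0 - lam) * sigma
    return kl_gauss(m1, s1, mu, sigma) / math.log(2.0)  # nats -> bits
```

Summing `bits_at` over the channel axis at each spatial position yields the heatmap $m_{[h,w]}$; with `lam = 0` the two distributions coincide and the location contributes exactly zero bits.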
For the ResNet-50, we insert the bottleneck after layer conv3_*. Choosing a later layer with a lower spatial resolution would increase the blurriness of the attribution maps, due to the required interpolation. The effect of different depth choices for the bottleneck is visualized in Figure 4.

Enforcing local smoothness Pooling operations and convolutional layers with stride greater than 1 ignore parts of the input. The ignored parts cause the Per-Sample Bottleneck to overfit to a grid structure (shown in Appendix B). To obtain a robust and smooth attribution map, we convolve the sigmoid output with a fixed Gaussian kernel with standard deviation $\sigma_{s}$. Smoothing the mask during training is not equivalent to smoothing the resulting attribution map, because during training the gradient is also averaged locally. Combining everything, the parametrization for the Per-Sample Bottleneck is:

$$
\lambda = \operatorname{blur}\left(\sigma_s, \operatorname{sigmoid}(\alpha)\right). \tag{7}
$$

# 3.3 READOUT BOTTLENECK

For the Readout Bottleneck, we train a second neural network to predict the mask $\alpha$. In contrast to the Per-Sample Bottleneck, this model is trained on the entire training set. In Figure 3, the Readout Bottleneck is depicted. Kümmerer et al. (2014) introduced the readout concept for gaze prediction. The general idea is to collect feature maps from different depths and then combine them using 1x1 convolutions.

In the first forward pass, no noise is added and we collect the different feature maps. As the spatial resolution of the feature maps differs, we interpolate them bilinearly to match the spatial dimensions of the bottleneck layer. The readout network then predicts the information mask based on the collected feature maps. In the second forward pass, we insert the bottleneck layer into the network and restrict the flow of information.
+ +Except for the learned importance values, the Readout Bottleneck is identical to the mechanism of the Per-Sample Bottleneck. The measure of information works in the same way as for the Per-Sample Bottleneck and we also use the same smoothing. Given a new sample, we can obtain a heatmap by merely collecting the feature maps and executing the readout network. + +![](images/f85f409d16b679746e4f55116d689045c72b9f7a7702655aec419c584e6216a7.jpg) + +![](images/e70653b757177db615dd48311ea056b74dfa8ab5a6a248cd36c083501834aa4d.jpg) + +![](images/89cac803e12a76f9e4b4a36a10aeba248da9b6145b227c27b5902a38fd8c4084.jpg) + +![](images/3d354a266980ae33f3e936ab98f34f841dcb19c743fff0411176ed9d674b3517.jpg) + +![](images/ec37ee34dd46d23c3fbe3ce400e465b77b11af87c8df5e13205209570f39c21e.jpg) + +![](images/c2b65abb3eccef09ba6149ad4da8f738de3c386dc1dfd50af1af5bc18f0377b2.jpg) +Figure 4: Effect of varying layer depth and $\beta$ on the Per-Sample Bottleneck for the ResNet-50. The color bars measure the bits per input pixel. The resulting output probability of the correct class $p = p(y|x,\beta)$ is decreasing for higher $\beta$ . + +![](images/e678dcf0aa6f60a2fdb8e410c4220348560af3e1749ceb556e8e179f537f6327.jpg) + +![](images/cca8de1f481ec9b724b2b5b6bbf34af8dc0105c2148380ee831aa642fa1e233b.jpg) + +![](images/44bf1db090140a933e972a82d8ad3726d4dbfc5c49aed4c084640ccae2cce1b0.jpg) + +![](images/18a56df430f9030ac74da6b082c1796a034d1f38dbe3903a7ef23395a131adc9.jpg) + +The readout network consists of three 1x1 convolutional layers. ReLU activations are applied between convolutional layers and a final sigmoid activation yields $\lambda \in [0,1]$ . As the input consists of upscaled feature maps, the field-of-view is large, although the network itself only uses 1x1 conv. kernels. 
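Such a readout module can be sketched in PyTorch as follows. The class name and the hidden channel widths are our own illustrative choices; the text above only specifies three 1x1 convolutions with ReLUs in between, a final sigmoid, and bilinear resizing of the collected feature maps:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReadoutNetwork(nn.Module):
    """Predicts the bottleneck mask from feature maps collected at several depths."""

    def __init__(self, in_channels, bottleneck_channels, hidden=(64, 32)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden[0], kernel_size=1), nn.ReLU(),
            nn.Conv2d(hidden[0], hidden[1], kernel_size=1), nn.ReLU(),
            nn.Conv2d(hidden[1], bottleneck_channels, kernel_size=1), nn.Sigmoid(),
        )

    def forward(self, feature_maps, target_hw):
        # Bilinearly resize all collected maps to the bottleneck's spatial size,
        # then concatenate them along the channel axis.
        resized = [F.interpolate(f, size=target_hw, mode="bilinear",
                                 align_corners=False) for f in feature_maps]
        return self.net(torch.cat(resized, dim=1))  # mask lambda in (0, 1)
```

Because the inputs are upscaled feature maps from deeper layers, each 1x1 convolution effectively sees a large receptive field, as noted above.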
# 4 EVALUATION

# 4.1 EXPERIMENTAL SETUP

As neural network architectures, we selected the ResNet-50 (He et al., 2016) and the VGG-16 (Simonyan & Zisserman, 2014), using pretrained weights from the torchvision package (Paszke et al., 2017). These two models cover a range of concepts: dimensionality reduction by striding or max-pooling, residual connections, batch normalization, dropout, low depth (16-weight-layer VGG), and high depth (50-weight-layer ResNet). This variety makes the evaluation less likely to overfit to a specific model type. Both are commonly used in the literature on attribution methods. For PatternAttribution on the VGG-16, we obtained weights for the signal estimators from Kindermans et al. (2018).

As naive baselines, we selected random attribution, Occlusion with patch sizes 8x8 and 14x14, and Gradient Maps. SmoothGrad and Integrated Gradients cover methods that accumulate gradients. We include three methods with modified backpropagation rules: PatternAttribution, GuidedBP, and LRP. As our implementations of PatternAttribution and LRP do not support skip connections, we report no results for them on the ResNet-50. We also include Grad-CAM and its combination with GuidedBP, GuidedGrad-CAM.

For the compared methods, we use the hyperparameters suggested by the original authors. For LRP (Bach et al., 2015), different rules exist. We include the most commonly used $\alpha = 1$, $\beta = 0$, $\epsilon = 0$ rule, which is also displayed in the figures. We additionally include $\alpha = 0$, $\beta = -1$, $\epsilon = 5$ as it gives better results on the bounding-box task. For Sensitivity-n and the sanity checks, only the latter LRP variant is evaluated. For our methods, the hyperparameters are obtained using grid search with the degradation metric as objective (Appendix F). The readout network is trained on the training set of the ILSVRC12 dataset (Russakovsky et al., 2015) for $E = 20$ epochs.
The optimization objective of the bottleneck is $\mathcal{L}_{CE} + \beta \mathcal{L}_I$ as given in equation 6. Generally, the information loss $\mathcal{L}_I$ is larger than the classifier loss by several orders of magnitude, as it sums over height $h$, width $w$, and channels $c$. We therefore use $k = hwc$ as a reference point to select $\beta$. In Figure 4, we display the effect of different orders of magnitude of $\beta$ and of the layer depth for the ResNet-50 (see Appendix C for the VGG-16). A uniform, uninformative heatmap is obtained for $\beta \geq 1000/k$: all information gets discarded, resulting in an output probability of 0 for the correct class. The heatmap for $\beta = 0.1/k$ is similar to $\beta = 1/k$, but more information is passed, i.e. less noise is added. In Appendix F, we also compare pre- to post-training accuracy and estimate the mutual information for both Bottleneck types. For the Per-Sample Bottleneck, we investigate $\beta = 1/k, 10/k, 100/k$. The Readout network is trained with the best performing value $\beta = 10/k$.

![](images/38632d4880f7cf3683467353954a7615cb8a4a1a24c91a915204345a76b32465.jpg)
(a) Gradient

![](images/4bd7b95547c960c21d5afa74997942a87432efa7f4a4dcb3e097a258a5249dfc.jpg)
(b) Saliency

![](images/beddc05114955538876a7c521c62270fda941f12cb40e2180f3e9e47aa05d102.jpg)
(c) SmoothGrad

![](images/7dc07c9516336208bf4b017e51feee7ed3f2299e774c51d7300195bcd42a91ef.jpg)
(d) Int. Grad.

![](images/affbb132760d0a4dee317820850a11e85e1d466e828039b7f129e86a056e0772.jpg)
(e) GuidedBP

![](images/5f876ddf07bfc9d78afa7336af481c532ec132dcfbc8030e5a8c90a15a9d071f.jpg)
(f) Occlusion-14

![](images/65ecf0af3683dba8b9dde0d16ea4855b77f09d68fb512287a034155318248495.jpg)
(g) Grad-CAM

![](images/11e889c702eaae94a9f7d03c79dcd02f9bad28a477730cb2d3fba407b84f36c1.jpg)
(h) G.Grad-CAM

![](images/b1d003271197099173acbdf0dd96d2862fef6cecad8e7fb9cbf015f92425c8c5.jpg)
(i) PatternAttr.

![](images/1a3eda9fb6875a660e14c4deebf8dccbd3333577fe2df50dc76266b755bd9956.jpg)
(j) LRP

![](images/a5a587c9b6961b22c4e71986e1d2e01a594d059588bf02f0a754a416afb454d1.jpg)
(k) Per-Sample

![](images/b3f3a8582bfaad328473d77100bb9a13ba1e8b487e9a5b8889914d3c210c4a88.jpg)
(l) Readout
Figure 5: Heatmaps of all implemented methods for the VGG-16 (see Appendix A for more).

# 4.2 QUALITATIVE ASSESSMENT

In Figure 5, the heatmaps of all evaluated methods are shown (for more samples, see Appendix A). Subjectively, both the Per-Sample and the Readout Bottleneck identify areas relevant to the classification well. While Guided Backpropagation and PatternAttribution tend to highlight edges, the Per-Sample Bottleneck focuses on image regions. For the Readout Bottleneck, the attribution is concentrated a little more on the object edges. Compared to Grad-CAM, both our methods are more specific, i.e. fewer pixels are scored high.

# 4.3 SANITY CHECK: RANDOMIZATION OF MODEL PARAMETERS

Adebayo et al. (2018) investigate the effect of parameter randomization on attribution maps. A sound attribution method should depend on the entire network's parameter set. Starting from the last layer, an increasing proportion of the network parameters is re-initialized until all parameters are random. We excluded PatternAttribution, as the randomization would require re-estimating the signal directions. The difference between the original heatmap and the heatmap obtained from the randomized model is quantified using SSIM (Wang et al., 2004). We discuss implementation details in Appendix G.1.

In Figure 6, we display the results of the sanity check. For our methods, we observe that randomizing the final dense layer drops the mean SSIM by around 0.4. The values for the Readout Bottleneck are of limited informativeness, as we did not re-train it after the randomization. For SmoothGrad and Int. Gradients, the SSIM drops by more than 0.4.
The heatmaps of GuidedBP and LRP remain similar even if a large portion of the network's parameters is randomized – they do not explain the network prediction faithfully. Nie et al. (2018) provides theoretical insights about why GuidedBP fails. Sixt et al. (2019) analyzes why LRP and other modified BP methods fail.

# 4.4 SENSITIVITY-N

Ancona et al. (2018) proposed Sensitivity-n as an evaluation metric for attribution methods. Sensitivity-n masks the network's input randomly and then measures how strongly the amount of attribution in the mask correlates with the drop in classifier score. Given a set $T_{n}$ containing $n$ randomly selected pixel indices, Sensitivity-n measures the Pearson correlation coefficient:

$$
\operatorname{corr}\left(\sum_{i \in T_n} R_i(x),\; S_c(x) - S_c\left(x_{[x_{T_n} = 0]}\right)\right), \tag{8}
$$

where $S_{c}(x)$ is the classifier logit output for class $c$, $R_{i}$ is the relevance at pixel $i$, and $x_{[x_{T_n} = 0]}$ denotes the input with all pixels in $T_{n}$ set to zero. As in the original paper, we pick the number of masked pixels $n$ on a log-scale between 1 and $80\%$ of all pixels.

![](images/e8c474c06388fae716f8dd831059e239ce9f9fdab94a1dbffb2249c1c6b71030.jpg)

![](images/4a3d2b65d28c67f0932f056c63e30ec2261405d570f71ce9779ffd06fa6e75dd.jpg)

![](images/f293c713ff583830c748aa22ef210a10f99e0e3efe7129ba5b4bf7334833efc1.jpg)

![](images/61e2f5696a7b35760ec554d5a52b64fbdb9e04d00bee99d8da0e0bc6b700dfe7.jpg)
Figure 6: First row: drop in SSIM score when network layers are randomized. Best viewed in color. Second row: Sensitivity-n scores for ResNet-50 and VGG-16. For the ResNet-50, tile size 1x1 is also shown. We clipped the y-axis at 0.4 to improve discriminability.

![](images/c95bcda5245049673a3604ed1c801b234d4e7b2fbec37f7d4a3dcd76b2f71715.jpg)

![](images/44406c04dcddd7721de283ad6c71e59f01d5beff7aea2cbc7330bafd3efcf418.jpg)
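Equation 8 can be sketched in a few lines of NumPy. The function name `sensitivity_n` and the `score_fn` interface are our own; the sketch masks individual pixels, whereas the evaluation below also uses 8x8 tiles:

```python
import numpy as np

def sensitivity_n(attribution, score_fn, x, n, num_sets=100, rng=None):
    """Pearson correlation between the attribution mass of random pixel sets
    and the resulting drop in the class score (Ancona et al., 2018).

    `score_fn` returns the class logit S_c for an input; `attribution` has the
    same shape as `x`."""
    rng = np.random.default_rng() if rng is None else rng
    flat_x, flat_a = x.ravel(), attribution.ravel()
    base = score_fn(x)
    sums, drops = [], []
    for _ in range(num_sets):
        idx = rng.choice(flat_x.size, size=n, replace=False)
        masked = flat_x.copy()
        masked[idx] = 0.0                       # zero out the selected pixels
        sums.append(flat_a[idx].sum())
        drops.append(base - score_fn(masked.reshape(x.shape)))
    return np.corrcoef(sums, drops)[0, 1]
```

As a sanity check, for a linear model with attribution $R_i = w_i x_i$ the attribution mass of a set equals the score drop exactly, so the correlation is 1 for every $n$.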
For each $n$ , we generate 100 different index sets $T$ and test each on 1000 randomly selected images from the validation set. The correlation is calculated over the different index sets and then averaged over all images. + +In Figure 6, the Sensitivity-n scores are shown for ResNet-50 and VGG-16. When masking inputs pixel-wise as done in Ancona et al. (2018), the Sensitivity-n score across methods are not well discriminable, all scores range within the lower $10\%$ of the scale. We ran the metric with a tile size of 8x8 pixels. This resulted in an increase of the Sensitivity-n score to $30\%$ of the scale. Although not shown in the figure due to the zooming, Occlusion-8x8 performs perfectly for small $n$ as its relevance scores correspond directly to the drop in logits per 8x8 tile. We find that the Per-Sample Bottlenecks $\beta = 10 / k$ perform best for both models above $n = 2 \cdot 10^3$ pixels, i.e. when more than $2\%$ of all pixels are masked. + +# 4.5 BOUNDING BOX + +To quantify how well attribution methods identify and localize the object of interest, we rely on bounding boxes available for the ImageNet dataset. Bounding boxes may contain irrelevant areas, in particular for non-rectangular objects. We restrict our evaluation to images with bounding boxes covering less than $33\%$ of the input image. In total, we run the bounding box evaluation on 11,849 images from the ImageNet validation set. + +If the bounding box contains $n$ pixels, we measure how many of the $n$ -th highest scored pixels are contained in the bounding box. By dividing by $n$ , we obtain a ratio between 0 and 1. The results are shown in Table 1. The Per-Sample Bottleneck outperforms all baselines on both VGG-16 and ResNet-50 each by a margin of 0.152. + +An alternative metric would be to take the sum of attribution in the bounding box and compare it to the total attribution in the image. We found this metric is not robust against extreme values. 
Under this metric, basic Gradient Maps would be the best method for the ResNet-50, as a few pixels receiving extreme scores are enough to dominate the sum.

# 4.6 IMAGE DEGRADATION

As a further quantitative evaluation, we rely on the degradation task as used by Ancona et al. (2017); Kindermans et al. (2018); Hooker et al. (2018); Samek et al. (2016). Given an attribution heatmap, the input is split into tiles, which are ranked by the sum of attribution values within each corresponding
| Model & Evaluation | ResNet-50 deg. 8x8 | ResNet-50 deg. 14x14 | VGG-16 deg. 8x8 | VGG-16 deg. 14x14 | ResNet bbox | VGG bbox |
| --- | --- | --- | --- | --- | --- | --- |
| Random | 0.000 | 0.000 | 0.000 | 0.000 | 0.167 | 0.167 |
| Occlusion-8x8 | 0.162 | 0.130 | 0.267 | 0.258 | 0.296 | 0.312 |
| Occlusion-14x14 | 0.228 | 0.231 | 0.402 | 0.404 | 0.341 | 0.358 |
| Gradient | 0.002 | 0.005 | 0.001 | 0.005 | 0.259 | 0.276 |
| Saliency | 0.287 | 0.305 | 0.326 | 0.362 | 0.363 | 0.393 |
| GuidedBP | 0.491 | 0.515 | 0.460 | 0.493 | 0.388 | 0.373 |
| PatternAttribution | - | - | 0.440 | 0.457 | - | 0.404 |
| LRP α=1, β=0 | - | - | 0.471 | 0.486 | - | 0.397 |
| LRP α=0, β=-1, ε=5 | - | - | 0.462 | 0.467 | - | 0.441 |
| Int. Grad. | 0.401 | 0.424 | 0.420 | 0.453 | 0.372 | 0.396 |
| SmoothGrad | 0.485 | 0.502 | 0.438 | 0.455 | 0.439 | 0.399 |
| Grad-CAM | 0.536 | 0.541 | 0.510 | 0.517 | 0.465 | 0.399 |
| GuidedGrad-CAM | 0.565 | 0.577 | 0.555 | 0.576 | 0.468 | 0.419 |
| IBA Per-Sample β=1/k | 0.573 | 0.573 | 0.581 | 0.583 | 0.606 | 0.566 |
| IBA Per-Sample β=10/k | 0.572 | 0.571 | 0.582 | 0.585 | 0.620 | 0.593 |
| IBA Per-Sample β=100/k | 0.534 | 0.535 | 0.542 | 0.545 | 0.574 | 0.568 |
| IBA Readout β=10/k | 0.536 | 0.536 | 0.490 | 0.536 | 0.484 | 0.437 |
Table 1: Degradation (deg.): Integral between LeRF and MoRF in the degradation benchmark for different models and window sizes over the ImageNet validation set. Bounding Box (bbox): the ratio of the highest-scored pixels within the bounding box. For the ResNet-50, we show no results for PatternAttribution and LRP, as our implementations do not support skip connections.

tile of the attribution. At each iteration, the highest-ranked tile is replaced with a constant value, the modified input is fed through the network, and the resulting drop in the target class score is measured. A steep descent of the score curve indicates a meaningful attribution map.

When implemented in the described way, the most relevant tiles are removed first (MoRF). However, Ancona et al. (2017) argues that using only the MoRF curve for evaluation is not sufficient. For the MoRF score, it is beneficial to find tiles that disrupt the output of the neural network as quickly as possible. Neural networks are sensitive to subtle changes in the input (Szegedy et al., 2013), so the tiles do not necessarily have to contain meaningful information to disrupt the network. Ancona et al. (2017) therefore proposes to invert the degradation direction, removing the tiles ranked as least relevant first (LeRF). The LeRF task favors methods that identify areas sufficient for classification. We scale the network's output probabilities to be in $[0, 1]$:

$$
s(x) = \frac{p(y|x) - b}{t_1 - b}, \tag{9}
$$

where $t_1$ is the model's average top-1 probability on the original samples and $b$ is the mean model output on the fully degraded images. Both averages are taken over the validation set. A score of 1 corresponds to the original model performance.

![](images/378efd2884c5d9499d1d6a8a4c76b0d4782977f8a0c926e79e5727ecfb1f0384.jpg)
Figure 7: Mean MoRF and LeRF curves for the Per-Sample Bottleneck. The area between the curves is the final degradation score.
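Assuming both curves are sampled at equally spaced degradation steps, the normalization of equation 9 and the integral between the LeRF and MoRF curves used as the final degradation score (cf. Table 1) can be sketched as follows; the function names are ours:

```python
import numpy as np

def normalized_score(p, t1, b):
    """Equation 9: rescale the model output p(y|x) so that 1 corresponds to the
    average top-1 probability t1 and 0 to the mean output b on fully degraded
    images."""
    return (p - b) / (t1 - b)

def degradation_score(morf, lerf):
    """Integral between the LeRF and MoRF curves, both given as normalized
    scores s(x) at equally spaced degradation steps (trapezoidal rule over the
    unit interval)."""
    diff = np.asarray(lerf, dtype=float) - np.asarray(morf, dtype=float)
    dx = 1.0 / (diff.size - 1)
    return float(np.sum((diff[1:] + diff[:-1]) * 0.5 * dx))
```

A method whose MoRF curve drops immediately while its LeRF curve stays at 1 approaches the maximal score of 1; identical curves give a score of 0.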
Both LeRF and MoRF degradation yield curves as visualized in Figure 7, measuring different qualities of the attribution method. To obtain a scalar measure of attribution performance and to combine both metrics, we propose to calculate the integral between the MoRF and LeRF curves.

The results for all implemented attribution methods on the degradation task are given in Table 1. We evaluated both models on the full validation set using 8x8 and 14x14 tiles. In Appendix H, we show the mean LeRF and MoRF curves for 14x14 tiles. The Per-Sample Bottleneck outperforms all other methods in the degradation benchmark, except for GuidedGrad-CAM on the ResNet-50, where it scores comparably (score difference of 0.004). The Readout Bottleneck generally achieves lower degradation scores but still performs competitively.

# 5 CONCLUSION

We propose two novel attribution methods that return an upper bound on the amount of information each input region provides for the network's decision. The core of our method is a bottleneck layer that injects noise into a given feature map and a mechanism that learns the amount of noise per feature. The Per-Sample Bottleneck is optimized per single data point, whereas the Readout Bottleneck is trained on the entire dataset.

Our method does not constrain the internal network structure. In contrast to several modified backpropagation methods, it supports any activation function and network architecture. To evaluate our method, we extended the degradation task to quantify how model performance deteriorates when removing the most relevant and the least relevant image tiles first. We also report how well ground-truth bounding boxes are scored. Both the Per-Sample and the Readout Bottleneck show competitive results on all metrics used, outperforming the state of the art by a significant margin on some of the tasks.

Generally, we would advise using the Per-Sample Bottleneck over the Readout Bottleneck.
It performs better and is more flexible, as it only requires estimating the mean and variance of the feature map. The Readout Bottleneck has the advantage of producing attribution maps with a single forward pass once trained. Images with multiple object instances provide the network with redundant class information, so the Per-Sample Bottleneck may discard some of the class evidence. Even for single object instances, the heatmaps of the Per-Sample Bottleneck may vary slightly due to the randomness of the optimization process.

The method's information-theoretic foundation provides a guarantee that the network does not require regions of zero-valued attribution for correct classification. To our knowledge, our attribution method is the only one to provide scores with a unit (bits). This absolute frame of reference allows a quantitative comparison between models, inputs, and input regions. We hope this contributes to a deeper understanding of neural networks and builds trust in using modern models in sensitive areas of application.

# ACKNOWLEDGEMENT

We want to thank our anonymous reviewers for their valuable suggestions. In particular, we thank reviewer 1 for encouraging us to include the sanity checks. We are indebted to B. Wild, A. Elimelech, M. Granz, and D. Dormagen for helpful discussions and feedback. For LRP, we used the open-source implementation by M. Böhle and F. Eitel (Böhle et al., 2019), and we thank A-K. Dombrowski for sharing an implementation and patterns for PatternAttribution with us. We thank Chanakya Ajit Ekbote for pointing out a mistake in equation 4. LS was supported by the Elsa-Neumann-Scholarship of the state of Berlin. We are also grateful to Nvidia for providing us with a Titan Xp and to ZEDAT for granting us access to their HPC system.

# REFERENCES

Alessandro Achille and Stefano Soatto. Information dropout: Learning optimal representations through noisy computation.
IEEE transactions on pattern analysis and machine intelligence, 40 (12):2897-2905, 2018. +Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. Sanity checks for saliency maps. In Advances in Neural Information Processing Systems, pp. 9505-9515, 2018. +Alexander A Alemi, Ian Fischer, Joshua V Dillon, and Kevin Murphy. Deep variational information bottleneck. In 5th International Conference on Learning Representations (ICLR 2017), 2017. +Marco Ancona, Enea Ceolini, Cengiz Öztireli, and Markus Gross. A unified view of gradient-based attribution methods for deep neural networks. In NIPS 2017-Workshop on Interpreting, Explaining and Visualizing Deep Learning. ETH Zurich, 2017. +Marco Ancona, Enea Ceolini, Cengiz Oztireli, and Markus Gross. Towards better understanding of gradient-based attribution methods for deep neural networks. In 6th International Conference on Learning Representations (ICLR 2018), 2018. +Sebastian Bach, Alexander Binder, Gregoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE, 10(7), 2015. +David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, and Klaus-Robert Müller. How to explain individual classification decisions. Journal of Machine Learning Research, 11(Jun):1803-1831, 2010. +Moritz Böhle, Fabian Eitel, Martin Weygandt, and Kerstin Ritter. Layer-wise relevance propagation for explaining deep neural network decisions in mri-based alzheimers disease classification. Frontiers in Aging Neuroscience, 11:194, 2019. ISSN 1663-4365. doi: 10.3389/fnagi.2019.00194. +Yarin Gal, Jiri Hron, and Alex Kendall. Concrete dropout. In Advances in Neural Information Processing Systems, pp. 3581-3590, 2017. +Samuel Greydanus, Anurag Koul, Jonathan Dodge, and Alan Fern. Visualizing and understanding atari agents. In International Conference on Machine Learning, pp. 
1787-1796, 2018.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, and Been Kim. Evaluating Feature Importance Estimates. arXiv e-prints, 2018.
Ashkan Khakzar, Soroosh Baselizadeh, Saurabh Khanduja, Seong Tae Kim, and Nassir Navab. Explaining neural networks via perturbing important learned features. arXiv preprint arXiv:1911.11081, 2019.
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, et al. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav). In International Conference on Machine Learning, pp. 2668-2677, 2018.
Pieter-Jan Kindermans, Kristof T. Schütt, Maximilian Alber, Klaus-Robert Müller, Dumitru Erhan, Been Kim, and Sven Dähne. Learning how to explain neural networks: Patternnet and pattern attribution. In International Conference on Learning Representations, 2018.
Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. arXiv e-prints, 2014.
Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.

Durk P Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameterization trick. In Advances in Neural Information Processing Systems, pp. 2575-2583, 2015.
Günter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. Self-normalizing neural networks. In Advances in neural information processing systems, pp. 971-980, 2017.
Matthias Kümmerer, Lucas Theis, and Matthias Bethge. Deep Gaze I: Boosting saliency prediction with feature maps trained on ImageNet. arXiv:1411.1045 [cs, q-bio, stat], 2014.
Jan Macdonald, Stephan Wäldchen, Sascha Hauch, and Gitta Kutyniok. A rate-distortion framework for explaining neural network decisions.
arXiv preprint arXiv:1905.11092, 2019. +Grégoire Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, and Klaus-Robert Müller. Explaining nonlinear classification decisions with deep taylor decomposition. Pattern Recognition, 65:211-222, 2017. +Weili Nie, Yang Zhang, and Ankit Patel. A theoretical explanation for perplexing behaviors of backpropagation-based visualizations. In International Conference on Machine Learning, pp. 3809-3818, 2018. +Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. In NIPS Autodiff Workshop, 2017. +Ben Poole, Sherjil Ozair, Aaron Van Den Oord, Alex Alemi, and George Tucker. On variational bounds of mutual information. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 5171-5180. PMLR, 2019. +Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. Why should i trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp. 1135-1144. ACM, 2016. +Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115 (3):211-252, 2015. +Wojciech Samek, Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, and Klaus-Robert Müller. Evaluating the visualization of what a deep neural network has learned. *IEEE transactions on neural networks and learning systems*, 28(11):2660-2673, 2016. +Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 
Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 618-626, 2017. +Claude Elwood Shannon. A mathematical theory of communication. Bell system technical journal, 27(3):379-423, 1948. +Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 3145-3153. JMLR.org, 2017. +Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. +Leon Sixt, Maximilian Granz, and Tim Landgraf. When Explanations Lie: Why Many Modified BP Attributions Fail. arXiv preprint arXiv:1912.09818, 2019. +Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viegas, and Martin Wattenberg. Smoothgrad: removing noise by adding noise. arXiv:1706.03825 [cs, stat], 2017. + +Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for Simplicity: The All Convolutional Net. arXiv e-prints, 2014. +Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958, 2014. +Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 3319-3328. JMLR.org, 2017. +Taiji Suzuki, Masashi Sugiyama, Jun Sese, and Takafumi Kanamori. Approximating mutual information by maximum likelihood density ratio estimation. In New challenges for feature selection in data mining and knowledge discovery, pp. 5-20, 2008. +Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. 
arXiv preprint arXiv:1312.6199, 2013. +Saeid Asgari Taghanaki, Mohammad Havaei, Tess Berthier, Francis Dutil, Lisa Di Jorio, Ghassan Hamarneh, and Yoshua Bengio. Infomask: Masked variational latent representation to localize chest disease. In Dinggang Shen, Tianming Liu, Terry M. Peters, Lawrence H. Staib, Caroline Essert, Sean Zhou, Pew-Thian Yap, and Ali Khan (eds.), Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, pp. 739–747, Cham, 2019. Springer International Publishing. ISBN 978-3-030-32226-7. +Naftali Tishby, Fernando C Pereira, and William Bialek. The information bottleneck method. arXiv preprint physics/0004057, 2000. +Tom Viering, Ziqi Wang, Marco Loog, and Elmar Eisemann. How to Manipulate CNNs to Make Them Lie: the GradCAM Case. arXiv:1907.10901 [cs], August 2019. URL http://arxiv.org/abs/1907.10901. +Zhou Wang, Alan C Bovik, Hamid R Sheikh, Eero P Simoncelli, et al. Image quality assessment: From error visibility to structural similarity. IEEE transactions on image processing, 13(4): 600-612, 2004. +Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European conference on computer vision, pp. 818-833. Springer, 2014. +Andrey Zhmoginov, Ian Fischer, and Mark Sandler. Information-bottleneck approach to self-attention. In ICML 2019 Workshop on Self-Supervised learning, 2019. + +# A VISUAL COMPARISON OF Attribution METHODS + +![](images/df3e4364b0f7768ecea2b90c549dfc6b93e114407116e33011d1342390e07276.jpg) +Figure 8: Blue indicates negative relevance and red positive. The authors promise that the samples were picked truly randomly, no cherry-picking, no lets-sample-again-does-not-look-nice-enough. 
# B GRID ARTIFACTS WHEN NOT USING SMOOTHING

![](images/30bddcb50cded57a2de630d732bd5302a598b2ef096e1d075825e8ba99d16e32.jpg)
(a) $T = 1$

![](images/e3b8c7069c1af0e198e5996a24b2cd86119fdbfb2a722ee92d02e5993b7e649b.jpg)
Figure 9: Development of $D_{\mathrm{KL}}(Q(Z|X)||Q(Z))$ of the Per-Sample Bottleneck for layer conv1_3 of the ResNet-50. Red indicates areas with maximal information flow; semi-transparent green marks zero information flow. Top row: without smoothing, the mask exhibits a grid structure. Bottom row: smoothing with $\sigma_s = 2$. The smoothing both prevents the artifacts and reduces overfitting to small areas.

![](images/82bae09770e0f6e4270453a6c293ac3562510619fbcbab28da3e99205346a0c3.jpg)
(b) $T = 5$

![](images/9ea7a4bd788ba8bf52522b57b47f8a8a006604ced95e7766f65334acbe7381bc.jpg)

![](images/46eb6c4960c89c5ec09b340f3441dc777d9606341e24d756629282cd3ac8f261.jpg)
(c) $T = 10$

![](images/52e145feb553284c06f7a27b36d4ff397e3c358b4fe8ee6921b26c0bf28851a9.jpg)

![](images/e97ffb6b4727b4de13c3a7ea216d63a09f080515fa30782d45f9a9683cc266ea.jpg)
(d) $T = 100$

![](images/e1fc4fb6d45e9bbc16b5a19ffd0f8f321beccdf6761f4cb56fd24b3befedfcd9.jpg)

# C EFFECTS OF DIFFERENT $\beta$ AND LAYER DEPTH FOR THE VGG-16

![](images/0e14bcd4d97ff48b4abb3c31ea0536b2ba3d262859f38aa785adf98da43be98a.jpg)
conv4_1
$\beta = 0.1 / k$

![](images/5da39fc50a8569c9f6ac615de76afdf33552eb08f549a46ba7c7d40625c16591.jpg)
$\beta = 1 / k$

![](images/000ac4c8ccecf86d95f91610554d4462d971db6640e582b308422e7e0e5b60d7.jpg)
$\beta = 10 / k$

![](images/105b4bd879d6a396f8c5e4b84ea4caa096973c97e3d859872e05a95c00f9df77.jpg)
$\beta = 100 / k$

![](images/0c4f79e2a85731202c5f57bf928bfb91bc8f2f45cf581a29d6789a525ae08b95.jpg)
$\beta = 1000 / k$

![](images/26c2e8af15b6574b0d18aca6b6ddcaa05d3216cdbe7cd8f0e5c65d1db5c4b36b.jpg)
$p = 0.99$
$\beta = 10$
conv1_1
Figure 10: Effect of varying layer depth and $\beta$ on the Per-Sample Bottleneck for the VGG-16.
The resulting output probability for the correct class is given as $p$.

![](images/218d928d87a9ef53d4203f99bd549056e861e0317988e35d1fa3282eec2bc5eb.jpg)
$p = 0.95$, conv2_1

![](images/e48ddc59ed48763a5b9ab9e394cff991095c7acaa90c75833fe9f4d2ca576417.jpg)
$p = 0.80$, conv4_1

![](images/596263c6ada0a88c162f52f09df28e68e7a1d9b17e06a0332b637a561ee14fb0.jpg)
$p = 0.55$, conv5_1

![](images/7a8b70734d8cd592e0569b8ab106fa5922d25ab0c737101640bf5a1044f51879.jpg)
$p = 0.00$, conv5_3

# D DERIVATION OF THE UPPER-BOUND OF MUTUAL INFORMATION

For the mutual information $\mathbf{I}[R,Z]$, we have:

$$
\begin{aligned}
\mathbf{I}[R, Z] &= \mathbb{E}_{R}\left[ D_{\mathrm{KL}}[P(Z|R)\,||\,P(Z)] \right] && (10) \\
&= \int_{R} p(r) \left( \int_{Z} p(z|r) \log \frac{p(z|r)}{p(z)} \, dz \right) dr && (11) \\
&= \int_{R} \int_{Z} p(r,z) \log \frac{p(z|r)}{p(z)} \frac{q(z)}{q(z)} \, dz \, dr && (12) \\
&= \int_{R} \int_{Z} p(r,z) \log \frac{p(z|r)}{q(z)} \, dz \, dr + \int_{R} \int_{Z} p(r,z) \log \frac{q(z)}{p(z)} \, dz \, dr && (13) \\
&= \int_{R} \int_{Z} p(r,z) \log \frac{p(z|r)}{q(z)} \, dz \, dr + \int_{Z} p(z) \left( \int_{R} p(r|z) \, dr \right) \log \frac{q(z)}{p(z)} \, dz && (14) \\
&= \mathbb{E}_{R}\left[ D_{\mathrm{KL}}[P(Z|R)\,||\,Q(Z)] \right] - D_{\mathrm{KL}}[P(Z)\,||\,Q(Z)] && (15) \\
&\leq \mathbb{E}_{R}\left[ D_{\mathrm{KL}}[P(Z|R)\,||\,Q(Z)] \right] && (16)
\end{aligned}
$$

The final inequality holds because the KL divergence is non-negative, i.e. $D_{\mathrm{KL}}[P(Z)\,||\,Q(Z)] \geq 0$.

# E MEAN AND VARIANCE OF Z

The mask $\lambda(X)$ linearly interpolates between the feature map $R$ and the noise $\epsilon \sim \mathcal{N}(\mu_R, \sigma_R^2)$, where $\mu_R$ and $\sigma_R$ are the estimated mean and standard deviation of $R$. Both $R = f_l(X)$ and $\lambda(X)$ depend on the input random variable $X$.
$$
Z = \lambda(X) R + (1 - \lambda(X))\, \epsilon \tag{17}
$$

For the mean of $Z$, we have:

$$
\begin{aligned}
\mathbb{E}[Z] &= \mathbb{E}[\lambda(X) R] + \mathbb{E}[(1 - \lambda(X))\, \epsilon] && \triangleright \text{substituting Eq. (17)} \\
&= \mathbb{E}[\lambda(X) R] + \mathbb{E}[1 - \lambda(X)]\, \mathbb{E}[\epsilon] && \triangleright \text{independence of } \lambda \text{ and } \epsilon \\
&= \operatorname{cov}(\lambda(X), R) + \mathbb{E}[\lambda(X)]\, \mathbb{E}[R] + \mathbb{E}[1 - \lambda(X)]\, \mathbb{E}[\epsilon] && \triangleright \operatorname{cov}(A, B) = \mathbb{E}[AB] - \mathbb{E}[A]\mathbb{E}[B] \\
&\approx \operatorname{cov}(\lambda(X), R) + \mathbb{E}[\epsilon] && \triangleright \mathbb{E}[\epsilon] = \mu_R \approx \mathbb{E}[R]
\end{aligned}
$$

As $\lambda(X)$ and $R$ are multiplied together, they form a complex product distribution. If they do not correlate, $\mathbb{E}[Z] \approx \mathbb{E}[\epsilon] \approx \mathbb{E}[R]$.

A similar problem arises for the variance:

$$
\begin{aligned}
\operatorname{Var}[Z] &= \mathbb{E}[Z^2] - \mathbb{E}[Z]^2 \\
&= \mathbb{E}\left[ \left( \lambda(X) R + (1 - \lambda(X))\, \epsilon \right)^2 \right] - \left( \operatorname{cov}(\lambda(X), R) + \mathbb{E}[\epsilon] \right)^2
\end{aligned}
$$

In general, the multiplication of $\lambda(X)$ and $R$ causes the variances of $Z$ and $R$ to differ: $\operatorname{Var}[Z] \neq \operatorname{Var}[R]$.

# F HYPERPARAMETERS
| Parameter | ResNet-50 | VGG-16 | Search space |
| --- | --- | --- | --- |
| Target layer | conv3_4 | conv4_1 | |
| Optimizer | Adam (Kingma & Ba (2014)) | Adam (Kingma & Ba (2014)) | |
| Learning Rate | $\eta = 1$ | $\eta = 1$ | {0.03, 0.1, 0.3, 1, 3, 10} |
| Balance Factor | $\beta = 10/k$ | $\beta = 10/k$ | {0.001, 0.01, 0.1, 1, 10, 100, 300} |
| Iterations | $T = 10$ | $T = 10$ | {1, 3, 5, 10, 30, 100} |
| Batch Size | $B = 10$ | $B = 10$ | {1, 5, 10, 30} |
| Smoothing | $\sigma_s = 1$ | $\sigma_s = 1$ | {0.5, 1, 2} |

Table 2: Hyperparameters for the Per-Sample Bottleneck. The layer notations for the ResNet-50 are taken from the original publication (He et al., 2016). The first index denotes the block and the second the layer within the block. For the VGG-16, conv_n denotes the n-th convolutional layer.
| | Orig. | Initial | $\beta = 0.01/k$ | $0.1/k$ | $1/k$ | $10/k$ | $100/k$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Per-Sample $\mathcal{L}_I/k$ | - | 2.500 | 3.174 | 0.525 | 0.248 | 0.098 | 0.019 |
| Per-Sample Top-5 Acc. | 0.928 | 0.928 | 1.000 | 0.971 | 1.000 | 0.990 | 0.522 |
| Per-Sample Top-1 Acc. | 0.760 | 0.760 | 1.000 | 0.963 | 1.000 | 0.984 | 0.427 |
| Readout $\mathcal{L}_I/k$ | - | 2.500 | 1.822 | 0.628 | 0.222 | 0.079 | 0.023 |
| Readout Top-5 Acc. | 0.928 | 0.928 | 0.930 | 0.928 | 0.917 | 0.870 | 0.505 |
| Readout Top-1 Acc. | 0.760 | 0.760 | 0.761 | 0.756 | 0.735 | 0.660 | 0.302 |

Table 4: Influence of $\beta$ on the information loss $\mathcal{L}_I$ and the test accuracy on ResNet-50. $k$ is the size of the feature map, i.e. $k = hwc$. Initial: configuration of the untrained bottleneck with $\alpha = 5$. Orig.: values for the original model without the bottleneck.
| Parameter | ResNet-50 | VGG-16 | Search space |
| --- | --- | --- | --- |
| Target layer | conv2_3 | conv3_1 | |
| Reading out | conv2_3, conv3_4, conv4_6, conv5_3, fc | conv3_1, conv4_1, conv5_1, conv5_3, fc | |
| Optimizer | Adam (Kingma & Ba (2014)) | Adam (Kingma & Ba (2014)) | |
| Learning Rate | $\eta = 10^{-5}$ | $\eta = 10^{-5}$ | {$10^{-4}$, $10^{-5}$, $10^{-6}$} |
| Balance Factor | $\beta = 10/k$ | $\beta = 10/k$ | {0.1/k, 1/k, 10/k, 100/k} |
| Epochs | $E = 10$ | $E = 10$ | |
| Batch Size | $B = 16$ | $B = 16$ | |
| Smoothing | $\sigma_s = 1$ | $\sigma_s = 1$ | |
Table 3: Hyperparameters for the Readout Bottleneck.

In Table 2, we provide the hyperparameters for the Per-Sample Bottleneck. In Table 3, we provide the hyperparameters for the Readout Bottleneck.

In Table 4, a comparison of pre- to post-training accuracy and the estimated mutual information is shown for both bottleneck types. The Per-Sample Bottleneck can learn to suppress negative evidence for each sample, and the accuracy is close to 1 for $\beta \leq 10/k$. For $\beta = 100/k$, positive evidence is also discarded and the accuracy decreases to 0.43. The Readout Bottleneck learns to suppress negative evidence for small $\beta \leq 0.1/k$ and slightly improves the final accuracy.

# G EVALUATION METRICS

# G.1 SANITY CHECK: WEIGHT RANDOMIZATION

We use the cascading parameter randomization sanity check from Adebayo et al. (2018). Following the original paper, we used the skimage SSIM implementation with a window of size 5. For LRP, we found that the weight randomization flips the values of the saliency heatmap, i.e. $h_r \approx -h_o$, where $h_r$ is the heatmap with random weights and $h_o$ the heatmap on the original model. Therefore, for LRP, we used: $\max(\text{ssim}(h_o, \text{normalize}(h_r)), \text{ssim}(h_o, \text{normalize}(-h_r)))$. We normalized the heatmaps by first clamping them at the 1st and 99th percentiles and then rescaling to $[0, 1]$. We ran the sanity check on 200 randomly selected samples from the ImageNet validation set.
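The percentile-clamp normalization described above can be sketched as follows. This is a minimal pure-Python illustration; the function names `percentile` and `normalize` are ours, and the actual evaluation uses skimage's SSIM on array-valued heatmaps, which is not reproduced here.

```python
def percentile(values, q):
    """Nearest-rank q-th percentile of a list of floats (q in [0, 100])."""
    s = sorted(values)
    idx = min(len(s) - 1, max(0, round(q / 100 * (len(s) - 1))))
    return s[idx]

def normalize(heatmap):
    """Clamp values to the 1st/99th percentiles, then rescale to [0, 1]."""
    lo, hi = percentile(heatmap, 1), percentile(heatmap, 99)
    clamped = [min(max(v, lo), hi) for v in heatmap]
    span = (hi - lo) or 1.0  # guard against division by zero for flat maps
    return [(v - lo) / span for v in clamped]

# For LRP, both signs of the randomized heatmap h_r are compared:
# score = max(ssim(h_o, normalize(h_r)), ssim(h_o, normalize(-h_r)))
```

Clamping before rescaling makes the comparison robust to a few extreme relevance values that would otherwise dominate the $[0, 1]$ range.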
# H MORF AND LERF DEGRADATION PATHS

Figure 11: MoRF and LeRF for the ResNet-50 network using 14x14 tiles. Panels: (a) Random, (b) Occlusion-8x8, (c) Occlusion-14x14, (d) Gradient, (e) Saliency, (f) Guided BP, (g) Int. Grad., (h) Smoothgrad, (k) Per-Sample $\beta = 1/k$, (l) Per-Sample $\beta = 10/k$, (m) Per-Sample $\beta = 100/k$, (n) Readout $\beta = 10/k$.

Figure 12: MoRF and LeRF paths for the VGG-16 network using 14x14 tiles. Panels: (a) Random, (b) Occlusion-8x8, (c) Occlusion-14x14, (d) Gradient, (e) Saliency, (f) Guided BP, (g) PatternAttr., (h) LRP, (i) Int. Grad., (k) Grad-CAM, (l) Guided Grad-CAM, (m) Per-Sample $\beta = 1/k$, (n) Per-Sample $\beta = 10/k$, (o) Per-Sample $\beta = 100/k$, (p) Readout $\beta = 10/k$.
# RETHINKING SOFTMAX CROSS-ENTROPY LOSS FOR ADVERSARIAL ROBUSTNESS

Tianyu Pang, Kun Xu, Yinpeng Dong, Chao Du, Ning Chen, Jun Zhu*

Dept. of Comp. Sci. & Tech., BNRist Center, Institute for AI, Tsinghua University; RealAI

{pty17,xu-k16,dyp17,du-c14}@mails.tsinghua.edu.cn, {ningchen,dcszj}@tsinghua.edu.cn

# ABSTRACT

Previous work shows that adversarially robust generalization requires larger sample complexity, and the same dataset, e.g., CIFAR-10, which enables good standard accuracy, may not suffice to train robust models.
Since collecting new training data could be costly, we focus on better utilizing the given data by inducing regions with high sample density in the feature space, which could lead to locally sufficient samples for robust learning. We first formally show that the softmax cross-entropy (SCE) loss and its variants convey inappropriate supervisory signals, which encourage the learned feature points to spread over the space sparsely during training. This inspires us to propose the Max-Mahalanobis center (MMC) loss to explicitly induce dense feature regions in order to benefit robustness. Namely, the MMC loss encourages the model to concentrate on learning ordered and compact representations, which gather around the preset optimal centers for different classes. We empirically demonstrate that applying the MMC loss can significantly improve robustness even under strong adaptive attacks, while keeping high accuracy on clean inputs comparable to the SCE loss with little extra computation.

# 1 INTRODUCTION

The deep neural networks (DNNs) trained by the softmax cross-entropy (SCE) loss have achieved state-of-the-art performance on various tasks (Goodfellow et al., 2016). However, in terms of robustness, the SCE loss is not sufficient to lead to satisfactory performance of the trained models. It has been widely recognized that DNNs trained by the SCE loss are vulnerable to adversarial attacks (Carlini & Wagner, 2017a; Goodfellow et al., 2015; Kurakin et al., 2017; Moosavi-Dezfooli et al., 2016; Papernot et al., 2016), where human-imperceptible perturbations can be crafted to fool a high-performance network. To improve the adversarial robustness of classifiers, various kinds of defenses have been proposed, but many of them are quickly shown to be ineffective against adaptive attacks, which are adapted to the specific details of the proposed defenses (Athalye et al., 2018).
Besides, methods for verifying and for training provably robust networks have been proposed (Dvijotham et al., 2018a;b; Hein & Andriushchenko, 2017; Wong & Kolter, 2018). While these methods are exciting, the verification process is often slow and not scalable. Among the previously proposed defenses, the adversarial training (AT) methods achieve state-of-the-art robustness under different adversarial settings (Madry et al., 2018; Zhang et al., 2019b). These methods either directly impose the AT mechanism on the SCE loss or add additional regularizers. Although the AT methods are relatively strong, they can sacrifice accuracy on clean inputs and are computationally expensive (Xie et al., 2019). Due to this computational obstruction, many recent efforts have been devoted to proposing faster verification methods (Wong et al., 2018; Xiao et al., 2019) and accelerating AT procedures (Shafahi et al., 2019; Zhang et al., 2019a). However, the problem still remains.

Schmidt et al. (2018) show that the sample complexity of robust learning can be significantly larger than that of standard learning. Given the difficulty of training robust classifiers in practice, they further postulate that the difficulty could stem from the insufficiency of training samples in the commonly used datasets, e.g., CIFAR-10 (Krizhevsky & Hinton, 2009). Recent work intends to solve this problem by utilizing extra unlabeled data (Carmon et al., 2019; Stanforth et al., 2019),
+ +Similar to our attempt to induce high-density regions in the feature space, previous work has been proposed to improve intra-class compactness. Contrastive loss (Sun et al., 2014) and triplet loss (Schroff et al., 2015) are two classical objectives for this purpose, but the training iterations will dramatically grow to construct image pairs or triplets, which results in slow convergence and instability. The center loss (Wen et al., 2016) avoids the pair-wise or triplet-wise computation by minimizing the squared distance + +![](images/b479cc0d9fa7bbac47c68691e535053b9613addeb225317e95b3f8d438a3fa1b.jpg) +Figure 1: Intuitive illusion of how training data moves and how sample density varies in a two-dimensional feature space during the training procedure. + +between the features and the corresponding class centers. However, since the class centers are updated w.r.t. the learned features during training, the center loss has to be jointly used with the SCE loss to seek for a trade-off between inter-class dispersion and intra-class compactness. Therefore, the center loss cannot concentrate on inducing strong intra-class compactness to construct high-density regions and consequently could not lead to reliable robustness, as shown in our experiments. + +In this paper, we first formally analyze the sample density distribution induced by the SCE loss and its other variants (Pang et al., 2018; Wan et al., 2018) in Sec. 3.2, which demonstrates that these previously proposed objectives convey unexpected supervisory signals on the training points, which make the learned features tend to spread over the space sparsely. This undesirable behavior mainly roots from applying the softmax function in training, which makes the loss function only depend on the relative relation among logits and cannot directly supervise on the learned representations. 
We further propose a novel training objective which can explicitly induce high-density regions in the feature space and learn more structured representations. To achieve this, we propose the Max-Mahalanobis center (MMC) loss (detailed in Eq. (8)) as a substitute for the SCE loss. Specifically, in the MMC loss, we first preset untrainable class centers with optimal inter-class dispersion in the feature space according to Pang et al. (2018); then we encourage the features to gather around the centers by minimizing the squared distance, similar to the center loss. The MMC loss can explicitly control the inter-class dispersion via a single hyperparameter, and can thus concentrate on improving intra-class compactness during training to induce high-density regions, as intuitively shown in Fig. 1. Behind the simple formula, the MMC loss elegantly combines the favorable merits of the previous methods, which leads to a considerable improvement in adversarial robustness.

In experiments, we follow the suggestion of Carlini et al. (2019) to test under different threat models and attacks, including the adaptive attacks (Athalye et al., 2018), on MNIST, CIFAR-10, and CIFAR-100 (Krizhevsky & Hinton, 2009; LeCun et al., 1998). The results demonstrate that our method leads to reliable robustness of the trained models with little extra computation, while maintaining high clean accuracy with faster convergence rates compared to the SCE loss and its variants. When combined with existing defense mechanisms, e.g., the AT methods (Madry et al., 2018), the trained models can be further enhanced under attacks different from the one used to craft adversarial examples for training.

# 2 PRELIMINARIES

This section first provides the notations, then introduces the adversarial attacks and threat models.

# 2.1 NOTATIONS

In this paper, we use lowercases to denote variables and uppercases to denote mappings.
Let $L$ be the number of classes; we define the softmax function $\mathrm{softmax}(h): \mathbb{R}^L \to \mathbb{R}^L$ as $\mathrm{softmax}(h)_i = \frac{\exp(h_i)}{\sum_{l=1}^{L} \exp(h_l)}, i \in [L]$, where $[L] := \{1, \dots, L\}$ and $h$ is termed the logit.

A deep neural network (DNN) learns a non-linear mapping from the input $x \in \mathbb{R}^p$ to the feature $z = Z(x) \in \mathbb{R}^d$. One common training objective for DNNs is the softmax cross-entropy (SCE) loss:

$$
\mathcal{L}_{\mathrm{SCE}}(Z(x), y) = -1_{y}^{\top} \log[\mathrm{softmax}(Wz + b)], \tag{1}
$$

for a single input-label pair $(x,y)$, where $1_{y}$ is the one-hot encoding of $y$ and the logarithm is applied element-wise. Here $W$ and $b$ are the weight matrix and bias vector of the SCE loss, respectively.

# 2.2 ADVERSARIAL ATTACKS AND THREAT MODELS

Previous work has shown that adversarial examples can be easily crafted to fool DNNs (Biggio et al., 2013; Nguyen et al., 2015; Szegedy et al., 2014). A large number of attacking methods for generating adversarial examples have been introduced in recent years (Carlini & Wagner, 2017a; Chen et al., 2017; Dong et al., 2018; Goodfellow et al., 2015; Ilyas et al., 2018; Kurakin et al., 2017; Madry et al., 2018; Moosavi-Dezfooli et al., 2016; Papernot et al., 2016; Uesato et al., 2018). Given the space limit, we try to perform a comprehensive evaluation by considering five different threat models and choosing representative attacks for each threat model following Carlini et al. (2019):

White-box $l_{\infty}$ distortion attack: We apply the projected gradient descent (PGD) (Madry et al., 2018) method, which is efficient and widely studied in previous work (Pang et al., 2019).

White-box $l_{2}$ distortion attack: We apply the C&W (Carlini & Wagner, 2017a) method, which has a binary search mechanism on its parameters to find the minimal $l_{2}$ distortion for a successful attack.
Black-box transfer-based attack: We use the momentum iterative method (MIM) (Dong et al., 2018), which is effective at boosting adversarial transferability (Kurakin et al., 2018).

Black-box gradient-free attack: We choose SPSA (Uesato et al., 2018) since it has broken many previously proposed defenses. It can perform well even when the loss is difficult to optimize.

General-purpose attack: We also evaluate the general robustness of models when adding Gaussian noise (Gilmer et al., 2019) or random rotations (Engstrom et al., 2019) to the input images.

Furthermore, to exclude false robustness caused by, e.g., gradient masking (Athalye et al., 2018), we modify the above attacking methods to be adaptive attacks (Carlini & Wagner, 2017b; Carlini et al., 2019; Herley & Van Oorschot, 2017) when evaluating the robustness of our method. The adaptive attacks are much more powerful than the non-adaptive ones, as detailed in Sec. 4.2.

# 3 METHODOLOGY

Various theoretical explanations have been developed for adversarial examples (Fawzi et al., 2016; Ilyas et al., 2019; Papernot et al., 2018). In particular, Schmidt et al. (2018) show that training robust classifiers requires significantly larger sample complexity than training standard ones, and they further postulate that the difficulty of training robust classifiers stems, at least partly, from the insufficiency of training samples in the common datasets. Recent efforts propose alternatives to benefit training with extra unlabeled data (Carmon et al., 2019; Stanforth et al., 2019), while we explore the complementary way to better use the labeled training samples for robust learning.

Although a given sample is fixed in the input space, we can instead manipulate the local sample distribution, i.e., the sample density in the feature space, via designing appropriate training objectives.
Intuitively, by inducing high-density regions in the feature space, it can be expected to have locally sufficient samples to train robust models that are able to return reliable predictions. In this section, we first formally define the notion of sample density in the feature space. Then we provide theoretical analyses of the sample density induced by the SCE loss and its variants. Finally, we propose our new Max-Mahalanobis center (MMC) loss and demonstrate its superiority compared to previous losses.

# 3.1 SAMPLE DENSITY IN THE FEATURE SPACE

Given a training dataset $\mathcal{D}$ with $N$ input-label pairs, and the feature mapping $Z$ trained by the objective $\mathcal{L}(Z(x),y)$ on this dataset, we define the sample density near the feature point $z = Z(x)$, following the similar definition in physics (Jackson, 1999), as

$$
\mathbb{SD}(z) = \frac{\Delta N}{\mathrm{Vol}(\Delta B)}. \tag{2}
$$

Here $\mathrm{Vol}(\cdot)$ denotes the volume of the input set, $\Delta B$ is a small neighbourhood containing the feature point $z$, and $\Delta N = |Z(\mathcal{D}) \cap \Delta B|$ is the number of training points in $\Delta B$, where $Z(\mathcal{D})$ is the set of all mapped features for the inputs in $\mathcal{D}$. Note that the mapped feature $z$ is still of the label $y$.

In the training procedure, the feature distribution is directly induced by the training loss $\mathcal{L}$, where minimizing the loss value is the only supervisory signal for the feature points to move (Goodfellow et al., 2016). This means that the sample density varies mainly along the direction orthogonal to the loss contours, while the density along a certain contour can be approximately considered the same. For example, in the right panel of Fig. 1, the sample density induced by our MMC loss (detailed in Sec.
3.3) changes mainly along the radial direction, i.e., the directions of the red arrows, where the loss contours are the dashed concentric circles. Therefore, supposing $\mathcal{L}(z,y) = C$, we choose $\Delta B = \{\mathbf{z} \in \mathbb{R}^d \,|\, \mathcal{L}(\mathbf{z},y) \in [C, C + \Delta C]\}$, where $\Delta C > 0$ is a small value. Then $\mathrm{Vol}(\Delta B)$ is the volume between the loss contours of $C$ and $C + \Delta C$ for label $y$ in the feature space.

# 3.2 THE SAMPLE DENSITY INDUCED BY THE GENERALIZED SCE LOSS

Generalized SCE loss. To better understand how the SCE loss and its variants (Pang et al., 2018; Wan et al., 2018) affect the sample density of features, we first generalize the definition in Eq. (1) as:

$$
\mathcal{L}_{\mathrm{g\text{-}SCE}}(Z(x), y) = -1_{y}^{\top} \log[\mathrm{softmax}(h)], \tag{3}
$$

where the logit $h = H(z) \in \mathbb{R}^L$ is a general transformation of the feature $z$; for example, $h = Wz + b$ in the SCE loss. We call this family of losses the generalized SCE (g-SCE) loss. Wan et al. (2018) propose the large-margin Gaussian Mixture (L-GM) loss, where $h_i = -(z - \mu_i)^\top \Sigma_i (z - \mu_i) - m\delta_{i,y}$, under the assumption that the learned features $z$ distribute as a mixture of Gaussians. Here $\mu_i$ and $\Sigma_i$ are extra trainable means and covariance matrices, respectively, $m$ is the margin, and $\delta_{i,y}$ is the indicator function. Pang et al. (2018) propose the Max-Mahalanobis linear discriminant analysis (MMLDA) loss, where $h_i = -\|z - \mu_i^*\|_2^2$, under a similar mixture-of-Gaussians assumption, with the main difference that the $\mu_i^*$ are not trainable but calculated before training with optimal inter-class dispersion.
These two losses both fall into the family of the g-SCE loss with quadratic logits:

$$
h_{i} = -(z - \mu_{i})^{\top} \Sigma_{i} (z - \mu_{i}) + B_{i}, \tag{4}
$$

where the $B_{i}$ are bias variables. Besides, note that for the SCE loss, there is

$$
\mathrm{softmax}(Wz + b)_i = \frac{\exp(W_i^{\top} z + b_i)}{\sum_{l \in [L]} \exp(W_l^{\top} z + b_l)} = \frac{\exp(-\|z - \frac{1}{2} W_i\|_2^2 + b_i + \frac{1}{4}\|W_i\|_2^2)}{\sum_{l \in [L]} \exp(-\|z - \frac{1}{2} W_l\|_2^2 + b_l + \frac{1}{4}\|W_l\|_2^2)}.
$$

According to Eq. (4), the SCE loss can also be regarded as a special case of the g-SCE loss with quadratic logits, where $\mu_{i} = \frac{1}{2} W_{i}$, $B_{i} = b_{i} + \frac{1}{4}\|W_{i}\|_{2}^{2}$, and the $\Sigma_{i} = I$ are identity matrices. Therefore, when we later refer to the g-SCE loss, we assume by default that the logits are quadratic as in Eq. (4).

The contours of the g-SCE loss. To provide a formal representation of the sample density induced by the g-SCE loss, we first derive the formula of the contours, i.e., the closed-form solution of $\mathcal{L}_{\mathrm{g\text{-}SCE}}(Z(x),y) = C$ in the space of $z$, where $C \in (0, +\infty)$ is a given constant. Let $C_e = \exp(C) \in (1, +\infty)$; from Eq. (3), we can represent the contours as the solution of

$$
\log\left(1 + \frac{\sum_{l \neq y} \exp(h_l)}{\exp(h_y)}\right) = C \;\Rightarrow\; h_y = \log\left[\sum_{l \neq y} \exp(h_l)\right] - \log(C_e - 1). \tag{5}
$$

Eq. (5) does not provide an intuitive closed-form solution for the contours, owing to the term $\log\left[\sum_{l \neq y} \exp(h_l)\right]$.
However, note that this term belongs to the family of Log-Sum-Exp (LSE) functions, which are smooth approximations to the maximum function (Nesterov, 2005; Nielsen & Sun, 2016). Therefore, we can locally approximate the function in Eq. (5) with

$$
h_{y} - h_{\tilde{y}} = -\log(C_{e} - 1), \tag{6}
$$

![](images/f257832d01e94ccfa300017b6d18bd289a43d268d88591e0ba29a0975d5a1afe.jpg)
(a) g-SCE

![](images/f329e1567da69aa02373701d502c3c59829adb2bf9412f6e38ed6f7cbe77ff1e.jpg)
(b) MMC

Figure 2: Intuitive illustration of the inherent limitations of the g-SCE loss. Reasonably learned features for a classification task should distribute in clusters, so it is counter-intuitive that the feature points tend to move to infinity to pursue lower loss values when applying the g-SCE loss. In contrast, MMC induces models to learn more structured and orderly features.

where $\tilde{y} = \arg\max_{l \neq y} h_l$. In the following text, we apply colored characters with a tilde, like $\tilde{y}$, to better visually distinguish them. According to Eq. (6), we can define $\mathcal{L}_{y,\tilde{y}}(z) = \log[\exp(h_{\tilde{y}} - h_y) + 1]$ as the local approximation of the g-SCE loss near the feature point $z$, and substitute the neighborhood $\Delta B$ by $\Delta B_{y,\tilde{y}} = \{\mathbf{z} \in \mathbb{R}^d \,|\, \mathcal{L}_{y,\tilde{y}}(\mathbf{z}) \in [C, C + \Delta C]\}$. For simplicity, we assume scaled identity covariance matrices in Eq. (4), i.e., $\Sigma_i = \sigma_i I$, where the $\sigma_{i} > 0$ are scalars. Through simple derivations (detailed in Appendix A.1), we show that if $\sigma_y \neq \sigma_{\tilde{y}}$, the solution of $\mathcal{L}_{y,\tilde{y}}(z) = C$ is a $(d-1)$-dimensional hypersphere with center $\mathbf{M}_{y,\tilde{y}} = (\sigma_y - \sigma_{\tilde{y}})^{-1}(\sigma_y \mu_y - \sigma_{\tilde{y}} \mu_{\tilde{y}})$; otherwise, if $\sigma_y = \sigma_{\tilde{y}}$, the hypersphere-shaped contour degenerates to a hyperplane.
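The quality of the local approximation in Eq. (6) rests on the LSE function being a tight smooth upper bound of the maximum: for $n$ competing logits, $\max_l h_l \leq \log \sum_l \exp(h_l) \leq \max_l h_l + \log n$. A small numeric check (the example logits are arbitrary):

```python
import math

def lse(values):
    """Log-Sum-Exp, a smooth approximation to max(values)."""
    m = max(values)
    return m + math.log(sum(math.exp(v - m) for v in values))

# Competing logits h_l for l != y; the largest one dominates the LSE,
# which is what justifies replacing the sum in Eq. (5) by max as in Eq. (6).
h_others = [3.0, -1.0, 0.2, -2.5]

approx_gap = lse(h_others) - max(h_others)
print(approx_gap)  # small and bounded by log(len(h_others))
```

The gap shrinks further as the leading logit separates from the rest, which is the regime near a confident prediction.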
The induced sample density. Since the approximation in Eq. (6) depends on the specific $y$ and $\tilde{y}$, we define the training subset $\mathcal{D}_{k,\tilde{k}} = \{(x,y)\in \mathcal{D}\,|\,y = k, \tilde{y} = \tilde{k}\}$ and $N_{k,\tilde{k}} = |\mathcal{D}_{k,\tilde{k}}|$. Intuitively, $\mathcal{D}_{k,\tilde{k}}$ includes the data whose true label is class $k$, while the classifier's highest prediction among the other classes is class $\tilde{k}$. Then we can derive the approximated sample density in the feature space induced by the g-SCE loss, as stated in the following theorem:

Theorem 1. (Proof in Appendix A.1) Given $(x,y)\in \mathcal{D}_{k,\tilde{k}}$, $z = Z(x)$ and $\mathcal{L}_{\text{g-SCE}}(z,y) = C$, if there are $\Sigma_{k} = \sigma_{k}I$, $\Sigma_{\tilde{k}} = \sigma_{\tilde{k}}I$, and $\sigma_{k}\neq \sigma_{\tilde{k}}$, then the sample density near the feature point $z$ based on the approximation in Eq. (6) is

$$
\mathbb{SD}(z) \propto \frac{N_{k,\tilde{k}} \cdot p_{k,\tilde{k}}(C)}{\left[\mathbf{B}_{k,\tilde{k}} + \frac{\log(C_{e} - 1)}{\sigma_{k} - \sigma_{\tilde{k}}}\right]^{\frac{d-1}{2}}}, \quad \text{and} \quad \mathbf{B}_{k,\tilde{k}} = \frac{\sigma_{k}\sigma_{\tilde{k}}\|\mu_{k} - \mu_{\tilde{k}}\|_{2}^{2}}{\left(\sigma_{k} - \sigma_{\tilde{k}}\right)^{2}} + \frac{B_{k} - B_{\tilde{k}}}{\sigma_{k} - \sigma_{\tilde{k}}}, \tag{7}
$$

where for the input-label pairs in $\mathcal{D}_{k,\tilde{k}}$, there is $\mathcal{L}_{\text{g-SCE}} \sim p_{k,\tilde{k}}(c)$.

Limitations of the g-SCE loss. Based on Theorem 1 and the approximation in Eq. (6), let $C^* = \log(1 + \exp(\mathbf{B}_{k,\tilde{k}}(\sigma_{\tilde{k}} - \sigma_k)))$ and $C_e^* = \exp(C^*)$, such that $\mathbf{B}_{k,\tilde{k}} + \frac{\log(C_e^* - 1)}{\sigma_k - \sigma_{\tilde{k}}} = 0$.
According to Appendix A.1, if $\sigma_k > \sigma_{\tilde{k}}$, then $C^*$ acts as a tight lower bound for $C$, i.e., the solution set of $C < C^*$ is empty. The training procedure will thus tend to avoid this case, since the loss $C$ cannot be minimized to zero, which introduces unnecessary biases into the returned predictions. On the other hand, if $\sigma_k < \sigma_{\tilde{k}}$, $C$ can be minimized to zero. However, when $C \to 0$, the sample density also tends to zero, since $\mathbf{B}_{k,\tilde{k}} + \frac{\log(C_e - 1)}{\sigma_k - \sigma_{\tilde{k}}} \to \infty$, which means the feature point is encouraged to move further and further from the hypersphere center $\mathbf{M}_{k,\tilde{k}}$ merely to lower the loss value $C$, as intuitively illustrated in Fig. 2(a).

This counter-intuitive behavior is mainly rooted in applying the softmax function in training. Namely, the softmax normalization makes the loss value depend only on the relative relation among logits. This causes indirect and unexpected supervisory signals on the learned features, such that the points with low loss values tend to spread over the space sparsely. Fortunately, in practice, the feature point will not actually move to infinity, due to the existence of batch normalization layers (Ioffe & Szegedy, 2015) and the fact that the squared radius from the center $\mathbf{M}_{k,\tilde{k}}$ increases only as $\mathcal{O}(|\log C|)$ when minimizing the loss $C$. These theoretical conclusions are consistent with the empirical observations on two-dimensional features in previous work (cf. Fig. 1 in Wan et al. (2018)).

Another limitation of the g-SCE loss is that the sample density is proportional to $N_{k,\tilde{k}}$, which is on average $N / L^2$. For example, there are around 1.3 million training images in ImageNet (Deng et al., 2009), but with a large number of classes $L = 1,000$, there are on average fewer than two samples in each $\mathcal{D}_{k,\tilde{k}}$.
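To make the outward-drift behavior concrete, here is a hypothetical two-class, two-dimensional sketch (all centers, variances, step sizes, and iteration counts below are illustrative choices, not values from the paper): gradient descent on the locally approximated g-SCE loss with $\sigma_y < \sigma_{\tilde{y}}$ keeps pushing the feature point past its own class center while the loss keeps shrinking, whereas descent on a fixed-center squared loss of the kind MMC uses simply converges to the center.

```python
import numpy as np

mu_y, mu_t = np.array([1.0, 0.0]), np.array([-1.0, 0.0])  # class centers
s_y, s_t = 1.0, 2.0                                       # sigma_y < sigma_{y~}

def gsce_local(z):
    # Local g-SCE approximation: log(1 + exp(h_{y~} - h_y)) with quadratic logits.
    f = -s_t * np.sum((z - mu_t) ** 2) + s_y * np.sum((z - mu_y) ** 2)
    return np.log1p(np.exp(f))

def grad_gsce(z):
    # Gradient of the local loss: sigmoid(f) * df/dz.
    f = -s_t * np.sum((z - mu_t) ** 2) + s_y * np.sum((z - mu_y) ** 2)
    sig = 1.0 / (1.0 + np.exp(-f))
    return sig * (-2 * s_t * (z - mu_t) + 2 * s_y * (z - mu_y))

z = np.zeros(2)
start_loss = gsce_local(z)
for _ in range(20000):
    z = z - 0.5 * grad_gsce(z)
# The loss decreases, but z overshoots its class center mu_y and drifts outward.
assert gsce_local(z) < start_loss
assert np.linalg.norm(z) > np.linalg.norm(mu_y)

# In contrast, the squared loss toward a fixed center converges to that center.
w = np.zeros(2)
for _ in range(200):
    w = w - 0.5 * (w - mu_y)     # gradient of 0.5 * ||w - mu_y||^2
assert np.linalg.norm(w - mu_y) < 1e-6
```

The drift is slow (consistent with the $\mathcal{O}(|\log C|)$ radius growth), but it never stops: lower loss always means a larger radius in this regime.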
These limitations inspire us to design the new training loss in Sec. 3.3.

Remark 1. If $\sigma_{k} = \sigma_{\tilde{k}}$ (e.g., as in the SCE loss), the features with loss values in $[C, C + \Delta C]$ will be encouraged to locate between two hyperplane contours without further supervision, and consequently there will be no explicit supervision on the sample density, as shown in the left panel of Fig. 1.

Remark 2. Besides the g-SCE loss, Wen et al. (2016) propose the center loss to improve the intra-class compactness of learned features, formulated as $\mathcal{L}_{\mathrm{Center}}(Z(x),y) = \frac{1}{2}\|z - \mu_y\|_2^2$. Here the center $\mu_{y}$ is updated based on a mini-batch of learned features with label $y$ in each training iteration. The center loss has to be jointly used with the SCE loss as $\mathcal{L}_{\mathrm{SCE}} + \lambda \mathcal{L}_{\mathrm{Center}}$, since simply supervising the DNNs with the center loss term alone causes the learned features and centers to degrade to zeros (Wen et al., 2016). This makes it difficult to derive a closed-form formula for the induced sample density. Besides, the center loss method cannot concentrate on improving intra-class compactness, since it has to seek a trade-off between inter-class dispersion and intra-class compactness.

# 3.3 MAX-MAHALANOBIS CENTER LOSS

Inspired by the above analyses, we propose the Max-Mahalanobis center (MMC) loss to explicitly learn more structured representations and induce high-density regions in the feature space. The MMC loss is defined in a regression form without the softmax function as

$$
\mathcal{L}_{\mathrm{MMC}}(Z(x), y) = \frac{1}{2}\|z - \mu_{y}^{*}\|_{2}^{2}. \tag{8}
$$

Here $\mu^{*} = \{\mu_{l}^{*}\}_{l\in [L]}$ are the centers of the Max-Mahalanobis distribution (MMD) (Pang et al., 2018).
The MMD is a mixture of Gaussian distributions with identity covariance matrices and preset centers $\mu^{*}$, where $\|\mu_l^*\|_2 = C_{\mathrm{MM}}$ for any $l\in [L]$, and $C_{\mathrm{MM}}$ is a hyperparameter. These MMD centers are fixed during training and are crafted according to the criterion $\mu^{*} = \arg\min_{\mu}\max_{i\neq j}\langle \mu_i,\mu_j\rangle$. Intuitively, this criterion maximizes the minimal angle between any two centers, which provides optimal inter-class dispersion, as shown in Pang et al. (2018). In Appendix B.1, we provide the generation algorithm for $\mu^{*}$ in MMC. We derive the sample density induced by the MMC loss in the feature space, as stated in Theorem 2. Similar to the previously introduced notations, here we define the subset $\mathcal{D}_k = \{(x,y)\in \mathcal{D}\,|\,y = k\}$ and $N_{k} = |\mathcal{D}_{k}|$.

Theorem 2. (Proof in Appendix A.2) Given $(x,y)\in \mathcal{D}_k$, $z = Z(x)$ and $\mathcal{L}_{\mathrm{MMC}}(z,y) = C$, the sample density near the feature point $z$ is

$$
\mathbb{SD}(z) \propto \frac{N_{k} \cdot p_{k}(C)}{C^{\frac{d-1}{2}}}, \tag{9}
$$

where for the input-label pairs in $\mathcal{D}_k$, there is $\mathcal{L}_{\mathrm{MMC}} \sim p_k(c)$.

According to Theorem 2, the MMC loss has several attractive merits, as described below.

Inducing higher sample density. Compared to Theorem 1, the sample density induced by MMC is proportional to $N_{k}$ rather than $N_{k,\tilde{k}}$, where $N_{k}$ is on average $N / L$. This facilitates producing higher sample density. Furthermore, when the loss value $C$ is minimized to zero, the sample density increases exponentially according to Eq. (9), as illustrated in Fig. 2(b). The right panel of Fig. 1 also provides an intuitive insight into this property of the MMC loss: since the loss value $C$ is proportional to the squared distance from the preset center $\mu_y^*$, the feature points with lower loss values are certain to locate in a smaller volume around the center. Consequently, the feature points of the same class are encouraged to gather around the corresponding center, such that for each sample, there will be locally enough data in its neighborhood for robust learning (Schmidt et al., 2018). The MMC loss value also becomes a reliable metric of the uncertainty on returned predictions.

Better exploiting model capacity. Behind its simple formula, the MMC loss can explicitly control inter-class dispersion through the hyperparameter $C_{\mathrm{MM}}$, while enabling the network to concentrate on improving intra-class compactness in training. Instead of repeatedly searching for an internal trade-off during training as the center loss does, the monotonicity of the supervisory signals induced by MMC can better exploit model capacity and also lead to faster convergence, as empirically shown in Fig. 3(a).

![](images/34e8511f3cfc02eed505e288926dd05cdf5db94a386df3f67f1bb96aabb9a399.jpg)
(a) Test error rates w.r.t. training time

![](images/87328d4c55d98e5cfb0321f2db3177a761dd86f70f0d80ef00f9165ca487af2b.jpg)
(b) Visualization in the learned feature space of MNIST
Figure 3: (a) Test error rates on clean images w.r.t. training time on CIFAR-10. Here AT refers to 10-step targeted PGD adversarial training, i.e., $\mathrm{AT}_{10}^{\mathrm{tar}}$. (b) Two-dimensional visualization of the attacks on trained MMC networks in the feature space of MNIST. For each attack there is $\epsilon = 0.3$ with a step size of 0.01. The total number of iteration steps is 50, where $\mathrm{Iter}\text{-}n$ indicates the perturbed features at the $n$-th iteration step.

Avoiding the degradation problem. The MMC loss naturally avoids the degradation problem encountered in Wen et al.
(2016) when the center loss is not jointly used with the SCE loss, since the preset centers $\mu^{*}$ for MMC are untrainable. In the test phase, the network trained by MMC can still return a normalized prediction with the softmax function. More details about the empirical superiorities of the MMC loss over previous losses are demonstrated in Sec. 4.

Remark 3. In Appendix B.2, we discuss why the squared-error form in Eq. (8) is preferred in the adversarial setting over, e.g., the absolute form or the Huber form. We further introduce flexible variants of the MMC loss in Appendix B.3, which can better adapt to various tasks.

Remark 4. Pang et al. (2018) propose a Max-Mahalanobis linear discriminant analysis (MMLDA) method, which assumes the features to distribute as an MMD. Due to the Gaussian mixture assumption, the training loss for the MMLDA method is obtained by Bayes' theorem as

$$
\mathcal{L}_{\mathrm{MMLDA}}(Z(x), y) = -\log\left[\frac{\exp\left(-\frac{\|z - \mu_{y}^{*}\|_{2}^{2}}{2}\right)}{\sum_{l \in [L]} \exp\left(-\frac{\|z - \mu_{l}^{*}\|_{2}^{2}}{2}\right)}\right] = -\log\left[\frac{\exp\left(z^{\top}\mu_{y}^{*}\right)}{\sum_{l \in [L]} \exp\left(z^{\top}\mu_{l}^{*}\right)}\right]. \tag{10}
$$

Note that there is $\Sigma_{i} = \frac{1}{2}I$ in Eq. (4) for the MMLDA loss, similar to the SCE loss. Thus the MMLDA method cannot explicitly supervise the sample density and induce high-density regions in the feature space, as analyzed in Sec. 3.2. Compared to the MMLDA method, the MMC loss introduces extra supervision on intra-class compactness, which facilitates better robustness.

# 4 EXPERIMENTS

In this section, we empirically demonstrate several attractive merits of applying the MMC loss. We experiment on the widely used MNIST, CIFAR-10, and CIFAR-100 datasets (Krizhevsky & Hinton, 2009; LeCun et al., 1998).
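For concreteness, the preset centers $\mu^*$ of Sec. 3.3 can be produced with a standard simplex-style construction that satisfies the stated min-max criterion (this is a sketch consistent with the criterion, not necessarily the exact routine in the paper's Appendix B.1; the class count and feature dimension below are illustrative):

```python
import numpy as np

def generate_mm_centers(num_classes, dim, c_mm=10.0):
    """Craft L centers of norm c_mm whose pairwise inner products all equal
    -c_mm**2 / (L - 1), i.e., a regular simplex, which maximizes the minimal
    angle between any two centers. Requires dim >= num_classes."""
    assert dim >= num_classes
    mu = np.zeros((num_classes, dim))
    mu[0, 0] = 1.0
    for i in range(1, num_classes):
        for j in range(i):
            # Enforce <mu_i, mu_j> = -1/(L-1) using the already-fixed entries;
            # mu[i] @ mu[j] here only sums the entries of mu[i] filled so far.
            mu[i, j] = -(1.0 / (num_classes - 1) + mu[i] @ mu[j]) / mu[j, j]
        # Set the diagonal entry so that the row has unit norm.
        mu[i, i] = np.sqrt(max(0.0, 1.0 - np.sum(mu[i] ** 2)))
    return c_mm * mu

def mmc_loss(z, y, centers):
    """MMC loss of Eq. (8) for a batch of features z with integer labels y."""
    return 0.5 * np.sum((z - centers[y]) ** 2, axis=-1)

centers = generate_mm_centers(num_classes=10, dim=64, c_mm=10.0)
gram = centers @ centers.T
assert np.allclose(np.linalg.norm(centers, axis=1), 10.0)
assert np.allclose(gram[~np.eye(10, dtype=bool)], -100.0 / 9)

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 64))
y = np.array([0, 1, 2, 3])
assert mmc_loss(z, y, centers).shape == (4,)
```

Equal pairwise inner products of $-C_{\mathrm{MM}}^2/(L-1)$ are exactly the configuration that minimizes the maximal inner product between any two equal-norm centers.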
The main baselines for the MMC loss are SCE (He et al., 2016), Center loss (Wen et al., 2016), MMLDA (Pang et al., 2018), and L-GM (Wan et al., 2018). The code is available at https://github.com/P2333/Max-Mahalanobis-Training.

# 4.1 PERFORMANCE ON THE CLEAN INPUTS

The network architecture applied is ResNet-32 with five core layer blocks (He et al., 2016). Here we use MMC-10 to indicate the MMC loss with $C_{\mathrm{MM}} = 10$, where $C_{\mathrm{MM}}$ is assigned based on the cross-validation results in Pang et al. (2018). The hyperparameters for the center loss, the L-GM loss, and the MMLDA method all follow the settings in the original papers (Pang et al., 2018; Wan et al., 2018; Wen et al., 2016). The pixel values are scaled to the interval [0, 1]. For each training loss, with or without the AT mechanism, we apply the momentum SGD (Qian, 1999) optimizer with an initial learning rate of 0.01, and train for 40 epochs on MNIST and 200 epochs on CIFAR-10 and CIFAR-100. The learning rate decays by a factor of 0.1 at epochs 100 and 150.

Table 1: Classification accuracy (%) on the white-box adversarial examples crafted on the test set of CIFAR-10. The superscript tar indicates targeted attacks, while un indicates untargeted attacks. The subscripts indicate the number of iteration steps when performing attacks. The results w.r.t. the MMC loss are reported under the adaptive versions of different attacks. The notation $\leq 1$ represents accuracy less than $1\%$. MMC-10 (rand) is an ablation study where the class centers are uniformly sampled on the hypersphere.
| Methods | Clean | PGD$_{10}^{\mathrm{tar}}$ ($\epsilon=8/255$) | PGD$_{10}^{\mathrm{un}}$ ($\epsilon=8/255$) | PGD$_{50}^{\mathrm{tar}}$ ($\epsilon=8/255$) | PGD$_{50}^{\mathrm{un}}$ ($\epsilon=8/255$) | PGD$_{10}^{\mathrm{tar}}$ ($\epsilon=16/255$) | PGD$_{10}^{\mathrm{un}}$ ($\epsilon=16/255$) | PGD$_{50}^{\mathrm{tar}}$ ($\epsilon=16/255$) | PGD$_{50}^{\mathrm{un}}$ ($\epsilon=16/255$) |
|---|---|---|---|---|---|---|---|---|---|
| SCE | 92.9 | ≤1 | 3.7 | ≤1 | 3.6 | ≤1 | 2.9 | ≤1 | 2.6 |
| Center loss | 92.8 | ≤1 | 4.4 | ≤1 | 4.3 | ≤1 | 3.1 | ≤1 | 2.9 |
| MMLDA | 92.4 | ≤1 | 16.5 | ≤1 | 9.7 | ≤1 | 6.7 | ≤1 | 5.5 |
| L-GM | 92.5 | 37.6 | 19.8 | 8.9 | 4.9 | 26.0 | 11.0 | 2.5 | 2.8 |
| MMC-10 (rand) | 92.3 | 43.5 | 29.2 | 20.9 | 18.4 | 31.3 | 17.9 | 8.6 | 11.6 |
| MMC-10 | 92.7 | 48.7 | 36.0 | 26.6 | 24.8 | 36.1 | 25.2 | 13.4 | 17.5 |
| AT$_{10}^{\mathrm{tar}}$ (SCE) | 83.7 | 70.6 | 49.7 | 69.8 | 47.8 | 48.4 | 26.7 | 31.2 | 16.0 |
| AT$_{10}^{\mathrm{tar}}$ (MMC-10) | 83.0 | 69.2 | 54.8 | 67.0 | 53.5 | 58.6 | 47.3 | 44.7 | 45.1 |
| AT$_{10}^{\mathrm{un}}$ (SCE) | 80.9 | 69.8 | 55.4 | 69.4 | 53.9 | 53.3 | 34.1 | 38.5 | 21.5 |
| AT$_{10}^{\mathrm{un}}$ (MMC-10) | 81.8 | 70.8 | 56.3 | 70.1 | 55.0 | 54.7 | 37.4 | 39.9 | 27.7 |
![](images/a4dcbb43e3b9e0848e2e6d5f0f75d612b4f654c0174d87e4acbe5a9fdfa2a63e.jpg)
Figure 4: Classification accuracy under the black-box transfer-based attacks on the test set of CIFAR-10. The substitute model refers to the one used to craft adversarial examples, and the target model is the one that an adversary actually intends to fool. Here AT refers to $\mathrm{AT}_{10}^{\mathrm{tar}}$.

When applying the AT mechanism (Madry et al., 2018), the adversarial examples for training are crafted by 10-step targeted or untargeted PGD with $\epsilon = 8/255$. In Fig. 3(a), we provide the curves of the test error rate w.r.t. the training time. Note that the MMC loss induces a faster convergence rate and requires little extra computation compared to the SCE loss and its variants, while keeping comparable performance on the clean images. In comparison, implementing the AT mechanism is computationally expensive in training and sacrifices accuracy on the clean images.

# 4.2 ADAPTIVE ATTACKS FOR THE MMC LOSS

As stated in Athalye et al. (2018), only applying the existing attacks with default hyperparameters is not sufficient to claim reliable robustness. Thus, we apply the adaptive versions of existing attacks when evading the networks trained by the MMC loss (detailed in Appendix B.4). For instance, the non-adaptive objectives for PGD are variants of the SCE loss (Madry et al., 2018), while the adaptive objectives are $-\mathcal{L}_{\mathrm{MMC}}(z,y)$ and $\mathcal{L}_{\mathrm{MMC}}(z,y_t)$ in the untargeted and targeted modes of PGD, respectively. Here $y_{t}$ is the target label. To verify that the adaptive attacks are more effective than the non-adaptive ones, we modify the network architecture to have a two-dimensional feature layer and visualize the PGD attacking procedure in Fig. 3(b). The two panels separately correspond to two randomly selected clean inputs indicated by black stars.
The ten colored clusters in each panel consist of the features of all 10,000 test samples in MNIST, where each color corresponds to one class. We can see that the adaptive attacks are indeed much more effective than the non-adaptive ones.

# 4.3 PERFORMANCE UNDER THE WHITE-BOX ATTACKS

We first investigate the white-box $l_{\infty}$ distortion setting using the PGD attack, and report the results in Table 1. According to Carlini et al. (2019), we evaluate under different combinations of the attack parameters: the perturbation $\epsilon$, the number of iteration steps, and the attack mode, i.e., targeted or untargeted.

Table 2: Experiments on CIFAR-10. Part I: Averaged $l_{2}$ distortion of the white-box adversarial examples crafted by C&W with 1,000 iteration steps. Part II: Classification accuracy (%) under the black-box SPSA attack. Part III: Classification accuracy (%) under general transformations. The standard deviation $\sigma$ for the Gaussian noise is 0.05, and the degree range is $\pm 30^{\circ}$ for random rotation.
| Methods | C&W$^{\mathrm{tar}}$ (Part I) | C&W$^{\mathrm{un}}$ (Part I) | SPSA$_{10}^{\mathrm{tar}}$ ($\epsilon=8/255$) | SPSA$_{10}^{\mathrm{un}}$ ($\epsilon=8/255$) | SPSA$_{10}^{\mathrm{tar}}$ ($\epsilon=16/255$) | SPSA$_{10}^{\mathrm{un}}$ ($\epsilon=16/255$) | Noise (Part III) | Rotation (Part III) |
|---|---|---|---|---|---|---|---|---|
| SCE | 0.12 | 0.07 | 12.3 | 1.2 | 5.1 | ≤1 | 52.0 | 83.5 |
| Center loss | 0.13 | 0.07 | 21.2 | 6.0 | 10.6 | 2.0 | 55.4 | 84.9 |
| MMLDA | 0.17 | 0.10 | 25.6 | 13.2 | 11.3 | 5.7 | 57.9 | 84.8 |
| L-GM | 0.23 | 0.12 | 61.9 | 45.9 | 46.1 | 28.2 | 59.2 | 82.4 |
| MMC-10 | 0.34 | 0.17 | 69.5 | 56.9 | 57.2 | 41.5 | 69.3 | 87.2 |
| AT$_{10}^{\mathrm{tar}}$ (SCE) | 1.19 | 0.63 | 81.1 | 67.8 | 77.9 | 59.4 | 82.2 | 76.0 |
| AT$_{10}^{\mathrm{tar}}$ (MMC-10) | 1.91 | 0.85 | 79.1 | 69.2 | 74.5 | 62.7 | 83.5 | 75.2 |
| AT$_{10}^{\mathrm{un}}$ (SCE) | 1.26 | 0.68 | 78.8 | 67.0 | 73.7 | 60.3 | 78.9 | 73.7 |
| AT$_{10}^{\mathrm{un}}$ (MMC-10) | 1.55 | 0.73 | 80.4 | 69.6 | 74.6 | 62.4 | 80.3 | 75.8 |
Following the setting in Madry et al. (2018), we choose the perturbations $\epsilon = 8/255$ and $16/255$, with a step size of $2/255$. We have also run PGD-100 and PGD-200 attacks, and find that the accuracy converges compared to PGD-50. We run each PGD experiment several times with different random restarts to guarantee the reliability of the reported results.

Ablation study. To investigate the effect on robustness induced by high sample density in MMC, we substitute a uniformly sampled center set (Liu et al., 2018; Duan et al., 2019), i.e., $\mu^r = \{\mu_l^r\}_{l \in [L]}$, for the MM center set $\mu^*$, and name the resulting method "MMC-10 (rand)" as shown in Table 1. There is also $\|\mu_l^r\|_2 = C_{\mathrm{MM}}$, but $\mu^r$ is no longer the solution of the min-max problem in Sec. 3.3.

From the results in Table 1, we can see that the higher sample density alone in "MMC-10 (rand)" already leads to much better robustness than the other baseline methods, even under the adaptive attacks, while using the optimal center set $\mu^{*}$ as in "MMC-10" further improves performance. When combined with the AT mechanism, the trained models perform better under attacks different from the one used to craft adversarial examples for training, e.g., $\mathrm{PGD}_{50}^{\mathrm{un}}$ with $\epsilon = 16/255$.

We then investigate the white-box $l_{2}$ distortion setting. We apply the C&W attack, which uses a binary search mechanism to find the minimal distortion that misleads the classifier in the untargeted mode, or leads the classifier to predict the target label in the targeted mode. Following the suggestion in Carlini & Wagner (2017a), we set the number of binary search steps to 9 with the initial constant $c = 0.01$. The iteration steps for each value of $c$ are set to 1,000 with a learning rate of 0.005. In Part I of Table 2, we report the minimal distortions found by the C&W attack.
As expected, it requires much larger distortions to successfully evade the networks trained by MMC.

# 4.4 PERFORMANCE UNDER THE BLACK-BOX ATTACKS

As suggested in Carlini et al. (2019), providing evidence of being robust against black-box attacks is critical to claim reliable robustness. We first perform the transfer-based attacks using PGD and MIM. Since targeted attacks usually have poor transferability (Kurakin et al., 2018), we only focus on the untargeted mode in this case, and the results are shown in Fig. 4. We further perform the gradient-free attacks using the SPSA method and report the results in Part II of Table 2. To perform numerical approximations of gradients in SPSA, we set the batch size to 128, the learning rate to 0.01, and the finite-difference step size to $\delta = 0.01$, as suggested by Uesato et al. (2018). We also evaluate under stronger SPSA attacks with batch sizes of 4096 and 8192 in Table 3, where $\epsilon = 8/255$. With larger batch sizes, we find that the accuracy under the black-box SPSA attacks converges to that under the white-box PGD attacks. These results indicate that training with the MMC loss also leads to

Table 3: Accuracy (%) of MMC-10 under SPSA with different batch sizes.
CIFAR-10:

| Batch | SPSA$_{10}^{\mathrm{un}}$ | SPSA$_{10}^{\mathrm{tar}}$ |
|---|---|---|
| 128 | 57.0 | 69.0 |
| 4096 | 41.0 | 52.0 |
| 8192 | 37.0 | 49.0 |
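The SPSA estimator behind these gradient-free attacks can be sketched as follows (a generic finite-difference gradient estimator on a toy quadratic objective, not the exact attack implementation; the batch size and $\delta = 0.01$ mirror the settings mentioned above):

```python
import numpy as np

def spsa_gradient(f, x, delta=0.01, batch_size=8192, rng=None):
    """Estimate grad f(x) from function values only: average symmetric
    finite differences along random Rademacher (+/-1) directions."""
    rng = rng or np.random.default_rng(0)
    v = rng.choice([-1.0, 1.0], size=(batch_size, x.size))
    df = np.array([f(x + delta * vi) - f(x - delta * vi) for vi in v])
    # For +/-1 entries, dividing by v_i equals multiplying by v_i.
    return (df[:, None] * v).mean(axis=0) / (2.0 * delta)

# Toy check: for f(x) = ||x||^2 the true gradient is 2x.
x = np.array([0.3, -0.2, 0.5, 0.1, -0.4])
g = spsa_gradient(lambda z: np.sum(z ** 2), x)
assert np.allclose(g, 2 * x, atol=0.1)
```

Larger batch sizes reduce the variance of the estimate, which matches the observation above that stronger SPSA attacks with bigger batches approach the white-box PGD results.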
Table 4: Experiments on CIFAR-100. Part I: Classification accuracy (%) on the clean test samples. Part II: Classification accuracy (%) under the white-box PGD attacks and the black-box SPSA attack. The attacks are adaptive for MMC. Here the batch size for SPSA is 128. Part III: Averaged $l_{2}$ distortion of the white-box adversarial examples crafted by C&W with 1,000 iteration steps and 9 binary search steps.
| Methods | Clean (Part I) | PGD$_{10}^{\mathrm{tar}}$ (Part II) | PGD$_{10}^{\mathrm{un}}$ (Part II) | SPSA$_{10}^{\mathrm{tar}}$ (Part II) | SPSA$_{10}^{\mathrm{un}}$ (Part II) | C&W$^{\mathrm{tar}}$ (Part III) | C&W$^{\mathrm{un}}$ (Part III) |
|---|---|---|---|---|---|---|---|
| SCE | 72.9 | ≤1 | 8.0 | 14.0 | 1.9 | 0.16 | 0.047 |
| Center | 72.8 | ≤1 | 10.2 | 14.7 | 2.3 | 0.18 | 0.048 |
| MMLDA | 72.2 | ≤1 | 13.9 | 18.5 | 5.6 | 0.21 | 0.050 |
| L-GM | 71.3 | 15.8 | 15.3 | 22.8 | 7.6 | 0.31 | 0.063 |
| MMC-10 | 71.9 | 23.9 | 23.4 | 33.4 | 15.8 | 0.37 | 0.085 |
robustness under the black-box attacks, which verifies that our method induces reliable robustness, rather than a false sense of robustness caused by, e.g., gradient masking (Athalye et al., 2018).

# 4.5 PERFORMANCE UNDER THE GENERAL-PURPOSE ATTACKS

To show that our method is generally robust, we further test under the general-purpose attacks (Carlini et al., 2019). We apply Gaussian noise (Fawzi et al., 2016; Gilmer et al., 2019) and rotation transformations (Engstrom et al., 2019), which are not included in the data augmentation for training. The results are given in Part III of Table 2. Note that the AT methods are less robust to simple transformations like rotation, as also observed in previous work (Engstrom et al., 2019). In comparison, the models trained by the MMC loss remain robust to these easy-to-apply attacks.

# 4.6 EXPERIMENTS ON CIFAR-100

In Table 4 and Table 5, we provide the results on CIFAR-100 under the white-box PGD and C&W attacks, and the black-box gradient-free SPSA attack. The hyperparameter setting for each attack is the same as on CIFAR-10. Compared to previous defense strategies that also evaluate on CIFAR-100 (Pang et al., 2019; Mustafa et al., 2019), MMC improves robustness more significantly, while keeping better performance on the clean inputs. Compared to the results on CIFAR-10, the averaged distortion of C&W on CIFAR-100 is larger for a successful targeted attack and much smaller for a successful untargeted attack. This is because when only the number of classes increases, e.g., from 10 to 100, it is easier to achieve a coarse untargeted attack, but harder to make a subtle targeted attack. Note that in Table 5, we also train the ResNet-110 model with eighteen core block layers in addition to the ResNet-32 model. The results show that MMC can further benefit from deeper network architectures and better exploit model capacity to improve robustness.
Similar properties are also observed in previous work when applying the AT methods (Madry et al., 2018). In contrast, as shown in Table 5, the models trained by SCE are comparably sensitive to adversarial perturbations across different architectures, which demonstrates that SCE cannot take full advantage of the model capacity to improve robustness. This verifies that MMC provides an effective robustness-promoting mechanism like the AT methods, at much less computational cost.

# 5 CONCLUSION

In this paper, we formally demonstrate that applying the softmax function in training can lead to unexpected supervisory signals. To solve this problem, we propose the MMC loss to learn more structured representations and induce high-density regions in the feature space. In our experiments, we empirically demonstrate several favorable merits of our method: (i) it leads to reliable robustness even under strong adaptive attacks in different threat models; (ii) it keeps high performance on clean inputs, comparable to SCE; (iii) it introduces little extra computation compared to the SCE loss; (iv) it is compatible with the existing defense mechanisms, e.g., the AT methods. Our analyses in this paper also provide useful insights for future work on designing new objectives beyond the SCE framework.

# ACKNOWLEDGEMENTS

This work was supported by the National Key Research and Development Program of China (No. 2017YFA0700904), NSFC Projects (Nos. 61620106010, U19B2034, U1811461), Beijing NSF Project (No. L172037), Beijing Academy of Artificial Intelligence (BAAI), Tsinghua-Huawei Joint Research Program, a grant from Tsinghua Institute for Guo Qiang, Tiangong Institute for Intelligent Computing, the JP Morgan Faculty Research Program and the NVIDIA NVAIL Program with GPU/DGX Acceleration.

# REFERENCES

Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples.
In International Conference on Machine Learning (ICML), 2018.
Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 387-402. Springer, 2013.
Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In IEEE Symposium on Security and Privacy (S&P), 2017a.
Nicholas Carlini and David Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In ACM Workshop on Artificial Intelligence and Security (AISec), 2017b.
Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, and Alexey Kurakin. On evaluating adversarial robustness. arXiv preprint arXiv:1902.06705, 2019.
Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, Percy Liang, and John C Duchi. Unlabeled data improves adversarial robustness. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security (AISec), pp. 15-26. ACM, 2017.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 248-255. IEEE, 2009.
Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4690-4699, 2019.
Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li.
Boosting adversarial attacks with momentum. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
Yueqi Duan, Jiwen Lu, and Jie Zhou. Uniformface: Learning deep equidistributed representation for face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3415-3424, 2019.
Krishnamurthy Dvijotham, Sven Gowal, Robert Stanforth, Relja Arandjelovic, Brendan O'Donoghue, Jonathan Uesato, and Pushmeet Kohli. Training verified learners with learned verifiers. arXiv preprint arXiv:1805.10265, 2018a.
Krishnamurthy Dvijotham, Robert Stanforth, Sven Gowal, Timothy Mann, and Pushmeet Kohli. A dual approach to scalable verification of deep networks. In Annual Conference on Uncertainty in Artificial Intelligence (UAI), 2018b.
Logan Engstrom, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt, and Aleksander Madry. A rotation and a translation suffice: Fooling cnns with simple transformations. In International Conference on Machine Learning (ICML), 2019.
Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, and Pascal Frossard. Robustness of classifiers: from adversarial to random noise. In Advances in Neural Information Processing Systems (NeurIPS), pp. 1632-1640, 2016.
Alhussein Fawzi, Hamza Fawzi, and Omar Fawzi. Adversarial vulnerability for any classifier. In Advances in Neural Information Processing Systems (NeurIPS), 2018.
Jerome Friedman, Trevor Hastie, and Robert Tibshirani. The elements of statistical learning, volume 1. Springer series in statistics New York, 2001.
Justin Gilmer, Nicolas Ford, Nicolas Carlini, and Ekin Cubuk. Adversarial examples are a natural consequence of test error in noise. In International Conference on Machine Learning (ICML), 2019.
Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org.
Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples.
In International Conference on Learning Representations (ICLR), 2015.
Yandong Guo and Lei Zhang. One-shot face recognition by promoting underrepresented classes. arXiv preprint arXiv:1707.05574, 2017.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision (ECCV), pp. 630-645. Springer, 2016.
Matthias Hein and Maksym Andriushchenko. Formal guarantees on the robustness of a classifier against adversarial manipulation. In Advances in Neural Information Processing Systems (NeurIPS), pp. 2266-2276, 2017.
Cormac Herley and Paul C Van Oorschot. SoK: Science, security and the elusive goal of security as a scientific pursuit. In 2017 IEEE Symposium on Security and Privacy (S&P), pp. 99-120. IEEE, 2017.
Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. Black-box adversarial attacks with limited queries and information. In International Conference on Machine Learning (ICML), 2018.
Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (ICML), pp. 448-456, 2015.
John David Jackson. Classical electrodynamics. American Journal of Physics, 1999.
Harini Kannan, Alexey Kurakin, and Ian Goodfellow. Adversarial logit pairing. arXiv preprint arXiv:1803.06373, 2018.
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. In The International Conference on Learning Representations (ICLR) Workshops, 2017.
Alexey Kurakin, Ian Goodfellow, Samy Bengio, Yinpeng Dong, Fangzhou Liao, Ming Liang, Tianyu Pang, Jun Zhu, Xiaolin Hu, Cihang Xie, et al. Adversarial attacks and defences competition. arXiv preprint arXiv:1804.00097, 2018.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
Xuezhi Liang, Xiaobo Wang, Zhen Lei, Shengcai Liao, and Stan Z Li. Soft-margin softmax for deep classification. In International Conference on Neural Information Processing (ICONIP), pp. 413-421. Springer, 2017.
Weiyang Liu, Yandong Wen, Zhiding Yu, and Meng Yang. Large-margin softmax loss for convolutional neural networks. In International Conference on Machine Learning (ICML), 2016.
Weiyang Liu, Yandong Wen, Zhiding Yu, Ming Li, Bhiksha Raj, and Le Song. Sphereface: Deep hypersphere embedding for face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 212-220, 2017.
Weiyang Liu, Rongmei Lin, Zhen Liu, Lixin Liu, Zhiding Yu, Bo Dai, and Le Song. Learning towards minimum hyperspherical energy. In Advances in Neural Information Processing Systems (NeurIPS), pp. 6222-6233, 2018.
Pavel Loskot and Norman C. Beaulieu. On monotonicity of the hypersphere volume and area. Journal of Geometry, 2007.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations (ICLR), 2018.
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2574-2582, 2016.
Aamir Mustafa, Salman Khan, Munawar Hayat, Roland Goecke, Jianbing Shen, and Ling Shao. Adversarial defense by restricting the hidden space of deep neural networks.
In International Conference on Computer Vision (ICCV), 2019. +Yu Nesterov. Smooth minimization of non-smooth functions. Mathematical programming, 103(1): 127-152, 2005. +Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 427-436, 2015. +Frank Nielsen and Ke Sun. Guaranteed bounds on the kullback-leibler divergence of univariate mixtures. IEEE Signal Processing Letters, 23(11):1543-1546, 2016. +Tianyu Pang, Chao Du, and Jun Zhu. Max-mahalanobis linear discriminant analysis networks. In International Conference on Machine Learning (ICML), 2018. +Tianyu Pang, Kun Xu, Chao Du, Ning Chen, and Jun Zhu. Improving adversarial robustness via promoting ensemble diversity. In International Conference on Machine Learning (ICML), 2019. +Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z Berkay Celik, and Ananthram Swami. The limitations of deep learning in adversarial settings. In IEEE European Symposium on Security and Privacy (EuroS&P), pp. 372-387. IEEE, 2016. +Nicolas Papernot, Patrick McDaniel, Arunesh Sinha, and Michael Wellman. Towards the science of security and privacy in machine learning. In European Symposium on Security and Privacy (EuroS&P), 2018. +Xianbiao Qi and Lei Zhang. Face recognition via centralized coordinate learning. arXiv preprint arXiv:1801.05678, 2018. +Ning Qian. On the momentum term in gradient descent learning algorithms. Neural networks, 12(1): 145-151, 1999. + +Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, and Aleksander Madry. Adversarially robust generalization requires more data. In Advances in Neural Information Processing Systems (NeurIPS), pp. 5019-5031, 2018. +Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 815-823, 2015.
Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S Davis, Gavin Taylor, and Tom Goldstein. Adversarial training for free! In Advances in Neural Information Processing Systems (NeurIPS), 2019.
Robert Stanforth, Alhussein Fawzi, Pushmeet Kohli, et al. Are labels required for improving adversarial robustness? In Advances in Neural Information Processing Systems (NeurIPS), 2019.
Yi Sun, Yuheng Chen, Xiaogang Wang, and Xiaoou Tang. Deep learning face representation by joint identification-verification. In Advances in Neural Information Processing Systems (NeurIPS), pp. 1988-1996, 2014.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations (ICLR), 2014.
Jonathan Uesato, Brendan O'Donoghue, Aaron van den Oord, and Pushmeet Kohli. Adversarial risk and the dangers of evaluating against weak attacks. In International Conference on Machine Learning (ICML), 2018.
Weitao Wan, Yuanyi Zhong, Tianpeng Li, and Jiansheng Chen. Rethinking feature distribution for loss functions in image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9117-9126, 2018.
Feng Wang, Xiang Xiang, Jian Cheng, and Alan Loddon Yuille. Normface: L2 hypersphere embedding for face verification. In Proceedings of the 25th ACM International Conference on Multimedia (ACM MM), pp. 1041-1049. ACM, 2017.
Hao Wang, Yitong Wang, Zheng Zhou, Xing Ji, Dihong Gong, Jingchao Zhou, Zhifeng Li, and Wei Liu. Cosface: Large margin cosine loss for deep face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5265-5274, 2018.
Yandong Wen, Kaipeng Zhang, Zhifeng Li, and Yu Qiao.
A discriminative feature learning approach for deep face recognition. In European Conference on Computer Vision (ECCV), pp. 499-515. Springer, 2016. +Eric Wong and Zico Kolter. Provable defenses against adversarial examples via the convex outer adversarial polytope. In International Conference on Machine Learning (ICML), pp. 5283-5292, 2018. +Eric Wong, Frank Schmidt, Jan Hendrik Metzen, and J Zico Kolter. Scaling provable adversarial defenses. In Advances in Neural Information Processing Systems (NeurIPS), pp. 8400-8409, 2018. +Kai Y Xiao, Vincent Tjeng, Nur Muhammad Shafiullah, and Aleksander Madry. Training for faster adversarial robustness verification via inducing relu stability. In International Conference on Learning Representations (ICLR), 2019. +Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan Yuille, and Kaiming He. Feature denoising for improving adversarial robustness. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. +Dinghuai Zhang, Tianyuan Zhang, Yiping Lu, Zhanxing Zhu, and Bin Dong. You only propagate once: Accelerating adversarial training via maximal principle. In Advances in Neural Information Processing Systems (NeurIPS), 2019a. +Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P Xing, Laurent El Ghaoui, and Michael I Jordan. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning (ICML), 2019b. + +# A PROOF + +In this section, we provide the proof of the theorems proposed in the paper. + +# A.1 PROOF OF THEOREM 1 + +According to the definition of sample density + +$$ +\mathbb {S D} (z) = \frac {\Delta N}{\operatorname {V o l} (\Delta B)}, +$$ + +we separately calculate $\Delta N$ and $\mathrm{Vol}(\Delta B)$ . 
Since $\mathcal{L}_{\mathrm{g - SCE}}\sim p_{k,\tilde{k}}(c)$ for the data points in $\mathcal{D}_{k,\tilde{k}}$ , recall that $\Delta B = \{z\in \mathbb{R}^d |\mathcal{L}_{\mathrm{g - SCE}}\in [C,C + \Delta C]\}$ , then there is

$$
\begin{array}{l} \Delta N = \left| Z \left(\mathcal {D} _ {k, \tilde {k}}\right) \cap \Delta B \right| \\ = N _ {k, \tilde {k}} \cdot p _ {k, \tilde {k}} (C) \cdot \Delta C. \tag {11} \\ \end{array}
$$

Now we calculate $\mathrm{Vol}(\Delta B)$ by approximating it with $\mathrm{Vol}(\Delta B_{y,\tilde{y}})$ . We first derive the solution of $\mathcal{L}_{y,\tilde{y}} = C$ . For simplicity, we assume scaled identity covariance matrices, i.e., $\Sigma_i = \sigma_i I$ , where $\sigma_i > 0$ are scalars. Then, for any $i, j \in [L]$ and any constant $c$ , if $\sigma_i \neq \sigma_j$ , the solution of $h_i - h_j = c$ is a $(d-1)$ -dimensional hypersphere embedded in the $d$ -dimensional space of the feature $z$ :

$$
\left\| z - \mathbf {M} _ {i, j} \right\| _ {2} ^ {2} = \mathbf {B} _ {i, j} - \frac {c}{\sigma_ {i} - \sigma_ {j}}, \quad \text{where} \quad \mathbf {M} _ {i, j} = \frac {\sigma_ {i} \mu_ {i} - \sigma_ {j} \mu_ {j}}{\sigma_ {i} - \sigma_ {j}}, \; \mathbf {B} _ {i, j} = \frac {\sigma_ {i} \sigma_ {j} \| \mu_ {i} - \mu_ {j} \| _ {2} ^ {2}}{(\sigma_ {i} - \sigma_ {j}) ^ {2}} + \frac {B _ {i} - B _ {j}}{\sigma_ {i} - \sigma_ {j}}. \tag {12}
$$

Note that each value of $c$ corresponds to a specific contour, where $\mathbf{M}_{i,j}$ and $\mathbf{B}_{i,j}$ can be regarded as constants w.r.t. $c$ . When $\mathbf{B}_{i,j} < (\sigma_i - \sigma_j)^{-1}c$ , the solution set becomes empty. In particular, if $\sigma_{i} = \sigma_{j} = \sigma$ , the hypersphere-shaped contour degenerates to a hyperplane: $z^{\top}(\mu_i - \mu_j) = \frac{1}{2}\left[\| \mu_i\| _2^2 -\| \mu_j\| _2^2 +\sigma^{-1}(B_j - B_i + c)\right]$ . For example, for the SCE loss, the solution of the contour is $z^{\top}(W_i - W_j) = b_j - b_i + c$ .
For more general $\Sigma_{i}$ , the conclusions are similar, e.g., the solution in Eq. (12) will become a hyperellipse. Now it is easy to show that the solution of $\mathcal{L}_{y,\tilde{y}} = C$ when $y = k,\tilde{y} = \tilde{k}$ is the hypersphere:

$$
\left\| z - \mathbf {M} _ {k, \tilde {k}} \right\| _ {2} ^ {2} = \mathbf {B} _ {k, \tilde {k}} + \frac {\log \left(C _ {e} - 1\right)}{\sigma_ {k} - \sigma_ {\tilde {k}}}. \tag {13}
$$

According to the formula of the hypersphere surface area (Loskot & Beaulieu, 2007), the volume of $\Delta B_{y,\tilde{y}}$ is

$$
\operatorname {Vol} \left(\Delta B _ {y, \tilde {y}}\right) = \frac {2 \pi^ {\frac {d}{2}}}{\Gamma \left(\frac {d}{2}\right)} \left(\mathbf {B} _ {k, \tilde {k}} + \frac {\log \left(C _ {e} - 1\right)}{\sigma_ {k} - \sigma_ {\tilde {k}}}\right) ^ {\frac {d - 1}{2}} \cdot \Delta C, \tag {14}
$$

where $\Gamma (\cdot)$ is the gamma function. Finally we can approximate the sample density as

$$
\begin{array}{l} \mathbb {S D} (z) \approx \frac {\Delta N}{\operatorname {Vol} \left(\Delta B _ {y , \tilde {y}}\right)} \\ \propto \frac {N _ {k , \tilde {k}} \cdot p _ {k , \tilde {k}} (C)}{\left[ \mathbf {B} _ {k , \tilde {k}} + \frac {\log \left(C _ {e} - 1\right)}{\sigma_ {k} - \sigma_ {\tilde {k}}} \right] ^ {\frac {d - 1}{2}}}. \tag {15} \\ \end{array}
$$

# A.2 PROOF OF THEOREM 2

Similar to the proof of Theorem 1, there is

$$
\begin{array}{l} \Delta N = \left| Z \left(\mathcal {D} _ {k}\right) \cap \Delta B \right| \\ = N _ {k} \cdot p _ {k} (C) \cdot \Delta C. \tag {16} \\ \end{array}
$$

Unlike for the g-SCE, we can exactly calculate $\mathrm{Vol}(\Delta B)$ for the MMC loss. Note that the solution of $\mathcal{L}_{\mathrm{MMC}} = C$ is the hypersphere:

$$
\| z - \mu_ {y} ^ {*} \| _ {2} ^ {2} = 2 C.
\tag {17} +$$ + +![](images/e90f96ece4b9d820afde7cb89760d2076c38e3e4c504cfe913d1e1ca98a083fc.jpg) +Figure 5: Intuitive illustration of the Max-Mahalanobis centers in the cases of $L = 2,3,4$ + +![](images/f8df236f04fd561b6ebdb6fc4a5780793aecc64f63534218871ee3d550934e51.jpg) + +![](images/dadeef6d8c646a6e1dd55b0caaf4182c19ffc8f783a0b3f796890ea9899dbe10.jpg) + +According to the formula of the hypersphere surface area (Loskot & Beaulieu, 2007), we have + +$$ +\operatorname {V o l} (\Delta B) = \frac {2 ^ {\frac {d + 1}{2}} \pi^ {\frac {d}{2}} C ^ {\frac {d - 1}{2}}}{\Gamma \left(\frac {d}{2}\right)} \cdot \Delta C, \tag {18} +$$ + +where $\Gamma (\cdot)$ is the gamma function. Finally we can obtain the sample density as + +$$ +\begin{array}{l} \mathbb {S D} (z) = \frac {\Delta N}{\Delta B} \tag {19} \\ \propto \frac {N _ {k} \cdot p _ {k} (C)}{C ^ {\frac {d - 1}{2}}}. \\ \end{array} +$$ + +![](images/02d9ffe60cbfbfbda0fb06aa1684f65892a7b18121a020aa71882e34e716541c.jpg) + +# B TECHNICAL DETAILS + +In this section, we provide more technical details we applied in our paper. Most of our experiments are conducted on the NVIDIA DGX-1 server with eight Tesla P100 GPUs. + +# B.1 GENERATION ALGORITHM FOR THE MAX-MAHALANOBIS CENTERS + +We give the generation algorithm for crafting the Max-Mahalanobis Centers in Algorithm 1, proposed by Pang et al. (2018). Note that there are two minor differences from the originally proposed algorithm. First is that in Pang et al. (2018) they use $C = \| \mu_i\| _2^2$ , while we use $C_{\mathrm{MM}} = \| \mu_i\| _2$ . Second is that we denote the feature $z\in \mathbb{R}^d$ , while they denote $z\in \mathbb{R}^p$ . The Max-Mahalanobis centers generated in the low-dimensional cases are quite intuitive and comprehensible as shown in Fig. 5. 
For example, when $L = 2$ , the Max-Mahalanobis centers are the two vertices of a line segment; when $L = 3$ , they are the three vertices of an equilateral triangle; when $L = 4$ , they are the four vertices of a regular tetrahedron.

# Algorithm 1 GenerateMMcenters

Input: The constant $C_{\mathrm{MM}}$ , the dimension of vectors $d$ and the number of classes $L$ . ( $L \leq d + 1$ ) Initialization: Let the $L$ mean vectors be $\mu_1^* = e_1$ and $\mu_i^* = 0_d, i \neq 1$ . Here $e_1$ and $0_d$ separately denote the first unit basis vector and the zero vector in $\mathbb{R}^d$ .

```latex
for $i = 2$ to $L$ do
  for $j = 1$ to $i - 1$ do
    $\mu_i^* (j) = -[1 + \langle \mu_i^*,\mu_j^*\rangle \cdot (L - 1)] / [\mu_j^* (j)\cdot (L - 1)]$
  end for
  $\mu_i^* (i) = \sqrt{1 - \| \mu_i^*\|_2^2}$
end for
for $k = 1$ to $L$ do
  $\mu_k^* = C_{\mathrm{MM}}\cdot \mu_k^*$
end for
Return: The optimal mean vectors $\mu_i^*$ , $i\in [L]$ .
```

# B.2 WHY THE SQUARED-ERROR FORM IS PREFERRED

In the feature space, penalizing the distance between the features and the prefixed centers can be regarded as a regression problem. In the MMC loss, we apply the squared-error form $\| z - \mu_y^* \|_2^2$ . Other substitutes could be the absolute form $\| z - \mu_y^* \|_2$ or the Huber form. As stated in Friedman et al. (2001), the absolute form and the Huber form are more resistant to noisy data (outliers) or the misspecification of class labels, especially in data mining applications. However, in the classification tasks that we focus on in this paper, the training data is clean and reliable. Thus the squared-error form can lead to high accuracy with a faster convergence rate compared to the other forms. Furthermore, in the adversarial setting, adversarial examples have properties similar to outliers. When we apply the AT mechanism in the training procedure, we expect the classifiers to pay more attention to the adversarial examples, i.e., the outliers.
Note that this goal is the opposite of that in data mining applications, where outliers are intended to be ignored. Therefore, due to its sensitivity to outliers, the squared-error form can better collaborate with the AT mechanism to improve robustness.

Besides, the MMC loss can naturally perform a stronger AT mechanism without an additional regularization term. Specifically, let $x$ be the clean input and $x^{*}$ be the adversarial example crafted based on $x$ ; then in the adversarial logit pairing (ALP) method (Kannan et al., 2018), there is an extra regularizer besides the SCE loss:

$$
\left\| z (x) - z \left(x ^ {*}\right) \right\| _ {2} ^ {2}. \tag {20}
$$

When $x^{*}$ is added as an extra training point for MMC, the MMC loss minimizes $\| z(x) - \mu_y^*\|_2^2 + \| z(x^*) - \mu_y^*\|_2^2$ , which is an upper bound for $\frac{1}{2}\| z(x) - z(x^*)\|_2^2$ , since $\| a - b\|_2^2 \leq 2\| a - c\|_2^2 + 2\| b - c\|_2^2$ for any $a, b, c$ . Thus performing naive adversarial training (Goodfellow et al., 2015; Madry et al., 2018) with MMC is equivalent to performing stronger adversarial training variants like ALP. As analyzed above, the squared-error form in the MMC loss can accelerate the convergence of the AT mechanism, since the objective is sensitive to the crafted adversarial examples.

# B.3 VARIANTS OF THE MMC LOSS

In the MMC loss, we encourage the features to gather around the preset Max-Mahalanobis (MM) centers $\mu^{*} = \{\mu_{l}^{*}\}_{l\in [L]}$ , which leads to many attractive properties. However, this 'hard' supervision, which induces quite an orderly feature distribution, may be beyond the reach of the model's capability, especially when the classification tasks themselves are already challenging to learn, e.g., ImageNet (Deng et al., 2009). Therefore, we propose potential variants of the MMC loss that could alleviate this problem and make our method more adaptable. We leave the experimental investigations as future work.
Note that the MMC loss can be regarded as minimizing the negative log-likelihood (NLL) $-\log(P(z|y))$ , where the conditional feature distribution is modeled as $z|y \sim \mathcal{N}(\mu_y^*, I)$ . As described above, this distribution model may not be easy for DNNs to learn in some cases. Thus, we construct a softer model: $z|y, \mu_y \sim \mathcal{N}(\mu_y, I)$ and $\mu_y \sim \mathcal{N}(\mu_y^*, \alpha I)$ , where $\alpha > 0$ is a scalar. Here we give the feature center $\mu_y$ a prior distribution, where the prior is centered at $\mu_y^*$ . Intuitively, we relax the constraint that the features have to gather around $\mu_y^*$ . Instead, we encourage the features to gather around a substitute $\mu_y$ , while $\mu_y$ should be in the vicinity of $\mu_y^*$ . In training, we minimize the joint NLL $-\log(P(z, \mu_y|y)) = -\log(P(z|y, \mu_y)) - \log(P(\mu_y))$ , which is equivalent to minimizing what we call the elastic Max-Mahalanobis center (EMC) loss:

$$
\mathcal {L} _ {\mathrm {E M C}} (Z (x), y) = \frac {1}{2} \| z - \mu_ {y} \| ^ {2} + \frac {1}{2 \alpha} \| \mu_ {y} - \mu_ {y} ^ {*} \| ^ {2}. \tag {21}
$$

Here $\mu = \{\mu_l\}_{l\in [L]}$ are simply extra trainable parameters, and the prior variance $\alpha$ is a hyperparameter. When $\alpha \to 0$ , the EMC loss degenerates to the MMC loss. Note that although the $\mu_l^*$ are all on the hypersphere $\{\mathbf{z}\in \mathbb{R}^d : \|\mathbf{z}\| = C_{\mathrm{MM}}\}$ , the support set of each $\mu_l$ is the entire feature space $\mathbb{R}^d$ .

Further improvement can be made w.r.t. the MM centers $\mu^{*}$ . An implicit assumption behind the generation process of $\mu^{*}$ is that any two classes are mutually independent. This assumption could be approximately true for MNIST and CIFAR-10, but for more complex datasets, e.g., CIFAR-100 or ImageNet, this assumption may not be appropriate since there are structures in the relation among classes.
These structures can usually be visualized by a tree. To solve this problem, we introduce the hierarchical Max-Mahalanobis (HM) centers $\mu^{\mathrm{H}} = \{\mu_l^{\mathrm{H}}\}_{l\in [L]}$ , which are adaptively crafted according to the tree structure.

![](images/dcba3f466c9f1b73da32df47268fac8abe873736d14432a35bd763806e0ef1c7.jpg)
Figure 6: Intuitive demonstration of the attacking mechanisms under different adaptive objectives. Here $y$ is the original label, $\tilde{y} = \arg \max_{l \neq y} h_l$ is the label of the nearest other decision region w.r.t. the feature $z$ , and $y_t$ is the target label of targeted attacks.

Specifically, we first assign a virtual center (i.e., the origin) to the root node. For any child node $n_c$ in the tree, we denote its parent node as $n_p$ , and the number of its sibling nodes as $L_c$ . We locally generate a set of MM centers as $\mu^{(s,L_c)} = \text{GenerateMMcenters}(C^s,d,L_c)$ , where $s$ is the depth of the child node $n_c$ and $C^s$ is a constant with smaller values for larger $s$ . Then we assign to each child node of $n_p$ a virtual center from $\mu_{n_p} + \mu^{(s,L_c)}$ , i.e., a shifted set of the crafted MM centers, where $\mu_{n_p}$ is the virtual center assigned to $n_p$ . If the child node $n_c$ is a leaf node, i.e., it corresponds to a class label $l$ , then there is $\mu_l^{\mathrm{H}} = \mu_{n_c}$ . For example, in the CIFAR-100 dataset, there are 20 superclasses, with 5 classes in each superclass. We first craft 20 MM centers as $\mu^{(1,20)} = \text{GenerateMMcenters}(C^1,d,20)$ and 5 MM centers as $\mu^{(2,5)} = \text{GenerateMMcenters}(C^2,d,5)$ , where $C^2 \ll C^1$ . Note that $\mu^{(2,5)}$ could be different for each superclass, e.g., by a rotation transformation. Then if the label $l$ is the $j$ -th class in the $i$ -th superclass, there is $\mu_l^{\mathrm{H}} = \mu_i^{(1,20)} + \mu_j^{(2,5)}$ .
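As a concreteness check, Algorithm 1 (GenerateMMcenters), which the hierarchical construction above calls repeatedly, can be transcribed directly into NumPy. The sketch below is our own (the function name is ours, not from the paper); it returns $L$ centers of norm $C_{\mathrm{MM}}$ whose pairwise inner products are $-C_{\mathrm{MM}}^2/(L-1)$, i.e., the vertices of a regular simplex:

```python
import numpy as np

def generate_mm_centers(c_mm, d, L):
    """NumPy transcription of Algorithm 1 (GenerateMMcenters).

    Returns an (L, d) array of centers with ||mu_i||_2 = c_mm and
    <mu_i, mu_j> = -c_mm^2 / (L - 1) for i != j (a regular simplex),
    which requires L <= d + 1.
    """
    assert L <= d + 1
    mu = np.zeros((L, d))
    mu[0, 0] = 1.0  # mu_1^* = e_1
    for i in range(1, L):
        for j in range(i):
            # Solve <mu_i, mu_j> = -1/(L-1) for the j-th coordinate,
            # using the coordinates of mu_i already fixed so far.
            mu[i, j] = -(1.0 + np.dot(mu[i], mu[j]) * (L - 1)) / (mu[j, j] * (L - 1))
        if i < d:  # for i = d (only possible when L = d + 1) the residual is zero
            mu[i, i] = np.sqrt(max(0.0, 1.0 - np.sum(mu[i] ** 2)))
    return c_mm * mu
```

For instance, `generate_mm_centers(c1, d, 20)` and `generate_mm_centers(c2, d, 5)` would give the two levels $\mu^{(1,20)}$ and $\mu^{(2,5)}$ used in the CIFAR-100 example above.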
# B.4 ADAPTIVE OBJECTIVES AND THE INDUCED ATTACKING MECHANISMS

We apply the adaptive versions of existing attacks when evading the networks trained by the MMC loss. We separately design two adaptive adversarial objectives $\mathcal{L}_{\mathrm{Ada}}$ to minimize under the untargeted mode: $\mathcal{L}_{\mathrm{Ada}}^{\mathbf{un},1} = -\mathcal{L}_{\mathrm{MMC}}(z,y)$ ; $\mathcal{L}_{\mathrm{Ada}}^{\mathbf{un},2} = \mathcal{L}_{\mathrm{MMC}}(z,\tilde{y}) - \mathcal{L}_{\mathrm{MMC}}(z,y)$ , and under the targeted mode: $\mathcal{L}_{\mathrm{Ada}}^{\mathbf{tar},1} = \mathcal{L}_{\mathrm{MMC}}(z,y_t)$ ; $\mathcal{L}_{\mathrm{Ada}}^{\mathbf{tar},2} = \mathcal{L}_{\mathrm{MMC}}(z,y_t) - \mathcal{L}_{\mathrm{MMC}}(z,y)$ , where $y_{t}$ is the target label and $\tilde{y}$ is generally the label with the highest prediction other than $y$ , as defined in Sec. 3.2. These objectives follow previous work by Carlini & Wagner (2017a;b). Specifically, the adaptive objectives $\mathcal{L}_{\mathrm{Ada}}^{\mathbf{tar},1}$ and $\mathcal{L}_{\mathrm{Ada}}^{\mathbf{un},1}$ are used in the PGD, MIM and SPSA attacks, while the objectives $\mathcal{L}_{\mathrm{Ada}}^{\mathbf{tar},2}$ and $\mathcal{L}_{\mathrm{Ada}}^{\mathbf{un},2}$ are used in the C&W attacks.

In Fig. 6, we demonstrate the attacking mechanisms induced by different adaptive adversarial objectives. Note that we only focus on the gradients and ignore the specific method that implements the attack. Different adaptive objectives are preferred under different adversarial goals. For example, when decreasing the confidence of the true label is the goal, $\mathcal{L}_{\mathrm{Ada}}^{\mathrm{un},1}$ is the optimal choice; in order to mislead the classifier to predict an untrue label or the target label, $\mathcal{L}_{\mathrm{Ada}}^{\mathrm{un},2}$ and $\mathcal{L}_{\mathrm{Ada}}^{\mathrm{tar},2}$ are the optimal choices, respectively.
When there are additional detectors, the adversarial examples generated by $\mathcal{L}_{\mathrm{Ada}}^{\mathrm{tar},1}$ could be assigned to the target label with high confidence by the classifiers.

# B.5 RELATED WORK IN THE FACE RECOGNITION AREA

There is much previous work in the face recognition area that focuses on angular margin-based softmax (AMS) losses (Liu et al., 2016; 2017; Liang et al., 2017; Wang et al., 2018; Deng et al., 2019). These works mainly exploit three basic operations: weight normalization (WN), feature normalization (FN), and angular margin (AN). It has been empirically shown that WN can benefit the cases with unbalanced data (Guo & Zhang, 2017); FN can encourage the models to focus more on hard examples (Wang et al., 2017); AN can induce larger inter-class margins and lead to better generalization in different facial tasks (Wang et al., 2018; Deng et al., 2019). However, there are two critical differences between our MMC loss and these AMS losses:

Table 5: Classification accuracy (%) on the white-box adversarial examples crafted on the test set of CIFAR-10 and CIFAR-100. The results w.r.t. the MMC loss are reported under the adaptive versions of different attacks. MMC can better exploit deep architectures, while SCE cannot.
| Methods | Cle. | PGD10tar (ε = 8/255) | PGD10un (ε = 8/255) | PGD50tar (ε = 8/255) | PGD50un (ε = 8/255) | PGD10tar (ε = 16/255) | PGD10un (ε = 16/255) | PGD50tar (ε = 16/255) | PGD50un (ε = 16/255) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **CIFAR-10** |  |  |  |  |  |  |  |  |  |
| SCE (Res.32) | 93.6 | ≤1 | 3.7 | ≤1 | 3.6 | ≤1 | 2.7 | ≤1 | 2.9 |
| MMC (Res.32) | 92.7 | 48.7 | 36.0 | 26.6 | 24.8 | 36.1 | 25.2 | 13.4 | 17.5 |
| SCE (Res.110) | 94.7 | ≤1 | 3.0 | ≤1 | 2.9 | ≤1 | 2.1 | ≤1 | 2.0 |
| MMC (Res.110) | 93.6 | 54.7 | 46.0 | 34.4 | 31.4 | 41.0 | 30.7 | 16.2 | 21.6 |
| **CIFAR-100** |  |  |  |  |  |  |  |  |  |
| SCE (Res.32) | 72.3 | ≤1 | 7.8 | ≤1 | 7.4 | ≤1 | 4.8 | ≤1 | 4.7 |
| MMC (Res.32) | 71.9 | 23.9 | 23.4 | 15.1 | 21.9 | 16.4 | 16.7 | 8.0 | 15.7 |
| SCE (Res.110) | 74.8 | ≤1 | 7.5 | ≤1 | 7.3 | ≤1 | 4.7 | ≤1 | 4.5 |
| MMC (Res.110) | 73.2 | 34.6 | 22.4 | 23.7 | 16.5 | 24.1 | 14.9 | 13.9 | 10.5 |
# Difference one: The inter-class margin

- The AMS losses induce the inter-class margins mainly by encouraging the intra-class compactness, while the weights are not explicitly forced to have large margins (Qi & Zhang, 2018).
- The MMC loss simultaneously fixes the class centers to be optimally dispersed and encourages the intra-class distribution to be compact. Note that both mechanisms can induce inter-class margins, which finally leads to larger inter-class margins compared to the AMS losses.

# Difference two: The normalization

- The AMS losses use both WN and FN to exploit the angular metric, which makes the normalized features lie on hyperspheres. The good properties of the AMS losses come at the cost of abandoning the radial degree of freedom, which may reduce the capability of models.
- In the MMC loss, there is only WN on the class centers, i.e., $\| \mu_y^* \| = C_{\mathrm{MM}}$ , and we leave the degree of freedom in the radial direction for the features to keep model capacity. However, note that the MMC loss satisfies $\| z - \mu_y^* \|_2^2 \geq (\| z \|_2 - C_{\mathrm{MM}})^2$ , so it acts as a natural penalty on the feature norm, encouraging $\| z \|_2$ not to be far from $C_{\mathrm{MM}}$ . This prevents models from increasing feature norms for easy examples while ignoring hard examples, similar to the effect of FN but more flexible.
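The feature-norm bound in the last bullet is just the reverse triangle inequality, $\|z - \mu_y^*\|_2 \geq \big| \|z\|_2 - \|\mu_y^*\|_2 \big|$, squared on both sides. A quick numerical illustration (our own sketch, not from the paper):

```python
import numpy as np

def mmc_norm_penalty_gap(z, mu_star):
    """Return ||z - mu*||_2^2 - (||z||_2 - ||mu*||_2)^2.

    By the reverse triangle inequality this gap is always non-negative,
    so the MMC loss implicitly penalizes features whose norm strays far
    from C_MM = ||mu*||_2, without normalizing the features themselves.
    """
    lhs = float(np.sum((z - mu_star) ** 2))
    rhs = float((np.linalg.norm(z) - np.linalg.norm(mu_star)) ** 2)
    return lhs - rhs
```

The gap is zero exactly when $z$ lies on the ray through $\mu_y^*$, which is also where the MMC loss itself is smallest for a given feature norm.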
\ No newline at end of file diff --git a/rethinkingsoftmaxcrossentropylossforadversarialrobustness/images.zip b/rethinkingsoftmaxcrossentropylossforadversarialrobustness/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..4212b99c6240ab31f503ae2efe58ae6e42eb580f --- /dev/null +++ b/rethinkingsoftmaxcrossentropylossforadversarialrobustness/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5aa3c60d9aa537c11ac5989576af0ca78c44f897edbbd5290761a1fa36d1788e +size 665010 diff --git a/rethinkingsoftmaxcrossentropylossforadversarialrobustness/layout.json b/rethinkingsoftmaxcrossentropylossforadversarialrobustness/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..42de53c9ffffcc023b384b9685a0edd2b9516a6d --- /dev/null +++ b/rethinkingsoftmaxcrossentropylossforadversarialrobustness/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6008c7709fe92f6354d7159020cd4246952fccfcf7bccd89b22489c836d248de +size 741764 diff --git a/rethinkingthehyperparametersforfinetuning/225084a0-86f8-478a-b25e-f0366115eeb4_content_list.json b/rethinkingthehyperparametersforfinetuning/225084a0-86f8-478a-b25e-f0366115eeb4_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..fb6476524bb949d05abc7f3a48cc419e6caec8b3 --- /dev/null +++ b/rethinkingthehyperparametersforfinetuning/225084a0-86f8-478a-b25e-f0366115eeb4_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7a325c76bf9e68e234af6f8c653994bdaf0139e0a9936edeac54b2bd2f7fb897 +size 119432 diff --git a/rethinkingthehyperparametersforfinetuning/225084a0-86f8-478a-b25e-f0366115eeb4_model.json b/rethinkingthehyperparametersforfinetuning/225084a0-86f8-478a-b25e-f0366115eeb4_model.json new file mode 100644 index 0000000000000000000000000000000000000000..9efcbd57addaae0be1fe477c28b30434daabe801 --- /dev/null +++ 
b/rethinkingthehyperparametersforfinetuning/225084a0-86f8-478a-b25e-f0366115eeb4_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:220b0c1ae6f533db18e5bca3f72e9cf83fc1d9b39373929d99b6d4536ccbbc5e +size 151946 diff --git a/rethinkingthehyperparametersforfinetuning/225084a0-86f8-478a-b25e-f0366115eeb4_origin.pdf b/rethinkingthehyperparametersforfinetuning/225084a0-86f8-478a-b25e-f0366115eeb4_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..93af2248a22e83df478de72b35c378af36ea0d67 --- /dev/null +++ b/rethinkingthehyperparametersforfinetuning/225084a0-86f8-478a-b25e-f0366115eeb4_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f175a13151d00cd44b305ec21e28ef85ff46191694d6dc9accf3a22bef6bc611 +size 2258781 diff --git a/rethinkingthehyperparametersforfinetuning/full.md b/rethinkingthehyperparametersforfinetuning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..bb2d10f062c88855bdc88cf074e43eeaa69b74ae --- /dev/null +++ b/rethinkingthehyperparametersforfinetuning/full.md @@ -0,0 +1,477 @@ +# RETHINKING THE HYPERPARAMETERS FOR FINE-TUNING + +Hao Li $^{1}$ , Pratik Chaudhari $^{2*}$ , Hao Yang $^{1}$ , Michael Lam $^{1}$ , Avinash Ravichandran $^{1}$ , Rahul Bhotika $^{1}$ , Stefano Soatto $^{1,3}$ + +1Amazon Web Services, 2University of Pennsylvania, 3University of California, Los Angeles {haolimax, haoyng, michlam, ravinash, bhotikar, soattos} @amazon.com, pratikac@seas.upenn.edu + +# ABSTRACT + +Fine-tuning from pre-trained ImageNet models has become the de-facto standard for various computer vision tasks. Current practices for fine-tuning typically involve selecting an ad-hoc choice of hyperparameters and keeping them fixed to values normally used for training from scratch. This paper re-examines several common practices of setting hyperparameters for fine-tuning. 
Our findings are based on extensive empirical evaluation for fine-tuning on various transfer learning benchmarks. (1) While prior works have thoroughly investigated learning rate and batch size, momentum for fine-tuning is a relatively unexplored parameter. We find that the value of momentum also affects fine-tuning performance and connect it with previous theoretical findings. (2) Optimal hyperparameters for fine-tuning, in particular, the effective learning rate, are not only dataset dependent but also sensitive to the similarity between the source domain and target domain. This is in contrast to hyperparameters for training from scratch. (3) Reference-based regularization that keeps models close to the initial model does not necessarily apply for "dissimilar" datasets. Our findings challenge common practices of fine-tuning and encourage deep learning practitioners to rethink the hyperparameters for fine-tuning.

# 1 INTRODUCTION

Many real-world applications often have a limited number of training instances, which makes directly training deep neural networks hard and prone to overfitting. Transfer learning with the knowledge of models learned on a similar task can help to avoid overfitting. Fine-tuning is a simple and effective approach of transfer learning, in which pre-trained models are further trained on the target dataset, and it has become popular for solving new tasks. Specifically, fine-tuning on pre-trained ImageNet classification models (Simonyan & Zisserman, 2015; He et al., 2016b) has achieved impressive results for tasks such as object detection (Ren et al., 2015) and segmentation (He et al., 2017; Chen et al., 2017) and is becoming the de-facto standard of solving computer vision problems. It is believed that the weights learned on the source dataset with a large number of instances provide better initialization for the target task than random initialization.
Even when there is enough training data, fine-tuning is still preferred as it often reduces training time significantly (He et al., 2019).

The common practice of fine-tuning is to adopt the default hyperparameters for training large models while using a smaller initial learning rate and a shorter learning rate schedule. It is believed that adhering to the original hyperparameters for fine-tuning with a small learning rate prevents destroying the originally learned knowledge or features. For instance, many studies conduct fine-tuning of ResNets (He et al., 2016b) with these default hyperparameters: learning rate 0.01, momentum 0.9 and weight decay 0.0001. However, the default setting is not necessarily optimal for fine-tuning on other tasks. While a few studies have performed extensive hyperparameter search for learning rate and weight decay (Mahajan et al., 2018; Kornblith et al., 2019), the momentum coefficient is rarely changed. Though the effectiveness of the hyperparameters has been studied extensively for training a model from scratch, how to set the hyperparameters for fine-tuning is not yet fully understood.

In addition to using ad-hoc hyperparameters, commonly held beliefs for fine-tuning also include:

- Fine-tuning pre-trained networks outperforms training from scratch; recent work (He et al., 2019) has already revisited this.
- Fine-tuning from similar domains and tasks works better (Ge & Yu, 2017; Cui et al., 2018; Achille et al., 2019; Ngiam et al., 2018).
- Explicit regularization with initial models matters for transfer learning performance (Li et al., 2018; 2019).

Are these practices or beliefs always valid? From an optimization perspective, the difference between fine-tuning and training from scratch is all about the initialization. However, the loss landscapes around the pre-trained model and the fine-tuned solution could be quite different, and so could their optimization strategies and hyperparameters.
Would the hyperparameters for training from scratch still be useful for fine-tuning? In addition, most of the hyperparameters (e.g., batch size, momentum, weight decay) are frozen; will the conclusion differ when some of them are changed?

With these questions in mind, we re-examined the common practices for fine-tuning. We conducted extensive hyperparameter search for fine-tuning on various transfer learning benchmarks with different source models. The goal of our work is not to obtain state-of-the-art performance on each fine-tuning task, but to understand the effectiveness of each hyperparameter for fine-tuning, avoiding unnecessary computation. We explain why certain hyperparameters work so well on certain datasets while failing on others, which can guide hyperparameter search for fine-tuning.

Our main findings are as follows:

- Optimal hyperparameters for fine-tuning are not only dataset dependent, but are also dependent on the similarity between the source and target domains, which is different from training from scratch. Therefore, the common practice of using optimization schedules derived from ImageNet training cannot guarantee good performance. This explains why some tasks do not achieve satisfactory results after fine-tuning: the hyperparameter selection is inappropriate. Specifically, as opposed to the common practice of rarely tuning the momentum value beyond 0.9, we find that zero momentum sometimes works better for fine-tuning on tasks that are similar to the source domain, while nonzero momentum works better for target domains that are different from the source domain.
- Hyperparameters are coupled together and it is the effective learning rate—which encapsulates the learning rate and momentum—that matters for fine-tuning performance. While the effective learning rate has been studied for training from scratch, to the best of our knowledge, no previous work investigates the effective learning rate for fine-tuning, and it is rarely considered in practice.
Our observations about momentum can be explained by the fact that a small momentum effectively decreases the learning rate, which is more suitable for fine-tuning on tasks similar to the source domain. We show that the optimal effective learning rate depends on the similarity between the source and target domains.
- We find that regularization methods designed to keep the model close to its initialization do not necessarily work for "dissimilar" datasets, especially for networks with Batch Normalization. With a properly chosen search space, simple weight decay can perform as well as the reference-based regularization methods for fine-tuning.

# 2 RELATED WORK

In transfer learning for image classification, the last layer of a pre-trained network is usually replaced with a randomly initialized fully connected layer whose size matches the number of classes in the target task (Simonyan & Zisserman, 2015). It has been shown that fine-tuning the whole network usually results in better performance than using the network as a static feature extractor (Yosinski et al., 2014; Donahue et al., 2014; Huh et al., 2016; Mormont et al., 2018; Kornblith et al., 2019). Ge & Yu (2017) select images with similar local features from the source domain to jointly fine-tune pre-trained networks. Cui et al. (2018) estimate domain similarity with ImageNet and demonstrate that transfer learning benefits from pre-training on a similar source domain. Besides image classification, many object detection frameworks also rely on fine-tuning to improve over training from scratch (Girshick et al., 2014; Ren et al., 2015).

Many researchers have re-examined whether fine-tuning is necessary for obtaining good performance. Ngiam et al. (2018) find that the gains from transfer learning can be negative when domains are mismatched, even when the domains are intuitively similar. Kornblith et al.
(2019) examine the fine-tuning performance of various ImageNet models and find a strong correlation between ImageNet top-1 accuracy and transfer accuracy. They also find that pre-training on ImageNet provides minimal benefits for some fine-grained object classification datasets. He et al. (2019) questioned whether ImageNet pre-training is necessary for training object detectors; they find that a solution trained from scratch is no worse than its fine-tuned counterpart as long as the target dataset is large enough. Raghu et al. (2019) find that transfer learning brings a negligible performance boost on medical imaging applications, but speeds up convergence significantly.

There is a large body of literature on hyperparameter selection for training neural networks from scratch, mostly on batch size, learning rate and weight decay (Goyal et al., 2017; Smith et al., 2018; Smith & Topin, 2019). Only a few works address the selection of momentum (Sutskever et al., 2013); Zhang & Mitliagkas (2017) proposed an automatic tuner for momentum and learning rate in SGD. There are also studies on the correlations between hyperparameters, such as the linear scaling rule between batch size and learning rate (Goyal et al., 2017; Smith et al., 2018; Smith, 2017). However, most of these advances in hyperparameter tuning are designed for training from scratch and have not been examined on fine-tuning tasks for computer vision problems. Most works on fine-tuning simply choose fixed hyperparameters (Cui et al., 2018) or use dataset-dependent learning rates (Li et al., 2018) in their experiments. Due to the huge computational cost of hyperparameter search, only a few works (Kornblith et al., 2019; Mahajan et al., 2018) have performed large-scale grid searches over learning rate and weight decay to obtain the best performance.

# 3 TUNING HYPERPARAMETERS FOR FINE-TUNING

In this section, we first introduce the notation and experimental settings, and then present our observations on momentum, the effective learning rate and regularization. The fine-tuning process differs from learning from scratch only in the weight initialization. The goal is still to minimize the objective function $L = \frac{1}{N}\sum_{i=1}^{N}\ell(f(x_i,\theta),y_i) + \frac{\lambda}{2}\|\theta\|_2^2$, where $\ell$ is the loss function, $N$ is the number of samples, $x_i$ is the input data, $y_i$ is its label, $f$ is the neural network, $\theta$ is the model parameters and $\lambda$ is the regularization hyperparameter, or weight decay. Momentum is widely used for accelerating and smoothing the convergence of SGD by accumulating a velocity vector in the direction of persistent loss reduction (Polyak, 1964; Sutskever et al., 2013; Goh, 2017). The commonly used Nesterov's Accelerated Gradient (Nesterov, 1983) is given by:

$$
v _ {t + 1} = m v _ {t} - \eta_ {t} \frac {1}{n} \sum_ {i = 1} ^ {n} \nabla \ell \left(f \left(x _ {i}, \theta_ {t} + m v _ {t}\right), y _ {i}\right) \tag {1}
$$

$$
\theta_ {t + 1} = \theta_ {t} + v _ {t + 1} - \eta_ {t} \lambda \theta_ {t} \tag {2}
$$

where $\theta_{t}$ denotes the model parameters at iteration $t$. The hyperparameters include the learning rate $\eta_{t}$, batch size $n$, momentum coefficient $m \in [0,1)$, and the weight decay $\lambda$.

# 3.1 EXPERIMENTAL SETTINGS

We evaluate fine-tuning on seven widely used image classification datasets, covering fine-grained object recognition, scene recognition and general object recognition. Detailed statistics of each dataset are given in Table 1. We use ImageNet (Russakovsky et al., 2015), Places-365 (Zhou et al., 2018) and iNaturalist (Van Horn et al., 2018) as source domains for the pre-trained models.
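For reference, the update in Eqs. (1)-(2) can be written out directly. The following is a minimal sketch (ours, not the training code used in these experiments), exercised on a 1-D quadratic loss, where `grad_fn` stands in for the minibatch gradient:

```python
# Sketch of Nesterov momentum with weight decay, following Eqs. (1)-(2):
# the gradient is evaluated at the lookahead point theta + m*v.
def nesterov_step(theta, v, grad_fn, eta, m, lam):
    g = grad_fn(theta + m * v)             # minibatch gradient at lookahead
    v = m * v - eta * g                    # Eq. (1): velocity update
    theta = theta + v - eta * lam * theta  # Eq. (2): parameter update
    return theta, v

# Toy run on the quadratic loss l(theta) = 0.5 * theta^2 (gradient = theta):
theta, v = 1.0, 0.0
for _ in range(200):
    theta, v = nesterov_step(theta, v, lambda t: t, eta=0.01, m=0.9, lam=0.0)
# theta has converged close to the minimum at 0
```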
We resize the input images such that the aspect ratio is preserved and the shorter side is 256 pixels. The images are normalized with the mean and std values calculated over ImageNet. For data augmentation, we adopt the common practices used for training ImageNet models (Szegedy et al., 2015): random mirroring, random scaled cropping with scale and aspect variations, and color jittering. The augmented images are resized to $224 \times 224$. Note that state-of-the-art methods could achieve even better performance by using higher-resolution images (Cui et al., 2018) or better data augmentation (Cubuk et al., 2018).

We mainly use ResNet-101-V2 (He et al., 2016a) as our base network, which is pre-trained on ImageNet (Russakovsky et al., 2015). Similar observations are also made on DenseNets (Huang et al., 2017) and MobileNet (Howard et al., 2017).

Table 1: Dataset statistics. For the Caltech-256 dataset, we randomly sampled 60 images for each class following the procedure used in (Li et al., 2018). For the Aircraft and Flower datasets, we combined the original training and validation sets and evaluated on the test set. For iNat 2017, we combined the original training set and $90\%$ of the validation set following (Cui et al., 2018).

| Dataset | Task Category | Classes | Training | Test |
| --- | --- | --- | --- | --- |
| Oxford Flowers (Nilsback & Zisserman, 2008) | fine-grained object recog. | 102 | 2,040 | 6,149 |
| CUB-Birds 200-2011 (Wah et al., 2011) | fine-grained object recog. | 200 | 5,994 | 5,794 |
| FGVC Aircrafts (Maji et al., 2013) | fine-grained object recog. | 100 | 6,667 | 3,333 |
| Stanford Cars (Krause et al., 2013) | fine-grained object recog. | 196 | 8,144 | 8,041 |
| Stanford Dogs (Khosla et al., 2011) | fine-grained object recog. | 120 | 12,000 | 8,580 |
| MIT Indoor-67 (Sharif Razavian et al., 2014) | scene classification | 67 | 5,360 | 1,340 |
| Caltech-256-60 (Griffin et al., 2007) | general object recog. | 256 | 15,360 | 15,189 |
| iNaturalist 2017 (Van Horn et al., 2018) | fine-grained object recog. | 5,089 | 665,571 | 9,599 |
| Place365 (Zhou et al., 2018) | scene classification | 365 | 1,803,460 | 36,500 |

The hyperparameters to be tuned, with their search ranges, are: learning rate (0.1, 0.05, 0.01, 0.005, 0.001, 0.0001), momentum (0.99, 0.95, 0.9, 0.8, 0.0) and weight decay (0.0, 0.0001, 0.0005, 0.001). We set the default hyperparameters to batch size 256, learning rate 0.01, momentum 0.9 and weight decay 0.0001. To avoid insufficient training and to observe the complete convergence behavior, we use 300 epochs for fine-tuning and 600 epochs for training from scratch, which is long enough for the training curves to converge. The learning rate is decayed by a factor of 0.1 at epochs 150 and 250. We report the top-1 validation (test) error at the end of training. The total computation time for the experiments is more than 10K GPU hours.

# 3.2 EFFECT OF MOMENTUM AND DOMAIN SIMILARITY

Momentum 0.9 is the most widely used value for training from scratch (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016b) and is also widely adopted for fine-tuning (Kornblith et al., 2019). To the best of our knowledge, it is rarely changed, regardless of the network architecture or target task. To check the influence of momentum on fine-tuning, we first search for the best momentum value for fine-tuning on the Birds dataset with different weight decays and learning rates. Figure 1(a) shows the performance of fine-tuning with and without weight decay. Surprisingly, zero momentum actually outperforms nonzero momentum. The optimal learning rate also increases when momentum is disabled, as shown in Figure 1(b).

![](images/ec56b4f0478e01c3c7d9fc0f829e202be820c2e737db8a28bc852168ce210ab6.jpg)

![](images/7737c67ad6718b8290fae9e61e6b5115f7940047f785be09927b03ae5bc5d523.jpg)

Figure 1: (a) Searching for the optimal momentum on the Birds dataset with fixed learning rate 0.01 and different weight decays. Detailed learning curves and results for other hyperparameters can be found in Appendix A. (b) Comparison of momentum 0.9 and 0.0 with different learning rates on the Birds dataset; $\lambda$ is fixed at 0.0001.

To verify this observation, we further compare momentum 0.9 and 0.0 on other datasets. Table 2 shows the performance of 8 hyperparameter settings on 7 datasets. We observe a clear pattern: disabling momentum works better for Dogs, Caltech and Indoor, while momentum 0.9 works better for Cars, Aircrafts and Flowers.

Table 2: Top-1 validation errors on seven datasets obtained by fine-tuning an ImageNet pre-trained ResNet-101 with different hyperparameters. Each row represents a network fine-tuned with one set of hyperparameters (left three columns). The datasets are ranked by the relative improvement obtained by disabling momentum. The lowest error rates for each momentum value are marked in bold. Note that the performance difference for Birds is not very significant.

| m | η | λ | Dogs | Caltech | Indoor | Birds | Cars | Aircrafts | Flowers |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0.9 | 0.01 | 0.0001 | 17.20 | 14.85 | 23.76 | 18.10 | 9.10 | 17.55 | **3.12** |
| 0.9 | 0.01 | 0 | 17.41 | 14.51 | 24.59 | 18.42 | 9.60 | **17.40** | 3.33 |
| 0.9 | 0.005 | 0.0001 | **14.14** | **13.42** | 24.59 | **17.24** | **9.08** | 18.21 | 3.50 |
| 0.9 | 0.005 | 0 | 14.80 | 13.67 | **22.79** | 17.54 | 9.31 | 17.82 | 3.53 |
| 0 | 0.01 | 0.0001 | 11.00 | 12.11 | 21.14 | 17.41 | 11.07 | 20.58 | 5.48 |
| 0 | 0.01 | 0 | 10.87 | 12.16 | 21.29 | **17.21** | **10.65** | **20.46** | **5.25** |
| 0 | 0.005 | 0.0001 | 10.21 | 11.86 | 21.96 | 18.24 | 13.22 | 24.39 | 7.03 |
| 0 | 0.005 | 0 | **10.12** | **11.61** | **20.76** | 18.40 | 13.11 | 23.91 | 6.78 |

Table 3: Verification of the effect of momentum with source domains other than ImageNet. The hyperparameters are $n = 256$, $\eta = 0.01$ and $\lambda = 0.0001$. Momentum 0 works better for transferring from iNat-2017 to Birds and from Places-365 to Indoor, compared with the momentum 0.9 counterparts.

| Source domain | m | Indoor | Birds | Dogs | Caltech | Cars | Aircrafts |
| --- | --- | --- | --- | --- | --- | --- | --- |
| iNat-2017 | 0.9 | 30.73 | 14.69 | 24.74 | 20.12 | 11.16 | 19.86 |
| iNat-2017 | 0 | 34.11 | 12.29 | 23.87 | 21.47 | 16.89 | 27.21 |
| Places-365 | 0.9 | 22.19 | 27.72 | 30.84 | 22.53 | 11.06 | 21.27 |
| Places-365 | 0 | 20.16 | 32.17 | 32.47 | 22.60 | 14.67 | 25.29 |

Interestingly, datasets such as Dogs, Caltech, Indoor and Birds are known to have high overlap with the ImageNet dataset, while Cars and Aircrafts have been identified as difficult to benefit from fine-tuning pre-trained ImageNet models (Kornblith et al., 2019). According to Cui et al. (2018), in which the Earth Mover's Distance (EMD) is used to calculate the similarity between ImageNet and other domains, the similarities to Dogs and Birds are 0.619 and 0.563, while the similarities to Cars, Aircrafts and Flowers are 0.560, 0.556 and 0.525. The relative order of similarity to ImageNet is therefore Dogs, Birds, Cars, Aircrafts and Flowers (from most to least similar), which aligns well with the transition of the optimal momentum value from 0.0 to 0.9. Following the same similarity calculation, we also verified that Caltech and Indoor are closer to ImageNet than Cars, Aircrafts and Flowers (see Table 4).

To verify the connection between momentum and domain similarity, we further fine-tune from different source domains such as Places-365 and iNaturalist, which are known to be better source domains than ImageNet for fine-tuning on the Indoor and Birds datasets, respectively (Cui et al., 2018). We may expect that fine-tuning from iNaturalist works better for Birds with $m = 0$ and, similarly, Places for Indoor. Indeed, as shown in Table 3, disabling momentum improves the performance when the source and target domains are similar, such as Places for Indoor and iNaturalist for Birds.

**Small momentum works better for fine-tuning on domains that are close to the source domain** One explanation for the above observations is that, because the Dogs dataset is very close to ImageNet, the pre-trained ImageNet model is expected to be close to the fine-tuned solution on the Dogs dataset. In this case, momentum may not help much, as the gradient directions around the minimum can be rather random and accumulating them into a momentum direction could be meaningless.
Whereas for faraway target domains (e.g., Cars and Aircrafts), where the pre-trained ImageNet model could be much different from the fine-tuned solution, the fine-tuning process is more similar to training from scratch, and large momentum stabilizes the descent directions towards the minimum. An illustration of the difference can be found in Figure 2.

![](images/ffda5705e22b98ae9be27023416535e1a149007fe23b8297289a453fd771831f.jpg)
(a) dissimilar, $m = 0.9$

![](images/b2da9cea7e24d1dffd9900f44bd4de96ba48020c227b9b90d0816bcb0f7399.jpg)
(b) dissimilar, $m = 0$

![](images/8b1eaef42a2957e1fa70cd2be0c7f1ed6fbfe18e4253731f9b1abc210af31a82.jpg)
(c) similar, $m = 0.9$

![](images/625d45edecd235d9c2448a0913b33fc388d3c16e580dd2bfa62bfe7e78f4303a.jpg)
(d) similar, $m = 0.0$

Figure 2: An illustration of the effect of momentum in different fine-tuning scenarios from the loss-landscape perspective. The red point is the pre-trained model and the blue point is the fine-tuned solution. The dashed lines are loss contours. Assuming the step size is fixed, large momentum accelerates convergence when the initialization is far from the minimum ((a) and (b)). On the contrary, large momentum may impede convergence when the initialization is close to the minimum, as shown in (c) and (d).

**Connections to early observations on decreasing momentum** Early work (Sutskever et al., 2013) pointed out that reducing momentum during the final stage of training allows finer convergence, while aggressive momentum prevents it. They recommended reducing momentum from 0.99 to 0.9 in the last 1000 parameter updates, but not disabling it completely. Recent work (Liu et al., 2018; Smith, 2018) showed that a large momentum helps escape saddle points but can hurt the final convergence within the neighborhood of the optima, implying that momentum should be reduced at the end of training. Liu et al.
(2018) find that a larger momentum introduces a higher variance of noise and encourages more exploration at the beginning of optimization, and more aggressive exploitation at the end of training. They suggest that at the final stage of step-size annealing, momentum SGD should use a much smaller step size than vanilla SGD. Applied to fine-tuning, this can be interpreted as follows: if the pre-trained model already lies in the neighborhood of the optimal solution on the target dataset, momentum should be small. Our work provides empirical evidence that disabling momentum can help the final convergence, and fine-tuning on close domains is a good exemplar.

# 3.3 COUPLED HYPERPARAMETERS AND THE VIEW OF EFFECTIVE LEARNING RATE

So far we have examined the effect of momentum by fixing the other hyperparameters and allowing only momentum to change. However, the two difficult scenarios shown in Figure 2 (b) and (c) might also be mitigated by increasing or decreasing the learning rate. That is, hyperparameters are coupled: varying one hyperparameter can change the optimal values of the others, and the optimal value of one hyperparameter often depends on the values of the others in systematic ways. For example, the learning rate is entangled with the batch size, momentum and weight decay. For SGD with momentum there is a notion of effective learning rate (ELR) (Hertz et al., 1991; Smith et al., 2018; Smith & Le, 2018): $\eta' = \eta / (1 - m)$, which was shown to be more closely related to training dynamics and final performance than $\eta$ itself. With other hyperparameters fixed, the ELR with $m = 0.9$ is $10 \times$ higher than with $m = 0.0$, which is probably why we see an increase in the optimal learning rate when momentum is disabled in Figure 1(b) and Appendix A.
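This relationship is easy to see with a constant gradient. The following sketch (ours, not from the paper; plain heavy-ball momentum for simplicity) shows that $\eta = 0.01$ with $m = 0.9$ and $\eta = 0.1$ with $m = 0$ produce the same steady-state step size:

```python
# With a constant gradient g, the momentum buffer v_{t+1} = m*v_t - eta*g
# converges to -eta*g/(1-m), so the eventual per-step parameter change is
# (eta/(1-m)) * g, i.e., the effective learning rate times the gradient.
def steady_step_size(eta, m, g=1.0, iters=1000):
    v = 0.0
    for _ in range(iters):
        v = m * v - eta * g
    return -v  # steady-state per-step parameter change

# eta=0.01 with m=0.9 and eta=0.1 with m=0 share the same ELR of 0.1:
assert abs(steady_step_size(0.01, 0.9) - steady_step_size(0.1, 0.0)) < 1e-9
```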
![](images/c2b7250e0b2943e07b991a710781fcd389110e8742521335177fbd52a65f7d0e.jpg)
Figure 3: The effect of momentum with and without fixing the ELR $\eta^{\prime}$. When $\eta^{\prime}$ is the same, momentum 0 and 0.9 are almost equivalent. If $\eta^{\prime}$ is allowed to change, there is almost no difference between the optimal performance obtained with different $m$.

**It is the effective learning rate that matters for fine-tuning performance** Because hyperparameters are coupled, looking at the performance with only one hyperparameter varied may give a misleading understanding of its effect. Therefore, to examine the effect of momentum, we should compare the best results obtainable with and without momentum, with the other hyperparameters sufficiently explored. We re-examine the previous experiments that demonstrated the importance of momentum tuning, this time holding the ELR $\eta' = \eta / (1 - m)$ fixed instead of simply fixing the learning rate $\eta$. Figure 3 shows that when $\eta'$ is constant, the best performance obtained with $m = 0.9$ and with $m = 0$ is almost identical when the other hyperparameters are allowed to change. However, different ELRs do result in different performance, which indicates that the ELR is what matters for the best performance. This explains why the common practice of changing only the learning rate generally works: changing momentum can produce the same effect, since both change the ELR. In fact, as long as the initial learning rate is small enough, one can always search over momentum, which acts as an amplifier that enlarges the ELR by a factor of $1 / (1 - m)$. Therefore, momentum determines the search range of the learning rate.

**Optimal ELR depends on the similarity between source domain and target domain** Now that we have shown the ELR is critical for fine-tuning performance, we are interested in the factors that determine the optimal ELR for a given task.
Previous work (Smith & Le, 2018) found that there is an optimal ELR which maximizes the test accuracy. However, those observations were based only on training small datasets (e.g., CIFAR-10) from scratch; the relationship between the ELR and domain similarity, especially for fine-tuning, remains unexplored. To examine this, we search for the best ELR on each fine-tuning task and report in Fig. 4 the best validation error obtained by each ELR while allowing the other hyperparameters to change. The results show that the optimal ELR depends on both the source domain and the target domain. As shown in Fig. 4 (a-c), the optimal ELRs for Dogs/Caltech/Indoor are much smaller than those for Aircrafts/Flowers/Cars when fine-tuning from an ImageNet pre-trained model. Similar observations hold for DenseNets and MobileNet: though the optimal ELR values differ, the relative order of domain similarity is consistent and architecture agnostic. We can also see that a smaller ELR works better when the source and target domains are similar, such as Dogs for ImageNet and Birds for iNat2017 (Fig. 4 (a, d-e)). Interestingly, the optimal ELR for training from scratch is much larger and very similar across different target datasets, which indicates that the distance from a random initialization is uniformly similar across different target datasets.

![](images/8c5301fcb72344eca42317a9e2376c61a1ffa0f4a5c67520d1895ad378220aad.jpg)
(a) ImageNet, ResNet-101

![](images/ee81c145190582ede0d753fb17467d6df5ac7e99258f2eeacb4fa0673b57a7ee.jpg)
(b) ImageNet, DenseNet-121

![](images/e6fbc393dbf2ceab3d81c9422e570dfde67c335691460f81132ecf13a855703b.jpg)
(c) ImageNet, MobileNet1.0

![](images/379a684789034f172ee52610e39bdcadf45c27c5d681831623ed8ea708dc14dc.jpg)
(d) iNat2017, ResNet-101

Figure 4: The best validation errors obtained with different ELRs for different source-target domain pairs. Note that the optimal ELR for each target dataset falls in the interior of the search space.
Each point in (a-e) is the lowest validation error obtained with different weight decay values while the ELR is fixed. The first row suggests that the connection between the optimal ELR and domain similarity is architecture agnostic. The second row verifies that the optimal ELR depends on the similarity between the source and target domains.

![](images/ac0b6c4d1b4d6ce43218b07a51471fda7c22f5d408c674631b78aab0641c95ba.jpg)
(e) Places365, ResNet-101

![](images/d6cdd4558ad60056eeac426ef49f9e28f14542ec3bd6f8ed24c49a6fb8976935.jpg)
(f) Scratch, ResNet-101

Table 4: The connection between domain similarity and the optimal ELR. The values in the second column are provided by Cui et al. (2018), in which a JFT pre-trained ResNet-101 was used as the feature extractor; since neither the pre-trained model nor the dataset is released, we cannot calculate this metric for other datasets. In the other columns, we calculate domain similarity using the ImageNet pre-trained model as the feature extractor. The optimal ELRs are also listed, and correspond to the values in Fig. 4.

| Target | JFT sim | ImageNet ResNet-101 (sim / η′) | ImageNet DenseNet-121 (sim / η′) | ImageNet MobileNet (sim / η′) | iNat2017 ResNet-101 (sim / η′) | Places365 ResNet-101 (sim / η′) |
| --- | --- | --- | --- | --- | --- | --- |
| Dogs | 0.619 | 0.862 / 0.001 | 0.851 / 0.01 | 0.852 / 0.01 | 0.854 / 0.05 | 0.856 / 0.5 |
| Caltech | - | 0.892 / 0.005 | 0.881 / 0.01 | 0.878 / 0.01 | 0.871 / 0.1 | 0.888 / 0.05 |
| Indoor | - | 0.856 / 0.01 | 0.850 / 0.05 | 0.839 / 0.01 | 0.843 / 0.1 | 0.901 / 0.05 |
| Birds | 0.563 | 0.860 / 0.05 | 0.842 / 0.05 | 0.849 / 0.1 | 0.901 / 0.005 | 0.861 / 0.5 |
| Cars | 0.560 | 0.845 / 0.5 | 0.831 / 0.5 | 0.830 / 1.0 | 0.847 / 1.0 | 0.864 / 1.0 |
| Aircrafts | 0.556 | 0.840 / 1.0 | 0.817 / 0.1 | 0.831 / 1.0 | 0.846 / 0.5 | 0.853 / 0.5 |
| Flowers | 0.525 | 0.844 / 0.1 | 0.821 / 0.5 | 0.825 / 0.1 | 0.879 / 0.1 | 0.851 / 1.0 |

**Optimal ELR selection based on domain similarity** We have made qualitative observations about the relationship between domain similarity and the optimal ELR. A quantitative characterization of this relationship could narrow the hyperparameter search ranges for HPO, or even eliminate HPO by accurately predicting hyperparameters. We follow the domain similarity calculation of Cui et al. (2018) and recalculate similarity scores for all source-target domain pairs. Note that the original calculation in (Cui et al., 2018) uses pre-trained JFT (Sun et al., 2017) models as the feature extractor, which are not publicly available; we instead use the ImageNet pre-trained model or the source model as the feature extractor. As shown in Table 4, there is a good correlation between the domain similarity score and the scale of the optimal ELR: generally, the more similar the two domains, the smaller the optimal ELR. Though the optimal ELR does not strictly track the similarity score, the score gives a reasonable prediction of the scale of the optimal ELR, such as [0.001, 0.01], [0.01, 0.1] or [0.1, 1.0], and can therefore reduce the search space. Based on this correlation, a simple strategy for optimal ELR selection can be developed for a frequently used source model: calculate domain similarities and perform exhaustive hyperparameter searches for a few reference datasets, including both similar and dissimilar ones. Then, given a new dataset to fine-tune on, calculate its domain similarity, compare it with the scores of the reference datasets, and choose the ELR range of the reference dataset with the closest domain similarity.

**Weight Decay and Learning Rate** The relationship between weight decay and the effective learning rate has recently been well studied (van Laarhoven, 2017; Zhang et al., 2018; Loshchilov & Hutter, 2018).
It was shown that the effect of weight decay on models with BN layers is equivalent to increasing the ELR by shrinking the weight scales, i.e., $\eta' \sim \eta / \| \theta \|_2^2$, and that if an optimal effective learning rate exists, the optimal weight decay value $\lambda$ is inversely related to the optimal learning rate $\eta$. The 'effective' weight decay is $\lambda' = \lambda / \eta$. We show in Figure 5 that the optimal effective weight decay is also correlated with domain similarity.

![](images/0393937aaa867300ac3dd1230fda9a9938364626c7593457b15f9871badc67ea.jpg)
(a) ImageNet

![](images/c294a23b95c8a31e9b192e0c8163f2f0a65b4756c5261084412168b50218bde4.jpg)
(b) iNat2017

![](images/c9f4cd78b0df40cfae5a0a64088c419b0b9622a76187a33b0a11591e16a26081.jpg)
(c) Places-365

Figure 5: The relationship between the optimal effective weight decay and the source dataset. The optimal effective weight decay is larger when the source domain is similar to the target domain.

# 3.4 THE CHOICE OF REGULARIZATION

$L_{2}$ regularization, or weight decay, is widely used for constraining model capacity (Hanson & Pratt, 1989; Krogh & Hertz, 1992). Recently, Li et al. (2018; 2019) pointed out that standard $L_{2}$ regularization, which drives the parameters towards the origin, is not adequate for transfer learning. To retain the knowledge learned by the pre-trained model, reference-based regularization was used to regularize the distance between the fine-tuned weights and the pre-trained weights, so that the fine-tuned model is not too different from the initial model. Li et al. (2018) propose the $L_{2}$-SP norm, i.e., $\frac{\lambda_1}{2} \| \theta' - \theta_0 \|_2^2 + \frac{\lambda_2}{2} \| \theta'' \|_2^2$, where $\theta'$ refers to the part of the network shared with the source network, and $\theta''$ refers to the novel part, e.g., the last layer with a different number of neurons.
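In code, the difference between the two regularizers is small but meaningful; the sketch below (made-up parameter shapes, not the authors' implementation) makes the contrast concrete:

```python
import numpy as np

# Plain L2 decays all weights toward the origin; L2-SP decays the shared
# part theta' toward the pre-trained weights theta_0 and only the novel
# part theta'' (e.g., the new last layer) toward the origin.
def l2(theta, lam):
    return 0.5 * lam * np.sum(theta ** 2)

def l2_sp(theta_shared, theta0, theta_new, lam1, lam2):
    return (0.5 * lam1 * np.sum((theta_shared - theta0) ** 2)
            + 0.5 * lam2 * np.sum(theta_new ** 2))

theta0 = np.ones(4)       # pre-trained (shared) weights
new_head = np.zeros(2)    # randomly re-initialized last layer
# Staying at the initialization is free under L2-SP but penalized under L2:
assert l2_sp(theta0, theta0, new_head, 0.01, 0.01) == 0.0
assert l2(theta0, 0.01) > 0.0
```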
While the motivation is intuitive, there are several issues with adopting reference-based regularization for fine-tuning:

- Many applications actually fine-tune on target domains that are quite different from the source domain, such as fine-tuning ImageNet models for medical imaging (Mormont et al., 2018; Raghu et al., 2019). The fine-tuned model does not necessarily have to be close to the initial model.
- The scale invariance introduced by Batch Normalization (BN) (Ioffe & Szegedy, 2015) layers enables models with different parameter scales to function identically, i.e., $f(\theta) = f(\alpha \theta)$. Therefore, even when $L_{2}$ regularization drives $\| \theta \|_2^2$ towards zero, the model can still have the same functionality as the initial model. Conversely, a model can be functionally different from the initial model even when its $L_{2}$-SP norm is small.
- $L_{2}$-SP regularization constrains $\theta'$ to be close to $\theta_0$, so that $\| \theta \|_2^2$ is relatively stable in comparison with $L_{2}$ regularization. Given that the ELR is approximately proportional to $\eta / \| \theta \|_2^2$ and a smaller ELR is beneficial for fine-tuning from similar domains, this may explain why $L_{2}$-SP provides better performance. If this is true, then simply decreasing the initial ELR should make plain $L_{2}$ regularization function the same.

To examine these conjectures, we revisited the work of Li et al. (2018) with additional experiments. To show the effectiveness of the $L_{2}$-SP norm, the authors conducted experiments on datasets such as Dogs, Caltech and Indoor, which are all close to the source domain (ImageNet or Places-365). We extend their experiments by fine-tuning on both "similar" and "dissimilar" datasets, including Birds, Cars, Aircrafts and Flowers, with both $L_{2}$ and $L_{2}$-SP regularization (details in Appendix D). For a fair comparison, we perform the same hyperparameter search for both methods.
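The scale invariance in the second point can be checked numerically. Below is a minimal sketch with one linear layer followed by parameter-free batch normalization (arbitrary shapes; ours, not from the paper):

```python
import numpy as np

def batchnorm(z, eps=1e-5):
    # batch normalization over the batch dimension, without affine params
    return (z - z.mean(axis=0)) / np.sqrt(z.var(axis=0) + eps)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3))    # layer weights theta
x = rng.normal(size=(32, 8))   # a minibatch of inputs
out = batchnorm(x @ W)
out_scaled = batchnorm(x @ (10.0 * W))  # same layer with alpha = 10
# BN removes the scale alpha, so the outputs coincide: f(theta) = f(alpha*theta)
assert np.allclose(out, out_scaled, atol=1e-4)
```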
As expected, Table 5 shows that $L_{2}$ regularization is very competitive with $L_{2}$-SP on Birds, Cars, Aircrafts and Flowers, which indicates that reference-based regularization may not generalize well for fine-tuning on dissimilar domains.

Table 5: The average class error of (Li et al., 2018) and the extension of their experiments to "dissimilar" datasets. The "with HPO" rows and the results on Birds, Cars, Flowers and Aircrafts are our experimental results. Note that the original Indoor result was fine-tuned from Places-365, while we fine-tune only from ImageNet pre-trained models.

| Method | Dogs | Caltech | Indoor | Birds | Cars | Flowers | Aircrafts |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $L_2$ (Li et al., 2018) | 18.6 | 14.7 | 20.4 | - | - | - | - |
| $L_2$-SP (Li et al., 2018) | 14.9 | 13.6 | 15.8 | - | - | - | - |
| $L_2$ with HPO | 16.79 | 14.98 | 23.00 | 22.51 | 10.10 | 5.70 | 13.03 |
| $L_2$-SP with HPO | 13.86 | 14.45 | 21.77 | 22.32 | 9.59 | 5.28 | 13.31 |

We also checked the change of the regularization terms during training for both methods, as well as their best hyperparameters. As shown in Figure 6, $L_{2}$ regularization usually decreases the weight norm more aggressively, depending on the value of $\lambda$, while $L_{2}$-SP regularization keeps the norm relatively unchanged. We can see that the optimal learning rate for $L_{2}$ regularization is mostly smaller than that for $L_{2}$-SP, which may compensate for the decreased weight norm, i.e., the increased ELR. Interestingly, for the Dogs dataset, both regularization terms grow much larger after a few iterations and then become stable, which means that constraining the weights to stay close to the initialization is not necessarily the reason why $L_{2}$-SP works, even for close domains. This also seems to contradict the previous finding (Zhang et al., 2018) that weight decay acts by increasing the ELR through decreasing weight norms. However, it might be reasonable, as a large norm actually decreases the ELR, which could be helpful due to the close domain similarity between Dogs and ImageNet.

![](images/5ef66b7fa26bd99a5ebf83c648be1e2706e98d862ce199ea39dde77becf7c8a2.jpg)
(a) normalized $L_{2}$ norm

![](images/6f69064295043f53a300d12bd541e29fc65ab418799f7ebceb0fb7fabb8a1e27.jpg)
(b) normalized $L_{2}$-SP norm

Figure 6: The normalized $L_{2}$ norm and $L_{2}$-SP norm during training. The $y$-axis is the relative change of the regularization term in comparison to its initial value, i.e., $\| \theta_t\| _2^2 /\| \theta_0\| _2^2$ for the $L_{2}$ norm and $(\lambda_1\| \theta_t' - \theta_0\| _2^2 + \lambda_2\| \theta_t''\| _2^2) / (\lambda_2\| \theta_0''\| _2^2)$ for the $L_{2}$-SP norm. Optimal hyperparameters are also given in the legend. Note that this experiment uses batch size 64 instead of 256, which results in a smaller optimal learning rate compared with the previous results.

# 4 DISCUSSION

The two extreme ways of selecting hyperparameters, performing an exhaustive hyperparameter search or reusing ad-hoc hyperparameters from scratch training, can be either too computationally expensive or yield inferior performance. Different from training from scratch, where the default hyperparameter settings may work well for random initialization, the choice of hyperparameters for fine-tuning is not only dataset dependent but also influenced by the similarity between the target and source domains. The rarely tuned momentum value can also improve or impede performance when the target and source domains are close, given an insufficiently searched learning rate. These observations connect with previous theoretical work on decreasing momentum at the end of training and on the effective learning rate. We further identify that the optimal effective learning rate correlates with the similarity between the source and target domains. With this understanding, one can significantly reduce the hyperparameter search space. We hope these findings are one step towards better and more efficient hyperparameter selection for fine-tuning.

# ACKNOWLEDGMENTS

The authors would like to thank all anonymous reviewers for their valuable feedback.

# REFERENCES

Alessandro Achille, Michael Lam, Rahul Tewari, Avinash Ravichandran, Subhransu Maji, Charless Fowlkes, Stefano Soatto, and Pietro Perona. Task2vec: Task embedding for meta-learning. arXiv preprint arXiv:1902.03545, 2019.
Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE T-PAMI, 40(4):834-848, 2017.
Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. Autoaugment: Learning augmentation policies from data. arXiv preprint arXiv:1805.09501, 2018.
Yin Cui, Yang Song, Chen Sun, Andrew Howard, and Serge Belongie.
Large scale fine-grained categorization and domain-specific transfer learning. In CVPR, 2018.
Jia Deng, Alexander C Berg, Kai Li, and Li Fei-Fei. What does classifying more than 10,000 image categories tell us? In ECCV, 2010.

Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In ICML, 2014.
Weifeng Ge and Yizhou Yu. Borrowing treasures from the wealthy: Deep transfer learning through selective joint fine-tuning. In CVPR, 2017.
Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
Gabriel Goh. Why momentum really works. Distill, 2(4):e6, 2017.
Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
Gregory Griffin, Alex Holub, and Pietro Perona. Caltech-256 object category dataset. 2007.
Stephen Jose Hanson and Lorien Y. Pratt. Comparing biases for minimal network construction with back-propagation. In NIPS, 1989.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In ECCV, 2016a.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016b.
Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In ICCV, 2017.
Kaiming He, Ross Girshick, and Piotr Dollár. Rethinking imagenet pre-training. In ICCV, 2019.
John Hertz, A Krogh, and Richard G Palmer. Introduction to the theory of neural computation. 1991.
Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications.
arXiv preprint arXiv:1704.04861, 2017. +Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In CVPR, 2017. +Minyoung Huh, Pulkit Agrawal, and Alexei A Efros. What makes imagenet good for transfer learning? arXiv preprint arXiv:1608.08614, 2016. +Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015. +Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao, and Li Fei-Fei. Novel dataset for fine-grained image categorization. In First Workshop on Fine-Grained Visual Categorization, CVPR, 2011. +Simon Kornblith, Jonathon Shlens, and Quoc V Le. Do better imagenet models transfer better? In CVPR, 2019. +Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In 4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13), Sydney, Australia, 2013. +Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012. +Anders Krogh and John A Hertz. A simple weight decay can improve generalization. In NIPS, 1992. +Xingjian Li, Haoyi Xiong, Hanchao Wang, Yuxuan Rao, Liping Liu, and Jun Huan. Delta: Deep learning transfer using feature map with attention for convolutional networks. In ICLR, 2019. + +Xuhong Li, Yves Grandvalet, and Franck Davoine. Explicit inductive bias for transfer learning with convolutional networks. In ICML, 2018. +Tianyi Liu, Zhehui Chen, Enlu Zhou, and Tuo Zhao. Toward deeper understanding of nonconvex stochastic optimization with momentum using diffusion approximations. arXiv preprint arXiv:1802.05155, 2018. +Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In ICLR, 2018. +Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens van der Maaten. 
Exploring the limits of weakly supervised pretraining. In ECCV, 2018.
S. Maji, J. Kannala, E. Rahtu, M. Blaschko, and A. Vedaldi. Fine-grained visual classification of aircraft. Technical report, 2013.
Romain Mormont, Pierre Geurts, and Raphaël Marée. Comparison of deep transfer learning strategies for digital pathology. In CVPR Workshops, 2018.
Yurii E Nesterov. A method for solving the convex programming problem with convergence rate $O(1 / k^2)$ . In Dokl. Akad. Nauk SSSR, volume 269, pp. 543-547, 1983.
Jiquan Ngiam, Daiyi Peng, Vijay Vasudevan, Simon Kornblith, Quoc V Le, and Ruoming Pang. Domain adaptive transfer learning with specialist models. arXiv preprint arXiv:1811.07056, 2018.
Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In Indian Conference on Computer Vision, Graphics & Image Processing. IEEE, 2008.
Boris T Polyak. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 4(5):1-17, 1964.
Maithra Raghu, Chiyuan Zhang, Jon Kleinberg, and Samy Bengio. Transfusion: Understanding transfer learning with applications to medical imaging. In NIPS, 2019.
Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In NIPS, 2015.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 115(3):211-252, 2015.
Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. Cnn features off-the-shelf: an astounding baseline for recognition. In CVPR workshops, pp. 806-813, 2014.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
Leslie N Smith.
Cyclical learning rates for training neural networks. In WACV, 2017.
Leslie N Smith. A disciplined approach to neural network hyper-parameters: Part 1-learning rate, batch size, momentum, and weight decay. arXiv preprint arXiv:1803.09820, 2018.
Leslie N Smith and Nicholay Topin. Super-convergence: Very fast training of neural networks using large learning rates. In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, volume 11006, pp. 1100612. International Society for Optics and Photonics, 2019.
Samuel L Smith and Quoc V Le. A bayesian perspective on generalization and stochastic gradient descent. In ICLR, 2018.
Samuel L Smith, Pieter-Jan Kindermans, Chris Ying, and Quoc V Le. Don't decay the learning rate, increase the batch size. In ICLR, 2018.
Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In CVPR, 2017.

Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In ICML, 2013.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR, 2015.
Grant Van Horn, Oisin Mac Aodha, Yang Song, Yin Cui, Chen Sun, Alex Shepard, Hartwig Adam, Pietro Perona, and Serge Belongie. The inaturalist species classification and detection dataset. In CVPR, 2018.
Twan van Laarhoven. L2 regularization versus batch and weight normalization. In NIPS, 2017.
C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011.
Junyuan Xie, Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, and Mu Li. Bag of tricks for image classification with convolutional neural networks. arXiv preprint arXiv:1812.01187, 2018.
+Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In NIPS, 2014. +Guodong Zhang, Chaoqi Wang, Bowen Xu, and Roger Grosse. Three mechanisms of weight decay regularization. arXiv preprint arXiv:1810.12281, 2018. +Jian Zhang and Ioannis Mitliagkas. Yellowfin and the art of momentum tuning. arXiv preprint arXiv:1706.03471, 2017. +Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. Places: A 10 million image database for scene recognition. IEEE T-PAMI, 40(6):1452-1464, 2018. + +# A THE EFFECTIVENESS OF MOMENTUM + +Searching for Optimal Momentum To check the effectiveness of momentum on fine-tuning, we can search the best momentum values for fine-tuning with fixed learning rate but different weight decay and batch size. Taking Birds dataset as an example, Figure 7 provides the convergence curves for the results shown in Figure 1(a), which shows the learning curves of fine-tuning with 6 different batch sizes and weight decay combinations. Zero momentum outperforms the nonzero momentum in 5 out of 6 configurations. + +![](images/83ce9b8eb2e8486f9f3220df367d96d6767d319d102541b36d37a76285ba71f7.jpg) +(a) $n = 256, \lambda = 0.0005$ + +![](images/54c5763a66d28ae9c8f3924b23f00baf2a5a4b9af7ff41e91a8c155c3a087f3e.jpg) +(b) $n = 256, \lambda = 0.0001$ + +![](images/fbb46fb410d86ecaed222924a8afecaf9326e948011ed2a789367ef34c3f55bd.jpg) +(c) $n = 256, \lambda = 0.0$ + +![](images/0f2989fa3955216aea47e8efb830f84733d6f406741a804a471d5735eb4f824f.jpg) +(d) $n = 16, \lambda = 0.0005$ +Figure 7: Searching for the optimal momentum on Birds dataset with fixed learning rate and weight decays. The solid lines are training errors and the dashed lines are validation errors. 
![](images/3719cc64ab5bf06e64c03d00f57507e67be4644015a42ae81eea713d5f3375.jpg)
(e) $n = 16$ , $\lambda = 0.0001$

![](images/c650989fe6e2e3e78fafa4f0d941045d39ae5533210a0066368cc27100723adb.jpg)
(f) $n = 16$ , $\lambda = 0.0$

Effective learning rate increases after disabling momentum. Figure 8 compares the performance with and without momentum on the Dogs dataset over a range of learning rates. Note that the learning rate with similar performance generally increases $10\mathrm{x}$ after changing $m$ from 0.9 to 0.0, which is coherent with the rule of effective learning rate $\eta' = \eta / (1 - m)$ . The same observations can be made on other datasets, as shown in Figure 9.

![](images/be455e12458575155fc788dadecfbae55377407758d7c99ec9c2d758dd71e64f.jpg)
(a) Dogs, $m = 0.9$
Figure 8: The effect of momentum when the learning rate is allowed to change. The learning rate for the best performance increases $10\mathrm{x}$ after changing $m$ from 0.9 to 0.0, which is coherent with the rule of effective learning rate. Note that weight decay $\lambda$ is fixed at 0.0001.
+ +![](images/83d638a6577cb05f4a50529195bfcaa5d859e783709d5b53f599d1220a780473.jpg) +(b) Dogs, $m = 0.0$ + +![](images/3ee616171577f02dc06008e142cb90c411855e90728b3556d1e4e0b7f027f538.jpg) +(a) Caltech, $m = 0.9$ + +![](images/33c61fefb31c68cac78d681751396165087aa7ec582225111f910870947064be.jpg) +(b) Caltech, $m = 0.0$ + +![](images/645ba1b904c73e595d237e0d8ea438df387bb8474c86673335276dcee956ca51.jpg) +(c) Indoor, $m = 0.9$ + +![](images/dc06aa75044cb39d72ac52436dc7c45818772a2468daefe85922c2deb51677cd.jpg) +(d) Indoor, $m = 0.0$ + +![](images/19137327f2e7fceae5f63703090240b78eef239b39547f8a2972deeff3b2897d.jpg) +(e) Birds, $m = 0.9$ + +![](images/7ddd4cfb45d952275fe8a4001de585a0adb063a326a584b2b09cdd7272ddc612.jpg) +(f) Birds, $m = 0.0$ + +![](images/3d34f6f8e523a26d9878790eeed7c648f387585ea39669324c1ad0ae70104179.jpg) +(g) Cars, $m = 0.9$ + +![](images/ceadea24e4a999e84e4d28b448d1717dd92c333ab04d5d95fb0118963d00dc12.jpg) +(h) Cars, $m = 0.0$ + +![](images/6ff35e3dbb1f0fbc66564a39c668f2079d6a920cfa1f6e2245fd341345c6706b.jpg) +(i) Aircrafts, $m = 0.9$ +Figure 9: The effect of momentum when learning rate is allowed to change (Figure 8 continued). The learning rate for the best performance increases $10\mathrm{x}$ after changing $m$ from 0.9 to 0.0, which is coherent with the rule of effective learning rate. + +![](images/783f8dd669842eb79128863b21ffce48888c38bc6936f4663ab440d36410948b.jpg) +(j) Aircrafts, $m = 0.0$ + +![](images/ff7271a41cd4e2399a05d8391d022371ade3233ce5c78d44d5aaaa28f4ef5e4f.jpg) +(k) Flowers, $m = 0.9$ + +![](images/42913ff9fe10e3e225952528df176b27a7a58b8bed5dc11dc4c94b159553d851.jpg) +(1) Flowers, $m = 0.0$ + +# B DOMAIN SIMILARITY + +The domain similarity calculation based on Earth Mover Distance (EMD) is introduced in the section 4.1 of (Cui et al., 2018). Here we briefly introduce the steps. In (Cui et al., 2018), the authors first train ResNet-101 on the large scale JFT dataset (Sun et al., 2017) and use it as a feature extractor. 
They extract features from the penultimate layer of the model for each image in the training sets of the source and target domains. For ResNet-101, the length of the feature vector is 2048. The features of images belonging to the same category are averaged: $g(s_i)$ denotes the average feature vector of the $i$ -th label in the source domain $S$ , and similarly, $g(t_j)$ denotes the average feature vector of the $j$ -th label in the target domain $T$ . The distance between the averaged features of two labels is $d_{i,j} = \| g(s_i) - g(t_j)\|$ . Each label is associated with a weight $w \in [0,1]$ corresponding to the percentage of images with this label in the dataset. So the source domain $S$ with $m$ labels and the target domain $T$ with $n$ labels can be represented as $S = \{(s_i, w_{s_i})\}_{i=1}^m$ and $T = \{(t_j, w_{t_j})\}_{j=1}^n$ . The EMD between the two domains is defined as

$$
d(S, T) = \mathrm{EMD}(S, T) = \frac{\sum_{i=1, j=1}^{m, n} f_{i,j} d_{i,j}}{\sum_{i=1, j=1}^{m, n} f_{i,j}} \tag{3}
$$

where the optimal flow $f_{i,j}$ corresponds to the least amount of total work, obtained by solving the EMD optimization problem. The domain similarity is then defined as

$$
\operatorname{sim}(S, T) = e^{-\gamma d(S, T)} \tag{4}
$$

where $\gamma$ is 0.01. Note that the domain similarity values do not span the full range from 0 to 1.

Due to the unavailability of the large-scale JFT dataset (300x larger than ImageNet) and its pre-trained ResNet-101 model, we cannot use it for extracting features for new datasets, such as Caltech-256 and MIT67-Indoor. Instead of using this powerful feature representation, we use our pre-trained ImageNet model (ResNet-101) as the feature extractor.
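As a concrete illustration, the computation in Eqs. (3) and (4) can be sketched for the special case of uniform label weights and equally many source and target labels, where the optimal flow reduces to a one-to-one assignment that can be brute-forced for tiny examples (a simplified sketch; real implementations solve the general transportation problem):

```python
import itertools
import math
import numpy as np

def domain_similarity(src_feats, tgt_feats, gamma=0.01):
    """EMD-based domain similarity (Cui et al., 2018), simplified to
    uniform label weights and m == n labels, so the optimal flow is a
    one-to-one assignment (found here by brute force).

    src_feats, tgt_feats: (n, d) arrays of per-label average features g(.).
    """
    n = len(src_feats)
    # Pairwise distances d_{i,j} = ||g(s_i) - g(t_j)||.
    d = np.linalg.norm(src_feats[:, None, :] - tgt_feats[None, :, :], axis=-1)
    # Eq. 3 with equal weights: minimize the mean matched distance.
    dist = min(np.mean([d[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))
    return math.exp(-gamma * dist)  # Eq. 4
```

With identical source and target label features the distance is zero and the similarity is exactly 1, which is the upper bound of Eq. (4).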
Table 4 compares the domain similarities calculated with different pre-trained models, and we can see some consistent patterns across architectures: e.g., the 1st and 2nd highest similarity scores belong to Caltech and Dogs regardless of the architecture; the 3rd and 4th highest similarity scores refer to Birds and Indoor; the most dissimilar datasets are Cars, Aircrafts and Flowers, though their relative orders are not exactly the same. Besides using a fixed feature extractor, an alternative is to use the source domain model directly as the feature extractor for both the source and target domains, which may capture the transfer learning process more precisely than a uniform feature extractor.

# C THE EFFECTIVENESS OF BN MOMENTUM

Kornblith et al. (2019) conducted extensive fine-tuning experiments with different hyperparameters. One observation they made is that the momentum parameter of the BN layer is essential for fine-tuning. They found it useful to decrease the BN momentum parameter from its ImageNet value to $\max(1 - 10/s, 0.9)$ , where $s$ is the number of steps per epoch. This changes the default BN momentum value (0.9) only when $s$ is larger than 100, which with batch size 256 requires a dataset larger than 25.6K images. The largest dataset used in our experiments is Caltech-256, which is 15K, so this strategy is not applicable.

We further validate the effect of BN momentum by performing a study similar to the one for ELR. The goal is to identify whether there is an optimal BN momentum for a given task. For each dataset, we fine-tune the pre-trained model using the previously obtained best hyperparameters and only vary BN momentum. In addition to the default value 0.9, we also set it to 0.0, 0.95 and 0.99. The rationale is that if BN momentum is a critical hyperparameter, we should expect significant performance differences when the value deviates from the optimum.
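For reference, the BN-momentum heuristic of Kornblith et al. (2019) discussed above can be written as a small helper (a sketch; `dataset_size` and `batch_size` are our illustrative parameter names):

```python
def bn_momentum(dataset_size: int, batch_size: int = 256) -> float:
    """BN momentum schedule from Kornblith et al. (2019):
    max(1 - 10/s, 0.9), where s is the number of steps per epoch."""
    steps_per_epoch = dataset_size // batch_size
    if steps_per_epoch <= 0:
        return 0.9
    return max(1.0 - 10.0 / steps_per_epoch, 0.9)
```

For a 15K-image dataset at batch size 256 this returns the default 0.9, which is why the heuristic never fires on our benchmarks.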
As shown in Figure 10, $m_{bn} = 0.99$ slightly improves the performance on some datasets; however, there is no significant performance difference among values greater than 0.9. One hypothesis is that similar domains share similar BN parameters and statistics, so BN momentum may affect how these parameters adapt. More investigation is still needed to fully understand its effectiveness.

![](images/c588b421b3bb07e0585e9c9aba3ee831d8ca426a1c0f2ff91c77c201123c6e5d.jpg)
Figure 10: Performance of different BN momentum values for each dataset with the existing optimal hyperparameters.

# D EXPERIMENTAL SETTINGS FOR COMPARISON OF $L_{2}$ AND $L_{2}$ -SP

The experiments in Section 3.4 are based on the code5 provided by Li et al. (2018). The base network is the ImageNet pre-trained ResNet-101-V1. The model is fine-tuned with batch size 64 for 9000 iterations, and the learning rate is decayed once at iteration 6000. Following the original setting, we use momentum 0.9. We performed grid search on learning rate and weight decay, with the ranges $\eta \in \{0.02, 0.01, 0.005, 0.001, 0.0001\}$ and $\lambda_1 \in \{0.1, 0.01, 0.001, 0.0001\}$ , and report the best average class error (1 - average accuracy) for both methods. For the $L_2$ -SP norm, we follow the authors' setting and use a constant $\lambda_2 = 0.01$ . Different from the original setting for $L_2$ regularization, we set $\lambda_2 = \lambda_1$ to simulate the normal $L_2$ norm.

# E DATA AUGMENTATION

Data augmentation is an important way of increasing data quantity and diversity to make models more robust. It is even more critical for transfer learning with few instances. The effect of data augmentation can be viewed as a form of regularization, and the choice of data augmentation can also be viewed as a hyperparameter.
Most widely used data augmentation methods have verified their effectiveness on training ImageNet models, such as random mirror flipping, random rescaled cropping6, color jittering, etc. (Szegedy et al., 2015; Xie et al., 2018).

Do these methods transfer to fine-tuning on other datasets? Here we compare three data augmentation settings under different momentum values: 1) random resized cropping: our default data augmentation; 2) random cropping: the same as the standard data augmentation except that we use random cropping with a fixed size; 3) random flip: simply random horizontal flipping. The training and validation errors of fine-tuning with different data augmentation strategies and hyperparameters are shown in Figure 11 and Figure 12.

![](images/18c381c025f17cc24c83c1be9328c4093fb14da7eddbe02c3ade89a0917cc9b6.jpg)
(a) Dogs, $m = 0.9$

![](images/4c044889cd5eb2e06c754dcc232c1a27847aad4996ec8ca8b7e011e03fa41078.jpg)
(b) Aircrafts, $m = 0.9$

![](images/aaf673bd5904cb41e9c347a67f646ada848cdff81cac1752bcf2c7d197ae4048.jpg)
(c) Flowers, $m = 0.9$

![](images/fa474532ee6eaf50c921f67891da075718184250082faee625097bb1c1f0a043.jpg)
(d) Dogs, $m = 0.0$

![](images/6b4587230c4c2a0472ca9db900a575ed080b0495eac75e655ba592783adb848b.jpg)
(e) Aircrafts, $m = 0.0$

![](images/811a505dfb3439d996daef4928ab4bb4668eb10b71d117597ecacedc14bcc1b4.jpg)
(f) Flowers, $m = 0.0$
Figure 11: Fine-tuning with different data augmentation methods and hyperparameters. Dashed curves are validation errors. Strong data augmentation is harder to train: it converges slowly and needs more epochs before its advantage shows on datasets such as Aircrafts. Simple data augmentation (red curves) converges much faster in training error. Strong data augmentation (blue curves) overfits the Dogs dataset with the default hyperparameters but performs well with $m = 0$ .
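The "random rescaled cropping" used as our default augmentation can be sketched as follows (a simplified version of the parameter sampling in torchvision-style `RandomResizedCrop`; the exact `scale`/`ratio` ranges here are common defaults, not values stated in the paper):

```python
import math
import random

def random_resized_crop_params(height, width, scale=(0.08, 1.0), ratio=(3/4, 4/3)):
    """Sample a crop box (top, left, h, w) with random area and aspect ratio.

    The crop is later resized to the training resolution; falling back to a
    center crop if no valid box is found after a few attempts.
    """
    for _ in range(10):
        area = height * width * random.uniform(*scale)
        log_lo, log_hi = math.log(ratio[0]), math.log(ratio[1])
        aspect = math.exp(random.uniform(log_lo, log_hi))
        w = int(round(math.sqrt(area * aspect)))
        h = int(round(math.sqrt(area / aspect)))
        if 0 < w <= width and 0 < h <= height:
            top = random.randint(0, height - h)
            left = random.randint(0, width - w)
            return top, left, h, w
    side = min(height, width)  # fallback: center crop
    return (height - side) // 2, (width - side) // 2, side, side
```

"Random cropping" in setting 2) instead keeps `h` and `w` fixed, which is why it produces training samples with less variation.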
The effect of data augmentation is dataset dependent and is also influenced by other hyperparameters. The first row in Figure 11 shows that advanced data augmentation with the default hyperparameters ( $m = 0.9$ and $\eta = 0.01$ ) leads to overfitting on Dogs while generalizing better on Aircrafts and Flowers. Similar observations can be made in Figure 12. However, when momentum is disabled, the overfitting disappears for Dogs and Caltech. This is explainable, since random resized cropping adds more variance to the gradient direction, and disabling momentum leads to a smaller ELR, which is helpful for fine-tuning from a similar domain. On the other hand, the performance of random cropping decreases when momentum is disabled. As random cropping produces training samples with less variation than random resized cropping, disabling momentum or decreasing the ELR might lead to underfitting or getting stuck in poor local minima. This can be mitigated by increasing the learning rate for random cropping, which adds variation to the gradients. As shown in Table 6,

![](images/8b7457b7bee8f7867eeea0866a90c086684bfb1026a47531cbeb9c83cf50ba74.jpg)
(a) Caltech, $m = 0.9$

![](images/462dc270d18459815cd6623cb39b5eb43e373ebe1b9ec4efdcf56f17260dd1b7.jpg)
(b) Caltech, $m = 0.0$

![](images/040a72b54b469d64ec430e21b1406bc2f6ae12f594805cd0726676766b0b56a8.jpg)
(c) Indoor, $m = 0.9$

![](images/f8790f39112f5a213b7d6b1fd571930444b90821adcfa0283987e66cdd1af1bb.jpg)
(d) Indoor, $m = 0.0$

![](images/f38189ec22ce1efbd9f508c93e77d193163d3727bdddec29de5cbd5df1b862ef.jpg)
(e) Birds, $m = 0.9$
Figure 12: Comparison of data augmentation methods with different momentum values (Figure 11 continued). The other hyperparameters are: $n = 256$ , $\eta = 0.01$ and $\lambda = 0.0001$ .
![](images/e86a9d33e5b3d12e03bf2a2b4e73b3ac4be04ae61df08b105af685c075ffdb2e.jpg)
(f) Birds, $m = 0.0$

![](images/da2b1ea6c143bfc12449e470b3d32acd18ee4d4f1afdf55ea843022167416b90.jpg)
(g) Cars, $m = 0.9$

![](images/0de25a649c7c93584477cd421ba81520aea6209d9a15027a7468ef563bafb503.jpg)
(h) Cars, $m = 0.0$

when the learning rate is increased from 0.01 to 0.05, disabling momentum shows better performance than nonzero momentum on datasets that are close, similar to the previous findings with random resized cropping.

Table 6: Comparison of data augmentation methods with different momentum values. The rest of the hyperparameters are: $n = 256$ and $\lambda = 0.0001$ .
| Data Augmentation | $m$ | $\eta$ | Dogs | Caltech | Indoor | Birds | Cars | Flowers | Aircrafts |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Rand resized crop | 0.9 | 0.01 | 17.20 | 14.85 | 23.76 | 18.10 | 9.10 | 3.12 | 17.55 |
| Rand resized crop | 0 | 0.01 | 11.00 | 12.11 | 21.14 | 17.41 | 11.06 | 5.48 | 20.58 |
| Rand crop | 0.9 | 0.01 | 11.99 | 12.42 | 23.39 | 20.31 | 17.77 | 5.63 | 21.72 |
| Rand crop | 0 | 0.01 | 11.35 | 12.89 | 25.19 | 22.11 | 23.87 | 7.76 | 29.04 |
| Rand crop | 0.9 | 0.05 | 16.85 | 14.80 | 23.46 | 18.81 | 13.70 | 4.85 | 17.64 |
| Rand crop | 0 | 0.05 | 11.79 | 12.52 | 23.24 | 20.69 | 20.00 | 7.06 | 23.43 |
+ +# F SOURCE DOMAINS + +Pre-trained models For most of our experiments, we use the pre-trained ResNet-101_v2 model from the model zoo of MXNet GluonCV7. To get the pre-trained models for iNat-2017 and Places-365, we fine-tune from the ImageNet pre-trained model with the default fine-tuning hyperparameters for 60 epochs, where learning rate is decayed at epoch 45 by a factor of 10. Table 7 illustrates the Top-1 errors of each pre-trained model on their validation sets. + +Table 7: The Top-1 error of ResNet-101 pre-trained on different source dataset. + +
| Dataset | class | Top-1 error |
| --- | --- | --- |
| ImageNet | 1000 | 21.4 |
| iNat-2017 | 5,089 | 32.2 |
| Places-365 | 365 | 31.5 |
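The step-decay schedule used to obtain the iNat-2017 and Places-365 pre-trained models above (decay by 10x at epoch 45 of 60) can be written as a small helper (a sketch; the default base learning rate 0.01 is the paper's default fine-tuning value, and the function name is ours):

```python
def step_decay_lr(epoch, base_lr=0.01, milestones=(45,), factor=0.1):
    """Step-decay schedule: multiply base_lr by `factor` at each milestone epoch."""
    lr = base_lr
    for milestone in milestones:
        if epoch >= milestone:
            lr *= factor
    return lr
```

The same helper covers the scratch-training schedule below with `base_lr=0.1` and `milestones=(400, 550)`.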
Training from Scratch with HPO The default hyperparameters for training from scratch are $\eta = 0.1$ , $\lambda = 0.0001$ , $m = 0.9$ and $n = 256$ . We train for 600 epochs, and decay the learning rate at epochs 400 and 550 by a factor of 10. To perform hyperparameter optimization (HPO), we search hyperparameters in the following space: $\eta \in \{0.1, 0.2, 0.5\}$ and $\lambda \in \{0.0001, 0.0005\}$ . Figure 13 shows the training/validation errors of training from scratch on each dataset with different learning rates and weight decays. We observe that weight decay 0.0005 consistently performs better than 0.0001.

Insufficient hyperparameter search may lead to misleading conclusions To show the importance of hyperparameter tuning, Table 8 compares the performance with and without hyperparameter tuning for both fine-tuning and training from scratch. With the default hyperparameters, some inappropriate conclusions might be drawn, e.g., "there is a significant gap between fine-tuning and training from scratch", "fine-tuning always surpasses training from scratch" or "fine-tuning from iNat cannot beat the performance of ImageNet". However, with HPO, those statements may no longer hold. For example, training from scratch surpasses the default fine-tuning result on Cars and Aircrafts, and the gap between fine-tuning and training from scratch is much smaller. Previous studies (Kornblith et al., 2019; Cui et al., 2018) also identified that datasets like Cars and Aircrafts do not benefit much from fine-tuning.

Table 8: Comparison of default hyperparameters and HPO for both fine-tuning (FT) and training from scratch (ST). FT Default and ST Default use their respective default hyperparameters. HPO refers to finding the best hyperparameters with grid search.
| Method | Source | Dogs | Caltech | Indoor | Birds | Cars | Aircrafts | Flowers |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FT Default | ImageNet | 17.20 | 13.42 | 23.76 | 18.10 | 9.10 | 17.55 | 3.12 |
| FT Default | iNat-2017 | 24.74 | 20.12 | 30.73 | 14.69 | 11.16 | 19.86 | 3.19 |
| FT Default | Places-365 | 30.84 | 22.53 | 22.19 | 27.72 | 11.06 | 21.27 | 5.66 |
| ST Default | - | 38.26 | 36.21 | 45.28 | 43.72 | 16.73 | 26.49 | 22.88 |
| FT HPO | ImageNet | 9.83 | 11.61 | 20.54 | 16.34 | 7.61 | 12.33 | 2.91 |
| FT HPO | iNat-2017 | 23.51 | 18.82 | 28.11 | 12.06 | 9.58 | 15.45 | 2.70 |
| FT HPO | Places-365 | 26.24 | 22.14 | 19.42 | 22.90 | 9.13 | 15.48 | 5.06 |
| ST HPO | - | 29.32 | 29.62 | 39.36 | 30.08 | 8.37 | 14.34 | 16.51 |
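The HPO procedure referenced in Table 8 is a plain grid search over learning rate and weight decay; a minimal sketch (the `train_and_eval` callback is a placeholder for a full fine-tuning or scratch-training run):

```python
import itertools

def grid_search_hpo(train_and_eval, lrs=(0.1, 0.2, 0.5), wds=(0.0001, 0.0005)):
    """Return (best_error, best_lr, best_wd) over the full grid.

    `train_and_eval(lr, wd)` trains a model with the given hyperparameters
    and returns its validation error.
    """
    best = None
    for lr, wd in itertools.product(lrs, wds):
        err = train_and_eval(lr, wd)
        if best is None or err < best[0]:
            best = (err, lr, wd)
    return best
```

The default grid here is the scratch-training search space from Appendix F; the fine-tuning grids elsewhere in the paper plug in their own `lrs` and `wds` tuples.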
+ +![](images/e1da1d40097e1553714f4675dff0e11f3e835d4c5073e1d8c8b3a5b3323ba560.jpg) +(a) Dogs, $\lambda = 0.0001$ + +![](images/24f33af8a5982a7e644342e59aca7c1aedf4b945dce742647ce1b3800fda094c.jpg) +(b) Dogs, $\lambda = 0.0005$ + +![](images/4bfe717f91bc53c4b8aa56dac65cf904170078db10d29e57d824ad9236420d01.jpg) +(c) Caltech, $\lambda = 0.0001$ + +![](images/c23ffda340aef667f3c96c5d76eca5ab4ef960026feb63c8927a6809a1ef3b63.jpg) +(d) Caltech, $\lambda = 0.0005$ + +![](images/758e1b5ae8eca4e86efa232818535ccafbd4f6129cfe33fd4f627ea57287e1a8.jpg) +(e) Indoor, $\lambda = 0.0001$ + +![](images/41ea6714a8f5b74ff97d10c4f5f4aec1b75a323db719eb300a17e17627c8be39.jpg) +(f) Indoor, $\lambda = 0.0005$ + +![](images/046f57cdced9c5cfc60c997da0c399ec13785da47b6f92aba9b591707c55c399.jpg) +(g) Birds, $\lambda = 0.0001$ + +![](images/10ba0d4cc74455579e25b78aa9c536cde8b508eedff1961c2efcc6adf2b7b1a4.jpg) +(h) Birds, $\lambda = 0.0005$ + +![](images/c18db3bc8d239aa7dc53ce58cfc0ec1747bc0c8c22561082cdf75a88c6ecbf3f.jpg) +(i) Cars, $\lambda = 0.0001$ + +![](images/f87283b7333fd488e7ad0ffb5f194d4de25faa7bcdd460d9fc28f2b7ffc1d6e3.jpg) +(j) Cars, $\lambda = 0.0005$ + +![](images/cb7dbc92550860be38c238b60b8d00e134e5bc33793b9bc44695c07c02f492d7.jpg) +(k) Aircrafts, $\lambda = 0.0001$ + +![](images/854c4d847127c1e4743fff6a446d96b63b53006f70324eb9841420c031908b4f.jpg) +(1)Aircrafts, $\lambda = 0.0005$ + +![](images/0339efb8f95b8baea11513314fb007202d1c74a35a209754d7f1035dafc8cf26.jpg) +(m) Flowers, $\lambda = 0.0001$ + +![](images/d4f8f5bbc8f34cbf4e23e071e095f0ff53eaafc99d4d76649613721e679426ef.jpg) +(n) Flowers, $\lambda = 0.0005$ +Figure 13: Training from scratch with various learning rate and weight decay. The batch size is 256 and the momentum is 0.9. The solid curves are training error and the dashed lines are validation error. 
\ No newline at end of file diff --git a/rethinkingthehyperparametersforfinetuning/images.zip b/rethinkingthehyperparametersforfinetuning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..46f0040b94a1f363ef6f76a09f0af70be9629bcb --- /dev/null +++ b/rethinkingthehyperparametersforfinetuning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d8be43df6448bed3652387dbbda0e404e20d87531504359342dd9b605e23327d +size 1636708 diff --git a/rethinkingthehyperparametersforfinetuning/layout.json b/rethinkingthehyperparametersforfinetuning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d5f707f8abe1f4113d4d649694d978262a86a3aa --- /dev/null +++ b/rethinkingthehyperparametersforfinetuning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:acf186d65199270e05e441095b4771714fc1547e4e2d3b71117dbaaf8b3e3b71 +size 720884 diff --git a/revisitingselftrainingforneuralsequencegeneration/00285523-746c-45a4-802f-b6c07e1125a7_content_list.json b/revisitingselftrainingforneuralsequencegeneration/00285523-746c-45a4-802f-b6c07e1125a7_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..7b55d8b2ee1567968bcde2744ca89ca48a2e4db7 --- /dev/null +++ b/revisitingselftrainingforneuralsequencegeneration/00285523-746c-45a4-802f-b6c07e1125a7_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:12eca077a6209f4d584ce38e3ebd9dd2a1f28c13b58b07fc8329af68b08c499b +size 84702 diff --git a/revisitingselftrainingforneuralsequencegeneration/00285523-746c-45a4-802f-b6c07e1125a7_model.json b/revisitingselftrainingforneuralsequencegeneration/00285523-746c-45a4-802f-b6c07e1125a7_model.json new file mode 100644 index 0000000000000000000000000000000000000000..9e532240b77d9dcdb9e395d9fbac4356af0d9c0f --- /dev/null +++ b/revisitingselftrainingforneuralsequencegeneration/00285523-746c-45a4-802f-b6c07e1125a7_model.json @@ -0,0 
+1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ecd76261c1e4692707f2be3b8237473b917d626f41773e890036756bd33bf4d5 +size 102099 diff --git a/revisitingselftrainingforneuralsequencegeneration/00285523-746c-45a4-802f-b6c07e1125a7_origin.pdf b/revisitingselftrainingforneuralsequencegeneration/00285523-746c-45a4-802f-b6c07e1125a7_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..71ef360b52125ee42bd56f6acfe7a48c1754bb1f --- /dev/null +++ b/revisitingselftrainingforneuralsequencegeneration/00285523-746c-45a4-802f-b6c07e1125a7_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aac463a7b97bac414fbe1c2d760b688a579a32156fea6f5fee4431a3efd08c19 +size 1024623 diff --git a/revisitingselftrainingforneuralsequencegeneration/full.md b/revisitingselftrainingforneuralsequencegeneration/full.md new file mode 100644 index 0000000000000000000000000000000000000000..e92ded78867d8a064050b56d538918e88a1e22b7 --- /dev/null +++ b/revisitingselftrainingforneuralsequencegeneration/full.md @@ -0,0 +1,307 @@ +# REVISITING SELF-TRAINING FOR NEURAL SEQUENCE GENERATION + +Junxian He* + +Carnegie Mellon University + +junxianh@cs.cmu.edu + +Jiatao Gu*, Jiajun Shen, Marc'Aurelio Ranzato + +Facebook AI Research, New York, NY + +{jgu, jiajunshen, ranzato}@fb.com + +# ABSTRACT + +Self-training is one of the earliest and simplest semi-supervised methods. The key idea is to augment the original labeled dataset with unlabeled data paired with the model's prediction (i.e. the pseudo-parallel data). While self-training has been extensively studied on classification problems, in complex sequence generation tasks (e.g. machine translation) it is still unclear how self-training works due to the compositionality of the target space. In this work, we first empirically show that self-training is able to decently improve the supervised baseline on neural sequence generation tasks. 
Through careful examination of the performance gains, we find that the perturbation on the hidden states (i.e. dropout) is critical for self-training to benefit from the pseudo-parallel data, which acts as a regularizer and forces the model to yield close predictions for similar unlabeled inputs. Such effect helps the model correct some incorrect predictions on unlabeled data. To further encourage this mechanism, we propose to inject noise to the input space, resulting in a "noisy" version of self-training. Empirical study on standard machine translation and text summarization benchmarks shows that noisy self-training is able to effectively utilize unlabeled data and improve the performance of the supervised baseline by a large margin.1 + +# 1 INTRODUCTION + +Deep neural networks often require large amounts of labeled data to achieve good performance. However, acquiring labels is a costly process, which motivates research on methods that can effectively utilize unlabeled data to improve performance. Towards this goal, semi-supervised learning (Chapelle et al., 2009) methods that take advantage of both labeled and unlabeled data are a natural starting point. In the context of sequence generation problems, semi-supervised approaches have been shown to work well in some cases. For example, back-translation (Sennrich et al., 2015) makes use of the monolingual data on the target side to improve machine translation systems, latent variable models (Kingma et al., 2014) are employed to incorporate unlabeled source data to facilitate sentence compression (Miao & Blunsom, 2016) or code generation (Yin et al., 2018). + +In this work, we revisit a much older and simpler semi-supervised method, self-training (ST, Scudder (1965)), where a base model trained with labeled data acts as a "teacher" to label the unannotated data, which is then used to augment the original small training set. Then, a "student" model is trained with this new training set to yield the final model. 
Originally designed for classification problems, common wisdom suggests that this method may be effective only when a good fraction of the predictions on unlabeled samples are correct, otherwise mistakes will be reinforced (Zhu & Goldberg, 2009). In the field of natural language processing, early work successfully applied self-training to word sense disambiguation (Yarowsky, 1995) and parsing (McClosky et al., 2006; Reichart & Rappoport, 2007; Huang & Harper, 2009).

However, self-training has not been studied extensively when the target output is natural language. This is partially because in language generation applications (e.g. machine translation) hypotheses are often very far from the ground-truth target, especially in low-resource settings. It is natural to

Algorithm 1 Classic Self-training
1: Train a base model $f_{\theta}$ on $L = \{x_i, y_i\}_{i=1}^l$
2: repeat
3: Apply $f_{\theta}$ to the unlabeled instances $U$
4: Select a subset $S \subset \{(x, f_{\theta}(x)) | x \in U\}$
5: Train a new model $f_{\theta}$ on $S \cup L$
6: until convergence or maximum iterations are reached

ask whether self-training can be useful at all in this case. While Ueffing (2006) and Zhang & Zong (2016) explored self-training in statistical and neural machine translation, only relatively limited gains were reported and, to the best of our knowledge, it is still unclear what makes self-training work. Moreover, Zhang & Zong (2016) did not update the decoder parameters when using pseudo-parallel data, noting that "synthetic target parts may negatively influence the decoder model of NMT".

In this paper, we aim to answer two questions: (1) How does self-training perform in sequence generation tasks like machine translation and text summarization? Are "bad" pseudo targets indeed catastrophic for self-training? (2) If self-training helps improve the baseline, what contributes to its success? What are the important ingredients to make it work?
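The loop of Algorithm 1 can be sketched in a few lines of Python. The 1-NN "model" and the toy even/odd labels below are stand-ins for the paper's sequence model $f_{\theta}$, purely for illustration:

```python
# Minimal sketch of Algorithm 1 (classic self-training). The 1-NN "model"
# is a stand-in for the base sequence model f_theta; train/predict are
# illustrative placeholders, not the paper's actual training code.

def train(pairs):
    # "Training" simply memorizes labeled (input, label) pairs.
    return dict(pairs)

def predict(model, x):
    # 1-NN prediction: return the label of the closest memorized input.
    nearest = min(model, key=lambda xi: abs(xi - x))
    return model[nearest]

def self_train(labeled, unlabeled, iterations=3):
    model = train(labeled)                                    # line 1: base model on L
    for _ in range(iterations):                               # line 2: repeat
        pseudo = [(x, predict(model, x)) for x in unlabeled]  # lines 3-4: label U
        model = train(pseudo + labeled)                       # line 5: retrain on S ∪ L
    return model

labeled = [(0, "even"), (1, "odd"), (10, "even")]
model = self_train(labeled, unlabeled=[2, 8, 9, 11])
print(predict(model, 8))
```

Here the subset $S$ is simply the whole pseudo-labeled set; a confidence-based selection would filter `pseudo` before retraining.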
Towards this end, we first evaluate self-training on a small-scale machine translation task and empirically observe significant performance gains over the supervised baseline (§3.2); we then perform a comprehensive ablation analysis to understand the key factors that contribute to its success (§3.3). We find that the decoding method used to generate pseudo targets accounts for part of the improvement, but, more importantly, the perturbation of hidden states, i.e. dropout (Hinton et al., 2012), turns out to be a crucial ingredient that prevents self-training from falling into the same local optimum as the base model, and this is responsible for most of the gains. To understand the role of such noise in self-training, we use a toy experiment to analyze how noise effectively propagates labels to nearby inputs, sometimes helping correct incorrect predictions (§4.1). Motivated by this analysis, we propose to inject additional noise by perturbing also the input. Comprehensive experiments on machine translation and text summarization tasks demonstrate the effectiveness of noisy self-training.

# 2 SELF-TRAINING

Formally, in conditional sequence generation tasks like machine translation, we have a parallel dataset $L = \{x_{i},y_{i}\}_{i = 1}^{l}$ and a large unlabeled dataset $U = \{x_{j}\}_{j = l + 1}^{l + u}$, where $|U| > |L|$ in most cases. As shown in Algorithm 1, classic self-training starts from a base model trained with parallel data $L$, and iteratively applies the current model to obtain predictions on the unlabeled instances $U$; it then incorporates a subset of the pseudo-parallel data $S$ to update the current model.

There are two key factors: (1) Selection of the subset $S$. $S$ is usually selected based on some confidence score (e.g. log probability) (Yarowsky, 1995), but it is also possible for $S$ to be the whole pseudo-parallel dataset (Zhu & Goldberg, 2009). (2) Combination of real and pseudo-parallel data.
A new model is often trained on the two datasets jointly, as in back-translation, but this introduces an additional hyper-parameter to weigh the importance of the parallel data relative to the pseudo data (Edunov et al., 2018). Another way is to treat them separately: first train the model on the pseudo-parallel data $S$, and then fine-tune it on the real data $L$. In our preliminary experiments, we found that the separate training strategy with the whole pseudo-parallel dataset (i.e. $S = \{(x, f_{\theta}(x)) | x \in U\}$) produces better or equal performance for neural sequence generation while being simpler. Therefore, in the remainder of this paper we use this simpler setting. We include a quantitative comparison of joint training, separate training, and pseudo-parallel data filtering in Appendix B, where separate training is able to match (or surpass) the performance of joint training.

In self-training, the unsupervised loss $\mathcal{L}_U$ from unlabeled instances is defined as:

$$
\mathcal{L}_{U} = -\mathbb{E}_{\boldsymbol{x} \sim p(\boldsymbol{x})} \mathbb{E}_{\boldsymbol{y} \sim p_{\boldsymbol{\theta}^{*}}(\boldsymbol{y}|\boldsymbol{x})} \log p_{\boldsymbol{\theta}}(\boldsymbol{y}|\boldsymbol{x}), \tag{1}
$$

where $p(\boldsymbol{x})$ is the empirical data distribution approximated with samples from $S$, and $p_{\boldsymbol{\theta}}(\boldsymbol{y}|\boldsymbol{x})$ is the conditional distribution defined by the model. $\theta^{*}$ is the parameter from the last iteration (initially it

![](images/ff4792bcc9f3a72b70df506aa3e601e602b4b9286912762985d523eba8008c96.jpg)
Figure 1: BLEU on the WMT100K dataset for the supervised baseline and different self-training variants, plotted over 3 iterations. "ST" denotes self-training and "NST" denotes noisy self-training.
| Methods | PT | FT |
|---|---|---|
| baseline | - | 15.6 |
| ST (scratch) | 16.8 | 17.9 |
| ST (baseline) | 16.5 | 17.5 |
Table 1: Test tokenized BLEU on WMT100K. Self-training results are from the first iteration. "Scratch" denotes that the system is initialized randomly and trained from scratch, while "baseline" means it is initialized with the baseline model.

is set as the parameter of the supervised baseline), and fixed within the current iteration. Eq. 1 reveals the connection between self-training and entropy regularization (Grandvalet & Bengio, 2005). In the context of classification, self-training can be understood from the view of entropy regularization (Lee, 2013), which favors a low-density separation between classes, a commonly assumed prior for semi-supervised learning (Chapelle & Zien, 2005).

# 3 A CASE STUDY ON MACHINE TRANSLATION

To examine the effectiveness of self-training on neural sequence generation, we start by analyzing a machine translation task. We then perform an ablation analysis to understand the contributing factors of the performance gains.

# 3.1 SETUP

We work with the standard WMT 2014 English-German dataset consisting of about 3.9 million training sentence pairs after filtering long and imbalanced pairs. Sentences are encoded using 40K byte-pair codes (Sennrich et al., 2016). As a preliminary experiment, we randomly sample 100K sentences from the training set to train the model and use the remaining English sentences as unlabeled monolingual data. For convenience, we refer to this dataset as WMT100K. This synthetic setting allows us to have high-quality unlabeled data to verify the performance of self-training. We train the Base Transformer architecture (Vaswani et al., 2017) with a dropout rate of 0.3. Full training and optimization parameters can be found in Appendix A.1. All experiments throughout this paper, including the transformer implementation, are based on the fairseq toolkit (Ott et al., 2019), and all results are in terms of case-sensitive tokenized BLEU (Papineni et al., 2002).
We use beam search decoding (beam size 5) to create the pseudo targets and to report BLEU on the test set.

# 3.2 OBSERVATIONS

In Figure 1, we use green bars to show the result of applying self-training for three iterations. We include both (1) pseudo-training (PT): the first step of self-training, where we train a new model (from scratch) using only the pseudo-parallel data generated by the current model, and (2) fine-tuning (FT): the system fine-tuned on real parallel data starting from the pretrained model of the PT step. Note that in the pseudo-training step the system is re-initialized from scratch. Surprisingly, we find that the pseudo-training step at the first iteration is able to improve BLEU even though the model is only trained on its own predictions, and fine-tuning boosts the performance further. The test BLEU keeps improving over the first three iterations, until it converges to outperform the initial baseline by 3 BLEU points.
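The iterated PT/FT schedule described above can be sketched as follows; `train_from_scratch`, `fine_tune`, and `translate` are trivial placeholders used only to illustrate the control flow (this is not fairseq code):

```python
# Sketch of iterated pseudo-training (PT) + fine-tuning (FT). A "model" here
# is just a dict from source to target strings; all functions are placeholders.

def train_from_scratch(pairs):
    return dict(pairs)                      # memorize the training pairs

def fine_tune(model, pairs):
    updated = dict(model)
    updated.update(pairs)                   # real parallel data overrides
    return updated

def translate(model, x):
    return model.get(x, x.upper())          # crude fallback "translation"

def self_train_pt_ft(parallel, monolingual, iterations=3):
    model = train_from_scratch(parallel)                      # supervised baseline
    for _ in range(iterations):
        pseudo = [(x, translate(model, x)) for x in monolingual]
        model = train_from_scratch(pseudo)                    # PT: pseudo data only
        model = fine_tune(model, parallel)                    # FT: real parallel data
    return model

model = self_train_pt_ft([("hallo", "hello")], ["welt", "guten tag"])
print(translate(model, "hallo"))  # fine-tuning restores the real target: "hello"
```

Unlike the joint retraining in Algorithm 1, PT here uses only the pseudo-parallel data, which is the separate-training strategy the paper adopts.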
| Methods | PT | FT |
|---|---|---|
| baseline | - | 15.6 |
| baseline (w/o dropout) | - | 5.2 |
| ST (beam search, w/ dropout) | 16.5 | 17.5 |
| ST (sampling, w/ dropout) | 16.1 | 17.0 |
| ST (beam search, w/o dropout) | 15.8 | 16.3 |
| ST (sampling, w/o dropout) | 15.5 | 16.0 |
| Noisy ST (beam search, w/o dropout) | 15.8 | 17.9 |
| Noisy ST (beam search, w/ dropout) | 16.6 | 19.3 |
Table 2: Ablation study on WMT100K data. For ST and noisy ST, we initialize the model with the baseline and results are from one single iteration. Dropout is varied only in the PT step, while it is always applied in the FT step. The different decoding methods refer to the strategy used to create the pseudo targets. At test time we use beam search decoding for all models.

This behaviour is unexpected because no new information seems to be injected during this iterative process: target sentences of the monolingual data are from the base model's predictions, so translation errors are likely to remain, if not be magnified. This is different from back-translation, where new knowledge may originate from an additional backward translation model and real monolingual targets may help the decoder generate more fluent sentences.

One straightforward hypothesis is that the added pseudo-parallel data might implicitly change the training trajectory towards a (somehow) better local optimum, given that we train a new model from scratch at each iteration. To rule out this hypothesis, we perform an ablation experiment and initialize $\theta$ from the last iteration (i.e. $\theta^{*}$). Formally, based on Eq. 1 we have:

$$
\nabla_{\boldsymbol{\theta}} \mathcal{L}_{U} |_{\boldsymbol{\theta} = \boldsymbol{\theta}^{*}} = -\mathbb{E}_{\boldsymbol{x} \sim p(\boldsymbol{x})} \left[ \nabla_{\boldsymbol{\theta}} \mathbb{E}_{\boldsymbol{y} \sim p_{\boldsymbol{\theta}^{*}}(\boldsymbol{y}|\boldsymbol{x})} \log p_{\boldsymbol{\theta}}(\boldsymbol{y}|\boldsymbol{x}) |_{\boldsymbol{\theta} = \boldsymbol{\theta}^{*}} \right] = 0, \tag{2}
$$

because the conditional log likelihood is maximized when $p_{\theta}(\boldsymbol{y}|\boldsymbol{x})$ matches the underlying data distribution $p_{\theta^*}(\boldsymbol{y}|\boldsymbol{x})$.
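Eq. 2 can be checked numerically in a toy setting. Assume, purely for illustration, a single categorical output parameterized by softmax logits $\theta$; the cross-entropy of the student under the frozen teacher then has a vanishing gradient at $\theta = \theta^{*}$:

```python
# Numerical check of Eq. 2 for one categorical output (a toy assumption,
# not the paper's sequence model): the self-training loss has zero gradient
# when the student's parameters equal the teacher's.
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def st_loss(theta, theta_star):
    # Eq. 1 for a single categorical output: -E_{y ~ p_{theta*}} log p_theta(y)
    p_star, p = softmax(theta_star), softmax(theta)
    return -sum(ps * math.log(pi) for ps, pi in zip(p_star, p))

def numerical_grad(theta, theta_star, eps=1e-5):
    grad = []
    for k in range(len(theta)):
        up, down = list(theta), list(theta)
        up[k] += eps
        down[k] -= eps
        grad.append((st_loss(up, theta_star) - st_loss(down, theta_star)) / (2 * eps))
    return grad

theta_star = [0.5, -1.2, 2.0]
grad = numerical_grad(theta_star, theta_star)  # evaluate at theta = theta*
assert all(abs(g) < 1e-6 for g in grad)        # Eq. 2: the gradient vanishes
```

Dropout breaks this argument: the student's forward pass no longer uses the exact function that produced the pseudo targets, so the gradient need not vanish.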
Therefore, the parameter $\boldsymbol{\theta}$ should not change (at least not significantly) if we initialize it with $\boldsymbol{\theta}^*$ from the last iteration.

Table 1 shows the comparison of these two initialization schemes at the first iteration. Surprisingly, continuing training from the baseline model also yields an improvement of 1.9 BLEU points, comparable to initializing from random. While stochastic optimization introduces randomness in the training process, it is startling that continuing training gives such a non-trivial improvement. Next, we investigate the underlying reasons for this.

# 3.3 THE SECRET BEHIND SELF-TRAINING

To understand why continuing training contradicts Eq. 2 and improves translation performance, we examine possible discrepancies between our assumptions and the actual implementation, and formulate two new hypotheses:

H1. Decoding Strategy. According to this hypothesis, the gains come from the use of beam search to decode the unlabeled data. Since our focus is a sequence generation task, we decode $\mathbf{y}$ with beam search to approximate the expectation in $\mathbb{E}_{\mathbf{y} \sim p_{\theta^{*}}(\mathbf{y} | \mathbf{x})} \log p_{\theta}(\mathbf{y} | \mathbf{x})$, yielding a biased estimate, whereas sampling decoding would result in an unbiased Monte Carlo estimator. The results in Table 2 show that performance drops by 0.5 BLEU when we change the decoding strategy to sampling, which implies that beam search does contribute a bit to the performance gains. This makes sense intuitively, since beam search tends to generate higher-quality pseudo targets than sampling, and the subsequent cross-entropy training might benefit from implicitly learning the decoding process. However, the decoding-strategy hypothesis does not fully explain the gains, as we still observe an improvement of 1.4 BLEU points over the baseline with sampling decoding and dropout.

H2. Dropout (Hinton et al., 2012). Eq. 1 and Eq.
2 implicitly ignore a (seemingly) small difference between the model used to produce the pseudo targets and the model used for training: at test/decoding time the model does not use dropout, while at training time dropout noise is injected into the model's hidden states. At training time, the model is forced to produce the same (pseudo) targets given the same inputs and the same parameter set but various noisy versions of the hidden states. The conjecture is that the additional expectation over dropout noise renders Eq. 2 false. To verify this, we remove dropout in the pseudo-training step $^{2}$. The results in Table 2 indicate that without dropout the performance of beam search decoding drops by 1.2 BLEU, just 0.7 BLEU higher than the baseline. Moreover, the pseudo-training performance of sampling without dropout is almost the same as the baseline, which finally agrees with our intuitions from Eq. 2.

In summary, Table 2 suggests that beam-search decoding contributes only partially to the performance gains, while the implicit perturbation, dropout, accounts for most of them. However, it is still mysterious why such perturbation results in such large performance gains. If dropout is meant to avoid overfitting and fit the target distribution better in the pseudo-training step, why does it bring advantages over the baseline, given that the target distribution comes from the baseline model itself? This is the subject of the investigation in the next section.

# 4 NOISE IN SELF-TRAINING

# 4.1 THE ROLE OF NOISE

One hypothesis as to why noise (perturbation) is beneficial for self-training is that it enforces local smoothness for this task, that is, semantically similar inputs are mapped to the same or similar targets. Since the assumption that similar inputs should ideally produce similar targets largely holds for most tasks in practice, this smoothing effect of the pseudo-training step may provide a favorable regularization for the subsequent fine-tuning step.
Unlike standard regularization in supervised training, which is local to the real parallel data, self-training smooths the data space covered by the additional, much larger monolingual data.
| Methods | smoothness | symmetric | error |
|---|---|---|---|
| baseline | 9.1 | 9.8 | 7.6 |
| ST | 8.2 | 9.0 | 6.2 |
| noisy ST | 7.3 | 8.2 | 4.5 |
Table 3: Results on the toy sum dataset. For ST and noisy ST, smoothness $(\downarrow)$ and symmetric $(\downarrow)$ results are from the pseudo-training step, while test errors $(\downarrow)$ are from fine-tuning, all at the first iteration.

To verify this hypothesis more easily, we work with the toy task of summing two integers in the range 0 to 99. We concatenate the two integers and view them as a sequence of digits; the sum is also predicted at the digit level, so this is still a sequence-to-sequence task. There are 10000 possible data points in the entire space, and we randomly sample 250 instances for training, $^3$ 100 for validation, 5000 for test, and 4000 as the unlabeled data. Test errors are computed as the absolute difference between the predicted integer and the ground-truth integer. We use an LSTM model to tackle this task. We perform self-training for one iteration on this toy sum dataset and initialize the model with the base model to rule out differences due to initialization. Setup details are in Appendix A.1.

For any integer pair $(x_{1}, x_{2})$, we measure local smoothness as the standard deviation of the predictions in a $3 \times 3$ neighborhood of $(x_{1}, x_{2})$. These values are averaged over all 10000 points to obtain the overall smoothness. We compare the smoothness of the baseline and of ST pseudo-training in Table 3. To demonstrate the effect of smoothing on the fine-tuning step, we also report test errors after fine-tuning. We observe that ST pseudo-training attains better smoothness, which helps reduce test errors in the subsequent fine-tuning step.

One natural question is whether we could further improve performance by encouraging an even lower smoothness value, although there is a clear trade-off, as a totally smooth model that outputs a constant value is also a bad predictor.
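The local-smoothness metric described above can be sketched as follows; the `predict` function is a hypothetical stand-in for the trained toy-sum model:

```python
# Sketch of the smoothness metric on the toy sum task, assuming a generic
# predict(x1, x2) -> int stand-in for the trained model (illustrative only).
import statistics

def predict(x1, x2):
    # Hypothetical model: the true sum plus a small input-dependent error.
    return x1 + x2 + (x1 * x2) % 3

def local_smoothness(x1, x2, predict_fn):
    # Standard deviation of predictions over the 3x3 neighborhood of (x1, x2),
    # clipped to the valid 0..99 input range.
    preds = [
        predict_fn(a, b)
        for a in range(max(0, x1 - 1), min(99, x1 + 1) + 1)
        for b in range(max(0, x2 - 1), min(99, x2 + 1) + 1)
    ]
    return statistics.pstdev(preds)

def overall_smoothness(predict_fn):
    # Average local smoothness over all 10000 points of the input space.
    vals = [local_smoothness(a, b, predict_fn) for a in range(100) for b in range(100)]
    return sum(vals) / len(vals)

print(overall_smoothness(predict))  # lower values mean a smoother predictor
```

As the text notes, a constant predictor achieves smoothness 0 while being useless, so the metric is only meaningful alongside test error.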
One way to decrease the smoothness value is to increase the dropout probability in the pseudo-training step, but a large dropout rate (like 0.5) makes the model too unstable and slow to converge. Therefore, we consider a simple model-agnostic perturbation process, perturbing the input, which we refer to as noisy self-training (noisy ST).

![](images/289f220f8a0d310c560cd8421b27c54b8400c52067da09472f5618a2fe4888a3.jpg)
Figure 2: Two examples of error heat maps on the toy sum dataset that show the effect of smoothness. The left panel of each pair is from the baseline, and the right one is from the pseudo-training step at the first iteration. The $x$ and $y$ axes represent the two input integers. Deeper color represents larger errors.

![](images/3c1005d7227ebe58dcbcfdae26beb33db970f5aa5c25784cb1aff8aff4870a41.jpg)

# 4.2 NOISY SELF-TRAINING

If we perturb the input during the pseudo-training step, then Eq. 1 is modified to:

$$
\mathcal{L}_{U} = -\mathbb{E}_{\boldsymbol{x}^{\prime} \sim g(\boldsymbol{x}), \boldsymbol{x} \sim p(\boldsymbol{x})} \mathbb{E}_{\boldsymbol{y} \sim p_{\boldsymbol{\theta}^{*}}(\boldsymbol{y}|\boldsymbol{x})} \log p_{\boldsymbol{\theta}}(\boldsymbol{y} \mid \boldsymbol{x}^{\prime}), \tag{3}
$$

where $g(\boldsymbol{x})$ is a perturbation function. Note that we apply both input perturbation and dropout in the pseudo-training step for noisy ST throughout the paper, but include an ablation analysis in §4.3. We first validate noisy ST on the toy sum task, shuffling the two integers in the input as the perturbation function. Such perturbation is suitable for this task since it also helps the model learn the commutative law. To check this, we additionally measure the symmetry of the output space: for any point $(x_1, x_2)$, we compute $|f(x_1, x_2) - f(x_2, x_1)|$ and average it over all points. Both smoothness and symmetry values are reported in Table 3.
While we do not explicitly perturb the input at nearby integers, the shuffling perturbation greatly improves the smoothness metric as well. Furthermore, predictions are more symmetric and test errors are reduced.

To illustrate the effect of smoothness, in Figure 2 we show two examples of error heat maps.$^4$ When a point with large error is surrounded by points with small errors, the labels may propagate due to smoothing and its error is likely to become smaller, resulting in a "self-correcting" behaviour, as demonstrated in the left example of Figure 2. However, the predictions of some points may also become worse due to the opposite phenomenon, as shown in the right example of Figure 2. Therefore, the smoothing effect by itself does not guarantee a performance gain in the pseudo-training step, but fine-tuning benefits from it and seems to consistently improve over the baseline in all the datasets we experimented with.

# 4.3 OBSERVATIONS ON MACHINE TRANSLATION

Next, we apply noisy self-training to the more realistic WMT100K translation task. We try two different perturbation functions: (1) synthetic noise, as used in unsupervised MT (Lample et al., 2018), where input tokens are randomly dropped, masked, and shuffled; we use the default noising parameters from unsupervised MT but study the influence of the noise level in §5.4. (2) Paraphrasing: we translate the source English sentences to German and translate them back to obtain paraphrases as perturbations. Figure 1 shows the results over three iterations. Noisy ST (NST) greatly outperforms the supervised baseline by over 6 BLEU points and normal ST by 3 BLEU points, while synthetic noise does not exhibit much difference from paraphrasing. Since synthetic noise is much simpler and more general, we use it in the remaining experiments unless otherwise specified.

Next, we report an ablation analysis of noisy ST with dropout removed at the pseudo-training step in Table 2.
Noisy ST without dropout improves over the baseline by 2.3 BLEU points and is comparable to normal ST with dropout. When the two are combined, noisy ST with dropout yields another 1.4 BLEU improvement, indicating that the two perturbations are complementary.
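A sketch of the synthetic noise function $g(\boldsymbol{x})$ described above, in the spirit of Lample et al. (2018): tokens are randomly dropped, masked, and locally shuffled. The probabilities, the `<mask>` symbol, and the jitter-based shuffle are illustrative defaults, not the paper's exact noising parameters:

```python
import random

def synthetic_noise(tokens, p_drop=0.1, p_mask=0.1, shuffle_k=3, seed=None):
    """Illustrative input perturbation g(x): drop, mask, and locally shuffle."""
    rng = random.Random(seed)
    kept = []
    for tok in tokens:
        r = rng.random()
        if r < p_drop:
            continue                      # randomly drop this token
        kept.append("<mask>" if r < p_drop + p_mask else tok)
    # Local shuffle: sort by index plus bounded random jitter, so each token
    # can only move a few positions.
    jittered = sorted(enumerate(kept), key=lambda p: p[0] + rng.uniform(0, shuffle_k))
    return [tok for _, tok in jittered]

src = "the quick brown fox jumps over the lazy dog".split()
noisy_src = synthetic_noise(src, seed=0)
# In noisy ST, the student is trained on (noisy_src, pseudo_target) pairs,
# where pseudo_target is decoded from the *clean* source by the teacher.
print(noisy_src)
```

With zero drop/mask probability and no jitter, the function is the identity, which corresponds to falling back to ordinary self-training.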
| Methods | WMT En-De 100K (+3.8M mono) | WMT En-De 3.9M (+20M mono) | FloRes En-Ne En-Origin | FloRes En-Ne Ne-Origin | FloRes En-Ne Overall |
|---|---|---|---|---|---|
| baseline | 15.6 | 28.3 | 6.7 | 2.3 | 4.8 |
| BT | 20.5 | - | 8.2 | 4.5 | 6.5 |
| noisy ST | 21.4 | 29.3 | 8.9 | 3.5 | 6.5 |
Table 4: Results on two machine translation datasets (WMT English-German and FloRes English-Nepali). For WMT100K, we use the remaining 3.8M English and German sentences from the training data as unlabeled data for noisy ST and BT, respectively.

# 5 EXPERIMENTS

Our experiments below are designed to examine whether noisy self-training is generally useful across different sequence generation tasks and resource settings. To this end, we conduct experiments on two machine translation datasets and one text summarization dataset to test its effectiveness under both high-resource and low-resource settings.

# 5.1 GENERAL SETUP

We run noisy self-training for three iterations or until performance converges. The model is trained from scratch in the pseudo-training step at each iteration, since we found this strategy to work slightly better empirically. Full model and training details for all the experiments can be found in Appendix A.1. In some settings, we also include back-translation (BT, Sennrich et al., 2015) as a reference point, since it is probably the most successful semi-supervised learning method for machine translation. However, we want to emphasize that BT is not directly comparable to ST, since the two use different resources (ST utilizes unlabeled data on the source side while BT leverages target-side monolingual data) and have different use cases. For example, BT is not very effective when we translate English to extremely low-resource languages where there is almost no in-domain target monolingual data available. We follow the practice of Edunov et al. (2018) to implement BT: we use unrestricted sampling to translate the target data back to the source, then train on the real and pseudo-parallel data jointly and tune the upsampling ratio of the real parallel data.

# 5.2 MACHINE TRANSLATION

We test the proposed noisy self-training on a high-resource translation benchmark, WMT14 English-German, and a low-resource translation benchmark, FloRes English-Nepali.
- WMT14 English-German: In addition to WMT100K, we also report results with all 3.9M training examples. For WMT100K we use the Base Transformer architecture and the remaining parallel data as monolingual data. For the full setting, we use the Big Transformer architecture (Vaswani et al., 2017) and randomly sample 20M English sentences from the News Crawl corpus for noisy ST.
- FloRes English-Nepali: We evaluate noisy self-training on a low-resource machine translation dataset, FloRes (Guzmán et al., 2019), from English (en) to Nepali (ne), where we have 560K training pairs and a very weak supervised system that attains a BLEU score smaller than 5 points. For this dataset we have 3.6M Nepali monolingual instances in total (for BT) but 68M English Wikipedia sentences; we randomly sample 5M English sentences for noisy ST. We use the same transformer architecture as Guzmán et al. (2019).

The overall results are shown in Table 4. In almost all cases on both datasets, noisy ST outperforms the baselines by a large margin (1 ~ 5 BLEU points), and noisy ST still improves over the baseline even when the baseline is very weak.

Effect of Domain Mismatch. Test sets of the FloRes benchmark were built with mixed original-translationese: some sentences are from English sources and some are from Nepali sources. Intuitively, English monolingual data should be more in-domain with English-origin sentences and
| Methods | 100K (+3.7M mono) R1 / R2 / RL | 640K (+3.2M mono) R1 / R2 / RL | 3.8M (+4M mono) R1 / R2 / RL |
|---|---|---|---|
| MASS (Song et al., 2019)* | - | - | 38.7 / 19.7 / 36.0 |
| baseline | 30.4 / 12.4 / 27.8 | 35.8 / 17.0 / 33.2 | 37.9 / 19.0 / 35.2 |
| BT | 32.2 / 13.8 / 29.6 | 37.3 / 18.4 / 34.6 | - |
| noisy ST | 34.1 / 15.6 / 31.4 | 36.6 / 18.2 / 33.9 | 38.6 / 19.5 / 35.9 |

Table 5: ROUGE scores on the Gigaword dataset. For the 100K setting we use the remaining 3.7M training examples as unlabeled instances for noisy ST and BT. In the 3.8M setting we use 4M unlabeled examples for noisy ST. The starred entry (*) denotes a system that uses a much larger dataset for pretraining.

![](images/c77f018556f027f8d25045bb4e71ba2e6c138e1c1321941437c22a9a95321e9e.jpg)
(a) BLEU v.s. parallel data

![](images/9e671961c674a55c9c5c96cd8e75991e560a72624bde719f18024f20c3ba52b9.jpg)
(b) BLEU v.s. monolingual data

![](images/a664fd47bed2ba6282bce8e905bf61b89c600d0441e541f20742075a1cf66fb3.jpg)
(c) BLEU v.s. noise level

Figure 3: Analysis of noisy self-training on the WMT English-German dataset, demonstrating the effect of parallel data size, monolingual data size, and noise level.

Nepali monolingual data should help more for Nepali-origin sentences. To demonstrate this possible domain-mismatch effect, in Table 4 we report BLEU on the two different test sets separately.$^6$ As expected, ST is very effective when the source sentences originate from English.

Comparison to Back-Translation. Table 4 shows that noisy ST is able to beat BT on WMT100K and on the en-origin test set of FloRes. In contrast, BT is more effective on the ne-origin test set according to BLEU, which is not surprising, as the ne-origin test is likely to benefit more from Nepali than from English monolingual data.

# 5.3 TEXT SUMMARIZATION

We further evaluate noisy self-training on the Gigaword summarization dataset (Rush et al., 2015), which has 3.8M training sentences. We encode the data with 30K byte-pair codes and use the Base Transformer architecture. Similar to the WMT100K setting, for Gigaword we create two settings where we sample 100K or 640K training examples and use the remainder as unlabeled data to compare with BT.
We also consider the setting where all 3.8M parallel samples are used, and we mine in-domain monolingual data by revisiting the original preprocessing procedure $^7$ and using the ${\sim}4$M samples that Rush et al. (2015) disregarded because they had low-quality targets. We report ROUGE scores (Lin, 2004) in Table 5. Noisy ST consistently outperforms the baseline in all settings, sometimes by a large margin (100K and 640K). It outperforms BT with 100K parallel examples but underperforms it with 640K. We conjecture that BT is still effective in this case because the task remains somewhat symmetric, as Gigaword mostly contains short sentences and their compressed summaries. Notably, noisy ST in the full setting approaches the performance of state-of-the-art systems that use much larger datasets for pretraining (Song et al., 2019).

# 5.4 ANALYSIS

In this section, we focus on the WMT English-German dataset to examine the effect of three factors on noisy self-training: the size of the parallel dataset, the size of the monolingual dataset, and the noise level. All noisy ST results are after the fine-tuning step.

Parallel data size. We fix the monolingual data size at 20M sentences from the News Crawl corpus and vary the parallel data size as shown in Figure 3(a). We use a small LSTM model for 10K, the Base Transformer for 100K/640K, and the Big Transformer for 3.9M. Noisy ST is run for three iterations. In all cases noisy ST is able to improve upon the baseline, and the performance gain is larger for intermediate sizes of the parallel dataset, as expected.

Monolingual data size. We fix the parallel data size at 100K samples and use the remaining 3.8M English sentences from the parallel data as monolingual data. From this set we sample 100K, 500K, 1.5M, and 3.8M sentences. We also include another point that uses 20M monolingual sentences from a subset of the News Crawl corpus. We report performance at the first iteration of noisy ST.
Figure 3(b) illustrates that performance keeps improving as the monolingual data size increases, albeit with diminishing returns.

Noise level. We have shown that noisy ST outperforms ST, but intuitively more noise is not always better, since at some point it may destroy all the information present in the input. We adopt the WMT100K setting with 100K parallel sentences and 3.8M monolingual sentences, and set the word blanking probability in the synthetic noise (Lample et al., 2018) to 0.2 (the default), 0.4, 0.6, and 0.8. We also include the baseline ST without any synthetic noise. Figure 3(c) demonstrates that performance is quite sensitive to the noise level and that intermediate values work best. It is still unclear how to select the noise level a priori, besides the usual hyper-parameter search to maximize BLEU on the validation set.

# 5.5 NOISE PROCESS ON PARALLEL DATA ONLY

In this section, we examine whether the proposed noise process helps the supervised baseline alone, without any monolingual data. Similar to the training process on monolingual data, we first train the model on the noisy source data (pseudo-training) and then fine-tune it on clean parallel data. Different from using monolingual data, there are two variations of the "pseudo-training" step here: we can either train with the fake target predicted by the model, as on monolingual data, or train with the real target paired with the noisy source. We denote these as "parallel + fake target" and "parallel + real target" respectively, and report the performance on WMT100K in Table 6. We use the same synthetic noise as in previous experiments.

When we apply the same noise process to parallel data with fake targets, the smoothing effect is not significant, since it is restricted to the limited parallel data space, producing a marginal improvement over the baseline (+0.4 BLEU).
As a comparison, 100K monolingual sentences produce +1.0 BLEU, and the effect is enhanced when we increase the monolingual data to 3.8M sentences, which leads to +3.7 BLEU. Interestingly, pairing the noisy source with the real target results in much worse performance than the baseline (-4.3 BLEU), which implies that using the fake target predicted by the model (i.e. distillation) instead of the real target is important for the success of noisy self-training, at least when the parallel data size is small. Intuitively, the distilled fake target is simpler and relatively easy for the model to fit, whereas the real target paired with a noisy source makes learning even harder than training with the real target and the real source, which can lead to a bad starting point for fine-tuning. This issue is particularly severe when the parallel data size is small, since in that case the model has difficulty fitting the real targets even with clean sources.

# 6 RELATED WORK

Self-training belongs to a broader class of "pseudo-label" semi-supervised learning approaches. These approaches all learn from pseudo labels assigned to unlabeled data, differing in how such labels are assigned. For instance, co-training (Blum & Mitchell, 1998) learns models on two independent feature sets of the same data, and assigns confident labels to unlabeled data from one of the models. Co-training reduces modeling bias by taking into account confidence scores from two models. In the same spirit, democratic co-training (Zhou & Goldman, 2004) and tri-training (Zhou & Li, 2005) train multiple models with different configurations on the same data feature set, and a subset of the models act as teachers for the others.
| Methods | PT | FT |
| --- | --- | --- |
| parallel baseline | - | 15.6 |
| noisy ST, 100K mono + fake target | 10.2 | 16.6 |
| noisy ST, 3.8M mono + fake target | 16.6 | 19.3 |
| noisy ST, 100K parallel + real target | 6.7 | 11.3 |
| noisy ST, 100K parallel + fake target | 10.4 | 16.0 |
Table 6: Results on WMT100K data. All results are from a single iteration. "Parallel + real/fake target" denotes the noise process applied to parallel data but using the real/fake target in the "pseudo-training" step. "Mono + fake target" is the normal noisy self-training process described in previous sections.

Another line of more recent work perturbs the input or feature space of the student's inputs as a data augmentation technique; self-training with dropout, and noisy self-training, can be viewed as instantiations of this idea. These approaches have been very successful on classification tasks (Rasmus et al., 2015; Miyato et al., 2017; Laine & Aila, 2017; Miyato et al., 2018; Xie et al., 2019), provided that a reasonable fraction of the predictions on unlabeled data (at least the high-confidence ones) are correct, but their effect on language generation tasks is largely unknown and poorly understood because the pseudo language targets are often very different from the ground-truth labels. Recent work on sequence generation employs auxiliary decoders (Clark et al., 2018) when processing unlabeled data, overall showing rather limited gains.

# 7 CONCLUSION

In this paper we revisit self-training for neural sequence generation, and show that it can be an effective method to improve generalization, particularly when labeled data is scarce. Through a comprehensive ablation analysis and synthetic experiments, we identify that the noise injected during self-training plays a critical role in its success due to its smoothing effect. To encourage this behaviour, we explicitly perturb the input to obtain a new variant of self-training, dubbed noisy self-training. Experiments on machine translation and text summarization demonstrate the effectiveness of this approach in both low- and high-resource settings.
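For concreteness, the source-side synthetic noise used throughout these experiments (word blanking plus a light local word shuffle, in the spirit of Lample et al., 2018) can be sketched as below. This is a minimal illustration rather than the paper's actual implementation: the function name and `<blank>` token are our own, and the original noise model also includes word dropout.

```python
import random

def add_noise(tokens, p_blank=0.2, k_shuffle=3, seed=None):
    """Blank each token with probability p_blank, then apply a light
    local shuffle in which each token moves at most ~k_shuffle positions."""
    rng = random.Random(seed)
    noisy = ["<blank>" if rng.random() < p_blank else t for t in tokens]
    # Local shuffle: sort positions after adding uniform jitter in [0, k_shuffle).
    keys = [i + rng.uniform(0, k_shuffle) for i in range(len(noisy))]
    return [t for _, t in sorted(zip(keys, noisy), key=lambda x: x[0])]
```

With `p_blank=0` and `k_shuffle=0` this is the identity; raising `p_blank` toward 0.8 removes most of the input, matching the degradation seen in the noise-level ablation.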
# ACKNOWLEDGEMENTS

We want to thank Peng-Jen Chen for helping set up the FloRes experiments, and Michael Auli, Kyunghyun Cho, and Graham Neubig for insightful discussion about this project.

# REFERENCES

Avrim Blum and Tom Mitchell. Combining labeled and unlabeled data with co-training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pp. 92-100, 1998.
Olivier Chapelle and Alexander Zien. Semi-supervised classification by low density separation. In Proceedings of AISTATS, 2005.
Olivier Chapelle, Bernhard Schölkopf, and Alexander Zien (eds.). Semi-Supervised Learning. MIT Press, 2006.
Kevin Clark, Minh-Thang Luong, Christopher D Manning, and Quoc V Le. Semi-supervised sequence modeling with cross-view training. In Proceedings of EMNLP, 2018.
Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. Understanding back-translation at scale. In Proceedings of EMNLP, 2018.
Yves Grandvalet and Yoshua Bengio. Semi-supervised learning by entropy minimization. In Proceedings of NeurIPS, 2005.
Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. The FLoRes evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English. In Proceedings of EMNLP, 2019.
Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
Zhongqiang Huang and Mary Harper. Self-training PCFG grammars with latent annotations across languages. In Proceedings of EMNLP, 2009.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Durk P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Proceedings of NeurIPS, 2014.
Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. In Proceedings of ICLR, 2017.
Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, et al. Phrase-based & neural unsupervised machine translation. In Proceedings of EMNLP, 2018.
Dong-Hyun Lee. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on Challenges in Representation Learning, ICML, 2013.
Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pp. 74-81, 2004.
David McClosky, Eugene Charniak, and Mark Johnson. Effective self-training for parsing. In Proceedings of NAACL, 2006.
Yishu Miao and Phil Blunsom. Language as a latent variable: Discrete generative models for sentence compression. In Proceedings of EMNLP, 2016.
Takeru Miyato, Andrew M Dai, and Ian Goodfellow. Adversarial training methods for semi-supervised text classification. In Proceedings of ICLR, 2017.
Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: A regularization method for supervised and semi-supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(8):1979-1993, 2018.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL (Demo Track), 2019.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: A method for automatic evaluation of machine translation. In Proceedings of ACL, 2002.
Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Proceedings of NeurIPS, 2015.
Roi Reichart and Ari Rappoport. Self-training for enhancement and domain adaptation of statistical parsers trained on small datasets. In Proceedings of ACL, 2007.
Alexander M Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. In Proceedings of EMNLP, 2015.
H Scudder. Probability of error of some adaptive pattern-recognition machines. IEEE Transactions on Information Theory, 11(3):363-371, 1965.
Rico Sennrich, Barry Haddow, and Alexandra Birch. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pp. 86-96, 2015.
Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In Proceedings of ACL, 2016.
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. MASS: Masked sequence to sequence pre-training for language generation. In Proceedings of ICML, 2019.
Nicola Ueffing. Using monolingual source-language data to improve MT performance. In IWSLT, 2006.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Proceedings of NeurIPS, 2017.
Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V Le. Unsupervised data augmentation. arXiv preprint arXiv:1904.12848, 2019.
David Yarowsky. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of ACL, 1995.
Pengcheng Yin, Chunting Zhou, Junxian He, and Graham Neubig. StructVAE: Tree-structured latent variable models for semi-supervised semantic parsing. In Proceedings of EMNLP, 2018.
Jiajun Zhang and Chengqing Zong. Exploiting source-side monolingual data in neural machine translation. In Proceedings of EMNLP, 2016.
Yan Zhou and Sally Goldman. Democratic co-learning. In 16th IEEE International Conference on Tools with Artificial Intelligence, pp. 594-602. IEEE, 2004.
Zhi-Hua Zhou and Ming Li. Tri-training: Exploiting unlabeled data using three classifiers.
IEEE Transactions on Knowledge & Data Engineering, 17(11):1529-1541, 2005.
Xiaojin Zhu and Andrew B Goldberg. Introduction to semi-supervised learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 3(1):1-130, 2009.

# A EXPERIMENTS DETAILS

# A.1 SETUP DETAILS

For all experiments, we optimize with Adam (Kingma & Ba, 2014) using $\beta_{1} = 0.9$, $\beta_{2} = 0.98$, $\epsilon = 10^{-8}$. All implementations are based on fairseq (Ott et al., 2019), and we use the same learning rate schedule and label smoothing as in the fairseq examples to train the transformers. Except for the toy sum dataset, which we run on a single GPU with 32 examples per batch, all experiments are run on 8 GPUs with an effective batch size of 33K tokens. All experiments are validated with the loss on the validation set. For self-training and noisy self-training, the pseudo-training step takes 300K synchronous updates while the fine-tuning step takes 100K steps.

We use the downloading and preprocessing scripts in fairseq to obtain the WMT 2014 English-German dataset,$^{10}$ which hold out a small fraction of the original training data as the validation set.

The model architecture for the toy sum dataset is a single-layer LSTM with word embedding size 32, hidden state size 32, and dropout rate 0.3. The model architecture of the WMT10K baseline in Figure 3(a) is a single-layer LSTM with word embedding size 256, hidden state size 256, and dropout rate 0.3.

# A.2 JUSTIFICATION OF THE WMT100K BASELINE

We provide more details and evidence to show that our baseline model on the WMT100K dataset is trained properly. In all the experiments on the WMT100K dataset, including the baseline and self-training ones, we use the Adam optimizer with learning rate 0.0005, the default in fairseq. We do not use early stopping; instead, we select the best model in terms of the validation loss.
We train with 30K update steps for the baseline model and (300K pseudo-training + 100K fine-tuning) update steps for self-training. In both cases we verified that the models are trained sufficiently to fully converge by observing the validation loss begin to increase. Figure 4 shows the validation curve of the baseline model. Note that the model starts to overfit, and we select the model checkpoint at the lowest point. We also varied the learning rate hyperparameter over 0.0002, 0.0005, and 0.001, which produced BLEU scores of 15.0, 15.6 (reported in the paper), and 15.5 respectively; our baseline model in the previous sections thus uses the best-performing setting.

![](images/b0b5bf05cfee4986abceccb0f856a2ea5576ccc8d61d6d81318b7bf74ebabac2.jpg)
Figure 4: Validation loss vs. number of update steps for the baseline model on the WMT100K dataset.

# B COMPARISON REGARDING SEPARATE TRAINING, JOINT TRAINING, AND FILTERING

In the paper we perform self-training with separate pseudo-training and fine-tuning steps and always use all monolingual data. However, there are other variants, such as joint training or iteratively adding confident examples. Here we compare these variants on the WMT100K dataset, where noisy self-training uses paraphrasing as the perturbation function. For joint training, we tune the upsampling ratio of parallel data just as in back-translation (Edunov et al., 2018). We perform noisy self-training for 3 iterations, and for the filtering experiments we iteratively use the most confident 2.5M, 3M, and 3.8M monolingual examples respectively in these 3 iterations. Table 7 shows that the filtering process helps joint training but still underperforms the separate-training methods by over 1.5 BLEU points. Within separate training, filtering produces results comparable to using all data. Since separate training with all data is the simplest method and produces the best performance, we stick to this version in the paper.
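The confidence-based filtering compared here can be sketched as follows, under the assumption that confidence is a length-normalized model log-probability of each pseudo target; the function and variable names are illustrative, not the paper's implementation:

```python
def filter_confident(pseudo_pairs, scores, keep):
    """Keep the `keep` pseudo-parallel pairs with the highest model
    confidence (e.g., length-normalized log-probability of the pseudo
    target under the current model)."""
    ranked = sorted(zip(scores, pseudo_pairs), key=lambda x: -x[0])
    return [pair for _, pair in ranked[:keep]]

# Toy usage: keep the 2 most confident of 3 pseudo pairs.
pairs = [("src1", "hyp1"), ("src2", "hyp2"), ("src3", "hyp3")]
print(filter_confident(pairs, scores=[-1.2, -0.3, -0.7], keep=2))
# [('src2', 'hyp2'), ('src3', 'hyp3')]
```

In the iterative setting above, `keep` would grow from 2.5M to 3.8M across iterations while the scores are recomputed with each new model.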
| Methods | BLEU |
| --- | --- |
| baseline | 15.6 |
| noisy ST (separate training, all data) | 21.8 |
| noisy ST (separate training, filtering) | 21.6 |
| noisy ST (joint training, all data) | 18.8 |
| noisy ST (joint training, filtering) | 20.0 |
Table 7: Ablation analysis on the WMT100K dataset.

# C ADDITIONAL RESULTS ON THE TOY SUM DATASET

We additionally show error heat maps over the entire data space of the toy sum dataset for the first two iterations. Here the model at the pseudo-training step is initialized from the model of the previous iteration, to clearly examine how the decodings change due to the injected noise. As shown in Figure 5, in each iteration the pseudo-training step smooths the space, and the fine-tuning step benefits from this and greatly reduces the errors.

![](images/ecf1ec995c2012882d29b4d18fa0a7da07ea3a049b4978d47bfacb3563ef4642.jpg)
(a) baseline

![](images/f23a08783313876618c5fb0a251c6384728b2a23178b9b5b5374166d65b502e0.jpg)
(b) noisy ST (PT, iter=1)

![](images/100893ebd6cb059dd24300c5ef6232c0e0a8b5b18a9423e3cb23f801ed1a6a08.jpg)
(c) noisy ST (FT, iter=1)

![](images/1409821260f62f8ed23fbee44e9bbaaa7d76d63f1ba00839361d92ebcf3b6a61.jpg)
(d) noisy ST (PT, iter=2)

![](images/2b427adde2a78594831e076850fbf443f72f55488820e3eb98c154fcb46121dc.jpg)
(e) noisy ST (FT, iter=2)

Figure 5: Error heat maps on the toy sum dataset over the first two iterations. Deeper colors represent larger errors.
# RÉNYI FAIR INFERENCE

# Sina Baharlouei

Industrial and Systems Engineering, USC

baharlou@usc.edu

# Maher Nouiehed

Industrial Engineering and Management, AUB

mn102@aub.edu.lb

# Ahmad Beirami

EECS, MIT

beirami@mit.edu

# Meisam Razaviyayn

Industrial and Systems Engineering, USC

razaviya@usc.edu

# ABSTRACT

Machine learning algorithms have been increasingly deployed in critical automated decision-making systems that directly affect human lives. When these algorithms are trained solely to minimize the training/test error, they can suffer from systematic discrimination against individuals based on their sensitive attributes, such as gender or race. Recently, there has been a surge of effort in the machine learning community to develop algorithms for fair machine learning. In particular, several adversarial learning procedures have been proposed to impose fairness. Unfortunately, these algorithms either can only impose fairness up to linear dependence between the variables, or they lack computational convergence guarantees. In this paper, we use Rényi correlation as a measure of fairness of machine learning models and develop a general training framework to impose fairness.
In particular, we propose a min-max formulation which balances accuracy and fairness when solved to optimality. For the case of discrete sensitive attributes, we suggest an iterative algorithm with a theoretical convergence guarantee for solving the proposed min-max problem. Our algorithm and analysis are then specialized to fair classification and fair clustering problems. To demonstrate the performance of the proposed Rényi fair inference framework in practice, we compare it with well-known existing methods on several benchmark datasets. Experiments indicate that the proposed method has favorable empirical performance against state-of-the-art approaches.

# 1 INTRODUCTION

As we experience the widespread adoption of machine learning models in automated decision-making, we have witnessed increased reports of instances in which the employed model results in discrimination against certain groups of individuals; see Datta et al. (2015); Sweeney (2013); Bolukbasi et al. (2016); Angwin et al. (2016). In this context, discrimination is defined as the unwanted distinction against individuals based on their membership in a specific group. For instance, Angwin et al. (2016) present an example of a computer-based risk assessment model for recidivism which is biased against certain ethnicities. In another example, Datta et al. (2015) demonstrate gender discrimination in online advertisements for web pages associated with employment. These observations motivated researchers to pay special attention to fairness in machine learning in recent years; see Calmon et al. (2017); Feldman et al. (2015); Hardt et al. (2016); Zhang et al. (2018); Xu et al. (2018); Dwork et al. (2018); Fish et al. (2016); Woodworth et al. (2017); Zafar et al. (2017; 2015); Pérez-Suay et al. (2017); Bechavod & Ligett (2017); Liao et al. (2019).

In addition to its ethical standpoint, equal treatment of different groups is legally required in many countries (Act., 1964).
Anti-discrimination laws in many countries evaluate fairness via notions such as disparate treatment and disparate impact. We say a decision-making process suffers from disparate treatment if its decisions discriminate against individuals of a certain protected group based on their sensitive/protected attribute information. On the other hand, we say it suffers from disparate impact if its decisions adversely affect a protected group of individuals with a certain sensitive attribute; see Zafar et al. (2015). In simpler terms, disparate treatment is intentional discrimination against a protected group, while disparate impact is an unintentional disproportionate outcome that hurts a protected group. To quantify fairness, several notions of fairness have been proposed over the past decade Calders et al. (2009); Hardt et al. (2016). Examples of these notions include demographic parity, equalized odds, and equalized opportunity.

The demographic parity condition requires that the model output (e.g., the assigned label) be independent of sensitive attributes. This definition may not be desirable when the base ground-truth outcomes of the two groups are completely different. This shortcoming motivated the equalized odds notion (Hardt et al., 2016), which requires that the model output be conditionally independent of sensitive attributes given the ground-truth label. Finally, equalized opportunity requires equal false positive or false negative rates across protected groups.

Machine learning approaches for imposing fairness can be broadly classified into three main categories: pre-processing methods, post-processing methods, and in-processing methods. Pre-processing methods modify the training data to remove discriminatory information before passing the data to the decision-making process Calders et al. (2009); Feldman et al. (2015); Kamiran & Calders (2010; 2009; 2012); Dwork et al. (2012); Calmon et al. (2017); Ruggieri (2014).
These methods map the training data to a transformed space in which the dependencies between the class label and the sensitive attributes are removed Edwards & Storkey (2015); Hardt et al. (2016); Xu et al. (2018); Sattigeri et al. (2018); Raff & Sylvester (2018); Madras et al. (2018); Zemel et al. (2013); Louizos et al. (2015). On the other hand, post-processing methods adjust the output of a trained classifier to remove discrimination while maintaining high classification accuracy Fish et al. (2016); Dwork et al. (2018); Woodworth et al. (2017). The third category is the in-processing approach, which enforces fairness by either introducing constraints or adding a regularization term to the training procedure Zafar et al. (2017; 2015); Pérez-Suay et al. (2017); Bechavod & Ligett (2017); Berk et al. (2017); Agarwal et al. (2018); Celis et al. (2019); Donini et al. (2018); Rezaei et al. (2019); Kamishima et al. (2011); Zhang et al. (2018); Bechavod & Ligett (2017); Kearns et al. (2017); Menon & Williamson (2018); Alabi et al. (2018). The Rényi fair inference framework proposed in this paper also belongs to this in-processing category.

Among in-processing methods, many add a regularization term or constraints to promote statistical independence between the classifier output and the sensitive attributes. To do so, various independence proxies such as mutual information Kamishima et al. (2011), false positive/negative rates Bechavod & Ligett (2017), equalized odds Donini et al. (2018), the Pearson correlation coefficient Zafar et al. (2015; 2017), and the Hilbert-Schmidt independence criterion (HSIC) Pérez-Suay et al. (2017) have been used. As will be discussed in Section 2, many of these methods cannot capture nonlinear dependence between random variables and/or lead to computationally expensive algorithms. Motivated by these limitations, we propose to use Rényi correlation to impose several known group fairness measures.
Rényi correlation captures nonlinear dependence between random variables. Moreover, Rényi correlation is a normalized measure and can be computed efficiently in certain instances.

Using the Rényi correlation coefficient as a regularization term, we propose a min-max optimization framework for fair statistical inference. In particular, we specialize our framework to both classification and clustering tasks. We show that when the sensitive attribute(s) is discrete (e.g., gender and/or race), the learning task can be efficiently solved to optimality using a simple gradient ascent-descent approach. We summarize our contributions next:

- We introduce Rényi correlation as a tool to impose several notions of group fairness. Unlike the Pearson correlation and HSIC, which only capture linear dependence, Rényi correlation captures any statistical dependence between random variables, since zero Rényi correlation implies independence. Moreover, it is more computationally efficient than mutual information regularizers approximated by neural networks.
- Using Rényi correlation as a regularization term in training, we propose a min-max formulation for fair statistical inference. Unlike methods that use an adversarial neural network to impose fairness, we show that in particular instances, such as binary classification or discrete sensitive variable(s), it suffices to use a simple quadratic function as the adversarial objective. This observation allows us to develop a simple multi-step gradient ascent-descent algorithm for fair inference and to guarantee its theoretical convergence to first-order stationarity.
- Our Rényi correlation framework leads to a natural fair classification method and a novel fair $K$-means clustering algorithm. For the $K$-means clustering problem, we show that a sufficiently large regularization coefficient yields perfect fairness under the disparate impact doctrine. Unlike the two-phase methods proposed in Chierichetti et al. (2017); Backurs et al.
(2019); Rösner & Schmidt (2018); Bercea et al. (2018); Schmidt et al. (2018), our method does not require any preprocessing step, is scalable, and allows for regulating the trade-off between clustering quality and fairness.

# 2 RÉNYI CORRELATION

The most widely used notions of group fairness in machine learning are demographic parity, equalized odds, and equalized opportunity. These notions require (conditional) independence between a certain model output and a sensitive attribute. This independence is typically imposed by adding fairness constraints or regularization terms to the training objective function. For instance, Kamishima et al. (2011) added a regularization term based on mutual information. Since estimating the mutual information between the model output and sensitive variables during training is not computationally tractable, Kamishima et al. (2011) approximate the probability density functions using a logistic regression model. To obtain a tighter estimate, Song et al. (2019) used an adversarial approach that estimates the joint probability density function with a parameterized neural network. Although these works start from a well-justified objective function, they end up solving approximations of that objective due to computational barriers. Thus, no fairness guarantee is provided even when the resulting optimization problems are solved to global optimality in the large sample size limit.

A more tractable measure of dependence between two random variables is the Pearson correlation. The Pearson correlation coefficient between two random variables $A$ and $B$ is defined as $\rho_{P}(A,B) = \frac{\operatorname{Cov}(A,B)}{\sqrt{\operatorname{Var}(A)}\sqrt{\operatorname{Var}(B)}}$, where $\operatorname{Cov}(\cdot, \cdot)$ denotes the covariance and $\operatorname{Var}(\cdot)$ denotes the variance. The Pearson correlation coefficient is used in Zafar et al. (2015) to decorrelate the binary sensitive attribute and the decision boundary of the classifier. A major drawback of the Pearson correlation is that it only captures linear dependencies between random variables. In fact, two random variables $A$ and $B$ may have strong dependence but zero Pearson correlation. This property raises concerns about the use of the Pearson correlation for imposing fairness. Similar to the Pearson correlation, the HSIC measure used in Pérez-Suay et al. (2017) may be zero even if the two variables have strong dependencies. While universal kernels can be used to alleviate this issue, they may come at the expense of computational tractability. In addition, HSIC is not a normalized dependence measure Gretton et al. (2005b;a), which raises concerns about the appropriateness of using it as a measure of dependence.

In this paper, we suggest using the Hirschfeld-Gebelein-Rényi correlation Rényi (1959); Hirschfeld (1935); Gebelein (1941) as a dependence measure between random variables to impose fairness. Rényi correlation, also known as maximal correlation, between two random variables $A$ and $B$ is defined as

$$
\rho_{R}(A, B) = \sup_{f, g} \mathbb{E}[f(A) g(B)] \quad \text{s.t.} \quad \mathbb{E}[f(A)] = \mathbb{E}[g(B)] = 0, \quad \mathbb{E}[f^{2}(A)] = \mathbb{E}[g^{2}(B)] = 1, \tag{1}
$$

where the supremum is over the set of measurable functions $f(\cdot)$ and $g(\cdot)$ satisfying the constraints. Unlike HSIC and the Pearson correlation, Rényi correlation captures higher-order dependencies between random variables. The Rényi correlation between two random variables is zero if and only if the random variables are independent, and it is one if there is a strict dependence between the variables Rényi (1959). These favorable statistical properties of $\rho_{R}$ do not come at the price of computational intractability, as opposed to other measures such as mutual information.
In fact, as we will discuss in Section 3, $\rho_{R}$ can be used in a computationally tractable framework to impose several group fairness notions.

# 3 A GENERAL MIN-MAX FRAMEWORK FOR RÉNYI FAIR INFERENCE

Consider a learning task over a given random variable $\mathbf{Z}$. Our goal is to minimize the average inference loss $\mathcal{L}(\cdot)$, where the loss function is parameterized by a parameter $\boldsymbol{\theta}$. To find the value of $\boldsymbol{\theta}$ with the smallest average loss, we solve the following optimization problem

$$
\min_{\boldsymbol{\theta}} \mathbb{E}\left[\mathcal{L}(\boldsymbol{\theta}, \mathbf{Z})\right],
$$

where the expectation is taken over $\mathbf{Z}$ and possible regularization terms are absorbed into the loss function $\mathcal{L}(\cdot)$. Notice that this formulation is quite general and includes regression, classification, clustering, and dimensionality reduction tasks as special cases. As an example, in the case of linear regression, $\mathbf{Z} = (\mathbf{X}, Y)$ and $\mathcal{L}(\boldsymbol{\theta}, \mathbf{Z}) = (Y - \boldsymbol{\theta}^T\mathbf{X})^2$, where $\mathbf{X}$ is a random feature vector and $Y$ is the random target variable.

Assume that, in addition to minimizing the average loss, we are interested in bringing fairness to our learning task. Let $S$ be a sensitive attribute (such as age or gender) and $\widehat{Y}_{\boldsymbol{\theta}}(\mathbf{Z})$ be a certain output of our inference task using parameter $\boldsymbol{\theta}$. Assume we are interested in removing/reducing the dependence between the random variable $\widehat{Y}_{\boldsymbol{\theta}}(\mathbf{Z})$ and the sensitive attribute $S$.
To balance goodness-of-fit and fairness, one can solve the following optimization problem

$$
\min_{\boldsymbol{\theta}} \mathbb{E}\left[\mathcal{L}(\boldsymbol{\theta}, \mathbf{Z})\right] + \lambda \rho_{R}^{2}\left(\widehat{Y}_{\boldsymbol{\theta}}(\mathbf{Z}), S\right), \tag{2}
$$

where $\lambda$ is a positive scalar balancing fairness and goodness-of-fit. Notice that the above framework is quite general. For example, $\widehat{Y}_{\boldsymbol{\theta}}$ may be the assigned label in a classification task, the assigned cluster in a clustering task, or the output of a regressor in a regression task.

Using the definition of Rényi correlation, we can rewrite the optimization problem in equation 2 as

$$
\begin{array}{l} \min_{\boldsymbol{\theta}} \sup_{f, g} \mathbb{E}[\mathcal{L}(\boldsymbol{\theta}, \mathbf{Z})] + \lambda \left(\mathbb{E}[f(\widehat{Y}_{\boldsymbol{\theta}}(\mathbf{Z})) g(S)]\right)^{2}, \tag{3} \\ \text{s.t.} \quad \mathbb{E}\big[f(\widehat{Y}_{\boldsymbol{\theta}}(\mathbf{Z}))\big] = \mathbb{E}\big[g(S)\big] = 0, \quad \mathbb{E}\big[f^{2}(\widehat{Y}_{\boldsymbol{\theta}}(\mathbf{Z}))\big] = \mathbb{E}\big[g^{2}(S)\big] = 1, \end{array}
$$

where the supremum is taken over the set of measurable functions. The next natural question is whether this optimization problem can be solved efficiently in practice. This question motivates the discussion in the following subsection.

# 3.1 COMPUTING RÉNYI CORRELATION

The objective function in equation 3 may be non-convex in $\boldsymbol{\theta}$ in general. Several algorithms have recently been proposed for solving such non-convex min-max optimization problems Sanjabi et al. (2018); Nouiehed et al. (2019); Jin et al. (2019). Most of these methods require solving the inner maximization problem to (approximate) global optimality. More precisely, we need to be able to solve the optimization problem described in equation 1.
While popular heuristic approaches such as parameterizing the functions $f$ and $g$ with neural networks can be used to solve equation 1, we focus on solving this problem in a more rigorous manner. In particular, we narrow our focus to the case of discrete random variables. This case covers many practical sensitive attributes, such as gender and race. In what follows, we show that in this case, equation 1 can be solved "efficiently" to global optimality.

Theorem 3.1 (Witsenhausen (1975)). Let $a \in \{a_1, \ldots, a_c\}$ and $b \in \{b_1, \ldots, b_d\}$ be two discrete random variables. Then the Rényi coefficient $\rho_R(a, b)$ is equal to the second largest singular value of the matrix $\mathbf{Q} = [q_{ij}]_{i,j} \in \mathbb{R}^{c \times d}$ , where $q_{ij} = \frac{\mathbb{P}(a = a_i, b = b_j)}{\sqrt{\mathbb{P}(a = a_i)\mathbb{P}(b = b_j)}}$ .

The above theorem provides a computationally tractable approach for computing the Rényi coefficient. This computation can be simplified further when one of the random variables is binary.

Theorem 3.2. Suppose that $a \in \{1, \ldots, c\}$ is a discrete random variable and $b \in \{0, 1\}$ is a binary random variable. Let $\tilde{\mathbf{a}}$ be a one-hot encoding of $a$ , i.e., $\tilde{\mathbf{a}} = \mathbf{e}_i$ if $a = i$ , where $\mathbf{e}_i = (0, \ldots, 0, 1, 0, \ldots, 0)$ is the $i$ -th standard unit vector. Let $\tilde{b} = b - 1/2$ . Then,

$$
\rho_ {R} (a, b) = \sqrt {1 - \frac {\gamma}{\mathbb {P} (b = 1) \mathbb {P} (b = 0)}},
$$

where $\gamma \triangleq \min_{\mathbf{w} \in \mathbb{R}^c} \mathbb{E}\left[(\mathbf{w}^T \widetilde{\mathbf{a}} - \widetilde{b})^2\right]$ . Equivalently,

$$
\gamma \triangleq \min _ {\mathbf {w} \in \mathbb {R} ^ {c}} \quad \sum_ {i = 1} ^ {c} w _ {i} ^ {2} \mathbb {P} (a = i) - \sum_ {i = 1} ^ {c} w _ {i} \big (\mathbb {P} (a = i, b = 1) - \mathbb {P} (a = i, b = 0) \big) + 1 / 4.
$$

Proof. The proof is relegated to the appendix.
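Theorem 3.1 turns the variational definition of Rényi correlation into a plain linear-algebra computation. As a minimal sketch (our own illustration, not code from the paper), the Rényi coefficient of two discrete variables can be read off from an SVD of the matrix $\mathbf{Q}$:

```python
import numpy as np

def renyi_coefficient(joint):
    """Rényi (maximal) correlation of two discrete random variables,
    computed from their joint probability table via Theorem 3.1:
    the second-largest singular value of Q, where
    q_ij = P(a=a_i, b=b_j) / sqrt(P(a=a_i) * P(b=b_j))."""
    joint = np.asarray(joint, dtype=float)
    pa = joint.sum(axis=1)                    # marginal of a
    pb = joint.sum(axis=0)                    # marginal of b
    Q = joint / np.sqrt(np.outer(pa, pb))
    sv = np.linalg.svd(Q, compute_uv=False)   # sorted in decreasing order
    return sv[1]

# Independent variables (joint = product of marginals): rho_R = 0.
pa, pb = np.array([0.3, 0.7]), np.array([0.5, 0.2, 0.3])
print(renyi_coefficient(np.outer(pa, pb)))    # ~0.0

# Fully dependent variables (a = b almost surely): rho_R = 1.
print(renyi_coefficient(np.diag([0.4, 0.6])))  # ~1.0
```

The largest singular value of $\mathbf{Q}$ is always 1, attained by the vectors $(\sqrt{\mathbb{P}(a=a_i)})_i$ and $(\sqrt{\mathbb{P}(b=b_j)})_j$, which is why the second singular value is the one carrying the dependence information.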
![](images/e9ed2e087ce27519ac4de3c432566a9c29b497bb76888e611bbf48f48e15bfd2.jpg)

Let us specialize our framework to classification and clustering problems in the next two sections.

# 4 RÉNYI FAIR CLASSIFICATION

In a typical (multi-class) classification problem, we are given samples from a random variable $\mathbf{Z} \triangleq (\mathbf{X}, Y)$ and the goal is to predict $Y$ from $\mathbf{X}$ . Here $\mathbf{X} \in \mathbb{R}^d$ is the input feature vector, and $Y \in \mathcal{Y} \triangleq \{1, \dots, c\}$ is the class label. Let $\widehat{Y}_{\boldsymbol{\theta}}$ be the output of our classifier, taking values in the set $\{1, \dots, c\}$ . Assume further that

$$
\mathbb {P} (\widehat {Y} _ {\boldsymbol {\theta}} = i \mid \mathbf {X}) = \mathcal {F} _ {i} (\boldsymbol {\theta}, \mathbf {X}), \quad \forall i = 1, \ldots , c.
$$

Here $\boldsymbol{\theta}$ is the parameter of the classifier that needs to be tuned. For example, $\mathcal{F}(\boldsymbol{\theta},\mathbf{X}) = (\mathcal{F}_1(\boldsymbol{\theta},\mathbf{X}),\dots ,\mathcal{F}_c(\boldsymbol{\theta},\mathbf{X}))$ could represent the output of a neural network after the softmax layer, the soft probability label assigned by a logistic regression model, or the 0-1 probability values obtained by a deterministic classifier. In order to find the optimal parameter $\boldsymbol{\theta}$ , we need to solve the optimization problem

$$
\min _ {\boldsymbol {\theta}} \mathbb {E} \left[ \mathcal {L} \left(\mathcal {F} (\boldsymbol {\theta}, \mathbf {X}), Y\right) \right], \tag {4}
$$

where $\mathcal{L}$ is the loss function and the expectation is taken over the random variable $\mathbf{Z} = (\mathbf{X}, Y)$ . Let $S$ be the sensitive attribute. We say a model satisfies demographic parity if the assigned label $\widehat{Y}$ is independent of the sensitive attribute $S$ ; see Dwork et al. (2012).
Using our regularization framework, to find the optimal parameter $\theta$ balancing classification accuracy and fairness objective, we need to solve + +$$ +\min _ {\boldsymbol {\theta}} \mathbb {E} \left[ \mathcal {L} \left(\mathcal {F} (\boldsymbol {\theta}, \mathbf {X}), Y\right) \right] + \lambda \rho_ {R} ^ {2} \left(\widehat {Y} _ {\boldsymbol {\theta}}, S\right). \tag {5} +$$ + +# 4.1 GENERAL DISCRETE CASE + +When $S \in \{s_1, \ldots, s_d\}$ is discrete, Theorem 3.1 implies that equation 5 can be rewritten as + +$$ +\min _ {\boldsymbol {\theta}} \max _ {\mathbf {v} \perp \mathbf {v} _ {1}, \| \mathbf {v} \| ^ {2} \leq 1} \left(f _ {D} (\boldsymbol {\theta}, \mathbf {v}) \triangleq \mathbb {E} \left[ \mathcal {L} \left(\mathcal {F} (\boldsymbol {\theta}, \mathbf {X}), \mathbf {Y}\right) \right] + \lambda \mathbf {v} ^ {T} \mathbf {Q} _ {\boldsymbol {\theta}} ^ {T} \mathbf {Q} _ {\boldsymbol {\theta}} \mathbf {v}\right). \tag {6} +$$ + +Here $\mathbf{v}_1 = \left[\sqrt{\mathbb{P}(S = s_1)},\dots ,\sqrt{\mathbb{P}(S = s_d)}\right]\in \mathbb{R}^d$ is the right singular vector corresponding to the largest singular value of $\mathbf{Q}_{\pmb{\theta}} = [q_{ij}]_{i,j}\in \mathbb{R}^{c\times d}$ , with $q_{ij}\triangleq \frac{\mathbb{P}(\widehat{Y}_{\pmb{\theta}} = i|S = s_j)\mathbb{P}(S = s_j)}{\sqrt{\mathbb{P}(\widehat{Y}_{\pmb{\theta}} = i)\mathbb{P}(S = s_j)}}$ . 
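The pieces of equation 6 are straightforward to assemble numerically. The sketch below is our own illustration (not the paper's code): `F` is an $N \times c$ array of soft predictions $\mathcal{F}_i(\boldsymbol{\theta}, \mathbf{x}_n)$ and `s` holds the sensitive-attribute labels; the inner maximizer $\mathbf{v}$ is then the second right singular vector of the estimated $\mathbf{Q}_{\boldsymbol{\theta}}$.

```python
import numpy as np

def estimate_Q(F, s, d):
    """Empirical estimate of the c-by-d matrix Q_theta in equation 6.
    F: (N, c) soft predictions, row n holds F_i(theta, x_n) for i = 1..c.
    s: (N,) sensitive-attribute labels taking values in {0, ..., d-1}."""
    N = F.shape[0]
    p_y = F.mean(axis=0)                      # P(Yhat = i)
    p_s = np.bincount(s, minlength=d) / N     # P(S = s_j)
    Q = np.empty((F.shape[1], d))
    for j in range(d):
        p_y_given_s = F[s == j].mean(axis=0)  # P(Yhat = i | S = s_j)
        Q[:, j] = p_y_given_s * p_s[j] / np.sqrt(p_y * p_s[j])
    return Q

def inner_maximizer(Q):
    """Right singular vector of Q for its second-largest singular value;
    this is the v solving the inner max in equation 6."""
    _, _, vt = np.linalg.svd(Q)
    return vt[1]
```

By construction, $\mathbf{Q}_{\boldsymbol{\theta}}$ maps $\mathbf{v}_1 = (\sqrt{\mathbb{P}(S=s_j)})_j$ to $(\sqrt{\mathbb{P}(\widehat{Y}_{\boldsymbol{\theta}}=i)})_i$ with singular value 1, so the second right singular vector is automatically orthogonal to $\mathbf{v}_1$, matching the constraint in equation 6.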
Given training data $(\mathbf{x}_n, y_n)_{n=1}^N$ sampled from the random variable $\mathbf{Z} = (\mathbf{X}, Y)$ , we can estimate the entries of the matrix $\mathbf{Q}_{\boldsymbol{\theta}}$ using $\mathbb{P}\big(\widehat{Y}_{\boldsymbol{\theta}} = i\big) = \mathbb{E}[\mathbb{P}(\widehat{Y}_{\boldsymbol{\theta}} = i \mid \mathbf{X})] \approx \frac{1}{N} \sum_{n=1}^{N} \mathcal{F}_i(\boldsymbol{\theta}, \mathbf{x}_n)$ , and $\mathbb{P}\big(\widehat{Y}_{\boldsymbol{\theta}} = i \mid S = s_j\big) \approx \frac{1}{|\mathcal{X}_j|} \sum_{\mathbf{x} \in \mathcal{X}_j} \mathcal{F}_i(\boldsymbol{\theta}, \mathbf{x})$ , where $\mathcal{X}_j$ is the set of samples with sensitive attribute $s_j$ . Motivated by the algorithm proposed in Jin et al. (2019), we present Algorithm 1 for solving equation 6.

# Algorithm 1 Rényi Fair Classifier for Discrete Sensitive Attributes

1: Input: $\boldsymbol{\theta}^{0} \in \Theta$ , step-size $\eta$ .
2: for $t = 0,1,\dots ,T$ do
3: Set $\mathbf{v}^{t + 1}\gets \arg \max_{\mathbf{v}\perp \mathbf{v}_1,\| \mathbf{v}\| \leq 1}f_D(\boldsymbol{\theta}^t,\mathbf{v})$ by finding the second right singular vector of $\mathbf{Q}_{\boldsymbol{\theta}^t}$
4: Set $\boldsymbol{\theta}^{t + 1}\gets \boldsymbol{\theta}^t -\eta \nabla_{\boldsymbol{\theta}}f_D(\boldsymbol{\theta}^t,\mathbf{v}^{t + 1})$
5: end for

To understand the convergence behavior of Algorithm 1 for the nonconvex optimization problem in equation 6, we first need to define an approximate stationary solution. Let us define $g(\boldsymbol{\theta}) \triangleq \max_{\mathbf{v} \perp \mathbf{v}_1, \| \mathbf{v} \| \leq 1} f(\boldsymbol{\theta}, \mathbf{v})$ . If, in addition, $f(\cdot, \mathbf{v})$ has an $L_1$ -Lipschitz gradient, then $g(\cdot)$ is $L_1$ -weakly convex; see Rafique et al. (2018) for more details.
For such weakly convex functions, we say $\boldsymbol{\theta}^*$ is an $\varepsilon$ -stationary solution if the gradient of its Moreau envelope is smaller than $\varepsilon$ , i.e., $\| \nabla g_\beta(\boldsymbol{\theta}^*) \| \leq \varepsilon$ , where $g_\beta(\boldsymbol{\theta}) \triangleq \min_{\boldsymbol{\theta}'} g(\boldsymbol{\theta}') + \frac{1}{2\beta} \| \boldsymbol{\theta} - \boldsymbol{\theta}' \|^2$ and $\beta < \frac{1}{2L_1}$ is a given constant. The following theorem demonstrates the convergence of Algorithm 1. This theorem is a direct consequence of Theorem 27 in Jin et al. (2019).

Theorem 4.1. Suppose that $f$ is $L_0$ -Lipschitz and $L_1$ -gradient Lipschitz. Then Algorithm 1 computes an $\varepsilon$ -stationary solution of the objective function in equation 6 in $\mathcal{O}(\varepsilon^{-4})$ iterations.

# 4.2 BINARY CASE

When $S$ is binary, we can obtain a more efficient algorithm than Algorithm 1 by exploiting Theorem 3.2. In particular, by a simple scaling of $\lambda$ and ignoring the constant terms, the optimization problem in equation 5 can be written as

$$
\begin{array}{l} \min _ {\boldsymbol {\theta}} \max _ {\mathbf {w}} f (\boldsymbol {\theta}, \mathbf {w}) \triangleq \mathbb {E} \left[ \mathcal {L} \left(\mathcal {F} (\boldsymbol {\theta}, \mathbf {X}), Y\right) \right] - \lambda \left[ \sum_ {i = 1} ^ {c} w _ {i} ^ {2} \mathbb {P} \left(\widehat {Y} _ {\boldsymbol {\theta}} = i\right) \right. \tag {7} \\ \left. - \sum_ {i = 1} ^ {c} w _ {i} \left(\mathbb {P} \left(\widehat {Y} _ {\boldsymbol {\theta}} = i, S = 1\right) - \mathbb {P} \left(\widehat {Y} _ {\boldsymbol {\theta}} = i, S = 0\right)\right) \right].
\\ \end{array} +$$ + +Defining $\tilde{S} = 2S - 1$ , the above problem can be rewritten as + +$$ +\min _ {\boldsymbol {\theta}} \max _ {\mathbf {w}} \mathbb {E} \left[ \mathcal {L} (\mathcal {F} (\boldsymbol {\theta}, \mathbf {X}), Y) - \lambda \sum_ {i = 1} ^ {c} w _ {i} ^ {2} \mathcal {F} _ {i} (\boldsymbol {\theta}, \mathbf {X}) + \lambda \sum_ {i = 1} ^ {c} w _ {i} \widetilde {S} \mathcal {F} _ {i} (\boldsymbol {\theta}, \mathbf {X}) \right]. +$$ + +Thus, given training data $(\mathbf{x}_n,y_n)_{n = 1}^N$ sampled from the random variable $\mathbf{Z} = (\mathbf{X},Y)$ , we solve + +$$ +\min _ {\boldsymbol {\theta}} \max _ {\mathbf {w}} \left[ f _ {B} (\boldsymbol {\theta}, \mathbf {w}) \triangleq \frac {1}{N} \sum_ {n = 1} ^ {N} \left[ \mathcal {L} \left(\mathcal {F} \left(\boldsymbol {\theta}, \mathbf {x} _ {n}\right), y _ {n}\right) - \lambda \sum_ {i = 1} ^ {c} w _ {i} ^ {2} \mathcal {F} _ {i} \left(\boldsymbol {\theta}, \mathbf {x} _ {n}\right) + \lambda \sum_ {i = 1} ^ {c} w _ {i} \widetilde {s} _ {n} \mathcal {F} _ {i} \left(\boldsymbol {\theta}, \mathbf {x} _ {n}\right) \right] \right]. \tag {8} +$$ + +Notice that the maximization problem in equation 8 is concave, separable, and has a closed-form solution. We propose Algorithm 2 for solving equation 8. 
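Concretely, setting the gradient of the concave quadratic in $\mathbf{w}$ to zero gives $w_i^* = \sum_n \widetilde{s}_n \mathcal{F}_i(\boldsymbol{\theta}, \mathbf{x}_n) \big/ \big(2 \sum_n \mathcal{F}_i(\boldsymbol{\theta}, \mathbf{x}_n)\big)$. The following small numerical check (our own sketch, with hypothetical arrays `F` and `s_tilde`) confirms that this closed form maximizes the $\mathbf{w}$-dependent part of $f_B$:

```python
import numpy as np

def w_closed_form(F, s_tilde):
    """Maximizer of the (concave, separable) w-part of f_B in equation 8.
    F: (N, c) soft predictions F_i(theta, x_n); s_tilde: (N,) in {-1, +1}."""
    return (s_tilde @ F) / (2.0 * F.sum(axis=0))

def fB_w_part(F, s_tilde, w, lam=1.0):
    """The w-dependent terms of f_B, averaged over the sample."""
    return (-lam * (F @ w**2) + lam * s_tilde * (F @ w)).mean()

# Sanity check: no perturbation of w* improves the concave objective.
rng = np.random.default_rng(1)
F = rng.random((50, 4)); F /= F.sum(axis=1, keepdims=True)
s_tilde = rng.choice([-1.0, 1.0], size=50)
w_star = w_closed_form(F, s_tilde)
for _ in range(5):
    w_pert = w_star + 0.1 * rng.standard_normal(4)
    assert fB_w_part(F, s_tilde, w_star) >= fB_w_part(F, s_tilde, w_pert)
```

Since each coordinate enters through a separate concave parabola with a strictly positive quadratic coefficient (Assumption 4.1 below guarantees $\sum_n \mathcal{F}_i > 0$), the maximizer is unique and can be written down without any iterative solver.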
# Algorithm 2 Rényi Fair Classifier for Binary Sensitive Attributes

1: Input: $\boldsymbol{\theta}^0\in \Theta$ , step-size $\eta$
2: for $t = 0,1,\dots ,T$ do
3: Set $\mathbf{w}^{t + 1}\gets \arg \max_{\mathbf{w}}f_B(\boldsymbol {\theta}^t,\mathbf{w})$ , i.e., set $w_{i}^{t + 1}\gets \frac{\sum_{n = 1}^{N}\widetilde{s}_{n}\mathcal{F}_{i}(\boldsymbol{\theta}^{t},\mathbf{x}_{n})}{2\sum_{n = 1}^{N}\mathcal{F}_{i}(\boldsymbol{\theta}^{t},\mathbf{x}_{n})},\quad \forall i = 1,\ldots ,c$
4: Set $\boldsymbol{\theta}^{t + 1}\gets \boldsymbol{\theta}^t -\eta \nabla_{\boldsymbol{\theta}}f_B(\boldsymbol{\theta}^t,\mathbf{w}^{t + 1})$
5: end for

While the result in Theorem 4.1 applies to Algorithm 2, under the following assumption we can show a superior convergence rate.

Assumption 4.1. We assume that there exists a constant scalar $\mu >0$ such that $\sum_{n = 1}^{N}\mathcal{F}_i(\boldsymbol {\theta},\mathbf{x}_n)\geq \mu ,\quad \forall i = 1,\ldots ,c.$

This assumption is reasonable when a softmax layer is used. This is because we can always assume $\boldsymbol{\theta}$ lies in a compact set in practice, and hence the output of the softmax layer cannot be arbitrarily small.

Theorem 4.2. Suppose that $f$ is $L_{1}$ -gradient Lipschitz. Then Algorithm 2 computes an $\varepsilon$ -stationary solution of the objective function in equation 8 in $\mathcal{O}(\varepsilon^{-2})$ iterations.

Proof. The proof is relegated to the appendix.

Notice that this convergence rate is faster than the one obtained in Theorem 4.1. Moreover, it matches the oracle lower bound for general non-convex optimization; see Carmon et al. (2019). This observation shows that the computational overhead of imposing fairness is negligible compared to solving the original non-convex training problem without imposing fairness.

Remark 4.3 (Extension to multiple sensitive attributes).
Our discrete Rényi classification framework naturally extends to multiple discrete sensitive attributes by concatenating all attributes into one. For instance, when we have two sensitive attributes $S^1 \in \{0,1\}$ and $S^2 \in \{0,1\}$ , we can consider them as a single attribute $S \in \{0,1,2,3\}$ corresponding to the four combinations $\{(S^1 = 0, S^2 = 0), (S^1 = 0, S^2 = 1), (S^1 = 1, S^2 = 0), (S^1 = 1, S^2 = 1)\}$ .

Remark 4.4 (Extension to other notions of fairness). Our proposed framework imposes the demographic parity notion of group fairness. However, other notions of group fairness may also be represented by (conditional) independence conditions, and for such cases we can again apply our framework. For example, we say a predictor $\hat{Y}_{\boldsymbol{\theta}}$ satisfies the equalized odds condition if $\hat{Y}_{\boldsymbol{\theta}}$ is conditionally independent of the sensitive attribute $S$ given the true label $Y$ . Similar to equation 5, the equalized odds fairness notion can be achieved by the following min-max problem

$$
\min _ {\boldsymbol {\theta}} \quad \mathbb {E} \left[ \mathcal {L} \left(\mathcal {F} (\boldsymbol {\theta}, \mathbf {X}), Y\right) \right] + \lambda \sum_ {y \in \mathcal {Y}} \rho_ {R} ^ {2} \left(\hat {Y} _ {\boldsymbol {\theta}}, S \mid Y = y\right). \tag {9}
$$

# 5 RÉNYI FAIR CLUSTERING

In this section, we apply the proposed fair Rényi framework to the widely used $K$ -means clustering problem. Given a set of data points $\mathbf{x}_1,\ldots ,\mathbf{x}_N\in \mathbb{R}^{d}$ , the $K$ -means problem seeks to partition them into $K$ clusters such that the following objective function is minimized:

$$
\min _ {\mathbf {A}, \mathbf {C}} \sum_ {n = 1} ^ {N} \sum_ {k = 1} ^ {K} a _ {k n} \| \mathbf {x} _ {n} - \mathbf {c} _ {k} \| ^ {2} \quad \text {s.
t.} \quad \sum_ {k = 1} ^ {K} a _ {k n} = 1, \forall n, \quad a _ {k n} \in \{0, 1 \}, \forall k, n \tag {10}
$$

where $\mathbf{c}_k$ is the centroid of cluster $k$ ; the variable $a_{kn} = 1$ if data point $\mathbf{x}_n$ belongs to cluster $k$ and zero otherwise; $\mathbf{A} = [a_{kn}]_{k,n}$ and $\mathbf{C} = [\mathbf{c}_1, \dots, \mathbf{c}_K]$ represent the association matrix and the cluster centroids, respectively. Now, suppose we have an additional sensitive attribute $S$ for each of the given data points. To obtain a fair clustering under the disparate impact doctrine, we need to make the random variable $\mathbf{a}_n = [a_{1n}, \dots, a_{Kn}]$ independent of $S$ . In other words, we need to make the clustering assignment independent of the sensitive attribute $S$ . Using our framework in equation 2, we can easily add a regularizer to this problem to impose fairness under the disparate impact doctrine. In particular, for a binary sensitive attribute $S$ , using Theorem 3.2 and absorbing the constants into the hyperparameter $\lambda$ , we need to solve

$$
\min _ {\mathbf {A}, \mathbf {C}} \max _ {\mathbf {w} \in \mathbb {R} ^ {K}} \sum_ {n = 1} ^ {N} \sum_ {k = 1} ^ {K} a _ {k n} \| \mathbf {x} _ {n} - \mathbf {c} _ {k} \| ^ {2} - \lambda \sum_ {n = 1} ^ {N} \left(\mathbf {a} _ {n} ^ {T} \mathbf {w} - s _ {n}\right) ^ {2} \tag {11}
$$

$$
\begin{array}{l} \text {s.t.} \quad \sum_ {k = 1} ^ {K} a _ {k n} = 1, \quad \forall n, \quad a _ {k n} \in \{0, 1 \}, \quad \forall k, n, \end{array}
$$

where $\mathbf{a}_n = (a_{1n},\ldots ,a_{Kn})^T$ encodes the clustering information of data point $\mathbf{x}_n$ and $s_n$ is the sensitive attribute of data point $n$ .

Fixing the assignment matrix $\mathbf{A}$ and the cluster centers $\mathbf{C}$ , the vector $\mathbf{w}$ can be updated in closed form. More specifically, at each iteration $w_{k}$ equals the current proportion of the privileged group in the $k$ -th cluster.
Combining this idea with the update rules of the assignments and cluster centers in the standard $K$-means algorithm, we propose Algorithm 3, a fair $K$ -means algorithm under the disparate impact doctrine. To illustrate the behavior of the algorithm, a toy example is presented in Appendix C.

The main difference between this algorithm and the popular $K$ -means algorithm is Step 6 of Algorithm 3. This step results from optimizing equation 11 over $\mathbf{A}$ when both $\mathbf{C}$ and $\mathbf{w}$ are fixed. When $\lambda = 0$ , this step is identical to the update of the cluster assignment variables in $K$ -means. However, when $\lambda > 0$ , Step 6 accounts for fairness in the distance used to update the cluster assignments.

Algorithm 3 Rényi Fair K-means
1: Input: $\mathbf{X} = \{\mathbf{x}_1, \dots, \mathbf{x}_N\}$ and $\mathbf{S} = \{s_1, \dots, s_N\}$
2: Initialize: Random assignment $\mathbf{A}$ s.t. $\sum_{k=1}^{K} a_{kn} = 1, \forall n$ , and $a_{kn} \in \{0,1\}$ . Set $\mathbf{A}_{prev} = \mathbf{0}$ .
3: while $\mathbf{A}_{prev} \neq \mathbf{A}$ do
4: Set $\mathbf{A}_{prev} = \mathbf{A}$
5: for $n = 1, \dots, N$ do
6: $k^* = \arg \min_k \| \mathbf{x}_n - \mathbf{c}_k \|^2 - \lambda (w_k - s_n)^2$
7: Set $a_{k^*n} = 1$ and $a_{kn} = 0$ for all $k \neq k^*$
8: Set $w_k = \frac{\sum_{n=1}^{N} s_n a_{kn}}{\sum_{n=1}^{N} a_{kn}}$ , $\forall k = 1, \dots, K$ .
9: end for
10: Set $\mathbf{c}_k = \frac{\sum_{n=1}^{N} a_{kn} \mathbf{x}_n}{\sum_{n=1}^{N} a_{kn}}$ , $\forall k = 1, \dots, K$ .
11: end while

Remark 5.1. Note that in Algorithm 3, the parameter $\mathbf{w}$ is updated after each assignment of a point to a cluster. More specifically, in every iteration of the algorithm, $\mathbf{w}$ is updated $N$ times. If we instead update $\mathbf{w}$ only after completely updating the matrix $\mathbf{A}$ , a simple counterexample shows that the algorithm can get stuck; see Appendix C.1 for more details.
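Algorithm 3 is only a few lines of code. The sketch below is our own illustration, not code from the paper; the `init` argument and the handling of empty clusters (re-seeding a centroid at a random data point) are pragmatic choices we added for robustness:

```python
import numpy as np

def renyi_fair_kmeans(X, s, K, lam, init=None, max_iter=100, seed=0):
    """Sketch of Algorithm 3: K-means whose assignment step trades squared
    distance against fairness, k* = argmin_k ||x_n - c_k||^2 - lam*(w_k - s_n)^2,
    with w refreshed after every single reassignment (Remark 5.1)."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    assign = np.array(init) if init is not None else rng.integers(0, K, size=N)
    C = np.zeros((K, X.shape[1]))
    for _ in range(max_iter):
        prev = assign.copy()
        for k in range(K):                        # centroid update
            members = assign == k
            C[k] = X[members].mean(axis=0) if members.any() else X[rng.integers(N)]
        for n in range(N):
            # w_k: current fraction of the privileged group (s = 1) in cluster k
            w = np.array([s[assign == k].mean() if (assign == k).any() else 0.5
                          for k in range(K)])
            cost = ((X[n] - C) ** 2).sum(axis=1) - lam * (w - s[n]) ** 2
            assign[n] = int(cost.argmin())
        if np.array_equal(prev, assign):          # A unchanged: converged
            break
    return assign, C
```

With `lam=0` the assignment rule reduces to the usual nearest-centroid step, so the procedure recovers standard $K$-means (Lloyd's algorithm).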
# 6 NUMERICAL EXPERIMENTS

In this section, we evaluate the performance of the proposed Rényi fair classifier and Rényi fair $K$-means algorithm on three standard datasets: the Bank, German Credit, and Adult datasets. A detailed description of these datasets is available in the supplementary material.

We evaluate the performance of our proposed Rényi classifier under both the demographic parity and equality of opportunity notions. We have implemented a logistic regression classifier regularized by Rényi correlation on the Adult dataset, considering gender as the sensitive feature. To measure equality of opportunity, we use the Equality of Opportunity (EO) violation, defined as EO Violation $= |\mathbb{P}(\hat{Y} = 1|S = 1,Y = 1) - \mathbb{P}(\hat{Y} = 1|S = 0,Y = 1)|$ , where $\hat{Y}$ and $Y$ represent the predicted and true labels, respectively. A smaller EO violation corresponds to a fairer solution. Figure 1, parts (a) and (b), demonstrates that as the Rényi regularization coefficient $\lambda$ increases, the EO violation decreases, implying a fairer classifier at the price of higher training and testing errors. Figure 1, part (c), compares the fair Rényi logistic regression model to several existing methods in the literature: Hardt et al. (2016); Zafar et al. (2015); Rezaei et al. (2019); Donini et al. (2018). As we can see in plot (c), the Rényi classifier outperforms the other methods in terms of accuracy for a given level of EO violation.

The better performance of the Rényi fair classifier compared to the baselines can be attributed to the following. Hardt et al. (2016) is a post-processing approach in which the output of the classifier is modified to promote a fair prediction. This modification is done without changing the classification process. Clearly, this approach limits the design space and cannot explore the possibilities that can be reached with "in-processing" methods, where the fairness and classification objectives are optimized jointly. Zafar et al.
(2015) imposes fairness by using linear covariance and thus can only capture linear dependence between the predictor and the sensitive attribute. Consequently, nonlinear dependence may remain and reveal itself in fairness measures such as the DP or EO violation. Rezaei et al. (2019) and Donini et al. (2018), on the other hand, propose to use nonlinear measures of dependence as regularizers. However, due to computational barriers, they approximate the regularizer and solve the approximate problem. The approximation step can have an adverse effect on the performance of the resulting classifier. Notice that while these methods differ in the way they impose fairness, they are all implemented for a logistic regression model (with the exception of the SVM model used in Donini et al. (2018)). Thus, the difference in performance is not due to the classification model used in the experiments.

To show the practical benefits of Rényi correlation over the Pearson correlation and HSIC regularizers under the demographic parity notion, we evaluate the logistic regression classifier regularized by these three measures on the Adult, Bank, and German Credit datasets. For the first two plots, we use $p\% = \min \left( \frac{\mathbb{P}(\hat{Y} = 1|S = 1)}{\mathbb{P}(\hat{Y} = 1|S = 0)}, \frac{\mathbb{P}(\hat{Y} = 1|S = 0)}{\mathbb{P}(\hat{Y} = 1|S = 1)} \right)$ as a measure of fairness. Since $p\%$ is defined only for binary sensitive variables, for the last two plots in Figure 2 (the German dataset with gender and marital status, and the Adult dataset with gender and race as the sensitive features), we use the inverse of the demographic parity (DP) violation as the fairness measure. We define the DP violation as DP Violation $= \max_{a,b} |\mathbb{P}(\hat{Y} = 1|S = a) - \mathbb{P}(\hat{Y} = 1|S = b)|$ . As is evident from the figure, the Rényi classifier outperforms both the HSIC and Pearson classifiers, especially when targeting high levels of fairness.
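The fairness metrics used throughout this section reduce to a few conditional frequencies. A small sketch (ours) computes them for binary predictions; the $p\%$ ratio assumes each sensitive group receives at least one positive prediction:

```python
import numpy as np

def eo_violation(y_hat, y, s):
    """EO Violation = |P(Yhat=1 | S=1, Y=1) - P(Yhat=1 | S=0, Y=1)|."""
    pos = y == 1
    return abs(y_hat[pos & (s == 1)].mean() - y_hat[pos & (s == 0)].mean())

def dp_violation(y_hat, s):
    """DP Violation = max_{a,b} |P(Yhat=1 | S=a) - P(Yhat=1 | S=b)|.
    Works for non-binary S as well."""
    rates = [y_hat[s == v].mean() for v in np.unique(s)]
    return max(rates) - min(rates)

def p_percent(y_hat, s):
    """p% rule for a binary sensitive attribute (higher is fairer)."""
    r1, r0 = y_hat[s == 1].mean(), y_hat[s == 0].mean()
    return min(r1 / r0, r0 / r1)
```

A perfectly fair predictor under demographic parity has `dp_violation == 0` and `p_percent == 1`.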
For the last two experiments in Figure 2, we could not further increase fairness by increasing the regularization coefficient for the Pearson and HSIC regularizers (the green and red curves cannot go beyond a certain point on the fairness axis). This can be explained by the nonlinear correlation between the predictor and the sensitive variables in these two scenarios, which cannot be fully captured using linear or quadratic independence measures. Interestingly, our experiments indicate that minimizing the Rényi correlation eventually minimizes the Normalized Mutual Information (NMI) between the variables (see Supplementary Figure 6). Recall that, similar to Rényi correlation, NMI can capture any dependence between two given random variables.

Finally, to evaluate the performance of our fair $K$-means algorithm, we implement Algorithm 3 to find clusters of the Adult and Bank datasets. We use the deviation of the elements of the vector $\mathbf{w}$ as a measure of fairness. The element $w_k$ of $\mathbf{w}$ represents the ratio of the number of data points belonging to the privileged group ( $S = 1$ ) in cluster $k$ to the number of data points in that cluster. This notion of fairness is closely related to the minimum balance introduced by Chierichetti et al. (2017). The deviation of these elements measures how these ratios vary across different clusters: a clustering solution is exactly fair if all entries of $\mathbf{w}$ are the same. For $K = 14$ , we plot in Figure 3 the minimum, maximum, average, and average $\pm$ standard deviation of the entries of the $\mathbf{w}$ vector for different values of $\lambda$ . For an exactly fair clustering solution, these values should all be the same. As we can see in Figure 3, increasing $\lambda$ yields an exactly fair clustering at the price of a higher clustering loss.
![](images/a50c90687eceeb86d69ca5a44953841f454d78fa86a0b2cd81477fa86b3cc4c9.jpg)
Figure 1: Trade-off between classifier accuracy and fairness on the Adult dataset under the equality of opportunity notion. (a, b) As $\lambda$ increases from 0 to 1000, the EO violation (blue curve, left axis) approaches 0. The fairer solution comes at the price of a slight increase in the training/test error (red curve, right axis). (c) Comparison of existing approaches with the Rényi classifier under the equality of opportunity notion. The Rényi classifier achieves better accuracy for a given level of fairness measured by EO violation.

![](images/e9fe913d4e2bbefa072c12582648c81fcfe1831047b74ef7ce024373820da0e6.jpg)

![](images/4bc73c24b2609f1aa7faef29eb3d68830cc3eea2d72a036465db103f4c2dd07d.jpg)

# 7 CONCLUSION

In this paper, we proposed Rényi fair inference as an in-processing method to impose fairness in empirical risk minimization. Fairness is defined as (conditional) independence between a sensitive attribute and the inference output of the learning machine. As statistical independence is only measurable when the data distributions are fully known, we can only hope to promote independence through empirical surrogates in this framework. Our method imposes a regularizer in the form of the Rényi correlation (maximal correlation) between the sensitive attribute(s) and the inference output. The Rényi correlation between two random variables is zero if and only if they are independent, which is a desirable property for an independence surrogate. We pose the resulting Rényi-regularized problem as a min-max optimization problem. In the case where the sensitive attributes are discrete (e.g., race), we present an algorithm that finds a first-order optimal solution to the problem with convergence guarantees. In the special case where the sensitive attribute is binary (e.g., gender), we show an algorithm with optimal convergence guarantees.
Our numerical experiments show that Rényi fair inference captures nonlinear correlations better than Pearson correlation or HSIC. We also show that increasing the regularization hyperparameter results in near statistical independence between the sensitive attribute and the inference output. Future work would naturally consider extensions to continuous sensitive attributes and to problems with missing or non-explicit sensitive labels, such as the fair word embedding problem.

![](images/588fa13664741b08fda86680bf6914773014d84efbc11374b5942c9333eeda87.jpg)

![](images/cf35637d30d77e2a7f11bcfdc6e5bb65a8fec0c553720a827758beea59b2a0af.jpg)

![](images/0318f7275c96f3a2a952cc50dd7f9df8f6f0cc440852b227daad65ee969b99f9.jpg)
Figure 2: Trade-off between accuracy and fairness for a logistic regression classifier regularized with the Rényi, HSIC, and Pearson measures, on the German Credit, Adult, and Bank datasets. (Top) The drop in the accuracy of the model regularized by Rényi correlation is smaller than for the same model regularized by HSIC or Pearson correlation. Moreover, as can be observed for both the Bank and Adult datasets, the Pearson and HSIC regularizers usually cannot increase $p\%$ beyond a certain limit, due to the fact that removing all linear correlations does not guarantee independence between the predictor and the sensitive attribute. (Bottom) When the sensitive attribute is not binary (or we have more than one sensitive attribute), obtaining a fair model with the HSIC and Pearson regularizers is even harder. The model regularized by HSIC or Pearson cannot reduce the DP violation (or increase its reciprocal) beyond a certain threshold.

![](images/c7f788ff4bacf98a7a82ae0d7dc37c8e3cfc94a7cf2bceaa2795333e9f0a8409.jpg)

![](images/c5f9157c1ee0f25874f3ff6fcba8213bc60d2d3517913d413037b1742368cd48.jpg)
Figure 3: Performance and fairness of the $K$ -means algorithm in terms of the Rényi regularizer hyperparameter $\lambda$ .
As $\lambda$ increases, the standard deviation of the components of the $\mathbf{w}$ vector (each component represents the relative proportion of the privileged group in the corresponding cluster) decreases accordingly. Both plots demonstrate that the standard deviation of $\mathbf{w}$ decreases quickly with $\lambda$ , and the increase in loss is small when $\lambda \leq 0.005$ . However, reaching a completely fair clustering requires $\lambda \geq 1$ , which can increase the loss (red curve, right axis) drastically.

![](images/d8af1a2ac95256df000b94ddfaa3f4ed9cff53875d168b714a0f06e7bed48a98.jpg)

# REFERENCES

Civil Rights Act. Civil rights act of 1964, title vii, equal employment opportunities. 1964.
Alekh Agarwal, Alina Beygelzimer, Miroslav Dudik, John Langford, and Hanna Wallach. A reductions approach to fair classification. arXiv preprint arXiv:1803.02453, 2018.
Daniel Alabi, Nicole Immorlica, and Adam Kalai. Unleashing linear optimizers for group-fair learning and optimization. In Conference On Learning Theory, pp. 2043-2066, 2018.
Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. Machine bias. ProPublica, 2016.
Arturs Backurs, Piotr Indyk, Krzysztof Onak, Baruch Schieber, Ali Vakilian, and Tal Wagner. Scalable fair clustering. arXiv preprint arXiv:1902.03519, 2019.
Yahav Bechavod and Katrina Ligett. Penalizing unfairness in binary classification. arXiv preprint arXiv:1707.00044, 2017.
Ioana O Bercea, Martin Groß, Samir Khuller, Aounon Kumar, Clemens Rösner, Daniel R Schmidt, and Melanie Schmidt. On the cost of essentially fair clusterings. arXiv preprint arXiv:1811.10319, 2018.
Richard Berk, Hoda Heidari, Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, and Aaron Roth. A convex framework for fair regression. arXiv preprint arXiv:1706.02409, 2017.
Dimitri P Bertsekas. Control of uncertain systems with a set-membership description of the uncertainty. PhD thesis, Massachusetts Institute of Technology, 1971.
+Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Advances in neural information processing systems, pp. 4349-4357, 2016. +Toon Calders, Faisal Kamiran, and Mykola Pechenizkiy. Building classifiers with independency constraints. In 2009 IEEE International Conference on Data Mining Workshops, pp. 13-18. IEEE, 2009. +Flavio Calmon, Dennis Wei, Bhanukiran Vinzamuri, Karthikeyan Natesan Ramamurthy, and Kush R Varshney. Optimized pre-processing for discrimination prevention. In Advances in Neural Information Processing Systems, pp. 3992-4001, 2017. +Yair Carmon, John C Duchi, Oliver Hinder, and Aaron Sidford. Lower bounds for finding stationary points i. Mathematical Programming, pp. 1-50, 2019. +L Elisa Celis, Lingxiao Huang, Vijay Keswani, and Nisheeth K Vishnoi. Classification with fairness constraints: A meta-algorithm with provable guarantees. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 319-328. ACM, 2019. +Flavio Chierichetti, Ravi Kumar, Silvio Lattanzi, and Sergei Vassilvitskii. Fair clustering through fairlets. In Advances in Neural Information Processing Systems, pp. 5029-5037, 2017. +John M Danskin. The theory of max-min and its application to weapons allocation problems, volume 5. Springer Science & Business Media, 1967. +Amit Datta, Michael Carl Tschantz, and Anupam Datta. Automated experiments on ad privacy settings. Proceedings on privacy enhancing technologies, 2015(1):92-112, 2015. +Michele Donini, Luca Oneto, Shai Ben-David, John S Shawe-Taylor, and Massimiliano Pontil. Empirical risk minimization under fairness constraints. In Advances in Neural Information Processing Systems, pp. 2791-2801, 2018. +Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Proceedings of the 3rd innovations in theoretical computer science conference, pp. 
214-226. ACM, 2012. + +Cynthia Dwork, Nicole Immorlica, Adam Tauman Kalai, and Max Leiserson. Decoupled classifiers for group-fair and efficient machine learning. In Conference on Fairness, Accountability and Transparency, pp. 119-133, 2018. +Harrison Edwards and Amos Storkey. Censoring representations with an adversary. arXiv preprint arXiv:1511.05897, 2015. +Michael Feldman, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. Certifying and removing disparate impact. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 259-268. ACM, 2015. +Benjamin Fish, Jeremy Kun, and Ádám D Lelkes. A confidence-based approach for balancing fairness and accuracy. In Proceedings of the 2016 SIAM International Conference on Data Mining, pp. 144-152. SIAM, 2016. +Hans Gebelein. Das statistische problem der korrelation als variations-und eigenvwertproblem und sein zusammenhang mit der ausgleichsrechnung. ZAMM-Journal of Applied Mathematics and Mechanics/Zeitschrift für Angewandte Mathematik und Mechanik, 21(6):364-379, 1941. +Arthur Gretton, Olivier Bousquet, Alex Smola, and Bernhard Schölkopf. Measuring statistical dependence with hilbert-schmidt norms. In International conference on algorithmic learning theory, pp. 63-77. Springer, 2005a. +Arthur Gretton, Ralf Herbrich, Alexander Smola, Olivier Bousquet, and Bernhard Schölkopf. Kernel methods for measuring independence. Journal of Machine Learning Research, 6(Dec):2075-2129, 2005b. +Moritz Hardt, Eric Price, Nati Srebro, et al. Equality of opportunity in supervised learning. In Advances in neural information processing systems, pp. 3315-3323, 2016. +Hermann O Hirschfeld. A connection between correlation and contingency. In *Mathematical Proceedings of the Cambridge Philosophical Society*, volume 31, pp. 520-524. Cambridge University Press, 1935. +Chi Jin, Praneeth Netrapalli, and Michael I Jordan. 
Minmax optimization: Stable limit points of gradient descent ascent are locally optimal. arXiv preprint arXiv:1902.00618, 2019. +Faisal Kamiran and Toon Calders. Classifying without discriminating. In 2009 2nd International Conference on Computer, Control and Communication, pp. 1-6. IEEE, 2009. +Faisal Kamiran and Toon Calders. Classification with no discrimination by preferential sampling. In Proc. 19th Machine Learning Conf. Belgium and The Netherlands, pp. 1-6. CiteSeer, 2010. +Faisal Kamiran and Toon Calders. Data preprocessing techniques for classification without discrimination. Knowledge and Information Systems, 33(1):1-33, 2012. +Toshihiro Kamishima, Shotaro Akaho, and Jun Sakuma. Fairness-aware learning through regularization approach. In 2011 IEEE 11th International Conference on Data Mining Workshops, pp. 643-650. IEEE, 2011. +Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu. Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. arXiv preprint arXiv:1711.05144, 2017. +Jiachun Liao, Oliver Kosut, Lalitha Sankar, and Flavio du Pin Calmon. Tunable measures for information leakage and applications to privacy-utility tradeoffs. IEEE Transactions on Information Theory, 2019. +Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard Zemel. The variational fair autoencoder. arXiv preprint arXiv:1511.00830, 2015. +David Madras, Elliot Creager, Toniann Pitassi, and Richard Zemel. Learning adversarially fair and transferable representations. arXiv preprint arXiv:1802.06309, 2018. + +Aditya Krishna Menon and Robert C Williamson. The cost of fairness in binary classification. In Conference on Fairness, Accountability and Transparency, pp. 107-118, 2018. +Yurii Nesterov. Lectures on convex optimization, volume 137. Springer, 2018. +Maher Nouiehed, Maziar Sanjabi, Jason D Lee, and Meisam Razaviyayn. Solving a class of nonconvex min-max games using iterative first order methods. arXiv preprint arXiv:1902.08297, 2019. 
+Adrián Pérez-Suay, Valero Laparra, Gonzalo Mateo-García, Jordi Muñoz-Marí, Luis Gómez-Chova, and Gustau Camps-Valls. Fair kernel learning. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 339-355. Springer, 2017. +Edward Raff and Jared Sylvester. Gradient reversal against discrimination: A fair neural network learning approach. In 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), pp. 189-198. IEEE, 2018. +Hassan Rafique, Mingrui Liu, Qihang Lin, and Tianbao Yang. Non-convex min-max optimization: Provable algorithms and applications in machine learning. arXiv preprint arXiv:1810.02060, 2018. +Alfréd Rényi. On measures of dependence. Acta mathematica hungarica, 10(3-4):441-451, 1959. +Ashkan Rezaei, Rizal Fathony, Omid Memarrast, and Brian Ziebart. Fair logistic regression: An adversarial perspective. arXiv preprint arXiv:1903.03910, 2019. +Clemens Rösner and Melanie Schmidt. Privacy preserving clustering with constraints. arXiv preprint arXiv:1802.02497, 2018. +Salvatore Ruggieri. Using t-closeness anonymity to control for non-discrimination. Trans. Data Privacy, 7(2):99-129, 2014. +Maziar Sanjabi, Jimmy Ba, Meisam Razaviyayn, and Jason D Lee. On the convergence and robustness of training gans with regularized optimal transport. In Advances in Neural Information Processing Systems, pp. 7091-7101, 2018. +Prasanna Sattigeri, Samuel C Hoffman, Vijil Chenthamarakshan, and Kush R Varshney. Fairness gan. arXiv preprint arXiv:1805.09910, 2018. +Melanie Schmidt, Chris Schwiegelshohn, and Christian Sohler. Fair coresets and streaming algorithms for fair k-means clustering. arXiv preprint arXiv:1812.10854, 2018. +Jiaming Song, Pratyusha Kalluri, Aditya Grover, Shengjia Zhao, and Stefano Ermon. Learning controllable fair representations. arXiv preprint arXiv:1812.04218, 2019. +Latanya Sweeney. Discrimination in online ad delivery. arXiv preprint arXiv:1301.6822, 2013. +Hans S Witsenhausen. 
On sequences of pairs of dependent random variables. SIAM Journal on Applied Mathematics, 28(1):100-113, 1975. +Blake Woodworth, Suriya Gunasekar, Mesrob I Ohannessian, and Nathan Srebro. Learning nondiscriminatory predictors. arXiv preprint arXiv:1702.06081, 2017. +Depeng Xu, Shuhan Yuan, Lu Zhang, and Xintao Wu. Fairgan: Fairness-aware generative adversarial networks. In 2018 IEEE International Conference on Big Data (Big Data), pp. 570-575. IEEE, 2018. +Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P Gummadi. Fairness constraints: Mechanisms for fair classification. arXiv preprint arXiv:1507.05259, 2015. +Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P Gummadi. Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. In Proceedings of the 26th International Conference on World Wide Web, pp. 1171-1180. International World Wide Web Conferences Steering Committee, 2017. + +Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. Learning fair representations. In International Conference on Machine Learning, pp. 325-333, 2013. +Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. Mitigating unwanted biases with adversarial learning. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pp. 335-340. ACM, 2018. + +# A APPENDIX: PROOF OF THEOREM 3.2 + +Proof. First, notice that since $\tilde{\mathbf{a}}$ is a one-hot encoding of $a$ , any function $f:\{1,\ldots ,c\} \mapsto \mathbb{R}$ can be equivalently represented as $f(\tilde{\mathbf{a}}) = \mathbf{u}^T\tilde{\mathbf{a}}$ for some $\mathbf{u}\in \mathbb{R}^c$ . Therefore, following the definition of Rényi correlation, we can write + +$$ +\rho_ {R} (a, b) = \max _ {\mathbf {u}, g} \quad \mathbb {E} \left[ \left(\mathbf {u} ^ {T} \widetilde {\mathbf {a}}\right) g (b) \right] +$$ + +$$ +\text {s . 
t .} \quad \mathbb {E} \left[ \left(\mathbf {u} ^ {T} \tilde {\mathbf {a}}\right) ^ {2} \right] \leq 1, \quad \mathbb {E} [ \mathbf {u} ^ {T} \tilde {\mathbf {a}} ] = 0 \quad . +$$ + +$$ +\mathbb {E} [ g ^ {2} (b) ] \leq 1, \quad \mathbb {E} [ g (b) ] = 0 +$$ + +Notice that since $b$ is binary, there is a unique function $g(b) = \frac{b - q}{\sqrt{q(1 - q)}}$ satisfying the constraints, where $q \triangleq \mathbb{P}(b = 1)$ . Therefore, the above optimization problem can be written as + +$$ +\rho_ {R} (a, b) = \max _ {\mathbf {u}} \quad \mathbf {u} ^ {T} \mathbb {E} [ \widetilde {\mathbf {a}} g (b) ] +$$ + +$$ +\begin{array}{l} \text {s . t .} \quad \mathbf {u} ^ {T} \mathbb {E} \left[ \tilde {\mathbf {a}} \tilde {\mathbf {a}} ^ {T} \right] \mathbf {u} \leq 1 \quad . \end{array} +$$ + +$$ +\mathbf {u} ^ {T} \mathbb {E} [ \tilde {\mathbf {a}} ] = 0 +$$ + +The last constraint simply implies that $\mathbf{u}$ should be orthogonal to $\mathbf{p} \triangleq \mathbb{E}[\tilde{\mathbf{a}}]$ , which is a stochastic vector capturing the distribution of $a$ . Equivalently, we can write $\mathbf{u} = \left(\mathbf{I} - \frac{\mathbf{p}\mathbf{p}^T}{\|\mathbf{p}\|^2}\right)\mathbf{v}$ for some $\mathbf{v} \in \mathbb{R}^c$ . Thus, we can simplify the above optimization problem as + +$$ +\rho_ {R} (a, b) = \max _ {\mathbf {v}} \quad \mathbf {v} ^ {T} \left(\mathbf {I} - \frac {\mathbf {p p} ^ {T}}{\| \mathbf {p} \| ^ {2}}\right) \mathbb {E} [ \widetilde {\mathbf {a}} g (b) ] +$$ + +$$ +\mathrm {s . t .} \quad \mathbf {v} ^ {T} \left(\mathbf {I} - \frac {\mathbf {p p} ^ {T}}{\| \mathbf {p} \| ^ {2}}\right) \operatorname {d i a g} (\mathbf {p}) \left(\mathbf {I} - \frac {\mathbf {p p} ^ {T}}{\| \mathbf {p} \| ^ {2}}\right) \mathbf {v} \leq 1, +$$ + +where in the constraint we used the equality $\mathbb{E}\left[\tilde{\mathbf{a}}\tilde{\mathbf{a}}^T\right] = \mathrm{diag}(\mathbf{p})$ , which follows from the definition. 
Let us do the change of variable $\hat{\mathbf{v}} = \mathrm{diag}(\sqrt{\mathbf{p}})(\mathbf{I} - \frac{\mathbf{p}\mathbf{p}^T}{\|\mathbf{p}\|^2})\mathbf{v}$ . Then the above optimization can be simplified to + +$$ +\rho_ {R} (a, b) = \max _ {\hat {\mathbf {v}}} \quad \hat {\mathbf {v}} ^ {T} \mathrm {d i a g} (1 / \sqrt {\mathbf {p}}) \mathbb {E} \left[ \widetilde {\mathbf {a}} g (b) \right] +$$ + +$$ +\begin{array}{l l} \text {s . t .} & \| \hat {\mathbf {v}} \| \leq 1. \end{array} +$$ + +Clearly, this leads to + +$$ +\begin{array}{l} \rho_ {R} ^ {2} (a, b) = \left\| \operatorname {d i a g} \left(\frac {1}{\sqrt {\mathbf {p}}}\right) \mathbb {E} [ \widetilde {\mathbf {a}} g (b) ] \right\| ^ {2} \\ = \sum_ {i = 1} ^ {c} \frac {1}{\mathbb {P} (a = i)} \left(\mathbb {P} (a = i, b = 1) \sqrt {\frac {1 - q}{q}} - \mathbb {P} (a = i, b = 0) \sqrt {\frac {q}{1 - q}}\right) ^ {2}, \tag {12} \\ \end{array} +$$ + +where in the last equality we use the fact that $g(1) = \sqrt{\frac{1 - q}{q}}$ and $g(0) = -\sqrt{\frac{q}{1 - q}}$ . Define $p_{i0} \triangleq \mathbb{P}(a = i, b = 0)$ and $p_{i1} \triangleq \mathbb{P}(a = i, b = 1)$ , $p_i \triangleq \mathbb{P}(a = i) = p_{i0} + p_{i1}$ . 
Then, using simple + +algebraic manipulations, we have that + +$$ +\begin{array}{l} \rho_ {R} ^ {2} (a, b) = \sum_ {i = 1} ^ {c} \frac {1}{p _ {i}} \left(p _ {i 1} \sqrt {\frac {1 - q}{q}} - p _ {i 0} \sqrt {\frac {q}{1 - q}}\right) ^ {2} \\ = \sum_ {i = 1} ^ {c} \frac {\left(2 p _ {i 1} (1 - q) - 2 p _ {i 0} q\right) ^ {2}}{4 p _ {i} q (1 - q)} - \sum_ {i = 1} ^ {c} \frac {\left(p _ {i 0} - p _ {i 1}\right) ^ {2}}{4 p _ {i} q (1 - q)} + \sum_ {i = 1} ^ {c} \frac {\left(p _ {i 0} - p _ {i 1}\right) ^ {2}}{4 p _ {i} q (1 - q)} \\ = \sum_ {i = 1} ^ {c} \frac {\left((3 - 2 q) p _ {i 1} - (1 + 2 q) p _ {i 0}\right) \left((1 - 2 q) p _ {i 1} + (1 - 2 q) p _ {i 0}\right)}{4 p _ {i} q (1 - q)} + \sum_ {i = 1} ^ {c} \frac {(p _ {i 0} - p _ {i 1}) ^ {2}}{4 p _ {i} q (1 - q)} \\ = \frac {1 - 2 q}{4 q (1 - q)} ((3 - 2 q) q - (1 + 2 q) (1 - q)) + \sum_ {i = 1} ^ {c} \frac {\left(p _ {i 0} - p _ {i 1}\right) ^ {2}}{4 p _ {i} q (1 - q)} \\ = 1 - \frac {1 - \sum_ {i = 1} ^ {c} (p _ {i 0} - p _ {i 1}) ^ {2} / p _ {i}}{4 q (1 - q)} = 1 - \frac {\gamma}{q (1 - q)}, \\ \end{array} +$$ + +where in the last equality we used the definition of $\gamma$ and the optimal value of equation 3.2. + +![](images/98201c50892294d45199216052381323e1cb7d5d5cf91dbe3cc28ff90e30b26b.jpg) + +# B PROOF OF THEOREM 4.2 + +Proof. Define $g_B(\pmb{\theta}) = \max_{\mathbf{w}} f_B(\pmb{\theta}, \mathbf{w})$ . Since the optimization problem $\max_{\mathbf{w}} f_B(\pmb{\theta}, \mathbf{w})$ is strongly concave in $\mathbf{w}$ , using Danskin's theorem (see Danskin (1967) and Bertsekas (1971)), we conclude that the function $g_B(\cdot)$ is differentiable. Moreover, + +$$ +\nabla_ {\boldsymbol {\theta}} g _ {B} (\bar {\boldsymbol {\theta}}) = \nabla_ {\boldsymbol {\theta}} f _ {B} (\bar {\boldsymbol {\theta}}, \bar {\mathbf {w}}) +$$ + +where $\bar{\mathbf{w}} = \arg \max_{\mathbf{w}} f_B(\bar{\boldsymbol{\theta}}, \mathbf{w})$ . 
Thus Algorithm 2 is in fact equivalent to the gradient descent algorithm applied to $g_B(\boldsymbol{\theta})$ . Hence, according to (Nesterov, 2018, Chapter 1), the algorithm finds a point with $\|\nabla g_B(\boldsymbol{\theta})\| \leq \epsilon$ in $\mathcal{O}(\epsilon^{-2})$ iterations. + +# C RÉNYI FAIR K-MEANS + +To illustrate the behavior of Algorithm 3, we constructed a simple two-dimensional toy example. In this example we generated data by randomly selecting 5 center points and then randomly generating 500 data points around each center according to a normal distribution with small enough variance. The data is shown in Figure 4 with different colors corresponding to different clusters. Moreover, we assigned each data point $x_{i}$ a binary value $s_i \in \{0,1\}$ that corresponds to its sensitive attribute. This assignment was also performed randomly, except for points generated around center 2 (green points in Figure 4), which were assigned a value of 1, and points generated around center 4 (blue points in Figure 4), which were assigned a value of 0. Without imposing fairness, the traditional K-means algorithm would group points generated around center 2 in one cluster regardless of the fact that they all belong to the same protected group. Similarly, points generated around center 4 will belong to the same cluster. Hence, according to the traditional K-means clustering shown in Figure 4, the proportions of the protected group in clusters 2 and 4 are 1 and 0, respectively. However, when imposing our fairness scheme, we expect these points to be distributed among various clusters to achieve balanced clustering. This is illustrated in Figure 5. It is evident from Figure 5 that, as $\lambda$ increases, data points corresponding to centers 2 and 4 are distributed among different clusters. + +![](images/bb218119a63601fc5221dee71d94e5e10f89fe1dbd7f34fdbd2616592c3e9cd7.jpg) +Figure 4: Applying the K-means algorithm without fairness on the synthetic dataset. 
+ 

![](images/850a0f5460be2eebb014dd5554c1c8f09fa548affa3850eacce5e040fe2f674e.jpg) +Figure 5: Applying the fair K-means algorithm with different values of $\lambda$ on the synthetic dataset. + +![](images/14a71ccc6bc89d4e378c96d18b948681ea06540dc41d30c799dd662b219eb948.jpg) + +![](images/dc5d69acc3618b1d8a2864d93cb8b30d086563b6cb4e2ee9844ef4df41f12941.jpg) + +# C.1 UPDATING w AFTER UPDATING THE ASSIGNMENT OF EACH DATA POINT IN ALGORITHM 3 + +To understand the reasoning behind updating the vector of proportions $\mathbf{w}$ after updating each $\mathbf{a}_i$ , the assignment of data point $i$ , we discuss a simple one-dimensional counterexample. Consider the following four data points $X_1 = -5$ , $X_2 = -4$ , $X_3 = 4$ , and $X_4 = 5$ with their corresponding sensitive attributes $S_1 = S_2 = 1$ and $S_3 = S_4 = 0$ . Moreover, assume the following initial $\mathbf{A}^0$ and $\mathbf{C}^0$ + +$$ +\mathbf {A} ^ {0} = \left[ \begin{array}{c c} 1 & 0 \\ 1 & 0 \\ 0 & 1 \\ 0 & 1 \end{array} \right], \quad \mathbf {C} ^ {0} = [ - 4. 5, 4. 5 ]. +$$ + +Hence, $X_{1}$ and $X_{2}$ , which both have a sensitive attribute of 1, are assigned to cluster 1 with center $\mathbf{C}_1^0 = -4.5$ , and $X_{3}$ and $X_{4}$ , which both have a sensitive attribute of 0, are assigned to cluster 2 with center $\mathbf{C}_2^0 = 4.5$ . Then $\mathbf{w}$ , the current proportion of the privileged group in the clusters, will be $\mathbf{w}^0 = [1,0]$ . Now, for sufficiently large $\lambda$ , if we update $\mathbf{A}$ according to Step 6 of Algorithm 3, we get the following new assignment + +$$ +\mathbf {A} ^ {1} = \left[ \begin{array}{c c} 0 & 1 \\ 0 & 1 \\ 1 & 0 \\ 1 & 0 \end{array} \right], \quad \mathbf {C} ^ {1} = [ 4. 5, - 4. 5 ], \quad \mathbf {w} ^ {1} = [ 0, 1 ]. +$$ + +Hence, the points simply switch clusters. Performing another iteration then brings us back to the initial setup, and the algorithm gets stuck alternating between these two states, neither of which is fair. 
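The two-state oscillation above can be reproduced numerically. The per-point fairness penalty used below is a hypothetical stand-in for the objective of Algorithm 3, chosen only to exhibit the alternating behavior: a protected point (s = 1) pays $\lambda w_k$ for joining cluster $k$, and an unprotected point pays $\lambda(1 - w_k)$, where $w_k$ is the stale proportion of protected points in cluster $k$.

```python
import numpy as np

# Minimal sketch (hypothetical penalty, not the exact objective of
# Algorithm 3) of the oscillation described above: four 1-D points,
# two clusters, and a large fairness weight lam.
X = np.array([-5.0, -4.0, 4.0, 5.0])
S = np.array([1, 1, 0, 0])              # sensitive attributes
assign = np.array([0, 0, 1, 1])         # A^0: X1, X2 -> cluster 1; X3, X4 -> cluster 2
lam = 1000.0                            # large fairness weight

history = []
for _ in range(4):
    C = np.array([X[assign == k].mean() for k in range(2)])   # cluster centers
    w = np.array([S[assign == k].mean() for k in range(2)])   # stale proportions
    # penalty for point i joining cluster k, using the STALE proportions w
    penalty = np.where(S[:, None] == 1, lam * w[None, :], lam * (1.0 - w)[None, :])
    cost = (X[:, None] - C[None, :]) ** 2 + penalty
    assign = cost.argmin(axis=1)        # all points reassigned with the same stale w
    history.append(assign.copy())
```

Because all four points are reassigned using the same stale proportions, the two protected points move together (and likewise the two unprotected ones), so the assignment flips between the two mirror states indefinitely; refreshing w after each individual update breaks this symmetry.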
To overcome this issue, we update the proportions w after updating the assignment of each data point. + +# D DATASETS DESCRIPTION + +In this section we introduce the datasets used in the numerical experiments discussed in Section 6. All of these datasets are publicly available at the UCI repository. + +- German Credit Dataset: The German Credit dataset consists of 20 features (13 categorical and 7 numerical) describing the social and economic status of 1000 customers. The assigned task is to classify customers as good or bad credit risks. Without imposing fairness, the DP violation of the trained model is larger than $20\%$ . We chose the first 800 customers as the training data, and the last 200 customers as the test data. The sensitive attributes are gender and marital status. +- Bank Dataset:² The Bank dataset contains the information of individuals contacted by a Portuguese bank institution. The assigned classification task is to predict whether the client will subscribe to a term deposit. For the classification task we consider all 17 attributes (except marital status, which serves as the sensitive attribute). Removing the sensitive attribute and training a logistic regression model on the dataset yields a solution that is biased under the demographic parity notion ( $p\% = 70.75\%$ ). To evaluate the performance of the classifier, we split the data into a training set (32000 data points) and a test set (13211 data points). For the clustering task, we sampled 3 continuous features: age, balance, and duration. The sensitive attribute is the marital status of the individuals. +- Adult Dataset:³ The Adult dataset contains the census information of individuals including education, gender, and capital gain. The assigned classification task is to predict whether a person earns over 50K annually. The train and test sets are two separate files consisting of 32000 and 16000 samples, respectively. 
We consider gender and race as the sensitive attributes (for the experiments involving one sensitive attribute, we chose gender). Learning a logistic regression model on the training dataset (without imposing fairness) shows that only 3 features out of 14 have larger weights than the gender attribute. Note that removing the sensitive attribute (gender) and retraining the model does not eliminate the bias of the classifier. The optimal logistic regression classifier in this case is still highly biased ( $p\% = 31.49\%$ ). For the clustering task, we have chosen 5 continuous features (Capital-gain, age, fnlwgt, capital-loss, hours-per-week), and 10000 samples to cluster. The sensitive attribute of each individual is gender. + +# E SUPPLEMENTARY FIGURES + +![](images/5506f3ae50a6c808d144596a4facbf7f970c90e6ccb845ff17c33c663b1bff0a.jpg) +Figure 6: The relationship between Rényi correlation, Pearson correlation, and normalized mutual information. Direct optimization of normalized mutual information is intractable due to its nonconvexity. However, as the right-hand side shows, driving the Rényi correlation to 0 drives the normalized mutual information to 0 as well. + +![](images/9694be582a7aadd08802cb22603aae3eb756026b90834af438e7087a089915b6.jpg) + +# F FAIR NEURAL NETWORK + +In this section, we train a two-layer neural network on the Adult dataset, regularized by the Rényi correlation. In this experiment, the sensitive attribute is gender. We set the number of nodes in the hidden layer, the batch size, and the number of epochs to 12, 128, and 50, respectively. The following table depicts the performance of the trained model. + 
| p% | Test Accuracy | Time (Seconds) |
| --- | --- | --- |
| 31.49% | 85.33 | 731 |
| 80.42% | 83.34 | 915 |
+ +Table 1: Performance and training time of a neural network trained on the Adult dataset. The first and second rows correspond to the networks not regularized, and regularized by Rényi correlation respectively. As can be seen in the table, while adding Rényi regularizer makes the classifier more fair, it does so by bringing a minimum amount of additional computational overhead. \ No newline at end of file diff --git a/rnyifairinference/images.zip b/rnyifairinference/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..ac9f691a740ebde5ac5766289bbd105d49eedb3b --- /dev/null +++ b/rnyifairinference/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bb8c1b986b55a7d93238611cb82f191d0ee99cddcd638309d90ffbc75660b969 +size 562241 diff --git a/rnyifairinference/layout.json b/rnyifairinference/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..20a6d37cbd1ac6c6b217f207f1f5ad6ee4f76a59 --- /dev/null +++ b/rnyifairinference/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f040eec139519ab89972515290161202d873213a9697f7c6493c0eb221b560e5 +size 676911 diff --git a/robustandinterpretableblindimagedenoisingviabiasfreeconvolutionalneuralnetworks/08242736-ba18-4ab2-b4c6-7962a592ef08_content_list.json b/robustandinterpretableblindimagedenoisingviabiasfreeconvolutionalneuralnetworks/08242736-ba18-4ab2-b4c6-7962a592ef08_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..f0250a34a426bec6e45ce65b466deb5d7471eb14 --- /dev/null +++ b/robustandinterpretableblindimagedenoisingviabiasfreeconvolutionalneuralnetworks/08242736-ba18-4ab2-b4c6-7962a592ef08_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fe88eea42ae43cd83576ff92efc48f5bddf7f05a938075a79dd76b53a16be163 +size 95752 diff --git 
a/robustandinterpretableblindimagedenoisingviabiasfreeconvolutionalneuralnetworks/08242736-ba18-4ab2-b4c6-7962a592ef08_model.json b/robustandinterpretableblindimagedenoisingviabiasfreeconvolutionalneuralnetworks/08242736-ba18-4ab2-b4c6-7962a592ef08_model.json new file mode 100644 index 0000000000000000000000000000000000000000..da309f4281be7f83a1544bdb1b1f7ebe5ac1c56d --- /dev/null +++ b/robustandinterpretableblindimagedenoisingviabiasfreeconvolutionalneuralnetworks/08242736-ba18-4ab2-b4c6-7962a592ef08_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0c191e40d86f0b0b56b78e3a64094b00cc84db6feeb1123a4d41979c617f66e3 +size 118635 diff --git a/robustandinterpretableblindimagedenoisingviabiasfreeconvolutionalneuralnetworks/08242736-ba18-4ab2-b4c6-7962a592ef08_origin.pdf b/robustandinterpretableblindimagedenoisingviabiasfreeconvolutionalneuralnetworks/08242736-ba18-4ab2-b4c6-7962a592ef08_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..2c9423c962940d4531bc5cfb9ed6d7300de48178 --- /dev/null +++ b/robustandinterpretableblindimagedenoisingviabiasfreeconvolutionalneuralnetworks/08242736-ba18-4ab2-b4c6-7962a592ef08_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ddca4574a623792410b6c8c5f5fb5e20577a3f18238562d2fb1404d572dd70db +size 4851080 diff --git a/robustandinterpretableblindimagedenoisingviabiasfreeconvolutionalneuralnetworks/full.md b/robustandinterpretableblindimagedenoisingviabiasfreeconvolutionalneuralnetworks/full.md new file mode 100644 index 0000000000000000000000000000000000000000..25ddc3d8d3356a42861fa36f8aafc652f72ea87e --- /dev/null +++ b/robustandinterpretableblindimagedenoisingviabiasfreeconvolutionalneuralnetworks/full.md @@ -0,0 +1,396 @@ +# ROBUST AND INTERPRETABLE BLIND IMAGE DENOISING VIA BIAS-FREE CONVOLUTIONAL NEURAL NETWORKS + +Sreyas Mohan* + +Center for Data Science + +New York University + +sm7582@nyu.edu + +Zahra Kadkhodaie* + +Center for Data 
Science + +New York University + +zk388@nyu.edu + +Eero P. Simoncelli + +Center for Neural Science, and Howard Hughes Medical Institute + +New York University + +eero.simoncelli@nyu.edu + +Carlos Fernandez-Granda + +Center for Data Science, and + +Courant Inst. of Mathematical Sciences + +New York University + +cfgranda@cims.nyu.edu + +# ABSTRACT + +We study the generalization properties of deep convolutional neural networks for image denoising in the presence of varying noise levels. We provide extensive empirical evidence that current state-of-the-art architectures systematically overfit to the noise levels in the training set, performing very poorly at new noise levels. We show that strong generalization can be achieved through a simple architectural modification: removing all additive constants. The resulting "bias-free" networks attain state-of-the-art performance over a broad range of noise levels, even when trained over a narrow range. They are also locally linear, which enables direct analysis with linear-algebraic tools. We show that the denoising map can be visualized locally as a filter that adapts to both image structure and noise level. In addition, our analysis reveals that deep networks implicitly perform a projection onto an adaptively-selected low-dimensional subspace, with dimensionality inversely proportional to noise level, that captures features of natural images. + +# 1 INTRODUCTION AND CONTRIBUTIONS + +The problem of denoising consists of recovering a signal from measurements corrupted by noise, and is a canonical application of statistical estimation that has been studied since the 1950's. Achieving high-quality denoising results requires (at least implicitly) quantifying and exploiting the differences between signals and noise. In the case of photographic images, the denoising problem is both an important application, as well as a useful test-bed for our understanding of natural images. 
In the past decade, convolutional neural networks (LeCun et al., 2015) have achieved state-of-the-art results in image denoising (Zhang et al., 2017; Chen & Pock, 2017). Despite their success, these solutions are mysterious: we lack both intuition and formal understanding of the mechanisms they implement. Network architecture and functional units are often borrowed from the image-recognition literature, and it is unclear which of these aspects contributes to, or limits, the denoising performance. The goal of this work is to advance our understanding of deep-learning models for denoising. Our contributions are twofold: First, we study the generalization capabilities of deep-learning models across different noise levels. Second, we provide novel tools for analyzing the mechanisms implemented by neural networks to denoise natural images. + +An important advantage of deep-learning techniques over traditional methodology is that a single neural network can be trained to perform denoising at a wide range of noise levels. Currently, this is achieved by simulating the whole range of noise levels during training (Zhang et al., 2017). Here, we show that this is not necessary. Neural networks can be made to generalize automatically across noise + +![](images/3eb4e4b860f0563f0b73f1ef96a77e2cbe5dd42a8468713b618792074a4f0ad7.jpg) +(a) + +![](images/ed16490f024036e9db6a701bb22f5bb8a4c29356e10c4df6cb12a22bace48ed9.jpg) +(b) +Figure 1: First-order analysis of the residual of a denoising convolutional neural network as a function of noise level. The plots show the norms of the residual and the net bias averaged over 100 $20\times 20$ natural-image patches for networks trained over different training ranges. The range of noises used for training is highlighted in blue. (a) When the network is trained over the full range of noise levels $(\sigma \in [0,100])$ the net bias is small, growing slightly as the noise increases. 
(b-c) When the network is trained over a smaller range $(\sigma \in [0,55]$ and $\sigma \in [0,30])$ , the net bias grows explosively for noise levels beyond the training range. This coincides with a dramatic drop in performance, reflected in the difference between the magnitudes of the residual and the true noise. The CNN used for this example is DnCNN (Zhang et al., 2017); using alternative architectures yields similar results as shown in Figure 8. + +![](images/c0182187f04a699934f5dbe7c525ab9d3059ff47dcfc5454ce35a5816a2bd3fa.jpg) +(c) + +levels through a simple modification in the architecture: removing all additive constants. We find this holds for a variety of network architectures proposed in previous literature. We provide extensive empirical evidence that the main state-of-the-art denoising architectures systematically overfit to the noise levels in the training set, and that this is due to the presence of a net bias. Suppressing this bias makes it possible to attain state-of-the-art performance while training over a very limited range of noise levels. + +The data-driven mechanisms implemented by deep neural networks to perform denoising are almost completely unknown. It is unclear what priors are being learned by the models, and how they are affected by the choice of architecture and training strategies. Here, we provide novel linear-algebraic tools to visualize and interpret these strategies through a local analysis of the Jacobian of the denoising map. The analysis reveals locally adaptive properties of the learned models, akin to existing nonlinear filtering algorithms. In addition, we show that the deep networks implicitly perform a projection onto an adaptively-selected low-dimensional subspace capturing features of natural images. + +# 2 RELATED WORK + +The classical solution to the denoising problem is the Wiener filter (Wiener, 1950), which assumes a translation-invariant Gaussian signal model. 
The main limitation of Wiener filtering is that it over-smoothes, eliminating fine-scale details and textures. Modern filtering approaches address this issue by adapting the filters to the local structure of the noisy image (e.g. Tomasi & Manduchi (1998); Milanfar (2012)). Here we show that neural networks implement such strategies implicitly, learning them directly from the data. + +In the 1990's powerful denoising techniques were developed based on multi-scale ("wavelet") transforms. These transforms map natural images to a domain where they have sparser representations. This makes it possible to perform denoising by applying nonlinear thresholding operations in order to discard components that are small relative to the noise level (Donoho & Johnstone, 1995; Simoncelli & Adelson, 1996; Chang et al., 2000). From a linear-algebraic perspective, these algorithms operate by projecting the noisy input onto a lower-dimensional subspace that contains plausible signal content. The projection eliminates the orthogonal complement of the subspace, which mostly contains noise. This general methodology laid the foundations for the state-of-the-art models in the 2000's (e.g. (Dabov et al., 2006)), some of which added a data-driven perspective, learning sparsifying transforms (Elad & Aharon, 2006), and nonlinear shrinkage functions (Hel-Or & Shaked, 2008; Raphan & Simoncelli, 2008), directly from natural images. Here, we show that deep-learning models learn similar priors in the form of local linear subspaces capturing image features. 
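As a toy illustration of this transform-and-threshold methodology (a generic sketch, not an implementation of any of the methods cited above), the following denoises a 1-D piecewise-constant signal by soft-thresholding its one-level Haar detail coefficients; the threshold, a multiple of the noise level, is an assumed choice:

```python
import numpy as np

# Generic sketch: discard transform coefficients that are small relative
# to the noise level, i.e. project the noisy signal onto a subspace that
# retains plausible signal content.
rng = np.random.default_rng(0)
x = np.repeat([0.0, 4.0, -2.0, 1.0], 64)        # clean signal, length 256
sigma = 0.5                                      # assumed known noise level
y = x + sigma * rng.standard_normal(x.size)      # noisy observation

# orthonormal one-level Haar analysis: coarse averages and fine differences
coarse = (y[0::2] + y[1::2]) / np.sqrt(2)
detail = (y[0::2] - y[1::2]) / np.sqrt(2)

t = 3 * sigma                                    # threshold ~ noise level (assumed)
detail = np.sign(detail) * np.maximum(np.abs(detail) - t, 0.0)  # soft threshold

# synthesis (inverse Haar transform)
x_hat = np.empty_like(y)
x_hat[0::2] = (coarse + detail) / np.sqrt(2)
x_hat[1::2] = (coarse - detail) / np.sqrt(2)

mse_noisy = np.mean((y - x) ** 2)
mse_denoised = np.mean((x_hat - x) ** 2)
```

For this piecewise-constant signal the clean detail coefficients are essentially zero, so thresholding removes roughly half of the noise energy while leaving the signal intact.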
+ +![](images/381ee363260c8cfe8774a23f46aabe0c3446c5abcdc34ef8e8c94e947f0f06a0.jpg) +Noisy training image, $\sigma = 10$ (max level) + +![](images/e3b330575aad2b53f444ddc29972843f68518cad281e540ca38fd8c6ba74513f.jpg) +Noisy test image, $\sigma = 90$ + +![](images/7dce57b2f8b56ee2fee7169b79fae7e183389c923979948afbc5b62b8b2eeee3.jpg) +Test image, denoised by CNN + +![](images/a3607e5580c68d2440fc7d7bff3b94633109571c47131c1181bc9285d27fc42a.jpg) +Test image, denoised by BF-CNN +Figure 2: Denoising of an example natural image by a CNN and its bias-free counterpart (BF-CNN), both trained over noise levels in the range $\sigma \in [0,10]$ (image intensities are in the range [0,255]). The CNN performs poorly at high noise levels ( $\sigma = 90$ , far beyond the training range), whereas BF-CNN performs at state-of-the-art levels. The CNN used for this example is DnCNN (Zhang et al., 2017); using alternative architectures yields similar results (see Section 5). + +In the past decade, purely data-driven models based on convolutional neural networks (LeCun et al., 2015) have come to dominate all previous methods in terms of performance. These models consist of cascades of convolutional filters, and rectifying nonlinearities, which are capable of representing a diverse and powerful set of functions. Training such architectures to minimize mean square error over large databases of noisy natural-image patches achieves current state-of-the-art results (Zhang et al., 2017; Huang et al., 2017; Ronneberger et al., 2015; Zhang et al., 2018a). + +# 3 NETWORK BIAS IMPAIRS GENERALIZATION + +We assume a measurement model in which images are corrupted by additive noise: $y = x + n$ , where $x \in \mathbb{R}^N$ is the original image, containing $N$ pixels, $n$ is an image of i.i.d. samples of Gaussian noise with variance $\sigma^2$ , and $y$ is the noisy observation. 
The denoising problem consists of finding a function $f: \mathbb{R}^N \to \mathbb{R}^N$ that provides a good estimate of the original image, $x$ . Commonly, one minimizes the mean squared error: $f = \arg \min_g E||x - g(y)||^2$ , where the expectation is taken over some distribution over images, $x$ , as well as over the distribution of noise realizations. In deep learning, the denoising function $g$ is parameterized by the weights of the network, so the optimization is over these parameters. If the noise standard deviation, $\sigma$ , is unknown, the expectation must also be taken over a distribution of $\sigma$ . This problem is often called blind denoising in the literature. In this work, we study the generalization performance of CNNs across noise levels $\sigma$ , i.e. when they are tested on noise levels not included in the training set. + +Feedforward neural networks with rectified linear units (ReLUs) are piecewise affine: for a given activation pattern of the ReLUs, the effect of the network on the input is a cascade of linear transformations (convolutional or fully connected layers, $W_{k}$ ), additive constants $(b_{k})$ , and pointwise multiplications by a binary mask corresponding to the fixed activation pattern $(R)$ . Since each of these is affine, the entire cascade implements a single affine transformation. For a fixed noisy input image $y \in \mathbb{R}^{N}$ with $N$ pixels, the function $f: \mathbb{R}^{N} \to \mathbb{R}^{N}$ computed by a denoising neural network may be written + +$$ +f (y) = W _ {L} R \left(W _ {L - 1} \cdots R \left(W _ {1} y + b _ {1}\right) + \cdots + b _ {L - 1}\right) + b _ {L} = A _ {y} y + b _ {y}, \tag {1} +$$ + +where $A_y \in \mathbb{R}^{N \times N}$ is the Jacobian of $f(\cdot)$ evaluated at input $y$ , and $b_y \in \mathbb{R}^N$ represents the net bias. The subscripts on $A_y$ and $b_y$ serve as a reminder that both depend on the ReLU activation patterns, which in turn depend on the input vector $y$ . 
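The affine decomposition in equation 1 can be checked numerically on a toy two-layer ReLU network (random weights, purely illustrative, not one of the trained denoisers studied here):

```python
import numpy as np

# For a fixed input y, the ReLU acts as a binary diagonal mask R, so the
# network is affine around y: f(y) = A_y y + b_y.
rng = np.random.default_rng(1)
N, H = 8, 16
W1, b1 = rng.standard_normal((H, N)), rng.standard_normal(H)
W2, b2 = rng.standard_normal((N, H)), rng.standard_normal(N)

def f(y):
    return W2 @ np.maximum(W1 @ y + b1, 0.0) + b2

y = rng.standard_normal(N)
R = np.diag((W1 @ y + b1 > 0).astype(float))   # activation pattern at y
A_y = W2 @ R @ W1                               # Jacobian of f at y
b_y = W2 @ R @ b1 + b2                          # net bias at y
assert np.allclose(f(y), A_y @ y + b_y)

# removing all additive constants makes the net bias vanish identically
def f_bf(y):
    return W2 @ np.maximum(W1 @ y, 0.0)

R_bf = np.diag((W1 @ y > 0).astype(float))
assert np.allclose(f_bf(y), (W2 @ R_bf @ W1) @ y)  # f_bf(y) = A_y y, so b_y = 0
```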
Based on equation 1 we can perform a first-order decomposition of the error or residual of the neural network for a specific input: $y - f(y) = (I - A_y)y - b_y$. Figure 1 shows the magnitude of the residual and the constant, which is equal to the net bias $b_y$, for a range of noise levels. Over the training range, the net bias is small, implying that the linear term is responsible for most of the denoising (see Figures 9 and 10 for a visualization of both components). However, when the network is evaluated at noise levels outside of the training range, the norm of the bias increases dramatically, and the residual is significantly smaller than the noise, suggesting a form of overfitting. Indeed, network performance generalizes very poorly to noise levels outside the training range. This is illustrated for an example image in Figure 2, and demonstrated through extensive experiments in Section 5.

![](images/401fc29c8841cf5607d1c19d8f0223eb80ff6b818e7f5620592c9a5ebd80786b.jpg)
Figure 3: Comparison of the performance of a CNN and a BF-CNN with the same architecture for the experimental design described in Section 5. The performance is quantified by the PSNR of the denoised image as a function of the input PSNR. Both networks are trained over fixed ranges of noise levels, indicated by a blue background. In all cases, the performance of BF-CNN generalizes robustly beyond the training range, while that of the CNN degrades significantly. The CNN used for this example is DnCNN (Zhang et al., 2017); using alternative architectures yields similar results (see Figures 11 and 12).

![](images/8e5e41103ebbc3869a59871f330bbb42a48262283cccb492b928d82c4b730a73.jpg)

![](images/491a0d6d56a07ae4ab821042b9a5a94a8c376d1a73de75388ba77b4063882b61.jpg)

![](images/11a3720ff018060a21f4ddc77db8b11c08d3d065b22a690c95c369ca464aa652.jpg)
# 4 PROPOSED METHODOLOGY: BIAS-FREE NETWORKS

Section 3 shows that CNNs overfit to the noise levels present in the training set, and that this is associated with wild fluctuations of the net bias $b_{y}$. This suggests that the overfitting might be ameliorated by removing additive (bias) terms from every stage of the network, resulting in a bias-free CNN (BF-CNN). Note that bias terms are also removed from the batch-normalization used during training. This simple change in the architecture has an interesting consequence. If the CNN has ReLU activations, the denoising map is locally homogeneous, and consequently invariant to scaling: rescaling the input by a constant value simply rescales the output by the same amount, just as it would for a linear system.

Lemma 1. Let $f_{\mathrm{BF}}: \mathbb{R}^N \to \mathbb{R}^N$ be a feedforward neural network with ReLU activation functions and no additive constant terms in any layer. For any input $y \in \mathbb{R}^N$ and any nonnegative constant $\alpha$,

$$
f_{\mathrm{BF}}(\alpha y) = \alpha f_{\mathrm{BF}}(y). \tag{2}
$$

Proof. We can write the action of a bias-free neural network with $L$ layers in terms of the weight matrix $W_{i}$, $1 \leq i \leq L$, of each layer and a rectifying operator $\mathcal{R}$, which sets to zero any negative entries in its input. Multiplying by a nonnegative constant does not change the sign of the entries of a vector, so for any $z$ with the right dimension and any $\alpha \geq 0$, $\mathcal{R}(\alpha z) = \alpha \mathcal{R}(z)$, which implies

$$
f_{\mathrm{BF}}(\alpha y) = W_L \mathcal{R}(W_{L-1} \ldots \mathcal{R}(W_1 \alpha y)) = \alpha W_L \mathcal{R}(W_{L-1} \ldots \mathcal{R}(W_1 y)) = \alpha f_{\mathrm{BF}}(y). \tag{3}
$$

□

Note that networks with nonzero net bias are not scaling invariant because scaling the input may change the activation pattern of the ReLUs.
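Lemma 1 can be checked numerically. In the toy NumPy sketch below (hypothetical layer sizes, random weights; purely illustrative), the bias-free network is exactly homogeneous, while adding a constant to any layer breaks the property:

```python
import numpy as np

rng = np.random.default_rng(1)
N, H = 8, 16  # hypothetical layer sizes
W1 = rng.standard_normal((H, N))
W2 = rng.standard_normal((N, H))
b1 = rng.standard_normal(H)

def f_bf(y):
    """Bias-free ReLU network: homogeneous of degree 1 (Lemma 1)."""
    return W2 @ np.maximum(W1 @ y, 0.0)

def f_bias(y):
    """Same network with an additive constant in the first layer."""
    return W2 @ np.maximum(W1 @ y + b1, 0.0)

y = rng.standard_normal(N)
alpha = 3.7  # any nonnegative constant
assert np.allclose(f_bf(alpha * y), alpha * f_bf(y))          # equation 2 holds
assert not np.allclose(f_bias(alpha * y), alpha * f_bias(y))  # biases break it
```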
Scaling invariance is intuitively desirable for a denoising method operating on natural images; a rescaled image is still an image. Note that Lemma 1 holds for networks with skip connections where the feature maps are concatenated or added, because both of these operations are linear. + +In the following sections we demonstrate that removing all additive terms in CNN architectures has two important consequences: (1) the networks gain the ability to generalize to noise levels not encountered during training (as illustrated by Figure 2 the improvement is striking), and (2) the denoising mechanism can be analyzed locally via linear-algebraic tools that reveal intriguing ties to more traditional denoising methodology such as nonlinear filtering and sparsity-based techniques. + +![](images/472fd059eb7281e103f7f6bd08d1afc1f72f2617727dfd589ccf23724d7b1d45.jpg) +Figure 4: Visualization of the linear weighting functions (rows of $A_{y}$ in equation 4) of a BF-CNN for three example pixels of an input image, and three levels of noise. The images in the three rightmost columns show the weighting functions used to compute each of the indicated pixels (red squares). All weighting functions sum to one, and thus compute a local average (note that some weights are negative, indicated in red). Their shapes vary substantially, and are adapted to the underlying image content. As the noise level $\sigma$ increases, the spatial extent of the weight functions increases in order to average out the noise, while respecting boundaries between different regions in the image, which results in dramatically different functions for each pixel. The CNN used for this example is DnCNN (Zhang et al., 2017); using alternative architectures yields similar results (see Figure 13). 
+ +# 5 BIAS-FREE NETWORKS GENERALIZE ACROSS NOISE LEVELS + +In order to evaluate the effect of removing the net bias in denoising CNNs, we compare several state-of-the-art architectures to their bias-free counterparts, which are exactly the same except for the absence of any additive constants within the networks (note that this includes the batch-normalization additive parameter). These architectures include popular features of existing neural-network techniques in image processing: recurrence, multiscale filters, and skip connections. More specifically, we examine the following models (see Section A for additional details): + +- DnCNN (Zhang et al., 2017): A feedforward CNN with 20 convolutional layers, each consisting of $3 \times 3$ filters, 64 channels, batch normalization (Ioffe & Szegedy, 2015), a ReLU nonlinearity, and a skip connection from the initial layer to the final layer. +- Recurrent CNN: A recurrent architecture inspired by Zhang et al. (2018a) where the basic module is a CNN with 5 layers, $3 \times 3$ filters and 64 channels in the intermediate layers. The order of the recurrence is 4. +- UNet (Ronneberger et al., 2015): A multiscale architecture with 9 convolutional layers and skip connections between the different scales. +- Simplified DenseNet: CNN with skip connections inspired by the DenseNet architecture (Huang et al., 2017; Zhang et al., 2018b). + +We train each network to denoise images corrupted by i.i.d. Gaussian noise over a range of standard deviations (the training range of the network). We then evaluate the network for noise levels that are both within and beyond the training range. 
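Constructing a bias-free counterpart requires removing additive terms not only from the convolutional layers but also from batch normalization. One plausible way to realize this (a sketch under our own assumptions; the exact implementation used for the experiments may differ) is to keep only the multiplicative scale, with no centering and no shift, so that the normalization remains homogeneous:

```python
import numpy as np

class BiasFreeBatchNorm:
    """Batch normalization with all additive terms removed (a sketch).

    Only a multiplicative scale gamma is learned; there is no centering and
    no shift beta, so with fixed running statistics the operation satisfies
    bn(a * x) = a * bn(x), preserving the homogeneity of Lemma 1.
    """

    def __init__(self, channels, eps=1e-5):
        self.gamma = np.ones(channels)       # learned scale (kept)
        self.running_sq = np.ones(channels)  # running mean of x^2 per channel
        self.eps = eps

    def __call__(self, x):  # x: (batch, channels)
        return self.gamma * x / np.sqrt(self.running_sq + self.eps)

bn = BiasFreeBatchNorm(4)
x = np.random.default_rng(2).standard_normal((3, 4))
assert np.allclose(bn(2.0 * x), 2.0 * bn(x))  # scaling invariance preserved
```

A standard batch norm, by contrast, subtracts a running mean and adds a learned shift; at test time both act as fixed additive constants and contribute to the net bias $b_y$.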
Our experiments are carried out on $180 \times 180$ natural images from the Berkeley Segmentation Dataset (Martin et al., 2001) to be consistent with previous + +![](images/ed1343aff48f4da60dc79532e0a6e6a14e68e9651c1c9ec5f6ead553543aff34.jpg) +(a) + +![](images/576beec73da55d4f54b8be6d41ed47de6646a0f4ac371fd7d3d865251f08f5fc.jpg) +(b) +Figure 5: Analysis of the SVD of the Jacobian of a BF-CNN for ten natural images, corrupted by noise of standard deviation $\sigma = 50$ . (a) Singular value distributions. For all images, a large proportion of the values are near zero, indicating (approximately) a projection onto a subspace (the signal subspace). (b) Histogram of dot products (cosine of angle) between the left and right singular vectors that lie within the signal subspaces. (c) Effective dimensionality of the signal subspaces (computed as sum of squared singular values) as a function of noise level. For comparison, the total dimensionality of the space is 1600 ( $40 \times 40$ pixels). Average dimensionality (red curve) falls approximately as the inverse of $\sigma$ (dashed curve). The CNN used for this example is DnCNN (Zhang et al., 2017); using alternative architectures yields similar results (see Figure 17). + +![](images/158ce9f549df95401368cb161ea8faaf707a680c1736b5477860b4dbf19802e0.jpg) +(c) + +results (Schmidt & Roth, 2014; Chen & Pock, 2017; Zhang et al., 2017). Additional details about the dataset and training procedure are provided in Section B. + +Figures 3, 11 and 12 show our results. For a wide range of different training ranges, and for all architectures, we observe the same phenomenon: the performance of CNNs is good over the training range, but degrades dramatically at new noise levels; in stark contrast, the corresponding BF-CNNs provide strong denoising performance over noise levels outside the training range. This holds for both PSNR and the more perceptually-meaningful Structural Similarity Index (Wang et al., 2004) (see Figure 12). 
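For reference, the PSNR used to quantify performance above can be computed from the mean squared error with the standard formula for 8-bit images (peak value 255); the example image and noise level below are illustrative only:

```python
import numpy as np

def psnr(clean, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(clean, float) - np.asarray(estimate, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# A noisy image with sigma = 25.5 has input PSNR of about 20 dB,
# since 20 log10(255 / 25.5) = 20 log10(10) = 20.
rng = np.random.default_rng(3)
clean = rng.uniform(0, 255, size=(40, 40))
noisy = clean + 25.5 * rng.standard_normal((40, 40))
assert abs(psnr(clean, noisy) - 20.0) < 1.0
```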
Figure 2 shows an example image, demonstrating visually the striking difference in generalization performance between a CNN and its corresponding BF-CNN. Our results provide strong evidence that removing net bias in CNN architectures results in effective generalization to noise levels out of the training range. + +# 6 REVEALING THE DENOISING MECHANISMS LEARNED BY BF-CNNs + +In this section we perform a local analysis of BF-CNN networks, which reveals the underlying denoising mechanisms learned from the data. A bias-free network is strictly linear, and its net action can be expressed as + +$$ +f _ {\mathrm {B F}} (y) = W _ {L} R \left(W _ {L - 1} \dots R \left(W _ {1} y\right)\right) = A _ {y} y, \tag {4} +$$ + +where $A_{y}$ is the Jacobian of $f_{\mathrm{BF}}(\cdot)$ evaluated at $y$ . The Jacobian at a fixed input provides a local characterization of the denoising map. In order to study the map we perform a linear-algebraic analysis of the Jacobian. Our approach is similar in spirit to visualization approaches—proposed in the context of image classification—that differentiate neural-network functions with respect to their input (e.g. Simonyan et al. (2013); Montavon et al. (2017)). + +# 6.1 NONLINEAR ADAPTIVE FILTERING + +The linear representation of the denoising map given by equation 4 implies that the $i$ th pixel of the output image is computed as an inner product between the $i$ th row of $A_{y}$ , denoted $a_{y}(i)$ , and the input image: + +$$ +f _ {\mathrm {B F}} (y) (i) = \sum_ {j = 1} ^ {N} A _ {y} (i, j) y (j) = a _ {y} (i) ^ {T} y. \tag {5} +$$ + +![](images/f92d8b3eb622535846acd50505ff4a91a66cb39a99b96eb2c9449211129a8a99.jpg) +Figure 6: Visualization of left singular vectors of the Jacobian of a BF-CNN, evaluated on two different images (top and bottom rows), corrupted by noise with standard deviation $\sigma = 50$ . The left column shows original (clean) images. 
The next three columns show singular vectors corresponding to non-negligible singular values. The vectors capture features from the clean image. The last three columns on the right show singular vectors corresponding to singular values that are almost equal to zero. These vectors are noisy and unstructured. The CNN used for this example is DnCNN (Zhang et al., 2017); using alternative architectures yields similar results (see Figure 16). + +The vectors $a_{y}(i)$ can be interpreted as adaptive filters that produce an estimate of the denoised pixel via a weighted average of noisy pixels. Examination of these filters reveals their diversity, and their relationship to the underlying image content: they are adapted to the local features of the noisy image, averaging over homogeneous regions of the image without blurring across edges. This is shown for two separate examples and a range of noise levels in Figures 4, 13, 14 and 15 for the architectures described in Section 5. We observe that the equivalent filters of all architectures adapt to image structure. + +Classical Wiener filtering (Wiener, 1950) denoises images by computing a local average dependent on the noise level. As the noise level increases, the averaging is carried out over a larger region. As illustrated by Figures 4, 13, 14 and 15, the equivalent filters of BF-CNNs also display this behavior. The crucial difference is that the filters are adaptive. The BF-CNNs learn such filters implicitly from the data, in the spirit of modern nonlinear spatially-varying filtering techniques designed to preserve fine-scale details such as edges (e.g. Tomasi & Manduchi (1998), see also Milanfar (2012) for a comprehensive review, and Choi et al. (2018) for a recent learning-based approach). + +# 6.2 PROJECTION ONTO ADAPTIVE LOW-DIMENSIONAL SUBSPACES + +The local linear structure of a BF-CNN facilitates analysis of its functional capabilities via the singular value decomposition (SVD). 
For a given input $y$ , we compute the SVD of the Jacobian matrix: $A_y = USV^T$ , with $U$ and $V$ orthogonal matrices, and $S$ a diagonal matrix. We can decompose the effect of the network on its input in terms of the left singular vectors $\{U_1, U_2, \ldots, U_N\}$ (columns of $U$ ), the singular values $\{s_1, s_2, \ldots, s_N\}$ (diagonal elements of $S$ ), and the right singular vectors $\{V_1, V_2, \ldots, V_N\}$ (columns of $V$ ): + +$$ +f _ {\mathrm {B F}} (y) = A _ {y} y = U S V ^ {T} y = \sum_ {i = 1} ^ {N} s _ {i} \left(V _ {i} ^ {T} y\right) U _ {i}. \tag {6} +$$ + +The output is a linear combination of the left singular vectors, each weighted by the projection of the input onto the corresponding right singular vector, and scaled by the corresponding singular value. + +Analyzing the SVD of a BF-CNN on a set of ten natural images reveals that most singular values are very close to zero (Figure 5a). The network is thus discarding all but a very low-dimensional portion of the input image. We also observe that the left and right singular vectors corresponding to the singular values with non-negligible amplitudes are approximately the same (Figure 5b). This means that the Jacobian is (approximately) symmetric, and we can interpret the action of the network as projecting the noisy signal onto a low-dimensional subspace, as is done in wavelet thresholding schemes. This is confirmed by visualizing the singular vectors as images (Figure 6). The singular vectors corresponding to non-negligible singular values are seen to capture features of the input image; those corresponding to near-zero singular values are unstructured. The BF-CNN therefore implements + +![](images/877c184e38149c35427ae78b19dfe157a1f4e3df3324f9b2f754831945907a51.jpg) +Figure 7: Signal subspace properties. Left: Signal subspace, computed from Jacobian of a BF-CNN evaluated at a particular noise level, contains the clean image. 
Specifically, the fraction of squared $\ell_2$ norm preserved by projection onto the subspace is nearly one as $\sigma$ grows from 10 to 100 (relative to the image pixels, which lie in the range [0, 255]). Results are averaged over 50 example clean images. Right: Signal subspaces at different noise levels are nested. The subspace axes for a higher noise level lie largely within the subspace obtained for the lowest noise level ( $\sigma = 10$ ), as measured by the sum of squares of their projected norms. Results are shown for 10 example clean images.

![](images/a0bd425b4fd22e6e47c6fe1110104a3a24fe6e9239601fd8f4b476a0ad45bc95.jpg)

an approximate projection onto an adaptive signal subspace that preserves image structure, while suppressing the noise.

We can define an "effective dimensionality" of the signal subspace as $d \coloneqq \sum_{i=1}^{N} s_i^2$: the amount of variance captured by applying the linear map to an $N$-dimensional Gaussian noise vector with variance $\sigma^2$, normalized by the noise variance. Indeed, the captured variance equals

$$
E_n||A_y n||^2 = E_n||U S V^T n||^2 = E_n||S n||^2 = E_n \sum_{i=1}^{N} s_i^2 n_i^2 = \sum_{i=1}^{N} s_i^2 E_n(n_i^2) = \sigma^2 \sum_{i=1}^{N} s_i^2,
$$

where $E_{n}$ indicates expectation over the noise $n$ (the second equality uses the orthogonality of $U$, and the third the rotation invariance of the Gaussian distribution of $n$), so that $d = E_{n}||A_{y}n||^{2} / \sigma^{2} = \sum_{i = 1}^{N}s_{i}^{2}$.

When we examine the preserved signal subspace, we find that the clean image lies almost completely within it. For inputs of the form $y \coloneqq x + n$ (where $x$ is the clean image and $n$ the noise), we find that the subspace spanned by the singular vectors up to dimension $d$ contains $x$ almost entirely, in the sense that projecting $x$ onto the subspace preserves most of its energy. This holds for the whole range of noise levels over which the network is trained (Figure 7).
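These quantities are straightforward to compute for a small bias-free network. The sketch below (toy dimensions and random weights, purely illustrative) recovers $A_y$ from the ReLU mask, checks $f_{\mathrm{BF}}(y) = A_y y$ (equation 4), and verifies via Monte Carlo that $E_n||A_y n||^2 = \sigma^2 \sum_i s_i^2$, i.e. that the effective dimensionality is $d = \sum_i s_i^2$:

```python
import numpy as np

rng = np.random.default_rng(4)
N, H = 10, 32  # toy dimensions
W1 = rng.standard_normal((H, N))
W2 = rng.standard_normal((N, H))

def f_bf(y):
    """Bias-free two-layer ReLU network."""
    return W2 @ np.maximum(W1 @ y, 0.0)

y = rng.standard_normal(N)
mask = (W1 @ y > 0).astype(float)
A_y = W2 @ (mask[:, None] * W1)      # Jacobian at y
assert np.allclose(f_bf(y), A_y @ y)  # equation 4: locally linear, no bias

s = np.linalg.svd(A_y, compute_uv=False)
d = np.sum(s ** 2)                   # effective dimensionality of signal subspace

# Monte Carlo check: E||A_y n||^2 = sigma^2 * sum_i s_i^2 for n ~ N(0, sigma^2 I)
sigma = 0.1
n = sigma * rng.standard_normal((200000, N))
emp = np.mean(np.sum((n @ A_y.T) ** 2, axis=1))
assert abs(emp - sigma ** 2 * d) / (sigma ** 2 * d) < 0.05
```

For an actual BF-CNN one would compute the rows of $A_y$ with automatic differentiation rather than from an explicit mask, but the linear-algebraic analysis is identical.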
+ +We also find that for any given clean image, the effective dimensionality of the signal subspace $(d)$ decreases systematically with noise level (Figure 5c). At lower noise levels the network detects a richer set of image features, and constructs a larger signal subspace to capture and preserve them. Empirically, we found that (on average) $d$ is approximately proportional to $\frac{1}{\sigma}$ (see dashed line in Figure 5c). These signal subspaces are nested: the subspaces corresponding to lower noise levels contain more than $95\%$ of the subspace axes corresponding to higher noise levels (Figure 7). + +Finally, we note that this behavior of the signal subspace dimensionality, combined with the fact that it contains the clean image, explains the observed denoising performance across different noise levels (Figure 3). Specifically, if we assume $d \approx \alpha / \sigma$ , the mean squared error is proportional to $\sigma$ : + +$$ +\begin{array}{l} \mathrm {M S E} = E _ {n} | | A _ {y} (x + n) - x | | ^ {2} \\ \approx E _ {n} | | A _ {y} n | | ^ {2} \\ \approx \sigma^ {2} d \\ \approx \alpha \sigma \tag {7} \\ \end{array} +$$ + +Note that this result runs contrary to the intuitive expectation that MSE should be proportional to the noise variance, which would be the case if the denoiser operated by projecting onto a fixed subspace. The scaling of MSE with the square root of the noise variance implies that the PSNR of the denoised image should be a linear function of the input PSNR, with a slope of $1/2$ , consistent with the empirical results shown in Figure 3. Note that this behavior holds even when the networks are trained only on modest levels of noise (e.g., $\sigma \in [0,10]$ ). + +# 7 DISCUSSION + +In this work, we show that removing constant terms from CNN architectures ensures strong generalization across noise levels, and also provides interpretability of the denoising method via linear-algebra techniques. 
We provide insights into the relationship between bias and generalization through a set of observations. Theoretically, we argue that if the denoising network operates by projecting the noisy observation onto a linear space of "clean" images, then that space should include all rescalings of those images, and thus the origin. This property can be guaranteed by eliminating bias from the network. Empirically, in networks that allow bias, the net bias of the trained network is quite small within the training range. However, outside the training range the net bias grows dramatically, resulting in poor performance, which suggests that the bias may be the cause of the failure to generalize. In addition, when we remove bias from the architecture, we preserve performance within the training range, but achieve near-perfect generalization, even to noise levels more than $10\mathrm{x}$ those in the training range. These observations do not fully elucidate how our network achieves its remarkable generalization: only that bias prevents that generalization, and that its removal allows it.

It is of interest to examine whether bias removal can facilitate generalization to noise distributions beyond the Gaussian, as well as to other image-processing tasks, such as image restoration and image compression. We have trained bias-free networks on uniform noise and found that they generalize outside the training range. In fact, bias-free networks trained for Gaussian noise generalize well when tested on uniform noise (Figures 18 and 19). In addition, we have applied our methodology to image restoration (simultaneous deblurring and denoising). Preliminary results indicate that bias-free networks generalize across noise levels for a fixed blur level, whereas networks with bias do not (Figure 20). An interesting question for future research is whether it is possible to achieve generalization across blur levels. Our initial results indicate that removing bias is not sufficient to achieve this.
+ +Finally, our linear-algebraic analysis uncovers interesting aspects of the denoising map, but these interpretations are very local: small changes in the input image change the activation patterns of the network, resulting in a change in the corresponding linear mapping. Extending the analysis to reveal global characteristics of the neural-network functionality is a challenging direction for future research. + +# ACKNOWLEDGEMENTS + +This work was partially supported by the Howard Hughes Medical Institute (HHMI). + +# REFERENCES + +S Grace Chang, Bin Yu, and Martin Vetterli. Adaptive wavelet thresholding for image denoising and compression. IEEE Trans. Image Processing, 9(9):1532-1546, 2000. +Yunjin Chen and Thomas Pock. Trainable nonlinear reaction diffusion: A flexible framework for fast and effective image restoration. IEEE Trans. Patt. Analysis and Machine Intelligence, 39(6): 1256-1272, 2017. +Sungjoon Choi, John Isidoro, Pascal Getreuer, and Peyman Milanfar. Fast, trainable, multiscale denoising. In 2018 25th IEEE International Conference on Image Processing (ICIP), pp. 963-967. IEEE, 2018. +Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian. Image denoising with block-matching and 3d filtering. In Image Processing: Algorithms and Systems, Neural Networks, and Machine Learning, volume 6064, pp. 606414. International Society for Optics and Photonics, 2006. +D Donoho and I Johnstone. Adapting to unknown smoothness via wavelet shrinkage. *J American Stat Assoc*, 90(432), December 1995. +Michael Elad and Michal Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. on Image processing, 15(12):3736-3745, 2006. + +Y Hel-Or and D Shaked. A discriminative approach for wavelet denoising. IEEE Trans. Image Processing, 2008. +Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proc. IEEE Conf. 
Computer Vision and Pattern Recognition, pp. 4700-4708, 2017.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436, 2015.
D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proc. 8th Int'l Conf. Computer Vision, volume 2, pp. 416-423, July 2001.
Peyman Milanfar. A tour of modern image filtering: New insights and methods, both practical and theoretical. IEEE Signal Processing Magazine, 30(1):106-128, 2012.
Grégoire Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, and Klaus-Robert Müller. Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Rec., 65:211-222, 2017.
M Raphan and E P Simoncelli. Optimal denoising in redundant representations. IEEE Trans Image Processing, 17(8):1342-1352, Aug 2008. doi: 10.1109/TIP.2008.925392.
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pp. 234-241. Springer, 2015.
Uwe Schmidt and Stefan Roth. Shrinkage fields for effective image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2774-2781, 2014.
E P Simoncelli and E H Adelson. Noise removal via Bayesian wavelet coring. In Proc 3rd IEEE Int'l Conf on Image Proc, volume I, pp. 379-382, Lausanne, Sep 16-19 1996. IEEE Sig Proc Society. doi: 10.1109/ICIP.1996.559512.
Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman.
Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013. +Carlo Tomasi and Roberto Manduchi. Bilateral filtering for gray and color images. In ICCV, volume 98, 1998. +Zhou Wang, Alan C Bovik, Hamid R Sheikh, Eero P Simoncelli, et al. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Processing, 13(4):600-612, 2004. +Norbert Wiener. Extrapolation, interpolation, and smoothing of stationary time series: with engineering applications. Technology Press, 1950. +Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, and Lei Zhang. Beyond a gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Processing, 26(7): 3142-3155, 2017. +Xiaoshuai Zhang, Yiping Lu, Jiaying Liu, and Bin Dong. Dynamically unfolding recurrent restorer: A moving endpoint control method for image restoration. arXiv preprint arXiv:1805.07709, 2018a. +Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, and Yun Fu. Residual dense network for image restoration. CoRR, abs/1812.10477, 2018b. URL http://arxiv.org/abs/1812.10477. + +# A DESCRIPTION OF DENOISING ARCHITECTURES + +In this section we describe the denoising architectures used for our computational experiments in more detail. + +# A.1 DNCNN + +We implement BF-DnCNN based on the architecture of the Denoising CNN (DnCNN) (Zhang et al., 2017). DnCNN consists of 20 convolutional layers, each consisting of $3 \times 3$ filters and 64 channels, batch normalization (Ioffe & Szegedy, 2015), and a ReLU nonlinearity. It has a skip connection from the initial layer to the final layer, which has no nonlinear units. To construct a bias-free DnCNN (BF-DnCNN) we remove all sources of additive bias, including the mean parameter of the batch-normalization in every layer (note however that the scaling parameter is preserved). + +# A.2 RECURRENT CNN + +Inspired by Zhang et al. 
(2018a), we consider a recurrent framework that produces a denoised image estimate of the form $\hat{x}_t = f(\hat{x}_{t - 1},y_{\mathrm{noisy}})$ at time $t$, where $f$ is a neural network. We use a 5-layer fully convolutional network with $3\times 3$ filters in all layers and 64 channels in each intermediate layer to implement $f$. We initialize the denoised estimate as the noisy image, i.e., $\hat{x}_0\coloneqq y_{\mathrm{noisy}}$. For the version of the network with net bias, we add trainable additive constants to every filter in all but the last layer. During training, we run the recurrence for a maximum of $T$ times, sampling $T$ uniformly at random from $\{1,2,3,4\}$ for each mini-batch. At test time we fix $T = 4$.

# A.3 UNET

Our UNet model (Ronneberger et al., 2015) has the following layers:

1. conv1 - Takes in the input image and maps it to 32 channels with $5 \times 5$ convolutional kernels.
2. conv2 - Input: 32 channels. Output: 32 channels. $3 \times 3$ convolutional kernels.
3. conv3 - Input: 32 channels. Output: 64 channels. $3 \times 3$ convolutional kernels with stride 2.
4. conv4 - Input: 64 channels. Output: 64 channels. $3 \times 3$ convolutional kernels.
5. conv5 - Input: 64 channels. Output: 64 channels. $3 \times 3$ convolutional kernels with dilation factor of 2.
6. conv6 - Input: 64 channels. Output: 64 channels. $3 \times 3$ convolutional kernels with dilation factor of 4.
7. conv7 - Transpose convolution layer. Input: 64 channels. Output: 64 channels. $4 \times 4$ filters with stride 2.
8. conv8 - Input: 96 channels. Output: 64 channels. $3 \times 3$ convolutional kernels. The input to this layer is the concatenation of the outputs of layers conv7 and conv2.
9. conv9 - Input: 32 channels. Output: 1 channel. $5 \times 5$ convolutional kernels.

The structure is the same as in Zhang et al. (2018a), but without recurrence. For the version with bias, we add trainable additive constants to all the layers other than conv9.
This configuration of UNet assumes even width and height, so we remove one row or column from images with odd height or width.

# A.4 SIMPLIFIED DENSENET

Our simplified version of the DenseNet architecture (Huang et al., 2017) has 4 blocks in total. Each block is a fully convolutional 5-layer CNN with $3 \times 3$ filters and 64 channels in the intermediate layers with ReLU nonlinearity. The first three blocks have an output layer with 64 channels, while the last block has an output layer with only one channel. The output of the $i^{th}$ block is concatenated with the input noisy image and then fed to the $(i + 1)^{th}$ block, so the last three blocks have 65 input channels. In the version of the network with bias, we add trainable additive parameters to all the layers except for the last layer in the final block.

# B DATASETS AND TRAINING PROCEDURE

Our experiments are carried out on $180 \times 180$ natural images from the Berkeley Segmentation Dataset (Martin et al., 2001). We use a training set of 400 images. The training set is augmented via downsampling, random flips, and random rotations of patches in these images (Zhang et al., 2017). A test set containing 68 images is used for evaluation. We train DnCNN and its bias-free counterpart on patches of size $50 \times 50$, which yields a total of 541,600 clean training patches. For the remaining architectures, we use patches of size $128 \times 128$ for a total of 22,400 training patches.

We train DnCNN and its bias-free counterpart using the Adam optimizer (Kingma & Ba, 2014) over 70 epochs with an initial learning rate of $10^{-3}$ and a decay factor of 0.5 at the $50^{th}$ and $60^{th}$ epochs, with no early stopping. We train the other models using the Adam optimizer with an initial learning rate of $10^{-3}$ and train for 50 epochs with a learning rate schedule that decreases by a factor of 0.25 if the validation PSNR decreases from one epoch to the next.
We use early stopping and select the model with the best validation PSNR.

# C ADDITIONAL RESULTS

In this section we report additional results of our computational experiments:

- Figure 8 shows the first-order analysis of the residual of the different architectures described in Section A, except for DnCNN, which is shown in Figure 1.
- Figures 9 and 10 visualize the linear and net bias terms in the first-order decomposition of an example image at different noise levels.
- Figure 11 shows the PSNR results for the experiments described in Section 5.
- Figure 12 shows the SSIM results for the experiments described in Section 5.
- Figures 13, 14 and 15 show the equivalent filters at several pixels of two example images for different architectures (see Section 6.1).
- Figure 16 shows the singular vectors of the Jacobian of different BF-CNNs (see Section 6.2).
- Figure 17 shows the singular values of the Jacobian of different BF-CNNs (see Section 6.2).
- Figures 18 and 19 show that networks trained on zero-mean Gaussian noise generalize at test time to zero-mean uniform noise. Experiments follow the procedure described in Section 5, except that the networks are evaluated on a different noise distribution at test time.
- Figure 20 shows the application of BF-CNN and CNN to the task of image restoration, where the image is corrupted with both noise and blur at the same time. We show that BF-CNNs can generalize outside the training range of noise levels for a fixed blur level, but do not outperform CNNs when generalizing to unseen blur levels.

![](images/fd5b2f0d1c6fdebf38a487394159a59685914e61b3baaf42ec989e9b3e98e7c4.jpg)
Figure 8: First-order analysis of the residual of Recurrent-CNN (Section A.2), UNet (Section A.3) and DenseNet (Section A.4) as a function of noise level.
The plots show the magnitudes of the residual and the net bias averaged over the 68 images in the Set68 test set of the Berkeley Segmentation Dataset (Martin et al., 2001) for networks trained over different training ranges. The range of noise levels used for training is highlighted in gray. (left) When the network is trained over the full range of noise levels $(\sigma \in [0,100])$, the net bias is small, growing slightly as the noise increases. (middle and right) When the network is trained over a smaller range $(\sigma \in [0,55]$ and $\sigma \in [0,30])$, the net bias grows explosively for noise levels outside the training range. This coincides with a dramatic drop in performance due to overfitting, reflected in the difference between the residual and the true noise.

![](images/e3344f9013d0dd50febdb0c73fe07896c3f25a94fd71d60d9d4270ac33f8831c.jpg)
Figure 9: Visualization of the decomposition of the output of DnCNN trained for the noise range [0, 55] into linear part and net bias. The noise level $\sigma = 70$ (highlighted by *) is outside the training range. Over the training range, the net bias is small, and the linear part is responsible for most of the denoising effort. However, when the network is evaluated out of the training range, the contribution of the bias increases dramatically, which coincides with a significant drop in denoising performance.

![](images/1cc97d9d0674f5673100ce1b3c4b4450cb3353b8f629b499502ca9c4c5099977.jpg)
Figure 10: Visualization of the decomposition of the output of Recurrent-CNN (Section A.2), UNet (Section A.3) and DenseNet (Section A.4) trained for the noise range [0, 55] into linear part and net bias. The noise level $\sigma = 90$ (highlighted by *) is outside the training range. Over the training range, the net bias is small, and the linear part is responsible for most of the denoising effort.
However, when the network is evaluated out of the training range, the contribution of the bias increases dramatically, which coincides with a significant drop in denoising performance. + +![](images/ae452c81d65b44c72c1159efff2291042ff256044f8247eee733482fe45c7009.jpg) +(a) + +![](images/bcc79a6b5fc00cd3ae8787c19ef3a257355b5af2d3395a1d64e94a4cef674aeb.jpg) +(b) + +![](images/2df14a2a51a0cadc447a4c2eb2ffc3db5538df7661865e4487923e1a91266981.jpg) +(c) +Figure 11: Comparisons of architectures with (red curves) and without (blue curves) a net bias for the experimental design described in Section 5. The performance is quantified by the PSNR of the denoised image as a function of the input PSNR of the noisy image. All the architectures with bias perform poorly out of their training range, whereas the bias-free versions all achieve excellent generalization across noise levels. (a) Deep Convolutional Neural Network, DnCNN (Zhang et al., 2017). (b) Recurrent architecture inspired by DURR (Zhang et al., 2018a). (c) Multiscale architecture inspired by the UNet (Ronneberger et al., 2015). (d) Architecture with multiple skip connections inspired by the DenseNet (Huang et al., 2017). + +![](images/112a62e33b1ea3d8a966cf9e34fffccd6ca76d50804bc8497118ab103aa1b75c.jpg) +(d) + +![](images/6adcf36cb3e72e19cbbed9ccecfd59bca4a00d7841cf095378091e9501f1f508.jpg) +(a) +Figure 12: Comparisons of architectures with (red curves) and without (blue curves) a net bias for the experimental design described in Section 5. The performance is quantified by the SSIM of the denoised image as a function of the input SSIM of the noisy image. All the architectures with bias perform poorly out of their training range, whereas the bias-free versions all achieve excellent generalization across noise levels. (a) Deep Convolutional Neural Network, DnCNN (Zhang et al., 2017). (b) Recurrent architecture inspired by DURR (Zhang et al., 2018a). 
(c) Multiscale architecture inspired by the UNet (Ronneberger et al., 2015). (d) Architecture with multiple skip connections inspired by the DenseNet (Huang et al., 2017). + +![](images/e74189a655fb5e1e0c39738a7d9f90b0b0a73bd827a89ab4a117104e4b11b174.jpg) +(b) + +![](images/d016d59157957f83583652c4e0a09abfde8cf0b508150682053a9a24d30cac1e.jpg) +(c) + +![](images/51d8f9f0274f7aa889f71abad9789dc97f113c5ca3b1004d419ff82b2a80cff3.jpg) +(d) + +![](images/26272887991e14aee00dfc4fc9d1783818e179c12cb44b84007a67713a2fb968.jpg) +Figure 13: Visualization of the linear weighting functions (rows of $A_{y}$ ) of Bias-Free Recurrent-CNN (top 2 rows) (Section A.2), Bias-Free UNet (next 2 rows) (Section A.3) and Bias-Free DenseNet (bottom 2 rows) (Section A.4) for three example pixels of a noisy input image (left). The next image is the denoised output. The three images on the right show the linear weighting functions corresponding to each of the indicated pixels (red squares). All weighting functions sum to one, and thus compute a local average (although some weights are negative, indicated in red). Their shapes vary substantially, and are adapted to the underlying image content. Each row corresponds to a noisy input with increasing $\sigma$ and the filters adapt by averaging over a larger region. + +![](images/a7e760679b4b74ef615c3ae7adb4c5359ee9427c25a5c76313b5561294e94fd7.jpg) +Figure 14: Visualization of the linear weighting functions (rows of $A_{y}$ ) of a BF-DnCNN for three example pixels of a noisy input image (left). The next image is the denoised output. The three images on the right show the linear weighting functions corresponding to each of the indicated pixels (red squares). All weighting functions sum to one, and thus compute a local average (although some weights are negative, indicated in red). Their shapes vary substantially, and are adapted to the underlying image content. 
Each row corresponds to a noisy input with increasing $\sigma$ and the filters adapt by averaging over a larger region. + +![](images/48e325223942d2d95675569d36005a2c2259194269c711fa5f564348bde00ea1.jpg) +Figure 15: Visualization of the linear weighting functions (rows of $A_{y}$ ) of Bias-Free Recurrent-CNN (top 2 rows) (Section A.2), Bias-Free UNet (next 2 rows) (Section A.3) and Bias-Free DenseNet (bottom 2 rows) (Section A.4) for three example pixels of a noisy input image (left). The next image is the denoised output. The three images on the right show the linear weighting functions corresponding to each of the indicated pixels (red squares). All weighting functions sum to one, and thus compute a local average (although some weights are negative, indicated in red). Their shapes vary substantially, and are adapted to the underlying image content. Each row corresponds to a noisy input with increasing $\sigma$ and the filters adapt by averaging over a larger region. + +![](images/4a292da4eead5f226ec9fc0f0c94fb043617a05d6f416643219483f46e2c937b.jpg) +Figure 16: Visualization of left singular vectors of the Jacobian of a BF Recurrent CNN (top 2 rows), BF UNet (next 2 rows) and BF DenseNet (bottom 2 rows) evaluated on three different images, corrupted by noise with standard deviation $\sigma = 25$ . The left column shows original (clean) images. The next three columns show singular vectors corresponding to non-negligible singular values. The vectors capture features from the clean image. The last three columns on the right show singular vectors corresponding to singular values that are almost equal to zero. These vectors are noisy and unstructured. + +![](images/6293239c479b90ad8be0c5021df57b60bab560ba2a9430e7e70fe18f98cc9981.jpg) +(a) + +![](images/a18097199673e43809eec2606316441c859815712bc379e857e9dfa8da2a8de6.jpg) +(b) +Figure 17: Analysis of the SVD of the Jacobian of BF-CNN for ten natural images, corrupted by noise of standard deviation $\sigma = 50$ . 
For all images, a large proportion of the singular values are near zero, indicating (approximately) a projection onto a subspace (the signal subspace). (a) Recurrent architecture inspired by DURR (Zhang et al., 2018a). (b) Multiscale architecture inspired by the UNet (Ronneberger et al., 2015). (c) Architecture with multiple skip connections inspired by the DenseNet (Huang et al., 2017).

![](images/5d903f86a122db1d03815a2476eb45dedd0f81b47c8480bd58caf66c36f06163.jpg)
(c)

![](images/1edb3849e3985fe90e57bb455a2bb4f6464932d8c9df5299aa345dbcfbad3a2d.jpg)
Figure 18: Comparison of the performance of a CNN and a BF-CNN with the same architecture for the experimental design described in Section 5. The networks are trained using i.i.d. Gaussian noise but evaluated on noise drawn i.i.d. from a uniform distribution with mean 0. The performance is quantified by the PSNR of the denoised image as a function of the input PSNR of the noisy image. All the architectures with bias perform poorly out of their training range, whereas the bias-free versions all achieve excellent generalization across noise levels, i.e., they are able to generalize across the two different noise distributions. The CNN used for this example is DnCNN (Zhang et al., 2017); using alternative architectures yields similar results (see Figure 19).

![](images/6269a96d523ba18dbfb4e51f069a7f7e44c4b6b3b1730775ce9426e92a526d6d.jpg)

![](images/1b1d8aa034dd7fd49a4e938aec35db9f64d7be14b17ec6325e39313998276122.jpg)

![](images/562e5d807e48d93928b06efd91ff8ac215bdee9aaf43d56f1c4e9147cf46e8f0.jpg)

![](images/208205f9c82808e64c4eccad92506c7c91c3eec3de9016dbd7091a9bfbacdbb2.jpg)
(a)

![](images/56bb3a65173bf38f977f3b1e6c344fdb10ea83dc4bc17ddfbb607127e1f3184c.jpg)
(b)

![](images/33d680dd8a660bae9a4ec89cb73e5d2feda8ef2defe59ec772e660450a54ce54.jpg)
(c)
Figure 19: Comparisons of architectures with (red curves) and without (blue curves) a net bias for the experimental design described in Section 5.
The networks are trained using i.i.d. Gaussian noise but evaluated on noise drawn i.i.d. from a uniform distribution with mean 0. The performance is quantified by the PSNR of the denoised image as a function of the input PSNR of the noisy image. All the architectures with bias perform poorly out of their training range, whereas the bias-free versions all achieve excellent generalization across noise levels, i.e., they are able to generalize across the two different noise distributions. (a) Deep Convolutional Neural Network, DnCNN (Zhang et al., 2017). (b) Recurrent architecture inspired by DURR (Zhang et al., 2018a). (c) Multiscale architecture inspired by the UNet (Ronneberger et al., 2015). (d) Architecture with multiple skip connections inspired by the DenseNet (Huang et al., 2017).

![](images/fd4803a6de6d6c7e3abf46e0396ad14f34bf95895a93373f51fe87c8d5e4170e.jpg)
(d)

![](images/bc0b3e6286a8ae0def00f6a4387be51c53d9041c7c3742ebd37473c412ed75c9.jpg)
Figure 20: Comparison of the performance of DnCNN and a corresponding BF-CNN for image restoration. Training is carried out on data corrupted with Gaussian noise $\sigma_{\mathrm{noise}} \in [0,55]$ and Gaussian blur $\sigma_{\mathrm{blur}} \in [0,4]$. Performance is measured on test data inside and outside the training ranges. Left: The difference in performance, measured as $\Delta \mathrm{PSNR} = \mathrm{PSNR}_{\mathrm{BF - CNN}} - \mathrm{PSNR}_{\mathrm{DnCNN}}$. The training region is indicated by the rectangular boundary. The bias-free network generalizes across noise levels at each fixed blur level, whereas DnCNN does not. However, BF-CNN does not generalize across blur levels. Right: A horizontal slice of the left plot for a fixed blur level of $\sigma_{\mathrm{blur}} = 2$. BF-CNN generalizes robustly beyond the training range, while the performance of DnCNN degrades significantly.
![](images/0ecd1bcf905e0c63c5cb53172a847b325bcc7ac4e1f017248b7afbc51303f7f9.jpg)

diff --git a/robustanomalydetectionandbackdoorattackdetectionviadifferentialprivacy/full.md b/robustanomalydetectionandbackdoorattackdetectionviadifferentialprivacy/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..3bb921825ed51cd32d26d1b2037e1f1fe7f8acc0
--- /dev/null
+++ b/robustanomalydetectionandbackdoorattackdetectionviadifferentialprivacy/full.md

# ROBUST ANOMALY DETECTION AND BACKDOOR ATTACK DETECTION VIA DIFFERENTIAL PRIVACY

Min Du, Ruoxi Jia, Dawn Song

University of California, Berkeley

{min.du,ruoxijia,dawnsong}@berkeley.edu

# ABSTRACT

Outlier detection and novelty detection are two important topics for anomaly detection.
Suppose the majority of a dataset is drawn from a certain distribution; outlier detection and novelty detection both aim to detect data samples that do not fit the distribution. Outliers refer to data samples within this dataset, while novelties refer to new samples. Meanwhile, backdoor poisoning attacks on machine learning models are carried out by injecting poisoning samples into the training dataset, which could be regarded as "outliers" that are intentionally added by attackers. Differential privacy has been proposed to avoid leaking any individual's information when aggregated analysis is performed on a given dataset. It is typically achieved by adding random noise, either directly to the input dataset, or to intermediate results of the aggregation mechanism. In this paper, we demonstrate that applying differential privacy can improve the utility of outlier detection and novelty detection, with an extension to detect poisoning samples in backdoor attacks. We first present a theoretical analysis of how differential privacy helps with the detection, and then conduct extensive experiments to validate the effectiveness of differential privacy in improving outlier detection, novelty detection, and backdoor attack detection.

# 1 INTRODUCTION

Given a dataset where most of the samples are from a certain distribution, outlier detection aims to detect the minority of samples in the dataset that are far from the distribution, while the goal of novelty detection is to detect newly observed data samples that do not fit the distribution. On the other hand, poisoning examples that are intentionally added by attackers to achieve backdoor attacks could be treated as one type of "outliers" in the training dataset.
Using machine learning for outlier/novelty detection typically means training a model that learns the distribution from which the training samples are drawn, so that the trained model gives a high anomaly score to the outliers/novelties that deviate from that distribution. In both cases, the machine learning model is not supposed to learn from the outliers in the training dataset. Unfortunately, deep learning models that contain millions of parameters tend to remember too much (Song et al. [2017]), and can easily overfit to rare training samples (Carlini et al. [2018]).

Protecting data privacy has been a major concern in many applications, because sensitive data are being collected and analyzed. Differential privacy has been proposed to "hide" certain input data from the output; that is, by looking at the statistical results calculated from input data, one cannot tell whether the input data contain a certain record or not. The way of applying differential privacy is to add random noise to the input data or to the data analysis procedure, such that the output difference caused by the input difference can be hidden by the noise. A known fact is that differential privacy implies stability (Kasiviswanathan et al. [2011]). In particular, a differentially private learning algorithm is stable in the sense that the model learned by the algorithm is insensitive to the removal or replacement of an arbitrary point in the training dataset (Bousquet & Elisseeff [2002]). When the training dataset contains a handful of outliers, the output model of a stable learning algorithm should be close to the one trained on the clean portion of the training set. Intuitively, compared with the model trained on the contaminated dataset, the one trained on clean data could be better at distinguishing outliers from normal data. Therefore, differential privacy can potentially be leveraged to improve the identification of outliers.
This motivates us to apply differential privacy to anomaly detection and defense against backdoor attacks.

Our contribution. First, we present a theoretical explanation of why differential privacy can help to detect outliers in a training and testing dataset, as well as an analysis of the relationship between the number of outliers to detect and the amount of random noise to apply. Second, to demonstrate the effectiveness, we apply differential privacy to an autoencoder network trained on a constructed MNIST dataset with injected outliers, for both outlier detection and novelty detection, to show how much the utility can be improved with different amounts of outliers and noise. Third, we apply differential privacy to a real-world task: Hadoop file system log anomaly detection. System log anomaly detection is an important topic in computer security. Our proposed method greatly improves upon the state-of-the-art system in this field. The results indicate that differential privacy is able to eliminate almost all the false negatives, and achieves significantly improved overall utility compared with the current state-of-the-art work DeepLog (Du et al. [2017]). Finally, via a proof-of-concept experiment using the MNIST dataset with injected poisoning samples, we show that the idea of outlier detection can be extended to backdoor attack detection, and that differential privacy is able to further improve the performance.

# 2 PRELIMINARY

Given an input dataset and an aggregation mechanism, differential privacy (Dwork [2011]) aims to output the requested aggregation results, which are guaranteed not to reveal the participation of any individual data record. Formally, differential privacy is defined as follows:

Definition 1 (Differential privacy).
A randomized mechanism $\mathcal{M}$ applied to data domain $\mathbb{D}$ is said to be $(\epsilon, \delta)$-differentially private if for any adjacent datasets $d$, $d'$ in $\mathbb{D}$, and any subset of outputs $\mathcal{S} \subseteq \text{Range}(\mathcal{M})$, it holds that

$$
\Pr [ \mathcal {M} (d) \in \mathcal {S} ] \leq e ^ {\epsilon} \Pr [ \mathcal {M} (d ^ {\prime}) \in \mathcal {S} ] + \delta ,
$$

where $\epsilon$ stands for the privacy bound, and $\delta$ stands for the probability of breaking this bound.

The adjacent datasets $d$, $d'$ can be understood as two databases in which only one record differs, i.e., $\| d - d' \|_1 = 1$. Differential privacy guarantees that the difference between $d$ and $d'$ is not revealed through inspecting the outputs $\mathcal{M}(d)$ and $\mathcal{M}(d')$. Clearly, the closer $\epsilon$ is to 0, the more indistinguishable $\mathcal{M}(d)$ and $\mathcal{M}(d')$ are, and hence the stronger the privacy guarantee is.

A common approach to enforcing differential privacy for a function $f: \mathbb{D} \to \mathbb{R}$ is to add random Gaussian noise $\mathcal{N}(0, \sigma^2)$ to perturb the output in $\mathbb{R}$. The intuition is that, for given adjacent datasets $d$ and $d'$, one cannot tell whether the difference between $f(d)$ and $f(d')$ is incurred by the single record that differs in $d$ and $d'$, or by the random noise being applied. The magnitude of the Gaussian noise needs to be tailored to the maximum difference between $f(d)$ and $f(d')$, which is formally defined as the $\mathcal{L}_2$-sensitivity.

Definition 2 ($\mathcal{L}_2$-sensitivity). The $\mathcal{L}_2$-sensitivity of a function $f: \mathbb{D} \to \mathbb{R}$ is:

$$
\Delta = \max_{\substack{d,d^{\prime}\in \mathbb{D}\\ \| d - d^{\prime}\|_{1} = 1}}\| f(d) - f(d^{\prime})\|_{2}
$$

The noise scale $\sigma$ to apply can be calculated as below (Dwork et al. [2014]).

Theorem 1.
To perturb a function with sensitivity $\Delta$ under $(\epsilon, \delta)$-differential privacy, the minimum noise scale $\sigma$ of the Gaussian mechanism is given by: $\sigma = \frac{\Delta}{\epsilon} \cdot \sqrt{2 \ln \frac{1.25}{\delta}}$, where $\epsilon \in (0,1)$.

Deep learning with differential privacy (Abadi et al. [2016]) The procedure of deep learning model training is to minimize the output of a loss function through numerous stochastic gradient descent (SGD) steps. Abadi et al. [2016] proposed a differentially private SGD algorithm that works as follows. At each SGD step, a fixed number of randomly selected training samples are used as a mini-batch. For each mini-batch, the following two operations are performed to enforce differential privacy: 1) clip the norm of the gradient of each example, with a clipping bound $C$, to limit the sensitivity of the gradient; 2) sum the clipped per-example gradients and add Gaussian noise $\mathcal{N}(0,\sigma^2)$ before updating the model parameters. Abadi et al. [2016] further proposed a moments accountant mechanism, which calculates the aggregate privacy bound when performing SGD for multiple steps. Differential privacy is immune to post-processing; therefore, the output of the trained model for any query enjoys the same privacy guarantee as the above SGD-based training process.

# 3 THE CONNECTION BETWEEN DIFFERENTIAL PRIVACY AND OUTLIER DETECTION

By definition, the random noise added into model training for differential privacy hides the influence of a single record on the learned model. Intuitively, if differential privacy is applied to the training process, the contribution of rare training examples will be hidden by the random noise, resulting in a model that underfits the outliers. Such a model facilitates novelty and outlier detection because it is less confident in predicting the atypical examples.
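As a concrete illustration of Theorem 1, the Gaussian-mechanism calibration can be computed directly. This is a minimal sketch; the helper name and the example parameter values are ours, not from the paper:

```python
import math

def gaussian_noise_scale(sensitivity, epsilon, delta):
    """Minimum Gaussian noise scale sigma for (epsilon, delta)-differential
    privacy (Theorem 1); the bound is stated for epsilon in (0, 1)."""
    assert 0 < epsilon < 1, "Theorem 1 assumes epsilon in (0, 1)"
    return (sensitivity / epsilon) * math.sqrt(2 * math.log(1.25 / delta))

# With sensitivity Delta = 1 (e.g., after gradient clipping with bound C = 1):
sigma = gaussian_noise_scale(1.0, epsilon=0.5, delta=1e-5)
```

Note that halving $\epsilon$ doubles the required noise scale, which is exactly the privacy-utility trade-off discussed in this section.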
In this section, we first present a theorem to precisely characterize the above intuition, and then analyze the relationship between the number of outliers in the training dataset and the amount of noise to apply. + +Notations Let $\mathcal{Z}$ be the sample space and $\mathcal{H}$ be the hypothesis space. The loss function $l: \mathcal{H} \times \mathcal{Z} \to \mathbb{R}$ measures how well the hypothesis $h \in \mathcal{H}$ explains a data instance $z \in \mathcal{Z}$ . A learning algorithm $\mathcal{A}: \mathcal{Z}^n \to \mathcal{H}$ learns some hypothesis $\mathcal{A}(S)$ given a set $S$ of $n$ samples. For instance, in supervised learning problems, $\mathcal{Z} = \mathcal{X} \times \mathcal{Y}$ , where $\mathcal{X}$ is the feature space and $\mathcal{Y}$ is the label space; $\mathcal{H}$ is a collection of models $h: \mathcal{X} \to \mathcal{Y}$ ; and $l(h, z)$ measures how well $h$ predicts the feature-label relationship $z = (x, y)$ . + +Let $S = \{z_1, \ldots, z_n\}$ be a set of independent samples drawn from an unknown distribution $\mathcal{D}$ on $\mathcal{Z}$ . For a given distribution $\mathcal{D}$ , an oracle hypothesis is the one that minimizes the expected loss: + +$$ +h ^ {*} = \underset {h} {\arg \min } \mathbb {E} _ {z \sim \mathcal {D}} [ l (h, z) ] \tag {1} +$$ + +We define an outlier as a data instance that has significantly different loss from the population under the oracle hypothesis. + +Definition 3. We say $\tilde{z}$ is an outlier with regard to distribution $\mathcal{D}$ if + +$$ +l \left(h ^ {*}, \tilde {z}\right) - \mathbb {E} _ {z \sim \mathcal {D}} [ l \left(h ^ {*}, z\right) ] \geq T \tag {2} +$$ + +where $T$ is a constant that depends only on $\mathcal{D}$ . + +We will prove the usefulness of differential privacy to detect outliers for the classes of learning algorithms that produce hypotheses converging to the optimal hypothesis asymptotically pointwise. 
We define such learning algorithms to be uniformly asymptotic empirical risk minimization (UAERM). + +Definition 4. A (possibly randomized) learning algorithm $\mathcal{A}$ is UAERM with rate $\xi(n, \mathcal{A})$ if for any distribution $\mathcal{D}$ defined on the domain $\mathcal{Z}$ , it holds that + +$$ +\forall z \quad | \mathbb {E} _ {\mathcal {S} \sim \mathcal {D} ^ {n}} \mathbb {E} _ {h \sim \mathcal {A} (S)} l (h, z) - l \left(h ^ {*}, z\right) | \leq \xi (n, \mathcal {A}) \tag {3} +$$ + +In the definition, we make it explicit that the rate $\xi (n,\mathcal{A})$ depends on the learning algorithm $\mathcal{A}$ . For instance, if $\mathcal{A}$ is a differentially private learning algorithm, the rate will depend on the privacy parameters. In that case, with slight abuse of notation, we will denote the rate for a $(\epsilon ,\delta)$ -differentially private learning algorithm trained on $n$ data instances by $\xi (n,\epsilon ,\delta)$ . + +Due to the nonconvexity of their loss functions, neural networks may not enjoy a useful, tight characterization of the learning rate. Thus, we will empirically verify that using noisy SGD to learn differentially private neural networks is UAERM. Moreover, as we will show in the experiment, $\xi (n,\epsilon ,\delta)$ grows as privacy parameters $\epsilon$ and $\delta$ become smaller. Intuitively, this is because larger noise is required to ensure stronger privacy guarantees, which, on the other hand, slows down the convergence of the learning algorithm. + +Without loss of generality, we assume that $0 \leq l(h,z) \leq 1$ . The following theorem exhibits how the prediction performance of differentially private models on normal data will differ from outliers and connects the difference to the privacy parameters of the learning algorithm and the amount of outliers in the training data. + +Theorem 2. 
Suppose that a learning algorithm $\mathcal{A}$ is $(\epsilon, \delta)$ -differentially private and UAERM with the rate $\xi(n, \epsilon, \delta)$ . Let $S' = S \cup U$ , where $S \sim \mathcal{D}^n$ and $U$ contains $c$ arbitrary outliers. Then + +$$ +\begin{array}{l} \mathbb {E} _ {h \sim \mathcal {A} (S ^ {\prime})} l (h, \tilde {z}) - \mathbb {E} _ {h \sim \mathcal {A} (S ^ {\prime})} \mathbb {E} _ {z \sim \mathcal {D}} l (h, z) \\ \geq T - 2 \left(\xi (n, \epsilon , \delta) + \sqrt {\frac {n \left(e ^ {\epsilon} - 1 + \delta\right) ^ {2}}{2} \log \frac {2}{\gamma}} + e ^ {c \epsilon} - 1 + c e ^ {c \epsilon} \delta\right) \tag {4} \\ \end{array} +$$ + +with probability at least $1 - \gamma$ + +The two terms in the left-hand side of (4) represent the model's prediction loss on outliers and normal test data drawn from $\mathcal{D}$ , respectively. Due to the stochasticity of differentially private learning algorithms, the difference is characterized by the expectation taken over the randomness of the learned models. The theorem establishes a lower bound on the prediction performance difference between normal and outlier data. A larger difference indicates that identifying outliers will be easier. + +The impact of privacy parameters on the lower bound manifests itself in two aspects. On one hand, stronger privacy guarantees (i.e., smaller $\epsilon$ and $\delta$ ) will require higher noise to be added into the training process, which increases the learning rate $\xi(n,\epsilon,\delta)$ . On the other hand, increasing privacy level will improve the stability of the learning algorithm; the resulting models tend to ignore the outliers in the training set and become closer to the ones trained on completely clean data, thus making the outlier detection more effective. The second aspect is embodied by the fact that the terms except $\xi(n,\epsilon,\delta)$ in the parenthesis of the lower bound grow with $\epsilon$ and $\delta$ . 
Therefore, the privacy parameters can be neither too large nor too small if optimal anomaly detection performance is to be ensured.

Moreover, the relationship between the right-hand side of (4) and $c$ indicates that the anomaly detection problem becomes more difficult with more outliers in the training set. Dissecting the right-hand side of (4), we further observe that $c$ always appears in tandem with $\epsilon$. This implies that for a larger number of outliers in the training set (i.e., larger $c$), we need to tune down $\epsilon$ and $\delta$ to maintain the same novelty detection performance.

Last but not least, the definition of outliers in our paper is quite general: it does not make any assumptions about how the outliers are generated. Also, we do not make assumptions about whether these outliers are in training or test data. Therefore, our analysis can shed light on detecting various types of anomalies, including but not limited to outlier/novelty detection, backdoor detection, and noisy-label detection. In the following experimental section, we focus our evaluation on outlier/novelty detection and defense against backdoor attacks.

# 4 EXPERIMENTS

This section empirically evaluates the effectiveness of differential privacy in improving anomaly detection and backdoor attack detection. We refer to an outlier/novelty or a poisoning example as a positive, and to normal data samples as negatives. The metrics measured in each experiment include: false positives (FP), false negatives (FN), Precision = TP / (TP + FP), Recall = TP / (TP + FN), the area under the receiver operating characteristic (TPR vs. FPR) curve (AUROC), the area under the Precision-Recall curve (AUPR), which summarizes the Precision-Recall curve, as well as the F-measure $= 2 \times$ Precision $\times$ Recall / (Precision + Recall). Detailed explanations of all these measures can be found in (Wikipedia [2019b;a]; scikit-learn [2017a;b]).
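The count-based metrics above can be sketched in a few lines; the function name and example counts are ours, for illustration only:

```python
def detection_metrics(tp, fp, fn):
    """Precision, Recall, and F-measure from detection counts, where
    outliers/novelties/poisoning examples are the positive class."""
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if (precision + recall) > 0 else 0.0)
    return precision, recall, f_measure

# Example: 90 true outliers flagged, 10 normal samples falsely flagged,
# and 10 outliers missed.
p, r, f = detection_metrics(tp=90, fp=10, fn=10)
```

AUROC and AUPR are obtained by sweeping the detection threshold and integrating the resulting TPR-FPR and Precision-Recall curves, respectively.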
# 4.1 OUTLIER DETECTION AND NOVELTY DETECTION WITH AUTOENCODERS

An autoencoder is a type of neural network that has been widely used for outlier detection and novelty detection. It contains an encoder network, which reduces the dimension of the input data, and a decoder network, which aims to reconstruct the input. Hence, the learning goal of autoencoders is to minimize the reconstruction error, which consequently serves as the loss function. Because the dimensionality reduction brings information loss, and the learning goal encourages preserving the information that is common to most training samples, outliers that contain rare information can be identified by measuring the model loss. In this section, with varying amounts of outliers and noise scales, we show how differential privacy improves the utility of anomaly detection with autoencoders.

Datasets. We utilize the MNIST dataset, composed of handwritten digits 0-9, and the notMNIST dataset (Kaggle [2017]), which contains letters A-J in different fonts. The original MNIST data contain 60,000 training images and 10,000 test images, which we refer to as MNIST-train and MNIST-test respectively. The notMNIST data contain 10,000 training images and 1,000 test images, denoted as notMNIST-train and notMNIST-test. Based on these datasets, we intentionally construct training datasets with varying amounts of injected outliers. Specifically, each training dataset is constructed with a particular outlier ratio $r_o$ , such that the resulting dataset MNIST-OD-train( $r_o$ ) contains 60,000 images in total, of which a fraction $1 - r_o$ are from MNIST-train and $r_o$ are from notMNIST-train. For each training dataset MNIST-OD-train( $r_o$ ), a set of autoencoder models are trained with varying noise scales $\sigma$ applied for differential privacy. For an autoencoder model trained on MNIST-OD-train( $r_o$ ), outlier detection is thus to detect the $r_o \times 60{,}000$ outliers in MNIST-OD-train( $r_o$ ).
For novelty detection, we further construct a test dataset MNIST-ND-test, composed of the entire MNIST-test and notMNIST-test datasets, a total of 11,000 images. The goal of novelty detection is to identify the 1,000 notMNIST-test images as novelties.

Evaluation metrics. To check whether a data sample is an outlier/novelty using autoencoders, the standard practice is to set a loss threshold based on training statistics; any sample whose loss is above this threshold is an outlier/novelty. To measure performance across different thresholds, we use the AUPR score, a threshold-independent metric. Compared with other metrics such as the AUROC score, the AUPR score is more informative when the positive/negative classes are highly unbalanced, e.g., for outlier detection where the ratio of outliers is extremely low. Additional experimental settings with both AUPR and AUROC metrics are reported in the appendix and show similar observations.

Set-up. For autoencoders, the encoder network contains 3 convolutional layers with max pooling, while the decoder network contains 3 corresponding upsampling layers. For differential privacy, we use a clipping bound $C = 1$ and $\delta = 10^{-5}$ , and vary the noise scale $\sigma$ as in (Abadi et al. [2016]). All models are trained with a learning rate of 0.15, a mini-batch size of 200, and for a total of 60 epochs.

![](images/43d1758e7c2c9828ca94d7f9f5159b02a7102b7b28dffb8ff0e6362780e77183.jpg)
Figure 1: The largest test-loss difference between differentially private models trained on random subsets of the training data and the oracle hypothesis.
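The loss-threshold rule from the evaluation-metrics paragraph above can be sketched as follows (a minimal, framework-agnostic illustration; `reconstruct` is a hypothetical stand-in for the trained encoder-decoder pipeline, and images are flattened to lists of pixel values):

```python
# Sketch of loss-based detection with a trained autoencoder.
def reconstruction_error(x, reconstruct):
    """Mean squared reconstruction error of one flattened sample."""
    x_hat = reconstruct(x)
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def flag_outliers(samples, reconstruct, threshold):
    # A sample whose reconstruction loss exceeds the threshold is flagged
    # as an outlier/novelty.
    return [reconstruction_error(x, reconstruct) > threshold for x in samples]

# Toy usage with a stand-in "model" that always outputs the mean image:
# normal samples reconstruct well, the binary-looking outlier does not.
flags = flag_outliers(
    samples=[[0.5, 0.5, 0.6, 0.4], [0.0, 1.0, 0.0, 1.0]],
    reconstruct=lambda x: [0.5] * len(x),  # hypothetical reconstruction
    threshold=0.1,
)
```

The threshold itself is chosen from training-loss statistics; the AUPR evaluation below sidesteps this choice by sweeping over all thresholds.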
| noise scale $\sigma$ | OD ($r_o=0.1\%$) | ND ($r_o=0.1\%$) | OD ($r_o=0.5\%$) | ND ($r_o=0.5\%$) | OD ($r_o=5\%$) | ND ($r_o=5\%$) | OD ($r_o=10\%$) | ND ($r_o=10\%$) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| N/A | 99.92 | 99.77 | 92.12 | 98.81 | 84.33 | 88.18 | 72.16 | 68.14 |
| 0 | 99.89 | 99.83 | 98.3 | 99.69 | 83.86 | 87.91 | 77.8 | 74.74 |
| 0.01 | 100 | 99.97 | 94.92 | 99.23 | 90.79 | 93.34 | 85.41 | 84.07 |
| 0.1 | 100 | 99.85 | 98.44 | 99.66 | 92.23 | 94.21 | 85.56 | 83.98 |
| 1 | 100 | 99.78 | 98.28 | 99.67 | 94.92 | 96.87 | 81.87 | 80.12 |
| 5 | 99.87 | 99.49 | 98.51 | 99.52 | 96.78 | 98.04 | 95.25 | 95.41 |
| 10 | 90.24 | 97.77 | 91.88 | 98.2 | 96.6 | 98.2 | 97.07 | 97.46 |
| 50* | 65.94 | 92.13 | 70.34 | 90.8 | 86.58 | 91.59 | 88.49 | 90.27 |

*This $\sigma$ value is so large that the model does not converge well. $r_o$ is the outlier percentage in the training data.
Table 1: AUPR scores for outlier detection (OD) and novelty detection (ND). $\sigma = 0$ indicates applying the clipping bound only.

Validation of UAERM for Noisy SGD. To begin with, we conduct experiments to empirically validate our assumption in Theorem 2. While a rigorous verification of the assumption is intractable, as it requires knowledge of the underlying data distribution and computing the expected loss over the randomness of both the data distribution and the differentially private algorithm, our experiments provide a sanity check of the assumption by replacing the expectation with the empirical average over a large number of data samples. For this set of experiments, we only utilize MNIST data for training, while the test dataset contains all available MNIST and notMNIST test samples. The oracle hypothesis is trained on all available training data, while each differentially private model is trained with a varying privacy level $\epsilon$ and training data size. For a fixed training set and $\epsilon$ , we perform training multiple times to accommodate the randomness of differentially private training. Further, we train on multiple randomly selected training sets of the same size. We measure the loss of each resulting model on the test set, and calculate the average test loss across different runs of differentially private training and different randomly selected training sets of the same size. We then compute the largest difference between the averaged test loss and the test loss associated with the oracle hypothesis. The results are shown in Figure 1. Each data point in the figure is an average over 9 differentially private models: 3 randomly sampled subsets of the training data, with 3 random training processes for each sampled subset. As shown in Figure 1, the larger the training data size and the larger $\epsilon$ , the closer the randomized model is to the oracle hypothesis, validating our assumption in Theorem 2 that noisy SGD is UAERM.
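For reference, one step of the noisy SGD mechanism considered here (per-example gradient clipping followed by Gaussian noise, in the style of Abadi et al. [2016]) can be sketched as follows; representing gradients as flat Python lists is a simplification, not the actual implementation:

```python
import math
import random

def noisy_sgd_step(params, per_example_grads, lr, C, sigma, rng=random):
    """One differentially private gradient step (sketch).

    Each example's gradient is clipped to L2 norm at most C, the clipped
    gradients are summed, Gaussian noise with std sigma*C is added, and the
    noisy average is applied as an ordinary SGD update.
    """
    clipped_sum = [0.0] * len(params)
    for g in per_example_grads:
        norm = math.sqrt(sum(v * v for v in g))
        scale = min(1.0, C / max(norm, 1e-12))  # clip to norm C
        for j, v in enumerate(g):
            clipped_sum[j] += v * scale
    batch = len(per_example_grads)
    return [
        p - lr * (s + rng.gauss(0.0, sigma * C)) / batch
        for p, s in zip(params, clipped_sum)
    ]

# With sigma = 0 this reduces to the clipping-only ablation (the sigma = 0 row
# of Table 1): a gradient [3, 4] with norm 5 is scaled down to [0.6, 0.8].
params = noisy_sgd_step([0.0, 0.0], [[3.0, 4.0]], lr=1.0, C=1.0, sigma=0.0)
```

The clipping bound $C$ limits how much any single example, including an outlier, can move the parameters, while the noise scale $\sigma$ controls the privacy level.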
Detection results. Table 1 shows the outlier detection (OD) results on dataset MNIST-OD-train, as well as the novelty detection (ND) results on dataset MNIST-ND-test. OD mimics the unsupervised anomaly detection case. ND mimics the case where the autoencoder model is supposed to be trained on normal data to detect unforeseen anomalies, while the training dataset is noisy. The first row, where $\sigma = \mathrm{N/A}$ , is the baseline model without differential privacy. It performs well when $r_o = 0.1\%$ , but drops significantly once $r_o$ reaches $0.5\%$ . This is because, for the mini-batch size of 200 that we use, an outlier ratio of $0.5\%$ in the training data results in an average of one outlier per mini-batch, which can be learned by the baseline model. Note that the clipping bound $C = 1$ also restricts the contribution of outliers in SGD steps. We conduct an ablation study which only clips the per-example gradients with $C$ without adding any noise in each gradient descent step. The results are shown as $\sigma = 0$ in Table 1. As an intermediate step for bounding the sensitivity in differential privacy, clipping by itself slightly improves the anomaly detection results in most cases. Still, we show that increasing the noise scale can further improve the utility. Examining the best results in each column, we find that the trend follows our analysis in Theorem 2. Specifically, the more outliers in the training dataset, the larger the noise scale needed for the best improvement. As explained for (4), our theory shows that the privacy parameters can be neither too large nor too small to ensure optimal anomaly detection performance, which coincides with the experimental results in Table 1.
Although it can be challenging to select the desired noise level for training, we note that, as shown in Table 1, applying differential privacy effectively improves the anomaly detection performance in most cases, except when $\sigma$ is so large that it ruins the model parameters completely (e.g., $\sigma = 50$ ). Therefore, it is generally safe and almost always helpful to apply a small amount of differential privacy noise for anomaly detection. The noise scale can be increased further as long as the model converges. However, it should be noted that applying differential privacy makes model training much slower than the baseline. In our experience with NVIDIA Tesla V100 SXM2 GPU cards, the training time for each epoch can be up to 80 times longer. Finally, a portion of the training data as high as $10\%$ might not be "outliers", but rather part of the input pattern that should be learned by the model. We show that in this case a relatively large noise scale can still effectively improve the anomaly detection results (e.g., $\sigma = 10$ ), but whether to apply it depends on the requirements of the application.

# 4.2 HADOOP FILE SYSTEM LOG ANOMALY DETECTION WITH LSTM

In this section, we use a real-world task, Hadoop file system log anomaly detection, to show how anomaly detection with differential privacy outperforms the current state-of-the-art results.

Dataset. The Hadoop file system (HDFS) log dataset (Wei Xu [2009]) was generated by running Hadoop map-reduce jobs for 48 hours on 203 Amazon EC2 nodes. This dataset contains over 11 million log entries, which can be grouped into 575,059 block sessions by the block identifier carried in each log entry. Each block is associated with a normal/abnormal label provided by domain experts. Over the past decade this log dataset has been extensively used for research in system log anomaly detection (Xu et al. [2009]; Lou et al. [2010]; Du et al. [2017]).
The state-of-the-art results are achieved by DeepLog (Du et al. [2017]), which we use as the baseline model. As in DeepLog, our training dataset contains 4,855 normal block sessions, while the test dataset includes 553,366 normal sessions and 16,838 abnormal sessions.

Baseline model and metrics. DeepLog utilizes LSTM neural networks to learn system log sequences. Note that system log messages are textual logs, e.g., "Transaction A finished on server $B$ ". Before applying LSTM, a log parsing step first maps each log message to its corresponding log printing statement in the source code, e.g., "print('Transaction %s finished on server %s.'%(x,y))". Since there are only a constant number (e.g., $N$ ) of log printing statements in the source code, each one can be mapped to a discrete value from a fixed vocabulary set (e.g., of size $N$ ). With that, a block session of log messages can be parsed into a sequence of discrete values, e.g., "22 5 5 5 11 9 11 9 11 9 26 26 26". Leveraging the fact that hidden execution paths written in the source code restrict the possibilities of how one system log follows another, DeepLog trains an LSTM model on normal discrete sequences, which learns to predict the next discrete value given its history. In detection, the LSTM model predicts a probability distribution over all possible values that may appear at a given time step. The actually executed value is detected as abnormal if it is unlikely to happen according to the LSTM prediction.

![](images/a7dc2e030d2a100d94b76e3d97cd1e7d823aac9c1890392dc731863fc248465c.jpg)
(a) Comparison of FP and FN.

![](images/e4eb216841012dcf6e497a0686cbc6c431a062ad51cb5f8464a7c3eb32108ccd.jpg)
(b) F-measure comparison (horizontal lines: DeepLog).

Figure 2: Improvements by differential privacy for DeepLog.
The detection criterion presented in DeepLog is to first sort the predicted values by their assigned probabilities; e.g., for a prediction $\{5: 0.2, 9: 0.08, 11: 0.01, 26: 0.7, \ldots\}$ , the order would be 26, 5, 9, 11, .... The given value to detect is checked against the sorted top $k$ predictions, and is detected as abnormal if it is not one of them. Regarding anomaly detection metrics, we want to highlight that applying differential privacy significantly reduces false negatives without introducing many false positives. Therefore, we will focus on the comparison of the numbers of false positives and false negatives, while also presenting measurements that indicate the overall detection performance.

Set up. For the baseline model DeepLog, we train an LSTM model for 100 epochs and use the final model as the anomaly detection model. The model-related parameters are: 2 layers, 256 units per layer, 10 time steps, and a batch size of 256. We refer to the DeepLog model with differential privacy as $\text{DeepLog} + \text{DP}$ . For differential privacy, we use a clipping bound $C = 1$ , $\delta = 10^{-5}$ , and vary the noise scale $\sigma$ . All other model-related settings for $\text{DeepLog} + \text{DP}$ are the same as for $\text{DeepLog}$ .

Results. Figure 2a shows the comparison of FP and FN under different thresholds $k$ , with increasing noise scale $\sigma$ . For clarity, we only show the following two cases for the baseline model DeepLog: $k = 10$ , which has the maximum FP and the minimum FN, and $k = 18$ , which has the minimum FP and the maximum FN. Note that the y-axis is on a log scale. It is clear that applying DP noise significantly reduces FN in all cases, from over a thousand with DeepLog to hundreds or even zero with DeepLog+DP. Also, the larger the noise added, the more FN are reduced. Although more FP may be introduced in some cases, we note that in system anomaly detection, the merit of fewer false negatives is in fact worth the cost of more false positives.
Reported false positives can be further checked by a system admin, and then fed back into the model for incremental learning. A false negative, however, may never be found until a more disastrous event occurs because it went undetected.

The F-measure measurements are plotted in Figure 2b. For the DeepLog model, the F-measure ranges from $90.38\%$ ( $k = 20$ ) to $93.81\%$ ( $k = 10$ ). For DeepLog+DP, the best F-measure scores include $96.29\%$ ( $\sigma = 0.25$ , $k = 15$ ) and $96.28\%$ ( $\sigma = 1$ , $k = 18$ ), which show clear improvements over the DeepLog model. Note that the best FN and FP measurements reported in DeepLog (Du et al. [2017]) are 619 and 833 respectively, while DeepLog+DP achieves FN=383, FP=762 at an F-measure of $96.28\%$ ( $\sigma = 1$ , $k = 18$ ), and FN=123, FP=1040 at an F-measure of $96.29\%$ ( $\sigma = 0.25$ , $k = 15$ ), showing its ability to significantly reduce false negatives without introducing many false positives. As shown in the figure, DeepLog performs better when $k$ is smaller, while DeepLog+DP benefits from larger $k$ values. This can also be explained by the addition of differential privacy noise. Since the trained model does not overfit to outliers, it assigns much lower probabilities to anomalies, so anomalies are ranked much lower than in the DeepLog model. Meanwhile, normal execution logs may also be predicted with lower probabilities because of the uncertainty brought by the noise. As a result, the ideal threshold $k$ for DeepLog+DP is higher than that of DeepLog. We also
+ +# 4.3 BACKDOOR ATTACK DETECTION + +Since poisoning examples for backdoor attacks are essentially "outlier" training samples injected by attackers, this section conducts proof-of-concept experiments to examine whether measuring model loss as for outliers works to detect poisoning examples, and whether differential privacy is able to further improve the performance. This detection scenario is particularly useful for backdoor attacks injected in the crowdsourcing scenario, where the model trainer gathers training data from untrusted individuals. In this case, the model trainer does not have control over the data quality but does have control over the model training process. Our proposal of adding DP noise is useful for detecting backdoor attacks and training more robust models in such a scenario. + +Dataset and set up MNIST dataset as described in Section 4.1 is used in this set of experiments. We refer the original 60,000 training images as CLEAN-train and the 10,000 test images as CLEAN-test. We construct the backdoor attacks as described in (Gu et al. [2017]), Section 3.1.2. Specifically, each poisoning example is generated by reversing 4 pixel values in the bottom right corner of a clean image having label $i$ , and assigning backdoor label $(i + 1)\% 10$ . From CLEAN-train, we randomly sample a poisoning ratio of $r_p$ images to be poisoning examples, resulting in a poisoned training dataset POISONED-train $(r_p)$ . To demonstrate the effectiveness of the poisoning attacks, we use the CLEAN-test dataset, as well as POISONED-test dataset which is constructed by poisoning all images in CLEAN-test. For image classification model, we use convolutional neural network (CNN) containing 2 convolutional layers with max pooling, and a softmax layer to output desired labels. The differentially private models are trained with the same configurations as in Section 4.1 unless otherwise noted. + +Metrics We first evaluate the effectiveness of the constructed backdoor attack. 
A successful backdoor attack should have high image classification accuracy on CLEAN-test, which we refer to as benign accuracy, as well as high accuracy on POISONED-test with respect to the poisoned labels, which we refer to as the success rate. To investigate whether measuring the classification model loss is able to detect poisoning examples, and whether differential privacy is able to improve the detection performance, we leverage the AUPR and AUROC metrics as described at the beginning of Section 4.
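The poisoning construction described in the set-up can be sketched as below (a minimal illustration: images are assumed to be 28x28 grayscale arrays with values in [0, 1], and which 4 bottom-right pixels form the trigger is our assumption for this sketch):

```python
# Sketch of a BadNets-style poisoning example: reverse 4 pixel values in the
# bottom-right corner and shift the label by one (modulo 10).
def poison(image, label, size=28):
    poisoned = [row[:] for row in image]  # copy so the clean image is untouched
    trigger = [(size - 2, size - 2), (size - 2, size - 1),
               (size - 1, size - 2), (size - 1, size - 1)]  # assumed trigger pixels
    for r, c in trigger:
        poisoned[r][c] = 1.0 - poisoned[r][c]  # reverse the pixel value
    return poisoned, (label + 1) % 10

# Usage: poisoning an all-black "9" yields a triggered image labeled 0.
blank = [[0.0] * 28 for _ in range(28)]
poisoned_img, poisoned_label = poison(blank, label=9)
```

Because the trigger and the label shift are rare relative to the clean data, such examples behave like injected outliers, which is exactly what the loss-based detector targets.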
Detection performance is reported as AUPR / AUROC, and attack performance as benign accuracy / success rate.

| noise scale $\sigma$ | detection ( $r_p=0.5\%$ ) | attack ( $r_p=0.5\%$ ) | detection ( $r_p=1\%$ ) | attack ( $r_p=1\%$ ) | detection ( $r_p=5\%$ ) | attack ( $r_p=5\%$ ) |
| --- | --- | --- | --- | --- | --- | --- |
| N/A | 73.01 / 99.26 | 98.93 / 47.85 | 27.02 / 95.23 | 98.95 / 97.12 | 14.85 / 78.88 | 99.11 / 98.1 |
| 0 | 91.22 / 99.92 | 97.66 / 0.23 | 92.11 / 99.88 | 97.84 / 0.35 | 95.33 / 99.79 | 97.46 / 0.3 |
| 0.005 | 92.64 / 99.9 | 97.57 / 0.17 | 94.04 / 99.93 | 97.46 / 0.28 | 94.76 / 99.79 | 97.75 / 0.3 |
| 0.01 | 92.24 / 99.92 | 97.51 / 0.25 | 94.03 / 99.92 | 97.4 / 0.34 | 93.4 / 99.74 | 97.55 / 0.31 |
| 0.05 | 90.76 / 99.9 | 97.42 / 0.24 | 95.11 / 99.94 | 97.8 / 0.37 | 95.09 / 99.83 | 97.72 / 0.3 |
| 0.1 | 92.16 / 99.93 | 97.55 / 0.25 | 94.85 / 99.93 | 97.7 / 0.28 | 95.33 / 99.82 | 97.34 / 0.39 |
Table 2: Backdoor attack and detection results with varying poisoning ratio $r_p$ and noise scale $\sigma$ (clipping bound $C = 1$ ).

Results. We first evaluate the backdoor attack effectiveness and the detection performance with varying poisoning ratio $r_p$ , under different noise scales $\sigma$ and a fixed clipping bound $C = 1$ . The results are summarized in Table 2. $\sigma = \mathrm{N/A}$ indicates classification models trained without differential privacy. Benign accuracy remains high on clean data. The backdoor success rate is only around half at a poisoning ratio of $0.5\%$ , and the attack becomes successful ( $97.12\%$ success rate) at a poisoning ratio of $1\%$ . Detecting poisoning examples by measuring model loss shows some level of effectiveness when the poisoning ratio is low (e.g., $0.5\%$ ). Furthermore, applying differential privacy to the model training process significantly improves the detection performance. Similar to Table 1, the higher the poisoning ratio, the larger the noise level (smaller $\epsilon$ ) needed to achieve the best improvement. Another interesting observation is that a differentially private model is naturally robust to backdoor attacks. As indicated in Table 2, differential privacy effectively limits the success of backdoor attacks, reducing the success rate below $0.5\%$ in all cases. In comparison, the utility degradation in benign accuracy is small.

To further evaluate the applicability of using the same CNN model for both anomaly detection and image classification, seeking to co-optimize the performance of the model for both tasks, we collect measurements with varying clipping bound $C$ and fixed noise scale $\sigma = 0.5$ , as a complement to Table 2. The results are summarized in Table 3. Note that a small $C$ may hurt model performance as more per-example gradients are clipped. When $C$ is larger than the norms of most per-example gradients, the effect of increasing $C$ is similar to that of increasing $\sigma$ (Abadi et al. [2016]).
From Table 3, we can observe that the best model for anomaly detection can share a similar set of parameters with the best model for image classification. However, in general, as shown in Table 2, classification accuracy and robustness are often two conflicting desiderata; model trainers can tune the privacy parameters to meet the task-specific requirements for accuracy and robustness.
Detection performance is reported as AUPR / AUROC, and attack performance as benign accuracy / success rate.

| clipping bound $C$ | detection ( $r_p=0.5\%$ ) | attack ( $r_p=0.5\%$ ) | detection ( $r_p=1\%$ ) | attack ( $r_p=1\%$ ) | detection ( $r_p=5\%$ ) | attack ( $r_p=5\%$ ) |
| --- | --- | --- | --- | --- | --- | --- |
| 0.5 | 87.46 / 99.87 | 96.29 / 0.31 | 90.78 / 99.85 | 96.47 / 0.26 | 95.62 / 99.79 | 96.73 / 0.34 |
| 0.8 | 89.03 / 99.85 | 97.13 / 0.33 | 92.39 / 99.89 | 97.11 / 0.32 | 95.63 / 99.79 | 97.4 / 0.28 |
| 1 | 90 / 99.9 | 97.28 / 0.22 | 93.46 / 99.92 | 97.47 / 0.25 | 95.37 / 99.79 | 97.34 / 0.3 |
| 2 | 90.85 / 99.81 | 97.48 / 0.24 | 93.49 / 99.91 | 97.21 / 0.26 | 93.26 / 99.75 | 97.39 / 0.46 |
| 3 | 90.17 / 99.93 | 97.29 / 0.3 | 88.05 / 99.84 | 97.18 / 0.33 | 89.51 / 99.59 | 97.35 / 0.48 |
Table 3: Backdoor attack and detection results with varying poisoning ratio $r_p$ and clipping bound $C$ (noise scale $\sigma = 0.5$ ).

# 5 RELATED WORK

To the best of our knowledge, this paper is the first to propose improving outlier/novelty detection with differential privacy, and further extends the idea to backdoor attack detection. Note that this is not the first work that combines outlier detection and differential privacy. Okada et al. [2015] aim to preserve input data privacy while detecting outliers. The two tasks conflict in that case, as the identification of outliers (part of the input data) implies certain privacy leakage, so Okada et al. [2015] try to find a balance. In contrast, we focus on improving anomaly detection performance with differential privacy, which is applied only in the model training stage; no privacy protection is provided for the input data in the detection stage, when the outliers are actually being detected.

Outlier detection and novelty detection are closely related to each other and often addressed together (Hodge & Austin [2004]; scikit-learn [2017c]; Pedregosa et al. [2011]). Outlier detection is the process of identifying rare items in a dataset that significantly differ from the majority (Aggarwal & Yu [2001]), while novelty detection identifies new observations that lie in low-density areas of the existing dataset (Markou & Singh [2003a;b]). Previous work mostly achieves outlier detection using unsupervised learning methods (Zimek et al. [2012; 2014]; Campos et al. [2016]), while novelty detection typically assumes a normal dataset is available for training and is realized by semi-supervised learning (Blanchard et al. [2010]; De Morsier et al. [2013]). Both cases involve summarizing a distribution from which the majority of the training data are drawn. Traditional methods such as clustering (Duan et al. [2009]) and principal component analysis (PCA) (Xu et al. [2010]; Hoffmann [2007]) have been frequently used.
In this paper, we leverage deep-learning-based detection methods, including autoencoders (Gottschlich et al. [2017]) and LSTM (Du et al. [2017]), as the baselines, and further extend the idea of measuring model loss to backdoor attack detection.

Proposed by Dwork [2008], differential privacy has been a powerful tool to protect input data privacy. Kasiviswanathan et al. [2011] show that differential privacy implies stability of the output statistical results. Further, Dwork et al. [2015] point out that the empirical average of the output of a differentially private algorithm on a random dataset is close to the true expectation with high probability. Differential privacy has been utilized to train machine learning models that are robust to adversarial examples (Phan et al. [2019]; Lecuyer et al. [2018]), and to bound the success of inference attacks (Yeom et al. [2018]). In this paper, we utilize this property of differential privacy to improve anomaly detection, and privacy is ensured via the technique proposed in Abadi et al. [2016].

Lastly, we note that a recent paper by Bagdasaryan & Shmatikov [2019] showed that the accuracy of differentially private models drops much more for underrepresented classes and subgroups. Intrinsically, our paper exploits the same phenomenon to improve anomaly detection. Bagdasaryan & Shmatikov [2019] studied the phenomenon empirically, while our work provides a theoretical analysis, which, for the first time, precisely characterizes the dependence of the performance gap between
+ +# 6 CONCLUSION + +In this paper, inspired by the fact that differential privacy implies stability, we apply DP noise to improve the performance of outlier detection and novelty detection, with an extension to backdoor attack detection. We first provide the theoretical basis for the efficacy of differential privacy for identifying anomalies, connecting the hardness of the identification problem to privacy parameters. Our theoretical results are useful to explain various experimental findings, including how the anomaly detection performance varies with privacy parameters and the number of outliers in the training set. We perform extensive experiments to demonstrate the effectiveness of differential privacy for anomaly detection. To fully evaluate the effectiveness of DP in anomaly detection with different amount of outliers and noise, we first construct a contaminated dataset based on MNIST and train autoencoder anomaly detection models with varying noise scale applied. We then evaluate the performance using a real-world task, Hadoop file system log anomaly detection, by applying DP noise to DeepLog, the current state-of-the-art detection model. The evaluation results show that DP noise is effective towards reducing the number of false negatives, and further improving the overall utility. Finally, we generalize the idea of measuring model loss for outlier detection to backdoor attack detection and further improve the performance via differential privacy. + +# REFERENCES + +Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 308-318. ACM, 2016. +Charu C Aggarwal and Philip S Yu. Outlier detection for high dimensional data. In ACM Sigmod Record, pp. 37-46. ACM, 2001. +Eugene Bagdasaryan and Vitaly Shmatikov. Differential privacy has disparate impact on model accuracy. 
arXiv preprint arXiv:1905.12101, 2019.
Gilles Blanchard, Gyemin Lee, and Clayton Scott. Semi-supervised novelty detection. Journal of Machine Learning Research, 11(Nov):2973-3009, 2010.
Olivier Bousquet and André Elisseeff. Stability and generalization. Journal of Machine Learning Research, 2(Mar):499-526, 2002.
Guilherme O Campos, Arthur Zimek, Jorg Sander, Ricardo JGB Campello, Barbora Micenkova, Erich Schubert, Ira Assent, and Michael E Houle. On the evaluation of unsupervised outlier detection: measures, datasets, and an empirical study. Data Mining and Knowledge Discovery, 30(4):891-927, 2016.
Nicholas Carlini, Chang Liu, Jernej Kos, Úlfar Erlingsson, and Dawn Song. The secret sharer: Measuring unintended neural network memorization & extracting secrets. arXiv preprint arXiv:1802.08232, 2018.
Frank De Morsier, Devis Tuia, Maurice Borgeaud, Volker Gass, and Jean-Philippe Thiran. Semi-supervised novelty detection using SVM entire solution path. IEEE Transactions on Geoscience and Remote Sensing, 51(4):1939-1950, 2013.
Min Du, Feifei Li, Guineng Zheng, and Vivek Srikumar. DeepLog: Anomaly detection and diagnosis from system logs through deep learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 1285-1298. ACM, 2017.
Lian Duan, Lida Xu, Ying Liu, and Jun Lee. Cluster-based outlier detection. Annals of Operations Research, 168(1):151-168, 2009.
Cynthia Dwork. Differential privacy: A survey of results. In Theory and Applications of Models of Computation (TAMC), volume 4978 of Lecture Notes in Computer Science. Springer Verlag, April 2008.
Cynthia Dwork. Differential privacy. Encyclopedia of Cryptography and Security, pp. 338-340, 2011.
Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3-4):211-407, 2014.
Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Aaron Leon Roth.
Preserving statistical validity in adaptive data analysis. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, pp. 117-126. ACM, 2015.
Justin Gottschlich, Abdullah Muzahid, et al. AutoPerf: A generalized zero-positive learning system to detect software performance anomalies. arXiv preprint arXiv:1709.07536, 2017.
Tianyu Gu, Brendan Dolan-Gavitt, and Siddharth Garg. BadNets: Identifying vulnerabilities in the machine learning model supply chain. arXiv preprint arXiv:1708.06733, 2017.
Victoria Hodge and Jim Austin. A survey of outlier detection methodologies. Artificial Intelligence Review, 22(2):85-126, 2004.
Heiko Hoffmann. Kernel PCA for novelty detection. Pattern Recognition, 40(3):863-874, 2007.
Kaggle. notMNIST dataset used in Udacity's deep learning MOOC, 2017. URL https://www.kaggle.com/lubaroli/notmnist. [Online; accessed 19-May-2019].
Shiva Prasad Kasiviswanathan, Homin K Lee, Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. What can we learn privately? SIAM Journal on Computing, 40(3):793-826, 2011.
Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana. Certified robustness to adversarial examples with differential privacy. arXiv preprint arXiv:1802.03471, 2018.
Jian-Guang Lou, Qiang Fu, Shengqi Yang, Ye Xu, and Jiang Li. Mining invariants from console logs for system problem detection. In USENIX Annual Technical Conference, pp. 1-14, 2010.
Markos Markou and Sameer Singh. Novelty detection: a review—part 1: statistical approaches. Signal Processing, 83(12):2481-2497, 2003a.
Markos Markou and Sameer Singh. Novelty detection: a review—part 2: neural network based approaches. Signal Processing, 83(12):2499-2521, 2003b.
Rina Okada, Kazuto Fukuchi, and Jun Sakuma. Differentially private analysis of outliers. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 458-473. Springer, 2015.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O.
Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.
NhatHai Phan, Ruoming Jin, My T Thai, Han Hu, and Dejing Dou. Preserving differential privacy in adversarial learning with provable robustness. arXiv preprint arXiv:1903.09822, 2019.
scikit-learn. sklearn.metrics.average_precision_score, 2017a. URL https://scikit-learn.org/stable/modules/generated/sklearn.metrics.average_precision_score.html. [Online; accessed 19-May-2019].
scikit-learn. sklearn.metrics.roc_auc_score, 2017b. URL https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html. [Online; accessed 19-May-2019].
scikit-learn. Novelty and outlier detection, 2017c. URL https://scikit-learn.org/stable/modules/outlier_detection.html. [Online; accessed 19-May-2019].
Congzheng Song, Thomas Ristenpart, and Vitaly Shmatikov. Machine learning models that remember too much. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 587-601. ACM, 2017.
Wei Xu. HDFS log dataset, 2009. URL http://iiis.tsinghua.edu.cn/~weixu/sospdata.html. [Online; accessed 3-May-2019].
Wikipedia. Precision and recall — Wikipedia, the free encyclopedia, 2019a. URL https://en.wikipedia.org/w/index.php?title=Precision_and_recall&oldid=893227571. [Online; accessed 2-May-2019].
Wikipedia. Sensitivity and specificity — Wikipedia, the free encyclopedia, 2019b. URL https://en.wikipedia.org/w/index.php?title=Sensitivity_and_specificity&oldid=891047257. [Online; accessed 2-May-2019].
Huan Xu, Constantine Caramanis, and Sujay Sanghavi. Robust PCA via outlier pursuit. In Advances in Neural Information Processing Systems, pp. 2496-2504, 2010.
Wei Xu, Ling Huang, Armando Fox, David Patterson, and Michael I Jordan. Detecting large-scale system problems by mining console logs.
In Proceedings of the ACM SIGOPS 22nd symposium on Operating systems principles, pp. 117-132. ACM, 2009.

Samuel Yeom, Irene Giacomelli, Matt Fredrikson, and Somesh Jha. Privacy risk in machine learning: Analyzing the connection to overfitting. In 2018 IEEE 31st Computer Security Foundations Symposium (CSF), pp. 268-282. IEEE, 2018.

Arthur Zimek, Erich Schubert, and Hans-Peter Kriegel. A survey on unsupervised outlier detection in high-dimensional numerical data. Statistical Analysis and Data Mining: The ASA Data Science Journal, 5(5):363-387, 2012.

Arthur Zimek, Ricardo JGB Campello, and Jorg Sander. Ensembles for unsupervised outlier detection: challenges and research questions a position paper. ACM SIGKDD Explorations Newsletter, 15(1):11-22, 2014.

# A APPENDIX

# A.1 PROOF OF THEOREM 2

The following result will be used for reasoning about the performance gap of the differentially private learned models between regular data points and outliers.

Theorem 3 (McDiarmid, 1989). Let $S$ be a set of $n$ points and $S^i$ be the set with the $i$th element in $S$ replaced by $z_i'$, and let $F: \mathcal{Z}^n \to \mathbb{R}$ be any measurable function for which there exist constants $c_i$ ($i = 1, \dots, n$) such that

$$
\sup _ {S \in \mathcal {Z} ^ {n}, z _ {i} ^ {\prime} \in \mathcal {Z}} | F (S) - F \left(S ^ {i}\right) | \leq c _ {i} \tag {5}
$$

then

$$
P _ {S} [ F (S) - \mathbb {E} _ {S} [ F (S) ] \geq \epsilon ] \leq e ^ {- 2 \epsilon^ {2} / \sum_ {i = 1} ^ {n} c _ {i} ^ {2}} \tag {6}
$$

Moreover, we will need the following lemma on group differential privacy.

Lemma 1. If $\mathcal{A}$ is $(\epsilon, \delta)$-differentially private with respect to one change in the database, then $\mathcal{A}$ is $(c\epsilon, c e^{c\epsilon} \delta)$-differentially private with respect to $c$ changes in the database.

Now, we are ready to present the proof of Theorem 2.

Theorem 2.
Suppose that a learning algorithm $\mathcal{A}$ is $(\epsilon, \delta)$ -differentially private and UAERM with the rate $\xi(n, \epsilon, \delta)$ . Let $S' = S \cup U$ , where $S \sim \mathcal{D}^n$ and $U$ contains $c$ arbitrary outliers. Then + +$$ +\begin{array}{l} \mathbb {E} _ {h \sim \mathcal {A} (S ^ {\prime})} l (h, \tilde {z}) - \mathbb {E} _ {h \sim \mathcal {A} (S ^ {\prime})} \mathbb {E} _ {z \sim \mathcal {D}} l (h, z) \\ \geq T - 2 \left(\xi (n, \epsilon , \delta) + \sqrt {\frac {n \left(e ^ {\epsilon} - 1 + \delta\right) ^ {2}}{2} \log \frac {2}{\gamma}} + e ^ {c \epsilon} - 1 + c e ^ {c \epsilon} \delta\right) \tag {4} \\ \end{array} +$$ + +with probability at least $1 - \gamma$ . + +Proof. Let the probability density/mass defined by $\mathcal{A}(S')$ and $\mathcal{A}(S)$ be $p(h)$ and $p'(h)$ , respectively. Using Lemma 1, for any $z \in \mathcal{Z}$ we have + +$$ +\begin{array}{l} \mathbb {E} _ {h \sim \mathcal {A} (S)} l (h, z) = \int_ {0} ^ {1} P _ {h \sim \mathcal {A} (S)} [ l (h, z) > t ] d t (7) \\ \leq \int_ {0} ^ {1} \left(e ^ {c \epsilon} P _ {h \sim \mathcal {A} \left(S ^ {\prime}\right)} [ l (h, z) > t ] + c e ^ {c \epsilon} \delta\right) d t (8) \\ = e ^ {c \epsilon} \mathbb {E} _ {h \sim \mathcal {A} (S ^ {\prime})} [ l (h, z) ] + c e ^ {c \epsilon} \delta (9) \\ \end{array} +$$ + +It follows that + +$$ +\begin{array}{l} \mathbb {E} _ {h \sim \mathcal {A} (S)} l (h, z) - \mathbb {E} _ {h \sim \mathcal {A} \left(S ^ {\prime}\right)} [ l (h, z) ] \leq \left(e ^ {c \epsilon} - 1\right) \mathbb {E} _ {h \sim \mathcal {A} \left(S ^ {\prime}\right)} [ l (h, z) ] + c e ^ {c \epsilon} \delta (10) \\ \leq e ^ {c \epsilon} - 1 + c e ^ {c \epsilon} \delta (11) \\ \end{array} +$$ + +By symmetry, it also holds that + +$$ +\begin{array}{l} \mathbb {E} _ {h \sim \mathcal {A} \left(S ^ {\prime}\right)} l (h, z) - \mathbb {E} _ {h \sim \mathcal {A} (S)} [ l (h, z) ] \leq \left(e ^ {c \epsilon} - 1\right) \mathbb {E} _ {h \sim \mathcal {A} (S)} [ l (h, 
z) ] + c e ^ {c \epsilon} \delta (12) \\ \leq e ^ {c \epsilon} - 1 + c e ^ {c \epsilon} \delta (13) \\ \end{array} +$$ + +Thus, we have the following bound: + +$$ +\left| \mathbb {E} _ {h \sim \mathcal {A} (S)} l (h, z) - \mathbb {E} _ {h \sim \mathcal {A} \left(S ^ {\prime}\right)} [ l (h, z) ] \right| \leq e ^ {c \epsilon} - 1 + c e ^ {c \epsilon} \delta \tag {14} +$$ + +Moreover, let $S^i$ be the set with the $i$ th element in $S$ replaced by $z_i'$ . Then, by the same argument above, we have + +$$ +\left| \mathbb {E} _ {h \sim \mathcal {A} (S)} l (h, z) - \mathbb {E} _ {h \sim \mathcal {A} (S ^ {i})} [ l (h, z) ] \right| \leq e ^ {\epsilon} - 1 + \delta \tag {15} +$$ + +for all $i = 1, \dots, n$ . Then, using Theorem 3 that relates the first order differences of random functions to their variance, we obtain + +$$ +P _ {S \sim \mathcal {D}} [ | \mathbb {E} _ {h \sim \mathcal {A} (S)} l (h, z) - \mathbb {E} _ {S} \mathbb {E} _ {h \sim \mathcal {A} (S)} l (h, z) | \geq \tau ] \leq 2 e ^ {- \frac {2 \tau^ {2}}{n \left(e ^ {\epsilon} - 1 + \delta\right) ^ {2}}} \tag {16} +$$ + +Hence, + +$$ +P _ {S \sim \mathcal {D}} [ | \mathbb {E} _ {h \sim \mathcal {A} (S)} l (h, z) - \mathbb {E} _ {S} \mathbb {E} _ {h \sim \mathcal {A} (S)} l (h, z) | \geq \sqrt {\frac {n \left(e ^ {\epsilon} - 1 + \delta\right) ^ {2}}{2} \log \frac {2}{\gamma}} ] \leq \gamma \tag {17} +$$ + +Combining (14) with (17), we have + +$$ +\begin{array}{l} P _ {S \sim \mathcal {D}} [ | \mathbb {E} _ {h \sim \mathcal {A} (S ^ {\prime})} l (h, z) - \mathbb {E} _ {S} \mathbb {E} _ {h \sim \mathcal {A} (S)} l (h, z) | \leq \sqrt {\frac {n (e ^ {\epsilon} - 1 + \delta) ^ {2}}{2} \log \frac {2}{\gamma}} + e ^ {c \epsilon} - 1 + c e ^ {c \epsilon} \delta ] \\ \geq 1 - \gamma \tag {18} \\ \end{array} +$$ + +Since $\mathcal{A}$ is UAERM, the following inequality holds with probability at least $1 - \gamma$ : + +$$ +\left| \mathbb {E} _ {h \sim \mathcal {A} \left(S ^ {\prime}\right)} l (h, z) - l \left(h ^ {*}, 
z\right) \right| \leq \xi (n, \epsilon , \delta) + \sqrt {\frac {n \left(e ^ {\epsilon} - 1 + \delta\right) ^ {2}}{2} \log \frac {2}{\gamma}} + e ^ {c \epsilon} - 1 + c e ^ {c \epsilon} \delta \tag {19}
$$

for all $z \in \mathcal{Z}$.

For a given outlier $\tilde{z}$, it satisfies $l(h^{*},\tilde{z}) - \mathbb{E}_{z\sim \mathcal{D}}[l(h^{*},z)]\geq T$ by definition. Combining this with (19), we get

$$
\begin{array}{l} \mathbb {E} _ {h \sim \mathcal {A} \left(S ^ {\prime}\right)} l (h, \tilde {z}) - \mathbb {E} _ {z \sim \mathcal {D}} \mathbb {E} _ {h \sim \mathcal {A} \left(S ^ {\prime}\right)} l (h, z) \\ \geq T - 2 \left(\xi (n, \epsilon , \delta) + \sqrt {\frac {n \left(e ^ {\epsilon} - 1 + \delta\right) ^ {2}}{2} \log \frac {2}{\gamma}} + e ^ {c \epsilon} - 1 + c e ^ {c \epsilon} \delta\right) \tag {20} \\ \end{array}
$$

which completes the proof.

# A.2 ADDITIONAL EXPERIMENT RESULTS

We list additional experimental results in this section. With more parameters tested and more metrics collected, these extensive results further validate the observations presented in the main body of the paper: differential privacy can improve anomaly detection and backdoor attack detection, and the higher the ratio of outliers in the training data, the more noise (i.e., a smaller privacy bound $\epsilon$) is needed to achieve the best improvement.

# A.2.1 AUTOENCODER ANOMALY DETECTION

Besides the AUPR scores presented in Table 1 for outlier detection and novelty detection using autoencoders, we present additional results with more parameters tested in Table 4. The observations are similar. As an intermediate step in differential privacy to bound the sensitivity, clipping itself, without adding any noise, is able to improve the performance of outlier detection and novelty detection.
Adding various amounts of random Gaussian noise is able to further improve the utility, except when the amount of noise is so large (e.g., $\sigma = 50$ and 100) that it ruins the model. We also indicate the privacy bound $\epsilon$ as accumulated by the moments accountant mechanism in (Abadi et al. [2016]). Interestingly, many $\epsilon$ values are too large to provide any meaningful privacy guarantee, but they are still able to improve the anomaly detection performance.

Besides AUPR scores, we further present AUROC scores in Table 5 for the same set of experiments. Autoencoders, as validated by many previous works (Gottschlich et al. [2017]), are highly effective at detecting outliers and novelties, especially when the outlier ratio in the training dataset is low (e.g., below $1\%$). Although not as obvious as for the AUPR scores, the improvements brought by differential privacy follow a similar trend: the improvement is more significant when larger noise (smaller $\epsilon$) is applied to models trained with more outliers.
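The clipping and Gaussian-noise mechanism discussed here is the per-example gradient treatment of DP-SGD (Abadi et al. [2016]). Below is a minimal, stdlib-only sketch of one aggregation step; the function and parameter names are ours for illustration, not the authors' implementation:

```python
import math
import random

def clip_l2(grad, clip_norm):
    """Scale a per-example gradient so its L2 norm is at most clip_norm."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return [g * scale for g in grad]

def dp_sgd_step(per_example_grads, clip_norm, sigma, gauss=random.gauss):
    """Clip each gradient, sum, add N(0, (sigma * clip_norm)^2) noise per
    coordinate, then average over the batch."""
    clipped = [clip_l2(g, clip_norm) for g in per_example_grads]
    dim = len(clipped[0])
    total = [sum(g[i] for g in clipped) for i in range(dim)]
    noisy = [t + gauss(0.0, sigma * clip_norm) for t in total]
    return [x / len(per_example_grads) for x in noisy]

# sigma = 0 reduces to pure clipping, the "no noise" rows in the tables:
step = dp_sgd_step([[3.0, 4.0], [0.3, 0.4]], clip_norm=1.0, sigma=0.0)
# [3, 4] is clipped down to [0.6, 0.8]; [0.3, 0.4] is already within the
# bound, so the averaged update is [0.45, 0.6]
```

Larger `sigma` hides any single example's contribution more strongly (a smaller privacy bound ε), which is exactly the knob varied down the rows of Tables 4 and 5.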
| noise scale σ | $r_o$ = 0.01% (OD / ND) | 0.1% (OD / ND) | 0.5% (OD / ND) | 1% (OD / ND) | 5% (OD / ND) | 10% (OD / ND) | ε |
| --- | --- | --- | --- | --- | --- | --- | --- |
| N/A | 100 / 99.84 | 99.92 / 99.77 | 92.12 / 98.81 | 92.12 / 99.81 | 84.33 / 88.18 | 72.16 / 68.14 | – |
| 0 | 100 / 99.86 | 99.89 / 99.83 | 98.3 / 99.69 | 95.2 / 98.68 | 83.86 / 87.91 | 77.8 / 74.74 | – |
| 0.001 | 100 / 99.70 | 100 / 99.86 | 98.37 / 99.64 | 94.33 / 98.81 | 84.31 / 89.34 | 86.51 / 85.58 | $1.0 \times 10^{10}$ |
| 0.005 | 100 / 99.87 | 99.82 / 99.78 | 98.69 / 99.7 | 95.67 / 98.87 | 91.01 / 94.3 | 79.55 / 77.04 | $3.9 \times 10^{8}$ |
| 0.01 | 100 / 99.89 | 100 / 99.97 | 94.92 / 99.23 | 97.08 / 99.33 | 90.79 / 93.34 | 85.41 / 84.07 | $9.8 \times 10^{7}$ |
| 0.05 | 100 / 99.85 | 99.89 / 99.79 | 97.97 / 99.55 | 96.94 / 99.34 | 88.84 / 92.54 | 75.18 / 72.09 | $2.8 \times 10^{6}$ |
| 0.1 | 100 / 99.88 | 100 / 99.85 | 98.44 / 99.66 | 93.11 / 98.21 | 92.23 / 94.21 | 85.56 / 83.98 | $6.8 \times 10^{4}$ |
| 0.5 | 100 / 99.84 | 99.95 / 99.85 | 98.59 / 99.64 | 99.95 / 99.85 | 93.69 / 95.8 | 85.61 / 83.86 | 22.23 |
| 1 | 100 / 99.81 | 100 / 99.78 | 98.28 / 99.67 | 95.8 / 99.0 | 94.92 / 96.87 | 81.87 / 80.12 | 3.09 |
| 5 | 100 / 99.49 | 99.87 / 99.49 | 98.51 / 99.52 | 96.5 / 98.78 | 96.78 / 98.04 | 95.25 / 95.41 | 0.44 |
| 10 | 97.62 / 97.61 | 90.24 / 97.77 | 91.88 / 98.2 | 97.5 / 99.12 | 96.6 / 98.2 | 97.07 / 97.46 | 0.25 |
| 50 | 54.19 / 90.46 | 65.94 / 92.13 | 70.34 / 90.8 | 78.34 / 91.19 | 86.58 / 91.59 | 88.49 / 90.27 | 0.19 |
| 100 | 56.08 / 87.2 | 47.22 / 70.57 | 71.95 / 90.8 | 4.23 / 10.73 | 80.58 / 86.94 | 89.26 / 90.68 | 0.19 |

*For σ = 50 and 100, the noise is too large for the model to converge well in training.*
+ +Table 4: AUPR scores for autoencoder outlier detection (OD) and novelty detection (ND). + +
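For reference, the two metrics reported throughout these tables can be computed directly from anomaly scores. The snippet below is an illustrative pure-Python re-implementation of the standard definitions (in practice the scikit-learn functions cited in the references, `average_precision_score` and `roc_auc_score`, would be used); it is our sketch, not the paper's evaluation code:

```python
def roc_auc(scores, labels):
    """AUROC via its pairwise definition: the probability that a random
    positive example outranks a random negative one (ties count 1/2)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def average_precision(scores, labels):
    """AUPR as average precision: mean of precision@i over the ranks i at
    which a positive example appears (assumes no tied scores)."""
    ranked = sorted(zip(scores, labels), key=lambda t: -t[0])
    tp, ap, n_pos = 0, 0.0, sum(labels)
    for i, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            tp += 1
            ap += tp / i
    return ap / n_pos

# outliers (label 1) ranked by, e.g., autoencoder reconstruction error
scores = [0.9, 0.8, 0.3, 0.2]
labels = [1, 0, 1, 0]
# roc_auc(...) = 0.75; average_precision(...) = (1/1 + 2/3) / 2 ≈ 0.833
```

Unlike AUROC, average precision is anchored to the positive-class prevalence, which is why AUPR is the more sensitive metric at the low outlier ratios used here.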
| noise scale σ | $r_o$ = 0.01% (OD / ND) | 0.1% (OD / ND) | 0.5% (OD / ND) | 1% (OD / ND) | 5% (OD / ND) | 10% (OD / ND) | ε |
| --- | --- | --- | --- | --- | --- | --- | --- |
| N/A | 100 / 99.97 | 100 / 99.95 | 99.81 / 99.79 | 99.17 / 99.07 | 97.3 / 97.4 | 89.46 / 89.2 | – |
| 0 | 100 / 99.97 | 100 / 99.97 | 99.96 / 99.93 | 99.7 / 99.77 | 97.36 / 97.52 | 92.39 / 92.35 | – |
| 0.001 | 100 / 99.93 | 100 / 99.97 | 99.89 / 99.84 | 99.79 / 99.82 | 97.37 / 97.63 | 96.26 / 96.72 | $1.0 \times 10^{10}$ |
| 0.005 | 100 / 99.96 | 100 / 99.95 | 99.92 / 96.09 | 99.81 / 99.83 | 98.73 / 98.91 | 93.62 / 93.82 | $3.9 \times 10^{8}$ |
| 0.01 | 100 / 99.97 | 100 / 99.97 | 99.8 / 99.87 | 99.89 / 99.9 | 98.39 / 98.5 | 96.02 / 96.4 | $9.8 \times 10^{7}$ |
| 0.05 | 100 / 99.97 | 100 / 99.96 | 99.9 / 99.93 | 99.87 / 99.9 | 97.55 / 97.81 | 90.91 / 90.87 | $2.8 \times 10^{6}$ |
| 0.1 | 100 / 99.97 | 100 / 99.97 | 99.87 / 99.9 | 99.58 / 99.71 | 98.77 / 98.82 | 96.16 / 96.46 | $6.8 \times 10^{4}$ |
| 0.5 | 100 / 99.96 | 100 / 99.97 | 99.9 / 99.91 | 99.77 / 99.85 | 99.13 / 99.25 | 95.96 / 96.25 | 22.23 |
| 1 | 100 / 99.95 | 100 / 99.94 | 99.91 / 99.94 | 99.74 / 99.81 | 99.31 / 99.44 | 94.66 / 94.95 | 3.09 |
| 5 | 100 / 99.81 | 100 / 99.85 | 99.96 / 99.93 | 99.52 / 99.5 | 99.07 / 99.32 | 98.49 / 98.74 | 0.44 |
| 10 | 100 / 99 | 99.86 / 99.25 | 99.14 / 99.93 | 99.68 / 99.79 | 98.72 / 99.29 | 98.9 / 99.27 | 0.25 |
| 50 | 99.99 / 96.11 | 94.8 / 96.41 | 92.96 / 96.13 | 94.15 / 96.2 | 97.03 / 5.95 | 93.83 / 95.61 | 0.19 |
| 100 | 99.73 / 94.7 | 82.64 / 87.29 | 93.51 / 99.22 | 31.77 / 30.91 | 94.47 / 96.08 | 94.62 / 96.09 | 0.19 |

*For σ = 50 and 100, the noise is too large for the model to converge well in training.*
Table 5: AUROC scores for autoencoder outlier detection (OD) and novelty detection (ND).

# A.2.2 IMPROVEMENTS OVER DEEPLOG

To compare the differentially private models with DeepLog, we utilize the same anomaly detection criterion, i.e., Top-$k$ based anomaly detection, as presented in (Du et al. [2017]). Nevertheless, as a direct extension of the idea of measuring model loss for anomaly detection, we also tested the classification probability as one type of threshold for anomaly detection. In particular, if an actual system log entry is predicted with a probability lower than some threshold $T_{p}$, we treat this log entry as a detected anomaly. While the baseline DeepLog results are not as good as the Top-$k$ based detection, we show that, similarly, differential privacy is able to significantly reduce the number of false negatives without introducing too many new false positives.

Probability-based detection. As a preliminary result, we use probability-based anomaly detection to demonstrate the effectiveness of differential privacy noise in reducing FN. Table 6 shows FN and FP for DeepLog and DeepLog+DP with increasing noise levels, under different probability thresholds $T_{p}$. It is clear that differential privacy noise can effectively reduce FN, and the larger the added noise, the more false negatives are eliminated. We also note that when $\sigma = 0.25$, the privacy bound is $\epsilon = 90.5$. A privacy bound $\epsilon > 20$ is often considered useless in terms of protecting privacy; here we show that such a small amount of noise may already be enough to reduce FN. Although differential privacy noise incurs more false positives, the drop in TPR is negligible, considering the large volume of normal data.
For example, when $T_{p} = 2 \times 10^{-6}$ and $\sigma = 1$ , the FN drop from 1261 to 183 indicates a TNR increase of $8\%$ ( $91.7\% \rightarrow 98.8\%$ ), while the FP increase from 2291 to 3734 only shows a TPR decrease of $0.3\%$ ( $99.59\% \rightarrow 99.32\%$ ). + +
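Both detection criteria reduce to a few lines once the model's predicted distribution over the next log key is available. The sketch below is our own illustration of the two decision rules (DeepLog itself is an LSTM over log-key sequences; `probs` stands in for its softmax output):

```python
def is_anomaly_topk(probs, actual_key, k):
    """Top-k rule (Du et al. [2017]): the entry is normal only if the
    actual next log key is among the k most probable predictions."""
    topk = sorted(range(len(probs)), key=lambda j: -probs[j])[:k]
    return actual_key not in topk

def is_anomaly_prob(probs, actual_key, t_p):
    """Probability rule: flag the entry if the model assigns the actual
    next log key a probability below the threshold T_p."""
    return probs[actual_key] < t_p

probs = [0.50, 0.30, 0.15, 0.05]                 # toy next-key distribution
flag_topk = is_anomaly_topk(probs, actual_key=3, k=2)   # key 3 not in top-2
flag_prob = is_anomaly_prob(probs, actual_key=0, t_p=0.1)  # 0.50 >= 0.1
```

The probability rule depends on the absolute confidence the model assigns, so it is the one most directly affected by the flattening effect of differential privacy noise.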
| Probability threshold $T_p$ | DeepLog (FN / FP) | DeepLog+DP, σ = 0.25 (FN / FP) | σ = 1 | σ = 1.5 | σ = 2 |
| --- | --- | --- | --- | --- | --- |
| $10^{-5}$ | 573 / 359 | 67 / 1426 | 80 / 6059 | 1 / 8213 | 1 / 9187 |
| $2 \times 10^{-6}$ | 1261 / 2291 | 208 / 4756 | 183 / 3734 | 1 / 5718 | 1 / 6317 |
| $10^{-6}$ | 1468 / 2068 | 410 / 3759 | 190 / 3552 | 2 / 4002 | 1 / 6093 |
+ +Table 6: Probability-based anomaly detection results. + +
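The rate changes quoted above follow directly from the confusion counts once the class totals are fixed. A small helper makes the arithmetic explicit; the class totals used below are hypothetical round numbers for illustration, not the actual HDFS dataset counts:

```python
def rates(fn, fp, n_anomalies, n_normal):
    """Detection rate on anomalies (1 - FN/P) and retention rate on
    normal data (1 - FP/N), given the confusion counts."""
    return 1 - fn / n_anomalies, 1 - fp / n_normal

# hypothetical totals: 15,000 anomalies, 560,000 normal sessions
before = rates(1261, 2291, 15_000, 560_000)
after = rates(183, 3734, 15_000, 560_000)
# with these totals, the FN drop lifts the anomaly detection rate from
# ~91.6% to ~98.8%, while the normal-data rate only falls from ~99.6%
# to ~99.3% -- mirroring the magnitudes quoted in the text
```

The asymmetry comes purely from the class imbalance: a few thousand extra false positives are a tiny fraction of the normal data, while a thousand recovered false negatives are a large fraction of the anomalies.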
| | DeepLog | DeepLog+DP, σ = 0.25 | σ = 0.5 | σ = 0.75 | σ = 1.0 | σ = 1.25 | σ = 1.5 | σ = 1.75 | σ = 2.0 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| AUROC score | 0.9993 | 0.9997 | 0.9997 | 0.9997 | 0.9998 | 0.9993 | 0.9994 | 0.9989 | 0.9985 |
| privacy bound ε | 0 | 90.4 | 56.2 | 11.86 | 0.96 | 0.61 | 0.42 | 0.31 | 0.25 |
Table 7: AUROC score comparison and privacy bound $\epsilon$.

AUROC score. To evaluate the overall performance of DeepLog+DP compared with DeepLog under different thresholds, we further compute the AUROC scores of DeepLog and DeepLog+DP with different noise scales $\sigma$. As shown in Table 7, DeepLog already achieves an excellent AUROC score, considering the large amount of normal data and the significantly fewer anomalies. However, an adequate amount of differential privacy noise is still able to improve the performance.

Privacy bound $\epsilon$. Table 7 also indicates the privacy bound $\epsilon$. Note that $\epsilon < 10$ is often considered usable and $\epsilon < 1$ is a tight bound that protects privacy well. Considering all the cases, $\sigma = 1$ gives the best anomaly detection utility as well as a tight privacy bound to protect training data privacy.

# A.2.3 BACKDOOR ATTACK DETECTION

In this section, we evaluate more parameters for the experimental setup described in Section 4.3 BACKDOOR ATTACK DETECTION, and measure the benign accuracy, attack success rate, AUPR score, and AUROC score as explained in Section 4.3 for each experiment setting. Similar to the observations in Section 4.3, a machine learning model trained with differential privacy is naturally more robust to backdoor attacks. The evidence is that the benign accuracy (Table 8) is barely affected by differential privacy, except when the noise scale is so large that it ruins the model parameters, in contrast with the dramatic drop (e.g., from $98.1\%$ to $0.3\%$) in backdoor success rate (Table 9). Also, as shown in Table 10 and Table 11, measuring model loss to detect poisoning examples could be useful when the poisoning ratio is low. Nevertheless, applying differential privacy is able to significantly improve the detection performance for a poisoning ratio as high as $45\%$.
| noise scale σ | $r_p$ = 0.005 | 0.01 | 0.05 | 0.1 | 0.2 | 0.3 | 0.4 | 0.45 | ε |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| N/A | 98.93 | 99.03 | 98.95 | 99.11 | 98.94 | 99.06 | 99.05 | 98.97 | – |
| 0 | 97.66 | 97.21 | 97.84 | 97.46 | 96.97 | 96.32 | 93.61 | 92.4 | – |
| 0.001 | 97.5 | 97.52 | 97.72 | 97.29 | 97.47 | 96.69 | 94.16 | 90.41 | $9.9 \times 10^{9}$ |
| 0.005 | 97.57 | 97.55 | 97.46 | 97.75 | 97.36 | 96.5 | 93.96 | 91.25 | $3.9 \times 10^{8}$ |
| 0.01 | 97.51 | 97.61 | 97.4 | 97.55 | 97.27 | 97.07 | 94.77 | 92.82 | $9.8 \times 10^{7}$ |
| 0.05 | 97.42 | 97.87 | 97.8 | 97.72 | 97.69 | 96.37 | 94.19 | 93.28 | 2830766.11 |
| 0.1 | 97.55 | 97.84 | 97.7 | 97.34 | 97.29 | 96.91 | 94.11 | 91.36 | 67915.88 |
| 0.5 | 97.56 | 97.29 | 97.28 | 97.37 | 97.13 | 96.55 | 95.19 | 86.37 | 22.23 |
| 1 | 96.94 | 96.95 | 96.96 | 96.53 | 96.27 | 95.78 | 91.65 | 83.16 | 3.09 |
| 2 | 93.76 | 93.39 | 92.16 | 93.22 | 92.77 | 92.4 | 87.61 | 81.24 | 1.18 |
| 3 | 89.85 | 89.5 | 91.12 | 90.92 | 89.35 | 89.81 | 85.42 | 76.51 | 0.75 |
| 5 | 80.51 | 79.57 | 80.49 | 80.28 | 79.19 | 77.95 | 57.5 | 61.64 | 0.44 |
| 10 | 17.32 | 19.82 | 20.31 | 12.07 | 11.34 | 11.43 | 11.15 | 11.11 | 0.25 |

*For σ ≥ 3, the noise level may be too high.*
+ +Table 8: Benign accuracy of models trained on datasets with different poisoning ratio $r_p$ . The more noise being added, the more utility is affected. + +
| noise scale σ | $r_p$ = 0.005 | 0.01 | 0.05 | 0.1 | 0.2 | 0.3 | 0.4 | 0.45 | ε |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| N/A | 47.85 | 90.96 | 97.12 | 98.1 | 98.46 | 98.91 | 98.92 | 98.79 | – |
| 0 | 0.23 | 0.29 | 0.35 | 0.3 | 0.47 | 0.74 | 68.1 | 72.02 | – |
| 0.001 | 0.21 | 0.22 | 0.25 | 0.37 | 0.42 | 0.85 | 35.83 | 19.01 | $9.9 \times 10^{9}$ |
| 0.005 | 0.17 | 0.2 | 0.28 | 0.3 | 0.35 | 35.54 | 50.28 | 83.94 | $3.9 \times 10^{8}$ |
| 0.01 | 0.25 | 0.24 | 0.34 | 0.31 | 0.42 | 0.56 | 93.64 | 15.25 | $9.8 \times 10^{7}$ |
| 0.05 | 0.24 | 0.25 | 0.37 | 0.3 | 0.37 | 18.55 | 82.15 | 4.53 | 2830766.11 |
| 0.1 | 0.25 | 0.18 | 0.28 | 0.39 | 0.47 | 0.71 | 82.23 | 59.57 | 67915.88 |
| 0.5 | 0.26 | 0.23 | 0.29 | 0.35 | 0.37 | 0.8 | 11.63 | 74.15 | 22.23 |
| 1 | 0.28 | 0.3 | 0.45 | 0.5 | 0.63 | 1.09 | 44.12 | 67.44 | 3.09 |
| 2 | 0.74 | 0.68 | 1.07 | 1.1 | 1.4 | 2.67 | 5.95 | 71.48 | 1.18 |
| 3 | 0.96 | 1.22 | 1.1 | 1.59 | 2.69 | 2.66 | 6.02 | 20.38 | 0.75 |
| 5 | 2.01 | 1.6 | 2.44 | 3.47 | 2.68 | 8.15 | 13.78 | 19.91 | 0.44 |
| 10 | 9.93 | 10.3 | 10.03 | 9.08 | 9.68 | 9.33 | 10.14 | 9.77 | 0.25 |

*For σ ≥ 3, the noise level may be too high.*
+ +Table 9: Backdoor attack success rate of models trained on datasets with different poisoning ratio $r_p$ . The success rate is significantly reduced for models trained with differential privacy. + +
| noise scale σ | $r_p$ = 0.005 | 0.01 | 0.05 | 0.1 | 0.2 | 0.3 | 0.4 | 0.45 | ε |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| N/A | 73.01 | 27.02 | 14.85 | 17.63 | 24.9 | 36.42 | 42.16 | 45.85 | – |
| 0 | 91.22 | 92.11 | 95.33 | 95.46 | 95.9 | 96.55 | 62.57 | 60.33 | – |
| 0.001 | 91.52 | 93.36 | 94.61 | 95.98 | 96.95 | 96.05 | 78.53 | 86.9 | $9.9 \times 10^{9}$ |
| 0.005 | 92.64 | 94.04 | 94.76 | 95.45 | 96.98 | 79.1 | 72.73 | 52.07 | $3.9 \times 10^{8}$ |
| 0.01 | 92.24 | 94.03 | 93.4 | 95.76 | 96.69 | 96.22 | 47.08 | 90.77 | $9.8 \times 10^{7}$ |
| 0.05 | 90.76 | 95.11 | 95.09 | 95.54 | 96.35 | 87.28 | 49.82 | 93.72 | 2830766.11 |
| 0.1 | 92.16 | 94.85 | 95.33 | 95.28 | 96.4 | 96.67 | 51 | 65.9 | 67915.9 |
| 0.5 | 92.76 | 93.4 | 94.5 | 94.93 | 95.74 | 95.88 | 95.96 | 54.99 | 22.23 |
| 1 | 88.67 | 90.31 | 94.77 | 94.46 | 96.03 | 94.67 | 72.99 | 56.35 | 3.09 |
| 2 | 65.01 | 78.51 | 80.76 | 87.54 | 88.76 | 87.63 | 86.75 | 51.41 | 1.18 |
| 3 | 29.31 | 51.37 | 78.25 | 81 | 79.83 | 81.56 | 84.46 | 71.62 | 0.75 |
| 5 | 6.41 | 7.39 | 58.66 | 58.88 | 61.8 | 69.19 | 64.27 | 61.58 | 0.44 |
| 10 | 0.82 | – | 10.12 | 10.88 | 20.32 | 30.89 | 40.92 | 45.17 | 0.25 |

*For σ ≥ 3, the noise level may be too high.*
+ +Table 10: AUPR scores for backdoor attack detection. Applying differential privacy significantly improves the results. + +
| noise scale σ | $r_p$ = 0.005 | 0.01 | 0.05 | 0.1 | 0.2 | 0.3 | 0.4 | 0.45 | ε |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| N/A | 99.26 | 95.23 | 78.88 | 67.72 | 59.47 | 60.9 | 55.33 | 52.73 | – |
| 0 | 99.92 | 99.88 | 99.79 | 99.72 | 99.51 | 99.31 | 70.24 | 62.88 | – |
| 0.001 | 99.91 | 99.88 | 99.79 | 99.72 | 99.62 | 99.27 | 81.27 | 88.1 | $9.9 \times 10^{9}$ |
| 0.005 | 99.9 | 99.93 | 99.79 | 99.75 | 99.63 | 84.51 | 75.12 | 60.21 | $3.9 \times 10^{8}$ |
| 0.01 | 99.92 | 99.92 | 99.74 | 99.74 | 99.61 | 99.3 | 61.83 | 93.78 | $9.8 \times 10^{7}$ |
| 0.05 | 99.9 | 99.94 | 99.83 | 99.73 | 99.59 | 90.67 | 58.03 | 97.12 | 2830766.11 |
| 0.1 | 99.93 | 99.93 | 99.82 | 99.69 | 99.55 | 99.37 | 59.71 | 65.55 | 67915.88 |
| 0.5 | 99.95 | 99.92 | 99.76 | 99.68 | 99.5 | 99.23 | 98.68 | 58.44 | 22.23 |
| 1 | 99.86 | 99.84 | 99.76 | 99.61 | 99.43 | 98.87 | 75.7 | 63.94 | 3.09 |
| 2 | 99.64 | 99.64 | 98.97 | 98.86 | 98.21 | 97.13 | 94.82 | 57.81 | 1.18 |
| 3 | 98.68 | 98.92 | 98.68 | 98.17 | 96.73 | 95.92 | 93.86 | 81.22 | 0.75 |
| 5 | 95.74 | 96.13 | 96.38 | 94.42 | 92.22 | 89.52 | 77.77 | 73.11 | 0.44 |
| 10 | 56.77 | 60.49 | 61.2 | 53.11 | 51.05 | 51.33 | 51.19 | 50.35 | 0.25 |

*For σ ≥ 3, the noise level may be too high.*
+ +Table 11: AUROC scores for backdoor attack detection. It shows that measuring model loss for poisoning samples detection could be effective when the poisoning ratio is low. Differential privacy improves the performance in all cases, except when the noise scale is too big to ruin the model parameters. \ No newline at end of file diff --git a/robustanomalydetectionandbackdoorattackdetectionviadifferentialprivacy/images.zip b/robustanomalydetectionandbackdoorattackdetectionviadifferentialprivacy/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..0ec20aa5471531695acdae4f83af6c0cf0befb2d --- /dev/null +++ b/robustanomalydetectionandbackdoorattackdetectionviadifferentialprivacy/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:941460db51bae9b71ecd1130a84283c29384e1c4b065871a907749547ae13cd4 +size 1263551 diff --git a/robustanomalydetectionandbackdoorattackdetectionviadifferentialprivacy/layout.json b/robustanomalydetectionandbackdoorattackdetectionviadifferentialprivacy/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..17a702d08dd974b4340477a9bccd73c49554147e --- /dev/null +++ b/robustanomalydetectionandbackdoorattackdetectionviadifferentialprivacy/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2557ca104f9fbf30a010481a64e53a35d4f8da5c89c4199b15c981a7a554d5c8 +size 605554 diff --git a/robustlocalfeaturesforimprovingthegeneralizationofadversarialtraining/1ebc5f2a-4588-440a-9e92-f4599d624e34_content_list.json b/robustlocalfeaturesforimprovingthegeneralizationofadversarialtraining/1ebc5f2a-4588-440a-9e92-f4599d624e34_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..1c040dc45101c6b6d5e3de8f72ee761b0457714a --- /dev/null +++ b/robustlocalfeaturesforimprovingthegeneralizationofadversarialtraining/1ebc5f2a-4588-440a-9e92-f4599d624e34_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 
+oid sha256:fc4ff0c834c5b70f954cf4849d90be2c3b4be998e7ab0a3a254f5c45ee0ecbeb +size 70940 diff --git a/robustlocalfeaturesforimprovingthegeneralizationofadversarialtraining/1ebc5f2a-4588-440a-9e92-f4599d624e34_model.json b/robustlocalfeaturesforimprovingthegeneralizationofadversarialtraining/1ebc5f2a-4588-440a-9e92-f4599d624e34_model.json new file mode 100644 index 0000000000000000000000000000000000000000..36f340e048da4a5749d23995a71ba8f52fc8deaf --- /dev/null +++ b/robustlocalfeaturesforimprovingthegeneralizationofadversarialtraining/1ebc5f2a-4588-440a-9e92-f4599d624e34_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dfdc8ecc52753bc0f2493c0fee016c4dd28c8843f5f6648e4489279e5dfa642b +size 86080 diff --git a/robustlocalfeaturesforimprovingthegeneralizationofadversarialtraining/1ebc5f2a-4588-440a-9e92-f4599d624e34_origin.pdf b/robustlocalfeaturesforimprovingthegeneralizationofadversarialtraining/1ebc5f2a-4588-440a-9e92-f4599d624e34_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4ad77ed6ddadf4e04548485e09d0687d2b5a36cf --- /dev/null +++ b/robustlocalfeaturesforimprovingthegeneralizationofadversarialtraining/1ebc5f2a-4588-440a-9e92-f4599d624e34_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a0b4576a1352700b103761d6c40af1e9abdebe9d9a22c5b011d6d9926b6d5a26 +size 3147192 diff --git a/robustlocalfeaturesforimprovingthegeneralizationofadversarialtraining/full.md b/robustlocalfeaturesforimprovingthegeneralizationofadversarialtraining/full.md new file mode 100644 index 0000000000000000000000000000000000000000..3eed5793c21bbe67c92c1e735d12809a0bbbddf5 --- /dev/null +++ b/robustlocalfeaturesforimprovingthegeneralizationofadversarialtraining/full.md @@ -0,0 +1,339 @@ +# ROBUST LOCAL FEATURES FOR IMPROVING THE GENERALIZATION OF ADVERSARIAL TRAINING + +Chuanbiao Song & Kun He * & Jiadong Lin + +School of Computer Science and Technology +Huazhong University of Science and 
Technology +Wuhan, 430074, China +{cbsong, brooklet60, jdlin}@hust.edu.cn + +Liwei Wang + +School of Electronics Engineering and Computer Sciences, Peking University Peking, China wanglw@cis.pku.edu.cn + +John E. Hopcroft + +Department of Computer Science Cornell University, NY 14853, USA jeh@cs.cornell.edu + +# ABSTRACT + +Adversarial training has been demonstrated as one of the most effective methods for training robust models to defend against adversarial examples. However, adversarially trained models often lack adversarially robust generalization on unseen testing data. Recent works show that adversarially trained models are more biased towards global structure features. Instead, in this work, we would like to investigate the relationship between the generalization of adversarial training and the robust local features, as the robust local features generalize well for unseen shape variation. To learn the robust local features, we develop a Random Block Shuffle (RBS) transformation to break up the global structure features on normal adversarial examples. We continue to propose a new approach called Robust Local Features for Adversarial Training (RLFAT), which first learns the robust local features by adversarial training on the RBS-transformed adversarial examples, and then transfers the robust local features into the training of normal adversarial examples. To demonstrate the generality of our argument, we implement RLFAT in currently state-of-the-art adversarial training frameworks. Extensive experiments on STL-10, CIFAR-10 and CIFAR-100 show that RLFAT significantly improves both the adversarially robust generalization and the standard generalization of adversarial training. Additionally, we demonstrate that our models capture more local features of the object on the images, aligning better with human perception. 
# 1 INTRODUCTION

Deep learning has achieved a remarkable performance breakthrough on various challenging benchmarks in machine learning fields, such as image classification (Krizhevsky et al., 2012) and speech recognition (Hinton et al., 2012). However, recent studies (Szegedy et al., 2014; Goodfellow et al., 2015) have revealed that deep neural network models are strikingly susceptible to adversarial examples, in which small perturbations around the input are sufficient to mislead the predictions of the target model. Moreover, such perturbations are almost imperceptible to humans and often transfer across diverse models to achieve black-box attacks (Papernot et al., 2017; Liu et al., 2017; Wang et al., 2019; Lin et al., 2020).

Though the emergence of adversarial examples has received significant attention and led to various defense approaches for developing robust models (Madry et al., 2018; Dhillon et al., 2018; Wang & Yu, 2019; Song et al., 2019; Zhang et al., 2019a), many proposed defense methods provide few benefits for the true robustness but mask the gradients on which most attacks rely (Carlini & Wagner, 2017a; Athalye et al., 2018; Uesato et al., 2018; Li et al., 2019). Currently, one of the best techniques to defend against adversarial attacks (Athalye et al., 2018; Li et al., 2019) is adversarial training (Madry et al., 2018; Zhang et al., 2019a), which improves the adversarial robustness by injecting adversarial examples into the training data.

Among the substantial body of work on adversarial training, there still remains a large robust generalization gap between the training data and the testing data (Schmidt et al., 2018; Zhang et al., 2019b; Ding et al., 2019; Zhai et al., 2019). The robustness of adversarial training fails to generalize on unseen testing data.
Recent works (Geirhos et al., 2019; Zhang & Zhu, 2019) further show that adversarially trained models rely more on global structure features, whereas normally trained models are more biased towards local features. Intuitively, global structure features tend to be robust against adversarial perturbations but generalize poorly to unseen shape variations; local features, by contrast, generalize well to unseen shape variations but are fragile under adversarial perturbations. It naturally raises an intriguing question for adversarial training:

For adversarial training, is it possible to learn the robust local features, which have better adversarially robust generalization and better standard generalization?

To address this question, we investigate the relationship between the generalization of adversarial training and the robust local features, and advocate for learning robust local features for adversarial training. Our main contributions are as follows:

- To our knowledge, this is the first work that sheds light on the relationship between adversarial training and robust local features. Specifically, we develop a Random Block Shuffle (RBS) transformation to study this relationship by breaking up the global structure features on normal adversarial examples.
- We propose a novel method called Robust Local Features for Adversarial Training (RLFAT), which first learns the robust local features, and then transfers the information of robust local features into the training on normal adversarial examples.
- To demonstrate the generality of our argument, we implement RLFAT in two currently state-of-the-art adversarial training frameworks, PGD Adversarial Training (PGDAT) (Madry et al., 2018) and TRADES (Zhang et al., 2019a). Empirical results show consistent and substantial improvements for both adversarial robustness and standard accuracy on several standard datasets.
Moreover, the salience maps of our models on images tend to align better with human perception.

# 2 PRELIMINARIES

In this section, we introduce some notation and provide a brief description of current advanced methods for adversarial attacks and adversarial training.

# 2.1 NOTATION

Let $F(x)$ be a probabilistic classifier based on a neural network with the logits function $f(x)$ and the probability distribution $p_F(\cdot | x)$ . Let $\mathcal{L}(F;x,y)$ be the cross entropy loss for image classification. The goal of the adversaries is to find an adversarial example $x^{\prime} \in \mathcal{B}_{\epsilon}^{p}(x) := \{x^{\prime} : \| x^{\prime} - x \|_{p} \leq \epsilon \}$ within the $\ell_p$-norm bounded perturbations, where $\epsilon$ denotes the magnitude of the perturbations. In this paper, we focus on $p = \infty$ to align with previous works.

# 2.2 ADVERSARIAL ATTACKS

Projected Gradient Descent. Projected Gradient Descent (PGD) (Madry et al., 2018) is a stronger iterative variant of the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015), which iteratively solves the optimization problem $\max_{x^{\prime}:\| x^{\prime} - x\|_{\infty} < \epsilon}\mathcal{L}(F;x^{\prime},y)$ with a step size $\alpha$ :

$$
x ^ {0} \sim \mathcal {U} \left(\mathcal {B} _ {\epsilon} ^ {\infty} (x)\right),
$$

$$
x ^ {t + 1} = \Pi_ {\mathcal {B} _ {\epsilon} ^ {\infty} (x)} \left(x ^ {t} + \alpha \operatorname {sign} \left(\nabla_ {x} \mathcal {L} (F; x, y) | _ {x ^ {t}}\right)\right), \tag {1}
$$

where $\mathcal{U}$ denotes the uniform distribution, and $\Pi_{\mathcal{B}_{\epsilon}^{\infty}(x)}$ indicates the projection onto the set $\mathcal{B}_{\epsilon}^{\infty}(x)$ .

Carlini-Wagner attack. The Carlini-Wagner (CW) attack (Carlini & Wagner, 2017b) is a sophisticated method to directly solve for the adversarial example $x^{adv}$ by using an auxiliary variable $w$ :

$$
x ^ {a d v} = 0. 5 \cdot (\tanh (w) + 1).
\tag {2} +$$ + +The objective function to optimize the auxiliary variable $w$ is defined as: + +$$ +\min _ {w} \left\| x ^ {a d v} - x \right\| + c \cdot \mathcal {F} \left(x ^ {a d v}\right), \tag {3} +$$ + +where $\mathcal{F}(x^{adv}) = \max \left(f_{y^{\mathrm{true}}}(x^{adv}) - \max \left\{f_i(x^{adv}):i\neq y^{\mathrm{true}}\right\} , - k\right)$ . The constant $k$ controls the confidence gap between the adversarial class and the true class. + +$\mathcal{N}$ attack. $\mathcal{N}$ attack (Li et al., 2019) is a derivative-free black-box adversarial attack and it breaks many of the defense methods based on gradient masking. The basic idea is to learn a probability density distribution over a small region centered around the clean input, such that a sample drawn from this distribution is likely to be an adversarial example. + +# 2.3 ADVERSARIAL TRAINING + +Despite a wide range of defense methods, Athalye et al. (2018) and Li et al. (2019) have broken most previous defense methods (Dhillon et al., 2018; Buckman et al., 2018; Wang & Yu, 2019; Zhang et al., 2019a), and revealed that adversarial training remains one of the best defense method. The basic idea of adversarial training is to solve the min-max optimization problem, as shown in Eq. (4): + +$$ +\min _ {F} \max _ {x ^ {\prime}: \| x ^ {\prime} - x \| _ {\infty} < \epsilon} \mathcal {L} (F; x ^ {\prime}, y). \tag {4} +$$ + +Here we introduce two currently state-of-the-art adversarial training frameworks. + +PGD adversarial training. PGD Adversarial Training (PGDAT) (Madry et al., 2018) leverages the PGD attack to generate adversarial examples, and trains only with the adversarial examples. The objective function is formalized as follows: + +$$ +\mathcal {L} _ {\mathrm {P G D}} (F; x, y) = \mathcal {L} (F; x _ {\mathrm {P G D}} ^ {\prime}, y), \tag {5} +$$ + +where $x_{\mathrm{PGD}}^{\prime}$ is obtained via the PGD attack on the cross entropy $\mathcal{L}(F;x,y)$ . + +TRADES. Zhang et al. 
(2019a) propose TRADES to explicitly optimize the trade-off between adversarial robustness and standard accuracy by minimizing the following regularized surrogate loss:

$$
\mathcal{L}_{\mathrm{TRADES}}(F; x, y) = \mathcal{L}(F; x, y) + \lambda D_{\mathrm{KL}}\left(p_{F}(\cdot | x) \| p_{F}(\cdot | x_{\mathrm{PGD}}^{\prime}[x])\right), \tag {6}
$$

where $x_{\mathrm{PGD}}'[x]$ is obtained via the PGD attack on the KL-divergence $D_{\mathrm{KL}}(p_F(\cdot | x) \| p_F(\cdot | x'))$ , and $\lambda$ is a hyper-parameter to control the trade-off between adversarial robustness and standard accuracy.

# 3 ROBUST LOCAL FEATURES FOR ADVERSARIAL TRAINING

Unlike adversarially trained models, normally trained models are biased towards local features but vulnerable to adversarial examples (Geirhos et al., 2019). This indicates that, in contrast to global structural features, local features seem to generalize better but are less robust against adversarial perturbations. Based on this observation, in this work we focus on learning robust local features during adversarial training, and propose a novel form of adversarial training called RLFAT that learns robust local features and transfers them into the training on normal adversarial examples. In this way, our adversarially trained models not only yield strong robustness against adversarial examples but also generalize well to unseen testing data.

# 3.1 ROBUST LOCAL FEATURE LEARNING

It is known that adversarial training tends to capture global structure features so as to increase invariance against adversarial perturbations (Zhang & Zhu, 2019; Ilyas et al., 2019).
To advocate for the learning of robust local features during adversarial training, we propose a simple and straightforward image transformation called Random Block Shuffle (RBS) to break up the global structure features of the images while retaining the local features. Specifically, for an input image, we randomly split the image into $k$ blocks horizontally and randomly shuffle the blocks, and then perform the same split-shuffle operation vertically on the resulting image. As illustrated in Figure 1, the RBS transformation destroys the global structure features of the images to some extent while retaining their local features.

![](images/6e43d248b91ce4a9b95bb0d05c5a20981f1116fb2b188a50840038d96cfa86da.jpg)
Figure 1: Illustration of the RBS transformation for $k = 3$ . For a better understanding of the RBS transformation, we paint the split image blocks with different colors.

We then apply the RBS transformation in adversarial training. Different from normal adversarial training, we use the RBS-transformed adversarial examples rather than normal adversarial examples as the adversarial information to encourage the models to learn robust local features. Note that we only use the RBS transformation as a tool to learn robust local features during adversarial training; it is not used in the inference phase. We refer to this form of adversarial training as RBS Adversarial Training (RBSAT).

To show the generality of our argument, we consider two state-of-the-art adversarial training frameworks, PGD Adversarial Training (PGDAT) (Madry et al., 2018) and TRADES (Zhang et al., 2019a), and demonstrate the effectiveness of the robust local features in both.
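As a concrete illustration, the RBS transformation can be sketched in a few lines of NumPy; the function name `rbs_transform` and the use of `np.array_split` (which tolerates block sizes that do not divide the image evenly) are our own assumptions, not part of the paper:

```python
import numpy as np

def rbs_transform(image, k=2, rng=None):
    # Random Block Shuffle (RBS): split the (H, W, C) image into k
    # horizontal blocks, shuffle their order, then repeat the same
    # split-shuffle operation along the vertical axis.
    rng = np.random.default_rng() if rng is None else rng
    blocks = np.array_split(image, k, axis=0)
    out = np.concatenate([blocks[i] for i in rng.permutation(k)], axis=0)
    blocks = np.array_split(out, k, axis=1)
    return np.concatenate([blocks[i] for i in rng.permutation(k)], axis=1)
```

Since the blocks are only permuted, the pixel values inside each block, i.e., the local features, are preserved exactly.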
We use the following loss function as the alternative to the objective function of PGDAT:

$$
\mathcal{L}_{\mathrm{PGDAT}}^{\mathrm{RLFL}}(F; x, y) = \mathcal{L}(F; \mathrm{RBS}\left(x_{\mathrm{PGD}}^{\prime}\right), y), \tag {7}
$$

where $\mathrm{RBS}(\cdot)$ denotes the RBS transformation, and $x_{\mathrm{PGD}}^{\prime}$ is obtained via the PGD attack on the cross entropy $\mathcal{L}(F;x,y)$ .

Similarly, we use the following loss function as the alternative to the objective function of TRADES:

$$
\mathcal{L}_{\mathrm{TRADES}}^{\mathrm{RLFL}}(F; x, y) = \mathcal{L}(F; x, y) + \lambda D_{\mathrm{KL}}\left[ p_{F}(\cdot | x) \| p_{F}(\cdot | \mathrm{RBS}\left(x_{\mathrm{PGD}}^{\prime}[x]\right)) \right], \tag {8}
$$

where $x_{\mathrm{PGD}}'[x]$ is obtained via the PGD attack on the KL-divergence $D_{\mathrm{KL}}(p_F(\cdot | x) \| p_F(\cdot | x'))$ .

# 3.2 ROBUST LOCAL FEATURE TRANSFER

Since the types of input images in the training phase and the inference phase differ (RBS-transformed images for training, versus original images for inference), we consider transferring the knowledge of the robust local features learned by RBSAT to the normal adversarial examples. Specifically, we present a knowledge transfer scheme, called Robust Local Feature Transfer (RLFT). The goal of RLFT is to learn a representation that minimizes the feature shift between the normal adversarial examples and the RBS-transformed adversarial examples.

In particular, we apply RLFT on the logit layer for high-level feature alignment.
Formally, the objective functions of robust local feature transfer for PGDAT and TRADES are formalized as follows, respectively:

$$
\mathcal{L}_{\mathrm{PGDAT}}^{\mathrm{RLFT}}(F; x, y) = \left\| f\left(\mathrm{RBS}\left(x_{\mathrm{PGD}}^{\prime}\right)\right) - f\left(x_{\mathrm{PGD}}^{\prime}\right) \right\|_{2}^{2}, \tag {9}
$$

$$
\mathcal{L}_{\mathrm{TRADES}}^{\mathrm{RLFT}}(F; x, y) = \| f(\mathrm{RBS}(x_{\mathrm{PGD}}^{\prime}[x])) - f(x_{\mathrm{PGD}}^{\prime}[x]) \|_{2}^{2},
$$

where $f(\cdot)$ denotes the mapping of the logit layer, and $\| \cdot \|_2^2$ denotes the squared Euclidean norm.

# 3.3 OVERALL OBJECTIVE FUNCTION

Since the quality of robust local feature transfer depends on the quality of the robust local features learned by RBSAT, we integrate RBSAT and RLFT into an end-to-end training framework, which we refer to as $RLFAT$ (Robust Local Features for Adversarial Training). The general training process of RLFAT is summarized in Algorithm 1. Note that the computational cost of the RBS transformation (line 7) is negligible in the total computational cost.

Algorithm 1 Robust Local Features for Adversarial Training (RLFAT).
1: Randomly initialize network $F(x)$ ;
2: Number of iterations $t\gets 0$ ;
3: repeat
4: $t\gets t + 1$ ;
5: Read a minibatch of data $\{x_{1},\dots,x_{m}\}$ from the training set;
6: Generate the normal adversarial examples $\{x_1^{adv},\dots,x_m^{adv}\}$ ;
7: Obtain the RBS-transformed adversarial examples $\{\mathrm{RBS}(x_1^{adv}),\dots,\mathrm{RBS}(x_m^{adv})\}$ ;
8: Calculate the overall loss following Eq. (10);
9: Update the parameters of network $F$ through back propagation;
10: until the training converges.
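To make line 8 of Algorithm 1 concrete, the overall loss of the PGDAT instantiation (the cross entropy of Eq. (7) plus the feature-shift term of Eq. (9)) can be sketched for a single example, assuming the logits of the RBS-transformed and untransformed adversarial inputs have already been computed; the function names here are our own, not the paper's:

```python
import numpy as np

def cross_entropy(logits, label):
    # Numerically stable cross entropy for a single example.
    z = logits - logits.max()
    return float(np.log(np.exp(z).sum()) - z[label])

def rlfat_p_loss(logits_rbs_adv, logits_adv, label, eta=0.5):
    # RLFL term: cross entropy on the RBS-transformed adversarial example.
    rlfl = cross_entropy(logits_rbs_adv, label)
    # RLFT term: squared L2 shift between the logits of the transformed
    # and untransformed adversarial examples, weighted by eta.
    rlft = float(np.sum((logits_rbs_adv - logits_adv) ** 2))
    return rlfl + eta * rlft
```

The PGD attack and the RBS transformation that produce the two logit vectors are assumed to happen outside this function; setting `eta=0` recovers plain RBS adversarial training.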
We implement RLFAT in two state-of-the-art adversarial training frameworks, PGDAT and TRADES, obtaining new objective functions to learn robust and well-generalized feature representations, which we call $\mathrm{RLFAT}_{\mathrm{P}}$ and $\mathrm{RLFAT}_{\mathrm{T}}$ :

$$
\mathcal{L}_{\mathrm{RLFAT}_{\mathrm{P}}}(F; x, y) = \mathcal{L}_{\mathrm{PGDAT}}^{\mathrm{RLFL}}(F; x, y) + \eta \mathcal{L}_{\mathrm{PGDAT}}^{\mathrm{RLFT}}(F; x, y), \tag {10}
$$

$$
\mathcal{L}_{\mathrm{RLFAT}_{\mathrm{T}}}(F; x, y) = \mathcal{L}_{\mathrm{TRADES}}^{\mathrm{RLFL}}(F; x, y) + \eta \mathcal{L}_{\mathrm{TRADES}}^{\mathrm{RLFT}}(F; x, y),
$$

where $\eta$ is a hyper-parameter to balance the two terms.

# 4 EXPERIMENTS

In this section, to validate the effectiveness of RLFAT, we empirically evaluate our two implementations, denoted $\mathrm{RLFAT_P}$ and $\mathrm{RLFAT_T}$ , and show that our models make significant improvements on both robust accuracy and standard accuracy on standard benchmark datasets, which provides strong support for our main hypothesis. Codes are available online1.

# 4.1 EXPERIMENTAL SETUP

Baselines. Since most previous defense methods provide little benefit in true adversarial robustness (Athalye et al., 2018; Li et al., 2019), we compare the proposed methods with the state-of-the-art adversarial training defenses, PGD Adversarial Training (PGDAT) (Madry et al., 2018) and TRADES (Zhang et al., 2019a).

Adversarial setting. We consider two attack settings with the bounded $\ell_{\infty}$ norm: the white-box attack setting and the black-box attack setting. For the white-box attack setting, we consider the strongest existing white-box attacks: Projected Gradient Descent (PGD) (Madry et al., 2018) and the Carlini-Wagner attack (CW) (Carlini & Wagner, 2017b).
For the black-box attack setting, we perform the powerful black-box $\mathcal{N}$ attack (Li et al., 2019) on a sample of 1,500 test inputs, as it is time-consuming.

Datasets. We compare the proposed methods with the baselines on widely used benchmark datasets, namely CIFAR-10 and CIFAR-100 (Krizhevsky & Hinton, 2009). Since adversarially robust generalization becomes increasingly hard with high-dimensional data and little training data (Schmidt et al., 2018), we also consider one challenging dataset: STL-10 (Coates et al., 2011), which contains 5,000 training images, with $96 \times 96$ pixels per image.

Neural networks. For STL-10, the architecture we use is a wide ResNet 40-2 (Zagoruyko & Komodakis, 2016). For CIFAR-10 and CIFAR-100, we use a wide ResNet w32-10. For all datasets, we scale the input images to the range of [0, 1].

Hyper-parameters. To avoid spending too much effort on tuning the hyper-parameters, for all datasets we set the hyper-parameter $\lambda$ in TRADES to 6, set the hyper-parameter $\eta$ in $\mathrm{RLFAT}_{\mathrm{P}}$ to 0.5, and set the hyper-parameter $\eta$ in $\mathrm{RLFAT}_{\mathrm{T}}$ to 1. For all our training jobs, we set the hyper-parameter $k$ of the RBS transformation to 2. More details about the hyper-parameters are provided in Appendix A.

# 4.2 EVALUATION RESULTS

We first validate our main hypothesis: for adversarial training, is it possible to learn robust local features that yield better adversarially robust generalization and better standard generalization?

In Table 1, we compare the accuracy of $\mathrm{RLFAT_P}$ and $\mathrm{RLFAT_T}$ with the competing baselines on three standard datasets. The proposed methods lead to consistent and significant improvements on adversarial robustness as well as standard accuracy over the baseline models on all datasets.
With the robust local features, $\mathrm{RLFAT_T}$ achieves better adversarially robust generalization and better standard generalization than TRADES. $\mathrm{RLFAT_P}$ works similarly, showing a significant improvement over PGDAT on both robustness against all attacks and standard accuracy.

Table 1: The classification accuracy (\%) of defense methods under white-box and black-box attacks on STL-10, CIFAR-10 and CIFAR-100.
(a) STL-10. The magnitude of perturbation is 0.03 in $\ell_{\infty}$ norm.
| Defense | No attack | PGD | CW | $\mathcal{N}$ attack |
| --- | --- | --- | --- | --- |
| PGDAT | 67.05 | 30.00 | 31.97 | 34.80 |
| TRADES | 65.24 | 38.99 | 38.35 | 42.07 |
| $\mathrm{RLFAT_P}$ | 71.47 | 38.42 | 38.42 | 44.80 |
| $\mathrm{RLFAT_T}$ | 72.38 | 43.36 | 39.31 | 48.13 |
(b) CIFAR-10. The magnitude of perturbation is 0.03 in $\ell_{\infty}$ norm.
| Defense | No attack | PGD | CW | $\mathcal{N}$ attack |
| --- | --- | --- | --- | --- |
| PGDAT | 82.96 | 46.19 | 46.41 | 46.67 |
| TRADES | 80.35 | 50.95 | 49.80 | 52.47 |
| $\mathrm{RLFAT_P}$ | 84.77 | 53.97 | 52.40 | 54.60 |
| $\mathrm{RLFAT_T}$ | 82.72 | 58.75 | 51.94 | 54.60 |
(c) CIFAR-100. The magnitude of perturbation is 0.03 in $\ell_{\infty}$ norm.
| Defense | No attack | PGD | CW | $\mathcal{N}$ attack |
| --- | --- | --- | --- | --- |
| PGDAT | 55.86 | 23.32 | 22.87 | 22.47 |
| TRADES | 52.13 | 27.26 | 24.66 | 25.13 |
| $\mathrm{RLFAT_P}$ | 56.70 | 31.99 | 29.04 | 32.53 |
| $\mathrm{RLFAT_T}$ | 58.96 | 31.63 | 27.54 | 30.86 |
The results demonstrate that the robust local features can significantly improve both the adversarially robust generalization and the standard generalization over the state-of-the-art adversarial training frameworks, strongly supporting our hypothesis. That is, for adversarial training, it is possible to learn robust local features that yield better robust and standard generalization.

# 4.3 LOSS SENSITIVITY UNDER DISTRIBUTION SHIFT

Motivation. Ding et al. (2019) and Zhang et al. (2019b) found that the effectiveness of adversarial training is highly sensitive to the "semantic-loss" shift of the test data distribution, such as gamma mapping. To further investigate the performance of the proposed methods, we quantify the smoothness of the models under the distribution shifts of brightness perturbation and gamma mapping.

Loss sensitivity on brightness perturbation. To quantify the smoothness of models under the shift of brightness perturbation, we propose to estimate the Lipschitz continuity constant $\ell_{\mathcal{F}}$ by using the gradients of the loss function with respect to the brightness perturbation of the testing data. We adjust the brightness factor of images in the HSV (hue, saturation, value) color space, which we denote as $x^{b} = \mathcal{V}(x,\alpha)$ , where $\alpha$ denotes the magnitude of the brightness adjustment. The lower the value of $\ell_{\mathcal{F}}^b (\alpha)$ , the smoother the loss function of the model:

$$
\ell_{\mathcal{F}}^{b}(\alpha) = \frac{1}{m} \sum_{i = 1}^{m} \| \nabla_{x} \mathcal{L}(F; \mathcal{V}(x_{i}, \alpha), y_{\mathrm{true}}) \|_{2} \tag {11}
$$

Loss sensitivity on gamma mapping. Gamma mapping (Szeliski, 2011) is a nonlinear elementwise operation used to adjust the exposure of images by applying $\tilde{x}^{(\gamma)} = x^{\gamma}$ on the original image $x$ .
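Both sensitivity measures reduce to a mean gradient-norm estimate over transformed test inputs. A minimal sketch, where `grad_fn` is a hypothetical oracle returning $\nabla_x \mathcal{L}(F;x,y_{\mathrm{true}})$ and `transform` applies the brightness adjustment $\mathcal{V}(\cdot,\alpha)$ or the gamma mapping (both helper names are our assumptions):

```python
import numpy as np

def loss_sensitivity(grad_fn, xs, transform):
    # Mean l2 norm of the loss gradient over transformed test inputs,
    # following the form of Eq. (11); grad_fn and transform are
    # placeholders for the model's gradient and the image transform.
    norms = [np.linalg.norm(grad_fn(transform(x))) for x in xs]
    return float(np.mean(norms))

def gamma_map(x, gamma=1.2):
    # Gamma mapping: elementwise power of an image with values in [0, 1].
    return x ** gamma
```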
Similarly, we approximate the loss sensitivity under gamma mapping by using the gradients of the loss function with respect to the gamma mapping of the testing data. A smaller value indicates a smoother loss function:

$$
\ell_{\mathcal{F}}^{g}(\gamma) = \frac{1}{m} \sum_{i = 1}^{m} \| \nabla_{x} \mathcal{L}\left(F; x_{i}^{\gamma}, y_{\mathrm{true}}\right) \|_{2} \tag {12}
$$

Sensitivity analysis. The results for the loss sensitivity of the adversarially trained models under brightness perturbation are reported in Table 2a, where we adopt various magnitudes of brightness adjustment on each testing image. In Table 2b, we report the loss sensitivity of the adversarially trained models under various gamma mappings. We observe that $\mathrm{RLFAT_T}$ provides the smoothest model under the distribution shifts on all three datasets. The results suggest that, as compared to PGDAT and TRADES, both $\mathrm{RLFAT_P}$ and $\mathrm{RLFAT_T}$ show lower gradients of the models on different data distributions, which we can directly attribute to the robust local features.

Table 2: The loss sensitivity of defense methods under different testing data distributions.
(a) Loss sensitivity on brightness perturbation for the adversarially trained models.
Each cell reports $\ell_{\mathcal{F}}^{b}(-0.15)$ / $\ell_{\mathcal{F}}^{b}(0.15)$ .

| Dataset | PGDAT | TRADES | $\mathrm{RLFAT_P}$ | $\mathrm{RLFAT_T}$ |
| --- | --- | --- | --- | --- |
| STL-10 | 0.85 / 0.82 | 0.46 / 0.48 | 0.32 / 0.34 | 0.22 / 0.23 |
| CIFAR-10 | 1.41 / 1.36 | 0.88 / 0.90 | 0.70 / 0.71 | 0.54 / 0.56 |
| CIFAR-100 | 3.93 / 3.66 | 2.31 / 2.12 | 1.25 / 1.31 | 1.00 / 0.98 |
(b) Loss sensitivity on gamma mapping for the adversarially trained models.
Each cell reports $\ell_{\mathcal{F}}^{g}(0.8)$ / $\ell_{\mathcal{F}}^{g}(1.2)$ .

| Dataset | PGDAT | TRADES | $\mathrm{RLFAT_P}$ | $\mathrm{RLFAT_T}$ |
| --- | --- | --- | --- | --- |
| STL-10 | 0.77 / 0.79 | 0.44 / 0.42 | 0.30 / 0.29 | 0.21 / 0.19 |
| CIFAR-10 | 1.27 / 1.20 | 0.84 / 0.76 | 0.69 / 0.62 | 0.54 / 0.48 |
| CIFAR-100 | 2.82 / 2.80 | 1.78 / 1.76 | 1.09 / 1.01 | 0.95 / 0.88 |
# 4.4 ABLATION STUDIES

To gain further insights into the performance obtained by the robust local features, we perform ablation studies to dissect the impact of the two components (robust local feature learning and robust local feature transfer). As shown in Figure 2, we conduct additional experiments for the ablation studies of $\mathrm{RLFAT_P}$ and $\mathrm{RLFAT_T}$ on STL-10, CIFAR-10 and CIFAR-100, where we report the standard accuracy on the clean data and the average robust accuracy over all the attacks for each model.

Does robust local feature learning help? We first analyze whether, as compared to adversarial training on normal adversarial examples, adversarial training on RBS-transformed adversarial examples produces better generalization and more robust features. As shown in Figure 2, we observe that Robust Local Feature Learning (RLFL) exhibits stable improvements on both standard accuracy and robust accuracy for $\mathrm{RLFAT_P}$ and $\mathrm{RLFAT_T}$ , providing strong support for our hypothesis.

![](images/fb91b971be45bfc250facef788f3276db4ddafe8a08b8c916e5a2716fd94444b.jpg)

![](images/563d9eba558341fb8687a26496125aebac6d3eb53ab08726fa113478f6d37d11.jpg)

![](images/ee32db0139148c0939a3721e7d29469270122389ab0c1544023791a73d7d1a28.jpg)

![](images/1689920085e4fc421001f0d1063490cffbca7cad1f04f67eda99573d3e81e26f.jpg)
(a) STL-10

![](images/f9b1d671f22dc75b022ea306239cb0e13e2741467529e21d64eb976f5dc62b33.jpg)
(b) CIFAR-10

![](images/019337aa7a62741f5a1e37d7601539ca575478c4bed116513f5addadaf369b65.jpg)
(c) CIFAR-100
Figure 2: Ablation studies for $\mathrm{RLFAT_P}$ and $\mathrm{RLFAT_T}$ to investigate the impact of Robust Local Feature Learning (RLFL) and Robust Local Feature Transfer (RLFT).

Does robust local feature transfer help? We further add Robust Local Feature Transfer (RLFT), the second term in Eq. (10), to obtain the overall loss of RLFAT.
The robust accuracy further increases on all datasets for $\mathrm{RLFAT_P}$ and $\mathrm{RLFAT_T}$ . The standard accuracy also further increases, except for $\mathrm{RLFAT_P}$ on CIFAR-100, where it is still clearly higher than that of the baseline model PGDAT. This indicates that transferring the robust local features into the training on normal adversarial examples does help promote both standard accuracy and robust accuracy in most cases.

# 4.5 VISUALIZING THE SALIENCE MAPS

We would like to investigate the features of the input images that the models mostly focus on. Following the work of Zhang & Zhu (2019), we generate the salience maps using SmoothGrad (Smilkov et al., 2017) on the STL-10 dataset. The key idea of SmoothGrad is to average the gradients of class activation with respect to noisy copies of an input image. As illustrated in Figure 3, all the adversarially trained models basically capture the global structure features of the object in the images. As compared to PGDAT and TRADES, both $\mathrm{RLFAT_P}$ and $\mathrm{RLFAT_T}$ capture more local feature information of the object, aligning better with human perception. Note that the images are correctly classified by all these models. For more visualization results, see Appendix B.

# 5 CONCLUSION AND FUTURE WORK

Unlike existing adversarially trained models, which are biased towards the global structure features of the images, in this work we hypothesize that robust local features can improve the generalization of adversarial training. To validate this hypothesis, we propose a new stream of adversarial training approaches called Robust Local Features for Adversarial Training (RLFAT) and implement it in two state-of-the-art adversarial training frameworks, PGDAT and TRADES. We provide strong empirical support for our hypothesis and show that the proposed methods based on RLFAT not only yield better standard generalization but also promote adversarially robust generalization.
Furthermore, we show that the salience maps of our models on images tend to align better with human perception, uncovering an unexpected benefit of the robust local features for adversarial training.

Our findings open a new avenue for improving adversarial training, but there is still a lot to explore along this avenue. First, is it possible to explicitly disentangle the robust local features from the perspective of feature disentanglement? What is the best way to leverage the robust local features? Second, from a methodological standpoint, the discovered relationship may also serve as an inspiration for new adversarial defenses, where not only the robust local features but also the global information is taken into account, as the global information is useful for some tasks. These questions are worth investigating in future work, and we hope that our observations on the benefit of robust local features will inspire more future development.

![](images/4263167479839dea29d8a1cfd1d18ffb85e9135c48ceff5696eaa9f17f71df98.jpg)
Figure 3: Salience maps of the four models on sampled images. For each group of images, we have the original image, and the salience maps of the four models sequentially.

# ACKNOWLEDGMENTS

This work is supported by the Fundamental Research Funds for the Central Universities (2019kfyXKJC021) and Microsoft Research Asia.

# REFERENCES

Anish Athalye, Nicholas Carlini, and David A. Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, pp. 274-283, 2018.
Jacob Buckman, Aurko Roy, Colin Raffel, and Ian J. Goodfellow. Thermometer encoding: One hot way to resist adversarial examples. In 6th International Conference on Learning Representations, ICLR 2018, 2018.
Nicholas Carlini and David A. Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods.
In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, AISec@CCS 2017, pp. 3-14, 2017a. +Nicholas Carlini and David A. Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy, SP 2017, pp. 39-57, 2017b. +Adam Coates, Andrew Y. Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2011. +Guneet S. Dhillon, Kamyar Azizzadenesheli, Zachary C. Lipton, Jeremy Bernstein, Jean Kossaifi, Aran Khanna, and Animashree Anandkumar. Stochastic activation pruning for robust adversarial defense. In 6th International Conference on Learning Representations, ICLR 2018, 2018. + +Gavin Weiguang Ding, Kry Yik Chau Lui, Xiaomeng Jin, Luyu Wang, and Ruitong Huang. On the sensitivity of adversarial robustness to input data distributions. In 7th International Conference on Learning Representations, ICLR 2019, 2019. +Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, and Wieland Brendel. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. In 7th International Conference on Learning Representations, ICLR 2019, 2019. +Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In 3rd International Conference on Learning Representations, ICLR 2015, 2015. +Geoffrey Hinton, Li Deng, Dong Yu, George Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Brian Kingsbury, et al. Deep neural networks for acoustic modeling in speech recognition. IEEE Signal processing magazine, 29, 2012. +Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. CoRR, abs/1905.02175, 2019. 
+Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009. +Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012, pp. 1106-1114, 2012. +Yandong Li, Lijun Li, Liqiang Wang, Tong Zhang, and Boqing Gong. NATTACK: learning the distributions of adversarial examples for an improved black-box attack on deep neural networks. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, pp. 3866-3876, 2019. +Jiadong Lin, Chuanbiao Song, Kun He, Liwei Wang, and John E. Hopcroft. Nesterov accelerated gradient and scale invariance for adversarial attacks. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=SJ1HwkBYDH. +Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. Delving into transferable adversarial examples and black-box attacks. In 5th International Conference on Learning Representations, ICLR 2017, Toulouse, France, April 24-26, 2017, Conference Track Proceedings, 2017. +Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In 6th International Conference on Learning Representations, ICLR 2018, 2018. +Nicolas Papernot, Patrick D. McDaniel, Ian J. Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, AsiaCCS 2017, pp. 506-519, 2017. +Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, and Aleksander Madry. Adversarially robust generalization requires more data. 
In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, pp. 5019-5031, 2018. +Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda B. Viégas, and Martin Wattenberg. Smoothgrad: removing noise by adding noise. CoRR, abs/1706.03825, 2017. +Chuanbiao Song, Kun He, Liwei Wang, and John E. Hopcroft. Improving the generalization of adversarial training with domain adaptation. In 7th International Conference on Learning Representations, ICLR 2019, 2019. +Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In 2nd International Conference on Learning Representations, ICLR 2014, 2014. + +Richard Szeliski. Computer Vision - Algorithms and Applications. Texts in Computer Science. Springer, 2011. +Jonathan Uesato, Brendan O'Donoghue, Pushmeet Kohli, and Aaron van den Oord. Adversarial risk and the dangers of evaluating against weak attacks. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, pp. 5032-5041, 2018. +Huaxia Wang and Chun-Nam Yu. A direct approach to robust deep learning using adversarial networks. In 7th International Conference on Learning Representations, ICLR 2019, 2019. +Xiaosen Wang, Kun He, and John E. Hopcroft. AT-GAN: A generative attack model for adversarial transferring on generative adversarial nets. CoRR, abs/1904.07793, 2019. +Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In Proceedings of the British Machine Vision Conference 2016, BMVC, 2016. +Runtian Zhai, Tianle Cai, Di He, Chen Dan, Kun He, John E. Hopcroft, and Liwei Wang. Adversarial robust generalization just requires more unlabeled data. CoRR, abs/1906.00555, 2019. +Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, and Michael I. Jordan. Theoretically principled trade-off between robustness and accuracy. 
In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, pp. 7472-7482, 2019a. +Huan Zhang, Hongge Chen, Zhao Song, Duane S. Boning, Inderjit S. Dhillon, and Cho-Jui Hsieh. The limitations of adversarial training and the blind-spot attack. In 7th International Conference on Learning Representations, ICLR 2019, 2019b. +Tianyuan Zhang and Zhanxing Zhu. Interpreting adversarially trained convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, pp. 7502-7511, 2019. + +# A HYPER-PARAMETER SETTING + +Here we show the details of the training hyper-parameters and the attack hyper-parameters for the experiments. + +Training Hyper-parameters. For all training jobs, we use the Adam optimizer with a learning rate of 0.001 and a batch size of 32. For CIFAR-10 and CIFAR-100, we run 79,800 steps for training. For STL-10, we run 29,700 steps for training. For STL-10 and CIFAR-100, the adversarial examples are generated with step size 0.0075, 7 iterations, and $\epsilon = 0.03$ . For CIFAR-10, the adversarial examples are generated with step size 0.0075, 10 iterations, and $\epsilon = 0.03$ . + +Attack Hyper-parameters. For the PGD attack, we use the same attack parameters as those of the training process. For the CW attack, we use PGD to minimize its loss function with a high confidence parameter ( $k = 50$ ) following the work of Madry et al. (2018). For the $\mathcal{N}$ attack, we set the maximum number of optimization iterations to $T = 200$ , $b = 300$ for the sample size, the variance of the isotropic Gaussian $\sigma^2 = 0.01$ , and the learning rate $\eta = 0.008$ . + +# B MORE FEATURE VISUALIZATION + +We provide more salience maps of the adversarially trained models on sampled images in Figure 4. + +![](images/d6112ce5c5356d712aa18b33e1e8d585c1edefb1098f4ef3baea82f64310e44a.jpg) +Figure 4: More Salience maps of the four models. 
For each group of images, we have the original image, and the salience maps of the four models sequentially.

![](images/54be9e543fe4227172ef4ac1cb2299d0af23ff494905f9827803ac3f1ae574cd.jpg)

# ROBUSTNESS VERIFICATION FOR TRANSFORMERS

Zhouxing Shi $^{1}$ , Huan Zhang $^{2}$ , Kai-Wei Chang $^{2}$ , Minlie Huang $^{1}$ , Cho-Jui Hsieh $^{2}$

$^{1}$ Dept. of Computer Science & Technology, Tsinghua University, Beijing 10084, China

$^{2}$ Dept. of Computer Science, University of California, Los Angeles, CA 90095, USA

zhouxingshichn@gmail.com, huan@huan-zhang.com

kw@kwchang.net, aihuang@tsinghua.edu.cn, chohsieh@cs.ucla.edu

# ABSTRACT

Robustness verification that aims to formally certify the prediction behavior of neural networks has become an important tool for understanding model behavior and obtaining safety guarantees.
However, previous methods can usually only handle neural networks with relatively simple architectures. In this paper, we consider the robustness verification problem for Transformers. Transformers have complex self-attention layers that pose many challenges for verification, including cross-nonlinearity and cross-position dependency, which have not been discussed in previous works. We resolve these challenges and develop the first robustness verification algorithm for Transformers. The certified robustness bounds computed by our method are significantly tighter than those by naive Interval Bound Propagation. These bounds also shed light on interpreting Transformers, as they consistently reflect the importance of different words in sentiment analysis. + +# 1 INTRODUCTION + +Deep neural networks have been successfully applied to many domains. However, these black-box models are generally difficult to analyze and their behavior is not guaranteed. Moreover, it has been shown that the predictions of deep networks become unreliable and unstable when tested in unseen situations, e.g., in the presence of small adversarial perturbations to the input (Szegedy et al., 2013; Goodfellow et al., 2014; Lin et al., 2019). Therefore, neural network verification has become an important tool for analyzing and understanding the behavior of neural networks, with applications in safety-critical systems (Katz et al., 2017; Julian et al., 2019; Lin et al., 2019), model explanation (Shih et al., 2018) and robustness analysis (Tjeng et al., 2019; Wang et al., 2018c; Gehr et al., 2018; Wong & Kolter, 2018; Singh et al., 2018; Weng et al., 2018; Zhang et al., 2018). + +Formally, a neural network verification algorithm aims to provably characterize the prediction of a network within some input space.
For example, given a $K$ -way classification model $f: \mathbb{R}^d \to \mathbb{R}^K$ , where $f_i(\mathbf{x})$ stands for the predicted score of class $i$ , we can verify some linear specification (defined by a vector $\mathbf{c}$ ) as below: + +$$ +\min_{\mathbf{x}} \sum_{i} \mathbf{c}_{i} f_{i}(\mathbf{x}) \quad \text{s.t.} \quad \mathbf{x} \in \mathbb{S}, \tag{1} +$$ + +where $\mathbb{S}$ is a predefined input space. In the robustness verification problem, $\mathbb{S} = \{\mathbf{x} \mid \| \mathbf{x} - \mathbf{x}_0\|_p \leq \epsilon\}$ is defined as some small $\ell_p$ -ball around the original example $\mathbf{x}_0$ , and setting up $\mathbf{c} = 1_{y_0} - 1_y$ enables us to verify whether the logit output of class $y_0$ is always greater than that of another class $y$ for any input within $\mathbb{S}$ . This is a nonconvex optimization problem, which makes computing the exact solution challenging; thus, several algorithms have recently been proposed to find lower bounds of Eq. (1) in order to efficiently obtain a safety guarantee (Gehr et al., 2018; Weng et al., 2018; Zhang et al., 2018; Singh et al., 2019). Moreover, extensions of these algorithms can be used for verifying properties beyond robustness, such as rotation or shift invariance (Singh et al., 2019), conservation of energy (Qin et al., 2019) and model correctness (Yang & Rinard, 2019). + +However, most existing verification methods focus on relatively simple neural network architectures, such as feed-forward and recurrent neural networks, and cannot handle complex structures. In this paper, we develop the first robustness verification algorithm for Transformers (Vaswani et al., 2017) with self-attention layers. Transformers have been widely used in natural language processing (Devlin et al., 2019; Yang et al., 2019; Liu et al., 2019) and many other domains (Parmar et al., 2018; Kang & McAuley, 2018; Li et al., 2019b; Su et al., 2019; Li et al., 2019a).
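As a concrete illustration of lower-bounding Eq. (1), the crudest relaxation, interval bound propagation, can be sketched in a few lines; the two-layer ReLU network and all weights below are illustrative assumptions, not from the paper.

```python
# Interval bound propagation (IBP) sketch for the specification in Eq. (1):
# certify a lower bound on sum_i c_i f_i(x) over the ball ||x - x0||_inf <= eps.
# The tiny 2-layer ReLU network used here is an illustrative assumption.

def affine_interval(W, b, lo, hi):
    """Propagate the box [lo, hi] through y = W x + b, row by row."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        out_lo.append(bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row)))
        out_hi.append(bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row)))
    return out_lo, out_hi

def relu_interval(lo, hi):
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

def verify_margin(W1, b1, W2, b2, x0, eps, c):
    """Certified lower bound of sum_i c_i f_i(x) for ||x - x0||_inf <= eps."""
    lo = [v - eps for v in x0]
    hi = [v + eps for v in x0]
    lo, hi = relu_interval(*affine_interval(W1, b1, lo, hi))
    lo, hi = affine_interval(W2, b2, lo, hi)
    # worst case of the linear specification over the output box
    return sum(ci * (lo[i] if ci >= 0 else hi[i]) for i, ci in enumerate(c))
```

With $\mathbf{c} = 1_{y_0} - 1_y$ , a positive return value certifies that class $y_0$ always outranks $y$ on the ball; the linear-relaxation methods discussed in this paper replace intervals with linear functions of the input to tighten such bounds.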
For frames under perturbation in the input sequence, we aim to compute a lower bound $\epsilon$ such that, when these frames are perturbed within $\ell_p$ -balls centered at the original frames respectively and with a radius of $\epsilon$ , the model prediction is certified to be unchanged. To compute such bounds efficiently, we adopt the linear-relaxation framework (Weng et al., 2018; Zhang et al., 2018): we recursively propagate and compute linear lower and upper bounds for each neuron w.r.t. the input within the perturbation space $\mathbb{S}_{\epsilon}$ . + +We resolve several particular challenges in verifying Transformers. First, Transformers with self-attention layers have a complicated architecture. Unlike simpler networks, they cannot be written as multiple layers of affine transformations or element-wise activation functions. Therefore, we need to propagate linear bounds differently for self-attention layers. Second, dot products, softmax, and weighted summation in self-attention layers involve multiplication or division of two variables both under perturbation, namely cross-nonlinearity, which is not present in feed-forward networks. Ko et al. (2019) proposed a gradient-descent-based approach to find linear bounds; however, it is inefficient and poses a computational challenge for Transformer verification since self-attention is the core of Transformers. In contrast, we derive closed-form linear bounds that can be computed in $O(1)$ complexity. Third, in the computation of self-attention, output neurons in each position depend on all input neurons from different positions (namely cross-position dependency), unlike the case in recurrent neural networks where outputs depend only on the hidden features from the previous position and the current input. Previous works (Zhang et al., 2018; Weng et al., 2018; Ko et al., 2019) have to track all such dependencies and are thus costly in time and memory.
To tackle this, we introduce an efficient bound propagating process in a forward manner specially for self-attention layers, enabling the tighter backward bounding process for other layers to utilize bounds computed by the forward process. In this way, we avoid cross-position dependency in the backward process, which is relatively slower but produces tighter bounds. Combined with the forward process, the complexity of the backward process is reduced by a factor of $O(n)$ for input length $n$ , while the computed bounds remain comparably tight. Our contributions are summarized below: + +- We propose an effective and efficient algorithm for verifying the robustness of Transformers with self-attention layers. To the best of our knowledge, this is the first method for verifying Transformers. +- We resolve key challenges in verifying Transformers, including cross-nonlinearity and cross-position dependency. Our bounds are significantly tighter than those obtained by adapting Interval Bound Propagation (IBP) (Mirman et al., 2018; Gowal et al., 2018). +- We quantitatively and qualitatively show that the certified bounds computed by our algorithm consistently reflect the importance of input words in sentiment analysis, which justifies that these bounds are meaningful in practice and that they shed light on interpreting Transformers. + +# 2 RELATED WORK + +Robustness Verification for Neural Networks. Given an input $\mathbf{x}_0$ and a small region $\mathbb{B}_p(\mathbf{x}_0,\epsilon) := \{\mathbf{x} \mid \| \mathbf{x} - \mathbf{x}_0\|_p \leq \epsilon\}$ , the goal of robustness verification is to verify whether the prediction of the neural network is unchanged within this region. This problem can be mathematically formulated as Eq. (1). If Eq. (1) can be solved optimally, then we can derive the minimum adversarial perturbation of $\mathbf{x}_0$ by conducting binary search on $\epsilon$ .
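The binary search just mentioned only needs a verification oracle; a sketch under the assumption that `verified(eps)` returns True whenever robustness is certified on the $\epsilon$-ball (the oracle itself is a placeholder, not the paper's verifier):

```python
# Binary search for the largest certified radius eps, given a monotone
# verification oracle. "verified" is a placeholder assumption standing in
# for a full verifier that checks delta_eps^L > 0.

def max_certified_eps(verified, hi=1.0, iters=30):
    if not verified(0.0):
        return 0.0
    lo = 0.0
    while verified(hi):          # grow the bracket until verification fails
        lo, hi = hi, 2.0 * hi
        if hi > 1e6:
            return lo
    for _ in range(iters):       # shrink [lo, hi] around the threshold
        mid = 0.5 * (lo + hi)
        if verified(mid):
            lo = mid
        else:
            hi = mid
    return lo                    # lo is always a certified radius
```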
Equivalently, we obtain the maximum $\epsilon$ such that any perturbation within $\mathbb{B}_p(\mathbf{x}_0,\epsilon)$ cannot change the predicted label. + +Several works focus on solving Eq. (1) exactly and optimally, using mixed integer linear programming (MILP) (Tjeng et al., 2019; Dutta et al., 2018), branch and bound (BaB) (Bunel et al., 2018), and satisfiability modulo theories (SMT) (Ehlers, 2017; Katz et al., 2017). Unfortunately, due to the nonconvexity of model $f$ , solving Eq. (1) is NP-hard even for a simple ReLU network (Katz et al., 2017). Therefore, we can only expect to compute a lower bound of Eq. (1) efficiently by using relaxations. Many algorithms can be seen as using convex relaxations for non-linear activation functions (Salman et al., 2019), including using duality (Wong & Kolter, 2018; Dvijotham et al., 2018), abstract domains (Gehr et al., 2018; Singh et al., 2018; Mirman et al., 2018; Singh et al., 2019), layer-by-layer reachability analysis (Wang et al., 2018b; Weng et al., 2018; Zhang et al., 2018; Gowal et al., 2018) and semi-definite relaxations (Raghunathan et al., 2018; Dvijotham et al., 2019). + +Additionally, robustness verification can rely on analysis of local Lipschitz constants (Hein & Andriushchenko, 2017; Zhang et al., 2019). However, existing methods are mostly limited to verifying networks with relatively simple architectures, such as feed-forward networks and RNNs (Wang et al., 2018a; Akintunde et al., 2019; Ko et al., 2019), and none of them are able to handle Transformers. + +Transformers and Self-Attentive Models. Transformers (Vaswani et al., 2017) are based on the self-attention mechanism; combined with pre-training on large-scale corpora, models such as BERT (Devlin et al., 2019), XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019) have achieved state-of-the-art performance on many NLP tasks.
Self-attentive models are also useful beyond NLP, including VisualBERT on vision and language applications (Li et al., 2019b; Su et al., 2019), image transformer for image generation (Parmar et al., 2018), acoustic models for speech recognition (Zhou et al., 2018), sequential recommendation (Kang & McAuley, 2018) and graph embedding (Li et al., 2019a). + +The robustness of NLP models has been widely studied; in particular, many methods have been proposed to generate adversarial examples (Papernot et al., 2016; Jia & Liang, 2017; Zhao et al., 2017; Alzantot et al., 2018; Cheng et al., 2018; Ebrahimi et al., 2018; Shi et al., 2019). Notably, Hsieh et al. (2019) showed that Transformers are more robust than LSTMs. However, there is not much work on robustness verification for NLP models. Ko et al. (2019) verified RNN/LSTM models. Jia et al. (2019) and Huang et al. (2019) used Interval Bound Propagation (IBP) for certified robustness training of CNNs and LSTMs. In this paper, we propose the first verification method for Transformers. + +# 3 METHODOLOGY + +We aim to verify the robustness of a Transformer whose input is a sequence of frames $\mathbf{X} = [\mathbf{x}^{(1)},\mathbf{x}^{(2)},\dots ,\mathbf{x}^{(n)}]$ . We take binary text classification as a running example, where $\mathbf{x}^{(i)}$ is a word embedding and the model outputs a score $y_{c}(\mathbf{X})$ for each class $c$ ( $c\in \{0,1\}$ ). Nevertheless, our method for verifying Transformers is general and can also be applied in other applications. + +For a clean input sequence $\mathbf{X}_0 = [\mathbf{x}_0^{(1)},\mathbf{x}_0^{(2)},\dots ,\mathbf{x}_0^{(n)}]$ correctly classified by the model, let $P = \{r_1,r_2,\dots ,r_t\}$ $(1\leq r_k\leq n)$ be the set of perturbed positions, where $t$ is the number of perturbed positions.
Thus the perturbed input belongs to $\mathbb{S}_{\epsilon}\coloneqq \{\mathbf{X} = [\mathbf{x}^{(1)},\mathbf{x}^{(2)},\dots ,\mathbf{x}^{(n)}]:\| \mathbf{x}^{(r_k)} - \mathbf{x}_0^{(r_k)}\| _p\leq \epsilon ,1\leq k\leq t,\mathbf{x}^{(i)} = \mathbf{x}_0^{(i)},\forall i\notin P\}$ . Assuming that $c$ is the gold class, the goal of robustness verification is to compute + +$$ +\delta_{\epsilon} := \min_{\mathbf{X} \in \mathbb{S}_{\epsilon}} y_{c}(\mathbf{X}) - y_{1-c}(\mathbf{X}). +$$ + +If $\delta_{\epsilon} > 0$ , the output score of the correct class is always larger than that of the incorrect one for any input within $\mathbb{S}_{\epsilon}$ . As mentioned previously, computing the exact value of $\delta_{\epsilon}$ is NP-hard, and thus our goal is to efficiently compute a lower bound $\delta_{\epsilon}^{L} \leq \delta_{\epsilon}$ . + +# 3.1 BASE FRAMEWORK + +We obtain $\delta_{\epsilon}^{L}$ by computing the bounds of each neuron when $\mathbf{X}$ is perturbed within $\mathbb{S}_{\epsilon}$ ( $\delta_{\epsilon}^{L}$ can be regarded as the value of a final neuron). A Transformer layer can be decomposed into a number of sub-layers, where each sub-layer contains neurons after some operations. These operations can be categorized into three categories: 1) linear transformations, 2) unary nonlinear functions, and 3) operations in self-attention. Each sub-layer contains $n$ positions in the sequence and each position contains a group of neurons. We assume that the Transformer we verify has $m$ sub-layers in total, and the value of the $j$ -th neuron at the $i$ -th position in the $l$ -th sub-layer is $\Phi_j^{(l,i)}(\mathbf{X})$ , where $\Phi^{(l,i)}(\mathbf{X})$ is a vector for the specified sub-layer and position. In particular, $\Phi^{(0,i)} = \mathbf{x}^{(i)}$ for $l = 0$ . We aim to compute a global lower bound $f_{j}^{(l,i),L}$ and a global upper bound $f_{j}^{(l,i),U}$ of $\Phi_j^{(l,i)}(\mathbf{X})$ for $\mathbf{X} \in \mathbb{S}_{\epsilon}$ .
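Once linear bounds in terms of the perturbed embeddings reach the input layer, they are concretized over the $\epsilon$-ball via Hölder's inequality with the dual norm, which is exactly what Eq. (3) and Eq. (4) below compute; a small sketch with illustrative names:

```python
# Concretize a linear bound  lam . x + delta  over  ||x - x0||_p <= eps :
# the worst case is  (lam . x0 + delta) -/+ eps * ||lam||_q  with 1/p + 1/q = 1.
# Names are illustrative, not from the paper's code.

def dual_norm(row, p):
    if p == float("inf"):                     # q = 1
        return sum(abs(v) for v in row)
    if p == 2:                                # q = 2
        return sum(v * v for v in row) ** 0.5
    if p == 1:                                # q = inf
        return max(abs(v) for v in row)
    raise ValueError("unsupported p")

def concretize(lam, delta, x0, eps, p, lower=True):
    """Global lower (or upper) bound of lam . x + delta on the eps-ball."""
    center = sum(a * b for a, b in zip(lam, x0)) + delta
    radius = eps * dual_norm(lam, p)
    return center - radius if lower else center + radius
```

For $p = \infty$ the bound is exact: the worst case of a linear function over a box is attained at a corner.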
+ +We compute bounds from the first sub-layer to the last sub-layer. For neurons in the $l$ -th layer, we aim to represent their bounds as linear functions of neurons in a previous layer, the $l'$ -th layer: + +$$ +\sum_{k=1}^{n} \boldsymbol{\Lambda}_{j,:}^{(l,i,l',k),L} \Phi^{(l',k)}(\mathbf{X}) + \boldsymbol{\Delta}_{j}^{(l,i,l'),L} \leq \Phi_{j}^{(l,i)}(\mathbf{X}) \leq \sum_{k=1}^{n} \boldsymbol{\Lambda}_{j,:}^{(l,i,l',k),U} \Phi^{(l',k)}(\mathbf{X}) + \boldsymbol{\Delta}_{j}^{(l,i,l'),U}, \tag{2} +$$ + +where $\Lambda^{(l,i,l',k),L},\Delta^{(l,i,l'),L}$ and $\Lambda^{(l,i,l',k),U},\Delta^{(l,i,l'),U}$ are parameters of the linear lower and upper bounds respectively. Using linear bounds enables us to efficiently compute bounds with reasonable tightness. We initially have $\Lambda^{(l,i,l,i),L} = \Lambda^{(l,i,l,i),U} = \mathbf{I}$ and $\Delta^{(l,i,l),L} = \Delta^{(l,i,l),U} = \mathbf{0}$ , so that the right-hand side of Eq. (2) equals $\Phi_j^{(l,i)}(\mathbf{X})$ when $l' = l$ . Generally, we use a backward process to propagate the bounds to previous sub-layers, by substituting $\Phi^{(l',i)}$ with linear functions of previous neurons. This can be conducted recursively until reaching the input layer $l' = 0$ .
Since $\Phi^{(0,k)} = \mathbf{x}^{(k)} = \mathbf{x}_0^{(k)}$ $(\forall k \notin P)$ is constant, we can regard the bounds as linear functions of the perturbed embeddings $\Phi^{(0,r_k)} = \mathbf{x}^{(r_k)}$ $(1 \leq k \leq t)$ , and take the global bounds for $\mathbf{x}^{(r_k)} \in \mathbb{B}_p(\mathbf{x}_0^{(r_k)}, \epsilon)$ : + +$$ +f_{j}^{(l,i),L} = -\epsilon \sum_{k=1}^{t} \| \boldsymbol{\Lambda}_{j,:}^{(l,i,0,r_{k}),L} \|_{q} + \sum_{k=1}^{n} \boldsymbol{\Lambda}_{j,:}^{(l,i,0,k),L} \mathbf{x}_{0}^{(k)} + \boldsymbol{\Delta}_{j}^{(l,i,0),L}, \tag{3} +$$ + +$$ +f_{j}^{(l,i),U} = \epsilon \sum_{k=1}^{t} \| \boldsymbol{\Lambda}_{j,:}^{(l,i,0,r_{k}),U} \|_{q} + \sum_{k=1}^{n} \boldsymbol{\Lambda}_{j,:}^{(l,i,0,k),U} \mathbf{x}_{0}^{(k)} + \boldsymbol{\Delta}_{j}^{(l,i,0),U}, \tag{4} +$$ + +where $1 / p + 1 / q = 1$ with $p, q \geq 1$ . These steps resemble CROWN (Zhang et al., 2018), a method proposed to verify feed-forward networks. We further support verifying self-attentive Transformers, which are more complex than feed-forward networks. Moreover, unlike CROWN, which conducts a fully backward process, we combine the backward process with a forward process (see Sec. 3.3) to reduce the computational complexity of verifying Transformers. + +# 3.2 LINEAR TRANSFORMATIONS AND UNARY NONLINEAR FUNCTIONS + +Linear transformations and unary nonlinear functions are basic operations in neural networks. We show how the bounds in Eq. (2) at the $l'$ -th sub-layer are propagated to the $(l' - 1)$ -th layer.
+ +**Linear Transformations** If the $l'$ -th sub-layer is connected with the $(l' - 1)$ -th sub-layer by a linear transformation $\Phi^{(l',k)}(\mathbf{X}) = \mathbf{W}^{(l')}\Phi^{(l' - 1,k)}(\mathbf{X}) + \mathbf{b}^{(l')}$ , where $\mathbf{W}^{(l')},\mathbf{b}^{(l')}$ are parameters of the linear transformation, we propagate the bounds to the $(l' - 1)$ -th layer by substituting $\Phi^{(l',k)}(\mathbf{X})$ : + +$$ +\boldsymbol{\Lambda}^{(l,i,l'-1,k),L/U} = \boldsymbol{\Lambda}^{(l,i,l',k),L/U} \mathbf{W}^{(l')}, \quad \boldsymbol{\Delta}^{(l,i,l'-1),L/U} = \boldsymbol{\Delta}^{(l,i,l'),L/U} + \left(\sum_{k=1}^{n} \boldsymbol{\Lambda}^{(l,i,l',k),L/U}\right) \mathbf{b}^{(l')}, +$$ + +where $L / U$ means that the equations hold for lower bounds and upper bounds respectively. + +**Unary Nonlinear Functions** If the $l'$ -th layer is obtained from the $(l' - 1)$ -th layer with a unary nonlinear function $\Phi_j^{(l',k)}(\mathbf{X}) = \sigma^{(l')}(\Phi_j^{(l' - 1,k)}(\mathbf{X}))$ , to propagate linear bounds over the nonlinear function, we first bound $\sigma^{(l')}(\Phi_j^{(l' - 1,k)}(\mathbf{X}))$ with two linear functions of $\Phi_j^{(l' - 1,k)}(\mathbf{X})$ : + +$$ +\alpha_{j}^{(l',k),L} \Phi_{j}^{(l'-1,k)}(\mathbf{X}) + \beta_{j}^{(l',k),L} \leq \sigma^{(l')} \left(\Phi_{j}^{(l'-1,k)}(\mathbf{X})\right) \leq \alpha_{j}^{(l',k),U} \Phi_{j}^{(l'-1,k)}(\mathbf{X}) + \beta_{j}^{(l',k),U}, +$$ + +where $\alpha_{j}^{(l',k),L / U},\beta_{j}^{(l',k),L / U}$ are parameters such that the inequality holds for all $\Phi_j^{(l' - 1,k)}(\mathbf{X})$ within its bounds computed previously. Such linear relaxations can be derived for each of the different functions separately.
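As one concrete instance of such a relaxation, ReLU on an interval $[l, u]$ admits a chord upper bound and a linear lower bound, and the sign-split rule used in the back-propagation step can be written in one line. This is only a sketch under standard relaxation choices; the relaxations the paper actually uses are listed in its Appendix B.

```python
# Linear relaxation of ReLU on [l, u]:  aL*x + bL <= max(0, x) <= aU*x + bU,
# plus the sign-split rule for back-propagating a row of coefficients.
# A sketch with standard (CROWN-style) choices, not the paper's exact code.

def relu_relaxation(l, u):
    if l >= 0:
        return 1.0, 0.0, 1.0, 0.0          # identity region
    if u <= 0:
        return 0.0, 0.0, 0.0, 0.0          # zero region
    a_u = u / (u - l)                      # chord through (l, 0) and (u, u)
    b_u = -a_u * l
    a_l = 1.0 if u >= -l else 0.0          # adaptive lower line
    return a_l, 0.0, a_u, b_u

def backprop_lower(lam, alpha_l, alpha_u):
    """Positive coefficients keep alpha^L; negative ones switch to alpha^U."""
    return [a * (alpha_l[j] if a >= 0 else alpha_u[j]) for j, a in enumerate(lam)]
```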
We provide detailed bounds for functions involved in Transformers in Appendix B. + +We then back-propagate the bounds: + +$$ +\boldsymbol{\Lambda}_{:,j}^{(l,i,l'-1,k),L/U} = \alpha_{j}^{(l',k),L/U} \boldsymbol{\Lambda}_{:,j,+}^{(l,i,l',k),L/U} + \alpha_{j}^{(l',k),U/L} \boldsymbol{\Lambda}_{:,j,-}^{(l,i,l',k),L/U}, +$$ + +$$ +\boldsymbol{\Delta}_{j}^{(l,i,l'-1),L/U} = \boldsymbol{\Delta}_{j}^{(l,i,l'),L/U} + \left(\sum_{k=1}^{n} \beta_{j}^{(l',k),L/U} \boldsymbol{\Lambda}_{:,j,+}^{(l,i,l',k),L/U} + \beta_{j}^{(l',k),U/L} \boldsymbol{\Lambda}_{:,j,-}^{(l,i,l',k),L/U}\right), +$$ + +where $\Lambda_{:,j,+}^{(l,i,l',k),L/U}$ and $\Lambda_{:,j,-}^{(l,i,l',k),L/U}$ mean to retain the positive and negative elements in vector $\Lambda_{:,j}^{(l,i,l',k),L/U}$ respectively while setting the other elements to 0. + +# 3.3 SELF-ATTENTION MECHANISM + +Self-attention layers are the most challenging parts for verifying Transformers. We assume that $\Phi^{(l - 1,i)}(\mathbf{X})$ is the input to a self-attention layer. We describe our method for computing bounds for one attention head; bounds for different heads of the multi-head attention in Transformers can be easily concatenated. $\Phi^{(l - 1,i)}(\mathbf{X})$ is first linearly projected to queries $\mathbf{q}^{(l,i)}(\mathbf{X})$ , keys $\mathbf{k}^{(l,i)}(\mathbf{X})$ , and values $\mathbf{v}^{(l,i)}(\mathbf{X})$ with different linear projections, and their bounds can be obtained as described in Sec. 3.2. We also keep their linear bounds that are linear functions of the perturbed embeddings.
For convenience, let $\mathbf{x}^{(r)} = \mathbf{x}^{(r_1)}\oplus \mathbf{x}^{(r_2)}\oplus \dots \oplus \mathbf{x}^{(r_t)}$ , where $\oplus$ indicates vector concatenation, and thereby we represent the linear bounds as linear functions of $\mathbf{x}^{(r)}$ : + +$$ +\Omega_{j,:}^{(l,i),q/k/v,L} \mathbf{x}^{(r)} + \Theta_{j}^{(l,i),q/k/v,L} \leq (\mathbf{q}/\mathbf{k}/\mathbf{v})_{j}^{(l,i)}(\mathbf{X}) \leq \Omega_{j,:}^{(l,i),q/k/v,U} \mathbf{x}^{(r)} + \Theta_{j}^{(l,i),q/k/v,U}, +$$ + +where $q / k / v$ and $\mathbf{q} / \mathbf{k} / \mathbf{v}$ mean that the inequality holds for queries, keys and values respectively. We then bound the output of the self-attention layer starting from $\mathbf{q}^{(l,i)}(\mathbf{X})$ , $\mathbf{k}^{(l,i)}(\mathbf{X})$ , $\mathbf{v}^{(l,i)}(\mathbf{X})$ . + +**Bounds of Multiplications and Divisions** We bound multiplications and divisions in the self-attention mechanism with linear functions. We aim to bound a bivariate function $z = xy$ or $z = \frac{x}{y}$ $(y > 0)$ with two linear functions $z^{L} = \alpha^{L}x + \beta^{L}y + \gamma^{L}$ and $z^{U} = \alpha^{U}x + \beta^{U}y + \gamma^{U}$ , where $[l_x, u_x]$ and $[l_y, u_y]$ are the bounds of $x$ and $y$ obtained previously. For $z = xy$ , we derive optimal parameters: $\alpha^{L} = l_{y}$ , $\alpha^{U} = u_{y}$ , $\beta^{L} = \beta^{U} = l_{x}$ , $\gamma^{L} = -l_{x}l_{y}$ , $\gamma^{U} = -l_{x}u_{y}$ . We provide a proof in Appendix C. However, directly bounding $z = \frac{x}{y}$ is tricky; fortunately, we can bound it indirectly by first bounding the unary function $\overline{y} = \frac{1}{y}$ and then bounding the multiplication $z = x\overline{y}$ . + +**A Forward Process** For the self-attention mechanism, instead of using the backward process like CROWN (Zhang et al., 2018), we compute bounds with a forward process, which, as we show later, reduces the computational complexity.
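The closed-form parameters for $z = xy$ stated above can be checked numerically; soundness follows from $xy - z^L = (x - l_x)(y - l_y) \geq 0$ and $z^U - xy = (x - l_x)(u_y - y) \geq 0$ on the box. The sketch below (illustrative names, not the paper's code) verifies this on a grid.

```python
# Check of the closed-form linear bounds for z = x*y on [lx, ux] x [ly, uy]:
#   z^L = ly*x + lx*y - lx*ly,   z^U = uy*x + lx*y - lx*uy.
# A numerical sketch, not the paper's implementation.

def mul_bounds(lx, ux, ly, uy):
    lower = (ly, lx, -lx * ly)    # (alpha^L, beta^L, gamma^L)
    upper = (uy, lx, -lx * uy)    # (alpha^U, beta^U, gamma^U)
    return lower, upper

def check_box(lx, ux, ly, uy, steps=20):
    (aL, bL, gL), (aU, bU, gU) = mul_bounds(lx, ux, ly, uy)
    for i in range(steps + 1):
        for j in range(steps + 1):
            x = lx + (ux - lx) * i / steps
            y = ly + (uy - ly) * j / steps
            assert aL * x + bL * y + gL <= x * y + 1e-9
            assert x * y <= aU * x + bU * y + gU + 1e-9
    return True
```

Division is handled exactly as stated above: the unary reciprocal is relaxed first, then composed with this multiplication bound.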
Attention scores are computed from $\mathbf{q}^{(l,i)}(\mathbf{X})$ and $\mathbf{k}^{(l,i)}(\mathbf{X})$ : $\mathbf{S}_{i,j}^{(l)} = (\mathbf{q}^{(l,i)}(\mathbf{X}))^T\mathbf{k}^{(l,j)}(\mathbf{X}) = \sum_{k=1}^{d_{qk}}\mathbf{q}_k^{(l,i)}(\mathbf{X})\mathbf{k}_k^{(l,j)}(\mathbf{X})$ , where $d_{qk}$ is the dimension of $\mathbf{q}^{(l,i)}(\mathbf{X})$ and $\mathbf{k}^{(l,j)}(\mathbf{X})$ . For each multiplication $\mathbf{q}_k^{(l,i)}(\mathbf{X})\mathbf{k}_k^{(l,j)}(\mathbf{X})$ , it is bounded by: + +$$ +\mathbf {q} _ {k} ^ {(l, i)} (\mathbf {X}) \mathbf {k} _ {k} ^ {(l, j)} (\mathbf {X}) \geq \alpha_ {k} ^ {(l, i, j), L} \mathbf {q} _ {k} ^ {(l, i)} (\mathbf {X}) + \beta_ {k} ^ {(l, i, j), L} \mathbf {k} _ {k} ^ {(l, j)} (\mathbf {X}) + \gamma_ {k} ^ {(l, i, j), L}, +$$ + +$$ +\mathbf {q} _ {k} ^ {(l, i)} (\mathbf {X}) \mathbf {k} _ {k} ^ {(l, j)} (\mathbf {X}) \leq \alpha_ {k} ^ {(l, i, j), U} \mathbf {q} _ {k} ^ {(l, i)} (\mathbf {X}) + \beta_ {k} ^ {(l, i, j), U} \mathbf {k} _ {k} ^ {(l, j)} (\mathbf {X}) + \gamma_ {k} ^ {(l, i, j), U}. 
+$$ + +We then obtain the bounds of $\mathbf{S}_{i,j}^{(l)}$ : + +$$ +\boldsymbol{\Omega}_{j,:}^{(l,i),s,L} \mathbf{x}^{(r)} + \boldsymbol{\Theta}_{j}^{(l,i),s,L} \leq \mathbf{S}_{i,j}^{(l)} \leq \boldsymbol{\Omega}_{j,:}^{(l,i),s,U} \mathbf{x}^{(r)} + \boldsymbol{\Theta}_{j}^{(l,i),s,U}, +$$ + +$$ +\boldsymbol{\Omega}_{j,:}^{(l,i),s,L/U} = \sum_{\alpha_{k}^{(l,i,j),L/U} > 0} \alpha_{k}^{(l,i,j),L/U} \boldsymbol{\Omega}_{k,:}^{(l,i),q,L/U} + \sum_{\alpha_{k}^{(l,i,j),L/U} < 0} \alpha_{k}^{(l,i,j),L/U} \boldsymbol{\Omega}_{k,:}^{(l,i),q,U/L} + \sum_{\beta_{k}^{(l,i,j),L/U} > 0} \beta_{k}^{(l,i,j),L/U} \boldsymbol{\Omega}_{k,:}^{(l,j),k,L/U} + \sum_{\beta_{k}^{(l,i,j),L/U} < 0} \beta_{k}^{(l,i,j),L/U} \boldsymbol{\Omega}_{k,:}^{(l,j),k,U/L}, +$$ + +$$ +\boldsymbol{\Theta}_{j}^{(l,i),s,L/U} = \sum_{\alpha_{k}^{(l,i,j),L/U} > 0} \alpha_{k}^{(l,i,j),L/U} \boldsymbol{\Theta}_{k}^{(l,i),q,L/U} + \sum_{\alpha_{k}^{(l,i,j),L/U} < 0} \alpha_{k}^{(l,i,j),L/U} \boldsymbol{\Theta}_{k}^{(l,i),q,U/L} + \sum_{\beta_{k}^{(l,i,j),L/U} > 0} \beta_{k}^{(l,i,j),L/U} \boldsymbol{\Theta}_{k}^{(l,j),k,L/U} + \sum_{\beta_{k}^{(l,i,j),L/U} < 0} \beta_{k}^{(l,i,j),L/U} \boldsymbol{\Theta}_{k}^{(l,j),k,U/L} + \sum_{k=1}^{d_{qk}} \gamma_{k}^{(l,i,j),L/U}. +$$ + +In this way, linear bounds of $\mathbf{q}^{(l,i)}(\mathbf{X})$ and $\mathbf{k}^{(l,i)}(\mathbf{X})$ are forward propagated to $\mathbf{S}_{i,j}^{(l)}$ . Attention scores are normalized into attention probabilities with a softmax, i.e. $\tilde{\mathbf{S}}_{i,j}^{(l)} = \exp (\mathbf{S}_{i,j}^{(l)}) / (\sum_{k = 1}^{n}\exp (\mathbf{S}_{i,k}^{(l)}))$ , where $\tilde{\mathbf{S}}_{i,j}^{(l)}$ is a normalized attention probability. $\exp (\mathbf{S}_{i,j}^{(l)})$ is a unary nonlinear function and can be bounded by $\alpha_{i,j}^{(l),L / U}\mathbf{S}_{i,j}^{(l)} + \beta_{i,j}^{(l),L / U}$ .
So we forward propagate bounds of $\mathbf{S}_{i,j}^{(l)}$ to bound $\exp (\mathbf{S}_{i,j}^{(l)})$ with $\Omega_{j,:}^{(l,i),e,L / U}\mathbf{x}^{(r)} + \Theta_{j}^{(l,i),e,L / U}$ , where: + +$$ +\left\{ \begin{array}{l l} \boldsymbol{\Omega}_{j,:}^{(l,i),e,L/U} = \alpha_{i,j}^{(l),L/U} \boldsymbol{\Omega}_{j,:}^{(l,i),s,L/U} & \boldsymbol{\Theta}_{j}^{(l,i),e,L/U} = \alpha_{i,j}^{(l),L/U} \boldsymbol{\Theta}_{j}^{(l,i),s,L/U} + \beta_{i,j}^{(l),L/U} \quad \alpha_{i,j}^{(l),L/U} \geq 0, \\ \boldsymbol{\Omega}_{j,:}^{(l,i),e,L/U} = \alpha_{i,j}^{(l),L/U} \boldsymbol{\Omega}_{j,:}^{(l,i),s,U/L} & \boldsymbol{\Theta}_{j}^{(l,i),e,L/U} = \alpha_{i,j}^{(l),L/U} \boldsymbol{\Theta}_{j}^{(l,i),s,U/L} + \beta_{i,j}^{(l),L/U} \quad \alpha_{i,j}^{(l),L/U} < 0. \end{array} \right. +$$ + +By summing up bounds of each $\exp (\mathbf{S}_{i,k}^{(l)})$ , linear bounds can be further propagated to $\sum_{k = 1}^{n}\exp (\mathbf{S}_{i,k}^{(l)})$ . With bounds of $\exp (\mathbf{S}_{i,j}^{(l)})$ and $\sum_{k = 1}^{n}\exp (\mathbf{S}_{i,k}^{(l)})$ ready, we forward propagate the bounds to $\tilde{\mathbf{S}}_{i,j}^{(l)}$ with a division, similarly to bounding $\mathbf{q}_k^{(l,i)}(\mathbf{X})\mathbf{k}_k^{(l,j)}(\mathbf{X})$ . The output of the self-attention $\Phi^{(l,i)}(\mathbf{X})$ is obtained with a summation of $\mathbf{v}^{(l,j)}(\mathbf{X})$ weighted by attention probability $\tilde{\mathbf{S}}_{i,k}^{(l)}$ : $\Phi_j^{(l,i)}(\mathbf{X}) = \sum_{k = 1}^n\tilde{\mathbf{S}}_{i,k}^{(l)}\mathbf{v}_j^{(l,k)}(\mathbf{X})$ , which can be regarded as a dot product of $\tilde{\mathbf{S}}_{i,:}^{(l)}$ and $\tilde{\mathbf{v}}^{(l,j)}(\mathbf{X})$ , where $\tilde{\mathbf{v}}_k^{(l,j)}(\mathbf{X}) = \mathbf{v}_j^{(l,k)}(\mathbf{X})$ , whose bounds can be obtained from those of $\mathbf{v}_j^{(l,k)}(\mathbf{X})$ by transposition.
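The unary relaxation of $\exp$ used above can be instantiated with a tangent-line lower bound and a chord upper bound, both valid by convexity. This is a standard choice sketched under our own tangent-point assumption; the paper's exact parameters are given in its Appendix B.

```python
# Linear relaxation of exp on [l, u]: tangent line below (exp is convex),
# chord above. The tangent point choice here is an illustrative assumption.
import math

def exp_relaxation(l, u):
    t = 0.5 * (l + u)                         # tangent point
    a_l = math.exp(t)
    b_l = math.exp(t) * (1.0 - t)             # tangent: e^t + e^t * (x - t)
    if u - l > 1e-12:
        a_u = (math.exp(u) - math.exp(l)) / (u - l)
        b_u = math.exp(l) - a_u * l           # chord through both endpoints
    else:
        a_u, b_u = 0.0, math.exp(u)
    return a_l, b_l, a_u, b_u
```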
Therefore, bounds of $\tilde{\mathbf{S}}_{i,k}^{(l)}$ and $\tilde{\mathbf{v}}_k^{(l,j)}(\mathbf{X})$ can be forward propagated to $\Phi^{(l,i)}(\mathbf{X})$ similarly to bounding $\mathbf{S}_{i,j}^{(l)}$ . In this way, we obtain the output bounds of the self-attention: + +$$ +\boldsymbol{\Omega}_{j,:}^{(l',i),\Phi,L} \mathbf{x}^{(r)} + \boldsymbol{\Theta}_{j}^{(l',i),\Phi,L} \leq \Phi^{(l',i)}(\mathbf{X}) \leq \boldsymbol{\Omega}_{j,:}^{(l',i),\Phi,U} \mathbf{x}^{(r)} + \boldsymbol{\Theta}_{j}^{(l',i),\Phi,U}. \tag{5} +$$ + +Recall that $\mathbf{x}^{(r)}$ is a concatenation of $\mathbf{x}^{(r_1)},\mathbf{x}^{(r_2)},\dots ,\mathbf{x}^{(r_t)}$ . We can split $\Omega_{j,:}^{(l',i),\Phi,L / U}$ into $t$ vectors with equal dimensions, $\Omega_{j,:}^{(l',i,1),\Phi,L / U},\Omega_{j,:}^{(l',i,2),\Phi,L / U},\dots ,\Omega_{j,:}^{(l',i,t),\Phi,L / U}$ , such that Eq. (5) becomes + +$$ +\sum_{k=1}^{t} \boldsymbol{\Omega}_{j,:}^{(l',i,k),\Phi,L} \mathbf{x}^{(r_{k})} + \boldsymbol{\Theta}_{j}^{(l',i),\Phi,L} \leq \boldsymbol{\Phi}^{(l',i)}(\mathbf{X}) \leq \sum_{k=1}^{t} \boldsymbol{\Omega}_{j,:}^{(l',i,k),\Phi,U} \mathbf{x}^{(r_{k})} + \boldsymbol{\Theta}_{j}^{(l',i),\Phi,U}. \tag{6} +$$ + +**Backward Process to Self-Attention Layers** When computing bounds for a later sub-layer, the $l$ -th sub-layer, using the backward process, we directly propagate the bounds at the closest previous self-attention layer, assumed to be the $l'$ -th layer, to the input layer, and we skip other previous sub-layers. The bounds propagated to the $l'$ -th layer are as in Eq. (2). We substitute $\Phi^{(l',k)}(\mathbf{X})$ with the linear bounds in Eq.
(6): + +$$ +\boldsymbol{\Lambda}_{j,:}^{(l,i,0,r_{k}),L/U} = \sum_{k'=1}^{n} \left(\boldsymbol{\Lambda}_{j,:,+}^{(l,i,l',k'),L/U} \boldsymbol{\Omega}^{(l',k',k),\Phi,L/U} + \boldsymbol{\Lambda}_{j,:,-}^{(l,i,l',k'),L/U} \boldsymbol{\Omega}^{(l',k',k),\Phi,U/L}\right) \quad (1 \leq k \leq t), +$$ + +$$ +\boldsymbol{\Lambda}_{j,:}^{(l,i,0,k),L/U} = \mathbf{0} \quad (\forall k \notin P), +$$ + +$$ +\boldsymbol{\Delta}_{j}^{(l,i,0),L/U} = \boldsymbol{\Delta}_{j}^{(l,i,l'),L/U} + \sum_{k=1}^{n} \boldsymbol{\Lambda}_{j,:,+}^{(l,i,l',k),L/U} \boldsymbol{\Theta}^{(l',k),\Phi,L/U} + \boldsymbol{\Lambda}_{j,:,-}^{(l,i,l',k),L/U} \boldsymbol{\Theta}^{(l',k),\Phi,U/L}. +$$ + +We take global bounds as in Eq. (3) and Eq. (4) to obtain the bounds of the $l$ -th layer. + +**Advantages of Combining the Backward Process with a Forward Process** Introducing a forward process can significantly reduce the complexity of verifying Transformers. With the backward process only, we need to compute $\Lambda^{(l,i,l',k)}$ and $\Delta^{(l,i,l')}$ $(l' \leq l)$ , where the major cost is on $\Lambda^{(l,i,l',k)}$ and there are $O(m^2 n^2)$ such matrices to compute. The $O(n^2)$ factor is due to the dependency between all pairs of positions in the input and output respectively, which makes the algorithm inefficient especially when the input sequence is long. In contrast, the forward process represents the bounds as linear functions of the perturbed positions only, instead of all positions, by computing $\Omega^{(l,i)}$ and $\Theta^{(l,i)}$ . Imperceptible adversarial examples may not have many perturbed positions (Gao et al., 2018; Ko et al., 2019), and thus we may assume that the number of perturbed positions, $t$ , is small.
The major cost is on $\Omega^{(l,i)}$ , while there are only $O(mn)$ such matrices, and the sizes of $\Lambda^{(l,i,l',k)}$ and $\Omega^{(l,i)}$ are relatively comparable for a small $t$ . We combine the backward process and the forward process. The number of matrices $\Omega$ in the forward process is $O(mn)$ , and for the backward process, since we do not propagate bounds over self-attention layers and there is no cross-position dependency in the other sub-layers, we only compute $\Lambda^{(l,i,l',k)}$ such that $i = k$ , and thus the number of matrices $\Lambda$ is reduced to $O(m^2 n)$ . Therefore, the total number of matrices $\Lambda$ and $\Omega$ we compute is $O(m^2 n)$ , which is $O(n)$ times smaller than the $O(m^2 n^2)$ required when only the backward process is used. Moreover, the backward process yields tighter bounds than a solely forward one, as we explain in Appendix D. + +# 4 EXPERIMENTS + +To demonstrate the effectiveness of our algorithm, we compute certified bounds for several sentiment classification models and perform an ablation study to show the advantage of combining the backward and forward processes. We also demonstrate the meaningfulness of our certified bounds with an application to identifying important words. + +# 4.1 DATASETS AND MODELS + +We use two datasets: Yelp (Zhang et al., 2015) and SST (Socher et al., 2013). Yelp consists of 560,000/38,000 examples in the training/test set and SST consists of 67,349/872/1,821 examples in the training/development/test set. Each example is a sentence or a sentence segment (for the training data of SST only) labeled with a binary sentiment polarity. + +We verify the robustness of Transformers trained from scratch. For the main experiments, we consider $N$ -layer models $(N \leq 3)$ with 4 attention heads, hidden sizes of 256 and 512 for self-attention and feed-forward layers respectively, and ReLU activations for feed-forward layers.
We remove the variance-related terms in layer normalization, which makes the verification bounds for Transformers tighter while keeping the clean accuracies comparable (see Appendix E for a discussion). Although our method can in principle be applied to Transformers with any number of layers, we do not use large-scale pre-trained models such as BERT, since they are still too challenging to verify tightly with current techniques.

# 4.2 CERTIFIED BOUNDS
| Dataset | N | Acc. | $\ell_p$ | Upper (Min) | Upper (Avg) | Lower, IBP (Min) | Lower, IBP (Avg) | Lower, Ours (Min) | Lower, Ours (Avg) | Ours vs. Upper (Min) | Ours vs. Upper (Avg) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Yelp | 1 | 91.5 | $\ell_1$ | 9.085 | 13.917 | 1.4E-4 | 3.1E-4 | 1.423 | 1.809 | 16% | 13% |
| Yelp | 1 | 91.5 | $\ell_2$ | 0.695 | 1.005 | 1.4E-4 | 3.1E-4 | 0.384 | 0.483 | 55% | 48% |
| Yelp | 1 | 91.5 | $\ell_\infty$ | 0.117 | 0.155 | 1.4E-4 | 3.1E-4 | 0.034 | 0.043 | 29% | 27% |
| Yelp | 2 | 91.5 | $\ell_1$ | 10.228 | 15.452 | 1.4E-7 | 2.2E-7 | 0.389 | 0.512 | 4% | 3% |
| Yelp | 2 | 91.5 | $\ell_2$ | 0.773 | 1.103 | 1.4E-7 | 2.2E-7 | 0.116 | 0.149 | 15% | 14% |
| Yelp | 2 | 91.5 | $\ell_\infty$ | 0.122 | 0.161 | 1.4E-7 | 2.2E-7 | 0.010 | 0.013 | 9% | 8% |
| Yelp | 3 | 91.6 | $\ell_1$ | 11.137 | 15.041 | 4.3E-10 | 7.1E-10 | 0.152 | 0.284 | 1% | 2% |
| Yelp | 3 | 91.6 | $\ell_2$ | 0.826 | 1.090 | 4.3E-10 | 7.1E-10 | 0.042 | 0.072 | 5% | 7% |
| Yelp | 3 | 91.6 | $\ell_\infty$ | 0.136 | 0.187 | 4.3E-10 | 7.1E-10 | 0.004 | 0.006 | 3% | 3% |
| SST | 1 | 83.2 | $\ell_1$ | 7.418 | 8.849 | 2.4E-4 | 2.7E-4 | 2.503 | 2.689 | 34% | 30% |
| SST | 1 | 83.2 | $\ell_2$ | 0.560 | 0.658 | 2.4E-4 | 2.7E-4 | 0.418 | 0.454 | 75% | 69% |
| SST | 1 | 83.2 | $\ell_\infty$ | 0.091 | 0.111 | 2.4E-4 | 2.7E-4 | 0.033 | 0.036 | 36% | 32% |
| SST | 2 | 83.5 | $\ell_1$ | 6.781 | 8.367 | 3.6E-7 | 3.8E-7 | 1.919 | 1.969 | 28% | 24% |
| SST | 2 | 83.5 | $\ell_2$ | 0.520 | 0.628 | 3.6E-7 | 3.8E-7 | 0.305 | 0.315 | 59% | 50% |
| SST | 2 | 83.5 | $\ell_\infty$ | 0.085 | 0.105 | 3.6E-7 | 3.8E-7 | 0.024 | 0.024 | 28% | 23% |
| SST | 3 | 83.9 | $\ell_1$ | 6.475 | 7.877 | 5.7E-10 | 6.7E-10 | 1.007 | 1.031 | 16% | 13% |
| SST | 3 | 83.9 | $\ell_2$ | 0.497 | 0.590 | 5.7E-10 | 6.7E-10 | 0.169 | 0.173 | 34% | 29% |
| SST | 3 | 83.9 | $\ell_\infty$ | 0.084 | 0.101 | 5.7E-10 | 6.7E-10 | 0.013 | 0.014 | 16% | 13% |
Table 1: Clean accuracies and computed bounds for 1-position perturbation. Bounds include upper bounds (obtained by an enumeration-based method) and certified lower bounds by IBP and by our method respectively. We also report the gap between the upper bounds and our lower bounds (expressed as the percentage of the lower bound relative to the upper bound). We compute bounds for each possible choice of perturbed positions and report the minimum ("Min") and average ("Avg") over these choices.
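The per-example min/avg aggregation described in the caption can be sketched in a few lines; here `bound_fn` is a hypothetical stand-in for the actual certified-bound computation on one choice of perturbed positions:

```python
from itertools import combinations

def aggregate_bounds(bound_fn, n, t):
    """Evaluate the bound for every choice of t perturbed positions out of
    n (the C(n, t) options), then report the minimum and the average."""
    vals = [bound_fn(pos) for pos in combinations(range(n), t)]
    return min(vals), sum(vals) / len(vals)

# Toy stand-in oracle: the bound depends only on the first perturbed position.
mn, avg = aggregate_bounds(lambda pos: 1.0 + pos[0], n=4, t=2)
```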
| N | Yelp, IBP (Min) | Yelp, IBP (Avg) | Yelp, Ours (Min) | Yelp, Ours (Avg) | SST, IBP (Min) | SST, IBP (Avg) | SST, Ours (Min) | SST, Ours (Avg) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 6.5E-5 | 1.2E-4 | 0.242 | 0.290 | 1.1E-4 | 1.1E-4 | 0.212 | 0.229 |
| 2 | 6.2E-8 | 8.6E-8 | 0.060 | 0.078 | 1.5E-7 | 1.5E-7 | 0.145 | 0.149 |
| 3 | 2.8E-10 | 4.4E-10 | 0.023 | 0.035 | 3.3E-10 | 4.5E-10 | 0.081 | 0.083 |
Table 2: Bounds by IBP and our method for 2-position perturbation constrained by $\ell_2$-norm.

We compute certified lower bounds for different models on different datasets. We include 1-position perturbation constrained by $\ell_1/\ell_2/\ell_\infty$-norms and 2-position perturbation constrained by the $\ell_2$-norm. We compare our lower bounds with those computed by the Interval Bound Propagation (IBP) (Gowal et al., 2018) baseline. For 1-position perturbation, we also compare with upper bounds computed by enumerating all the words in the vocabulary and finding the word closest to the original one whose substitution alters the predicted label. The complexity of this enumeration grows exponentially with the number of perturbed positions (it examines $O(|V|^t)$ substitution combinations for a vocabulary $V$), so it can hardly be extended to perturbations on 2 or more positions; we therefore do not report upper bounds for 2-position perturbation. For each example, we enumerate the possible choices of perturbed positions (there are $\binom{n}{t}$ of them) and integrate the results from different choices by taking the minimum or average respectively. We report the average results on 10 correctly classified random test examples, with sentence lengths of at most 32 for 1-position perturbation and at most 16 for 2-position perturbation. Table 1 and Table 2 present the results for 1-position and 2-position perturbation respectively. Our certified lower bounds are significantly larger, and thus tighter, than those by IBP. For 1-position perturbation, the lower bounds are consistently smaller than the upper bounds, and the gap between the upper bounds and our lower bounds is reasonable compared with that in previous work on verification of feed-forward networks; e.g., in (Weng et al., 2018; Zhang et al., 2018) the upper bounds are on the order of 10 times larger than the lower bounds. This demonstrates that our proposed method computes robustness bounds for Transformers of a quality similar to the bounds obtained for simpler neural networks.
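Concretely, a certified radius of this kind can be extracted from any verifier that, for a given budget, checks whether the lower bound on the margin between the gold label and the other labels stays positive; the sketch below (with a made-up `margin_lb` oracle standing in for the actual bound computation) binary-searches the largest certified budget:

```python
def certified_radius(margin_lb, eps_hi=16.0, iters=20):
    """Binary search for the largest eps such that the verifier's lower
    bound on the true-label margin is still positive, i.e. robustness
    within the eps-ball is certified."""
    lo, hi = 0.0, eps_hi
    for _ in range(iters):
        mid = (lo + hi) / 2
        if margin_lb(mid) > 0:   # certified at mid: try a larger budget
            lo = mid
        else:                    # not certified: shrink the budget
            hi = mid
    return lo

# Toy oracle whose margin bound is 1 - eps / 2, so the true radius is 2.0.
radius = certified_radius(lambda eps: 1.0 - eps / 2)
```

Because the margin lower bound decreases monotonically with the budget, the binary search converges to the boundary of the certified region from below.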
+ +# 4.3 EFFECTIVENESS OF COMBINING THE BACKWARD PROCESS WITH A FORWARD PROCESS + +
| Dataset | Acc. | $\ell_p$ | Fully-Forward (Min) | Fully-Forward (Avg) | Fully-Forward (Time) | Fully-Backward (Min) | Fully-Backward (Avg) | Fully-Backward (Time) | Backward & Forward (Min) | Backward & Forward (Avg) | Backward & Forward (Time) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Yelp | 91.3 | $\ell_1$ | 2.122 | 2.173 | 12.6 | 3.485 | 3.737 | 141.4 | 3.479 | 3.729 | 24.0 |
| Yelp | 91.3 | $\ell_2$ | 0.576 | 0.599 | 12.4 | 0.867 | 0.947 | 140.4 | 0.866 | 0.946 | 26.0 |
| Yelp | 91.3 | $\ell_\infty$ | 0.081 | 0.084 | 12.6 | 0.123 | 0.136 | 143.9 | 0.123 | 0.136 | 26.4 |
| SST | 83.3 | $\ell_1$ | 1.545 | 1.592 | 13.7 | 1.891 | 1.961 | 177.6 | 1.891 | 1.961 | 26.5 |
| SST | 83.3 | $\ell_2$ | 0.352 | 0.366 | 12.6 | 0.419 | 0.439 | 178.8 | 0.419 | 0.439 | 24.3 |
| SST | 83.3 | $\ell_\infty$ | 0.048 | 0.050 | 14.6 | 0.058 | 0.061 | 181.3 | 0.058 | 0.061 | 24.3 |
Table 3: Comparison of certified lower bounds and computation time (sec) by different methods.

In the following, we show the effectiveness of combining the backward process with a forward process. We compare our proposed method (Backward & Forward) with two variations: 1) Fully-Forward propagates bounds in a forward manner for all sub-layers, not only the self-attention layers; 2) Fully-Backward computes bounds for all sub-layers, including self-attention layers, using backward bound propagation and no forward process. We compare the tightness of the bounds and the computation time of the three methods. We use smaller models with the hidden sizes reduced by $75\%$, and we use 1-position perturbation only, to accommodate the large computational cost of Fully-Backward. Experiments are conducted on an NVIDIA Titan X GPU. Table 3 presents the results. Bounds by Fully-Forward are significantly looser, while those by Fully-Backward and Backward & Forward are comparable. Meanwhile, the computation time of Backward & Forward is significantly shorter than that of Fully-Backward. This demonstrates that our method of combining the backward and forward processes can compute comparably tight bounds much more efficiently.

# 4.4 IDENTIFYING WORDS IMPORTANT TO PREDICTION

The certified lower bounds reflect how sensitive a model is to the perturbation of each input word. Intuitively, if a word is more important to the prediction, the model is more sensitive to its perturbation. Therefore, the certified lower bounds can be used to identify important words. In the following, we conduct an experiment to verify whether important words can be identified by our certified lower bounds. We use a 1-layer Transformer classifier under 1-position perturbation constrained by the $\ell_2$-norm. When identifying the most/least important words, each certified lower bound is normalized by the norm of the corresponding unperturbed word embedding.
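The word-ranking step just described can be sketched as follows (the variable names and toy values are ours; in practice the certified bounds would come from the verifier):

```python
import numpy as np

def rank_by_certified_bound(words, bounds, embeddings):
    """Normalize each word's certified lower bound by the norm of its
    unperturbed embedding; a smaller normalized bound means the model is
    more sensitive to that word, i.e. the word is more important."""
    scores = np.asarray(bounds) / np.linalg.norm(embeddings, axis=1)
    order = np.argsort(scores)            # ascending: most important first
    return [words[i] for i in order]

# Toy example: "terrible" has the smallest normalized bound.
words = ["the", "food", "was", "terrible"]
bounds = [0.90, 0.35, 0.95, 0.10]
emb = np.ones((4, 8))                     # made-up unit-scale embeddings
ranking = rank_by_certified_bound(words, bounds, emb)
```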
We compare our method with two baselines that also estimate local vulnerability: 1) Upper uses the upper bounds; 2) Gradient treats the word whose embedding gradient has the largest $\ell_2$-norm as the most important, and vice versa.
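A minimal sketch of the Gradient baseline follows (the gradient values here are made up; in practice they would be back-propagated from the model output to the word embeddings):

```python
import numpy as np

def rank_by_gradient_norm(words, grads):
    """Gradient baseline: score each word by the l2-norm of the gradient of
    the model output w.r.t. its embedding; a larger norm is taken to mean
    a more important word."""
    norms = np.linalg.norm(grads, axis=1)
    order = np.argsort(-norms)        # descending by gradient norm
    return [words[i] for i in order]

words = ["the", "food", "was", "terrible"]
grads = np.array([[0.1, 0.0, 0.1],    # made-up per-embedding gradients
                  [0.5, 0.2, 0.1],
                  [0.0, 0.1, 0.0],
                  [1.2, 0.8, 0.3]])
ranking = rank_by_gradient_norm(words, grads)
```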
| Method | Importance Score (on SST) | Words Identified from 10 Examples on the Yelp Dataset (split by “/”) |
| --- | --- | --- |
| **Most Important Words or Symbols** | | |
| Grad | 0.47 | terrible/great/diner/best/best/food/service/food/perfect/best |
| Upper | 0.45 | terrible/we/.best/best/and/slow/great,this/best |
| Ours | 0.57 | terrible/great/diner/best/good/slow/great/perfect/best |
| **Least Important Words or Symbols** | | |
| Grad | 0.40 | ./decadent/../had/and/place/../!./ |
| Upper | 0.24 | ././typical/boba/i/dark/star/atmosphere&/boba |
| Ours | 0.01 | ./../the/../food/../the |
Table 4: Average importance scores of the most/least important words identified from 100 examples on SST by different methods. For the most important words identified, larger importance scores are better, and vice versa. Additionally, we show the most/least important words identified from 10 examples on the Yelp dataset. Boldfaced words are considered to have strong sentiment polarities, and they should appear as most important words rather than least important ones.

Quantitative Analysis on SST SST contains sentiment labels for all phrases on parse trees, where the labels range from very negative (0) to very positive (4), with 2 being neutral. For each word with label $x$, we take $|x - 2|$, i.e., the distance to the neutral label, as its importance score, since less neutral words tend to be more important for the sentiment polarity of the sentence. We evaluate on 100 random test input sentences and compute the average importance scores of the most and least important words identified in each example. In Table 4, compared to the baselines ("Upper" and "Grad"), the average importance score of the most important words identified by our lower bounds is the largest, while the least important words identified by our method have the smallest average score. This demonstrates that our method identifies the most and least important words more accurately than the baseline methods.

Qualitative Analysis on Yelp We further analyze the results on a larger dataset, Yelp. Since Yelp does not provide per-word sentiment labels, importance scores cannot be computed as on SST, so we present a qualitative analysis instead. We use 10 random test examples and collect the words identified as the most and least important word in each example.
In Table 4, most words identified as the most important by our certified lower bounds are exactly the words reflecting sentiment polarities (boldfaced words), while those identified as the least important are mostly stopwords. Baseline methods mistakenly identify more words carrying no sentiment polarity as the most important. This again demonstrates that our certified lower bounds identify word importance better than the baselines, and that our bounds provide meaningful interpretations in practice. While gradients evaluate the sensitivity of each input word, this evaluation holds only within a very small neighborhood of the input sentence (where the classifier can be approximated by a first-order Taylor expansion). Our certified method gives valid lower bounds that hold within the large neighborhood specified by the perturbation set $S$, and thus it provides more accurate results.

# 5 CONCLUSION

We propose the first robustness verification method for Transformers and tackle the key challenges in verifying Transformers, including cross-nonlinearity and cross-position dependency. Our method computes certified lower bounds that are significantly tighter than those by IBP. Quantitative and qualitative analyses further show that our bounds are meaningful and can reflect the importance of different words in sentiment analysis.

# ACKNOWLEDGEMENT

This work is jointly supported by the Tsinghua Scholarship for Undergraduate Overseas Studies, NSF IIS1719097 and IIS1927554, and NSFC key project No. 61936010 and regular project No. 61876096.

# REFERENCES

Michael E Akintunde, Andreea Kevorchian, Alessio Lomuscio, and Edoardo Pirovano. Verification of RNN-based neural agent-environment systems. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI-19). Honolulu, HI, USA. AAAI Press, 2019.
Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang.
Generating natural language adversarial examples. In EMNLP, pp. 2890-2896, 2018.
Rudy R Bunel, Ilker Turkaslan, Philip Torr, Pushmeet Kohli, and Pawan K Mudigonda. A unified view of piecewise linear neural network verification. In Advances in Neural Information Processing Systems, pp. 4790-4799, 2018.
Minhao Cheng, Jinfeng Yi, Huan Zhang, Pin-Yu Chen, and Cho-Jui Hsieh. Seq2Sick: Evaluating the robustness of sequence-to-sequence models with adversarial examples. arXiv preprint arXiv:1803.01128, 2018.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171-4186, 2019.
Souradeep Dutta, Susmit Jha, Sriram Sankaranarayanan, and Ashish Tiwari. Output range analysis for deep feedforward neural networks. In NASA Formal Methods Symposium, pp. 121-138. Springer, 2018.
Krishnamurthy Dvijotham, Robert Stanforth, Sven Gowal, Timothy Mann, and Pushmeet Kohli. A dual approach to scalable verification of deep networks. UAI, 2018.
Krishnamurthy Dj Dvijotham, Robert Stanforth, Sven Gowal, Chongli Qin, Soham De, and Pushmeet Kohli. Efficient neural network verification with exactness characterization. UAI, 2019.
Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. HotFlip: White-box adversarial examples for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 31-36, 2018.
Ruediger Ehlers. Formal verification of piece-wise linear feed-forward neural networks. In International Symposium on Automated Technology for Verification and Analysis, pp. 269-286. Springer, 2017.
Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. Black-box generation of adversarial text sequences to evade deep learning classifiers.
In 2018 IEEE Security and Privacy Workshops (SPW), pp. 50-56. IEEE, 2018. +Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, and Martin Vechev. Ai2: Safety and robustness certification of neural networks with abstract interpretation. In 2018 IEEE Symposium on Security and Privacy (SP), pp. 3-18. IEEE, 2018. +Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014. +Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan Uesato, Timothy Mann, and Pushmeet Kohli. On the effectiveness of interval bound propagation for training verifiably robust models. arXiv preprint arXiv:1810.12715, 2018. +Matthias Hein and Maksym Andriushchenko. Formal guarantees on the robustness of a classifier against adversarial manipulation. In Advances in Neural Information Processing Systems (NIPS), pp. 2266-2276, 2017. +Yu-Lun Hsieh, Minhao Cheng, Da-Cheng Juan, Wei Wei, Wen-Lian Hsu, and Cho-Jui Hsieh. On the robustness of self-attentive models. In ACL, pp. 1520–1529, 2019. +Po-Sen Huang, Robert Stanforth, Johannes Welbl, Chris Dyer, Dani Yogatama, Sven Gowal, Krishnamurthy Dvijotham, and Pushmeet Kohli. Achieving verified robustness to symbol substitutions via interval bound propagation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 4074-4084, 2019. + +Robin Jia and Percy Liang. Adversarial examples for evaluating reading comprehension systems. arXiv preprint arXiv:1707.07328, 2017. +Robin Jia, Aditi Raghunathan, Kerem Göksel, and Percy Liang. Certified robustness to adversarial word substitutions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 4120-4133, 2019. 
Kyle D Julian, Shivam Sharma, Jean-Baptiste Jeannin, and Mykel J Kochenderfer. Verifying aircraft collision avoidance neural networks through linear approximations of safe regions. arXiv preprint arXiv:1903.00762, 2019.
Wang-Cheng Kang and Julian McAuley. Self-attentive sequential recommendation. In 2018 IEEE International Conference on Data Mining (ICDM), pp. 197-206. IEEE, 2018.
Guy Katz, Clark Barrett, David L Dill, Kyle Julian, and Mykel J Kochenderfer. Reluplex: An efficient SMT solver for verifying deep neural networks. In International Conference on Computer Aided Verification, pp. 97-117. Springer, 2017.
Ching-Yun Ko, Zhaoyang Lyu, Lily Weng, Luca Daniel, Ngai Wong, and Dahua Lin. POPQORN: Quantifying robustness of recurrent neural networks. In International Conference on Machine Learning, pp. 3468-3477, 2019.
Jia Li, Yu Rong, Hong Cheng, Helen Meng, Wenbing Huang, and Junzhou Huang. Semi-supervised graph classification: A hierarchical graph perspective. In The World Wide Web Conference, pp. 972-982. ACM, 2019a.
Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. VisualBERT: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557, 2019b.
Xuankang Lin, He Zhu, Roopsha Samanta, and Suresh Jagannathan. ART: Abstraction refinement-guided training for provably correct neural networks. arXiv preprint arXiv:1907.10662, 2019.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
Matthew Mirman, Timon Gehr, and Martin Vechev. Differentiable abstract interpretation for provably robust neural networks. In International Conference on Machine Learning, pp. 3575-3583, 2018.
Nicolas Papernot, Patrick McDaniel, Ananthram Swami, and Richard Harang. Crafting adversarial input sequences for recurrent neural networks.
In MILCOM 2016-2016 IEEE Military Communications Conference, pp. 49-54. IEEE, 2016. +Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. Image transformer. arXiv preprint arXiv:1802.05751, 2018. +Chongli Qin, Brendan O'Donoghue, Rudy Bunel, Robert Stanforth, Sven Gowal, Jonathan Uesato, Grzegorz Swirszcz, Pushmeet Kohli, et al. Verification of non-linear specifications for neural networks. arXiv preprint arXiv:1902.09592, 2019. +Aditi Raghunathan, Jacob Steinhardt, and Percy S Liang. Semidefinite relaxations for certifying robustness to adversarial examples. In Advances in Neural Information Processing Systems, pp. 10877-10887, 2018. +Hadi Salman, Greg Yang, Huan Zhang, Cho-Jui Hsieh, and Pengchuan Zhang. A convex relaxation barrier to tight robustness verification of neural networks. In Advances in Neural Information Processing Systems 32, pp. 9832-9842. 2019. +Zhouxing Shi, Ting Yao, Jingfang Xu, and Minlie Huang. Robustness to modification with shared words in paraphrase identification. arXiv preprint arXiv:1909.02560, 2019. + +Andy Shih, Arthur Choi, and Adnan Darwiche. A symbolic approach to explaining bayesian network classifiers. In *IJCAI*, 2018. +Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus Puschel, and Martin Vechev. Fast and effective robustness certification. In Advances in Neural Information Processing Systems, pp. 10825-10836, 2018. +Gagandeep Singh, Timon Gehr, Markus Puschel, and Martin Vechev. An abstract domain for certifying neural networks. Proceedings of the ACM on Programming Languages, 3(POPL):41, 2019. +Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pp. 1631-1642, 2013. +Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 
VL-BERT: Pre-training of generic visual-linguistic representations. arXiv preprint arXiv:1908.08530, 2019.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
Vincent Tjeng, Kai Xiao, and Russ Tedrake. Evaluating robustness of neural networks with mixed integer programming. *ICLR*, 2019.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998-6008, 2017.
Qinglong Wang, Kaixuan Zhang, Xue Liu, and C Lee Giles. Verification of recurrent neural networks through rule extraction. arXiv preprint arXiv:1811.06029, 2018a.
Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, and Suman Jana. Efficient formal safety analysis of neural networks. In Advances in Neural Information Processing Systems, pp. 6367-6377, 2018b.
Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, and Suman Jana. Formal security analysis of neural networks using symbolic intervals. In 27th USENIX Security Symposium (USENIX Security 18), pp. 1599-1614, 2018c.
Tsui-Wei Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Luca Daniel, Duane Boning, and Inderjit Dhillon. Towards fast computation of certified robustness for ReLU networks. In International Conference on Machine Learning, pp. 5273-5282, 2018.
Eric Wong and Zico Kolter. Provable defenses against adversarial examples via the convex outer adversarial polytope. In International Conference on Machine Learning, pp. 5283-5292, 2018.
Yichen Yang and Martin Rinard. Correctness verification of neural networks. arXiv preprint arXiv:1906.01030, 2019.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. XLNet: Generalized autoregressive pretraining for language understanding.
arXiv preprint arXiv:1906.08237, 2019.
Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, and Luca Daniel. Efficient neural network robustness certification with general activation functions. In Advances in neural information processing systems, pp. 4939-4948, 2018.
Huan Zhang, Pengchuan Zhang, and Cho-Jui Hsieh. RecurJac: An efficient recursive algorithm for bounding the Jacobian matrix of neural networks and its applications. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 5757-5764, 2019.
Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In Advances in neural information processing systems, pp. 649-657, 2015.
Zhengli Zhao, Dheeru Dua, and Sameer Singh. Generating natural adversarial examples. arXiv preprint arXiv:1710.11342, 2017.
Shiyu Zhou, Linhao Dong, Shuang Xu, and Bo Xu. Syllable-based sequence-to-sequence speech recognition with the transformer in Mandarin Chinese. arXiv preprint arXiv:1804.10752, 2018.

# A ILLUSTRATION OF DIFFERENT BOUNDING PROCESSES

![](images/ff8eff7d0a604b5923df30630766909978dd13e639f7c3651d4c51d09b2b8a91.jpg)
(a) Fully-Forward

![](images/0e8a837db860f682f8bcc17798908d04d09d0bd0f21ab76255ba4875b249c2f1.jpg)
(b) Fully-Backward

![](images/a861880435f46adf5a1b362937bc41075e6c6ded898c8d1da8690b5166f04620.jpg)
(c) Backward & Forward

Figure 1: Illustration of three different bounding processes: Fully-Forward (a), Fully-Backward (b), and Backward & Forward (c). We show an example of a 2-layer Transformer, where operations can be divided into two kinds of blocks, "Feed-forward" and "Self-attention". "Self-attention" contains the operations in the self-attention mechanism, starting from queries, keys, and values, and "Feed-forward" contains all the other operations, including linear transformations and unary nonlinear functions. Arrows with solid lines indicate the propagation of linear bounds in a forward manner.
Each backward arrow $A_{k} \rightarrow B_{k}$ with a dashed line, for blocks $A_{k}, B_{k}$, indicates that there is a backward bound propagation to block $B_{k}$ when computing bounds for block $A_{k}$. Blocks drawn as blue rectangles have forward processes inside, while those drawn as green rounded rectangles have backward processes inside.

Figure 1 illustrates a comparison of the Fully-Forward, Fully-Backward, and Backward & Forward processes, using a 2-layer Transformer as an example. In Fully-Forward, there are only forward processes connecting adjacent layers and blocks. In Fully-Backward, there are only backward processes, and each layer needs a backward bound propagation to all the previous layers. In our Backward & Forward algorithm, we use backward processes for the feed-forward parts and forward processes for the self-attention layers; layers after a self-attention layer no longer need backward bound propagation to the layers before it. In this way, we resolve the cross-position dependency in verifying Transformers while keeping the bounds comparably tight to those from fully backward processes. An empirical comparison of the three frameworks is presented in Sec. 4.3.

# B LINEAR BOUNDS OF UNARY NONLINEAR FUNCTIONS

We show in Sec. 3.2 that linear bounds can be propagated over unary nonlinear functions as long as those functions can themselves be bounded by linear functions. Such bounds are determined for each neuron respectively, according to the bounds on the input of the function.
Specifically, for a unary nonlinear function $\sigma(x)$, with the bounds of $x$ obtained previously as $x \in [l, u]$, we aim to derive a linear lower bound $\alpha^L x + \beta^L$ and a linear upper bound $\alpha^U x + \beta^U$, such that

$$
\alpha^{L} x + \beta^{L} \leq \sigma(x) \leq \alpha^{U} x + \beta^{U} \quad (\forall x \in [l, u]),
$$

where the parameters $\alpha^L, \beta^L, \alpha^U, \beta^U$ depend on $l, u$ and are designed for each function $\sigma(x)$ respectively. We introduce how the parameters are determined for the different unary nonlinear functions involved in Transformers, such that the linear bounds are valid and as tight as possible. Bounds of ReLU and tanh have been discussed by Zhang et al. (2018), and we further derive bounds of $e^x$, $\frac{1}{x}$, $x^2$, $\sqrt{x}$. $x^2$ and $\sqrt{x}$ are only used when layer normalization is not modified, in experiments that study the impact of our modification. In the following, we define the endpoints of the function to be bounded on $[l, u]$ as $(l, \sigma(l))$ and $(u, \sigma(u))$. We describe how the lines corresponding to the linear bounds of the different functions are determined; the parameters $\alpha^L, \beta^L, \alpha^U, \beta^U$ then follow accordingly.

ReLU For the ReLU activation, $\sigma(x) = \max(x, 0)$. ReLU is inherently linear on the segments $(-\infty, 0]$ and $[0, \infty)$ respectively, so we make the linear bounds exactly $\sigma(x)$ for $u \leq 0$ or $l \geq 0$. For $l < 0 < u$, we take the line passing the two endpoints as the upper bound, and we take $\sigma^L(x) = 0$ when $u < |l|$ and $\sigma^L(x) = x$ when $u \geq |l|$ as the lower bound, to minimize the gap between the lower bound and the original function.

Tanh For the tanh activation, $\sigma(x) = \frac{1 - e^{-2x}}{1 + e^{-2x}}$.
tanh is concave for $l \geq 0$, and thus we take the line passing the two endpoints as the lower bound and take a tangent line passing $((l + u)/2, \sigma((l + u)/2))$ as the upper bound. For $u \leq 0$, tanh is convex, and thus we take the line passing the two endpoints as the upper bound and take a tangent line passing $((l + u)/2, \sigma((l + u)/2))$ as the lower bound. For $l < 0 < u$, we take a tangent line passing the right endpoint and $(d^{L}, \sigma(d^{L}))$ $(d^{L} \leq 0)$ as the lower bound, and take a tangent line passing the left endpoint and $(d^{U}, \sigma(d^{U}))$ $(d^{U} \geq 0)$ as the upper bound. $d^L$ and $d^U$ can be found with a binary search.

Exp $\sigma(x) = \exp(x) = e^{x}$ is convex, and thus we take the line passing the two endpoints as the upper bound and take a tangent line passing $(d, \sigma(d))$ as the lower bound. Preferably, we take $d = (l + u)/2$. However, $e^x$ is always positive and is used in the softmax for computing normalized attention probabilities in self-attention layers, i.e., $\exp(\mathbf{S}_{i,j}^{(l)})$ and $\sum_{k=1}^{n} \exp(\mathbf{S}_{i,k}^{(l)})$. $\sum_{k=1}^{n} \exp(\mathbf{S}_{i,k}^{(l)})$ appears in the denominator of the softmax, and to keep the reciprocal function $\frac{1}{x}$ finitely bounded, the range of $x$ must not cross 0. Therefore, we impose a constraint to force the lower bound function to be always positive, i.e., $\sigma^L(l) > 0$; this suffices since $\sigma^L(x)$ is monotonically increasing. $\sigma_d^L(x) = e^d(x - d) + e^d$ is the tangent line passing $(d, \sigma(d))$, so the constraint $\sigma_d^L(l) > 0$ yields $d < l + 1$. Hence we take $d = \min((l + u)/2, l + 1 - \Delta_d)$, where $\Delta_d$ is a small positive value, such as $\Delta_d = 10^{-2}$, ensuring that $d < l + 1$.

Reciprocal For the reciprocal function, $\sigma(x) = \frac{1}{x}$. It is used in the softmax and layer normalization, and its input is limited to have $l > 0$ by the lower bounds of $\exp(x)$ and $\sqrt{x}$.
With $l > 0$, $\sigma(x)$ is convex. Therefore, we take the line passing the two endpoints as the upper bound, and we take the tangent line passing $((l + u)/2, \sigma((l + u)/2))$ as the lower bound.

Square For the square function, $\sigma(x) = x^2$. It is convex, and we take the line passing the two endpoints as the upper bound and a tangent line passing $(d, \sigma(d))$ $(d \in [l, u])$ as the lower bound. We still prefer to take $d = (l + u)/2$. $x^2$ appears in the variance term of layer normalization and is later passed to a square root function to compute a standard deviation. To make the input to the square root function valid, i.e., non-negative, we impose the constraint $\sigma^L(x) \geq 0$ $(\forall x \in [l, u])$. $\sigma_d^L(x) = 2d(x - d) + d^2$ is the tangent line passing $(d, \sigma(d))$. For $u \leq 0$, $x^2$ is monotonically decreasing, so the constraint is equivalent to $\sigma^L(u) = 2du - d^2 \geq 0$; with $d \leq 0$, this gives $d \geq 2u$, so we take $d = \max((l + u)/2, 2u)$. For $l \geq 0$, $x^2$ is monotonically increasing, so the constraint is equivalent to $\sigma^L(l) = 2dl - d^2 \geq 0$; with $d \geq 0$, this gives $d \leq 2l$, so we take $d = \min((l + u)/2, 2l)$. And for $l < 0 < u$, since $\sigma_d^L(0) = -d^2$ is negative for $d \neq 0$ while $d = 0$ yields a valid lower bound, we take $d = 0$.

Square root For the square root function, $\sigma(x) = \sqrt{x}$. It is used to compute a standard deviation in layer normalization, and its input is limited to be non-negative by the lower bounds of $x^2$, and thus $l \geq 0$. $\sigma(x)$ is concave, and thus we take the line passing the two endpoints as the lower bound and the tangent line passing $((l + u)/2, \sigma((l + u)/2))$ as the upper bound.

# C LINEAR BOUNDS OF MULTIPLICATIONS AND DIVISIONS

We provide a mathematical proof of the optimal parameters for the linear bounds of multiplications used in Sec. 3.3.
We also show that linear bounds of division can be indirectly obtained from bounds of multiplications and the reciprocal function. + +For each multiplication, we aim to bound $z = xy$ with two linear bounding planes $z^L = \alpha^L x + \beta^L y + \gamma^L$ and $z^U = \alpha^U x + \beta^U y + \gamma^U$ , where $x$ and $y$ are both variables and $x \in [l_x, u_x]$ , $y \in [l_y, u_y]$ are concrete bounds of $x, y$ obtained from previous layers, such that: + +$$ +z ^ {L} = \alpha^ {L} x + \beta^ {L} y + \gamma^ {L} \leq z = x y \leq z ^ {U} = \alpha^ {U} x + \beta^ {U} y + \gamma^ {U} \forall (x, y) \in [ l _ {x}, u _ {x} ] \times [ l _ {y}, u _ {y} ]. +$$ + +Our goal is to determine optimal parameters of bounding planes, i.e., $\alpha^L, \beta^L, \gamma^L, \alpha^U, \beta^U, \gamma^U$ , such that the bounds are as tight as possible. + +# C.1 LOWER BOUND OF MULTIPLICATIONS + +We define a difference function $F^L(x, y)$ which is the difference between the original function $z = xy$ and the lower bound $z^L = \alpha^L x + \beta^L y + \gamma^L$ : + +$$ +F ^ {L} (x, y) = x y - (\alpha^ {L} x + \beta^ {L} y + \gamma^ {L}). +$$ + +To make the bound as tight as possible, we aim to minimize the integral of the difference function $F^{L}(x,y)$ on our concerned area $(x,y)\in [l_x,u_x]\times [l_y,u_y]$ , which is equivalent to maximizing + +$$ +V ^ {L} = \int_ {x \in \left[ l _ {x}, u _ {x} \right]} \int_ {y \in \left[ l _ {y}, u _ {y} \right]} \alpha^ {L} x + \beta^ {L} y + \gamma^ {L}, \tag {7} +$$ + +while $F^L(x, y) \geq 0$ ( $\forall (x, y) \in [l_x, u_x] \times [l_y, u_y]$ ). For an optimal bounding plane, there must exist a point $(x_0, y_0) \in [l_x, u_x] \times [l_y, u_y]$ such that $F^L(x_0, y_0) = 0$ (otherwise we can validly increase $\gamma^L$ to make $V^L$ larger). To ensure that $F^L(x, y) \geq 0$ within the concerned area, we need to ensure that the minimum value of $F^L(x, y)$ is non-negative. 
We show that we only need to check cases when $(x, y)$ is any of $(l_x, l_y), (l_x, u_y), (u_x, l_y), (u_x, u_y)$ , i.e., points at the corner of the considered area. The partial derivatives of $F^L$ are:

$$
\frac {\partial F ^ {L}}{\partial x} = y - \alpha^ {L},
$$

$$
\frac {\partial F ^ {L}}{\partial y} = x - \beta^ {L}.
$$

If there is $(x_{1},y_{1})\in (l_{x},u_{x})\times (l_{y},u_{y})$ such that $F^{L}(x_{1},y_{1})\leq F^L(x,y)$ $(\forall (x,y)\in [l_x,u_x]\times [l_y,u_y])$ , then $\frac{\partial F^L}{\partial x} (x_1,y_1) = \frac{\partial F^L}{\partial y} (x_1,y_1) = 0$ should hold true, which gives $\alpha^L = y_1$ and $\beta^L = x_1$ . It follows that $F^{L}(x,y) - F^{L}(x_{1},y_{1}) = (x - x_1)(y - y_1)$ , so $(x_1,y_1)$ is a saddle point rather than a minimum: taking $(x,y) = (l_x, u_y)$ yields $F^{L}(l_{x},u_{y}) - F^{L}(x_{1},y_{1}) = (l_x - x_1)(u_y - y_1) < 0$ , and thus $(x_{1},y_{1})$ cannot be the point with the minimum value of $F^{L}(x,y)$ . On the other hand, if there is $(x_{1},y_{1})$ $(x_{1} = l_{x},y_{1}\in (l_{y},u_{y}))$ , i.e., on one border of the concerned area but not on any corner, $\frac{\partial F^L}{\partial y} (x_1,y_1) = 0$ should hold true. Since $\frac{\partial F^L}{\partial y} = x - \beta^L$ does not depend on $y$ , $\frac{\partial F^L}{\partial y} (x_1,y) = 0$ $(\forall y \in [l_y, u_y])$ , and thus $F^{L}(x_{1},y_{1}) = F^{L}(x_{1},l_{y}) = F^{L}(l_{x},l_{y})$ . This property holds true for the other three borders of the concerned area.
Therefore, other points within the concerned area cannot have a smaller function value $F^{L}(x,y)$ , so we only

need to check the corners, and the constraints on $F^L(x,y)$ become

$$
\left\{ \begin{array}{l l} F ^ {L} (x _ {0}, y _ {0}) & = 0 \\ F ^ {L} (l _ {x}, l _ {y}) & \geq 0 \\ F ^ {L} (l _ {x}, u _ {y}) & \geq 0 \\ F ^ {L} (u _ {x}, l _ {y}) & \geq 0 \\ F ^ {L} (u _ {x}, u _ {y}) & \geq 0 \end{array} \right.,
$$

which is equivalent to

$$
\left\{ \begin{array}{c} \gamma^ {L} = x _ {0} y _ {0} - \alpha^ {L} x _ {0} - \beta^ {L} y _ {0} \\ l _ {x} l _ {y} - \alpha^ {L} (l _ {x} - x _ {0}) - \beta^ {L} (l _ {y} - y _ {0}) - x _ {0} y _ {0} \geq 0 \\ l _ {x} u _ {y} - \alpha^ {L} (l _ {x} - x _ {0}) - \beta^ {L} (u _ {y} - y _ {0}) - x _ {0} y _ {0} \geq 0 \\ u _ {x} l _ {y} - \alpha^ {L} (u _ {x} - x _ {0}) - \beta^ {L} (l _ {y} - y _ {0}) - x _ {0} y _ {0} \geq 0 \\ u _ {x} u _ {y} - \alpha^ {L} (u _ {x} - x _ {0}) - \beta^ {L} (u _ {y} - y _ {0}) - x _ {0} y _ {0} \geq 0 \end{array} . \right. \tag {8}
$$

We substitute $\gamma^L$ in Eq. (7) with Eq. (8), yielding

$$
V ^ {L} = V _ {0} [ (l _ {x} + u _ {x} - 2 x _ {0}) \alpha^ {L} + (l _ {y} + u _ {y} - 2 y _ {0}) \beta^ {L} + 2 x _ {0} y _ {0} ],
$$

where $V_{0} = \frac{(u_{x} - l_{x})(u_{y} - l_{y})}{2}$ .

We have shown that the minimum function value of $F^{L}(x,y)$ within the concerned area cannot appear in $(l_{x},u_{x})\times (l_{y},u_{y})$ , i.e., it can only appear at the border. When $(x_0,y_0)$ is a point with a minimum function value $F^{L}(x_{0},y_{0}) = 0$ , $(x_0,y_0)$ can also only be chosen from the border of the concerned area, so we may assume that at least one of $x_0 = l_x$ and $x_0 = u_x$ holds true.

If we take $x_0 = l_x$ :

$$
V _ {1} ^ {L} = V _ {0} \left[ \left(u _ {x} - l _ {x}\right) \alpha^ {L} + \left(l _ {y} + u _ {y} - 2 y _ {0}\right) \beta^ {L} + 2 l _ {x} y _ {0} \right].
$$

And from Eq.
(8) we obtain + +$$ +\alpha^ {L} \leq \frac {u _ {x} l _ {y} - l _ {x} y _ {0} - \beta^ {L} (l _ {y} - y _ {0})}{u _ {x} - l _ {x}}, +$$ + +$$ +\alpha^ {L} \leq \frac {u _ {x} u _ {y} - l _ {x} y _ {0} - \beta^ {L} (u _ {y} - y _ {0})}{u _ {x} - l _ {x}}, +$$ + +$$ +l _ {x} \leq \beta^ {L} \leq l _ {x} \Leftrightarrow \beta^ {L} = l _ {x}. +$$ + +Then + +$$ +V _ {1} ^ {L} = V _ {0} \left[ \left(u _ {x} - l _ {x}\right) \alpha^ {L} + l _ {x} \left(l _ {y} + u _ {y}\right) \right], +$$ + +$$ +\begin{array}{l} \left(u _ {x} - l _ {x}\right) \alpha^ {L} \leq - l _ {x} y _ {0} + \min \left(u _ {x} l _ {y} - \beta^ {L} \left(l _ {y} - y _ {0}\right), u _ {x} u _ {y} - \beta^ {L} \left(u _ {y} - y _ {0}\right)\right) \\ = - l _ {x} y _ {0} + \min \left(u _ {x} l _ {y} - l _ {x} \left(l _ {y} - y _ {0}\right), u _ {x} u _ {y} - l _ {x} \left(u _ {y} - y _ {0}\right)\right) \\ = \left(u _ {x} - l _ {x}\right) \min \left(l _ {y}, u _ {y}\right) \\ = \left(u _ {x} - l _ {x}\right) l _ {y}. \\ \end{array} +$$ + +Therefore, + +$$ +\alpha^ {L} \leq l _ {y}. +$$ + +To maximize $V_{1}^{L}$ , since now only $\alpha^{L}$ is unknown in $V_{1}^{L}$ and the coefficient of $\alpha^{L}$ is $V_{0}(u_{x} - l_{x}) \geq 0$ , we take $\alpha^{L} = l_{y}$ , and then + +$$ +V _ {1} ^ {L} = V _ {0} \left(u _ {x} l _ {y} + l _ {x} u _ {y}\right) +$$ + +is a constant. 
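As a quick numeric sanity check of this closed form (our own sketch): with $x_0 = l_x$ , $\alpha^L = l_y$ , $\beta^L = l_x$ , the constraint $\gamma^L = x_0 y_0 - \alpha^L x_0 - \beta^L y_0$ reduces to $\gamma^L = -l_x l_y$ for any $y_0$ , and the integral of the plane over the box should equal $V_0 (u_x l_y + l_x u_y)$ :

```python
import numpy as np

# Compare the closed form V_1^L = V_0 * (u_x l_y + l_x u_y) against a grid
# estimate of the integral of the plane l_y*x + l_x*y - l_x*l_y over the box.
lx, ux, ly, uy = -1.0, 2.0, 0.5, 3.0
xs = np.linspace(lx, ux, 2001)
ys = np.linspace(ly, uy, 2001)
X, Y = np.meshgrid(xs, ys, indexing='ij')
plane = ly * X + lx * Y - lx * ly
integral = plane.mean() * (ux - lx) * (uy - ly)   # grid estimate of the integral
V0 = (ux - lx) * (uy - ly) / 2
closed_form = V0 * (ux * ly + lx * uy)
assert abs(integral - closed_form) < 1e-3
```

Since the integrand is linear, the grid mean matches the exact integral up to floating-point error.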
For the other case, if we take $x_0 = u_x$ :

$$
\begin{array}{l} V _ {2} ^ {L} = V _ {0} \left[ \left(l _ {x} - u _ {x}\right) \alpha^ {L} + \left(l _ {y} + u _ {y} - 2 y _ {0}\right) \beta^ {L} + 2 u _ {x} y _ {0} \right], \\ \alpha^ {L} \geq \frac {l _ {x} l _ {y} - u _ {x} y _ {0} - \beta^ {L} (l _ {y} - y _ {0})}{l _ {x} - u _ {x}}, \\ \end{array}
$$

$$
\alpha^ {L} \geq \frac {l _ {x} u _ {y} - u _ {x} y _ {0} - \beta^ {L} (u _ {y} - y _ {0})}{l _ {x} - u _ {x}},
$$

$$
u _ {x} \leq \beta^ {L} \leq u _ {x} \Leftrightarrow \beta^ {L} = u _ {x},
$$

$$
V _ {2} ^ {L} = V _ {0} \left[ \left(l _ {x} - u _ {x}\right) \alpha^ {L} + u _ {x} \left(l _ {y} + u _ {y}\right) \right],
$$

$$
\begin{array}{l} (l _ {x} - u _ {x}) \alpha^ {L} \leq - u _ {x} y _ {0} + \min (l _ {x} l _ {y} - \beta^ {L} (l _ {y} - y _ {0}), l _ {x} u _ {y} - \beta^ {L} (u _ {y} - y _ {0})) \\ = \min \left(l _ {x} l _ {y} - u _ {x} l _ {y}, l _ {x} u _ {y} - u _ {x} u _ {y}\right) \\ = \left(l _ {x} - u _ {x}\right) \max \left(l _ {y}, u _ {y}\right) \\ = \left(l _ {x} - u _ {x}\right) u _ {y}. \\ \end{array}
$$

Therefore,

$$
\alpha^ {L} \geq u _ {y}.
$$

We take $\alpha^L = u_y$ similarly to the case when $x_0 = l_x$ , and then

$$
V _ {2} ^ {L} = V _ {0} \left(l _ {x} u _ {y} + u _ {x} l _ {y}\right).
$$

We notice that $V_{1}^{L} = V_{2}^{L}$ , so we can simply adopt the first one. We also notice that $V_{1}^{L}, V_{2}^{L}$ are independent of $y_{0}$ , so we may take any $y_{0}$ within $[l_y, u_y]$ , such as $y_{0} = l_{y}$ . Thereby, we obtain a group of optimal parameters of the lower bounding plane:

$$
\left\{ \begin{array}{l l} \alpha^ {L} & = l _ {y} \\ \beta^ {L} & = l _ {x} \\ \gamma^ {L} & = - l _ {x} l _ {y} \end{array} \right..
$$

# C.2 UPPER BOUND OF MULTIPLICATIONS

We derive the upper bound similarly.
We aim to minimize + +$$ +V ^ {U} = V _ {0} [ (l _ {x} + u _ {x} - 2 x _ {0}) \alpha^ {U} + (l _ {y} + u _ {y} - 2 y _ {0}) \beta^ {U} + 2 x _ {0} y _ {0} ], +$$ + +where $V_{0} = \frac{(u_{x} - l_{x})(u_{y} - l_{y})}{2}$ + +If we take $x_0 = l_x$ : + +$$ +V _ {1} ^ {U} = V _ {0} \left[ \left(u _ {x} - l _ {x}\right) \alpha^ {U} + \left(l _ {y} + u _ {y} - 2 y _ {0}\right) \beta^ {U} + 2 l _ {x} y _ {0} \right], +$$ + +$$ +\alpha^ {U} \geq \frac {u _ {x} l _ {y} - l _ {x} y _ {0} - \beta^ {U} (l _ {y} - y _ {0})}{u _ {x} - l _ {x}}, +$$ + +$$ +\alpha^ {U} \geq \frac {u _ {x} u _ {y} - l _ {x} y _ {0} - \beta^ {U} (u _ {y} - y _ {0})}{u _ {x} - l _ {x}}, +$$ + +$$ +l _ {x} \leq \beta^ {U} \leq l _ {x} \Leftrightarrow \beta^ {U} = l _ {x}. +$$ + +Then + +$$ +V _ {1} ^ {U} = V _ {0} [ (u _ {x} - l _ {x}) \alpha^ {U} + l _ {x} (l _ {y} + u _ {y}) ], +$$ + +$$ +\begin{array}{l} (u _ {x} - l _ {x}) \alpha^ {U} \geq - l _ {x} y _ {0} + \max (u _ {x} l _ {y} - \beta^ {U} (l _ {y} - y _ {0}), u _ {x} u _ {y} - \beta^ {U} (u _ {y} - y _ {0})) \\ = \max \left(u _ {x} l _ {y} - l _ {x} l _ {y}, u _ {x} u _ {y} - l _ {x} u _ {y}\right) \\ = \left(u _ {x} - l _ {x}\right) \max \left(l _ {y}, u _ {y}\right) \\ = \left(u _ {x} - l _ {x}\right) u _ {y}. \\ \end{array} +$$ + +Therefore, + +$$ +\alpha^ {U} \geq u _ {y}. +$$ + +To minimize $V_{1}^{U}$ , we take $\alpha^{U} = u_{y}$ , and then + +$$ +V _ {1} ^ {U} = V _ {0} \left(l _ {x} l _ {y} + u _ {x} u _ {y}\right). 
+$$ + +For the other case if we take $x_0 = u_x$ : + +$$ +V _ {2} ^ {U} = V _ {0} \left[ \left(l _ {x} - u _ {x}\right) \alpha^ {U} + \left(l _ {y} + u _ {y} - 2 y _ {0}\right) \beta^ {U} + 2 u _ {x} y _ {0} \right], +$$ + +$$ +\alpha^ {U} \leq \frac {l _ {x} l _ {y} - u _ {x} y _ {0} - \beta^ {U} (l _ {y} - y _ {0})}{l _ {x} - u _ {x}}, +$$ + +$$ +\alpha^ {U} \leq \frac {l _ {x} u _ {y} - u _ {x} y _ {0} - \beta^ {U} (u _ {y} - y _ {0})}{l _ {x} - u _ {x}}, +$$ + +$$ +u _ {x} \leq \beta^ {U} \leq u _ {x} \Leftrightarrow \beta^ {U} = u _ {x}. +$$ + +Therefore, + +$$ +V _ {2} ^ {U} = V _ {0} \left[ \left(l _ {x} - u _ {x}\right) \alpha^ {U} + u _ {x} \left(l _ {y} + u _ {y}\right) \right], +$$ + +$$ +\begin{array}{l} \left(l _ {x} - u _ {x}\right) \alpha^ {U} \geq - u _ {x} y _ {0} + \max \left(l _ {x} l _ {y} - \beta^ {U} \left(l _ {y} - y _ {0}\right), l _ {x} u _ {y} - \beta^ {U} \left(u _ {y} - y _ {0}\right)\right) \\ = \max \left(l _ {x} l _ {y} - u _ {x} l _ {y}, l _ {x} u _ {y} - u _ {x} u _ {y}\right) \\ = \left(l _ {x} - u _ {x}\right) \min \left(l _ {y}, u _ {y}\right) \\ = \left(l _ {x} - u _ {x}\right) l _ {y}. \\ \end{array} +$$ + +Therefore, + +$$ +\alpha^ {U} \leq l _ {y}. +$$ + +To minimize $V_2^U$ , we take $\alpha^U = l_y$ , and then + +$$ +V _ {2} ^ {U} = V _ {0} \left(l _ {x} l _ {y} + u _ {x} u _ {y}\right). +$$ + +Since $V_{1}^{U} = V_{2}^{U}$ , we simply adopt the first case. And $V_{1}^{U}, V_{2}^{U}$ are independent of $y_{0}$ , so we may take any $y_{0}$ within $[l_y, u_y]$ such as $y_{0} = l_{y}$ . Thereby, we obtain a group of optimal parameters of the upper bounding plane: + +$$ +\left\{ \begin{array}{l l} \alpha^ {U} & = u _ {y} \\ \beta^ {U} & = l _ {x} \\ \gamma^ {U} & = - l _ {x} u _ {y} \end{array} \right.. +$$ + +# C.3 LINEAR BOUNDS OF DIVISIONS + +We have shown that closed-form linear bounds of multiplications can be derived. However, we find that directly bounding $z = \frac{x}{y}$ is relatively more difficult. 
If we try to derive a lower bound $z^{L} = \alpha^{L}x + \beta^{L}y + \gamma^{L}$ for $z = \frac{x}{y}$ as shown in Appendix C.1, the difference function is + +$$ +F ^ {L} (x, y) = \frac {x}{y} - \left(\alpha^ {L} x + \beta^ {L} y + \gamma^ {L}\right). +$$ + +It is possible that a minimum function value of $F^L(x, y)$ for $(x, y)$ within the concerned area appears at a point other than the corners. For example, for $l_x = 0.05$ , $u_x = 0.15$ , $l_y = 0.05$ , $u_y = 0.15$ , $\alpha = 10$ , $\beta = -20$ , $\gamma = 2$ , the minimum function value of $F^L(x, y)$ for $(x, y) \in [0.05, 0.15] \times [0.05, 0.15]$ appears at $(0.1, 0.1)$ which is not a corner of $[0.05, 0.15] \times [0.05, 0.15]$ . This makes it more difficult to derive closed-form parameters such that the constraints on $F^L(x, y)$ are satisfied. Fortunately, we can bound $z = \frac{x}{y}$ indirectly by utilizing the bounds of multiplications and reciprocal functions. We bound $z = \frac{x}{y}$ by first bounding a unary function $\overline{y} = \frac{1}{y}$ and then bounding the multiplication $z = x\overline{y}$ . + +# D TIGHTNESS OF BOUNDS BY THE BACKWARD PROCESS AND FORWARD PROCESS + +We have discussed that combining the backward process with a forward process can reduce computational complexity, compared to the method with the backward process only. But we only use the forward process for self-attention layers and do not fully use the forward process for all sublayers, because bounds by the forward process can be looser than those by the backward process. + +We compare the tightness of bounds by the forward process and the backward process respectively. 
To illustrate the difference, for simplicity, we consider an $m$ -layer feed-forward network $\Phi^{(0)}(\mathbf{x}) = \mathbf{x}$ , $\mathbf{y}^{(l)}(\mathbf{x}) = \mathbf{W}^{(l)}\Phi^{(l-1)}(\mathbf{x}) + \mathbf{b}^{(l)}$ , $\Phi^{(l)}(\mathbf{x}) = \sigma(\mathbf{y}^{(l)}(\mathbf{x}))$ $(0 < l \leq m)$ , where $\mathbf{x}$ is the input vector, $\mathbf{W}^{(l)}$ and $\mathbf{b}^{(l)}$ are the weight matrix and the bias vector for the $l$ -th layer respectively, $\mathbf{y}^{(l)}(\mathbf{x})$ is the pre-activation vector of the $l$ -th layer, $\Phi^{(l)}(\mathbf{x})$ is the vector of neurons in the $l$ -th layer, and $\sigma(\cdot)$ is an activation function. Before taking global bounds, both the backward process and the forward process bound $\Phi_{j}^{(l)}(\mathbf{x})$ with linear functions of $\mathbf{x}$ . When taking global bounds as in Eq. (3) and Eq. (4), only the norm of the weight matrix is directly related to the $\epsilon$ in the binary search for certified lower bounds. Therefore, we measure the tightness of the computed bounds using the difference between the weight matrices for lower bounds and upper bounds respectively. We show how it is computed for the forward process and the backward process respectively.

# D.1 THE FORWARD PROCESS

For the forward process, we bound each neuron $\Phi_j^{(l)}(\mathbf{x})$ with linear functions:

$$
\boldsymbol {\Omega} _ {j,:} ^ {(l), L} \mathbf {x} + \boldsymbol {\Theta} _ {j} ^ {(l), L} \leq \Phi_ {j} ^ {(l)} (\mathbf {x}) \leq \boldsymbol {\Omega} _ {j,:} ^ {(l), U} \mathbf {x} + \boldsymbol {\Theta} _ {j} ^ {(l), U}.
$$

To measure the tightness of the bounds, we are interested in $\Omega^{(l),L}$ , $\Omega^{(l),U}$ , and also $\Omega^{(l),U} - \Omega^{(l),L}$ . Initially,

$$
\boldsymbol {\Omega} ^ {(0), L / U} = \mathbf {I}, \Theta^ {(0), L / U} = \mathbf {0}, \boldsymbol {\Omega} ^ {(0), U} - \boldsymbol {\Omega} ^ {(0), L} = \mathbf {0}.
+$$ + +We can forward propagate the bounds of $\Phi^{(l - 1)}(\mathbf{x})$ to $\mathbf{y}^{(l)}(\mathbf{x})$ : + +$$ +\boldsymbol {\Omega} _ {j,:} ^ {(l), y, L} \mathbf {x} + \boldsymbol {\Theta} _ {j} ^ {(l), y, L} \leq \mathbf {y} _ {j} ^ {(l)} (\mathbf {x}) \leq \boldsymbol {\Omega} _ {j,:} ^ {(l), y, U} \mathbf {x} + \boldsymbol {\Theta} _ {j} ^ {(l), y, U}, +$$ + +where + +$$ +\boldsymbol {\Omega} _ {j,:} ^ {(l), y, L / U} = \sum_ {\mathbf {W} _ {j, i} ^ {(l)} > 0} \mathbf {W} _ {j, i} ^ {(l)} \boldsymbol {\Omega} _ {i,:} ^ {(l - 1), L / U} + \sum_ {\mathbf {W} _ {j, i} ^ {(l)} < 0} \mathbf {W} _ {j, i} ^ {(l)} \boldsymbol {\Omega} _ {i,:} ^ {(l - 1), U / L}, +$$ + +$$ +\boldsymbol {\Theta} ^ {(l), y, L / U} = \sum_ {\mathbf {W} _ {j, i} ^ {(l)} > 0} \mathbf {W} _ {j, i} ^ {(l)} \boldsymbol {\Theta} _ {i} ^ {(l - 1), L / U} + \sum_ {\mathbf {W} _ {j, i} ^ {(l)} < 0} \mathbf {W} _ {j, i} ^ {(l)} \boldsymbol {\Theta} _ {i} ^ {(l - 1), U / L} + \mathbf {b} ^ {(l)}. +$$ + +With the global bounds of $\mathbf{y}^{(l)}(\mathbf{x})$ that can be obtained with Eq. (3) and Eq. (4), we bound the activation function: + +$$ +\alpha_ {j} ^ {(l), L} \mathbf {y} ^ {(l)} (\mathbf {x}) + \beta_ {j} ^ {(l), L} \leq \sigma \left(\mathbf {y} _ {j} ^ {(l)} (\mathbf {x})\right) \leq \alpha_ {j} ^ {(l), U} \mathbf {y} ^ {(l)} (\mathbf {x}) + \beta_ {j} ^ {(l), U}. 
+$$ + +And then bounds can be propagated from $\Phi^{(l - 1)}(\mathbf{x})$ to $\Phi^{(l)}(\mathbf{x})$ : + +$$ +\boldsymbol {\Omega} _ {j,:} ^ {(l), L / U} = \left\{ \begin{array}{l l} \alpha_ {j} ^ {(l), L / U} \boldsymbol {\Omega} _ {j,:} ^ {(l), y, L / U} & \alpha_ {j} ^ {(l), L / U} \geq 0 \\ \alpha_ {j} ^ {(l), L / U} \boldsymbol {\Omega} _ {j,:} ^ {(l), y, U / L} & \alpha_ {j} ^ {(l), L / U} < 0 \end{array} \right., +$$ + +$$ +\begin{array}{r} \pmb {\Theta} _ {j} ^ {(l), L / U} = \left\{ \begin{array}{l l} \alpha_ {j} ^ {(l), L / U} \pmb {\Theta} _ {j} ^ {(l), y, L / U} + \beta_ {j} ^ {(l), L / U} & \alpha_ {j} ^ {(l), L / U} \geq 0 \\ \alpha_ {j} ^ {(l), L / U} \pmb {\Theta} _ {j} ^ {(l), y, U / L} + \beta_ {j} ^ {(l), L / U} & \alpha_ {j} ^ {(l), L / U} < 0 \end{array} \right.. \end{array} +$$ + +Therefore, + +$$ +\boldsymbol {\Omega} _ {j,:} ^ {(l), U} - \boldsymbol {\Omega} _ {j,:} ^ {(l), L} = \left(\alpha_ {j} ^ {(l), U} - \alpha_ {j} ^ {(l), L}\right) \left| \mathbf {W} _ {j} ^ {(l)} \right| \left(\boldsymbol {\Omega} _ {j,:} ^ {(l - 1), U} - \boldsymbol {\Omega} _ {j,:} ^ {(l - 1), L}\right) \tag {9} +$$ + +illustrates how the tightness of the bounds is changed from earlier layers to later layers. + +# D.2 THE BACKWARD PROCESS AND DISCUSSIONS + +For the backward process, we bound the neurons in the $l$ -th layer with linear functions of neurons in a previous layer, the $l'$ -th layer: + +$$ +\Phi^ {(l, l ^ {\prime}), L} = \Lambda_ {j,:} ^ {(l, l ^ {\prime}), L} \Phi^ {(l ^ {\prime})} (\mathbf {x}) + \Delta_ {j} ^ {(l, l ^ {\prime}), L} \leq \Phi^ {(l)} (\mathbf {x}) \leq \Lambda_ {j,:} ^ {(l, l ^ {\prime}), U} \Phi^ {(l ^ {\prime})} (\mathbf {x}) + \Delta_ {j} ^ {(l, l ^ {\prime}), U} = \Phi^ {(l, l ^ {\prime}), U}. +$$ + +We have shown in Sec. 3.1 how such bounds can be propagated to $l' = 0$ , for the case when the input is sequential. For the nonsequential case we consider here, it can be regarded as a special case when the input length is 1. 
So we can adopt the method in Sec. 3.1 to propagate bounds for the feed-forward network we consider here. We are interested in $\Lambda^{(l,l'),L}$ , $\Lambda^{(l,l'),U}$ and also $\Lambda^{(l,l'),U} - \Lambda^{(l,l'),L}$ . Weight matrices of linear bounds before taking global bounds are $\Lambda^{(l,0),L}$ and $\Lambda^{(l,0),U}$ , which are obtained by propagating the bounds starting from $\Lambda^{(l,l),L} = \Lambda^{(l,l),U} = \mathbf{I}$ . According to the bound propagation described in Sec. 3.2,

$$
\boldsymbol {\Lambda} _ {:, j} ^ {(l, l ^ {\prime} - 1), U} - \boldsymbol {\Lambda} _ {:, j} ^ {(l, l ^ {\prime} - 1), L} = \left(\alpha_ {j} ^ {(l ^ {\prime}), U} \left(\boldsymbol {\Lambda} _ {:, j, +} ^ {(l, l ^ {\prime}), U} - \boldsymbol {\Lambda} _ {:, j, -} ^ {(l, l ^ {\prime}), L}\right) - \alpha_ {j} ^ {(l ^ {\prime}), L} \left(\boldsymbol {\Lambda} _ {:, j, +} ^ {(l, l ^ {\prime}), L} - \boldsymbol {\Lambda} _ {:, j, -} ^ {(l, l ^ {\prime}), U}\right)\right) \mathbf {W} ^ {(l ^ {\prime})} \tag {10}
$$

illustrates how the tightness of the bounds can be measured during the backward bound propagation until $l' = 0$ .

There is a $\mathbf{W}^{(l^{\prime})}$ in Eq. (10) instead of the $|\mathbf{W}^{(l^{\prime})}|$ in Eq. (9). The norm of $(\Omega_{j,:}^{(l),U} - \Omega_{j,:}^{(l),L})$ in Eq. (9) can quickly grow large as $l$ increases during the forward propagation when $\| \mathbf{W}_j^{(l)}\|$ is greater than 1, and it generally holds true for neural networks that $\| \mathbf{W}_j^{(l)}\|$ is greater than 1 in feed-forward layers. In contrast, in Eq. (10), $\mathbf{W}_j^{(l^{\prime})}$ can have both positive and negative elements, which tends to allow cancellations among different $\mathbf{W}_{j,i}^{(l^{\prime})}$ , and thus the norm of $(\Lambda_{:,j}^{(l,l^{\prime} - 1),U} - \Lambda_{:,j}^{(l,l^{\prime} - 1),L})$ tends to be smaller.
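A toy numeric illustration of this difference (ours, abstracting away the $(\alpha^U - \alpha^L)$ factor): propagating a per-neuron gap through $|\mathbf{W}|$ as in Eq. (9) admits no cancellation, while the signed $\mathbf{W}$ in Eq. (10) does:

```python
import numpy as np

# A fixed weight matrix with mixed signs; `gap` plays the role of the
# per-neuron bound width being propagated through one layer.
W = np.array([[0.6, -0.7],
              [-0.5, 0.4]])
gap = np.ones(2)
forward_gap = np.abs(W) @ gap    # Eq. (9) style: |W|, magnitudes accumulate
backward_gap = W @ gap           # Eq. (10) style: signed W, entries cancel
assert np.linalg.norm(forward_gap) > np.linalg.norm(backward_gap)
```

Here the forward-style gap is `[1.3, 0.9]` while the backward-style combination collapses to `[-0.1, -0.1]`, illustrating the cancellation effect.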
Therefore, the bounds computed by the backward process tend to be tighter than those by the forward process, which is consistent with our experiment results in Table 3.

# E IMPACT OF MODIFYING THE LAYER NORMALIZATION

The original Transformers have a layer normalization after the embedding layer, and two layer normalizations in each Transformer layer, before and after the feed-forward part respectively. We modify the layer normalization, $f(\mathbf{x}) = \mathbf{w}(\mathbf{x} - \mu) / \sigma + \mathbf{b}$ , where $\mathbf{x}$ is a $d$ -dimensional vector to be normalized, $\mu$ and $\sigma$ are the mean and standard deviation of $\{\mathbf{x}_i\}$ respectively, and $\mathbf{w}$ and $\mathbf{b}$ are gain and bias parameters respectively. $\sigma = \sqrt{(1 / d)\sum_{i=1}^{d}(\mathbf{x}_i - \mu)^2 + \epsilon_s}$ , where $\epsilon_s$ is a smoothing constant. It involves $(\mathbf{x}_i - \mu)^2$ , whose linear lower bound is loose and exactly 0 when the range of $\mathbf{x}_i - \mu$ crosses 0. When the $\ell_p$ -norm of the perturbation is relatively large, there can be many $\mathbf{x}_i - \mu$ with ranges crossing 0, which can cause the lower bound of $\sigma$ to be small and thereby the upper bound of $f_i(\mathbf{x})$ to be large. This can make the certified bounds loose. To tackle this, we modify the layer normalization into $f(\mathbf{x}) = \mathbf{w}(\mathbf{x} - \mu) + \mathbf{b}$ by removing the standard deviation term. We use an experiment to study the impact of this modification. We compare the clean accuracies and certified bounds of the models with modified layer normalization to models with standard layer normalization and with no layer normalization respectively. Table 5 presents the results. Certified lower bounds of models with no layer normalization or our modification are significantly tighter than those of corresponding models with the standard layer normalization.
Meanwhile, the clean accuracies of the models with our modification are comparable with those of the models with the standard layer normalization (slightly lower on Yelp and slightly higher on SST). This suggests that it is worthwhile to modify the layer normalization in Transformers for easier verification.
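The modification is easy to state in code; a minimal sketch (ours, with illustrative parameter names):

```python
import numpy as np

def layer_norm_standard(x, w, b, eps=1e-5):
    """Standard layer normalization: w * (x - mean) / std + b."""
    mu = x.mean()
    sigma = np.sqrt(((x - mu) ** 2).mean() + eps)
    return w * (x - mu) / sigma + b

def layer_norm_modified(x, w, b):
    """Modified variant used above: the standard-deviation term is removed,
    f(x) = w * (x - mean) + b, which avoids bounding (x_i - mu)^2 and sqrt."""
    return w * (x - x.mean()) + b

x = np.array([1.0, 2.0, 3.0])
out = layer_norm_modified(x, w=2.0, b=1.0)   # mean-centering only
```

The modified variant composes only affine operations, so it propagates linear bounds exactly, whereas the standard variant requires the loose square and square-root relaxations discussed above.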
| Dataset | N | LayerNorm | Acc. | $\ell_p$ | Upper (Min) | Upper (Avg) | Lower (Min) | Lower (Avg) | Ours vs. Upper (Min) | Ours vs. Upper (Avg) |
|---|---|---|---|---|---|---|---|---|---|---|
| Yelp | 1 | Standard | 91.7 | $\ell_1$ | 189.934 | 199.265 | 0.010 | 0.022 | 5.3E-5 | 1.1E-4 |
| | | | | $\ell_2$ | 15.125 | 15.384 | 0.008 | 0.019 | 5.5E-4 | 1.3E-3 |
| | | | | $\ell_\infty$ | 2.001 | 3.066 | 0.002 | 0.005 | 8.7E-4 | 1.7E-3 |
| | | None | 91.4 | $\ell_1$ | 8.044 | 12.948 | 1.360 | 1.684 | 17% | 13% |
| | | | | $\ell_2$ | 0.580 | 0.905 | 0.363 | 0.447 | 63% | 49% |
| | | | | $\ell_\infty$ | 0.086 | 0.127 | 0.033 | 0.040 | 38% | 32% |
| | | Ours | 91.5 | $\ell_1$ | 9.085 | 13.917 | 1.423 | 1.809 | 16% | 13% |
| | | | | $\ell_2$ | 0.695 | 1.005 | 0.384 | 0.483 | 55% | 48% |
| | | | | $\ell_\infty$ | 0.117 | 0.155 | 0.034 | 0.043 | 29% | 27% |
| | 2 | Standard | 92.0 | $\ell_1$ | 190.476 | 201.092 | 0.002 | 0.004 | 1.2E-5 | 1.8E-5 |
| | | | | $\ell_2$ | 15.277 | 15.507 | 0.001 | 0.002 | 9.0E-5 | 1.6E-4 |
| | | | | $\ell_\infty$ | 2.022 | 2.901 | 0.000 | 0.000 | 9.5E-5 | 1.3E-4 |
| | | None | 91.5 | $\ell_1$ | 8.112 | 15.225 | 0.512 | 0.631 | 6% | 4% |
| | | | | $\ell_2$ | 0.587 | 1.042 | 0.123 | 0.154 | 21% | 15% |
| | | | | $\ell_\infty$ | 0.081 | 0.140 | 0.010 | 0.013 | 13% | 9% |
| | | Ours | 91.5 | $\ell_1$ | 10.228 | 15.452 | 0.389 | 0.512 | 4% | 3% |
| | | | | $\ell_2$ | 0.773 | 1.103 | 0.116 | 0.149 | 15% | 14% |
| | | | | $\ell_\infty$ | 0.122 | 0.161 | 0.010 | 0.013 | 9% | 8% |
| SST | 1 | Standard | 83.0 | $\ell_1$ | 190.777 | 194.961 | 0.008 | 0.015 | 4.2E-5 | 7.8E-5 |
| | | | | $\ell_2$ | 15.549 | 15.630 | 0.006 | 0.013 | 4.1E-4 | 8.2E-4 |
| | | | | $\ell_\infty$ | 2.241 | 2.504 | 0.001 | 0.003 | 5.2E-4 | 1.2E-3 |
| | | None | 83.0 | $\ell_1$ | 6.921 | 8.417 | 2.480 | 2.659 | 36% | 32% |
| | | | | $\ell_2$ | 0.527 | 0.628 | 0.411 | 0.447 | 78% | 71% |
| | | | | $\ell_\infty$ | 0.089 | 0.109 | 0.032 | 0.035 | 36% | 32% |
| | | Ours | 83.2 | $\ell_1$ | 7.418 | 8.849 | 2.503 | 2.689 | 34% | 30% |
| | | | | $\ell_2$ | 0.560 | 0.658 | 0.418 | 0.454 | 75% | 69% |
| | | | | $\ell_\infty$ | 0.091 | 0.111 | 0.033 | 0.036 | 36% | 32% |
| | 2 | Standard | 82.5 | $\ell_1$ | 191.742 | 196.365 | 0.002 | 0.004 | 9.6E-6 | 1.9E-5 |
| | | | | $\ell_2$ | 15.554 | 15.649 | 0.001 | 0.003 | 7.0E-5 | 1.7E-4 |
| | | | | $\ell_\infty$ | 2.252 | 2.513 | 0.000 | 0.000 | 6.6E-5 | 1.9E-4 |
| | | None | 83.5 | $\ell_1$ | 6.742 | 8.118 | 1.821 | 1.861 | 27% | 23% |
| | | | | $\ell_2$ | 0.515 | 0.610 | 0.298 | 0.306 | 58% | 50% |
| | | | | $\ell_\infty$ | 0.085 | 0.103 | 0.023 | 0.024 | 28% | 23% |
| | | Ours | 83.5 | $\ell_1$ | 6.781 | 8.367 | 1.919 | 1.969 | 28% | 24% |
| | | | | $\ell_2$ | 0.520 | 0.628 | 0.305 | 0.315 | 59% | 50% |
| | | | | $\ell_\infty$ | 0.085 | 0.105 | 0.024 | 0.024 | 28% | 23% |
Table 5: Clean accuracies, upper bounds, and certified lower bounds by our method for models with different layer normalization settings.
# ROBUST REINFORCEMENT LEARNING FOR CONTINUOUS CONTROL WITH MODEL MISSPECIFICATION

Daniel J.
Mankowitz*, Nir Levine*, Rae Jeong, Abbas Abdolmaleki, Jost Tobias Springenberg, Yuanyuan Shi†, Jackie Kay, Timothy Mann, Todd Hester, Martin Riedmiller

DeepMind

{dmankowitz, nirlevine, raejeong, aabdolmaleki, springenberg, yyshi, kayj, timothymann, toddhester, riedmiller}@google.com

# ABSTRACT

We provide a framework for incorporating robustness – to perturbations in the transition dynamics, which we refer to as model misspecification – into continuous control Reinforcement Learning (RL) algorithms. We specifically focus on incorporating robustness into a state-of-the-art continuous control RL algorithm called Maximum a-posteriori Policy Optimization (MPO). We achieve this by learning a policy that optimizes for a worst-case expected return objective and derive a corresponding robust entropy-regularized Bellman contraction operator. In addition, we introduce a less conservative, soft-robust, entropy-regularized objective with a corresponding Bellman operator. We show that both robust and soft-robust policies outperform their non-robust counterparts in nine Mujoco domains with environment perturbations. In addition, we show improved robust performance on a high-dimensional, simulated, dexterous robotic hand. Finally, we present multiple investigative experiments that provide a deeper insight into the robustness framework. This includes an adaptation to another continuous control RL algorithm as well as learning the uncertainty set from offline data. Performance videos can be found online at https://sites.google.com/view/robust-rl.

# 1 INTRODUCTION

Reinforcement Learning (RL) algorithms typically learn a policy that optimizes for the expected return (Sutton & Barto, 1998). That is, the policy aims to maximize the sum of future expected rewards that an agent accumulates in a particular task.
This approach has yielded impressive results in recent years, including playing computer games with super-human performance (Mnih et al., 2015; Tessler et al., 2016), multi-task RL (Rusu et al., 2016; Devin et al., 2017; Teh et al., 2017; Mankowitz et al., 2018b; Riedmiller et al., 2018) as well as solving complex continuous control robotic tasks (Duan et al., 2016; Abdolmaleki et al., 2018b; Kalashnikov et al., 2018; Haarnoja et al., 2018).

The current crop of RL agents is typically trained in a single environment (usually a simulator). As a consequence, an issue faced by many of these agents is the sensitivity of the agent's policy to environment perturbations. Perturbing the dynamics of the environment during test time, which may include executing the policy in a real-world setting, can have a significant negative impact on the performance of the agent (Andrychowicz et al., 2018; Peng et al., 2018; Derman et al., 2018; Di Castro et al., 2012; Mankowitz et al., 2018a). This is because the training environment is not necessarily a very good model of the perturbations that an agent may actually face, leading to potentially unwanted, sub-optimal behaviour. There are many types of environment perturbations. These include changing lighting/weather conditions, sensor noise, actuator noise, action delays, etc. (Dulac-Arnold et al., 2019).

It is desirable to train agents that are agnostic to environment perturbations. This is especially crucial in the Sim2Real setting (Andrychowicz et al., 2018; Peng et al., 2018; Wulfmeier et al., 2017; Rastogi et al., 2018; Christiano et al., 2016) where a policy is trained in a simulator and then executed on a real-world domain. As an example, consider a robotic arm that executes a control policy to perform a specific task in a factory.
If, for some reason, the arm needs to be replaced and the specifications do not exactly match, then the control policy still needs to be able to perform the task with the 'perturbed' robotic arm dynamics. In addition, a robust policy may help the agent deal with noise-induced perturbations such as sensor noise due to malfunctioning sensors, as well as actuator noise.

Model misspecification: For the purpose of this paper, we use the term model misspecification to refer to the setting in which an agent is trained in one environment but performs in a different, perturbed version of the environment (as in the above examples). By incorporating robustness into our agents, we correct for this misspecification, yielding improved performance in the perturbed environment(s).

In this paper, we propose a framework for incorporating robustness into continuous control RL algorithms. We specifically focus on robustness to model misspecification in the transition dynamics. Our main contributions are as follows:

(1) We incorporate robustness into a state-of-the-art continuous control RL algorithm called Maximum a-posteriori Policy Optimization (MPO) (Abdolmaleki et al., 2018b) to yield Robust MPO (R-MPO). We also carry out an additional experiment where we incorporate robustness into another continuous control RL algorithm called Stochastic Value Gradients (SVG) (Heess et al., 2015b).
(2) Entropy regularization encourages exploration and helps prevent early convergence to sub-optimal policies (Nachum et al., 2017). To incorporate these advantages, we: (i) Extend the Robust Bellman operator (Iyengar, 2005) to robust and soft-robust entropy-regularized versions, and show that these operators are contraction mappings. In addition, we (ii) extend MPO to Robust Entropy-regularized MPO (RE-MPO) and Soft RE-MPO (SRE-MPO) and show that they perform at least as well as R-MPO and in some cases significantly better. All the derivations have been deferred to Appendices B, C and D.
We want to emphasize that, while the theoretical contributions are novel, our most significant contribution is the extensive experimental analysis of the robustness performance of our agent. Specifically:

(3) We present experimental results in nine Mujoco domains showing that RE-MPO, SRE-MPO and R-MPO, SR-MPO outperform both E-MPO and MPO respectively.
(4) To ensure that our method scales, we show robust performance on a high-dimensional, simulated, dexterous robotic hand called Shadow hand, which outperforms the non-robust MPO baseline.

(5) Multiple investigative experiments to better understand the robustness framework. These include (i) an analysis of modifying the uncertainty set; (ii) comparing our technique to data augmentation; (iii) a comparison to domain randomization; (iv) comparing with and without entropy regularization; (v) training transition models from offline data and using them as the uncertainty set to run R-MPO. We show that using learned transition models as the uncertainty set can lead to improved performance over standard R-MPO.

# 2 BACKGROUND

A Markov Decision Process (MDP) is defined as the tuple $\langle S, A, r, \gamma, P \rangle$ where $S$ is the state space, $A$ the action space, $r: S \times A \to \mathbb{R}$ is a bounded reward function; $\gamma \in [0,1]$ is the discount factor and $P: S \times A \to \Delta^S$ maps state-action pairs to a probability distribution over next states. We use $\Delta^S$ to denote the $(|S| - 1)$ -dimensional probability simplex over $S$ . The goal of a Reinforcement Learning agent for the purpose of control is to learn a policy $\pi: S \to \Delta^A$ , which maps each state to a probability distribution over the actions to execute from that state, so as to maximize the expected return $J(\pi) = \mathbb{E}^\pi[\sum_{t=0}^\infty \gamma^t r_t]$ where $r_t$ is a random variable representing the reward received at time $t$ (Sutton & Barto, 2018).
The value function is defined as $V^\pi(s) = \mathbb{E}^\pi[\sum_{t=0}^\infty \gamma^t r_t | s_0 = s]$ and the action value function as $Q^\pi(s, a) = r(s, a) + \gamma \mathbb{E}_{s' \sim P(\cdot | s, a)}[V^\pi(s')]$ .

A Robust MDP (R-MDP) is defined as a tuple $\langle S, A, r, \gamma, \mathcal{P} \rangle$ where $S, A, r$ and $\gamma$ are defined as above; $\mathcal{P}(s, a) \subseteq \mathcal{M}(S)$ is an uncertainty set where $\mathcal{M}(S)$ is the set of probability measures over next states $s' \in S$ . This is interpreted as an agent selecting a state and action pair, and the next state $s'$ is determined by a conditional measure $p(s'|s, a) \in \mathcal{P}(s, a)$ (Iyengar, 2005). A robust policy optimizes for the worst-case expected return objective: $J_{\mathrm{R}}(\pi) = \inf_{p \in \mathcal{P}} \mathbb{E}^{p,\pi}[\sum_{t=0}^{\infty} \gamma^t r_t]$ .

The robust value function is defined as $V_{\mathbb{R}}^{\pi}(s) = \inf_{p\in \mathcal{P}}\mathbb{E}^{p,\pi}[\sum_{t = 0}^{\infty}\gamma^{t}r_{t}|s_{0} = s]$ and the robust action value function as $Q_{\mathbb{R}}^{\pi}(s,a) = r(s,a) + \gamma \inf_{p\in \mathcal{P}}\mathbb{E}_{s^{\prime}\sim p(\cdot |s,a)}[V_{\mathbb{R}}^{\pi}(s^{\prime})]$ . Both the robust Bellman operator $T_{\mathbb{R}}^{\pi}:\mathbb{R}^{|S|}\to \mathbb{R}^{|S|}$ for a fixed policy and the optimal robust Bellman operator $T_{\mathbb{R}}v(s) = \max_{\pi}T_{\mathbb{R}}^{\pi}v(s)$ have previously been shown to be contractions (Iyengar, 2005). A rectangularity assumption on the uncertainty set (Iyengar, 2005) ensures that "nature" can choose a worst-case transition function independently for every state $s$ and action $a$ .

Maximum A-Posteriori Policy Optimization (MPO) (Abdolmaleki et al., 2018a,b) is a continuous control RL algorithm that performs an expectation maximization form of policy iteration. There are two steps comprising policy evaluation and policy improvement.
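
As a concrete illustration of the robust Bellman backup just defined, here is a minimal tabular sketch of the optimal robust Bellman operator under a rectangular, finite uncertainty set, together with a numerical check of its $\gamma$-contraction property. The toy MDP, the set size `K`, and all names are our own illustration, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 4, 2, 0.9

# Toy rectangular uncertainty set: for every (s, a), "nature" may pick any of
# K candidate next-state distributions independently (the rectangularity
# assumption of Iyengar, 2005).
K = 3
P = rng.dirichlet(np.ones(nS), size=(K, nS, nA))  # P[k, s, a] is a dist over s'
r = rng.uniform(0.0, 1.0, size=(nS, nA))          # bounded reward r(s, a)

def robust_bellman(v):
    """Optimal robust Bellman operator: max over actions of the worst-case backup."""
    # backup[k, s, a] = r(s, a) + gamma * E_{s' ~ P[k, s, a]} v(s')
    backup = r[None] + gamma * np.einsum('ksan,n->ksa', P, v)
    return backup.min(axis=0).max(axis=1)         # inf over models, max over actions

# Numerically check the gamma-contraction property in the max norm.
u, v = rng.uniform(size=nS), rng.uniform(size=nS)
lhs = np.max(np.abs(robust_bellman(u) - robust_bellman(v)))
assert lhs <= gamma * np.max(np.abs(u - v)) + 1e-12

# Repeated application converges to the robust value function.
v = np.zeros(nS)
for _ in range(200):
    v = robust_bellman(v)
```

Because the operator is a $\gamma$-contraction, the final `v` is (numerically) the unique robust fixed point; the same repeated-application argument underlies the robust policy evaluation step used below.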
The policy evaluation step receives as input a policy $\pi_{k}$ and evaluates an action-value function $Q_{\theta}^{\pi_k}(s,a)$ by minimizing the squared TD error: $\min_{\theta}(r_t + \gamma Q_{\hat{\theta}}^{\pi_k}(s_{t + 1}\sim P(\cdot |s_t,a_t),a_{t + 1}\sim \pi_k(\cdot |s_{t + 1})) - Q_{\theta}^{\pi_k}(s_t,a_t))^2$ where $\hat{\theta}$ denotes the parameters of a target network (Mnih et al., 2015) that are periodically updated from $\theta$ . In practice we use a replay buffer of samples in order to perform the policy evaluation step. The second step is a policy improvement step, which consists of optimizing the objective $\bar{J} (s,\pi) = \mathbb{E}_{\pi}[Q_{\theta}^{\pi_k}(s,a)]$ for states $s$ drawn from a state distribution $\mu (s)$ . In practice the state distribution samples are drawn from an experience replay. By improving $\bar{J}$ in all states $s$ , we improve our objective. To do so, a two-step procedure is performed.

First, we construct a non-parametric estimate $q$ such that $\bar{J}(s,q) \geq \bar{J}(s,\pi_k)$ . This is done by maximizing $\bar{J}(s,q)$ while ensuring that the solution, locally, stays close to the current policy $\pi_k$ ; i.e. $\mathbb{E}_{\mu(s)}[\mathrm{KL}(q(\cdot|s),\pi_k(\cdot|s))] < \epsilon$ . This optimization has a closed-form solution given as $q(a|s) \propto \pi_k(a|s)\exp\left(Q_{\theta}^{\pi_k}(s,a)/\eta\right)$ , where $\eta$ is a temperature parameter that can be computed by minimizing a convex dual function (Abdolmaleki et al., 2018b). Second, we project this non-parametric representation back onto a parameterized policy by solving the optimization problem $\pi_{k+1} = \arg \min_{\pi}\mathbb{E}_{\mu(s)}[\mathrm{KL}(q(a|s)\|\pi(a|s))]$ , where $\pi_{k+1}$ is the new and improved policy and where one typically employs additional regularization (Abdolmaleki et al., 2018a). Note that this amounts to supervised learning with samples drawn from $q(a|s)$ ; see Abdolmaleki et al.
(2018a) for details.

# 3 ROBUST MPO

To incorporate robustness into MPO, we focus on learning a worst-case value function in the policy evaluation step. Note that this policy evaluation step can be incorporated into any actor-critic algorithm. In particular, instead of optimizing the squared TD error, we optimize the worst-case squared TD error, which is defined as:

$$
\min_{\theta} \left(r_t + \gamma \inf_{p \in \mathcal{P}(s_t, a_t)} \left[ Q_{\hat{\theta}}^{\pi_k}(s_{t+1} \sim p(\cdot | s_t, a_t), a_{t+1} \sim \pi_k(\cdot | s_{t+1})) \right] - Q_{\theta}^{\pi_k}(s_t, a_t)\right)^2, \tag{1}
$$

where $\mathcal{P}(s_t, a_t)$ is an uncertainty set for the current state $s_t$ and action $a_t$ ; $\pi_k$ is the current network's policy, and $\hat{\theta}$ denotes the target network parameters. It is in this policy evaluation step (Line 3 in Algorithms 1, 2 and 3 in Appendix I) that the Bellman operators from the previous sections are applied.

Relation to MPO: In MPO, this robust policy evaluation step replaces the standard one. The robust Bellman operator (Iyengar, 2005) ensures that this process converges to a unique fixed point for the policy $\pi_k$ . This is achieved by repeated application of the robust Bellman operator during the policy evaluation step until convergence to the fixed point. Since the proposal policy $q(a|s)$ (see Section 2) is proportional to the robust action value estimate $Q_{\theta}^{\pi_k}(s,a)$ , it intuitively yields a robust policy, as the policy is generated from a worst-case value function. The fitting of the policy network to the proposal policy yields a robust network policy $\pi_{k + 1}$ .

Entropy-regularized MPO: Entropy-regularization encourages exploration and helps prevent early convergence to sub-optimal policies (Nachum et al., 2017).
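
For a finite uncertainty set, the worst-case TD loss in Eq. (1) amounts to replacing the usual bootstrapped target with an infimum over the per-model targets; the soft-robust variant used later in the experiments replaces the infimum with a uniform average. A minimal sketch with placeholder critic values follows; the function name and numbers are illustrative, not the paper's implementation:

```python
import numpy as np

def robust_td_loss(r_t, q_current, target_qs, gamma=0.99, robust=True):
    """Squared TD error with a worst-case (or soft-robust, averaged) target.

    target_qs holds one bootstrapped value Q_target(s'_p, a'_p) per transition
    model p in a finite uncertainty set P(s_t, a_t).
    """
    target_qs = np.asarray(target_qs, dtype=float)
    # Eq. (1): infimum over the uncertainty set; the soft-robust variant
    # replaces the infimum with a uniform average over the set.
    bootstrap = target_qs.min() if robust else target_qs.mean()
    return (r_t + gamma * bootstrap - q_current) ** 2

# Placeholder numbers standing in for target-critic evaluations under three
# perturbed models of the same (s_t, a_t) transition.
loss_robust = robust_td_loss(1.0, 5.0, [4.0, 6.5, 5.2], robust=True)
loss_soft = robust_td_loss(1.0, 5.0, [4.0, 6.5, 5.2], robust=False)
```

Minimizing the robust variant fits the critic to the most pessimistic model in the set, which is exactly the backup the robust Bellman operator performs.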
To incorporate these advantages, we extend the Robust Bellman operator (Iyengar, 2005) to robust and soft-robust entropy-regularized versions (see Appendices B and C respectively for a detailed overview and the corresponding derivations) and show that these operators are contraction mappings (Theorem 1 below and Theorem 2 in Appendix E) that yield the well-known value-iteration bound with respect to the max norm.

Theorem 1. The robust entropy-regularized Bellman operator $\mathcal{T}_{R - KL}^{\pi}$ for a fixed policy $\pi$ is a contraction operator. Specifically: $\forall U,V\in \mathbb{R}^{|S|}$ and $\gamma \in (0,1)$ , we have $\| \mathcal{T}_{R - KL}^{\pi}U - \mathcal{T}_{R - KL}^{\pi}V\| \leq \gamma \| U - V\|$ .

In addition, we extend MPO to Robust Entropy-regularized MPO (RE-MPO) and Soft RE-MPO (SRE-MPO) (see Appendix D for a detailed overview and derivations) and show that they perform at least as well as R-MPO and in some cases significantly better. All the derivations have been deferred to the Appendix. The corresponding algorithms for R-MPO, RE-MPO and SRE-MPO can be found in Appendix I.

# 4 EXPERIMENTS

We now present experiments on nine different continuous control domains (four of which we show in the paper; the rest can be found in Appendix H.4) from the DeepMind control suite (Tassa et al., 2018). In addition, we present an experiment on a high-dimensional, dexterous robotic hand called Shadow hand (ShadowRobot, 2019). In our experiments, we found that the entropy-regularized version of Robust MPO had similar, and in some cases slightly better, performance than the expected-return version of Robust MPO without entropy-regularization. We therefore decided to include experiments of our agent optimizing the entropy-regularized objective (non-robust, robust and soft-robust versions). This corresponds to (a) the non-robust E-MPO baseline, (b) Robust E-MPO (RE-MPO) and (c) Soft-Robust E-MPO (SRE-MPO).
From here on, it is assumed that the algorithms optimize for the entropy-regularized objective unless otherwise stated.

Appendix: In Appendix H.4, we present results of our agent optimizing for the expected return objective without entropy regularization (for the non-robust, robust and soft-robust versions). This corresponds to (a') the non-robust MPO baseline, (b') R-MPO and (c') SR-MPO.

The experiments are divided into three sections. The first section details the setup for robust and soft-robust training. The next section compares robust and soft-robust performance to the non-robust MPO baseline in each of the domains. The final section is a set of investigative experiments to gain additional insights into the performance of the robust and soft-robust agents.

Setup: For each domain, the robust agent is trained using a pre-defined uncertainty set consisting of three task perturbations1. Each of the three perturbations corresponds to a particular perturbation of the Mujoco domain. For example, in Cartpole, the uncertainty set consists of three different pole lengths. Both the robust and non-robust agents are evaluated on a test set of three unseen task perturbations. In the Cartpole example, this would correspond to pole lengths that the agent has not seen during training. The chosen values of the uncertainty set and evaluation set for each domain can be found in Appendix H.3. Note that it is common practice to manually select the pre-defined uncertainty set and the unseen test environments. Practitioners often have significant domain knowledge and can utilize this when choosing the uncertainty set (Derman & Mannor, 2019; Derman et al., 2018; Di Castro et al., 2012; Mankowitz et al., 2018a; Tamar et al., 2014).

During training, the robust, soft-robust and non-robust agents act in an unperturbed environment which we refer to as the nominal environment.
During the TD learning update, the robust agent calculates an infimum over the Q values of the next-state realizations from each of the uncertainty set task perturbations (the soft-robust agent computes an average, which corresponds to a uniform distribution over $\mathcal{P}$ , instead of an infimum). Each transition model is a different instantiation of the Mujoco task. The robust and soft-robust agents are exposed to more state realizations than the non-robust agent. However, as we show in our ablation studies, significantly increasing the number and diversity of the samples for the non-robust agent still results in poor performance compared to the robust and soft-robust agents.

# 4.1 MAIN EXPERIMENTS

Mujoco Domains: We compare the performance of non-robust MPO to the robust and soft-robust variants. Each training run consists of $30k$ episodes and the experiments are repeated 5 times. In the bar plots, the y-axis indicates the average reward (with standard deviation) and the x-axis indicates different unseen evaluation environment perturbations starting from the first perturbation (Env0) onwards. Increasing environment indices correspond to increasingly large perturbations. For example, in Figure 1 (top left), Env0, Env1 and Env2 for the Cartpole Balance task represent the pole perturbed

![](images/7969897f3bc5b7d84655fa4ac8fec11b5f1e0a1833e2a3393d31124014499763.jpg)
Figure 1: Three domains showing RE-MPO (blue), SRE-MPO (green) and E-MPO (red). The additional six domains can be found in the appendix. In addition, the results for R-MPO, SR-MPO and MPO can be found in Appendix H.4 with similar results.
+ +![](images/0c1e1617926979fe17f241d226492cc120ba722d9bbbf32be756a009b6e8188f.jpg) + +![](images/1ded77c88951024b1a20b8addfbb03c18253f8a6eb5ff0cabe63ae26db48acd5.jpg) + +![](images/caa260edc08c3d754e3940e4a2cabb845c1bc875237e801bbf218b83118ef0c8.jpg) +Figure 2: (1) The Shadow hand domain (left) and results for RE-MPO and E-MPO (middle left). (2) A larger test set: the figures show the performance of RE-MPO (blue), SRE-MPO (green) and E-MPO (red) for a test set that extends from the nominal environment to significant perturbations outside the training set for Cartpole Balance (middle right) and Pendulum Swingup (right). + +![](images/ccedef9d623706106f868b1858c2c09e5d39613dcc12fa9e3de6ed4224445873.jpg) + +![](images/43950fe6e39fb08965b900453f0498006c15bc01460069e0f70d68c9c6a86604.jpg) + +![](images/aa55cde8553e16ee12390d547f9e30fbaa8084812468689f3434a4c2886a4a1b.jpg) + +to lengths of 2.0, 2.2 and 2.3 meters respectively. Figure 1 shows the performance of three Mujoco domains (The remaining six domains are in Appendix H.4). The bar plots indicate the performance of E-MPO (red), RE-MPO (blue) and SRE-MPO (green) on the held-out test perturbations. This color scheme is consistent throughout the experiments unless otherwise stated. As can be seen in each of the figures, RE-MPO attains improved performance over E-MPO. This same trend holds true for all nine domains. SRE-MPO outperforms the non-robust baseline in all but the Cheetah domain, but is not able to outperform RE-MPO. An interesting observation can be seen in the video for the Walker walk task (https://sites.google.com/view/robust-rl), where the RE-MPO agent learns to 'drag' its leg which is a fundamentally different policy to that of the non-robust agent which learns a regular gait movement. 

Appendix: The appendix contains additional experiments with the non entropy-regularized versions of the algorithms, where again the robust (R-MPO) and soft-robust (SR-MPO) versions of MPO outperform the non-robust version (MPO).

**Shadow hand domain:** This domain consists of a dexterous, simulated robotic hand called Shadow hand whose goal is to rotate a cube into a pre-defined orientation (ShadowRobot, 2019). The state space is a 79-dimensional vector consisting of angular positions and velocities, the cube orientation and the goal orientation. The action space is a 20-dimensional vector consisting of the desired angular velocities of the hand actuators. The reward is a function of the current orientation of the cube relative to the desired orientation. The uncertainty set consists of three models which correspond to increasingly smaller sizes of the cube that the agent needs to orient. The agent is evaluated on a different, unseen holdout set. The values can be found in Appendix H.3. We compare RE-MPO to E-MPO trained agents. Episodes are 200 steps long, corresponding to approximately 10 seconds of interaction. Each experiment is run for $6k$ episodes and is repeated 5 times. As seen in Figure 2, RE-MPO outperforms E-MPO, especially as the size of the cube decreases (from Env0 to Env2). This is an especially challenging problem due to the high dimensionality of the task. As seen in the videos (https://sites.google.com/view/robust-rl), the RE-MPO agent is able to manipulate significantly smaller cubes than it had observed in the nominal simulator.

![](images/68ed9189cf2b9b976c8e90a0637124f66f7e2141d00ad9d1139c0b372e83a6a4.jpg)
Figure 3: (1) Domain Randomization (DR): Domain randomization performance for the Cartpole Balance (left) and Pendulum Swingup (middle left) tasks.
(2) Stochastic Value Gradients (SVG): The two right images show the performance of Robust Entropy-regularized SVG (RE-SVG) and SRE-SVG compared to E-SVG for Pendulum and Cartpole respectively.

![](images/390318a40fe9fb09d38acd0f55ae064925b8cf5054b4008e3ff7e54b47930169.jpg)

![](images/c0b1b0914c83f1457b5e97bbd7a5594460367b961e09295aeb330bf932b0cc6f.jpg)

![](images/bf303f2e7073651715e03eff03f3e813c8cc162e7691245f1a46e3733332aff8.jpg)

# 4.2 INVESTIGATIVE EXPERIMENTS

This section investigates and tries to answer various questions that may help explain the performance of the robust and non-robust agents respectively. Each investigative experiment is conducted on the Cartpole Balance and Pendulum Swingup domains.

What if we increase the number of training samples? One argument is that the robust agent has access to more samples since it calculates the Bellman update using the infimum of three different environment realizations. To balance this effect, the non-robust agent was trained for three times more episodes than the robust agents. Training with significantly more samples does not increase the performance of the non-robust agent and can even decrease it, as a result of overfitting to the nominal domain. See Appendix H.5, Figure 12 for the results.

What about Domain Randomization? A subsequent point would be that the robust agent sees more diverse examples, drawn from each of the perturbed environments, than the non-robust agent. We therefore trained the non-robust agent in a domain randomization setting (Andrychowicz et al., 2018; Peng et al., 2018). We compare our method to two variants of DR. The first variant, Limited-DR, uses the same perturbations as in the uncertainty set of RE-MPO. Here, we compare which method better utilizes a limited set of perturbations to learn a robust policy.
As seen in Figure 3 (left and middle left for Cartpole Balance and Pendulum Swingup respectively), RE-MPO yields significantly better performance given the limited set of perturbations. The second variant, Full-DR, performs regular DR on a significantly larger set of 100 perturbations in the Pendulum Swingup task. In this setting, DR, which uses 30 times more perturbations, improves but still does not outperform RE-MPO (which still only uses three perturbations). This result can be seen in Figure 13, Appendix H.5.

What is the intuitive difference between DR and RE-MPO/SRE-MPO? DR defines the loss to be the expectation of the TD error over the uncertainty set. Each TD error is computed using a state, action, reward, next state $\langle s, a, r, s' \rangle$ transition from a particular perturbed environment (selected uniformly from the uncertainty set). These TD errors are then averaged together. This is a form of data augmentation and the resulting behaviour is the average across all of this data. RE-MPO/SRE-MPO: In the case of robustness, the TD error is computed such that the target action value function is a worst-case value function with respect to the uncertainty set. This means that the learned policy is explicitly searching for adversarial examples during training to account for worst-case performance. In the soft-robust case, the subtle yet important difference (as seen in the experiments) with DR is that the TD loss is computed with the average target action value function with respect to next states (as opposed to averaging the TD errors of each individual perturbed environment as in DR). This results in different gradient updates being used to update the action value function compared to DR.

A larger test set: It is also useful to view the performance of the agent from the nominal environment to increasingly large perturbations in the unseen test set (see Appendix H.3 for values).
These graphs can be seen in Figure 2 for Cartpole Balance and Pendulum Swingup respectively. As expected, the robust agent maintains a higher level of performance compared to the non-robust agent. Initially, the soft-robust agent outperforms the robust agent, but its performance degrades as the perturbations increase, which is consistent with the results of Derman et al. (2018). In addition, the robust and soft-robust agents are competitive with the non-robust agent in the nominal environment.

![](images/90dcbdd5ba0df8a8deb54ea0a79ee2f2fd83cb1ca2b553d5aef536d110baa142.jpg)
Figure 4: Modifying the uncertainty set: Pendulum Swingup when modifying the third perturbation of the uncertainty set to values of 1.2 (left), 1.3 (middle) and 2.0 (right) meters respectively.

![](images/2022bd48a55286d8ad537b606c3bbf71071131050a1550a38233c8d70eef203e.jpg)

![](images/3b89d7a0ecf2d7a7af043c82949fe44e13bd59f09caadc5e32fa3947a7449673.jpg)

Modifying the uncertainty set: We now evaluate the performance of the agent for different uncertainty sets. For Pendulum Swingup, the original uncertainty set values of the pendulum arm are 1.0, 1.1 and 1.4 meters. We modified the final perturbation to values of 1.2, 1.3 and 2.0 meters respectively. The agent is evaluated on unseen lengths of 1.5, 1.6 and 1.7 meters. An increase in performance can be seen in Figure 4 as the third perturbation approaches that of the unseen evaluation environments. Thus it appears that if the training set approximately captures the dynamics of the unseen test environments, then the robust agent is able to adapt to them. The results for Cartpole Balance can be seen in Appendix H.5, Figure 14.

What about incorporating robustness into other algorithms? To show the generality of this robustness approach, we incorporate it into the critic of the Stochastic Value Gradients (SVG) continuous control RL algorithm (see Appendix H.1).
As seen in Figure 3, Robust Entropy-regularized SVG (RE-SVG) and Soft RE-SVG (SRE-SVG) significantly outperform the non-robust Entropy-regularized SVG (E-SVG) baseline in both Cartpole and Pendulum. + +Robust entropy-regularized return vs. robust expected return: When comparing the robust entropy-regularized return performance to the robust expected return, we found that the entropy-regularized return appears to do no worse than the expected return. In some cases, e.g., Cheetah, the entropy-regularized objective performs significantly better (see Appendix H.5, Figure 11). + +Different Nominal Models: In this paper the nominal model was always chosen as the smallest perturbation parameter value from the uncertainty set. This was done to highlight the strong performance of robust policies to increasingly large environment perturbations. However, what if we set the nominal model as the median or largest perturbation with respect to the chosen uncertainty set for each agent? As seen in Appendix H.5, Figure 15, the closer (further) the nominal model is to (from) the holdout set, the better (worse) the performance of the non-robust agent. However, in all cases, the robust agent still performs at least as well as (and sometimes better than) the non-robust agent. + +What about learning the uncertainty set from offline data? In real-world settings, such as robotics and industrial control centers (Gao, 2014), there may be a nominal simulator available as well as offline data captured from the real-world system(s). These data could be used to train transition models to capture the dynamics of the task at hand. For example, a set of robots in a factory might each be performing the same task, such as picking up a box. In industrial control cooling centers, there are a number of cooling units in each center responsible for cooling the overall system. 
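
One way to picture this setup: fit one simple dynamics model per data source and collect the fitted models into an uncertainty set. The sketch below uses least-squares linear models on synthetic per-unit logs; the model class, dimensions and all names are our own assumptions, not the paper's learned transition models:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_linear_dynamics(states, actions, next_states):
    """Least-squares fit of s' ~ [s; a; 1] @ W on one unit's offline dataset."""
    X = np.hstack([states, actions, np.ones((len(states), 1))])
    W, *_ = np.linalg.lstsq(X, next_states, rcond=None)
    return W

# Hypothetical offline logs from three units with slightly different dynamics.
state_dim, action_dim, n = 4, 2, 1000
uncertainty_set = []
for k in range(3):
    A = np.eye(state_dim) * (1.0 + 0.01 * k)      # per-unit drift in the dynamics
    B = rng.normal(0.0, 0.1, (state_dim, action_dim))
    s = rng.normal(size=(n, state_dim))
    a = rng.normal(size=(n, action_dim))
    s_next = s @ A.T + a @ B.T + rng.normal(0.0, 0.01, (n, state_dim))
    uncertainty_set.append(fit_linear_dynamics(s, a, s_next))
```

Each fitted `W` predicts next states via `[s, a, 1] @ W`; the resulting set of models can then stand in for $\mathcal{P}(s,a)$ in the robust TD update of Eq. (1), which is the role the learned transition models play in DDR-MPO below.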
In both of these examples, each individual robot and cooling unit operate with slightly different dynamics due to slight fluctuations in the specifications of the designed system, wear-and-tear as well as sensor calibration errors. As a result, an uncertainty set of transition models can be trained from data generated by each robot or cooling unit. + +However, can we train a set of transition models from these data, utilize them as the uncertainty set in R-MPO and still yield robust performance when training on a nominal simulator? To answer this question, we mimicked the above scenarios by generating datasets for the Cartpole Swingup and the Pendulum swingup tasks. For Cartpole swingup, we varied the length of the pole and generated a dataset for each pole length. For Pendulum Swingup, we varied the mass of the pole and generated the corresponding datasets. We then trained transition models on increasingly large data batches ranging from 100 to one million datapoints for each pole length and pole mass respectively. We then utilized each set of transition models for different data batch sizes as the uncertainty set and ran R-MPO on each task. We term this variant of R-MPO, Data-Driven Robust MPO (DDR-MPO). The results + +![](images/0578d72b37bdeeb54c96461aec53a130d64ae890e8d70856f7af51face4e7a0b.jpg) +Figure 5: Training uncertainty sets of transition models on different batch sizes of offline data. The performance of Data Driven R-MPO (DDR-MPO) can be seen in the figures above for Cartpole Swingup (left) and Pendulum Swingup (right) respectively. + +![](images/0445cb3c6f3e660e730ddf77cac52f1cee2496ab67b1e5880b1915e4b1f06b1b.jpg) + +can be seen in Figure 5. There are a number of interesting observations from this analysis. (1) As expected, on small batches of data, the models are too inaccurate and result in poor performance. 
(2) An interesting insight is that as the data batch size increases, DDR-MPO starts to outperform R-MPO, especially for increasingly large perturbations. The hypothesis here is that, because the transition models are more accurate but not perfect, adversarial examples are generated in a small region around the nominal next-state observation, yielding an increasingly robust agent. (3) As the batch size increases further, and the transition models get increasingly close to the ground-truth models, DDR-MPO converges to the performance of R-MPO.

# 5 RELATED WORK

From a theoretical perspective, Robust Bellman operators were introduced in (Iyengar, 2005; Nilim & Ghaoui, 2005; Wiesemann et al., 2013; Hansen & Sargent, 2011; Tamar et al., 2014). Our theoretical work extends this operator to the entropy-regularized setting, for both the robust and soft-robust formulations, and modifies the MPO optimization formulation accordingly. A more closely related work from a theoretical perspective is that of Grau-Moya et al. (2016), who introduce a formulation for robustness to model misspecification. Their work is a special case of robust MDPs: they introduce a robust Bellman operator that regularizes the immediate reward with two KL terms, one capturing model uncertainty with respect to a base model, and the other being entropy regularization with respect to a base policy. Our work differs from theirs in a number of respects: (1) Their uncertainty set is represented by a KL constraint, which has the effect of restricting the set of admissible transition models. Our setup does not have these restrictions. (2) The uncertainty set elements of Grau-Moya et al. (2016) output a probability distribution over model parameter space, whereas the uncertainty set elements in our formulation output a distribution over next states.

Mankowitz et al.
(2018a) learn robust options, also known as temporally extended actions (Sutton et al., 1999), using policy gradient. Robust solutions tend to be overly conservative. To combat this, Derman et al. (2018) extend the actor-critic two-timescale stochastic approximation algorithm to a 'soft-robust' formulation to yield a less conservative solution. Di Castro et al. (2012) introduce a robust implementation of Deep Q Networks (Mnih et al., 2015). Domain Randomization (DR) (Andrychowicz et al., 2018; Peng et al., 2018) is a technique whereby an agent trains on different perturbations of the environment. The agent batch-averages the learning errors from these different perturbed trajectories to yield an agent that is robust to environment perturbations. This can be viewed as a data augmentation technique where the resulting behaviour is the average across all of the data. There are also works that look into robustness to action stochasticity (Fox et al., 2015; Braun et al., 2011; Rubin et al., 2012).

# 6 CONCLUSION

We have presented a framework for incorporating robustness - to perturbations in the transition dynamics, which we refer to as model misspecification - into continuous control RL algorithms. This framework is suited to continuous control algorithms that learn a value function, such as an actor-critic setup. We specifically focused on incorporating robustness into MPO as well as our entropy-regularized version of MPO (E-MPO). In addition, we presented an experiment which incorporates robustness into the SVG algorithm. From a theoretical standpoint, we adapted MPO to an entropy-regularized version (E-MPO); we then incorporated robustness into the policy evaluation step of both algorithms to yield Robust MPO (R-MPO) and Robust E-MPO (RE-MPO) as well as the soft-robust variants (SR-MPO/SRE-MPO).
This was achieved by deriving the corresponding robust and soft-robust entropy-regularized Bellman operators to ensure that the policy evaluation step converges in each case. We have presented extensive experiments showing that the robust versions outperform the non-robust counterparts on nine Mujoco domains as well as on a high-dimensional, dexterous, simulated robotic hand called Shadow hand (ShadowRobot, 2019). We also provide numerous investigative experiments to understand the robust and soft-robust policies in more detail. This includes an experiment showing improved robust performance over R-MPO when using an uncertainty set of transition models learned from offline data.

# REFERENCES

Abbas Abdolmaleki, Jost Tobias Springenberg, Jonas Degrave, Steven Bohez, Yuval Tassa, Dan Belov, Nicolas Heess, and Martin A. Riedmiller. Relative entropy regularized policy iteration. CoRR, abs/1812.02256, 2018a. URL http://arxiv.org/abs/1812.02256.
Abbas Abdolmaleki, Jost Tobias Springenberg, Yuval Tassa, Remi Munos, Nicolas Heess, and Martin Riedmiller. Maximum a posteriori policy optimisation. arXiv preprint arXiv:1806.06920, 2018b.
Marcin Andrychowicz, Bowen Baker, Maciek Chociej, Rafal Jozefowicz, Bob McGrew, Jakub Pachocki, Arthur Petron, Matthias Plappert, Glenn Powell, Alex Ray, et al. Learning dexterous in-hand manipulation. arXiv preprint arXiv:1808.00177, 2018.
Daniel A. Braun, Pedro A. Ortega, Evangelos Theodorou, and Stefan Schaal. Path integral control and bounded rationality. 2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), pp. 202-209, 2011.
Paul F. Christiano, Zain Shah, Igor Mordatch, Jonas Schneider, Trevor Blackwell, Joshua Tobin, Pieter Abbeel, and Wojciech Zaremba. Transfer from simulation to real world through learning deep inverse dynamics model. CoRR, abs/1610.03518, 2016. URL http://arxiv.org/abs/1610.03518.
Esther Derman, Daniel J Mankowitz, Timothy A Mann, and Shie Mannor. Soft-robust actor-critic policy-gradient.
arXiv preprint arXiv:1803.04848, 2018.
Esther Derman, Daniel J Mankowitz, Timothy A Mann, and Shie Mannor. A Bayesian approach to robust reinforcement learning. In Association for Uncertainty in Artificial Intelligence, 2019.
Coline Devin, Abhishek Gupta, Trevor Darrell, Pieter Abbeel, and Sergey Levine. Learning modular neural network policies for multi-task and multi-robot transfer. 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 2169-2176, 2017.
Dotan Di Castro, Aviv Tamar, and Shie Mannor. Policy gradients with variance related risk criteria. arXiv preprint arXiv:1206.6404, 2012.
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. In ICML, 2016.
Gabriel Dulac-Arnold, Daniel J. Mankowitz, and Todd Hester. Challenges of real-world reinforcement learning. CoRR, abs/1904.12901, 2019. URL http://arxiv.org/abs/1904.12901.
Roy Fox, Ari Pakman, and Naftali Tishby. G-learning: Taming the noise in reinforcement learning via soft updates. CoRR, abs/1512.08562, 2015. URL http://arxiv.org/abs/1512.08562.
Jim Gao. Machine learning applications for data center optimization. 2014.
Jordi Grau-Moya, Felix Leibfried, Tim Genewein, and Daniel A. Braun. Planning with information-processing constraints and model uncertainty in Markov decision processes. CoRR, abs/1604.02080, 2016. URL http://arxiv.org/abs/1604.02080.
Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, and Sergey Levine. Soft actor-critic algorithms and applications. CoRR, abs/1812.05905, 2018.

Lars Peter Hansen and Thomas Sargent. *Robustness*. Princeton University Press, 11 2011. ISBN 9780691114422.
Nicolas Heess, Gregory Wayne, David Silver, Tim Lillicrap, Tom Erez, and Yuval Tassa. Learning continuous control policies by stochastic value gradients.
In Advances in Neural Information Processing Systems 28 (NIPS). 2015a.
Nicolas Heess, Gregory Wayne, David Silver, Timothy Lillicrap, Tom Erez, and Yuval Tassa. Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems, pp. 2944-2952, 2015b.
Garud N Iyengar. Robust dynamic programming. Mathematics of Operations Research, 30(2): 257-280, 2005.
Dmitry Kalashnikov, Alex Irpan, Peter Pastor, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Mrinal Kalakrishnan, Vincent Vanhoucke, and Sergey Levine. Scalable deep reinforcement learning for vision-based robotic manipulation. In CoRL, 2018.
Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
Daniel J Mankowitz, Timothy A Mann, Pierre-Luc Bacon, Doina Precup, and Shie Mannor. Learning robust options. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018a.
Daniel J. Mankowitz, Augustin Zidek, André Barreto, Dan Horgan, Matteo Hessel, John Quan, Junhyuk Oh, Hado van Hasselt, David Silver, and Tom Schaul. Unicorn: Continual learning with a universal, off-policy agent. CoRR, abs/1802.08294, 2018b. URL http://arxiv.org/abs/1802.08294.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, and Dale Schuurmans. Bridging the gap between value and policy based reinforcement learning. In Advances in Neural Information Processing Systems, pp. 2775-2785, 2017.
Arnab Nilim and Laurent El Ghaoui. Robust control of markov decision processes with uncertain transition matrices.
Operations Research, 53:780-798, 2005.
Xue Bin Peng, Marcin Andrychowicz, Wojciech Zaremba, and Pieter Abbeel. Sim-to-real transfer of robotic control with dynamics randomization. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1-8. IEEE, 2018.
Divyam Rastogi, Ivan Koryakovskiy, and Jens Kober. Sample-efficient reinforcement learning via difference models. In Machine Learning in Planning and Control of Robot Motion Workshop at ICRA, 2018.
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning (ICML), 2014.
Martin A. Riedmiller, Roland Hafner, Thomas Lampe, Michael Neunert, Jonas Degrave, Tom Van de Wiele, Volodymyr Mnih, Nicolas Heess, and Jost Tobias Springenberg. Learning by playing - solving sparse reward tasks from scratch. In ICML, 2018.
Jonathan Rubin, Ohad Shamir, and Naftali Tishby. Trading value and information in mdps. 2012.
Andrei A. Rusu, Sergio Gomez Colmenarejo, Caglar Gulçehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy distillation. CoRR, abs/1511.06295, 2016.
John Schulman, Xi Chen, and Pieter Abbeel. Equivalence between policy gradients and soft q-learning. arXiv preprint arXiv:1704.06440, 2017.
ShadowRobot. ShadowRobot Dexterous Hand. https://www.shadowrobot.com/wp-content/uploads/shadow_dexterous_hand_technical_specification_E_20190221.pdf, 2019. [Online; accessed 25-September-2019].
Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998. URL http://www-anw.cs.umass.edu/~rich/book/the-book.html.
Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 2018.
Richard S Sutton, Doina Precup, and Satinder Singh. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning.
Artificial intelligence, 112(1-2):181-211, 1999.
Aviv Tamar, Shie Mannor, and Huan Xu. Scaling up robust mdps using function approximation. In International Conference on Machine Learning, pp. 181-189, 2014.
Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, Timothy P. Lillicrap, and Martin A. Riedmiller. Deepmind control suite. CoRR, abs/1801.00690, 2018. URL http://arxiv.org/abs/1801.00690.
Yee Whye Teh, Victor Bapst, Wojciech Czarnecki, John Quan, James Kirkpatrick, Raia Hadsell, Nicolas Heess, and Razvan Pascanu. Distral: Robust multitask reinforcement learning. In NIPS, 2017.
Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J. Mankowitz, and Shie Mannor. A deep hierarchical approach to lifelong learning in minecraft. CoRR, abs/1604.07255, 2016. URL http://arxiv.org/abs/1604.07255.
Wolfram Wiesemann, Daniel Kuhn, and Berç Rustem. Robust markov decision processes. Math. Oper. Res., 38(1):153-183, February 2013. ISSN 0364-765X. doi: 10.1287/moor.1120.0566. URL http://dx.doi.org/10.1287/moor.1120.0566.
Markus Wulfmeier, Ingmar Posner, and Pieter Abbeel. Mutual alignment transfer learning. arXiv preprint arXiv:1707.07907, 2017.

# A BACKGROUND

Entropy-regularized Reinforcement Learning: Entropy regularization encourages exploration and helps prevent early convergence to sub-optimal policies (Nachum et al., 2017). We make use of the relative entropy-regularized RL objective defined as $J_{\mathrm{KL}}(\pi ;\bar{\pi}) = \mathbb{E}^{\pi}[\sum_{t = 0}^{\infty}\gamma^{t}(r_{t} - \tau \mathrm{KL}(\pi (\cdot |s_{t})||\bar{\pi} (\cdot |s_{t})))]$ where $\tau$ is a temperature parameter and KL $(\pi (\cdot |s_t)||\bar{\pi} (\cdot |s_t))$ is the Kullback-Leibler (KL) divergence between the current policy $\pi$ and a reference policy $\bar{\pi}$ given a state $s_t$ (Schulman et al., 2017).
The entropy-regularized value function is defined as $V_{\mathrm{KL}}^{\pi}(s;\bar{\pi}) = \mathbb{E}^{\pi}[\sum_{t = 0}^{\infty}\gamma^{t}(r_{t} - \tau \mathrm{KL}(\pi (\cdot |s_{t})||\bar{\pi} (\cdot |s_{t})))|s_{0} = s]$ . Intuitively, augmenting the rewards with the KL term regularizes the policy by forcing it to be 'close' in some sense to the base policy.

# B ROBUST ENTROPY-REGULARIZED BELLMAN OPERATOR

(Relative-)Entropy regularization has been shown to encourage exploration and prevent early convergence to sub-optimal policies (Nachum et al., 2017). To take advantage of this idea when developing a robust RL algorithm, we extend the robust Bellman operator to a robust entropy-regularized Bellman operator and prove that it is a contraction. We also show that well-known value iteration bounds can be attained using this operator. We first define the robust entropy-regularized value function as $V_{\mathrm{R - KL}}^{\pi}(s;\bar{\pi}) = \mathbb{E}_{a\sim \pi (\cdot |s)}[r(s,a) - \tau \log \frac{\pi(\cdot|s)}{\bar{\pi}(\cdot|s)} +\gamma \inf_{p\in \mathcal{P}}\mathbb{E}_{s'\sim p(\cdot |s,a)}[V_{\mathrm{R - KL}}^{\pi}(s';\bar{\pi})]]$ . For the remainder of this section, we drop the sub- and superscripts, as well as the reference policy conditioning, from the value function $V_{\mathrm{R - KL}}^{\pi}(s;\bar{\pi})$ , and simply represent it as $V(s)$ for brevity. We define the robust entropy-regularized Bellman operator for a fixed policy $\pi$ in Equation 2, and show it is a max-norm contraction (Theorem 1).

$$
\mathcal {T} _ {\mathrm {R - K L}} ^ {\pi} V (s) = \mathbb {E} _ {a \sim \pi (\cdot | s)} [ r (s, a) - \tau \log \frac {\pi (\cdot | s)}{\bar {\pi} (\cdot | s)} + \gamma \inf _ {p \in \mathcal {P}} \mathbb {E} _ {s ^ {\prime} \sim p (\cdot | s, a)} [ V (s ^ {\prime}) ] ], \tag {2}
$$

Theorem 1. The robust entropy-regularized Bellman operator $\mathcal{T}_{R - KL}^{\pi}$ for a fixed policy $\pi$ is a contraction operator.
Specifically: $\forall U, V \in \mathbb{R}^{|S|}$ and $\gamma \in (0,1)$ , we have $\| \mathcal{T}_{R - KL}^{\pi}U - \mathcal{T}_{R - KL}^{\pi}V\| \leq \gamma \| U - V\|$ .

The proof can be found in Appendix E (Theorem 1). Using the optimal robust entropy-regularized Bellman operator $\mathcal{T}_{\mathrm{R - KL}} = \sup_{\pi}\mathcal{T}_{\mathrm{R - KL}}^{\pi}$ , which is shown to also be a contraction operator in Appendix E (Theorem 2), a standard value iteration error bound can be derived (Appendix E, Corollary 1).

# C SOFT-ROBUST ENTROPY-REGULARIZED BELLMAN OPERATOR

In this section, we derive a soft-robust entropy-regularized Bellman operator and show that it is also a $\gamma$ -contraction in the max norm. First, we define the average transition model $\bar{p} = \mathbb{E}^{p\sim w}[p]$ , i.e., the transition model averaged according to some distribution $w$ over the uncertainty set $\mathcal{P}$ . This average transition model induces an average stationary distribution (see Derman et al. (2018)). The soft-robust entropy-regularized value function is defined as $V_{\mathrm{SR - KL}}^{\pi}(s;\bar{\pi}) = \mathbb{E}_{a\sim \pi (\cdot |s)}[r(s,a) - \tau \log \frac{\pi(\cdot|s)}{\bar{\pi}(\cdot|s)}] + \gamma \mathbb{E}_{s'\sim \bar{p} (\cdot |s,a)}[V_{\mathrm{SR - KL}}^{\pi}(s';\bar{\pi})]$ . Again, for ease of notation, we denote $V_{\mathrm{SR - KL}}^{\pi}(s;\bar{\pi}) = V(s)$ for the remainder of the section.
The soft-robust entropy-regularized Bellman operator for a fixed policy $\pi$ is defined as:

$$
\mathcal {T} _ {\mathrm {S R - K L}} ^ {\pi} V (s) = \mathbb {E} _ {a \sim \pi (\cdot | s)} [ r (s, a) - \tau \log \frac {\pi (\cdot | s)}{\bar {\pi} (\cdot | s)} + \gamma \mathbb {E} _ {s ^ {\prime} \sim \bar {p} (\cdot | s, a)} [ V (s ^ {\prime}) ] ], \tag {3}
$$

which is also a contraction mapping (see Appendix F, Theorem 3) and yields the same bound as Corollary 1 for the optimal soft-robust Bellman operator derived in Appendix F, Theorem 4.

# D ROBUST ENTROPY-REGULARIZED POLICY EVALUATION

To extend robust policy evaluation to robust entropy-regularized policy evaluation, two key steps need to be performed: (1) optimize for the entropy-regularized expected return as opposed to the regular expected return, and modify the TD update accordingly; (2) incorporate robustness into the entropy-regularized expected return, and modify the entropy-regularized TD update. To achieve (1), we define the entropy-regularized expected return as $Q_{\mathrm{KL}}^{\pi_k}(s,a;\bar{\pi}) = r(s,a) - \tau \mathrm{KL}(\pi_k(\cdot |s)\| \bar{\pi} (\cdot |s)) + \mathbb{E}_{s'\sim p(\cdot |s,a)}[V_{\mathrm{KL}}^{\pi_k}(s';\bar{\pi})]$ , and show in Appendix G that performing policy evaluation with the entropy-regularized value function is equivalent to optimizing the entropy-regularized squared TD error (the same as Equation 4, only omitting the inf operator).
To achieve (2), we optimize for the robust entropy-regularized expected return objective defined as $Q_{\mathrm{R - KL}}^{\pi_k}(s,a;\bar{\pi}) = r(s,a) - \tau \mathrm{KL}(\pi_k(\cdot |s)\| \bar{\pi} (\cdot |s)) + \inf_{p\in \mathcal{P}}\mathbb{E}_{s'\sim p(\cdot |s,a)}[V_{\mathrm{R - KL}}^{\pi_k}(s';\bar{\pi})]$ , yielding the robust entropy-regularized squared TD error:

$$
\begin{array}{l} \min _ {\theta} \left(r _ {t} + \gamma \inf _ {p \in \mathcal {P} \left(s _ {t}, a _ {t}\right)} \left[ \widetilde {Q} _ {\mathrm {R - K L}, \hat {\theta}} ^ {\pi_ {k}} \left(s _ {t + 1} \sim p (\cdot | s _ {t}, a _ {t}), a _ {t + 1} \sim \pi_ {k} (\cdot | s _ {t + 1}); \bar {\pi}\right) \right. \right. \tag {4} \\ \left. \left. - \tau \mathrm {K L} \left(\pi_ {k} (\cdot | s _ {t + 1} \sim p (\cdot | s _ {t}, a _ {t})) \| \bar {\pi} (\cdot | s _ {t + 1} \sim p (\cdot | s _ {t}, a _ {t}))\right) \right] - \widetilde {Q} _ {\mathrm {R - K L}, \theta} ^ {\pi_ {k}} (s _ {t}, a _ {t}; \bar {\pi})\right) ^ {2}, \\ \end{array}
$$

where $Q_{\mathrm{R - KL}}^{\pi_k}(s,a;\bar{\pi}) = \widetilde{Q}_{\mathrm{R - KL}}^{\pi_k}(s,a;\bar{\pi}) - \tau \mathrm{KL}(\pi_k(\cdot |s)\| \bar{\pi} (\cdot |s))$ . For the soft-robust setting, we remove the infimum from the TD update and replace the next-state transition function $p(\cdot |s_t,a_t)$ with the average next-state transition function $\bar{p} (\cdot |s_t,a_t)$ .

Relation to MPO: As in the previous section, this step replaces the policy evaluation step of MPO. Our robust entropy-regularized Bellman operator $\mathcal{T}_{\mathrm{R - KL}}^{\pi_k}$ and soft-robust entropy-regularized Bellman operator $\mathcal{T}_{\mathrm{SR - KL}}^{\pi_k}$ ensure that this process converges to a unique fixed point for the policy $\pi_k$ in the robust and soft-robust cases, respectively. We use $\pi_{k - 1}$ as the reference policy $\bar{\pi}$ .
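To make the operator concrete, here is a minimal tabular sketch of the robust entropy-regularized Bellman operator of Equation 2, taking the uncertainty set to be a finite collection of transition matrices (the function name, shapes, and the finite-set assumption are illustrative, not from the paper); it also numerically checks the max-norm contraction of Theorem 1 on random value functions.

```python
import numpy as np

def robust_kl_bellman(V, policy, ref_policy, R, models, tau=0.1, gamma=0.9):
    """Apply the robust entropy-regularized Bellman operator (Equation 2)
    for a fixed policy: expected reward minus the KL penalty, plus the
    discounted worst-case next-state value over the uncertainty set."""
    kl = np.sum(policy * np.log(policy / ref_policy), axis=1)  # KL(pi || ref) per state
    # Worst case over a finite uncertainty set: each P has shape (S, A, S).
    next_vals = np.min([P @ V for P in models], axis=0)        # shape (S, A)
    return np.sum(policy * (R + gamma * next_vals), axis=1) - tau * kl

rng = np.random.default_rng(0)
S, A, K = 4, 3, 5
R = rng.random((S, A))
models = [rng.dirichlet(np.ones(S), size=(S, A)) for _ in range(K)]
policy = rng.dirichlet(np.ones(A), size=S)
ref = rng.dirichlet(np.ones(A), size=S)

# Empirical check of Theorem 1: ||T^pi U - T^pi V|| <= gamma ||U - V||.
gamma = 0.9
U, V = rng.normal(size=S), rng.normal(size=S)
lhs = np.max(np.abs(robust_kl_bellman(U, policy, ref, R, models, gamma=gamma)
                    - robust_kl_bellman(V, policy, ref, R, models, gamma=gamma)))
assert lhs <= gamma * np.max(np.abs(U - V)) + 1e-12
```

The reward and KL terms cancel between the two applications, so the contraction hinges only on the fact that a minimum over models is 1-Lipschitz in the value function, mirroring the proof in Appendix E.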
The pseudo code for the R-MPO, RE-MPO and Soft-Robust Entropy-regularized MPO (SRE-MPO) algorithms can be found in Appendix I (Algorithms 1, 2 and 3 respectively). + +# E PROOFS + +# Theorem 1. + +Proof. We follow the proofs from (Tamar et al., 2014; Iyengar, 2005), and adapt them to account for the additional entropy regularization for a fixed policy $\pi$ . Let $U, V \in \mathbb{R}^{|S|}$ , and $s \in S$ an arbitrary state. Assume $\mathcal{T}_{\mathrm{R - KL}}^{\pi} U(s) \geq \mathcal{T}_{\mathrm{R - KL}}^{\pi} V(s)$ . Let $\epsilon > 0$ be an arbitrary positive number. + +By the definition of the inf operator, there exists $p_s \in \mathcal{P}$ such that, + +$$ +\begin{array}{l} \mathbb {E} _ {a \sim \pi (\cdot | s)} [ r (s, a) - \tau \log \frac {\pi (\cdot | s)}{\bar {\pi} (\cdot | s)} + \gamma \mathbb {E} _ {s ^ {\prime} \sim p _ {s} (\cdot | s, a)} [ V (s ^ {\prime}) ] ] \\ < \inf _ {p \in \mathcal {P}} \mathbb {E} _ {a \sim \pi (\cdot | s)} [ r (s, a) - \tau \log \frac {\pi (\cdot | s)}{\bar {\pi} (\cdot | s)} + \gamma \mathbb {E} _ {s ^ {\prime} \sim p (\cdot | s, a)} [ V (s ^ {\prime}) ] ] + \epsilon \tag {5} \\ \end{array} +$$ + +In addition, we have by definition that: + +$$ +\begin{array}{l} \mathbb {E} _ {a \sim \pi (\cdot | s)} [ r (s, a) - \tau \log \frac {\pi (\cdot | s)}{\bar {\pi} (\cdot | s)} + \gamma \mathbb {E} _ {s ^ {\prime} \sim p _ {s} (\cdot | s, a)} [ U (s ^ {\prime}) ] ] \\ \geq \inf _ {p \in \mathcal {P}} \mathbb {E} _ {a \sim \pi (\cdot | s)} [ r (s, a) - \tau \log \frac {\pi (\cdot | s)}{\bar {\pi} (\cdot | s)} + \gamma \mathbb {E} _ {s ^ {\prime} \sim p (\cdot | s, a)} [ U (s ^ {\prime}) ] ] \tag {6} \\ \end{array} +$$ + +Thus, we have, + +$$ +\begin{array}{l} 0 \leq \mathcal {T} _ {\mathrm {R - K L}} ^ {\pi} U (s) - \mathcal {T} _ {\mathrm {R - K L}} ^ {\pi} V (s) \\ < \mathbb {E} _ {a \sim \pi (\cdot | s)} [ r (s, a) - \tau \log \frac {\pi (\cdot | s)}{\bar {\pi} (\cdot | s)} + \gamma \mathbb {E} _ {s ^ {\prime} \sim p _ {s} 
(\cdot | s, a)} [ U (s ^ {\prime}) ] ] \\ - \mathbb {E} _ {a \sim \pi (\cdot | s)} [ r (s, a) - \tau \log \frac {\pi (\cdot | s)}{\bar {\pi} (\cdot | s)} + \gamma \mathbb {E} _ {s ^ {\prime} \sim p _ {s} (\cdot | s, a)} [ V (s ^ {\prime}) ] ] + \epsilon \tag {7} \\ = \mathbb {E} _ {a \sim \pi (\cdot | s), s ^ {\prime} \sim p _ {s} (\cdot | s, a)} [ \gamma U (s ^ {\prime}) ] - \mathbb {E} _ {a \sim \pi (\cdot | s), s ^ {\prime} \sim p _ {s} (\cdot | s, a)} [ \gamma V (s ^ {\prime}) ] + \epsilon \\ \leq \gamma \| U - V \| + \epsilon \\ \end{array}
$$

Applying a similar argument for the case $\mathcal{T}_{\mathrm{R - KL}}^{\pi}U(s)\leq \mathcal{T}_{\mathrm{R - KL}}^{\pi}V(s)$ results in

$$
\left\| \mathcal {T} _ {\mathrm {R} - \mathrm {K L}} ^ {\pi} U - \mathcal {T} _ {\mathrm {R} - \mathrm {K L}} ^ {\pi} V \right\| < \gamma \| U - V \| + \epsilon . \tag {8}
$$

Since $\epsilon$ is an arbitrary positive number, we establish the result, i.e.,

$$
\left\| \mathcal {T} _ {\mathrm {R} - \mathrm {K L}} ^ {\pi} U - \mathcal {T} _ {\mathrm {R} - \mathrm {K L}} ^ {\pi} V \right\| \leq \gamma \| U - V \|. \tag {9}
$$

# Theorem 2.

Proof. We follow a similar argument to the proof of Theorem 1. Let $U, V \in \mathbb{R}^{|S|}$ , and $s \in S$ an arbitrary state. Assume $\mathcal{T}_{\mathrm{R - KL}}U(s) \geq \mathcal{T}_{\mathrm{R - KL}}V(s)$ . Let $\epsilon > 0$ be an arbitrary positive number.
By definition of the sup operator, there exists $\hat{\pi} \in \Pi$ such that,

$$
\inf _ {p \in \mathcal {P}} \mathbb {E} _ {a \sim \hat {\pi} (\cdot | s)} [ r (s, a) - \tau \log \frac {\hat {\pi} (\cdot | s)}{\bar {\pi} (\cdot | s)} + \gamma \mathbb {E} _ {s ^ {\prime} \sim p (\cdot | s, a)} [ U (s ^ {\prime}) ] ] > \mathcal {T} _ {\mathrm {R} - \mathrm {K L}} U (s) - \epsilon \tag {10}
$$

In addition, by the definition of the inf operator, there exists $p_s \in \mathcal{P}$ such that,

$$
\begin{array}{l} \mathbb {E} _ {a \sim \hat {\pi} (\cdot | s)} [ r (s, a) - \tau \log \frac {\hat {\pi} (\cdot | s)}{\bar {\pi} (\cdot | s)} + \gamma \mathbb {E} _ {s ^ {\prime} \sim p _ {s} (\cdot | s, a)} [ V (s ^ {\prime}) ] ] \\ < \inf _ {p \in \mathcal {P}} \mathbb {E} _ {a \sim \hat {\pi} (\cdot | s)} [ r (s, a) - \tau \log \frac {\hat {\pi} (\cdot | s)}{\bar {\pi} (\cdot | s)} + \gamma \mathbb {E} _ {s ^ {\prime} \sim p (\cdot | s, a)} [ V (s ^ {\prime}) ] ] + \epsilon \tag {11} \\ \end{array}
$$

Thus, we have,

$$
\begin{array}{l} 0 \leq \mathcal {T} _ {\mathrm {R - K L}} U (s) - \mathcal {T} _ {\mathrm {R - K L}} V (s) \\ < \left(\inf _ {p \in \mathcal {P}} \mathbb {E} _ {a \sim \hat {\pi} (\cdot | s)} [ r (s, a) - \tau \log \frac {\hat {\pi} (\cdot | s)}{\bar {\pi} (\cdot | s)} + \gamma \mathbb {E} _ {s ^ {\prime} \sim p (\cdot | s, a)} [ U (s ^ {\prime}) ] ] + \epsilon\right) \\ - \left(\inf _ {p \in \mathcal {P}} \mathbb {E} _ {a \sim \hat {\pi} (\cdot | s)} [ r (s, a) - \tau \log \frac {\hat {\pi} (\cdot | s)}{\bar {\pi} (\cdot | s)} + \gamma \mathbb {E} _ {s ^ {\prime} \sim p (\cdot | s, a)} [ V (s ^ {\prime}) ] ]\right) \\ < \left(\mathbb {E} _ {a \sim \hat {\pi} (\cdot | s)} [ r (s, a) - \tau \log \frac {\hat {\pi} (\cdot | s)}{\bar {\pi} (\cdot | s)} + \gamma \mathbb {E} _ {s ^ {\prime} \sim p _ {s} (\cdot | s, a)} [ U (s ^ {\prime}) ] ] + \epsilon\right) \tag {12} \\ - \left(\mathbb {E} _ {a \sim \hat {\pi} (\cdot | s)} [ r
(s, a) - \tau \log \frac {\hat {\pi} (\cdot | s)}{\bar {\pi} (\cdot | s)} + \gamma \mathbb {E} _ {s ^ {\prime} \sim p _ {s} (\cdot | s, a)} [ V (s ^ {\prime}) ] ] - \epsilon\right) \\ = \mathbb {E} _ {a \sim \hat {\pi} (\cdot | s), s ^ {\prime} \sim p _ {s} (\cdot | s, a)} [ \gamma U (s ^ {\prime}) ] - \mathbb {E} _ {a \sim \hat {\pi} (\cdot | s), s ^ {\prime} \sim p _ {s} (\cdot | s, a)} [ \gamma V (s ^ {\prime}) ] + 2 \epsilon \\ \leq \gamma \| U - V \| + 2 \epsilon \\ \end{array}
$$

Applying a similar argument for the case $\mathcal{T}_{\mathrm{R - KL}}U(s)\leq \mathcal{T}_{\mathrm{R - KL}}V(s)$ results in

$$
\left\| \mathcal {T} _ {\mathrm {R} - \mathrm {K L}} U - \mathcal {T} _ {\mathrm {R} - \mathrm {K L}} V \right\| < \gamma \| U - V \| + 2 \epsilon . \tag {13}
$$

Since $\epsilon$ is an arbitrary positive number, we establish the result, i.e.,

$$
\left\| \mathcal {T} _ {\mathrm {R} - \mathrm {K L}} U - \mathcal {T} _ {\mathrm {R} - \mathrm {K L}} V \right\| \leq \gamma \| U - V \|. \tag {14}
$$

Corollary 1. Let $\pi_N$ be the greedy policy after applying $N$ value iteration steps. The bound between the optimal value function $V^{*}$ and $V^{\pi_N}$ , the value function induced by $\pi_N$ , is given by $\| V^{*} - V^{\pi_{N}}\| \leq \frac{2\gamma\epsilon}{(1 - \gamma)^{2}} +\frac{2\gamma^{N + 1}}{(1 - \gamma)}\| V^{*} - V_{0}\|$ , where $\epsilon = \max_{0\leq k\leq N}\| \mathcal{T}_{\mathrm{R - KL}}V_k - V_{k + 1}\|$ is the function approximation error, and $V_{0}$ is the initial value function.

Proof. From Bertsekas (1996), we have the following proposition:

Let $V^{*}$ be the optimal value function, $V$ some arbitrary value function, $\pi$ the greedy policy with respect to $V$ , and $V^{\pi}$ the value function induced by $\pi$ .
Thus,

$$
\left\| V ^ {*} - V ^ {\pi} \right\| \leq \frac {2 \gamma}{(1 - \gamma)} \| V ^ {*} - V \| \tag {15}
$$

Next, define the maximum projected loss to be:

$$
\epsilon = \max _ {0 \leq k \leq N} \| \mathcal {T} _ {\mathrm {R} - \mathrm {K L}} V _ {k} - V _ {k + 1} \| \tag {16}
$$

We can now derive a bound on the loss between the optimal value function $V^{*}$ and the value function obtained after $N$ updates of value iteration (denoted by $V_{N}$ ), where the last two inequalities follow by unrolling the recursion $N$ times:

$$
\begin{array}{l} \left\| V ^ {*} - V _ {N} \right\| \leq \left\| V ^ {*} - \mathcal {T} _ {\mathrm {R} - \mathrm {K L}} V _ {N - 1} \right\| + \left\| \mathcal {T} _ {\mathrm {R} - \mathrm {K L}} V _ {N - 1} - V _ {N} \right\| \\ = \left\| \mathcal {T} _ {\mathrm {R} - \mathrm {K L}} V ^ {*} - \mathcal {T} _ {\mathrm {R} - \mathrm {K L}} V _ {N - 1} \right\| + \left\| \mathcal {T} _ {\mathrm {R} - \mathrm {K L}} V _ {N - 1} - V _ {N} \right\| \\ \leq \gamma \| V ^ {*} - V _ {N - 1} \| + \| \mathcal {T} _ {\mathrm {R} - \mathrm {K L}} V _ {N - 1} - V _ {N} \| \\ \leq \gamma \| V ^ {*} - V _ {N - 1} \| + \epsilon \tag {17} \\ \leq (1 + \gamma + \dots + \gamma^ {N - 1}) \epsilon + \gamma^ {N} \| V ^ {*} - V _ {0} \| \\ \leq \frac {\epsilon}{(1 - \gamma)} + \gamma^ {N} \| V ^ {*} - V _ {0} \| \\ \end{array}
$$

Then, using the proposition above (Equation 15), we get:

$$
\begin{array}{l} \left\| V ^ {*} - V ^ {\pi_ {N}} \right\| \leq \frac {2 \gamma}{(1 - \gamma)} \left\| V ^ {*} - V _ {N} \right\| \\ \leq \frac {2 \gamma}{(1 - \gamma)} \frac {\epsilon}{(1 - \gamma)} + \frac {2 \gamma}{(1 - \gamma)} \gamma^ {N} \| V ^ {*} - V _ {0} \| \tag {18} \\ = \frac {2 \gamma \epsilon}{(1 - \gamma) ^ {2}} + \frac {2 \gamma^ {N + 1}}{(1 - \gamma)} \| V ^ {*} - V _ {0} \| \\ \end{array}
$$

which establishes the result.

# F SOFT-ROBUST ENTROPY-REGULARIZED BELLMAN OPERATOR

# Theorem 3.

Proof.
For an arbitrary $U, V \in \mathbb{R}^{|S|}$ and for a fixed policy $\pi$ :

$$
\begin{array}{l} \left\| \mathcal {T} _ {\mathrm {S R - K L}} ^ {\pi} U - \mathcal {T} _ {\mathrm {S R - K L}} ^ {\pi} V \right\| _ {\infty} \\ = \sup _ {s} \Big | \mathbb {E} _ {a \sim \pi (\cdot | s)} [ r (s, a) - \tau \log \frac {\pi (\cdot | s)}{\bar {\pi} (\cdot | s)} + \gamma \mathbb {E} _ {s ^ {\prime} \sim \bar {p} (\cdot | s, a)} [ U (s ^ {\prime}) ] ] \\ - \mathbb {E} _ {a \sim \pi (\cdot | s)} [ r (s, a) - \tau \log \frac {\pi (\cdot | s)}{\bar {\pi} (\cdot | s)} + \gamma \mathbb {E} _ {s ^ {\prime} \sim \bar {p} (\cdot | s, a)} [ V (s ^ {\prime}) ] ] \Big | \\ = \gamma \sup _ {s} | \mathbb {E} _ {a \sim \pi (\cdot | s)} \sum_ {s ^ {\prime}} \bar {p} (s ^ {\prime} | s, a) [ U (s ^ {\prime}) - V (s ^ {\prime}) ] | \\ \leq \gamma \sup _ {s} \mathbb {E} _ {a \sim \pi (\cdot | s)} \sum_ {s ^ {\prime}} \bar {p} \left(s ^ {\prime} | s, a\right) \left| U \left(s ^ {\prime}\right) - V \left(s ^ {\prime}\right) \right| \\ \leq \gamma \sup _ {s} \mathbb {E} _ {a \sim \pi (\cdot | s)} \sum_ {s ^ {\prime}} \bar {p} \left(s ^ {\prime} | s, a\right) \| U - V \| _ {\infty} \\ \leq \gamma \| U - V \| _ {\infty} \\ \end{array}
$$

# Theorem 4.

Proof. Let $U, V \in \mathbb{R}^{|S|}$ , and $s \in S$ an arbitrary state. Assume $\mathcal{T}_{\mathrm{SR - KL}}U(s) \geq \mathcal{T}_{\mathrm{SR - KL}}V(s)$ . Let $\epsilon > 0$ be an arbitrary positive number.
By definition of the sup operator, there exists $\hat{\pi} \in \Pi$ such that,

$$
\mathbb {E} _ {a \sim \hat {\pi} (\cdot | s)} [ r (s, a) - \tau \log \frac {\hat {\pi} (\cdot | s)}{\bar {\pi} (\cdot | s)} + \gamma \mathbb {E} _ {s ^ {\prime} \sim \bar {p} (\cdot | s, a)} [ U (s ^ {\prime}) ] ] > \mathcal {T} _ {\mathrm {S R - K L}} U (s) - \epsilon \tag {19}
$$

Thus, we have,

$$
0 \leq \mathcal {T} _ {\mathrm {S R - K L}} U (s) - \mathcal {T} _ {\mathrm {S R - K L}} V (s)
$$

$$
< \left(\mathbb {E} _ {a \sim \hat {\pi} (\cdot | s)} [ r (s, a) - \tau \log \frac {\hat {\pi} (\cdot | s)}{\bar {\pi} (\cdot | s)} + \gamma \mathbb {E} _ {s ^ {\prime} \sim \bar {p} (\cdot | s, a)} [ U (s ^ {\prime}) ] ] + \epsilon\right)
$$

$$
\begin{array}{l} - \left(\sup _ {\pi \in \Pi} \mathbb {E} _ {a \sim \pi (\cdot | s)} [ r (s, a) - \tau \log \frac {\pi (\cdot | s)}{\bar {\pi} (\cdot | s)} + \gamma \mathbb {E} _ {s ^ {\prime} \sim \bar {p} (\cdot | s, a)} [ V (s ^ {\prime}) ] ]\right) \\ \leq \left(\mathbb {E} _ {a \sim \hat {\pi} (\cdot | s)} [ r (s, a) - \tau \log \frac {\hat {\pi} (\cdot | s)}{\bar {\pi} (\cdot | s)} + \gamma \mathbb {E} _ {s ^ {\prime} \sim \bar {p} (\cdot | s, a)} [ U (s ^ {\prime}) ] ] + \epsilon\right) \tag {20} \\ - \left(\mathbb {E} _ {a \sim \hat {\pi} (\cdot | s)} [ r (s, a) - \tau \log \frac {\hat {\pi} (\cdot | s)}{\bar {\pi} (\cdot | s)} + \gamma \mathbb {E} _ {s ^ {\prime} \sim \bar {p} (\cdot | s, a)} [ V (s ^ {\prime}) ] ]\right) \\ = \mathbb {E} _ {a \sim \hat {\pi} (\cdot | s), s ^ {\prime} \sim \bar {p} (\cdot | s, a)} [ \gamma U (s ^ {\prime}) ] - \mathbb {E} _ {a \sim \hat {\pi} (\cdot | s), s ^ {\prime} \sim \bar {p} (\cdot | s, a)} [ \gamma V (s ^ {\prime}) ] + \epsilon \\ \leq \gamma \| U - V \| + \epsilon \\ \end{array}
$$

Applying a similar argument for the case $\mathcal{T}_{\mathrm{SR - KL}}U(s)\leq \mathcal{T}_{\mathrm{SR - KL}}V(s)$ results in

$$
\left| \mathcal {T} _ {\mathrm {S R - K L}} U - \mathcal {T} _ {\mathrm {S
R - K L}} V \right| < \gamma \| U - V \| + \epsilon . \tag {21}
$$

Since $\epsilon$ is an arbitrary positive number, we establish the result, i.e.,

$$
\left| \mathcal {T} _ {\mathrm {S R - K L}} U - \mathcal {T} _ {\mathrm {S R - K L}} V \right| \leq \gamma \| U - V \|. \tag {22}
$$

# G ENTROPY-REGULARIZED POLICY EVALUATION

This section describes: (1) the modification to the TD update for the expected return so that it optimizes the entropy-regularized expected return, and (2) the additional modification needed to account for robustness.

We start with (1).

The entropy-regularized value function is defined as:

$$
V _ {\mathrm {K L}} ^ {\pi} (s; \bar {\pi}) = \mathbb {E} ^ {\pi} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} \left(r _ {t} - \tau \mathrm {K L} \left(\pi (\cdot | s _ {t}) \| \bar {\pi} (\cdot | s _ {t})\right)\right) \mid s _ {0} = s \right] \tag {23}
$$

and the corresponding entropy-regularized action value function is given by:

$$
\begin{array}{l} Q _ {\mathrm {K L}} ^ {\pi} (s, a; \bar {\pi}) = \mathbb {E} ^ {\pi} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} \left(r _ {t} - \tau \mathrm {K L} \left(\pi \left(\cdot | s _ {t}\right) \| \bar {\pi} \left(\cdot | s _ {t}\right)\right)\right) \mid s _ {0} = s, a _ {0} = a \right] \qquad (24) \\ = r (s, a) - \tau \mathrm {K L} (\pi (\cdot | s) \| \bar {\pi} (\cdot | s)) + \mathbb {E} _ {s ^ {\prime} \sim p (\cdot | s, a)} [ V _ {\mathrm {K L}} ^ {\pi} (s ^ {\prime}; \bar {\pi}) ] \qquad (25) \\ \end{array}
$$

Next, we define:

$$
\widetilde {Q} _ {\mathrm {K L}} ^ {\pi} (s, a; \bar {\pi}) = r (s, a) + \mathbb {E} _ {s ^ {\prime} \sim p (\cdot | s, a)} \left[ V _ {\mathrm {K L}} ^ {\pi} \left(s ^ {\prime}; \bar {\pi}\right) \right] \tag {26}
$$

thus,

$$
Q _ {\mathrm {K L}} ^ {\pi} (s, a; \bar {\pi}) = \widetilde {Q} _ {\mathrm {K L}} ^ {\pi} (s, a; \bar {\pi}) - \tau \mathrm {K L} (\pi (\cdot | s) \| \bar {\pi}
(\cdot | s)) \tag {27}
$$

Therefore, we have the following relationship:

$$
V _ {\mathrm {K L}} ^ {\pi} (s; \bar {\pi}) = \mathbb {E} _ {a \sim \pi (\cdot | s)} \left[ Q _ {\mathrm {K L}} ^ {\pi} (s, a; \bar {\pi}) \right] = \mathbb {E} _ {a \sim \pi (\cdot | s)} \left[ \widetilde {Q} _ {\mathrm {K L}} ^ {\pi} (s, a; \bar {\pi}) - \tau \mathrm {K L} \left(\pi (\cdot | s) \| \bar {\pi} (\cdot | s)\right) \right] \tag {28}
$$

We now retrieve the TD update for the entropy-regularized action value function:

$$
\begin{array}{l} \delta_ {t} = r _ {t} - \tau \mathrm {K L} (\pi (\cdot | s _ {t}) \| \bar {\pi} (\cdot | s _ {t})) + \gamma Q _ {\mathrm {K L}} ^ {\pi} (s _ {t + 1} \sim P (\cdot | s _ {t}, a _ {t}), a _ {t + 1} \sim \pi (\cdot | s _ {t + 1}); \bar {\pi}) \\ - Q _ {\mathrm {K L}} ^ {\pi} \left(s _ {t}, a _ {t}; \bar {\pi}\right) \\ = r _ {t} - \tau \mathrm {K L} (\pi (\cdot | s _ {t}) \| \bar {\pi} (\cdot | s _ {t})) + \gamma Q _ {\mathrm {K L}} ^ {\pi} (s _ {t + 1} \sim P (\cdot | s _ {t}, a _ {t}), a _ {t + 1} \sim \pi (\cdot | s _ {t + 1}); \bar {\pi}) \\ - \widetilde {Q} _ {\mathrm {K L}} ^ {\pi} \left(s _ {t}, a _ {t}; \bar {\pi}\right) + \tau \mathrm {K L} \left(\pi \left(\cdot \mid s _ {t}\right) \| \bar {\pi} \left(\cdot \mid s _ {t}\right)\right) \\ = r _ {t} + \gamma Q _ {\mathrm {K L}} ^ {\pi} \left(s _ {t + 1} \sim P (\cdot | s _ {t}, a _ {t}), a _ {t + 1} \sim \pi (\cdot | s _ {t + 1}); \bar {\pi}\right) - \widetilde {Q} _ {\mathrm {K L}} ^ {\pi} \left(s _ {t}, a _ {t}; \bar {\pi}\right) \tag {29} \\ = r _ {t} + \gamma \left[ \widetilde {Q} _ {\mathrm {K L}} ^ {\pi} \left(s _ {t + 1} \sim P (\cdot | s _ {t}, a _ {t}), a _ {t + 1} \sim \pi (\cdot | s _ {t + 1}); \bar {\pi}\right) \right. \\ \left. \left.
- \tau \mathrm {K L} (\pi (\cdot | s _ {t + 1} \sim P (\cdot | s _ {t}, a _ {t})) \| \bar {\pi} (\cdot | s _ {t + 1} \sim P (\cdot | s _ {t}, a _ {t}))) \right] - \widetilde {Q} _ {\mathrm {K L}} ^ {\pi} (s _ {t}, a _ {t}; \bar {\pi}) \right. \\ \end{array}
$$

Note that in the above TD update we replaced $Q_{\mathrm{KL}}^{\pi}$ with $\widetilde{Q}_{\mathrm{KL}}^{\pi}$ .

Next, we move to (2).

Before extending the TD update to the robust case, we first consider the robust entropy-regularized value function, which is defined as:

$$
V _ {\mathrm {R - K L}} ^ {\pi} (s; \bar {\pi}) = \inf _ {p \in \mathcal {P}} \mathbb {E} ^ {p, \pi} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} \left(r _ {t} - \tau \mathrm {K L} \left(\pi (\cdot | s _ {t}) \| \bar {\pi} (\cdot | s _ {t})\right)\right) \mid s _ {0} = s \right] \tag {30}
$$

Applying similar steps as above yields the following TD update:

$$
\begin{array}{l} \delta_ {t} = r _ {t} - \tau \mathrm {K L} (\pi (\cdot | s _ {t}) \| \bar {\pi} (\cdot | s _ {t})) + \gamma \inf _ {p \in \mathcal {P}} Q _ {\mathrm {R - K L}} ^ {\pi} (s _ {t + 1} \sim p (\cdot | s _ {t}, a _ {t}), a _ {t + 1} \sim \pi (\cdot | s _ {t + 1}); \bar {\pi}) \\ - Q _ {\mathrm {R - K L}} ^ {\pi} \left(s _ {t}, a _ {t}; \bar {\pi}\right) \\ = r _ {t} - \tau \mathrm {K L} (\pi (\cdot | s _ {t}) \| \bar {\pi} (\cdot | s _ {t})) + \gamma \inf _ {p \in \mathcal {P}} Q _ {\mathrm {R} - \mathrm {K L}} ^ {\pi} (s _ {t + 1} \sim p (\cdot | s _ {t}, a _ {t}), a _ {t + 1} \sim \pi (\cdot | s _ {t + 1}); \bar {\pi}) \\ - \widetilde {Q} _ {\mathrm {R - K L}} ^ {\pi} (s _ {t}, a _ {t}; \bar {\pi}) + \tau \mathrm {K L} (\pi (\cdot | s _ {t}) \| \bar {\pi} (\cdot | s _ {t})) \\ = r _ {t} + \gamma \inf _ {p \in \mathcal {P}} Q _ {\mathrm {R} - \mathrm {K L}} ^ {\pi} \left(s _ {t + 1} \sim p (\cdot | s _ {t}, a _ {t}), a _ {t + 1} \sim \pi (\cdot | s _ {t + 1}); \bar {\pi}\right) - \widetilde {Q} _ {\mathrm {R} - \mathrm {K
L}} ^ {\pi} \left(s _ {t}, a _ {t}; \bar {\pi}\right) \tag {31} \\ = r _ {t} + \gamma \inf _ {p \in \mathcal {P}} \left[ \widetilde {Q} _ {\mathrm {R} - \mathrm {K L}} ^ {\pi} \left(s _ {t + 1} \sim p (\cdot | s _ {t}, a _ {t}), a _ {t + 1} \sim \pi (\cdot | s _ {t + 1}); \bar {\pi}\right) \right. \\ \left. \left. - \tau \mathrm {K L} \left(\pi \left(\cdot \mid s _ {t + 1} \sim p \left(\cdot \mid s _ {t}, a _ {t}\right)\right) \| \bar {\pi} \left(\cdot \mid s _ {t + 1} \sim p \left(\cdot \mid s _ {t}, a _ {t}\right)\right)\right) \right] - \widetilde {Q} _ {\mathrm {R} - \mathrm {K L}} ^ {\pi} \left(s _ {t}, a _ {t}; \bar {\pi}\right) \right. \\ \end{array} +$$ + +
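As a sanity check on the derivation above, the robust entropy-regularized TD error of Equation 31 can be sketched in tabular form. This is an illustrative sketch, not the paper's implementation: the uncertainty set is assumed to be a finite list of transition models, and the sampled next state and action are replaced by exact expectations for clarity.

```python
import numpy as np

def robust_kl_td_error(r_t, q_tilde, s_t, a_t, policy, ref_policy,
                       models, tau=0.1, gamma=0.9):
    """Tabular sketch of the robust entropy-regularized TD error (Eq. 31).

    q_tilde is the shifted table Q~ = Q + tau*KL, so the next-state KL
    penalty enters the target explicitly; the inf over the uncertainty
    set becomes a min over a finite list of transition models."""
    kl = np.sum(policy * np.log(policy / ref_policy), axis=1)   # KL(pi||ref) per state
    v_tilde = np.sum(policy * q_tilde, axis=1) - tau * kl       # E_a'[Q~(s',a')] - tau*KL(s')
    worst = min(p[s_t, a_t] @ v_tilde for p in models)          # inf_p E_{s'~p}[...]
    return r_t + gamma * worst - q_tilde[s_t, a_t]

# Toy tables: 3 states, 2 actions, 4 candidate transition models.
rng = np.random.default_rng(1)
S, A = 3, 2
q_tilde = rng.random((S, A))
policy = rng.dirichlet(np.ones(A), size=S)
ref = rng.dirichlet(np.ones(A), size=S)
models = [rng.dirichlet(np.ones(S), size=(S, A)) for _ in range(4)]

delta = robust_kl_td_error(0.5, q_tilde, s_t=0, a_t=1, policy=policy,
                           ref_policy=ref, models=models)
```

The soft-robust variant described in the text corresponds to passing a single average model $\bar{p}$ instead of the full list, which removes the minimum; its target is never below the robust one.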
| Hyperparameters | SVG |
| --- | --- |
| Policy net | 200-200-200 |
| Q function net | 500-500-500 |
| Discount factor (γ) | 0.99 |
| Adam learning rate | 0.0003 |
| Replay buffer size | 1000000 |
| Target network update period | 200 |
| Batch size | 1024 |
| Activation function | elu |
| Tanh on output of layer norm | Yes |
| Layer norm on first layer | Yes |
| Tanh on Gaussian mean | Yes |
| Min variance | 0.1 |
| Max variance | unbounded |
Table 1: Hyperparameters for SVG

# H EXPERIMENTS

# H.1 ADDITIONAL DETAILS ON THE SVG BASELINE

For the stochastic value gradients $\mathrm{SVG}(0)$ baseline we use the same policy parameterization as for our algorithm, i.e., we have

$$
\pi_{\theta} = \mathcal{N}(\mu_{\theta}(s), \sigma_{\theta}^{2}(s) I),
$$

where $I$ denotes the identity matrix and $\sigma_{\theta}(s)$ is computed from the network output via a softplus activation function.

To obtain a baseline that is, in spirit, similar to our algorithm, we used SVG in combination with entropy regularization. That is, we optimize the policy via gradient ascent, following the reparameterized gradient for a given state $s$ sampled from the replay buffer:

$$
\nabla_{\theta}\left(\mathbb{E}_{\pi_{\theta}(a|s)}[Q(a, s)] + \alpha \mathrm{H}(\pi_{\theta}(a|s))\right), \tag{32}
$$

which can be computed, using the reparameterization trick, as

$$
\mathbb{E}_{\zeta \sim \mathcal{N}(0, I)}\left[\nabla_{\theta} g_{\theta}(s, \zeta) \nabla_{g} Q(g_{\theta}(s, \zeta), s)\right] + \alpha \nabla_{\theta} \mathrm{H}(\pi_{\theta}(a|s)), \tag{33}
$$

where $g_{\theta}(s, \zeta) = \mu_{\theta}(s) + \sigma_{\theta}(s) \odot \zeta$ is now a deterministic function of a sample from the standard multivariate normal distribution. See e.g. Heess et al. (2015a) (for SVG) as well as Rezende et al. (2014); Kingma & Welling (2013) (for the reparameterization trick) for a detailed explanation.

# H.2 EXPERIMENT DETAILS FOR MPO AND SVG

In this section we outline the details of the hyperparameters used for the MPO and SVG algorithms. All experiments use a feed-forward two-layer neural network with 50 neurons to map the current state to the mean and diagonal covariance of the Gaussian policy. The policy is given by a Gaussian distribution with a diagonal covariance matrix.
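Returning to the SVG baseline above, the reparameterized gradient in (33) can be sketched as a simple Monte Carlo estimate. The snippet below is our toy illustration (plain numpy, not the paper's codebase); `grad_q` stands in for $\nabla_g Q$, which in practice comes from differentiating the critic:

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy_grad_sigma(sigma):
    # For a diagonal Gaussian, H = sum_i 0.5 * log(2*pi*e*sigma_i^2),
    # so dH/dsigma_i = 1/sigma_i elementwise.
    return 1.0 / sigma

def reparam_gradients(mu, sigma, grad_q, alpha=0.01, n_samples=1000):
    """Monte Carlo estimate of Eq. (33) for a diagonal Gaussian policy at a fixed state.

    grad_q(a): gradient of Q(a, s) with respect to the action a.
    Returns gradient estimates with respect to mu and sigma.
    """
    zeta = rng.standard_normal((n_samples, mu.size))
    actions = mu + sigma * zeta              # g_theta(s, zeta) = mu + sigma * zeta
    gq = np.array([grad_q(a) for a in actions])
    grad_mu = gq.mean(axis=0)                # d g / d mu is the identity
    # d g / d sigma is diag(zeta); the entropy term is differentiated analytically.
    grad_sigma = (gq * zeta).mean(axis=0) + alpha * entropy_grad_sigma(sigma)
    return grad_mu, grad_sigma
```

For a quadratic critic $Q(a) = -\|a - a^*\|^2$, `grad_q` is `lambda a: -2.0 * (a - target)`, and the estimated `grad_mu` concentrates around $-2(\mu - a^*)$ as the sample count grows.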
The neural network outputs the mean $\mu = \mu(s)$ and diagonal Cholesky factors $A = A(s)$, such that $\Sigma = AA^T$. The diagonal factor $A$ has positive diagonal elements enforced by the softplus transform $A_{ii} \gets \log(1 + \exp(A_{ii}))$ to ensure positive definiteness of the diagonal covariance matrix. Tables 1 and 2 show the hyperparameters used for the SVG and MPO algorithms, respectively.

# H.3 UNCERTAINTY SET PARAMETERS

Table 3 contains the chosen uncertainty set values for each of the domains and the corresponding holdout set perturbations. The final column of the table contains the parameter that was perturbed.
| Hyperparameters | MPO |
| --- | --- |
| Policy net | 200-200-200 |
| Number of actions sampled per state | 15 |
| Q function net | 500-500-500 |
| $\epsilon$ | 0.1 |
| $\epsilon_{\mu}$ | 0.01 |
| $\epsilon_{\Sigma}$ | 0.00001 |
| Discount factor ($\gamma$) | 0.99 |
| Adam learning rate | 0.0003 |
| Replay buffer size | 1000000 |
| Target network update period | 200 |
| Batch size | 1024 |
| Activation function | elu |
| Layer norm on first layer | Yes |
| Tanh on output of layer norm | Yes |
| Tanh on Gaussian mean | No |
| Min variance | Zero |
| Max variance | unbounded |
Table 2: Hyperparameters for MPO

Table 3: The parameters chosen for the uncertainty set perturbations as well as the holdout set perturbations. The final column contains the parameter that was perturbed.
| Domain | Uncertainty Set Perturbations | Hold-out Test Perturbations | Parameter |
| --- | --- | --- | --- |
| Acrobot | 1.0, 1.025, 1.05 meters | 1.15, 1.2, 1.25 meters | First pole length |
| Cartpole Balance | 0.5, 1.9, 2.1 meters | 2.0, 2.2, 2.3 meters | Pole length |
| Cartpole Swingup | 1.0, 1.4, 1.7 meters | 1.2, 1.5, 1.8 meters | Pole length |
| Cheetah Run | 0.4, 0.45, 0.5 meters | 0.3, 0.325, 0.35 meters | Torso length |
| Hopper Hop | -0.32, -0.33, -0.34 meters | -0.4, -0.45, -0.5 meters | Calf length |
| Hopper Stand | -0.32, -0.33, -0.34 meters | -0.4, -0.475, -0.5 meters | Calf length |
| Pendulum Swingup | 1.0, 1.1, 1.4 kg | 1.5, 1.6, 1.7 kg | Ball mass |
| Walker Run | 0.225, 0.2375, 0.25 meters | 0.35, 0.375, 0.4 meters | Thigh lengths |
| Walker Walk | 0.225, 0.2375, 0.25 meters | 0.35, 0.375, 0.4 meters | Thigh lengths |
| Shadow hand | 0.025, 0.022, 0.02 meters | 0.021, 0.018, 0.015 meters | Half-cube width |
| Cartpole Balance: Larger Test Set | 0.5, 1.9, 2.1 meters | 0.5, 0.7, 0.9, 1.1, 1.3, 1.5, 1.7, 1.9 meters | Pole length |
| Pendulum Swingup: Larger Test Set | 1.0, 1.1, 1.4 meters | 1.0, 1.1, 1.2, 1.3, 1.4, 1.5 meters | Pole length |
| Pendulum Swingup - Offline datasets | 1.0, 1.1, 1.2 kg | 1.5, 1.6, 1.7 kg | Ball mass |
| Cartpole Swingup - Offline datasets | 1.0, 1.4, 1.7 meters | 1.2, 1.5, 1.8 meters | Pole length |
# H.4 MAIN EXPERIMENTS

This section contains two sets of plots. Figure 6 contains bar plots comparing the performance of RE-MPO (blue bars), SRE-MPO (green bars) and E-MPO (red bars) across nine Mujoco domains. The performance of the agents as a function of evaluation steps is shown in Figure 7 for all nine domains. Figure 8 shows the bar plots for R-MPO, SR-MPO and MPO, and Figure 9 shows the corresponding performance of the agents as a function of evaluation steps.

# H.5 INVESTIGATIVE EXPERIMENTS

This section contains additional investigative experiments that were mentioned in the main paper.

Figure 11 presents the difference in performance between the entropy-regularized agents and the non-entropy-regularized agents. Although the performance is comparable (left figure), the entropy-regularized version performs no worse on average than the non-entropy-regularized agent. In addition, there are some tasks where there is a large improvement in performance, such as the

![](images/cd16cf8e1e5df30dd9e85449a84a0bcb3efca354ecbb104b7adbaa69e9bda65c.jpg)

![](images/6c6b06b7db43f00ddce83699a0e89c969b8f87b5f346799de79348782047eaeb.jpg)

![](images/2b06043fe89f405f962c1de3767e2fb668196c0befecd49ef66212396bfbaddd.jpg)

![](images/fb37a36dd8ec2b16f3a4136f568a9016eb82e60efe5112d4d5651ebb3abfa2e6.jpg)

![](images/32660595c3d1e0e44cd7644b05eaaec72ff7752544f358f9ae7ac641c77005e1.jpg)

![](images/7c3006124627adb9c79993cc56ecff80865648f17fd948dd51d544ee9b6a91b7.jpg)

![](images/213ab9226fc4c38d178dc3b83e9282b4d3ed7e8681211c3b0afb5e4a9e8ada93.jpg)
Figure 6: All nine domains showing RE-MPO (blue), SRE-MPO (green) and E-MPO (red).
+ +![](images/82bc239518efa8d99e76b8fef80363795757ee013cf9a802a50ee06b7db1b4a1.jpg) + +![](images/3e4042d37dba27bfd8258b78bc6e54794879c63b4c48f3aa584d53f7f249d386.jpg) + +![](images/c8a4bb49308bec6b23857e1c098efefb9db936ddc1ce6a440e81a2831d354133.jpg) + +![](images/542cd6c1baee6554f9dc98e647d1144d2687c7fd46bac3531f03a053e057a898.jpg) + +![](images/248b6d9704b3a79adb7a3c85071af4cfb51bf4bcac0c10a34cc4c9fa7925a7a3.jpg) + +![](images/bd56d233cdb458685a009a2d286d9b55f0c76170ae8b56eb462f450c39f44971.jpg) + +![](images/26ce644c386ae257b449f7cb3a391331c710bde21d80bb2d9bedc466027e8572.jpg) + +![](images/6df5585088af80f6eadd437ee30d71b685c5c00bbe671baf1af42e7449132544.jpg) + +![](images/ba28147494e22506c6f0f3e5f3decfcabac19a66b9f05e3b93397eb28ee24167.jpg) +Figure 7: All nine domains showing RE-MPO (blue), SRE-MPO (green) and E-MPO (red) as a function of evaluation steps during training. + +![](images/63a3c69901d1e3ac338d3c751f3c6fde88cf360fe06ecf1deb0a08fc7447d6fa.jpg) + +![](images/83bb205757e071d0a40eadd6266e019f8e7c48adb5fb31e087f98c7168e855f9.jpg) + +![](images/9a1ed39b7d680ec920e26c2f35dcb074d25307c21da380faff6f633a515657d7.jpg) + +![](images/7d445e7fd1f69f6ec82a2d90ad6a3aa52aef3da39d36b15be7563f083a4ce461.jpg) + +![](images/a24d25e33b48438e132b7195f0e8f09c541cd6eb06772f33f494412e1f7536b5.jpg) + +![](images/de22ecdf6f00f2c12128e75adcfed7cd1e01c8f83fab13446365d13de3f2d83f.jpg) + +![](images/0505cdf21b73bf5f7d6a2fe95ed37ef56c0a6c2f0a84b5961b3baf5034115def.jpg) + +![](images/e5ae001d1533043709c28dfc547057234620fe4bf9471a733cb6a2bf3d048a71.jpg) + +![](images/d74d4d6224d6943ecef798ce63812c34c72af2b1c5337fff359a9f4eb4b3251d.jpg) +Figure 8: All nine domains showing R-MPO (blue), SR-MPO (green) and MPO (red). 
+ +![](images/ddf74b77fef62d6394a3f8fb0b0836300decc6e9d2786d9ddeccf612b5b69f39.jpg) + +![](images/c77db85f5118e9798f904dbc6dfe4361158f470b187476f902a9c951f309fb58.jpg) + +![](images/e01e0f8e2caf59b5c2685c39aa5d01d8d33c7462c4feab0f9b969d11d345d7ea.jpg) + +![](images/538f93ead564ffab08318dae74316a5a22907f502af6618326c29796fbf52d2d.jpg) + +![](images/8d49db482f9be77ae252a2e851bc3754ba04afd8044de69045e1a19e575975b0.jpg) + +![](images/dacb5c00d9dba1b5e1bec7db1875f9dca66ef5b23c4dfb3e27f5f2b8ddf4e944.jpg) + +![](images/c2f0004d55003dd5f5a2cfdd7bd48b121cd78ca7f378654140114e79dbfa21a7.jpg) + +![](images/ff785419b155709fb1c4d6f32cd2ad91e9e7453edfc1ec703ded9335d8a574f0.jpg) + +![](images/f5f04e1b118f5f8926a6d3568eb245bf6bd149d2bd285df76314977c8e2eeee6.jpg) +Figure 9: All nine domains showing R-MPO (blue), SR-MPO (green) and MPO (red) as a function of evaluation steps during training. + +![](images/dc962bfdba3523b6becbd840b3acbc3b4f4b651b04162a8d5167c2d0512d49e7.jpg) + +![](images/6178416f34af9ac58147bc32a9d2f6a0803326e57d0558b0d42caa49c7d58a37.jpg) + +![](images/d79e42266230b5163bdf7abe943e7e21b5e443d04c9f5da2e1b8e707f2a6af14.jpg) + +![](images/f346dcfd56b0f5ea69f47650c98f4d8d2b267d5021cb0393f6226248b5abda1c.jpg) + +![](images/0eb4767a3c62f1da0f390bd346faece81f04e86895727f2ee2f24cb43fafa8e7.jpg) + +![](images/b07791babf539597077cc535aa818b767dcb292ebb7679fb5a83a9b63c9dce05.jpg) +Figure 10: Increasing the range of the training uncertainty set for Cartpole balance (top row) and Pendulum swingup (bottom row). + +![](images/ff0dc6f96e6e906802fdef61e702bb71e1a6b285a2fc3959f98c33ba556d815d.jpg) + +![](images/f9f42bb48de01272aa327316a68f30d4e6f686bb345340aae88986c4d6e0a03b.jpg) + +![](images/e8657422a4645734866a90e2e470bec567ae88167e99d61a637902f39e4cd3c6.jpg) +Figure 11: Comparing entropy-regularized objective to the non-entropy regularized objective (left figure). 
The entropy-regularized version does no worse than the non-entropy-regularized setup and in some cases, for example Cheetah, performs considerably better than the expected return objective (right figure).

![](images/a3807ae57c05a218e90ea4af6640eca1db8982b08fe42063fb3d9d33cab1bf28.jpg)

Cheetah task for the entropy-regularized agent variants compared to the non-entropy-regularized agent variants (right figure).

Training with more samples: Adding three times more samples to the non-robust baseline still yields significantly inferior performance compared to that of the robust and soft-robust versions, as seen in Figure 12 for Cartpole balance and Pendulum swingup respectively.

What about Domain Randomization? The DR results are shown in Figure 13. As can be seen in the figure, RE-MPO makes better use of a limited number of perturbations compared to Limited-DR in Cartpole Balance (left) and Pendulum Swingup (middle) respectively. If the number of perturbations is increased to 100 (right figure) for Pendulum Swingup, DR, which uses approximately 30 times more perturbations, improves but still does not outperform RE-MPO.

Modifying the uncertainty set: Figure 14 contains the performance for Cartpole balance (top row) and Pendulum swingup (bottom row) when modifying the uncertainty set. For the Cartpole Balance task, the original uncertainty set training values are 0.5, 1.4 and 2.1 meters for the cartpole arm length. We modified the third perturbation (2.1 meters) of the uncertainty set to pole lengths of 1.5, 2.5 and 3.5 meters respectively. The agent is evaluated on pole lengths of 2.0, 2.2 and 2.3 meters respectively. As seen in the top row of Figure 14, when the training perturbation is near the evaluation set, the performance of the robust and soft-robust agents is near optimal. However, as the perturbation increases further (i.e., 3.5 meters), there is a drop in robustness performance.
![](images/b9eddf14fce9d34b6f5f6dcc073481170e5d3334e8c116704e7a0b8f74e34f39.jpg)
Figure 12: Additional Training Samples: Two plots show 3 times more additional training samples for non-robust E-MPO (dark grey) in the Cartpole Balance and Pendulum Swingup tasks respectively.

![](images/49ae24df343332d09e259da7d7aae3e1822fe16e78cbfa037d9b283c76748bb9.jpg)

![](images/b8b2fa3d57c26001af8e5fd6e7dccaebb24b7c543a50778f2aed077b58db62e3.jpg)
Figure 13: Domain Randomization (DR): Domain randomization performance for the Cartpole balance (left) and Pendulum swingup (middle) tasks. As we increase the number of perturbations for DR to 100 (right figure), we see that performance improves but still does not outperform RE-MPO, which still only uses 3 perturbations.

![](images/68cf67ea1b4758ee6835dffad3820624d3d11b76099765b98c90a6e4cf77985f.jpg)

![](images/191e9537d0f0e33f634bc64ecd87f3702838c3e0d9f6be778e8c3ca58554b96c.jpg)

This is probably due to the agent learning a policy that is robust with respect to perturbations that are relatively far from the unseen evaluation set. However, the agent still performs significantly better than the non-robust baseline in each case. For Pendulum Swingup, the original uncertainty set values of the pendulum arm are 1.0, 1.1 and 1.4 meters. We modified the final perturbation to values of 1.2, 1.3 and 2.0 meters respectively. The agent is evaluated on unseen lengths of 1.5, 1.6 and 1.7 meters. A significant increase in performance can be seen in the bottom row of Figure 14 as the third perturbation approaches that of the unseen evaluation environments. Thus it appears that if the agent is able to approximately capture the dynamics of the unseen test environments within the training set, then the robust agent is able to adapt to the unseen test environments.
Figure 10 presents the evaluation curves for the corresponding Cartpole Balance (top row) and Pendulum swingup (bottom row) tasks as the third perturbation of the uncertainty set is modified.

Different Nominal Models: Figure 15 indicates the effect of changing the nominal model to the median and largest perturbation from the uncertainty set for the Cartpole balance (top row) and Pendulum swingup (bottom row) tasks respectively. For Cartpole, since the median and largest perturbations are significantly closer to the evaluation set, the performance of the non-robust, robust and soft-robust agents is comparable. However, for Pendulum swingup, the middle actor is still far from the evaluation set and here the robust agent significantly outperforms the non-robust agent.

![](images/970b94700f14a3bb2bb771b87617431e8dfc286c9e8fe0c47f379a052cfdb3e3.jpg)

![](images/f49f9e1e6e574f415489940423c17867e0806a7e6150f63a40cd0383b538c98f.jpg)

![](images/b8378cb642990ed50c3f8de5ca65e0f15c21a8c948340651e727473e0c981c4c.jpg)

![](images/6a1de5352d3bc1a2931096d9f6629ac3f32cc7652a296c81e8c7ce27d03907a1.jpg)
Figure 14: Modifying the uncertainty set: The top row indicates the change in performance for Cartpole balance as the third perturbation of the uncertainty set is modified to 1.5, 2.5 and 3.5 meters respectively. The bottom row shows the performance for Pendulum Swingup for final perturbation changes of 1.2, 1.3 and 2.0 meters respectively.
![](images/07d57afc469a096c5c16489c855a517f3dc354304877b6a7c31f17e6df3591e6.jpg)

![](images/247526c9eed6954e7b6bceb368f61a6fd11d9506ede9481a95f6d1a33e4f6fc3.jpg)

![](images/78ba63a2c1226fda38a9c016c9b22c2ce076b7a121cd1dfaa9b3ff5ecb29aa1e.jpg)

![](images/f7de7c3239bce5ada99a0974473a75e4fb9c1292430d6d6d14d8050ac54213e7.jpg)

![](images/d6344e4a7fb184884036cedc0c889a1c3fbef8289b70490ea364457444b8582d.jpg)
Figure 15: Changing the nominal model: The top two figures indicate setting the nominal model as the median and largest perturbation of the uncertainty set for Cartpole Balance respectively. The bottom two figures are the same setting but for the Pendulum swingup domain. Legend: E-MPO (red), RE-MPO (blue), SRE-MPO (green).

![](images/c589c2384743cf2624793a59e7e9711fc91a37fc23c7089f07ec910b7c1bd950.jpg)

Algorithm 1 Robust MPO (R-MPO) algorithm for a single iteration
1: given batch-size (K), number of actions (N), old-policy $\pi_{k}$ and replay-buffer
2: // Step 1: Perform policy evaluation on $\pi_{k}$ to yield $Q_{\theta}^{\pi_k}$
3: $\min_{\theta}\left(r_t + \gamma \inf_{p\in \mathcal{P}(s_t,a_t)}\left[Q_{\hat{\theta}}^{\pi_k}(s_{t + 1}\sim p(\cdot |s_t,a_t),a_{t + 1}\sim \pi_k(\cdot |s_{t + 1}))\right] - Q_{\theta}^{\pi_k}(s_t,a_t)\right)^2$
4: repeat
5: Sample batch of size K from replay buffer
6: // Step 2: sample based policy (weights)
7: $q(a_i|s_j) = q_{ij}$, computed as:
8: for $j = 1,\dots,K$ do
9: for $i = 1,\dots,N$ do
10: $a_i\sim \pi_k(a|s_j)$
11: $Q_{ij} = Q^{\pi_k}(s_j,a_i)$
12: $q_{ij} = \text{Compute Weights}(\{Q_{ij}\}_{i = 1\dots N})$ {See (Abdolmaleki et al., 2018b)}
13: // Step 3: update parametric policy
14: Given the data-set $\{s_j,(a_i,q_{ij})_{i = 1\dots N}\}_{j = 1\dots K}$
15: Update the Policy by finding
16: $\pi_{k + 1} = \operatorname{argmax}_{\pi}\sum_{j}^{K}\sum_{i}^{N}q_{ij}\log \pi (a_{i}|s_{j})$
17: (subject to additional (KL) regularization)
18: until Fixed number of steps
19: return
$\pi_{k + 1}$

# I ALGORITHM

The Robust MPO algorithm is defined as Algorithm 1. The algorithm can be divided into three steps: Step (1) perform policy evaluation on the policy $\pi_{k}$; Step (2) build a proposal distribution $q(a|s)$ from the action value function $Q_{\theta}^{\pi_k}$; Step (3) update the policy by minimizing the KL divergence between the proposal distribution $q$ and the policy $\pi$. The corresponding robust entropy-regularized version can be seen in Algorithm 2 and the soft-robust entropy-regularized version in Algorithm 3.

Algorithm 2 Robust Entropy-Regularized MPO (RE-MPO) algorithm for a single iteration
1: given batch-size (K), number of actions (N), old-policy $\pi_{k}$ and replay-buffer
2: // Step 1: Perform policy evaluation on $\pi_{k}$ to yield $Q_{\theta}^{\pi_k}$
3: $\min_{\theta}\Bigg(r_t + \gamma \inf_{p\in \mathcal{P}(s_t,a_t)}\Big[\widetilde{Q}_{\mathrm{R\text{-}KL},\hat{\theta}}^{\pi_k}(s_{t + 1}\sim p(\cdot |s_t,a_t),a_{t + 1}\sim \pi_k(\cdot |s_{t + 1});\bar{\pi}) - \tau \mathrm{KL}(\pi_k(\cdot |s_{t + 1}\sim p(\cdot |s_t,a_t))\,\|\,\bar{\pi}(\cdot |s_{t + 1}\sim p(\cdot |s_t,a_t)))\Big] - \widetilde{Q}_{\mathrm{KL},\theta}^{\pi_k}(s_t,a_t;\bar{\pi})\Bigg)^2$
4: repeat
5: Sample batch of size K from replay buffer
6: // Step 2: sample based policy (weights)
7: $q(a_i|s_j) = q_{ij}$, computed as:
8: for $j = 1,\dots,K$ do
9: for $i = 1,\dots,N$ do
10: $a_i\sim \pi_k(a|s_j)$
11: $Q_{ij} = Q^{\pi_k}(s_j,a_i)$
12: $q_{ij} = \text{Compute Weights}(\{Q_{ij}\}_{i = 1\dots N})$ {see (Abdolmaleki et al., 2018b)}
13: // Step 3: update parametric policy
14: Given the data-set $\{s_j,(a_i,q_{ij})_{i = 1\dots N}\}_{j = 1\dots K}$
15: Update the Policy by finding
16: $\pi_{k + 1} = \operatorname{argmax}_{\pi}\sum_{j}^{K}\sum_{i}^{N}q_{ij}\log \pi (a_{i}|s_{j})$
17: (subject to additional (KL) regularization)
18: until Fixed number of steps
19: return $\pi_{k + 1}$

Algorithm 3 Soft-Robust
Entropy-Regularized MPO (SRE-MPO) algorithm for a single iteration
1: given batch-size (K), number of actions (N), old-policy $\pi_{k}$ and replay-buffer
2: // Step 1: Perform policy evaluation on $\pi_{k}$ to yield $Q_{\theta}^{\pi_k}$
3: $\min_{\theta}\Bigg(r_t + \gamma \Big[\widetilde{Q}_{\mathrm{R\text{-}KL},\hat{\theta}}^{\pi_k}(s_{t + 1}\sim \bar{p}(\cdot |s_t,a_t),a_{t + 1}\sim \pi_k(\cdot |s_{t + 1});\bar{\pi}) - \tau \mathrm{KL}(\pi_k(\cdot |s_{t + 1}\sim \bar{p}(\cdot |s_t,a_t))\,\|\,\bar{\pi}(\cdot |s_{t + 1}\sim \bar{p}(\cdot |s_t,a_t)))\Big] - \widetilde{Q}_{\mathrm{KL},\theta}^{\pi_k}(s_t,a_t;\bar{\pi})\Bigg)^2$
4: repeat
5: Sample batch of size K from replay buffer
6: // Step 2: sample based policy (weights)
7: $q(a_i|s_j) = q_{ij}$, computed as:
8: for $j = 1,\dots,K$ do
9: for $i = 1,\dots,N$ do
10: $a_i\sim \pi_k(a|s_j)$
11: $Q_{ij} = Q^{\pi_k}(s_j,a_i)$
12: $q_{ij} = \text{Compute Weights}(\{Q_{ij}\}_{i = 1\dots N})$ {see (Abdolmaleki et al., 2018b)}
13: // Step 3: update parametric policy
14: Given the data-set $\{s_j,(a_i,q_{ij})_{i = 1\dots N}\}_{j = 1\dots K}$
15: Update the Policy by finding
16: $\pi_{k + 1} = \operatorname{argmax}_{\pi}\sum_{j}^{K}\sum_{i}^{N}q_{ij}\log \pi (a_{i}|s_{j})$
17: (subject to additional (KL) regularization)
18: until Fixed number of steps
19: return $\pi_{k + 1}$ \ No newline at end of file diff --git a/robustreinforcementlearningforcontinuouscontrolwithmodelmisspecification/images.zip b/robustreinforcementlearningforcontinuouscontrolwithmodelmisspecification/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..989e494342dd25de18b79a9610e3d1059f719b41 --- /dev/null +++ b/robustreinforcementlearningforcontinuouscontrolwithmodelmisspecification/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4e22ea861a29854d4074e33570d73a5753f05d574280966f679b2fca75792ea5 +size 1616986 diff --git
a/robustreinforcementlearningforcontinuouscontrolwithmodelmisspecification/layout.json b/robustreinforcementlearningforcontinuouscontrolwithmodelmisspecification/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..ebb27ea00aef05400e76419944e3509ea2506920 --- /dev/null +++ b/robustreinforcementlearningforcontinuouscontrolwithmodelmisspecification/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:796fd3b20fc8d8b573efdb61e148c69b2da65641789f19c83193880d280f6925 +size 798759 diff --git a/robustsubspacerecoverylayerforunsupervisedanomalydetection/28ad0447-b410-49f8-9985-ff22a5cb28cc_content_list.json b/robustsubspacerecoverylayerforunsupervisedanomalydetection/28ad0447-b410-49f8-9985-ff22a5cb28cc_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..58f8ead0301d9ffef9795236caa4d528d390d980 --- /dev/null +++ b/robustsubspacerecoverylayerforunsupervisedanomalydetection/28ad0447-b410-49f8-9985-ff22a5cb28cc_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:19a10baa407e36ca76ed41d96589cd743db6665d1ff8caaf5d563d91509edcd2 +size 162595 diff --git a/robustsubspacerecoverylayerforunsupervisedanomalydetection/28ad0447-b410-49f8-9985-ff22a5cb28cc_model.json b/robustsubspacerecoverylayerforunsupervisedanomalydetection/28ad0447-b410-49f8-9985-ff22a5cb28cc_model.json new file mode 100644 index 0000000000000000000000000000000000000000..293dc5a398f5a0c2bdcc26bc571701bae625f0ca --- /dev/null +++ b/robustsubspacerecoverylayerforunsupervisedanomalydetection/28ad0447-b410-49f8-9985-ff22a5cb28cc_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d3682f575390726097874ba270f558abbee6020212ea347b979ba239ffc46828 +size 187319 diff --git a/robustsubspacerecoverylayerforunsupervisedanomalydetection/28ad0447-b410-49f8-9985-ff22a5cb28cc_origin.pdf 
b/robustsubspacerecoverylayerforunsupervisedanomalydetection/28ad0447-b410-49f8-9985-ff22a5cb28cc_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f6f2b69dba082a1cde5d08c157958ba19fbc942b --- /dev/null +++ b/robustsubspacerecoverylayerforunsupervisedanomalydetection/28ad0447-b410-49f8-9985-ff22a5cb28cc_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:44b579f64084bcc65b1f9ff7d20ebfdabed8ef6aabe644549e09f896b5bb4ff5 +size 19179942 diff --git a/robustsubspacerecoverylayerforunsupervisedanomalydetection/full.md b/robustsubspacerecoverylayerforunsupervisedanomalydetection/full.md new file mode 100644 index 0000000000000000000000000000000000000000..7f38613450a3b2cacc8694b55992cbf01ba82e99 --- /dev/null +++ b/robustsubspacerecoverylayerforunsupervisedanomalydetection/full.md @@ -0,0 +1,747 @@ +# ROBUST SUBSPACE RECOVERY LAYER FOR UNSUPERVISED ANOMALY DETECTION + +Chieh-Hsin Lai* Dongmian Zou* & Gilad Lerman + +School of Mathematics + +University of Minnesota + +Minneapolis, MN 55455 + +{laixx313, dzou, lerman}@umn.edu + +# ABSTRACT + +We propose a neural network for unsupervised anomaly detection with a novel robust subspace recovery layer (RSR layer). This layer seeks to extract the underlying subspace from a latent representation of the given data and removes outliers that lie away from this subspace. It is used within an autoencoder. The encoder maps the data into a latent space, from which the RSR layer extracts the subspace. The decoder then smoothly maps back the underlying subspace to a "manifold" close to the original inliers. Inliers and outliers are distinguished according to the distances between the original and mapped positions (small for inliers and large for outliers). Extensive numerical experiments with both image and document datasets demonstrate state-of-the-art precision and recall. 
+ +# 1 INTRODUCTION + +Finding and utilizing patterns in data is a common task for modern machine learning systems. However, there is often some anomalous information that does not follow a common pattern and has to be recognized. For this purpose, anomaly detection aims to identify data points that "do not conform to expected behavior" (Chandola et al., 2009). We refer to such points as either anomalous or outliers. In many applications, there is no ground truth available to distinguish anomalous from normal points, and they need to be detected in an unsupervised fashion. For example, one may need to remove anomalous images from a set of images obtained by a search engine without any prior knowledge about how a normal image should look (Xia et al., 2015). Similarly, one may need to distinguish unusual news items from a large collection of news documents without any information whether a news item is usual or not (Kannan et al., 2017). In these examples, the only assumptions are that normal data points appear more often than anomalous ones and have a simple underlying structure which is unknown to the user. + +Some early methods for anomaly detection relied on Principal Component Analysis (PCA) (Shyu et al., 2003). Here one assumes that the underlying unknown structure of the normal samples is linear. However, PCA is sensitive to outliers and will often not succeed in recovering the linear structure or identifying the outliers (Lerman & Maunu, 2018; Vaswani & Narayanamurthy, 2018). More recent ideas of Robust PCA (RPCA) (Wright et al., 2009; Vaswani & Narayanamurthy, 2018) have been considered for some specific problems of anomaly detection or removal (Zhou & Paffenroth, 2017; Paffenroth et al., 2018). RPCA assumes sparse corruption, that is, few elements of the data matrix are corrupted. 
This assumption is natural for some special problems in computer vision, in particular, background subtraction (De La Torre & Black, 2003; Wright et al., 2009; Vaswani & Narayanamurthy, 2018). However, a natural setting of anomaly detection with hidden linear structure may assume instead that a large portion of the data points are fully corrupted. The mathematical framework that addresses this setting is referred to as robust subspace recovery (RSR) (Lerman & Maunu, 2018). + +While Robust PCA and RSR try to extract linear structure or identify outliers lying away from such structure, the underlying geometric structure of many real datasets is nonlinear. Therefore, one + +needs to extract crucial features of the nonlinear structure of the data while being robust to outliers. In order to achieve this goal, we propose to use an autoencoder (composed of an encoder and a decoder) with an RSR layer. We refer to it as RSRAE (RSR autoencoder). It aims to robustly and nonlinearly reduce the dimension of the data in the following way. The encoder maps the data into a high-dimensional space. The RSR layer linearly maps the embedded points into a low-dimensional subspace that aims to learn the hidden linear structure of the embedded normal points. The decoder maps the points from this subspace to the original space. It aims to map the normal points near their original locations, and the anomalous points far from their original locations. + +Ideally, the encoder maps the normal data to a linear space and any anomalies lie away from this subspace. In this ideal scenario, anomalies can be removed by an RSR method directly applied to the data embedded by the encoder. Since the linear model for the normal data embedded by the encoder is only approximate, we do not directly apply RSR to the embedded data. Instead, we minimize a sum of the reconstruction error of the autoencoder and the RSR error for the data embedded by the encoder. 
We advocate for an alternating procedure, so that the parameters of the autoencoder and the RSR layer are optimized in turn. + +# 1.1 STRUCTURE OF THE REST OF THE PAPER + +Section 2 reviews works that are directly related to the proposed RSRAE and highlights the original contributions of this paper. Section 3 explains the proposed RSRAE, and in particular, its RSR layer and total energy function. Section 4 includes extensive experimental evidence demonstrating effectiveness of RSRAE with both image and document data. Section 5 discusses theory for the relationship of the RSR penalty with the WGAN penalty. Section 6 summarizes this work and mentions future directions. + +# 2 RELATED WORKS AND CONTRIBUTION + +We review related works in Section 2.1 and highlight our contribution in Section 2.2. + +# 2.1 RELATED WORKS + +Several recent works have used autoencoders for anomaly detection. Xia et al. (2015) proposed the earliest work on anomaly detection via an autoencoder, while utilizing large reconstruction error of outliers. They apply an iterative and cyclic scheme, where in each iteration, they determine the inliers and use them for updating the parameters of the autoencoder. Aytekin et al. (2018) apply $\ell_2$ normalization for the latent code of the autoencoder and also consider the case of multiple modes for the normal samples. Instead of using the reconstruction error, they apply $k$ -means clustering for the latent code, and identify outliers as points whose latent representations are far from all the cluster centers. Zong et al. (2018) also use an autoencoder with clustered latent code, but they fit a Gaussian Mixture Model using an additional neural network. Restricted Boltzmann Machines (RBMs) are similar to autoencoders. Zhai et al. (2016) define "energy functions" for RBMs that are similar to the reconstruction losses for autoencoders. They identify anomalous samples according to large energy values. Chalapathy et al. 
(2017) propose using ideas of RPCA within an autoencoder, where they alternatively optimize the parameters of the autoencoder and a sparse residual matrix. + +The above works are designed for datasets with a small fraction of outliers. However, when this fraction increases, outliers are often not distinguished by high reconstruction errors or low similarity scores. In order to identify them, additional assumptions on the structure of the normal data need to be incorporated. For example, Zhou & Paffenroth (2017) decompose the input data into two parts: low-rank and sparse (or column-sparse). The low-rank part is fed into an autoencoder and the sparse part is imposed as a penalty term with the $\ell_1$ -norm (or $\ell_{2,1}$ -norm for column-sparsity). + +In this work, we use a term analogous to the $\ell_{2,1}$ -norm, which can be interpreted as the sum of absolute deviations from a latent subspace. However, we do not decompose the data a priori, but minimize an energy combining this term and the reconstruction error. Minimization of the former term is known as least absolute deviations in RSR (Lerman & Maunu, 2018). It was first suggested for RSR and related problems in Watson (2001); Ding et al. (2006); Zhang et al. (2009). The robustness to outliers of this energy, or of relaxed versions of it, was studied in McCoy & Tropp (2011); Xu + +et al. (2012); Lerman & Zhang (2014); Zhang & Lerman (2014); Lerman et al. (2015); Lerman & Maunu (2017); Maunu et al. (2017). In particular, Maunu et al. (2017) established its well-behaved landscape under special, though natural, deterministic conditions. Under similar conditions, they guaranteed fast subspace recovery by a simple algorithm that aims to minimize this energy. + +Another directly related idea for extracting useful latent features is an addition of a linear self-expressive layer to an autoencoder (Ji et al., 2017). It is used in the different setting of unsupervised subspace clustering. 
By imposing self-expressiveness, the autoencoder becomes robust to an increasing number of clusters. Although self-expressiveness also improves robustness to noise and outliers, Ji et al. (2017) aims at clustering and thus its goal is different from ours. Furthermore, their self-expressive energy does not explicitly consider robustness, while ours does. Lezama et al. (2018) consider a somewhat parallel idea of imposing a loss function to increase the robustness of the representation. However, their goal is to increase the margin between classes and their method only applies to a supervised setting of anomaly detection, where the normal data is multi-modal.

# 2.2 CONTRIBUTION OF THIS WORK

This work introduces an RSR layer within an autoencoder. It incorporates a special regularizer that enforces an outlier-robust linear structure in the embedding obtained by the encoder. We clarify that the method does not alternate between application of the autoencoder and the RSR layer, but fully integrates these two components. Our experiments demonstrate that a simple incorporation of a "robust loss" within a regular autoencoder does not work well for anomaly detection. We try to explain this and also the improvement obtained by incorporating an additional RSR layer.

Our proposed architecture is simple to implement. Furthermore, the RSR layer is not limited to a specific design of RSRAE but can be incorporated into any well-designed autoencoder structure. The epoch time of the proposed algorithm is comparable to that of other common autoencoders. Moreover, our experiments show that RSRAE performs competitively in unsupervised anomaly detection tasks.

RSRAE addresses the unsupervised setting, but is not designed to be highly competitive in the semi-supervised or supervised settings, where one has access to training data from the normal class or from both classes, respectively.
In these settings, RSRAE functions like a regular autoencoder without taking advantage of its RSR layer, unless the training data for the normal class is corrupted with outliers.

The use of RSR is not restricted to autoencoders. We present some preliminary analysis for RSR within a generative adversarial network (GAN) (Goodfellow et al., 2014; Arjovsky et al., 2017) in Section 5. More precisely, we show that a linear WGAN intrinsically incorporates RSR in some special settings, although it is unclear how to impose an RSR layer.

# 3 RSR LAYER FOR OUTLIER REMOVAL

We assume input data $\{\mathbf{x}^{(t)}\}_{t = 1}^{N}$ in $\mathbb{R}^M$, and denote by $\mathbf{X}$ its corresponding data matrix, whose $t$-th column is $\mathbf{x}^{(t)}$. The encoder of RSRAE, $\mathcal{E}$, is a neural network that maps each data point, $\mathbf{x}^{(t)}$, to its latent code $\mathbf{z}^{(t)} = \mathcal{E}(\mathbf{x}^{(t)}) \in \mathbb{R}^D$. The RSR layer is a linear transformation $\mathbf{A} \in \mathbb{R}^{d \times D}$ that reduces the dimension to $d$. That is, $\tilde{\mathbf{z}}^{(t)} = \mathbf{A}\mathbf{z}^{(t)} \in \mathbb{R}^d$. The decoder $\mathcal{D}$ is a neural network that maps $\tilde{\mathbf{z}}^{(t)}$ to $\tilde{\mathbf{x}}^{(t)}$ in the original ambient space $\mathbb{R}^M$.

We can write the forward maps in a compact form using the corresponding data matrices as follows:

$$
\mathbf{Z} = \mathcal{E}(\mathbf{X}), \quad \tilde{\mathbf{Z}} = \mathbf{A}\mathbf{Z}, \quad \tilde{\mathbf{X}} = \mathcal{D}(\tilde{\mathbf{Z}}). \tag{1}
$$

Ideally, we would like to optimize RSRAE so it only maintains the underlying structure of the normal data. We assume that the original normal data lies on a $d$-dimensional "manifold" in $\mathbb{R}^D$ and thus the RSR layer embeds its latent code into $\mathbb{R}^d$.
In this ideal optimization setting, the similarity between the input and the output of RSRAE is large whenever the input is normal and small whenever the input is anomalous. Therefore, by thresholding a similarity measure, one may distinguish between normal and anomalous data points. + +In practice, the matrix $\mathbf{A}$ and the parameters of $\mathcal{E}$ and $\mathcal{D}$ are obtained by minimizing a loss function, which is a sum of two parts: the reconstruction loss from the autoencoder and the loss from the RSR + +layer. For $p > 0$ , an $\ell_{2,p}$ reconstruction loss for the autoencoder is + +$$ +L _ {\mathrm {A E}} ^ {p} (\mathcal {E}, \mathbf {A}, \mathcal {D}) = \sum_ {t = 1} ^ {N} \left\| \mathbf {x} ^ {(t)} - \tilde {\mathbf {x}} ^ {(t)} \right\| _ {2} ^ {p}. \tag {2} +$$ + +In order to motivate our choice of RSR loss, we review a common formulation for the original RSR problem. In this problem one needs to recover a linear subspace, or equivalently an orthogonal projection $\mathbf{P}$ onto this subspace. Assume a dataset $\{\mathbf{y}^{(t)}\}_{t = 1}^{N}$ and let $\mathbf{I}$ denote the identity matrix in the ambient space of the dataset. The goal is to find an orthogonal projector $\mathbf{P}$ of dimension $d$ whose subspace robustly approximates this dataset. The least $q$ -th power deviations formulation for $q > 0$ , or least absolute deviations when $q = 1$ (Lerman & Maunu, 2018), seeks $\mathbf{P}$ that minimizes + +$$ +\hat {L} (\mathbf {P}) = \sum_ {t = 1} ^ {N} \left\| (\mathbf {I} - \mathbf {P}) \mathbf {y} ^ {(t)} \right\| _ {2} ^ {q}. \tag {3} +$$ + +The solution of this problem is robust to some outliers when $q \leq 1$ (Lerman & Zhang, 2014; Lerman & Maunu, 2017); furthermore, $q < 1$ can result in a wealth of local minima and thus $q = 1$ is preferable (Lerman & Zhang, 2014; Lerman & Maunu, 2017). 
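To make (3) concrete, the energy can be evaluated in a few lines of NumPy. The following sketch is our own illustration, not code from the paper: the function name and toy dataset are hypothetical. It computes the least $q$-th power deviations energy of a candidate subspace given by an orthonormal basis, and shows that, with $q = 1$, the true underlying subspace of the inliers attains a much smaller energy than a wrong subspace, even in the presence of gross outliers.

```python
import numpy as np

def lad_subspace_energy(Y, B, q=1.0):
    """Least q-th power deviations energy (3) of a candidate subspace.

    Y : (D, N) data matrix with points as columns.
    B : (D, d) orthonormal basis of the candidate subspace, so P = B B^T.
    Returns sum_t ||(I - P) y^(t)||_2^q.
    """
    P = B @ B.T                       # orthogonal projector onto span(B)
    residuals = Y - P @ Y             # (I - P) y^(t) for every column
    return np.sum(np.linalg.norm(residuals, axis=0) ** q)

# Toy example: inliers near the x-axis plus a few gross outliers.
rng = np.random.default_rng(0)
inliers = np.vstack([rng.normal(size=50), 0.01 * rng.normal(size=50)])
outliers = 5.0 * rng.normal(size=(2, 5))
Y = np.hstack([inliers, outliers])

B_true = np.array([[1.0], [0.0]])     # the x-axis (true inlier subspace)
B_bad = np.array([[0.0], [1.0]])      # an orthogonal (wrong) subspace
assert lad_subspace_energy(Y, B_true) < lad_subspace_energy(Y, B_bad)
```

With $q = 1$ each point contributes its distance to the subspace, so a few far-away outliers cannot dominate the sum the way they do for the squared ($q = 2$) energy.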
A similar loss function to (3) for RSRAE is

$$
L_{\mathrm{RSR}}^{q}(\mathbf{A}) = \lambda_{1} L_{\mathrm{RSR}_{1}}(\mathbf{A}) + \lambda_{2} L_{\mathrm{RSR}_{2}}(\mathbf{A}) := \lambda_{1} \sum_{t = 1}^{N} \Big\| \mathbf{z}^{(t)} - \mathbf{A}^{\mathrm{T}} \underbrace{\mathbf{A}\mathbf{z}^{(t)}}_{\tilde{\mathbf{z}}^{(t)}} \Big\|_{2}^{q} + \lambda_{2} \left\| \mathbf{A}\mathbf{A}^{\mathrm{T}} - \mathbf{I}_{d} \right\|_{\mathrm{F}}^{2}, \tag{4}
$$

where $\mathbf{A}^{\mathrm{T}}$ denotes the transpose of $\mathbf{A}$, $\mathbf{I}_d$ denotes the $d\times d$ identity matrix and $\| \cdot \|_{\mathrm{F}}$ denotes the Frobenius norm. Here $\lambda_1,\lambda_2 > 0$ are predetermined hyperparameters, though we later show that one may solve the underlying problem without using them. We note that the first term in the weighted sum of (4) is close to (3) as long as $\mathbf{A}^{\mathrm{T}}\mathbf{A}$ is close to an orthogonal projector. To enforce this requirement we introduced the second term in the weighted sum of (4). In Appendix C we discuss further properties of the RSR energy and its minimization.

To emphasize the effect of outlier removal, we take $p = 1$ in (2) and $q = 1$ in (4). That is, we use the $\ell_{2,1}$ norm, or the formulation of least absolute deviations, for both reconstruction and RSR. The loss function of RSRAE is the sum of the two loss terms in (2) and (4), that is,

$$
L_{\mathrm{RSRAE}}(\mathcal{E}, \mathbf{A}, \mathcal{D}) = L_{\mathrm{AE}}^{1}(\mathcal{E}, \mathbf{A}, \mathcal{D}) + L_{\mathrm{RSR}}^{1}(\mathbf{A}). \tag{5}
$$

We remark that the sole minimization of $L_{\mathrm{AE}}^{1}$, without $L_{\mathrm{RSR}}^{1}$, is not effective for anomaly detection. We numerically demonstrate this in Section 4.3 and also try to explain it in Section 5.1.
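The pieces of the objective can be assembled as follows. This NumPy sketch is our own illustration (in practice $\mathcal{E}$ and $\mathcal{D}$ are neural networks and the terms are backpropagated); it evaluates (5), i.e., the $\ell_{2,1}$ reconstruction loss (2) with $p = 1$ plus the RSR loss (4) with $q = 1$, for given reconstructions, latent codes, and RSR matrix.

```python
import numpy as np

def rsrae_loss(X, X_hat, Z, A, lam1=1.0, lam2=1.0):
    """Evaluate the RSRAE objective (5): the l_{2,1} reconstruction
    loss (2) with p = 1 plus the RSR loss (4) with q = 1.

    X, X_hat : (M, N) input and reconstruction, points as columns.
    Z        : (D, N) latent codes z^(t) produced by the encoder.
    A        : (d, D) RSR matrix.
    """
    l_ae = np.sum(np.linalg.norm(X - X_hat, axis=0))                    # eq. (2), p = 1
    l_rsr1 = np.sum(np.linalg.norm(Z - A.T @ (A @ Z), axis=0))          # first term of (4), q = 1
    l_rsr2 = np.linalg.norm(A @ A.T - np.eye(A.shape[0]), 'fro') ** 2   # second term of (4)
    return l_ae + lam1 * l_rsr1 + lam2 * l_rsr2

# Sanity check: if A has orthonormal rows, Z lies in the row space of A,
# and the reconstruction is perfect, every term of (5) vanishes.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])                    # d = 2, D = 3
Z = np.array([[1.0, -2.0],
              [0.5, 3.0],
              [0.0, 0.0]])                         # third latent coordinate is zero
X = np.random.default_rng(0).normal(size=(4, 2))
assert abs(rsrae_loss(X, X, Z, A)) < 1e-12
```

The sanity check at the end illustrates the remark above: both RSR terms vanish exactly when $\mathbf{A}$ has orthonormal rows and the latent codes already lie in its row space.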
+ +Our proposed algorithm for optimizing (5), which we refer to as the RSRAE algorithm, uses alternating minimization. It iteratively backpropagates the three terms $L_{\mathrm{AE}}^{1}$ , $L_{\mathrm{RSR}_{1}}$ , $L_{\mathrm{RSR}_{2}}$ and accordingly updates the parameters of the RSR autoencoder. For clarity, we describe this basic procedure in Algorithm 1 of Appendix A. It is independent of the values of the parameters $\lambda_{1}$ and $\lambda_{2}$ . Note that the additional gradient step with respect to the RSR loss just updates the parameters in A. Therefore it does not significantly increase the epoch time of a standard autoencoder for anomaly detection. Another possible method, which we refer to as RSRAE+, is direct minimization of $L_{\mathrm{RSRAE}}$ with predetermined $\lambda_{1}$ and $\lambda_{2}$ via auto-differentiation (see Algorithm 2 of Appendix A). Section 4.3 and Appendix I.2 demonstrate that in general, RSRAE performs better than RSRAE+, though it is possible that similar performance can be achieved by carefully tuning the parameters $\lambda_{1}$ and $\lambda_{2}$ when implementing RSRAE+. + +We remark that a standard autoencoder is obtained by minimizing only $L_{\mathrm{AE}}^2$ , without the RSR loss. One might hope that minimizing $L_{\mathrm{AE}}^1$ may introduce the needed robustness. However, Section 4.3 and Appendix I.2 demonstrate that results obtained by minimizing $L_{\mathrm{AE}}^1$ or $L_{\mathrm{AE}}^2$ are comparable, and are worse than those of RSRAE and RSRAE+. + +# 4 EXPERIMENTAL RESULTS + +We test our method on five datasets: Caltech 101 (Fei-Fei et al., 2007), Fashion-MNIST (Xiao et al., 2017), Tiny Imagenet (a small subset of Imagenet (Russakovsky et al., 2015)), Reuters-21578 (Lewis, 1997) and 20 Newsgroups (Lang, 1995). + +Caltech 101 contains 9,146 RGB images labeled according to 101 distinct object categories. We take the 11 categories that contain at least 100 images and randomly choose 100 images per category. 
We preprocess all 1100 images to have size $32 \times 32 \times 3$ and pixel values normalized between $-1$ and $1$ . In each experiment, the inliers are the 100 images from a certain category and we sample $c \times 100$ outliers from the rest of 1000 images of other categories, where $c \in \{0.1, 0.3, 0.5, 0.7, 0.9\}$ . + +Fashion-MNIST contains $28 \times 28$ grayscale images of clothing and accessories, which are categorized into 10 classes. We use the test set which contains 10,000 images and normalize pixel values to lie in $[-1,1]$ . In each experiment, we fix a class and the inliers are the test images in this class. We randomly sample $c \times 1,000$ outliers from the rest of classes (here and below $c$ is as above). Since there are around 1000 test images in each class, the outlier ratio is approximately $c$ . + +Tiny Imagenet contains 200 classes of RGB images from a distinct subset of Imagenet. We select 10 classes with 500 training images per class. We preprocess the images to have size $32 \times 32 \times 3$ and pixel values in $[-1,1]$ . We further represent the images by deep features obtained by a ResNet (He et al., 2016) with dimension 256 (Appendix I.1 provides results for the raw images). In each experiment, 500 inliers are from a fixed class and $c \times 500$ outliers are from the rest of classes. + +Reuters-21578 contains 90 text categories with multi-labels. We consider the five largest classes with single labels and randomly sample from them 360 documents per class. The documents are preprocessed into vectors of size 26,147 by sequentially applying the TFIDF transformer and Hashing vectorizer (Rajaraman & Ullman, 2011). In each experiment, the inliers are the documents of a fixed class and $c \times 360$ outliers are randomly sampled from the other classes. + +20 Newsgroups contains newsgroup documents with 20 different labels. We sample 360 documents per class and preprocess them as above into vectors of size 10,000. 
In each experiment, the inliers are the documents from a fixed class and $c \times 360$ outliers are sampled from the other classes.

# 4.1 BENCHMARKS AND SETTING

We compare RSRAE with the following benchmarks: Local Outlier Factor (LOF) (Breunig et al., 2000), One-Class SVM (OCSVM) (Schölkopf et al., 2000; Amer et al., 2013), Isolation Forest (IF) (Liu et al., 2012), Deep Structured Energy Based Models (DSEBMs) (Zhai et al., 2016), Geometric Transformations (GT) (Golan & El-Yaniv, 2018), and Deep Autoencoding Gaussian Mixture Model (DAGMM) (Zong et al., 2018). Of those benchmarks, LOF, OCSVM and IF are traditional, yet powerful, methods for unsupervised anomaly detection that do not involve neural networks. DSEBMs, DAGMM and GT are more recent and all involve neural networks. DSEBMs is built for unsupervised anomaly detection. DAGMM and GT are designed for semi-supervised anomaly detection, but allow corruption. We use them to learn a model for the inliers and assign anomaly scores using the combined set of both inliers and outliers. GT only applies to image data. We briefly describe these methods in Appendix E.

We implemented DSEBMs, DAGMM and GT using the code from Golan & El-Yaniv (2018) with minimal modifications so that they adapt to the data described above and the available GPUs in our machine. The LOF, OCSVM and IF methods are adapted from the scikit-learn package.

We describe the structure of RSRAE as follows. For the image datasets without deep features, the encoder consists of three convolutional layers: $5 \times 5$ kernels with 32 output channels, strides 2; $5 \times 5$ kernels with 64 output channels, strides 2; and $3 \times 3$ kernels with 128 output channels, strides 2. The output of the encoder is flattened and the RSR layer transforms it into a 10-dimensional vector. That is, we fix $d = 10$ in all experiments.
The decoder consists of a dense layer that maps the output of the RSR layer into a vector of the same shape as the output of the encoder, and three deconvolutional layers: $3 \times 3$ kernels with 64 output channels, strides 2; $5 \times 5$ kernels with 32 + +output channels, strides 2; $5 \times 5$ kernels with 1 (grayscale) or 3 (RGB) output channels, strides 2. For the preprocessed document datasets or the deep features of Tiny Imagenet, the encoder is a fully connected network with size (32, 64, 128), the RSR layer linearly maps the output of the encoder to dimension 10, and the decoder is a fully connected network with size (128, 64, 32, $D$ ) where $D$ is the dimension of the input. Batch normalization is applied to each layer of the encoders and the decoders. The output of the RSR layer is $\ell_2$ -normalized before applying the decoder. For DSEBMs and DAGMM we use the same number of layers and the same dimensions in each layer for the autoencoder as in RSRAE. For each experiment, the RSRAE model is optimized with Adam using a learning rate of 0.00025 and 200 epochs. The batch size is 128 for each gradient step. The setting of training is consistent for all the neural network based methods. + +The two main hyperparameters of RSRAE are the intrinsic dimension $d$ and learning rate. Their values were fixed above. Appendix G demonstrates stability to changes in these values. + +All experiments were executed on a Linux machine with 64GB RAM and four GTX1080Ti GPUs. For all experiments with neural networks, we used TensorFlow and Keras. We report runtimes in Appendix H. + +# 4.2 RESULTS + +We summarize the precision and recall of our experiments by the AUC (area under curve) and AP (average precision) scores. For completeness, we include the definitions of these common scores in Appendix E. We compute them by considering the outliers as "positive". We remark that we did not record the precision-recall-F1 scores, as in Xia et al. (2015); Zong et al. 
(2018), since in practice it requires knowledge of the outlier ratio. + +Figs. 1 and 2 present the AUC and AP scores of RSRAE and the methods described in Section 4.1 for the datasets described above, where GT is only applied to image data without deep features. For each constant $c$ (the outlier ratio) and each method, we average the AUC and AP scores over 5 runs with different random initializations and also compute the standard deviations. For brevity of presentation, we report the averaged scores among all classes and designate the averaged standard deviations by bars. + +The results indicate that RSRAE clearly outperforms other methods in most cases, especially when $c$ is large. Indeed, the RSR layer was designed to handle large outlier ratios. For Fashion MNIST and Tiny Imagenet with deep features, IF performs similarly to RSRAE, but IF performs poorly on the document datasets. OCSVM is the closest to RSRAE for the document datasets but it is generally not so competitive for the image datasets. + +# 4.3 COMPARISON WITH VARIATIONS OF RSRAE + +We use one image dataset (Caltech 101) and one document dataset (Reuters-21578) and compare between RSRAE and three variations of it. The first one is $\mathrm{RSRAE + }$ (see Section 3) with $\lambda_{1} = \lambda_{2} = 0.1$ in (4) (these parameters were optimized on 20 Newsgroup, though results with other choices of parameters are later demonstrated in Section G.3). The next two are simpler autoencoders without RSR layers: AE-1 minimizes $L_{\mathrm{AE}}^{1}$ , the $\ell_{2,1}$ reconstruction loss; and AE minimizes $L_{\mathrm{AE}}^{2}$ , the $\ell_{2,2}$ reconstruction loss (it is a regular autoencoder for anomaly detection). We maintain the same architecture as that of RSRAE, including the matrix $\mathbf{A}$ , but use different loss functions. + +Fig. 3 reports the AUC and AP scores. 
We see that for the two datasets RSRAE+ with the prespecified $\lambda_{1}$ and $\lambda_{2}$ does not perform as well as RSRAE, but its performance is still better than AE and AE-1. This is expected since we chose $\lambda_{1}$ and $\lambda_{2}$ after a few trials with a different dataset, whereas RSRAE is independent of these parameters. The performance of AE and AE-1 is clearly worse, and they are also not as good as some of the methods compared in Section 4.2. Lastly, AE is generally comparable with AE-1. Similar results are observed for the other datasets in Appendix I.2.

# 5 RELATED THEORY FOR THE RSR PENALTY

We explain here why we find it natural to incorporate RSR within a neural network. In Section 5.1 we first review the mathematical idea of an autoencoder and discuss the robustness of a linear autoencoder with an $\ell_{2,1}$ loss (i.e., RSR loss). We then explain why a general autoencoder with an $\ell_{2,1}$ loss is not expected to be robust to outliers and why an RSR layer can improve its robustness. Section 5.2 is a first step toward extending this view to a generative network. It establishes some robustness of WGAN with a linear generator, but the extension of an RSR layer to WGAN is left as an open problem.

# 5.1 ROBUSTNESS AND RELATED PROPERTIES OF AUTOENCODERS

Mathematically, an autoencoder for a dataset $\{\mathbf{x}^{(t)}\}_{t = 1}^{N}\subset \mathbb{R}^{D}$ and a latent dimension $d < D$ is composed of an encoder $\mathcal{E}:\mathbb{R}^D\to \mathbb{R}^d$ and a decoder $\mathcal{D}:\mathbb{R}^d\to \mathbb{R}^D$ that minimize the following energy function with $p = 2$:

$$
\sum_{t = 1}^{N} \left\| \mathbf{x}^{(t)} - \mathcal{D}\circ \mathcal{E}(\mathbf{x}^{(t)}) \right\|_{2}^{p}, \tag{6}
$$

where $\circ$ denotes function composition. It is a natural nonlinear generalization of PCA (Goodfellow et al., 2016).
Indeed, in the case of a linear autoencoder, $\mathcal{E}$ and $\mathcal{D}$ are linear maps represented by matrices $\mathbf{E} \in \mathbb{R}^{d \times D}$ and $\mathbf{D} \in \mathbb{R}^{D \times d}$, respectively, that need to minimize (among such matrices) the following loss function with $p = 2$:

$$
\sum_{t = 1}^{N} \left\| \mathbf{x}^{(t)} - \mathbf{D}\mathbf{E}\mathbf{x}^{(t)} \right\|_{2}^{p}. \tag{7}
$$

We explain in Appendix D.1 that if $(\mathbf{D}^{\star},\mathbf{E}^{\star})$ is a minimizer of (7) with $p = 2$ (among $\mathbf{E}\in \mathbb{R}^{d\times D}$ and $\mathbf{D}\in \mathbb{R}^{D\times d}$), then $\mathbf{D}^{\star}\mathbf{E}^{\star}$ is the orthoprojector onto the $d$-dimensional PCA subspace. This means that the latent code $\{\mathbf{E}^{\star}\mathbf{x}^{(t)}\}_{t = 1}^{N}$ parametrizes the PCA subspace and an additional application of $\mathbf{D}^{\star}$ to $\{\mathbf{E}^{\star}\mathbf{x}^{(t)}\}_{t = 1}^{N}$ results in the projections of the data points $\{\mathbf{x}^{(t)}\}_{t = 1}^{N}$ onto the PCA subspace. The recovery error for data points on this subspace is zero (as $\mathbf{D}^{\star}\mathbf{E}^{\star}$ is the identity on this subspace), and in general, this error is the Euclidean distance to the PCA subspace, $\left\| \mathbf{x}^{(t)} - \mathbf{D}^{\star}\mathbf{E}^{\star}\mathbf{x}^{(t)}\right\|_2$.

Intuitively, the idea of a general autoencoder is the same. It aims to fit a nice structure, such as a manifold, to the data, where ideally $\mathcal{D} \circ \mathcal{E}$ is a projection onto this nice structure. This idea can only be made rigorous for data approximated by a simple geometric structure, e.g., by a graph of a sufficiently smooth function.
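The PCA characterization of the linear case is easy to verify numerically. The following sketch (our own, on synthetic centered data) builds $\mathbf{D}^{\star}\mathbf{E}^{\star}$ from the SVD and checks that it is an orthoprojector whose reconstruction error in (7) with $p = 2$ is no larger than that of a random rank-$d$ map, as the Eckart-Young theorem guarantees.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 200))             # columns are data points in R^5
X = X - X.mean(axis=1, keepdims=True)     # center, as in PCA
d = 2

# D* E* from the SVD: the orthoprojector onto the top-d left singular subspace.
U, _, _ = np.linalg.svd(X, full_matrices=False)
E_star = U[:, :d].T                        # encoder, (d, 5)
D_star = U[:, :d]                          # decoder, (5, d)
P = D_star @ E_star                        # orthoprojector onto the PCA subspace
assert np.allclose(P, P @ P)               # idempotent
assert np.allclose(P, P.T)                 # symmetric

pca_err = np.sum(np.linalg.norm(X - P @ X, axis=0) ** 2)

# Any other rank-d choice of (D, E) cannot do better (Eckart-Young).
D_rand = rng.normal(size=(5, d))
E_rand = rng.normal(size=(d, 5))
rand_err = np.sum(np.linalg.norm(X - D_rand @ E_rand @ X, axis=0) ** 2)
assert pca_err <= rand_err + 1e-9
```

Replacing the squared column norms by plain norms ($p = 1$ in (7)) turns this objective into the least absolute deviations subspace discussed next.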
+ +![](images/2a46e6ce189680e07406aba38280ba126967811a9259aeec6c18c22fc774608e.jpg) + +![](images/15ff0c50faa985ad88ee6819113fb327148cc033413e5adf02751068a15c273e.jpg) + +![](images/cd8d69c283e4217ed72d5c71efbeb8c8b8e929a1df99ec7b6774856d42df452a.jpg) +Figure 1: AUC and AP scores for RSRAE using Caltech 101 and Fashion MNIST. + +![](images/33f00f9e0ce6fc051a4a7d0dcb2acb6203c7f0cd2eebaf3af26d909e2b5852fa.jpg) + +![](images/a4826fcec5081ba03ffceb7c92545e9c7ff9b11641795f384ffc4d55fcdb430a.jpg) + +![](images/fcfbe8ee79b8b64d9360759a0cbf4534b1790d2a3951a17e41124ad099835e3b.jpg) + +![](images/8189690942703e67677934cd18752858bdee12e12aa8e82641c21e19f789071e.jpg) + +![](images/10f7c51778ac53f627706a042451b32000b7b23b9cf58b9085465b60a4cf7456.jpg) + +![](images/594420448d4286ed73e7100c038b72818486d67b215b972838907df4950b58fa.jpg) +Figure 2: AUC and AP scores for RSRAE using Tiny Imagenet with deep features, Reuters-21578 and 20 Newsgroups. + +![](images/c9bfc699419ecfa9ab123b05dfbae5b88d2f1a6741704e4f7cee4e99eecb6e5a.jpg) + +In order to extend these methods to anomaly detection, one needs to incorporate robust strategies, so that the methods can still recover the underlying structure of the inliers, and consequently assign lower recovery errors for the inliers and higher recovery errors for the outliers. For example, in the linear case, one may assume a set of inliers lying on and around a subspace and an arbitrary set of outliers (with some restriction on their fraction). PCA, and equivalently, the linear autoencoder that minimizes (7) with $p = 2$ , is not robust to general outliers. Thus it is not expected to distinguish well between inliers and outliers in this setting. As explained in Appendix D.1, minimizing (7) with $p = 1$ gives rise to the least absolute deviations subspace. This subspace can be robust to outliers under some conditions, but these conditions are restrictive (see examples in Lerman & Zhang (2014)). 
In order to deal with more adversarial outliers, it is advised to first normalize the data to the sphere (after appropriate centering) and then estimate the least absolute deviations subspace. This procedure was theoretically justified for a general setting of adversarial outliers in Maunu & Lerman (2019).

As in the linear case, an autoencoder that uses the loss function in (6) with $p = 1$ may not be robust to adversarial outliers. Unlike the linear case, there are no simple normalizations for this case. Indeed, the normalization to the sphere can completely distort the structure of an underlying manifold and it is also hard to center in this case. Furthermore, there are some obstacles to establishing robustness for the nonlinear case even under special assumptions.

![](images/187bae7bff33587968a1e13b651bc2a9f3ccdfa86bd5d6bf163e05b71e353855.jpg)

![](images/4ad489ef155829cf3e138ea86f17201dc089c8fe5a668a7d6d1995fa9f2aa337.jpg)

![](images/343ccce89ada908f762751c0c64bd486fabefec44dabb2818a22fbf9eb5cf830.jpg)
Figure 3: AUC and AP scores for RSRAE and alternative formulations using Caltech 101 and Reuters-21578.

![](images/737d4f3813d57d8aed30243cca46e6c3899368d4f1b874a4e867105db4b5be07.jpg)

Our basic idea for a robust autoencoder is to search for a latent low-dimensional code for the inliers within a larger embedding space. The additional RSR loss focuses on parametrizing the low-dimensional subspace of the encoded inliers, while being robust to outliers. Following the above discussion, we enhance such robustness by applying a normalization similar to the one discussed above, but adapted better to the structure of the network (see Section 4.1). The emphasis of the RSR layer is on appropriately encoding the inliers, where the encoding of the outliers does not matter. It is okay for the encoded outliers to lie within the subspace of the encoded inliers, as this will result in large recovery errors for the outliers.
However, in general, most encoded outliers lie away from this subspace, and this is why such a mechanism is needed (otherwise, a regular autoencoder may obtain a good embedding).

# 5.2 RELATIONSHIP OF THE RSR LOSS WITH LINEARLY GENERATED WGAN

An open problem is whether RSR can be used within other neural network structures for unsupervised learning, such as variational autoencoders (VAEs) (Kingma & Welling, 2013) and generative adversarial networks (GANs) (Goodfellow et al., 2014). The latter two models are used in anomaly detection with a score function similar to the reconstruction error (An & Cho, 2015; Vasilev et al., 2018; Zenati et al., 2018; Kliger & Fleishman, 2018).

While we do not solve this problem, we establish a natural relationship between RSR and Wasserstein-GAN (WGAN) (Arjovsky et al., 2017; Gulrajani et al., 2017) with a linear generator, which is analogous to the example of a linear autoencoder mentioned above.

Let $W_{p}$ denote the $p$-Wasserstein distance in $\mathbb{R}^{D}$ ($p \geq 1$). That is, for two probability distributions $\mu, \nu$ on $\mathbb{R}^{D}$,

$$
W_{p}(\mu, \nu) = \left(\inf_{\pi \in \Pi(\mu, \nu)} \mathbb{E}_{(\mathbf{x}, \mathbf{y}) \sim \pi} \| \mathbf{x} - \mathbf{y} \|_{2}^{p}\right)^{1/p}, \tag{8}
$$

where $\Pi(\mu, \nu)$ is the set of joint distributions with $\mu, \nu$ as marginals. We formulate the following proposition (and prove it in Appendix D.2) and then interpret it.

Proposition 5.1. Let $p \geq 1$ and $\mu$ be a Gaussian distribution on $\mathbb{R}^D$ with mean $\mathbf{m}_X \in \mathbb{R}^D$ and full-rank covariance matrix $\boldsymbol{\Sigma}_X \in \mathbb{R}^{D \times D}$ (that is, $\mu$ is $\mathcal{N}(\mathbf{m}_X, \boldsymbol{\Sigma}_X)$). Then

$$
\min_{\nu \text{ is } \mathcal{N}(\mathbf{m}_Y, \boldsymbol{\Sigma}_Y)} W_{p}(\mu, \nu) \quad \text{s.t.} \quad \mathbf{m}_Y \in \mathbb{R}^{D}, \ \operatorname{rank}(\boldsymbol{\Sigma}_Y) = d \tag{9}
$$

is achieved when $\mathbf{m}_Y = \mathbf{m}_X$ and $\boldsymbol{\Sigma}_{Y} = \mathbf{P}_{\mathcal{L}}\boldsymbol{\Sigma}_{X}\mathbf{P}_{\mathcal{L}}$, where for $X\sim \mu$,

$$
\mathcal{L} = \underset{\dim \mathcal{L} = d}{\operatorname{argmin}} \ \mathbb{E} \| X - \mathbf{P}_{\mathcal{L}} X \|_{2}^{p}. \tag{10}
$$

The setting of this proposition implicitly assumes a linear generator of WGAN. Indeed, the linear mapping, which can be represented by a $d \times D$ matrix, maps the distribution $\mathcal{N}(\mathbf{m}_X, \boldsymbol{\Sigma}_X)$ into a distribution $\mathcal{N}(\mathbf{m}_Y, \boldsymbol{\Sigma}_Y)$ and reduces the rank of the covariance matrix from $D$ to $d$. The proposition states that in this setting the underlying minimization is closely related to minimizing the loss function (3). Note that here $p \geq 1$; however, if one further corrupts the sample, then $p = 1$ is the suitable choice (Lerman & Maunu, 2018). This choice is also more appropriate for WGAN, since there is no $p$-WGAN for $p \neq 1$.

Nevertheless, training a WGAN is not exactly the same as minimizing the $W_{1}$ distance (Gulrajani et al., 2017), since it is difficult to impose the Lipschitz constraint for a neural network. Furthermore, in practice, the WGAN generator, which is a neural network, is nonlinear, and thus its output is typically non-Gaussian. The robustness of WGAN with a linear generator, which we established here, does not extend to a general WGAN (this is similar to our earlier observation that the robustness of a linear autoencoder with an RSR loss does not generalize to a nonlinear autoencoder).
We believe that a similar structure like the RSR layer has to be imposed for enhancing the robustness of WGAN, and possibly also other generative networks, but we leave its effective implementation as an open problem. + +# 6 CONCLUSION AND FUTURE WORK + +We constructed a simple but effective RSR layer within the autoencoder structure for anomaly detection. It is easy to use and adapt. We have demonstrated competitive results for image and document data and believe that it can be useful in many other applications. + +There are several directions for further exploration of the RSR loss in unsupervised deep learning models for anomaly detection. First, we are interested in theoretical guarantees for RSRAE. A more direct subproblem is understanding the geometric structure of the "manifold" learned by RSRAE. Second, it is possible that there are better geometric methods to robustly embed the manifold of inliers. For example, one may consider a multiscale incorporation of RSR layers, which we expand on in Appendix D.3. Third, one may try to incorporate an RSR layer in other neural networks for anomaly detection that use nonlinear dimension reduction. We hope that some of these methods may be easier to directly analyze than our proposed method. For example, we are curious about successful incorporation of robust metrics for GANs or WGANs. In particular, we wonder about extensions of the theory proposed here for WGAN when considering a more general setting. + +# ACKNOWLEDGMENTS + +This research has been supported by NSF award DMS18-30418. Part of this work was pursued when Dongmian Zou was a postdoctoral associate at the Institute for Mathematics and its Applications at the University of Minnesota. We thank Teng Zhang for his help with proving Proposition 5.1 (we discussed a related but different proposition with similar ideas of proofs). We thank Madeline Handschy for commenting on an earlier version of this paper. 
# REFERENCES

Mennatallah Amer, Markus Goldstein, and Slim Abdennadher. Enhancing one-class support vector machines for unsupervised anomaly detection. In Proceedings of the ACM SIGKDD Workshop on Outlier Detection and Description, pp. 8-15. ACM, 2013.
Jinwon An and Sungzoon Cho. Variational autoencoder based anomaly detection using reconstruction probability. *Special Lecture on IE*, 2(1), 2015.
Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 214-223, International Convention Centre, Sydney, Australia, 06-11 Aug 2017. PMLR. URL http://proceedings.mlr.press/v70/arjovsky17a.html.
Caglar Aytekin, Xingyang Ni, Francesco Cricri, and Emre Aksu. Clustering and unsupervised anomaly detection with $l_{2}$ normalized deep auto-encoder representations. In 2018 International Joint Conference on Neural Networks (IJCNN), pp. 1-6, July 2018. doi: 10.1109/IJCNN.2018.8489068.
Markus M Breunig, Hans-Peter Kriegel, Raymond T Ng, and Jörg Sander. LOF: identifying density-based local outliers. In ACM SIGMOD Record, volume 29(2), pp. 93-104. ACM, 2000.
Raghavendra Chalapathy, Aditya Krishna Menon, and Sanjay Chawla. Robust, deep and inductive anomaly detection. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 36-51. Springer, 2017.
Varun Chandola, Arindam Banerjee, and Vipin Kumar. Anomaly detection: A survey. ACM Computing Surveys (CSUR), 41(3):15, 2009.
Jesse Davis and Mark Goadrich. The relationship between precision-recall and ROC curves. In Proceedings of the 23rd International Conference on Machine Learning, ICML '06, pp. 233-240, New York, NY, USA, 2006. ACM. ISBN 1-59593-383-2. doi: 10.1145/1143844.1143874. URL http://doi.acm.org/10.1145/1143844.1143874.
Fernando De La Torre and Michael J Black. A framework for robust subspace learning.
International Journal of Computer Vision, 54(1-3):117-142, 2003. +Chris Ding, Ding Zhou, Xiaofeng He, and Hongyuan Zha. R1-PCA: rotational invariant $l_{1}$ -norm principal component analysis for robust subspace factorization. In Proceedings of the 23rd international conference on Machine learning, pp. 281-288. ACM, 2006. +Li Fei-Fei, Rob Fergus, and Pietro Perona. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. Computer vision and Image understanding, 106(1):59-70, 2007. +Izhak Golan and Ran El-Yaniv. Deep anomaly detection using geometric transformations. In Advances in Neural Information Processing Systems, pp. 9781-9791, 2018. +Markus Goldstein and Seiichi Uchida. A comparative evaluation of unsupervised anomaly detection algorithms for multivariate data. *PloS one*, 11(4):e0152173, 2016. +Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672-2680, 2014. +Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016. +Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of Wasserstein GANs. In Advances in Neural Information Processing Systems, pp. 5767-5777, 2017. +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European conference on computer vision, pp. 630-645. Springer, 2016. +Pan Ji, Tong Zhang, Hongdong Li, Mathieu Salzmann, and Ian Reid. Deep subspace clustering networks. In Advances in Neural Information Processing Systems, pp. 24-33, 2017. +Peter W Jones. Rectifiable sets and the traveling salesman problem. Invent Math, 102(1):1-15, 1990. +Ramakrishnan Kannan, Hyenkyun Woo, Charu C. Aggarwal, and Haesun Park. Outlier detection for text data. 
In Proceedings of the 2017 SIAM International Conference on Data Mining, pp. 489-497, 2017. URL https://epubs.siam.org/doi/abs/10.1137/1.9781611974973.55. +Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In 3rd International Conference for Learning Representations. arXiv:1412.6980, 2014. +Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. In 2nd International Conference for Learning Representations. arXiv:1312.6114, 2013. + +Mark Kliger and Shachar Fleishman. Novelty detection with GAN, 2018. URL https://openreview.net/forum?id=Hy7EPh10W. +Ken Lang. Newsweeder: Learning to filter netnews. In Proceedings of the Twelfth International Conference on Machine Learning, pp. 331-339, 1995. +Gilad Lerman and Tyler Maunu. Fast, robust and non-convex subspace recovery. Information and Inference: A Journal of the IMA, 7(2):277-336, 2017. +Gilad Lerman and Tyler Maunu. An overview of robust subspace recovery. Proceedings of the IEEE, 106(8):1380-1410, 2018. +Gilad Lerman and Teng Zhang. $l_{p}$ -recovery of the most significant subspace among multiple subspaces with outliers. Constructive Approximation, 40(3):329-385, 2014. +Gilad Lerman, Michael B McCoy, Joel A Tropp, and Teng Zhang. Robust computation of linear models by convex relaxation. Foundations of Computational Mathematics, 15(2):363-410, 2015. +David Lewis. Reuters-21578 text categorization test collection. Distribution 1.0, AT&T Labs-Research, 1997. +Jose Lezama, Qiang Qiu, Pablo Muse, and Guillermo Sapiro. OLE: Orthogonal low-rank embedding-a plug and play geometric loss for deep learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8109-8118, 2018. +Fei Tony Liu, Kai Ming Ting, and Zhi-Hua Zhou. Isolation-based anomaly detection. ACM Transactions on Knowledge Discovery from Data (TKDD), 6(1):3, 2012. +Tyler Maunu and Gilad Lerman. Robust subspace recovery with adversarial outliers. CoRR, abs/1904.03275, 2019. 
URL http://arxiv.org/abs/1904.03275. +Tyler Maunu, Teng Zhang, and Gilad Lerman. A well-tempered landscape for non-convex robust subspace recovery. arXiv preprint arXiv:1706.03896, 2017. +Michael McCoy and Joel A Tropp. Two proposals for robust PCA using semidefinite programming. Electronic Journal of Statistics, 5:1123-1160, 2011. +Randy Paffenroth, Kathleen Kay, and Les Servi. Robust PCA for anomaly detection in cyber networks. arXiv preprint arXiv:1801.01571, 2018. +Fabian Pedregosa, Gael Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake VanderPlas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Edouard Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12: 2825-2830, 2011. +Anand Rajaraman and Jeffrey David Ullman. Mining of massive datasets. Cambridge University Press, 2011. +Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211-252, 2015. +Bernhard Schölkopf, Robert C Williamson, Alex J Smola, John Shawe-Taylor, and John C Platt. Support vector method for novelty detection. In Advances in neural information processing systems, pp. 582-588, 2000. +Mei-Ling Shyu, Shu-Ching Chen, Kanoksri Sarinnapakorn, and LiWu Chang. A novel anomaly detection scheme based on principal component classifier. In Proc. ICDM Foundation and New Direction of Data Mining workshop, 2003, pp. 172-179, 2003. +Aleksei Vasilev, Vladimir Golkov, Ilona Lipp, Eleonora Sgarlata, Valentina Tomassini, Derek K Jones, and Daniel Cremers. q-space novelty detection with variational autoencoders. arXiv preprint arXiv:1806.02997, 2018. + +Namrata Vaswani and Praneeth Narayanamurthy. 
Static and dynamic robust PCA and matrix completion: A review. Proceedings of the IEEE, 106(8):1359-1379, 2018.
G. Alistair Watson. Some Problems in Orthogonal Distance and Non-Orthogonal Distance Regression. Defense Technical Information Center, 2001. URL http://books.google.com/books?id=WKKWGwAACAAJ.
John Wright, Arvind Ganesh, Shankar Rao, Yigang Peng, and Yi Ma. Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. In Advances in neural information processing systems, pp. 2080-2088, 2009.
Yan Xia, Xudong Cao, Fang Wen, Gang Hua, and Jian Sun. Learning discriminative reconstructions for unsupervised outlier removal. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1511-1519, 2015.
Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
Huan Xu, Constantine Caramanis, and Sujay Sanghavi. Robust PCA via outlier pursuit. IEEE Trans. Information Theory, 58(5):3047-3064, 2012. doi: 10.1109/TIT.2011.2173156.
Houssam Zenati, Chuan Sheng Foo, Bruno Lecouat, Gaurav Manek, and Vijay Ramaseshan Chandrasekhar. Efficient GAN-based anomaly detection, 2018. URL https://openreview.net/forum?id=BkXADmJDM.
Shuangfei Zhai, Yu Cheng, Weining Lu, and Zhongfei Zhang. Deep structured energy based models for anomaly detection. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, pp. 1100-1109, 2016.
Teng Zhang and Gilad Lerman. A novel M-estimator for robust PCA. Journal of Machine Learning Research, 15(1):749-808, 2014.
Teng Zhang, Arthur Szlam, and Gilad Lerman. Median K-flats for hybrid linear modeling with many outliers. In Computer Vision Workshops (ICCV Workshops), 2009 IEEE 12th International Conference on, pp. 234-241. IEEE, 2009.
Chong Zhou and Randy C Paffenroth. Anomaly detection with robust deep autoencoders.
In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 665-674. ACM, 2017.
Bo Zong, Qi Song, Martin Renqiang Min, Wei Cheng, Cristian Lumezanu, Daeki Cho, and Haifeng Chen. Deep autoencoding gaussian mixture model for unsupervised anomaly detection. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=BJLHbb0-.

# A DETAILS OF RSRAE AND RSRAE+

The implementations of both RSRAE and RSRAE+ are simple. For completeness, we provide their details here in algorithm boxes. The code will later be posted on a supplementary webpage. Algorithm 1 describes RSRAE, which minimizes (5) by alternating minimization. We denote the vectors of parameters of the encoder and decoder by $\theta$ and $\varphi$ , respectively.

Algorithm 1 RSRAE
Input: Data $\{\mathbf{x}^{(t)}\}_{t = 1}^{N}$ ; thresholds $\epsilon_{\mathrm{AE}}$ , $\epsilon_{\mathrm{RSR_1}}$ , $\epsilon_{\mathrm{RSR_2}}$ , $\epsilon_{\mathrm{T}}$ ; architecture and initial parameters of $\mathcal{E}$ , $\mathcal{D}$ , $\mathbf{A}$ (including number of columns of $\mathbf{A}$ ); number of epochs & batches; learning rate for backpropagation; similarity measure
Output: Labels of data points as normal or anomalous
1: for each epoch do
2: Divide input data into batches
3: for each batch do
4: if $L_{\mathrm{AE}}^{1}(\theta ,\mathbf{A},\varphi) > \epsilon_{\mathrm{AE}}$ then
5: Backpropagate $L_{\mathrm{AE}}^{1}(\theta ,\mathbf{A},\varphi)$ w.r.t. $\theta ,\mathbf{A},\varphi$ & update $\theta ,\mathbf{A},\varphi$
6: end if
7: if $L_{\mathrm{RSR_1}}^{1}(\mathbf{A}) > \epsilon_{\mathrm{RSR_1}}$ then
8: Backpropagate $L_{\mathrm{RSR_1}}^{1}(\mathbf{A})$ w.r.t. A & update A
9: end if
10: if $L_{\mathrm{RSR_2}}^{1}(\mathbf{A}) > \epsilon_{\mathrm{RSR_2}}$ then
11: Backpropagate $L_{\mathrm{RSR_2}}^{1}(\mathbf{A})$ w.r.t.
A & update A +12: end if +13: end for +14: end for +15: for $t = 1,\dots ,N$ do +16: Calculate similarity between $\mathbf{x}^{(t)}$ and $\tilde{\mathbf{x}}^{(t)}$ +17: if similarity $\geq \epsilon_{\mathrm{T}}$ then +18: $\mathbf{x}^{(t)}$ is normal +19: else +20: $\mathbf{x}^{(t)}$ is anomalous +21: end if +22: end for +23: return Normality labels for $t = 1,\ldots ,N$ + +We clarify some guidelines for choosing default parameters, which we follow in all reported experiments. We set $\epsilon_{\mathrm{AE}}$ , $\epsilon_{\mathrm{RSR}_1}$ and $\epsilon_{\mathrm{RSR}_2}$ to be zero. In general, we use networks with dense layers but for image data we use convolutional layers. We prefer using tanh as the activation function due to its smoothness. However, for a dataset that does not lie in the unit cube, we use either a ReLU function if all of its coordinates are positive, or a leaky ReLU function otherwise. The network parameters and the elements of $\mathbf{A}$ are initialized to be i.i.d. standard normal. In all numerical experiments, we set the number of columns of $\mathbf{A}$ to be 10, that is, $d = 10$ . The learning rate is chosen so that there is a sufficient improvement of the loss values after each epoch. Instead of fixing $\epsilon_{\mathrm{T}}$ , we report the AUC and AP scores for different values of $\epsilon_{\mathrm{T}}$ . + +Algorithm 2 describes RSRAE+, which minimizes (5) with fixed $\lambda_{1}$ and $\lambda_{2}$ by auto-differentiation. 
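To make this combined objective concrete, here is a minimal NumPy sketch of the forward computation of the RSRAE+ loss. This is our illustration, not the authors' code: we assume the reconstruction and RSR projection terms are sums of non-squared $\ell_2$ errors and that the orthogonality penalty takes a squared Frobenius form; in practice an autodiff framework would backpropagate this single scalar.

```python
import numpy as np

def rsrae_plus_loss(X, Z, A, X_tilde, lam1=0.1, lam2=0.1):
    """Combined RSRAE+ objective: L_AE + lam1 * L_RSR1 + lam2 * L_RSR2.

    X:       (N, D_in) inputs
    Z:       (N, D) encoder outputs
    A:       (d, D) RSR matrix
    X_tilde: (N, D_in) reconstructions
    """
    # L_AE: sum of (non-squared) l2 reconstruction errors.
    l_ae = np.linalg.norm(X - X_tilde, axis=1).sum()
    # L_RSR1: sum of l2 projection residuals ||z - A^T A z||_2.
    l_rsr1 = np.linalg.norm(Z - Z @ A.T @ A, axis=1).sum()
    # L_RSR2 (assumed form): squared Frobenius distance of A A^T from I_d.
    l_rsr2 = np.linalg.norm(A @ A.T - np.eye(A.shape[0])) ** 2
    return l_ae + lam1 * l_rsr1 + lam2 * l_rsr2
```

With a perfect reconstruction, latent codes on the subspace, and $\mathbf{A}$ having orthonormal rows, all three terms vanish; any deviation increases the single loss that the algorithm backpropagates.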
Algorithm 2 RSRAE+
Input: Data $\{\mathbf{x}^{(t)}\}_{t = 1}^{N}$ ; thresholds $\epsilon_{\mathrm{AE}}$ , $\epsilon_{\mathrm{T}}$ ; architecture and initial parameters of $\mathcal{E}$ , $\mathcal{D}$ , $\mathbf{A}$ (including number of columns of $\mathbf{A}$ ); parameters of the energy function $\lambda_1$ , $\lambda_2$ ; number of epochs & batches; learning rate for backpropagation; similarity measure
Output: Labels of data points as normal or anomalous
1: for each epoch do
2: Divide input data into batches
3: for each batch do
4: if $L_{\mathrm{AE}}^{1}(\theta, \mathbf{A}, \varphi) > \epsilon_{\mathrm{AE}}$ then
5: Backpropagate $L_{\mathrm{AE}}^{1}(\theta, \mathbf{A}, \varphi) + \lambda_{1}L_{\mathrm{RSR}_{1}}^{1}(\mathbf{A}) + \lambda_{2}L_{\mathrm{RSR}_{2}}^{1}(\mathbf{A})$ w.r.t. $\theta, \mathbf{A}, \varphi$ & update
6: end if
7: end for
8: end for
9: for $t = 1, \dots, N$ do
10: Calculate similarity between $\mathbf{x}^{(t)}$ and $\tilde{\mathbf{x}}^{(t)}$
11: if similarity $\geq \epsilon_{\mathrm{T}}$ then
12: $\mathbf{x}^{(t)}$ is normal
13: else
14: $\mathbf{x}^{(t)}$ is anomalous
15: end if
16: end for
17: return Normality labels for $t = 1, \dots, N$

# B DEMONSTRATION OF RSRAE FOR ARTIFICIAL DATA

To illustrate the performance of RSRAE, in comparison with a regular autoencoder, we consider a simple artificial geometric example. We assume corrupted data whose normal part is embedded in a "Swiss roll manifold", which is a two-dimensional manifold in $\mathbb{R}^3$ . More precisely, the normal part is obtained by mapping 1,000 points uniformly sampled from the rectangle $[3\pi /2,9\pi /2]\times [0,21]$ into $\mathbb{R}^3$ by the function

$$
(s, t) \mapsto (t \cos (t), s, t \sin (t)). \tag {11}
$$

The anomalous part is obtained by i.i.d. sampling of 500 points from an isotropic Gaussian distribution in $\mathbb{R}^3$ with zero mean and standard deviation 2 in any direction. Fig.
4a illustrates such a sample, where the inliers are in black and the outliers are in blue. We remark that Fig. 5a shows an identical sample.

We construct the RSRAE with the following structure. The encoder is composed of fully connected layers of sizes (32, 64, 128). The decoder is composed of fully connected layers of sizes (128, 64, 32, 3). Each fully connected layer is activated by the leaky ReLU function with $\alpha = 0.2$ . The intrinsic dimension for the RSR layer, that is, the number of columns of $\mathbf{A}$ , is $d = 2$ .

For comparison, we construct the regular autoencoder AE (see Section 4.3). Recall that both models have the same architecture (including the linear map $\mathbf{A}$ ), but AE minimizes the $\ell_2$ loss function in (6) (with $p = 2$ ) without an additional RSR loss. We optimize both models for 10,000 epochs with batch gradient descent using Adam (Kingma & Ba, 2014) with a learning rate of 0.01.

The reconstructed data $(\tilde{\mathbf{X}})$ using RSRAE and AE are plotted in Figs. 4d and 5d, respectively. We further demonstrate the output obtained by the encoder and the RSR layer. The output of the encoder, $\mathbf{Z} = \mathcal{E}(\mathbf{X})$ , lies in $\mathbb{R}^{128}$ . For visualization purposes we project it onto $\mathbb{R}^3$ as follows. We first find two vectors that span the image of $\mathbf{A}$ and add to them the "principal direction" of $\mathbf{Z}$ orthogonal to the span of $\mathbf{A}$ . We then project $\mathbf{Z}$ onto the span of these 3 vectors. Figs. 4b and 5b show these projections for RSRAE and AE, respectively. Figs. 4c and 5c demonstrate the respective mappings of $\mathbf{Z}$ by $\mathbf{A}$ during the RSR layer.

Figs. 4d and 5d imply that the set of reconstructed normal points in RSRAE seem to lie on the original manifold, whereas the reconstructed normal points by AE seem to only lie near, but often
More importantly, the anomalous points reconstructed by RSRAE seem to be sufficiently far from the set of original anomalous points, unlike the reconstructed points by AE. Therefore, RSRAE can better distinguish anomalies using the distance between the original and reconstructed points, where small values are obtained for normal points and large ones for anomalous ones. Fig. 6 demonstrates this claim. They plot the histograms of the distance between the original and reconstructed points when applying RSRAE and AE, where distances for normal and anomalous points are distinguished by color. Clearly, RSRAE distinguishes normal and anomalous data better than AE. + +![](images/e3c1e3f67037e2e26c9f465fc17b037565ceb656e28a37408f4ae6c1dd608b4f.jpg) +(a) Input data X + +![](images/6c3f490d470b9f359cdb7562adaa64ab44d3f696dcd87c1733afc39c9e4fb887.jpg) +Figure 4: Demonstration of the output of the encoder, RSR layer and decoder of RSRAE on a corrupted Swiss roll dataset. + +![](images/972af7bd658af251f4477b71d02ee452be9ac5c433f9aa7d019ef99886404ade.jpg) +(a) Input data $\mathbf{X}$ + +![](images/c47e9c7d6ead16fb8d5dc04571757a64658d7a31034637aed842b694d3e43236.jpg) +Figure 5: Demonstration of the output of the encoder, mapping by $\mathbf{A}$ , and decoder of AE on a corrupted Swiss roll dataset. + +![](images/7640e14a68a886ec520044b680e683334204ea03484d1563186b397e3ad8e5be.jpg) +(a) Error distribution for RSRAE. + +![](images/0988ad569f1278deee35e146f508e7ecd40d8378cc6889fd1531e6878c2df699.jpg) +(b) Error distribution for AE. +Figure 6: Demonstration of the reconstruction error distribution for RSRAE and AE. + +# C FURTHER DISCUSSION OF THE RSR TERM + +The RSR energy in (4) includes two different terms. The proposition below indicates that the second term of (4) is zero when plugging into it the solution of the minimization of the first term of (4) with the additional requirement that $\mathbf{A}$ has full rank. 
That is, in theory, one may minimize only the first term of (4) over the set of matrices $\mathbf{A} \in \mathbb{R}^{d \times D}$ with full rank. We then discuss computational issues of this different minimization.

Proposition C.1. Assume that $\{\mathbf{z}^{(t)}\}_{t = 1}^{N}\subset \mathbb{R}^{D}$ spans $\mathbb{R}^D$ , that $d\leq D$ , and let

$$
\mathbf{A}^{\star} = \underset{\substack{\mathbf{A} \in \mathbb{R}^{d \times D} \\ \operatorname{rank}(\mathbf{A}) = d}}{\operatorname{argmin}} \sum_{t = 1}^{N} \left\| \mathbf{z}^{(t)} - \mathbf{A}^{\mathrm{T}} \mathbf{A} \mathbf{z}^{(t)} \right\|_{2}. \tag {12}
$$

Then $\mathbf{A}^{\star}\mathbf{A}^{\star \mathrm{T}} = \mathbf{I}_d$ .

Proof. Let $\mathbf{A}^{\star}$ be an optimizer of (12) and $\mathbf{P}^{\star}$ denote the orthogonal projection onto the range of $\mathbf{A}^{\star\mathrm{T}}\mathbf{A}^{\star}$ . Note that $\mathbf{P}^{\star}$ can be written as $\tilde{\mathbf{A}}^{\mathrm{T}}\tilde{\mathbf{A}}$ , where $\tilde{\mathbf{A}}$ is a $d\times D$ matrix whose rows form an orthonormal basis of the range of $\mathbf{P}^{\star}$ . Therefore, since $\tilde{\mathbf{A}}$ is feasible for (12) and $\mathbf{A}^{\star}$ is an optimum,

$$
\sum_{t = 1}^{N} \left\| \mathbf{z}^{(t)} - \mathbf{P}^{\star} \mathbf{z}^{(t)} \right\|_{2} \geq \sum_{t = 1}^{N} \left\| \mathbf{z}^{(t)} - \mathbf{A}^{\star \mathrm{T}} \mathbf{A}^{\star} \mathbf{z}^{(t)} \right\|_{2}. \tag {13}
$$

On the other hand, the definition of orthogonal projection implies that

$$
\left\| \mathbf{z}^{(t)} - \mathbf{P}^{\star} \mathbf{z}^{(t)} \right\|_{2} \leq \left\| \mathbf{z}^{(t)} - \mathbf{A}^{\star \mathrm{T}} \mathbf{A}^{\star} \mathbf{z}^{(t)} \right\|_{2}, \quad t = 1, \dots , N. \tag {14}
$$

Summing (14) over $t$ and comparing with (13) shows that equality holds in (13), and hence in (14) for every $t$ .
This equality and the fact that $\mathbf{P}^{\star}$ is a projection onto the range of $\mathbf{A}^{\star \mathrm{T}}\mathbf{A}^{\star}$ imply that

$$
\mathbf{P}^{\star} \mathbf{z}^{(t)} = \mathbf{A}^{\star \mathrm{T}} \mathbf{A}^{\star} \mathbf{z}^{(t)}, \quad t = 1, \dots , N. \tag {15}
$$

Since $\{\mathbf{z}^{(t)}\}_{t = 1}^{N}$ spans $\mathbb{R}^D$ , (15) results in

$$
\mathbf{P}^{\star} = \mathbf{A}^{\star \mathrm{T}} \mathbf{A}^{\star}, \tag {16}
$$

which further implies that

$$
\mathbf{A}^{\star} \mathbf{A}^{\star \mathrm{T}} \mathbf{A}^{\star} = \mathbf{A}^{\star} \mathbf{P}^{\star} = \mathbf{A}^{\star}. \tag {17}
$$

Combining this observation $(\mathbf{A}^{\star}\mathbf{A}^{\star \mathrm{T}}\mathbf{A}^{\star} = \mathbf{A}^{\star})$ with the constraint that $\mathbf{A}^{\star}$ has full rank, we conclude that $\mathbf{A}^{\star}\mathbf{A}^{\star \mathrm{T}} = \mathbf{I}_d$ .

The minimization in (12) is nonconvex and intractable. Nevertheless, Lerman & Maunu (2017) propose a heuristic for solving it with some weak guarantees, and Maunu et al. (2017) propose an algorithm with guarantees under some conditions. However, such a minimization is even more difficult when applied to the combined energy in (5), instead of (4). Therefore, we find it necessary to include the second term in (4), which imposes the nearness of $\mathbf{A}^{\mathrm{T}}\mathbf{A}$ to an orthogonal projection (equivalently, of $\mathbf{A}\mathbf{A}^{\mathrm{T}}$ to the identity).

# D MORE ON RELATED THEORY FOR THE RSR PENALTY

In Section D.1 we characterize the solution of (7) via a subspace problem. Special case solutions to this problem include both the PCA subspace and the least absolute deviations subspace. In Section D.2 we prove Proposition 5.1. In Section D.3 we review some pure mathematical work that we find relevant to this discussion.
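Proposition C.1 can be illustrated numerically. In the outlier-free toy case below (the dimensions are arbitrary), the latent vectors lie exactly on a $d$-dimensional subspace, so the top $d$ right singular vectors give a minimizer $\mathbf{A}$ of (12): the residuals vanish and, as the proposition asserts, $\mathbf{A}\mathbf{A}^{\mathrm{T}} = \mathbf{I}_d$. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
D, d, N = 8, 3, 200

# Latent vectors z^(t) lying exactly on a d-dimensional subspace of R^D.
basis = np.linalg.qr(rng.standard_normal((D, d)))[0]  # orthonormal, D x d
Z = rng.standard_normal((N, d)) @ basis.T             # N x D data matrix

# On outlier-free subspace data, the top-d right singular vectors are a
# minimizer A of (12): every residual z - A^T A z is zero.
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
A = Vt[:d]                                            # d x D, orthonormal rows

residuals = np.linalg.norm(Z - Z @ A.T @ A, axis=1)   # all ~ 0
orth_gap = np.linalg.norm(A @ A.T - np.eye(d))        # ~ 0, i.e. A A^T = I_d
```

The SVD yields a minimizer here only because the data contains no outliers; with outliers, the robust formulation (12) and the PCA subspace generally differ, which is the point of the RSR penalty.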
# D.1 PROPERTY OF LINEAR AUTOENCODERS

The following proposition expresses the solution of (7) in terms of another minimization problem. After proving it, we clarify that the other minimization problem is related to both PCA and RSR.

Proposition D.1. Let $p \geq 1$ , $d < D$ , and $\{\mathbf{x}^{(t)}\}_{t=1}^{N} \subset \mathbb{R}^{D}$ be a dataset with rank at least $d$ . If $(\mathbf{D}^{\star}, \mathbf{E}^{\star}) \in \mathbb{R}^{D \times d} \times \mathbb{R}^{d \times D}$ is a minimizer of (7), then

$$
\mathbf{D}^{\star} \mathbf{E}^{\star} = \mathbf{P}^{\star}, \tag {18}
$$

where $\mathbf{P}^{\star}\in \mathbb{R}^{D\times D}$ is a minimizer of

$$
\sum_{t = 1}^{N} \left\| \mathbf{x}^{(t)} - \mathbf{P}\mathbf{x}^{(t)} \right\|_{2}^{p}, \tag {19}
$$

among all orthoprojectors $\mathbf{P}$ (that is, matrices with $\mathbf{P} = \mathbf{P}^{\mathrm{T}}$ and $\mathbf{P}^2 = \mathbf{P}$ ) of rank $d$ .

Proof. Let $\mathbf{P}^{\diamond}$ be a minimizer of (19) and $(\mathbf{D}^{\star},\mathbf{E}^{\star})$ be a minimizer of (7). Since $\mathbf{P}^{\diamond}$ is an orthoprojector of rank $d$ , it can be written as $\mathbf{P}^{\diamond} = \mathbf{U}^{\diamond}\mathbf{U}^{\diamond \mathrm{T}}$ , where $\mathbf{U}^{\diamond}\in \mathbb{R}^{D\times d}$ has orthonormal columns, and thus

$$
\sum_{t = 1}^{N} \left\| \mathbf{x}^{(t)} - \mathbf{D}^{\star} \mathbf{E}^{\star} \mathbf{x}^{(t)} \right\|_{2}^{p} \leq \sum_{t = 1}^{N} \left\| \mathbf{x}^{(t)} - \mathbf{U}^{\diamond} \mathbf{U}^{\diamond \mathrm{T}} \mathbf{x}^{(t)} \right\|_{2}^{p} = \sum_{t = 1}^{N} \left\| \mathbf{x}^{(t)} - \mathbf{P}^{\diamond} \mathbf{x}^{(t)} \right\|_{2}^{p}. \tag {20}
$$

Let $\mathcal{L}$ denote the column space of $\mathbf{D}^{\star}\mathbf{E}^{\star}$ .
Then by the property of orthogonal projections,

$$
\left\| \mathbf{x}^{(t)} - \mathbf{D}^{\star} \mathbf{E}^{\star} \mathbf{x}^{(t)} \right\|_{2} \geq \left\| \mathbf{x}^{(t)} - \mathbf{P}_{\mathcal{L}} \mathbf{x}^{(t)} \right\|_{2} \quad \text{for } 1 \leq t \leq N \tag {21}
$$

and consequently

$$
\sum_{t = 1}^{N} \left\| \mathbf{x}^{(t)} - \mathbf{D}^{\star} \mathbf{E}^{\star} \mathbf{x}^{(t)} \right\|_{2}^{p} \geq \sum_{t = 1}^{N} \left\| \mathbf{x}^{(t)} - \mathbf{P}_{\mathcal{L}} \mathbf{x}^{(t)} \right\|_{2}^{p} \geq \sum_{t = 1}^{N} \left\| \mathbf{x}^{(t)} - \mathbf{P}^{\diamond} \mathbf{x}^{(t)} \right\|_{2}^{p}. \tag {22}
$$

The combination of (20) and (22) yields the following two equalities:

$$
\sum_{t = 1}^{N} \left\| \mathbf{x}^{(t)} - \mathbf{P}_{\mathcal{L}} \mathbf{x}^{(t)} \right\|_{2}^{p} = \sum_{t = 1}^{N} \left\| \mathbf{x}^{(t)} - \mathbf{P}^{\diamond} \mathbf{x}^{(t)} \right\|_{2}^{p}, \tag {23}
$$

$$
\sum_{t = 1}^{N} \left\| \mathbf{x}^{(t)} - \mathbf{D}^{\star} \mathbf{E}^{\star} \mathbf{x}^{(t)} \right\|_{2}^{p} = \sum_{t = 1}^{N} \left\| \mathbf{x}^{(t)} - \mathbf{P}_{\mathcal{L}} \mathbf{x}^{(t)} \right\|_{2}^{p}. \tag {24}
$$

We note that (23) implies that $\mathbf{P}_{\mathcal{L}}$ is a minimizer of (19) (among all rank $d$ orthoprojectors). We further note that (21) and (24) yield that for all $1 \leq t \leq N$

$$
\left\| \mathbf{x}^{(t)} - \mathbf{D}^{\star} \mathbf{E}^{\star} \mathbf{x}^{(t)} \right\|_{2} = \left\| \mathbf{x}^{(t)} - \mathbf{P}_{\mathcal{L}} \mathbf{x}^{(t)} \right\|_{2}.
\tag {25}
$$

Since $\mathbf{D}^{\star}\mathbf{E}^{\star}\mathbf{x}^{(t)}\in \mathcal{L}$ and $\mathbf{P}_{\mathcal{L}}$ is an orthoprojector, we conclude from (25) that

$$
\mathbf{D}^{\star} \mathbf{E}^{\star} \mathbf{x}^{(t)} = \mathbf{P}_{\mathcal{L}} \mathbf{x}^{(t)} \quad \text{for } 1 \leq t \leq N. \tag {26}
$$

We note that the definition of $(\mathbf{D}^{\star},\mathbf{E}^{\star})$ implies that $\mathcal{L}$ (which is the column space of $\mathbf{D}^{\star}\mathbf{E}^{\star}$ ) is contained in the span of $\{\mathbf{x}^{(t)}\}_{t = 1}^{N}$ . We also recall that the dimension of the span of $\{\mathbf{x}^{(t)}\}_{t = 1}^{N}$ is at least the dimension of $\mathcal{L}$ , that is, $d$ . Combining the latter facts with (26), we obtain that $\mathbf{D}^{\star}\mathbf{E}^{\star} = \mathbf{P}_{\mathcal{L}}$ . This and the fact that $\mathbf{P}_{\mathcal{L}}$ is a minimizer of (19) (which was derived from (23)) conclude (18).

Note that when $p = 2$ , the energy function in (19) corresponds to PCA. More precisely, a minimizer $\mathbf{P}^{\star}$ of (19) (among rank $d$ orthoprojectors) is an orthoprojector onto a $d$ -dimensional PCA subspace, equivalently, a subspace spanned by the top $d$ eigenvectors of the sample covariance (we assume for simplicity a linear, and not affine, autoencoder, so the PCA subspace is linear; accordingly, when $p = 2$ we assume the data is centered at the origin). This minimizer is unique if and only if the $d$ -th eigenvalue of the sample covariance is larger than the $(d + 1)$ -st eigenvalue. These elementary facts are reviewed in Section II-A of Lerman & Maunu (2018).

When $p = 1$ , the minimizer $\mathbf{P}^{\star}$ of (19) (among rank $d$ orthoprojectors) is an orthoprojector onto the $d$ -dimensional least absolute deviations subspace. This subspace is reviewed in Section II-D of Lerman & Maunu (2018) as a common approach for RSR.
The minimizer is often not unique; necessary and sufficient conditions for local minima of (19) are studied in Lerman & Zhang (2014).

# D.2 PROOF OF PROPOSITION 5.1

Proof. We denote the subspace $\mathcal{L}$ in the left hand side of (10) by $\mathcal{L}^{\star}$ in order to distinguish it from the generic notation $\mathcal{L}$ for subspaces. Consider the random variable $X\sim \mu$ , where $\mu$ is $\mathcal{N}(\mathbf{m}_X,\pmb {\Sigma}_X)$ . Fix $\pi \in \Pi (\mu ,\nu)$ . We note that

$$
\begin{array}{l}
\mathbb{E}_{(X, Y) \sim \pi} \| X - Y \|_{2}^{p} \\
= \int_{\mathbb{R}^{D}} \int_{\mathbb{R}^{D}} \| \mathbf{x} - \mathbf{y} \|_{2}^{p} \, \pi(\mathbf{x}, \mathbf{y}) \, \mathrm{d}\mathbf{x} \, \mathrm{d}\mathbf{y} \\
\geq \min_{\dim \mathcal{L} = d} \int_{\mathbb{R}^{D}} \operatorname{dist}(\mathbf{x}, \mathcal{L})^{p} \int_{\mathbb{R}^{D}} \pi(\mathbf{x}, \mathbf{y}) \, \mathrm{d}\mathbf{y} \, \mathrm{d}\mathbf{x} \tag {27} \\
= \min_{\dim \mathcal{L} = d} \int_{\mathbb{R}^{D}} \operatorname{dist}(\mathbf{x}, \mathcal{L})^{p} \, \mu(\mathbf{x}) \, \mathrm{d}\mathbf{x} \\
= \min_{\dim \mathcal{L} = d} \mathbb{E} \| X - \mathbf{P}_{\mathcal{L}} X \|_{2}^{p}.
\end{array}
$$

The inequality in (27) holds since $X$ is fixed and $Y$ satisfies $(X,Y)\sim \pi$ , so the distribution of $Y$ is $\mathcal{N}(\mathbf{m}_Y,\mathbf{\Sigma}_Y)$ . Therefore, almost surely, $Y$ takes values in the $d$ -dimensional affine subspace $\{\mathbf{y}\in \mathbb{R}^D:\mathbf{y} - \mathbf{m}_Y\in \mathrm{range}(\mathbf{\Sigma}_Y)\}$ . Furthermore, we note that equality in (27) is achieved when $Y = \mathbf{P}_{\mathcal{L}^{\star}}X$ .

We conclude the proof by showing that

$$
\mathbf{m}_{X} \in \mathcal{L}^{\star}.
\tag {28}
$$

Indeed, (28) implies that the orthogonal projection of $X \sim \mathcal{N}(\mathbf{m}_X, \boldsymbol{\Sigma}_X)$ onto $\mathcal{L}^{\star}$ results in a random variable with distribution $\nu$ , which is $\mathcal{N}(\mathbf{m}_X, \mathbf{P}_{\mathcal{L}^{\star}} \boldsymbol{\Sigma}_X \mathbf{P}_{\mathcal{L}^{\star}})$ . By the above observation about the optimality of $Y = \mathbf{P}_{\mathcal{L}^{\star}} X$ , the density of this distribution is the optimal solution of (9).

To prove (28), we assume without loss of generality that $\mathbf{m}_X = \mathbf{0}$ . Denote the orthogonal projection of the origin onto the affine subspace $\mathcal{L}^{\star}$ by $\mathbf{m}_{\mathcal{L}^{\star}}$ and let $\mathcal{L}_0 = \mathcal{L}^{\star} - \mathbf{m}_{\mathcal{L}^{\star}}$ . We need to show that $\mathcal{L}^{\star} = \mathcal{L}_0$ , or equivalently, $\mathbf{m}_{\mathcal{L}^{\star}} = \mathbf{0}$ . We note that $\mathcal{L}_0$ is a linear subspace and $\mathbf{m}_{\mathcal{L}^{\star}}$ is orthogonal to $\mathcal{L}_0$ , and thus there exists a rotation matrix $\mathbf{O}$ such that

$$
\mathbf{O}\mathcal{L}_{0} = \left\{\left(0, \dots , 0, z_{D - d + 1}, \dots , z_{D}\right): z_{D - d + 1}, \dots , z_{D} \in \mathbb{R} \right\}, \tag {29}
$$

and

$$
\mathbf{O}\mathbf{m}_{\mathcal{L}^{\star}} = \left(m_{1}, \dots , m_{D - d}, 0, \dots , 0\right). \tag {30}
$$

For any $\mathbf{x} \in \mathbb{R}^D$ we note that $\mu(\mathbf{x}) = \mu(-\mathbf{x})$ , since $\mu$ is a Gaussian centered at the origin.
Using this observation, other basic observations, and the notation $\mathbf{O}\mathbf{x} = (x_1', \dots, x_D')$ , we obtain that

$$
\begin{array}{l}
\operatorname{dist}(\mathbf{x}, \mathcal{L}^{\star})^{p} \mu(\mathbf{x}) + \operatorname{dist}(-\mathbf{x}, \mathcal{L}^{\star})^{p} \mu(-\mathbf{x}) \\
= \left(\operatorname{dist}(\mathbf{x}, \mathcal{L}^{\star})^{p} + \operatorname{dist}(-\mathbf{x}, \mathcal{L}^{\star})^{p}\right) \mu(\mathbf{x}) \\
= \left(\operatorname{dist}(\mathbf{O}\mathbf{x}, \mathbf{O}\mathcal{L}^{\star})^{p} + \operatorname{dist}(-\mathbf{O}\mathbf{x}, \mathbf{O}\mathcal{L}^{\star})^{p}\right) \mu(\mathbf{x}) \\
= \left(\left(\sum_{i = 1}^{D - d} \left(x_{i}' - m_{i}\right)^{2}\right)^{p/2} + \left(\sum_{i = 1}^{D - d} \left(-x_{i}' - m_{i}\right)^{2}\right)^{p/2}\right) \mu(\mathbf{x})
\end{array}
$$

$$
\begin{array}{l}
= \left(\left(\sum_{i = 1}^{D - d} \left(x_{i}' - m_{i}\right)^{2}\right)^{p/2} + \left(\sum_{i = 1}^{D - d} \left(x_{i}' + m_{i}\right)^{2}\right)^{p/2}\right) \mu(\mathbf{x}) \\
\geq 2 \left(\sum_{i = 1}^{D - d} x_{i}'^{2}\right)^{p/2} \mu(\mathbf{x}) \tag {31} \\
= 2 \operatorname{dist}(\mathbf{O}\mathbf{x}, \mathbf{O}\mathcal{L}_{0})^{p} \mu(\mathbf{x}) \\
= 2 \operatorname{dist}(\mathbf{x}, \mathcal{L}_{0})^{p} \mu(\mathbf{x}) \\
= \left(\operatorname{dist}(\mathbf{x}, \mathcal{L}_{0})^{p} + \operatorname{dist}(-\mathbf{x}, \mathcal{L}_{0})^{p}\right) \mu(\mathbf{x}) \\
= \operatorname{dist}(\mathbf{x}, \mathcal{L}_{0})^{p} \mu(\mathbf{x}) + \operatorname{dist}(-\mathbf{x}, \mathcal{L}_{0})^{p} \mu(-\mathbf{x}).
\\ \end{array}
$$

The inequality in (31) follows from the fact that for $p \geq 1$ , the function $\| \cdot \|_2^p$ is convex, as it is a composition of the convex function $\| \cdot \|_2 : \mathbb{R}^{D-d} \to \mathbb{R}_+$ and the increasing convex function $(\cdot)^p : \mathbb{R}_+ \to \mathbb{R}_+$ . Equality is achieved in (31) if $m_i = 0$ for $i = 1, \dots, D - d$ , that is, if $\mathcal{L}^\star = \mathcal{L}_0$ .

Integrating the left- and right-hand sides of (31) over $\mathbb{R}^D$ results in

$$
\int_{\mathbb{R}^{D}} \operatorname{dist}(\mathbf{x}, \mathcal{L}^{\star})^{p} \mu(\mathbf{x}) \, \mathrm{d}\mathbf{x} \geq \int_{\mathbb{R}^{D}} \operatorname{dist}(\mathbf{x}, \mathcal{L}_{0})^{p} \mu(\mathbf{x}) \, \mathrm{d}\mathbf{x}. \tag {32}
$$

Since $\mathcal{L}^{\star}$ minimizes $\int_{\mathbb{R}^D}\operatorname{dist}(\mathbf{x},\mathcal{L})^{p}\mu(\mathbf{x})\,\mathrm{d}\mathbf{x} = \mathbb{E}\|X - \mathbf{P}_{\mathcal{L}}X\|_2^{p}$ among all affine subspaces of dimension $d$ , equality is obtained in (32). Consequently, equality is obtained, almost everywhere, in (31). Therefore, $\mathcal{L}^{\star} = \mathcal{L}_0$ and the claim is proved.

# D.3 RELEVANT MATHEMATICAL THEORY

We note that a complex network can represent a large class of functions. Consequently, for a sufficiently complex network, minimizing the loss function in (6) results in a minimum value of zero. In this case the minimizing "manifold" contains the original data, including the outliers. On the other hand, the RSR loss term imposes fitting a subspace that robustly fits only part of the data and thus cannot result in a minimum value of zero. Nevertheless, imposing a subspace constraint might be too restrictive, even in the latent space. A seminal work by Jones (1990) studies optimal types of curves that contain general sets.
This work relates the construction and optimal properties of these curves to multiscale approximation of the underlying set by lines. It was generalized to higher dimensions in (?) and to a setting relevant to outliers in (?). These works suggest loss functions that incorporate several linear RSR layers from different scales. Nevertheless, their pure setting does not directly apply to ours. We have also noticed various technical difficulties when trying to directly implement these ideas in our setting. + +# E BRIEF DESCRIPTION OF THE BASELINES AND METRICS + +We first clarify the methods used as baselines in Section 4. + +Local Outlier Factor (LOF) measures the local deviation of a given data point with respect to its neighbors. If the LOF of a data point is too large, the point is declared an outlier. + +One-Class SVM (OCSVM) learns a margin for a class of data. Since outliers contribute less than the normal class, it also applies to the unsupervised setting (Goldstein & Uchida, 2016). It is usually applied with a non-linear kernel. + +Isolation Forest (IF) determines outliers by looking at the number of splits needed to isolate a sample. It constructs random decision trees. A short path length for separating a data point implies a higher probability that the point is an outlier. + +Geometric Transformations (GT) applies a variety of geometric transformations to input images and consequently creates a self-labeled dataset, where the labels are the types of transformations. Its anomaly detection is based on a Dirichlet normality score computed from the softmax output of a classification network for these labels. + +Deep Structured Energy-Based Models (DSEBMs) output an energy function, which is the negative log probability that a sample follows the data distribution. The energy-based model is connected to an autoencoder to avoid the need for complex sampling methods.
+ +Deep Autoencoding Gaussian Mixture Model (DAGMM) is also a deep autoencoder model. It optimizes an end-to-end structure that contains both an autoencoder and an estimator for a Gaussian mixture model. Anomalies are detected after modeling the density function of this Gaussian mixture model. + +Next, we review the definitions of the two metrics that we used: the AUC and AP scores (Davis & Goadrich, 2006). In computing these metrics we identify the outliers as "positive". + +AUC (area-under-curve) is the area under the Receiver Operating Characteristic (ROC) curve. Recall that the True Positive Rate (TPR), or Recall, is the number of samples correctly labeled as positive divided by the total number of actual positive samples. The False Positive Rate (FPR), on the other hand, is the number of negative samples incorrectly labeled as positive divided by the total number of actual negative samples. The ROC curve is a graph of TPR as a function of FPR. It is drawn by recording values of FPR and TPR for different choices of $\epsilon_{\mathrm{T}}$ in Algorithm 1. + +AP (average-precision) is the area under the Precision-Recall curve. While Recall is the TPR, Precision is the number of samples correctly labeled as positive divided by the total number of predicted positives. The Precision-Recall curve is the graph of Precision as a function of Recall. It is drawn by recording values of Precision and Recall for different choices of $\epsilon_{\mathrm{T}}$ in Algorithm 1. + +Both AUC and AP can be computed using the corresponding functions in the scikit-learn package (Pedregosa et al., 2011). + +# F COMPARISON WITH RSR AND RCAE + +We demonstrate basic properties of our framework by comparing it to two different frameworks. The first framework is direct RSR, which tries to model the inliers by a low-dimensional subspace, as opposed to the nonlinear model discussed here.
Based on the careful comparison of RSR methods in Lerman & Maunu (2018), we use the Fast Median Subspace (FMS) algorithm (Lerman & Maunu, 2017) and its normalized version, the Spherical FMS (SFMS). The other framework can be viewed as a nonlinear version of RPCA, instead of RSR. It assumes sparse elementwise corruption of the data matrix, instead of corruption of whole data points, or equivalently, of some columns of the data matrix. For this purpose we use the Robust Convolutional Autoencoder (RCAE) algorithm of Chalapathy et al. (2017), who advocate it as an "extension of robust PCA to allow for a nonlinear manifold that explains most of the data". We adopt the same network structures as in Section 4.1. + +Fig. 7 reports comparisons of RSRAE, FMS, SFMS and RCAE on the datasets used in Section 4.2. We first note that both FMS and SFMS are not effective for the datasets we have been using. That is, the inliers in these datasets are not well-approximated by a linear model. It is also interesting to notice that without normalization to the sphere, FMS can be much worse than SFMS. That is, SFMS is often far more robust to outliers than FMS. This observation, and the fact that there are no obvious normalization procedures for a general autoencoder (see Section 5), clarifies why the mere use of the $L_{\mathrm{AE}}^{1}$ loss for an autoencoder is not expected to be robust enough to outliers. + +Comparing with RSRAE, we note that RCAE is not a competitive method for these datasets. This is not surprising since the model of RCAE, which assumes sparse elementwise corruption, does not fit the problem of anomaly detection well; it is better suited to other problems, such as background detection.
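For reference, the two scores defined in Appendix E can be computed with scikit-learn's `roc_auc_score` and `average_precision_score` (Pedregosa et al., 2011). The labels and anomaly scores below are toy values, not results from the paper:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

# Toy example: 1 marks an outlier ("positive"), and a higher score means
# the sample is considered more anomalous.
y_true = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])

auc = roc_auc_score(y_true, scores)           # area under the ROC curve
ap = average_precision_score(y_true, scores)  # area under the Precision-Recall curve
print(f"AUC = {auc:.3f}, AP = {ap:.3f}")
```

Sweeping the threshold $\epsilon_{\mathrm{T}}$ of Algorithm 1 traces out the underlying ROC and Precision-Recall curves; these functions integrate over all thresholds of the given scores.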
+ +![](images/11e0844043e87b45c13c665c87ae63fdb43a41a2a8bbcb1054c0eba3ff951b79.jpg) + +![](images/d032910af54d7f87b59b93417525b7d795391aaab173fffa0b8e24f79a7407be.jpg) + +![](images/f2f1f46ef7f1bd11d232e5b2475e8c87094098676829eab8ff4143d7d110f658.jpg) + +![](images/ff8e622b86c2b686702f9aeab489bd714c88166da0f5d00512cca991526e537c.jpg) + +![](images/ec0ca4e21bdcf335deaa04294b728a8859cadfe422c84f84c680e1790b656acd.jpg) + +![](images/8364a63861cd2fdc1d45588681f3d82ee39ecec3adf163e8ea3c3a77a3a890e1.jpg) + +![](images/210cc5441dc51caa299dd962f17ff1aebb35275a3cb8940eb0940d194477dcf6.jpg) + +![](images/a1d6fcec2fd58dd70ebd0e65ee7490d485a023730b1346c5f1514fa36344f040.jpg) + +![](images/59e8d90ad3cc8b11ff387e90428ebc678bd6fc2de1ef17078dac8144dc6828e4.jpg) +Figure 7: AUC and AP scores for RSRAE, FMS, SFMS and RCAE. From top to bottom are the results using Caltech 101, Fashion MNIST, Tiny Imagenet with deep features, Reuters-21578 and 20 Newsgroups. + +![](images/51e5b4760b955379c2bc69b13b45b0fed59c6efb2afaab1ac8518b046b27e297.jpg) + +# G SENSITIVITY TO HYPERPARAMETERS + +We examine the sensitivity of some of the reported results to changes in the hyperparameters. Section G.1 tests the sensitivity of RSRAE to changes in the intrinsic dimension $d$. Section G.2 tests the sensitivity of RSRAE to changes in the learning rate. Section G.3 tests the sensitivity of RSRAE+ to changes in $\lambda_{1}$ and $\lambda_{2}$. + +# G.1 SENSITIVITY TO THE INTRINSIC DIMENSION + +In the experiments reported in Section 4 we fixed $d = 10$. Here we check the sensitivity of the reported results to changes in $d$. We use the same datasets of Section 4.2 with an outlier ratio of $c = 0.5$ and test the following values of $d$: 1, 2, 5, 8, 10, 12, 15, 20, 30, 40, 50. Fig. 8 reports the AUC and AP scores for these choices of $d$ and for these datasets with $c = 0.5$. We note that, in general, our results are not sensitive to choices of $d \leq 30$.
+ +We believe that the structure of these datasets is complex and is not represented by a smooth manifold of a fixed dimension. Therefore, low-dimensional encoding of the inliers is beneficial for various choices of low dimensions. + +When $d$ gets closer to $D$, the performance deteriorates. Such a decrease in accuracy is noticeable for Reuters-21578 and 20 Newsgroups, where for both datasets $D = 128$. For the image datasets (without deep features) $D = 1152$, and thus only relatively small values of $d$ were tested. As an example of a large $d$ for an image dataset, we consider the case of $d = D = 1152$ in Caltech101 with $c = 0.5$. In this case, AUC = 0.619 and AP = 0.512, which are very low scores. + +We conclude that in our experiments (with $c = 0.5$), RSRAE was stable with respect to $d$ around our choice of $d = 10$. + +![](images/83716d0da49a2b571d699d8dace3af15c21ea8364b50ce4ea3a32f9bcbd51489.jpg) + +![](images/c99d9b252a6e875945725320aab0711e4b14eef48b64b8c31c98e67098f63519.jpg) + +![](images/89a5c5e189b3e188a3a81d06c869e91f8cbdfe5613c5a824862da3600f05c0f7.jpg) + +![](images/46e23030dbbac8f6d5ea306b167ac68a35e1fca5371fddcfccf53cf6181ab9ed.jpg) + +![](images/26fe2c67e1fdd6424bdc24e3b0a9ea0e24339967635327b1e0e2a03973aa9243.jpg) +Figure 8: AUC and AP scores for different choices of $d$. The datasets are the same as those in Section 4.2, where the outlier ratio is $c = 0.5$. + +# G.2 SENSITIVITY TO THE LEARNING RATE + +In the experiments reported in Section 4 we fixed the learning rate for RSRAE to be 0.00025. Here we check the sensitivity of the reported results to changes in the learning rate. We use the same datasets of Section 4.2 with an outlier ratio of $c = 0.5$ and test the following values of the learning rate: 0.0001, 0.00025, 0.0005, 0.001, 0.0025, 0.005, 0.01, 0.025, 0.05, 0.1. Fig. 9 reports the AUC and AP scores for these values and for these datasets (with $c = 0.5$). We note that the performance is stable for learning rates not exceeding 0.01.
+ +![](images/4e6a55dcc3f7949ea9aab9110a35e674a27b11a23eee6b17c25ce87ae2003823.jpg) + +![](images/add474bfa34d9356e969b30b56cc413b90c078fe454d21cf923719200991e84e.jpg) + +![](images/054ddab0a611b039391456bde839fd83775a51f077076b6546f16ee7f72cb29a.jpg) + +![](images/1b6b5eebcb11d7849367f74a348bc13fa436db952077d1a2e4f06c2a1d31f29a.jpg) + +![](images/72e5cd131070ed7b8c489e0aaf07745b40f58fcbe9cc2f14abdc1ba696d9aa09.jpg) +Figure 9: AUC and AP scores for various learning rates. The datasets are the same as those in Section 4.2, where the outlier ratio is $c = 0.5$. + +# G.3 SENSITIVITY OF RSRAE+ TO $\lambda_{1}$ AND $\lambda_{2}$ + +We study the sensitivity of RSRAE+ to different choices of $\lambda_{1}$ and $\lambda_{2}$. We recall that RSRAE does not require these parameters. It is still interesting to check such sensitivity and find out whether careful tuning of these parameters in RSRAE+ can yield better scores than those of RSRAE. We use the same datasets of Section 4.2 with an outlier ratio of $c = 0.5$ and simultaneously test the following values of both $\lambda_{1}$ and $\lambda_{2}$: 0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0. Figs. 10 and 11 report the AUC and AP scores for these values and datasets (with $c = 0.5$). For each subfigure, the above values of $\lambda_{1}$ and $\lambda_{2}$ are recorded on the $x$ and $y$ axes, respectively. The darker colors of the heat map correspond to larger scores. For comparison, the corresponding AUC or AP score of RSRAE is indicated in the title of each subfigure. + +We note that RSRAE+ is more sensitive to $\lambda_{1}$ than to $\lambda_{2}$. Furthermore, as $\lambda_{1}$ increases, the scores often become more stable to changes in $\lambda_{1}$. That is, the magnitudes of the derivatives of the scores with respect to $\lambda_{1}$ seem to generally decrease with $\lambda_{1}$. In Section 4.3 we used $\lambda_{1} = \lambda_{2} = 0.1$ as this choice seemed optimal for the independent set of 20 Newsgroups.
We note, though, that optimal hyperparameters depend on the dataset, and it is thus not a good idea to optimize them using different datasets. They also depend on the choice of $c$, but for brevity we only test them with $c = 0.5$. + +Lastly, we note that the AUC and AP scores of RSRAE are comparable to the fine-tuned ones of RSRAE+ (where $c = 0.5$). We thus advocate using the alternating minimization of RSRAE, which is independent of $\lambda_{1}$ and $\lambda_{2}$. + +![](images/2a0a037a5556585c73d971c6fb4620fea31f849816264cba14f34986ba9bb093.jpg) + +![](images/d2ea5b008f046078948299cb01a1cba44efce5b69e1027938ce5200874499203.jpg) + +![](images/3b8ea767aab7b5d0ad4c7131388c2c13e3ffbcdf02a569cb61c1c2eb6763278e.jpg) + +![](images/8ede71991a2f99ba8c480053a0a529b0f74cf389e3d1e70c1037c0f753ccbc93.jpg) + +![](images/e07cfbd6c0e413bac9bd768d8ba3f69dbced4d0a26bc960590781f590c3c9538.jpg) +Figure 10: AUC and AP scores for RSRAE+ with various choices of $\lambda_{1}$ and $\lambda_{2}$ for Caltech 101, Fashion MNIST and Tiny Imagenet with deep features, where $c = 0.5$. + +![](images/e548f38d9389ff5066ce3fe43dbcb9b131271048c4c058e4183f558d6354d7fc.jpg) + +![](images/e90b098f441b99f9ee7d807d297b56e9f189df66cb4f318b8767c3a94ea6be05.jpg) + +![](images/111ab3273c300f5ca567af66bf2f755ac4d5c6fd7cda84287aa88f751b8abac4.jpg) + +![](images/6e204aac3742080e9fbec430c676609c4579ab6451e94353508d310e1648c83a.jpg) +Figure 11: AUC and AP scores for RSRAE+ with various choices of $\lambda_{1}$ and $\lambda_{2}$ using Reuters-21578 and 20 Newsgroup, where $c = 0.5$. + +![](images/2a77ef4933c45de8f6468c95bd7238a2a0419fa63bbec27d4c85fd52e6e4e505.jpg) + +# H RUNTIME COMPARISON + +Table 1 records runtimes for all the methods and datasets in Section 4.2 with the choice of $c = 0.5$. More precisely, a runtime is the time needed to complete a single experiment, where 200 epochs were used for the neural networks. The table averages each runtime over the different classes.
+ +Note that LOF, OCSVM and IF are faster than the rest of the methods since they do not require training neural networks. We also note that the runtime of RSRAE is competitive in comparison to the other tested methods, that is, DSEBMs, DAGMM, and GT. The neural network structures of these four methods are the same, and thus the difference in runtime is mainly due to different pre- and post-processing. + +Table 1: Runtime comparison: runtimes (in seconds) are reported for all methods and datasets in Section 4.2, where the outlier ratio is $c = 0.5$. Since GT was only applied to the image datasets without deep features, its runtime is not available (N/A) for the last three datasets.
| Benchmarks | Caltech 101 | Fashion MNIST | Tiny Imagenet | Reuters-21578 | 20 Newsgroups |
| --- | --- | --- | --- | --- | --- |
| LOF | 0.233 | 7.163 | 0.707 | 25.342 | 10.516 |
| OCSVM | 0.120 | 3.151 | 0.473 | 8.726 | 4.169 |
| IF | 0.339 | 1.485 | 0.511 | 20.481 | 6.751 |
| GT | 21.681 | 87.729 | N/A | N/A | N/A |
| DSEBMs | 14.293 | 46.933 | 25.194 | 41.083 | 33.852 |
| DAGMM | 21.066 | 71.632 | 41.211 | 83.551 | 60.720 |
| RSRAE | 6.305 | 33.853 | 10.940 | 32.061 | 18.869 |
+ +# I ADDITIONAL RESULTS + +We include some supplementary numerical results. In Section I.1 we show the results for Tiny Imagenet without deep features. In Section I.2 we extend the results reported in Section 4.3 to the other datasets. + +# I.1 TINY IMAGENET WITHOUT DEEP FEATURES + +Fig. 12 presents the results for Tiny Imagenet without deep features. We see that RSRAE performs the best, but in general all the methods do not perform well. Indeed, the performance is significantly worse than that with deep features. + +![](images/3e525ec1969da535832d5cc1a72a9c61baebee25f9ac980ae12f6b4e6ccdd789.jpg) +Figure 12: AUC and AP scores for the Tiny Imagenet without using the deep features. + +![](images/5f4ff9a0e3e80a6a7f4c6d91b999f575d221283d5b54f34932c14a4e4d3c4e00.jpg) + +# I.2 ADDITIONAL COMPARISON WITH VARIATIONS OF RSRAE + +Figs. 13 and 14 extend the comparisons in Section 4.3 to additional datasets. The conclusion is the same. In general, RSRAE performs better by a large margin than AE and AE-1. On the other hand, RSRAE+ is often in between RSRAE and AE/AE-1. However, for 20 Newsgroups, RSRAE+ performs similarly to, and possibly slightly better than, RSRAE. It seems that in this case our choice of $\lambda_{1}$ and $\lambda_{2}$ is good. + +![](images/fac26fe0417f8471402ba6a450c36d44287d8804e735c406aee0cbadab9173b7.jpg) + +![](images/ed6364ba5da3d7ed8c2fbaf195ac2eb2fa984433040767b2281fb2e3f57af65c.jpg) + +![](images/32c62d773ff4ed2e38745d864ed6abb253322cbcb3af39df8a04a25f1aed5482.jpg) +Figure 13: AUC and AP scores for RSRAE and alternative formulations using Fashion MNIST and deep features of Tiny Imagenet, where $c = 0.5$.
+ +![](images/47bd17bd43bccb4174cadfa001b1cefb95a7956e9a434a3ce065714fc23504d1.jpg) + +![](images/1d8c198bfa1019b90388246d31e2b1b81e78da59815fbb13a4ed81987bb19aad.jpg) + +![](images/6154f16701af5d4a836e4a316bfbcca3be02c4f155aeece4766a08b6355d06a4.jpg) + +![](images/9b6c4a2d02c05c2a2f7dbb900897b5a300d77bb69d339c70f6e6dc140171f4a2.jpg) +Figure 14: AUC and AP scores for RSRAE and alternative formulations using Tiny Imagenet (images) and 20 Newsgroup, where $c = 0.5$ . + +![](images/1122dec846795d588835b5c58ce9d518b3bbd8796ef6cd667aea6eb52e58ffb9.jpg) \ No newline at end of file diff --git a/robustsubspacerecoverylayerforunsupervisedanomalydetection/images.zip b/robustsubspacerecoverylayerforunsupervisedanomalydetection/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..d2004cf6704f32ca41cd77358876654035311ed2 --- /dev/null +++ b/robustsubspacerecoverylayerforunsupervisedanomalydetection/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3771189a9b74af873d685b07a261158e5e62a9f0fee78708cc065911c9e5c04c +size 1681753 diff --git a/robustsubspacerecoverylayerforunsupervisedanomalydetection/layout.json b/robustsubspacerecoverylayerforunsupervisedanomalydetection/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d7a8839732e385850ba7c016e861487d9c70baed --- /dev/null +++ b/robustsubspacerecoverylayerforunsupervisedanomalydetection/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8d24d0ef43505eee8767e9b0d37e14d902f4089dacaa33a2cff6e5ce04bdf42c +size 1007768 diff --git a/robusttrainingwithensembleconsensus/91896eeb-929a-42b8-8189-1ba7d4d7b6d3_content_list.json b/robusttrainingwithensembleconsensus/91896eeb-929a-42b8-8189-1ba7d4d7b6d3_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..d04ae870442e24a176011ac715f238b4d922c54e --- /dev/null +++ 
b/robusttrainingwithensembleconsensus/91896eeb-929a-42b8-8189-1ba7d4d7b6d3_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dd20fbb68b8bfaaee4888488a5abf72874bee539847df772355f44e8d93a90d5 +size 88844 diff --git a/robusttrainingwithensembleconsensus/91896eeb-929a-42b8-8189-1ba7d4d7b6d3_model.json b/robusttrainingwithensembleconsensus/91896eeb-929a-42b8-8189-1ba7d4d7b6d3_model.json new file mode 100644 index 0000000000000000000000000000000000000000..7e2cb0117ddf3be7bb3d47878177225ad0a33dd1 --- /dev/null +++ b/robusttrainingwithensembleconsensus/91896eeb-929a-42b8-8189-1ba7d4d7b6d3_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c30045db10a05dad72800e924618cca6a665f39876e13b6dec8ffa3795192fb6 +size 108721 diff --git a/robusttrainingwithensembleconsensus/91896eeb-929a-42b8-8189-1ba7d4d7b6d3_origin.pdf b/robusttrainingwithensembleconsensus/91896eeb-929a-42b8-8189-1ba7d4d7b6d3_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7078f9967d251c35910b7bf420ec7805279fe3cb --- /dev/null +++ b/robusttrainingwithensembleconsensus/91896eeb-929a-42b8-8189-1ba7d4d7b6d3_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f30fef2b7d75bd39cfa6940034db5a1ef181905ea7be9e297b604874f0c1541b +size 1216269 diff --git a/robusttrainingwithensembleconsensus/full.md b/robusttrainingwithensembleconsensus/full.md new file mode 100644 index 0000000000000000000000000000000000000000..ef3ee30c290033779d9bd401e29835efef445b7a --- /dev/null +++ b/robusttrainingwithensembleconsensus/full.md @@ -0,0 +1,418 @@ +# ROBUST TRAINING WITH ENSEMBLE CONSENSUS + +Jisoo Lee & Sae-Young Chung + +Korea Advanced Institute of Science and Technology + +Daejeon, South Korea + +{jisolee, schung}@kaist.ac.kr + +# ABSTRACT + +Since deep neural networks are over-parameterized, they can memorize noisy examples. We address such a memorization issue in the presence of label noise. 
From the fact that deep neural networks cannot generalize to neighborhoods of memorized features, we hypothesize that noisy examples do not consistently incur small losses on the network under a certain perturbation. Based on this, we propose a novel training method called Learning with Ensemble Consensus (LEC) that prevents overfitting to noisy examples by removing them based on the consensus of an ensemble of perturbed networks. One of the proposed LECs, LTEC, outperforms the current state-of-the-art methods on noisy MNIST, CIFAR-10, and CIFAR-100 in an efficient manner. + +# 1 INTRODUCTION + +Deep neural networks (DNNs) have shown excellent performance (Krizhevsky et al., 2012; He et al., 2016) on visual recognition datasets (Deng et al., 2009). However, it is difficult to obtain high-quality labeled datasets in practice (Wang et al., 2018a). Even worse, DNNs might not learn patterns from the training data in the presence of noisy examples (Zhang et al., 2016). Therefore, there is an increasing demand for robust training methods. In general, DNNs optimized with SGD first learn patterns relevant to clean examples under label noise (Arpit et al., 2017). Based on this, recent studies regard examples that incur small losses on a network that does not overfit noisy examples as clean (Han et al., 2018; Shen & Sanghavi, 2019). However, such small-loss examples could be noisy, especially under a high level of noise. Therefore, sampling trainable examples from a noisy dataset by relying on small-loss criteria might be impractical. + +To address this, we develop a method to identify noisy examples among small-loss ones based on two well-known observations: (i) noisy examples are learned via memorization rather than via pattern learning, and (ii) under a certain perturbation, network predictions for memorized features easily fluctuate, while those for generalized features do not.
Based on these two observations, we hypothesize that, out of small-loss examples, the training losses of noisy examples would increase when a certain perturbation is injected into the network parameters, while those of clean examples would not. This suggests that examples that consistently incur small losses under multiple perturbations can be regarded as clean. This idea stems from an artifact of SGD optimization and is therefore applicable to any architecture optimized with SGD. + +In this work, we introduce a method to perturb parameters to distinguish noisy examples from small-loss examples. We then propose a method to robustly train neural networks under label noise, termed learning with ensemble consensus (LEC). In LEC, the network is initially trained on the entire training set for a while and then trained on the intersection of small-loss examples of the ensemble of perturbed networks. We present three LECs with different perturbations and evaluate their effectiveness on three benchmark datasets with random label noise (Goldberger & Ben-Reuven, 2016; Ma et al., 2018), open-set noise (Wang et al., 2018b), and semantic noise. Our proposed LEC outperforms existing robust training methods by efficiently removing noisy examples from training batches. + +# 2 RELATED WORK + +Generalization of DNNs. Although DNNs are over-parameterized, they have impressive generalization ability (Krizhevsky et al., 2012; He et al., 2016). Some studies argue that gradient-based optimization plays an important role in regularizing DNNs (Neyshabur et al., 2014; Zhang et al., 2016). Arpit et al. (2017) show that DNNs optimized with gradient-based methods learn patterns relevant to clean examples in the early stage of training. Since mislabeling reduces the correlation with other training examples, it is likely that noisy examples are learned via memorization. Therefore, we analyze the difference between generalized and memorized features to discriminate clean and noisy examples.
+ +Training DNNs with Noisy datasets. Label noise issues can be addressed by reducing the negative impact of noisy examples. One direction is to train with a modified loss function based on the noise distribution. Most studies in this direction estimate the noise distribution prior to training, as it is not accessible in general (Sukhbaatar et al., 2014; Goldberger & Ben-Reuven, 2016; Patrini et al., 2017; Hendrycks et al., 2018). Another direction is to train with modified labels using the current model prediction (Reed et al., 2014; Ma et al., 2018). Aside from these directions, recent work suggests methods of exploiting small-loss examples (Jiang et al., 2017; Han et al., 2018; Yu et al., 2019; Shen & Sanghavi, 2019) based on the generalization ability of DNNs. However, it is still hard to find clean examples by relying on training losses alone. This study presents a simple method to overcome such a problem of small-loss criteria. + +# 3 ROBUST TRAINING WITH ENSEMBLE CONSENSUS + +# 3.1 PROBLEM STATEMENT + +Suppose that $\epsilon \%$ of the examples in a dataset $\mathcal{D} \coloneqq \mathcal{D}_{\text{clean}} \cup \mathcal{D}_{\text{noisy}}$ are noisy. Let $\mathcal{S}_{\epsilon, \mathcal{D}, \theta}$ denote the set of $(100 - \epsilon)\%$ small-loss examples of the network $f$ parameterized by $\theta$ out of the examples in $\mathcal{D}$. Since it is generally hard to learn all and only the clean examples, especially on a highly corrupted training set, it is problematic to regard all examples in $\mathcal{S}_{\epsilon, \mathcal{D}, \theta}$ as being clean. To mitigate this, we suggest a simple idea: to find the noisy examples among the examples in $\mathcal{S}_{\epsilon, \mathcal{D}, \theta}$. + +# 3.2 LEARNING WITH ENSEMBLE CONSENSUS (LEC) + +Since noisy examples are only weakly correlated with other training examples, they are likely to be learned via memorization. However, DNNs cannot generalize to neighborhoods of the memorized features.
This means that even if the training losses of noisy examples are small, they can easily be increased under a certain perturbation $\delta$, i.e., for $(x,y) \in \mathcal{D}_{\text{noisy}}$, + +$$ +(x, y) \in \mathcal{S}_{\epsilon, \mathcal{D}, \theta} \Rightarrow (x, y) \notin \mathcal{S}_{\epsilon, \mathcal{D}, \theta + \delta}. +$$ + +Unlike noisy examples, the network $f$ trained on the entire set $\mathcal{D}$ can learn patterns from some clean examples in the early stage of training. Thus, their training losses remain small in the presence of the perturbation $\delta$, i.e., for $(x,y) \in \mathcal{D}_{\text{clean}}$, + +$$ +(x, y) \in \mathcal{S}_{\epsilon, \mathcal{D}, \theta} \Rightarrow (x, y) \in \mathcal{S}_{\epsilon, \mathcal{D}, \theta + \delta}. +$$ + +This suggests that noisy examples can be identified from the inconsistency of their losses under a certain perturbation $\delta$. Based on this, we regard the examples in the intersection of the $(100 - \epsilon)\%$ small-loss examples of an ensemble of $M$ networks generated by adding perturbations $\delta_{1}, \delta_{2}, \ldots, \delta_{M}$ to $\theta$, i.e., + +$$ +\cap_{m = 1}^{M} \mathcal{S}_{\epsilon, \mathcal{D}, \theta + \delta_{m}}, +$$ + +as clean. We call this ensemble consensus filtering because examples are selected via ensemble consensus. With this filtering, we develop a training method termed learning with ensemble consensus (LEC), described in Algorithms 1 and 2. Both algorithms consist of warming-up and filtering processes. The difference between the two lies in the filtering process. During the filtering process of Algorithm 1, the network is trained on the intersection of the $(100 - \epsilon)\%$ small-loss examples of $M$ networks within a mini-batch $\mathcal{B}$. Therefore, the number of examples updated at once varies. + +We can encourage more stable training with a fixed number of examples to be updated at once, as described in Algorithm 2.
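The intersection step above can be sketched in a few lines of code; the function names and toy loss values here are illustrative, not taken from the authors' implementation:

```python
import numpy as np

def small_loss_indices(losses, eps):
    """Indices of the (100 - eps)% examples with the smallest losses."""
    k = int(round(len(losses) * (100 - eps) / 100.0))
    return set(np.argsort(losses)[:k].tolist())

def ensemble_consensus(loss_matrix, eps):
    """loss_matrix: (M, N) per-example losses under M perturbed networks.
    Keep only the indices that lie in every network's small-loss set."""
    per_network = [small_loss_indices(row, eps) for row in loss_matrix]
    return set.intersection(*per_network)

# Toy losses for N = 5 examples under M = 2 perturbations: examples 2 and 3
# each incur a small loss for only one of the two networks, so the
# consensus drops them along with the clearly large-loss example 4.
losses = np.array([[0.1, 0.2, 0.3, 0.25, 0.9],
                   [0.1, 0.2, 0.3, 0.95, 0.9]])
kept = ensemble_consensus(losses, eps=40)  # keep the 60% smallest per network
print(sorted(kept))  # prints [0, 1]
```

A loss that is small under one perturbation but not another is exactly the fluctuation the hypothesis above attributes to memorized (noisy) examples.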
During the filtering process of Algorithm 2, we first obtain the intersection of the small-loss examples of $M$ networks within a full batch $\mathcal{D}$ at each epoch. We then sample a subset of size batchsize from the intersection and train on it at each update, as in standard SGD. + +# Algorithm 1 LEC + +Require: noisy dataset $\mathcal{D}$ with noise ratio $\epsilon \%$, duration of warming-up $T_{w}$, # of networks used for filtering $M$, perturbation $\delta$ + +1: Initialize $\theta$ randomly + +2: for epoch $t = 1 : T_{w}$ do + +3: for mini-batch index $b = 1 : \frac{|\mathcal{D}|}{\text{batchsize}}$ do + +4: Sample a subset $\mathcal{B}_b$ of size batchsize from a full batch $\mathcal{D}$ + +5: $\theta \gets \theta - \alpha \nabla_{\theta} \frac{1}{|\mathcal{B}_b|} \sum_{(x,y)\in \mathcal{B}_b} CE(f_\theta(x), y)$ + +6: end for + +7: end for + +8: for epoch $t = T_w + 1 : T_{end}$ do + +9: for mini-batch index $b = 1 : \frac{|\mathcal{D}|}{\text{batchsize}}$ do + +10: Sample a subset $\mathcal{B}_b$ of size batchsize from a full batch $\mathcal{D}$ + +11: for $m = 1 : M$ do + +12: $\theta_{m} = \theta + \delta_{m,b,t}$ $\triangleright$ Adding perturbation + +13: $\mathcal{S}_{\epsilon, \mathcal{B}_b, \theta_m} \coloneqq (100 - \epsilon)\%$ small-loss examples of $f_{\theta_m}$ within a mini-batch $\mathcal{B}_b$ + +14: end for + +15: $\mathcal{B}_b' = \cap_{m=1}^M \mathcal{S}_{\epsilon, \mathcal{B}_b, \theta_m}$ $\triangleright$ Ensemble consensus filtering + +16: $\theta \gets \theta - \alpha \nabla_{\theta} \frac{1}{|\mathcal{B}_b'|} \sum_{(x,y)\in \mathcal{B}_b'} CE(f_{\theta}(x), y)$ + +17: end for + +18: end for + +# 3.3 PERTURBATION TO IDENTIFY NOISY EXAMPLES + +Now we aim to find a perturbation $\delta$ to inject in order to discriminate memorized features from generalized ones. We present three LECs with different perturbations in the following. The pseudocodes can be found in Section A.1.3.
+ +- Network-Ensemble Consensus (LNEC): Inspired by the observation that an ensemble of networks with the same architecture is correlated during generalization and decorrelated during memorization (Morcos et al., 2018), the perturbation $\delta$ comes from the difference between the $M$ networks. During the warming-up process, the $M$ networks are trained independently. During the filtering process, the $M$ networks are trained on the intersection of the $(100 - \epsilon)\%$ small-loss examples of the $M$ networks. +- Self-Ensemble Consensus (LSEC): We focus on the relationship between Morcos et al. (2018) and Lakshminarayanan et al. (2017): network predictions for memorized features are uncertain, while those for generalized features are certain. Since the uncertainty of predictions can also be captured by multiple stochastic predictions (Gal & Ghahramani, 2016), the perturbation $\delta$ comes from the difference between $M$ stochastic predictions of a single network. During the filtering process, the network is trained on the intersection of the $(100 - \epsilon)\%$ small-loss examples obtained with $M$ stochastic predictions. +- Temporal-Ensemble Consensus (LTEC): Inspired by the observation that during training, atypical features are more easily forgotten than typical features (Toneva et al., 2018), the perturbation $\delta$ comes from the difference between the networks at the current and preceding epochs. During the filtering process, the network is trained on the intersection of the $(100 - \epsilon)\%$ small-loss examples at the current epoch $t$ and the preceding $\min(M - 1, t - 1)$ epochs. We collect the $(100 - \epsilon)\%$ small-loss examples at the preceding epochs, rather than the network parameters, to reduce memory usage. + +# 4 EXPERIMENTS + +In this section, we show (i) the effectiveness of the three perturbations at removing noisy examples from small-loss examples and (ii) a comparison of LEC and other existing methods under various annotation noises.
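The temporal-ensemble bookkeeping behind LTEC, storing the small-loss index sets (not the parameters) of recent epochs and intersecting them with the current one, can be sketched as follows; the names and toy index sets are made up for illustration:

```python
from collections import deque

M = 5  # ensemble size: the current epoch plus up to M - 1 preceding epochs
history = deque(maxlen=M - 1)  # small-loss index sets of preceding epochs

def ltec_consensus(current_set):
    """Intersect this epoch's small-loss set with those of preceding epochs.
    Keeping index sets rather than network parameters reduces memory usage."""
    kept = set.intersection(current_set, *history)
    history.append(current_set)
    return kept

# Example: index 3 drops out of the small-loss set at epoch 2 and index 2
# at epoch 3, so only the consistently small-loss examples 0 and 1 survive.
for epoch_set in [{0, 1, 2, 3}, {0, 1, 2}, {0, 1, 3}]:
    kept = ltec_consensus(epoch_set)
print(sorted(kept))  # prints [0, 1]
```

The `deque` with `maxlen=M - 1` automatically discards the oldest epoch's set, matching the $\min(M - 1, t - 1)$ window in the description above.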
+ +# 4.1 EXPERIMENTAL SETUP + +Annotation noise. We study random label noise (Goldberger & Ben-Reuven, 2016; Ma et al., 2018), open-set noise (Wang et al., 2018b), and semantic noise. To generate these noises, we use MNIST (LeCun et al., 1998) and CIFAR-10/100 (Krizhevsky et al., 2009), which are commonly used to assess robustness. For each benchmark dataset, we only corrupt its training set, while leaving its test set intact for testing. The details can be found in Section A.1.1. + +- Random label noise. Annotation issues can happen in easy images as well as hard images (Wang et al., 2018a). This is simulated in two ways: sym-$\epsilon\%$ and asym-$\epsilon\%$. For sym-$\epsilon\%$, $\epsilon\%$ of the entire set is randomly mislabeled to one of the other labels, and for asym-$\epsilon\%$, each label $i$ of $\epsilon\%$ of the entire set is changed to $i + 1$. We study four types: sym-20% and asym-20% to simulate a low level of noise, and sym-60% and asym-40% to simulate a high level of noise. +- Open-set noise. In reality, annotated datasets may contain out-of-distribution (OOD) examples. As in Yu et al. (2019), to make OOD examples, the images of $\epsilon \%$ of examples randomly sampled from the original dataset are replaced with images from another dataset, while the labels are left intact. SVHN (Netzer et al., 2011) is used to make open-set noise of CIFAR-100, and ImageNet-32 (Chrabaszcz et al., 2017) and CIFAR-100 are used to make open-set noise of CIFAR-10. We study two types: $20\%$ and $40\%$ open-set noise. +- Semantic noise. In general, images with easy patterns are correctly labeled, while images with ambiguous patterns are obscurely mislabeled. To simulate this, we select the top $\epsilon \%$ most uncertain images and then flip their labels to confusing ones. The uncertainty of each image is computed by the amount of disagreement between the predictions of networks trained on the clean dataset, as in Lakshminarayanan et al. (2017).
Each image is then relabeled with the class receiving the highest averaged softmax output from the networks trained on the clean dataset, excluding its ground-truth label. We study two types: 20% and 40% semantic noise.

Architecture and optimization. Unless otherwise specified, we use a variant of the 9-convolutional-layer architecture (Laine & Aila, 2016; Han et al., 2018). All parameters are trained for 200 epochs with Adam (Kingma & Ba, 2014) with a batch size of 128. The details can be found in Section A.1.2.

Hyperparameters. The proposed LEC involves three hyperparameters: the duration of warming-up $T_{w}$, the noise ratio $\epsilon\%$, and the number of networks used for filtering $M$. Unless otherwise specified, $T_{w}$ is set to 10, and $M$ is set to 5 for random label noise and open-set noise, and to 10 for semantic noise. We assume that the noise ratio $\epsilon\%$ is given. A further study can be found in Section 5.2.

Evaluation. We use two metrics: test accuracy and label precision (Han et al., 2018). At the end of each epoch, test accuracy is measured as the ratio of correctly predicted test examples to all test examples, and label precision is measured as the ratio of clean examples used for training to all examples used for training. Thus, for both metrics, higher is better. For methods with multiple networks, the averaged values are reported. We report peak as well as final accuracy because a small validation set may be available in reality.

For each noise type, every method is run four times with four random seeds, e.g., four runs of Standard on CIFAR-10 with sym-20%. For random label noise and open-set noise, a noisy dataset is randomly generated and initial network parameters are randomized in each run. Note that the four noisy datasets generated in the four runs are shared across all methods. In contrast, semantic noise is generated in a deterministic way.
Thus, only initial network parameters are randomized for each run of semantic noise. + +![](images/27d5c7437f67eb7f5645d7ca4f3eea29f68e055d38222db5f377f2b4a4686490.jpg) +Figure 1: Label precision (\%) of Self-training and three LECs on CIFAR-10 with random label noise. We plot the average as a solid line and the standard deviation as a shadow around the line. + +Table 1: Average of final/peak test accuracy (%) of Self-training and three LECs on CIFAR-10 with random label noise. The best is highlighted in bold. + +
| Dataset | Noise type | Self-training | LNEC | LSEC | LTEC |
| --- | --- | --- | --- | --- | --- |
| CIFAR-10 | sym-20% | 84.96/85.02 | 86.72/86.78 | 85.42/85.63 | 88.18/88.28 |
| CIFAR-10 | sym-60% | 73.99/74.35 | 79.61/79.64 | 76.73/76.92 | 80.38/80.52 |
| CIFAR-10 | asym-20% | 85.02/85.24 | 86.90/87.11 | 85.44/85.64 | 88.86/88.93 |
| CIFAR-10 | asym-40% | 78.84/79.66 | 84.01/84.48 | 80.74/81.49 | 86.36/86.50 |
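Both reported metrics, label precision (Section 4.1) and recall (Section 5.1), are simple set ratios. A sketch with hypothetical index sets (the numbers are illustrative only):

```python
def label_precision(selected, clean):
    """Fraction of the examples used for training that are clean."""
    return len(selected & clean) / len(selected)

def recall(selected, clean):
    """Fraction of all clean examples that were used for training."""
    return len(selected & clean) / len(clean)

selected = {0, 1, 2, 3, 4, 5, 6, 7}   # examples kept by filtering
clean = {0, 1, 2, 3, 4, 5, 8, 9}      # correctly labeled examples
```

Here 6 of the 8 selected examples are clean, so both precision and recall are 0.75; filtering aims to raise precision without letting recall collapse.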
![](images/edba232a114d76abeffd4c9f307837ef02e80e88ca70e010df78481393c1e0c2.jpg)
Figure 2: Label precision (%) of small-loss examples of the current network (in green) and of the intersection of small-loss examples of the current and preceding networks (in red) while running LTEC on CIFAR-10 with random label noise. We report the precision from epoch 11, when the filtering process starts.

# 4.2 EFFECTIVENESS OF LECS AT IDENTIFYING NOISY EXAMPLES

Comparison with Self-training. In Section 3.1, we argue that the $(100 - \epsilon)\%$ small-loss examples may be corrupted. To show this, we run LEC with $M = 1$, i.e., a method that trains on the $(100 - \epsilon)\%$ small-loss examples alone. Note that this method is similar in spirit to Jiang et al. (2017); Shen & Sanghavi (2019). We call it Self-training for simplicity. Figure 1 shows that the label precision of Self-training is low, especially under a high level of noise, i.e., sym-60%. Compared to Self-training, the three LECs are trained on higher-precision data and thus achieve higher test accuracy, as shown in Table 1. Of the three, LTEC performs best in both label precision and test accuracy.

Noisy examples are removed through ensemble consensus filtering. In LTEC, at every batch update, we first obtain the $(100 - \epsilon)\%$ small-loss examples of the current network and then train on the intersection of the small-loss examples of the current and preceding networks. We plot the label precision of the small-loss examples of the current network (in green) and of the intersection (in red) while running LTEC on CIFAR-10 with random label noise in Figure 2. We observe that the label precision of the intersection is always higher, indicating that noisy examples are removed through ensemble consensus filtering.

# 4.3 COMPARISON WITH STATE-OF-THE-ART METHODS

Competing methods.
The competing methods include a regular training method: Standard; a method of training with corrected labels: D2L (Ma et al., 2018); a method of training with a loss function modified according to the noise distribution: Forward (Patrini et al., 2017); and a method of exploiting small-loss examples: Co-teaching (Han et al., 2018). We tune all the methods individually, as described in Section A.1.4.

![](images/655ca9f9717c57a0461f232f0e2da666932ee7076ae1da1c9e611fcf4e58fe0d.jpg)
Figure 3: Test accuracy (%) of different algorithms on MNIST/CIFAR with random label noise.

![](images/078e2164610fd5eb8e101eb8cf38471c2c455d2cb4cb07f7462383d19b4e2a13.jpg)
Figure 4: Label precision (%) of different algorithms on MNIST/CIFAR with random label noise.

Results on MNIST/CIFAR with random label noise. The overall results can be found in Figures 3 and 4, and Table 2. We plot the average as a solid line and the standard deviation as a shadow around the line. Figure 3 shows that the test accuracy of D2L increases as training progresses at a low level of label noise, but not at a high level of label noise. This is because D2L puts large weights on the given labels in the early stage of training even under a high level of noise. Forward shows its strength only in limited scenarios such as MNIST. Co-teaching does not work well on CIFAR-100 with asym-$40\%$, indicating that its cross-training scheme is vulnerable to small-loss examples of low label precision (see Figure 4). Unlike Co-teaching, our methods attempt to remove the noisy examples within small-loss examples. Thus, on CIFAR-100 with asym-$40\%$ noise, LTEC and LTEC-full surpass Co-teaching by wide margins of about $6\%$ and $5\%$, respectively.

Table 2: Average of final/peak test accuracy (%) of different algorithms on MNIST/CIFAR with random label noise. The best is highlighted in bold.
| Dataset | Noise type | Standard | D2L | Forward | Co-teaching | LTEC | LTEC-full |
| --- | --- | --- | --- | --- | --- | --- | --- |
| MNIST | sym-20% | 95.21/99.36 | 98.38/99.35 | 96.88/99.29 | 97.84/99.24 | 99.52/99.58 | 99.58/99.64 |
| MNIST | sym-60% | 55.88/98.50 | 59.40/98.37 | 64.03/98.26 | 91.52/98.53 | 99.16/99.25 | 99.38/99.44 |
| MNIST | asym-20% | 89.74/99.32 | 92.88/99.41 | 97.71/99.52 | 96.11/99.40 | 99.49/99.59 | 99.61/99.66 |
| MNIST | asym-40% | 65.13/96.58 | 66.44/96.99 | 95.76/99.51 | 91.10/98.81 | 98.47/99.32 | 99.40/99.48 |
| CIFAR-10 | sym-20% | 79.50/80.74 | 84.60/84.68 | 80.29/80.91 | 85.46/85.52 | 88.18/88.28 | 88.16/88.31 |
| CIFAR-10 | sym-60% | 41.91/65.06 | 44.10/65.26 | 44.38/61.89 | 75.01/75.19 | 80.38/80.52 | 79.13/79.26 |
| CIFAR-10 | asym-20% | 79.24/81.39 | 84.27/84.40 | 79.89/82.08 | 85.24/85.44 | 88.86/88.93 | 89.04/89.14 |
| CIFAR-10 | asym-40% | 57.50/68.77 | 60.63/67.46 | 58.53/67.19 | 79.53/80.19 | 86.36/86.50 | 84.56/84.69 |
| CIFAR-100 | sym-20% | 50.28/50.89 | 55.47/55.58 | 50.01/50.58 | 57.87/57.94 | 59.73/59.82 | 59.91/59.98 |
| CIFAR-100 | sym-60% | 20.79/34.26 | 23.72/34.89 | 21.78/34.01 | 43.36/43.68 | 46.24/46.43 | 45.77/45.89 |
| CIFAR-100 | asym-20% | 52.40/52.42 | 57.31/57.53 | 52.44/52.56 | 55.88/55.91 | 58.72/58.86 | 58.05/58.16 |
| CIFAR-100 | asym-40% | 37.64/37.66 | 40.12/40.37 | 36.95/37.61 | 40.99/41.01 | 47.70/47.82 | 45.49/45.55 |
+ +Table 3: Average of final/peak test accuracy (%) of different algorithms on CIFAR with open-set noise. The best is highlighted in bold. + +
| Dataset + Open-set | Noise type | Standard | D2L | Forward | Co-teaching | LTEC | LTEC-full |
| --- | --- | --- | --- | --- | --- | --- | --- |
| CIFAR-10 + CIFAR-100 | 20% | 86.74/86.83 | 89.42/89.49 | 86.87/86.96 | 88.58/88.61 | 88.69/88.82 | 89.07/89.11 |
| CIFAR-10 + CIFAR-100 | 40% | 82.64/82.71 | 85.32/85.41 | 82.57/82.68 | 86.18/86.22 | 86.37/86.41 | 86.26/86.33 |
| CIFAR-10 + ImageNet-32 | 20% | 88.27/88.36 | 90.60/90.64 | 88.24/88.29 | 88.99/89.06 | 89.15/89.24 | 89.34/89.42 |
| CIFAR-10 + ImageNet-32 | 40% | 85.90/85.99 | 87.91/87.95 | 85.84/85.99 | 86.99/87.03 | 86.63/86.78 | 87.00/87.12 |
| CIFAR-100 + SVHN | 20% | 59.08/59.19 | 62.89/62.98 | 58.99/59.08 | 60.69/60.75 | 61.65/61.78 | 61.87/61.98 |
| CIFAR-100 + SVHN | 40% | 53.32/53.35 | 56.30/56.38 | 53.18/53.30 | 56.45/56.52 | 56.95/57.18 | 57.77/57.90 |
+ +Table 4: Average of final/peak test accuracy (%) of different algorithms on CIFAR with semantic noise. The best is highlighted in bold. + +
| Dataset | Noise type | Standard | D2L | Forward | Co-teaching | LTEC | LTEC-full |
| --- | --- | --- | --- | --- | --- | --- | --- |
| CIFAR-10 | 20% | 81.29/81.36 | 83.96/83.99 | 81.10/81.23 | 83.53/83.56 | 84.48/84.66 | 84.48/84.58 |
| CIFAR-10 | 40% | 71.64/74.36 | 74.72/74.94 | 71.38/73.47 | 76.61/76.89 | 75.52/76.52 | 76.57/78.06 |
| CIFAR-100 | 20% | 56.88/56.96 | 60.23/60.40 | 56.60/56.74 | 58.45/58.50 | 58.75/58.78 | 58.73/58.80 |
| CIFAR-100 | 40% | 49.56/49.69 | 53.04/53.19 | 49.57/49.69 | 52.96/52.98 | 52.58/52.78 | 53.15/54.18 |
+ +Results on CIFAR with open-set noise. The overall results can be found in Table 3. All the methods including LTEC and LTEC-full perform well under open-set noise. We speculate that this is due to a low correlation between open-set noisy examples. This is supported by the results on CIFAR-10, i.e., all the methods perform better on ImageNet-32 noise than on CIFAR-100 noise, as ImageNet-32 has more classes than CIFAR-100. Similar to poorly annotated examples, it is hard for deep networks to learn patterns relevant to out-of-distribution examples during the warming-up process. Therefore, those examples can be removed from training batches through ensemble consensus filtering. + +Results on CIFAR with semantic noise. The overall results can be found in Table 4. The semantically generated noisy examples are highly correlated with each other, making it difficult to filter out those examples through ensemble consensus. We use 10 as the value of $M$ for semantic noise because ensemble consensus with a bigger $M$ is more conservative. On CIFAR with semantic noise, LTEC and LTEC-full perform comparably or best, compared to the other methods. Of the two, LTEC-full performs better on $40\%$ semantic noise due to its training stability. + +# 5 DISCUSSION + +# 5.1 HARD-TO-CLASSIFY BUT CLEAN EXAMPLES + +It is hard to learn all clean examples during the warming-up process. Therefore, clean examples with large losses may be excluded from training batches during the filtering process. However, we expect that the number of clean examples used for training would increase gradually as training + +![](images/625b1c3790eab854e5dd5ea132896758e82eaeac107a17dee600c3dc8dac801c.jpg) +Figure 5: Recall (\%) of LTEC and LTEC-full on CIFAR-10 with random label noise. We plot the average as a solid line and the standard deviation as a shadow around the line. + +Table 5: Average of final/peak test accuracy (%) of LTEC with varying the number of networks used for filtering $M$ . 
The best is highlighted in bold. + +
| Dataset | Noise type | M=1 | M=3 | M=5 | M=7 | M=9 | M=∞ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| CIFAR-10 | sym-20% | 84.96/85.02 | 87.68/87.78 | 88.18/88.28 | 88.63/88.77 | 88.79/88.87 | 86.57/86.62 |
| CIFAR-10 | sym-60% | 73.99/74.35 | 79.73/79.80 | 80.38/80.52 | 80.39/80.45 | 80.28/80.39 | 71.63/71.86 |
| CIFAR-10 | asym-20% | 85.02/85.24 | 87.85/88.15 | 88.86/88.93 | 88.96/89.07 | 88.99/89.11 | 85.55/85.59 |
| CIFAR-10 | asym-40% | 78.84/79.66 | 85.44/85.59 | 86.36/86.50 | 86.78/86.82 | 86.59/86.63 | 77.30/77.40 |
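The tradeoff in Table 5 can be seen with a back-of-envelope calculation: if each of $M$ networks kept an *independently random* $(100 - \epsilon)\%$ of a batch, the fraction surviving all $M$ intersections would be $(1 - \epsilon)^M$. Real small-loss sets are highly correlated, so the true survival rate is far higher; the toy model (ours, not the paper's) only illustrates why too large an $M$ can starve training of examples:

```python
# Survival fraction of the intersection under the (unrealistic)
# independence assumption, for eps = 20% (each network keeps 80%).
p = 0.8
survival = {M: p ** M for M in (1, 3, 5, 7, 9)}
```

Under this crude model the kept fraction drops from 0.8 at M=1 to about 0.13 at M=9; consensus filtering works precisely because the real sets overlap much more than independent ones would.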
+ +Table 6: Average of final/peak test accuracy (%) of Co-teaching and LTEC with estimates of noise ratio (simulated). The best is highlighted in bold. + +
| Dataset | Noise type | Co-teaching (under, 0.9ε) | LTEC (under, 0.9ε) | Co-teaching (correct, ε) | LTEC (correct, ε) | Co-teaching (over, 1.1ε) | LTEC (over, 1.1ε) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| CIFAR-10 | sym-20% | 84.51/84.58 | 87.93/88.08 | 85.46/85.52 | 88.18/88.28 | 86.40/86.45 | 88.72/88.75 |
| CIFAR-10 | sym-60% | 70.47/73.11 | 77.98/78.22 | 75.01/75.19 | 80.38/80.52 | 79.15/79.17 | 79.34/79.45 |
| CIFAR-10 | asym-20% | 84.61/84.73 | 88.15/88.39 | 85.24/85.44 | 88.86/88.93 | 86.41/86.57 | 89.04/89.22 |
| CIFAR-10 | asym-40% | 76.14/77.41 | 84.42/84.52 | 79.53/80.19 | 86.36/86.50 | 82.19/82.63 | 86.93/86.96 |
proceeds, since LEC allows the network to learn patterns from clean examples without overfitting. To confirm this, we measure recall, defined as the ratio of clean examples used for training to all clean examples, at the end of each epoch while running LTEC and LTEC-full. As expected, the recalls of both LTEC and LTEC-full increase sharply over the first 50 epochs, as described in Figure 5. Pre-training (Hendrycks et al., 2019) prior to the filtering process may help prevent the removal of clean examples from training batches.

# 5.2 ABLATION STUDY

The number of networks used for filtering. During the filtering process of LEC, we use only the intersection of the small-loss examples of the $M$ perturbed networks for training. This means that the number of examples used for training depends strongly on $M$. To understand the effect of $M$, we run LTEC with varying $M$ on CIFAR-10 with random label noise. In particular, the range of $M$ is $\{1, 3, 5, 7, 9, \infty\}$. Table 5 shows that a larger $M$ does not always lead to better performance. This is because too many examples may be removed from training batches as $M$ increases. Indeed, the total number of examples used for training is critical for robustness, as claimed in Rolnick et al. (2017); Li et al. (2017).

Noise ratio. In reality, only a poorly estimated noise ratio may be accessible. To study the effect of poor noise estimates, we run LTEC on CIFAR-10 with random label noise using slightly lower and higher values than the actual noise ratio, as in Han et al. (2018). We also run Co-teaching, which requires the noise ratio, for comparison. The overall results can be found in Table 6. Since it is generally difficult to learn all clean examples, training on small-loss examples selected using the over-estimated ratio (i.e., $1.1\epsilon$) is often helpful for both Co-teaching and LTEC. In contrast, small-loss examples selected using the under-estimated ratio may be highly corrupted.
In this case, LTEC is robust to the estimation error of noise ratio, while Co-teaching is not. Such robustness of LTEC against noise estimation error comes from ensemble consensus filtering. + +Table 7: Average of final/peak test accuracy (%) of Standard and LTEC with ResNet. The best is highlighted in bold. + +
| Dataset | Noise type | Standard (ResNet) | LTEC (ResNet) |
| --- | --- | --- | --- |
| CIFAR-10 | sym-20% | 81.31/85.30 | 89.01/89.12 |
| CIFAR-10 | sym-60% | 61.94/72.80 | 81.46/81.66 |
| CIFAR-10 | asym-20% | 81.93/87.32 | 88.90/89.04 |
| CIFAR-10 | asym-40% | 62.76/77.10 | 86.62/86.85 |
Applicability to different architectures. The key idea of LEC is rooted in the difference between generalization and memorization, i.e., the different ways clean and noisy examples are learned during early SGD optimization (Arpit et al., 2017). Therefore, we expect LEC to be applicable to any architecture optimized with SGD. To support this, we run Standard and LTEC with ResNet-20 (He et al., 2016). The architecture is optimized following Chollet et al. (2015), achieving a final test accuracy of $90.67\%$ on clean CIFAR-10. Here, $T_{w}$ is set to 30 to match these optimization details. Table 7 shows that LTEC (ResNet) beats Standard (ResNet) in both peak and final accuracy, as expected.

# 6 CONCLUSION

This work presents a method of generating and using an ensemble for robust training. We explore three simple perturbation methods to generate the ensemble and then develop a way of identifying noisy examples through ensemble consensus on small-loss examples. Along with the growing attention to the use of small-loss examples for robust training, we expect that our ensemble method will be useful for such training methods.

# ACKNOWLEDGMENTS

We thank Changho Suh, Jinwoo Shin, Su-Young Lee, Minguk Jang, and anonymous reviewers for their great suggestions. This work was supported by the ICT R&D program of MSIP/IITP. [2016-0-00563, Research on Adaptive Machine Learning Technology Development for Intelligent Autonomous Digital Companion]

# REFERENCES

Mahdieh Abbasi, Arezoo Rajabi, Azadeh Sadat Mozafari, Rakesh B Bobba, and Christian Gagne. Controlling over-generalization and its effect on adversarial examples generation and detection. arXiv preprint arXiv:1808.08282, 2018.
Devansh Arpit, Stanisław Jastrzebski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, et al. A closer look at memorization in deep networks.
In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 233-242. JMLR.org, 2017.
François Chollet et al. Keras. https://github.com/fchollet/keras, 2015.
Patryk Chrabaszcz, Ilya Loshchilov, and Frank Hutter. A downsampled variant of imagenet as an alternative to the cifar datasets. arXiv preprint arXiv:1707.08819, 2017.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. IEEE, 2009.
Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning, pp. 1050-1059, 2016.
Jacob Goldberger and Ehud Ben-Reuven. Training deep neural-networks using a noise adaptation layer. 2016.

Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, and Masashi Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In Advances in neural information processing systems, pp. 8527-8537, 2018.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
Dan Hendrycks, Mantas Mazeika, Duncan Wilson, and Kevin Gimpel. Using trusted data to train deep networks on labels corrupted by severe noise. In Advances in neural information processing systems, pp. 10456-10465, 2018.
Dan Hendrycks, Kimin Lee, and Mantas Mazeika. Using pre-training can improve model robustness and uncertainty. arXiv preprint arXiv:1901.09960, 2019.
Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. arXiv preprint arXiv:1712.05055, 2017.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization.
arXiv preprint arXiv:1412.6980, 2014. +Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009. +Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097-1105, 2012. +David Krueger, Nicolas Ballas, Stanislaw Jastrzebski, Devansh Arpit, Maxinder S Kanwal, Tegan Maharaj, Emmanuel Bengio, Asja Fischer, and Aaron Courville. Deep nets don't learn via memorization. 2017. +Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242, 2016. +Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems, pp. 6402-6413, 2017. +Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. +Wen Li, Limin Wang, Wei Li, Eirikur Agustsson, and Luc Van Gool. Webvision database: Visual learning and understanding from web data. arXiv preprint arXiv:1708.02862, 2017. +Xingjun Ma, Yisen Wang, Michael E Houle, Shuo Zhou, Sarah M Erfani, Shu-Tao Xia, Sudan-thi Wijewickrema, and James Bailey. Dimensionality-driven learning with noisy labels. arXiv preprint arXiv:1806.02612, 2018. +Ari Morcos, Maithra Raghu, and Samy Bengio. Insights on representational similarity in neural networks with canonical correlation. In Advances in Neural Information Processing Systems, pp. 5727-5736, 2018. +Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011. +Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. 
arXiv preprint arXiv:1412.6614, 2014. +Avital Oliver, Augustus Odena, Colin A Raffel, Ekin Dogus Cubuk, and Ian Goodfellow. Realistic evaluation of deep semi-supervised learning algorithms. In Advances in Neural Information Processing Systems, pp. 3235-3246, 2018. + +Giorgio Patrini, Alessandro Rozza, Aditya Krishna Menon, Richard Nock, and Lizhen Qu. Making deep neural networks robust to label noise: A loss correction approach. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1944-1952, 2017. +Scott Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, and Andrew Rabinovich. Training deep neural networks on noisy labels with bootstrapping. arXiv preprint arXiv:1412.6596, 2014. +David Rolnick, Andreas Veit, Serge Belongie, and Nir Shavit. Deep learning is robust to massive label noise. arXiv preprint arXiv:1705.10694, 2017. +Yanyao Shen and Sujay Sanghavi. Learning with bad training data via iterative trimmed loss minimization. In International Conference on Machine Learning, pp. 5739-5748, 2019. +Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958, 2014. +Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus. Training convolutional networks with noisy labels. arXiv preprint arXiv:1406.2080, 2014. +Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. In Thirty-First AAAI Conference on Artificial Intelligence, 2017. +Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, and Geoffrey J Gordon. An empirical study of example forgetting during deep neural network learning. arXiv preprint arXiv:1812.05159, 2018. 
Fei Wang, Liren Chen, Cheng Li, Shiyao Huang, Yanjie Chen, Chen Qian, and Chen Change Loy. The devil of face recognition is in the noise. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 765-780, 2018a.
Yisen Wang, Weiyang Liu, Xingjun Ma, James Bailey, Hongyuan Zha, Le Song, and Shu-Tao Xia. Iterative learning with open-set noisy labels. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8688-8696, 2018b.
Xingrui Yu, Bo Han, Jiangchao Yao, Gang Niu, Ivor W Tsang, and Masashi Sugiyama. How does disagreement benefit co-teaching? arXiv preprint arXiv:1901.04215, 2019.
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.

# A APPENDIX

# A.1 IMPLEMENTATION DETAILS

# A.1.1 ANNOTATION NOISES

- Random label noise: For sym-$\epsilon\%$, $\epsilon\%$ of the entire set is randomly mislabeled to one of the other labels, and for asym-$\epsilon\%$, each label $i$ of $\epsilon\%$ of the entire set is changed to $i + 1$. The corruption matrices of sym-$\epsilon\%$ and asym-$\epsilon\%$ are described in Figures A1a and A1b, respectively.
- Open-set noise: For $\epsilon\%$ open-set noise, the images of $\epsilon\%$ of examples randomly sampled from the original dataset are replaced with images from external sources, while their labels are left intact. For CIFAR-10 with open-set noise, we sample images from 75 classes of CIFAR-100 (Abbasi et al., 2018) and 748 classes of ImageNet (Oliver et al., 2018) to avoid sampling images similar to those of CIFAR-10.
- Semantic noise: For semantic noise, we choose uncertain images and then mislabel them ambiguously. In Figure A2, we see that clean examples have simple, easy images, while noisy examples do not. Also, its corruption matrix (see Figure A1c) reflects the similarity between classes, e.g., cat and dog, car and truck.
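The two random label corruptions above can be sketched in a few lines of NumPy (an illustrative reimplementation, not the authors' code; helper names are ours):

```python
import numpy as np

def corrupt_sym(labels, eps, n_classes, rng):
    """sym-eps%: eps% of examples get a uniformly random *other* label."""
    noisy = labels.copy()
    idx = rng.choice(len(labels), size=int(eps * len(labels)), replace=False)
    # a random offset in [1, n_classes - 1] guarantees the label changes
    noisy[idx] = (noisy[idx] + rng.integers(1, n_classes, size=len(idx))) % n_classes
    return noisy

def corrupt_asym(labels, eps, n_classes, rng):
    """asym-eps%: eps% of examples with label i are relabeled i + 1 (mod C)."""
    noisy = labels.copy()
    idx = rng.choice(len(labels), size=int(eps * len(labels)), replace=False)
    noisy[idx] = (noisy[idx] + 1) % n_classes
    return noisy

rng = np.random.default_rng(0)
y = rng.integers(0, 10, size=1000)        # toy 10-class label vector
y_sym = corrupt_sym(y, 0.2, 10, rng)      # sym-20%
y_asym = corrupt_asym(y, 0.4, 10, rng)    # asym-40%
```

The asymmetric variant produces exactly the cyclic structure of the corruption matrix in Figure A1b: all corrupted mass of class $i$ lands on class $i + 1$.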
+ +![](images/d0f86b4bc2a73dd110db55efc4e5e905988d1a9e1a2af80aa682447f8d820a7a.jpg) +(a) Random-sym + +![](images/a0ffbfbc269e21cfd7e18eba6c434b3e338b0924b414a4da148c2e9d76d5085f.jpg) +(b) Random-asym + +![](images/376275978a3c32d68e2b03ec1803166cc17b04cd79384c59eeaf2787e8651c55.jpg) +(c) Semantic + +![](images/359d7e3efae26cf1fb8cb56ec30db9bbc6e28fa828493855fe60ff468c017f31.jpg) +Figure A1: Corruption matrices of CIFAR-10 with random label noise and semantic noise. +Figure A2: Clean examples (top) and noisy examples (bottom) randomly sampled from CIFAR-10 with $20\%$ semantic noise. We observe that noisy examples contain atypical features and are semantically mislabeled. + +# A.1.2 ARCHITECTURE AND OPTIMIZATION DETAILS + +The 9-convolutional layer architecture used in this study can be found in Table A1. The network is optimized with Adam (Kingma & Ba, 2014) with a batchsize of 128 for 200 epochs. The initial learning rate $\alpha$ is set to 0.1. The learning rate is linearly annealed to zero during the last 120 epochs for MNIST and CIFAR-10, and during the last 100 epochs for CIFAR-100. The momentum parameters $\beta_{1}$ and $\beta_{2}$ are set to 0.9 and 0.999, respectively. $\beta_{1}$ is linearly annealed to 0.1 during the last 120 epochs for MNIST and CIFAR-10, and during the last 100 epochs for CIFAR-100. The images of CIFAR are divided by 255 and are whitened with ZCA. Additional regularizations such as data augmentation are not applied. The results on clean MNIST, CIFAR-10, and CIFAR-100 can be found in Table A2. + +Table A1: 9-conv layer architecture. + +
| 9-conv layer architecture |
| --- |
| Input image |
| Gaussian noise (σ = 0.15) |
| 3 × 3 conv, 128, padding = 'same'; batch norm; LReLU (α = 0.01) |
| 3 × 3 conv, 128, padding = 'same'; batch norm; LReLU (α = 0.01) |
| 3 × 3 conv, 128, padding = 'same'; batch norm; LReLU (α = 0.01) |
| 2 × 2 max-pooling, padding = 'same'; dropout (drop rate = 0.25) |
| 3 × 3 conv, 256, padding = 'same'; batch norm; LReLU (α = 0.01) |
| 3 × 3 conv, 256, padding = 'same'; batch norm; LReLU (α = 0.01) |
| 3 × 3 conv, 256, padding = 'same'; batch norm; LReLU (α = 0.01) |
| 2 × 2 max-pooling, padding = 'same'; dropout (drop rate = 0.25) |
| 3 × 3 conv, 512, padding = 'valid'; batch norm; LReLU (α = 0.01) |
| 3 × 3 conv, 256, padding = 'valid'; batch norm; LReLU (α = 0.01) |
| 3 × 3 conv, 128, padding = 'valid'; batch norm; LReLU (α = 0.01) |
| global average pooling; fc (128 → # of classes) |
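A quick sanity check of the spatial sizes in Table A1 for a 32 × 32 CIFAR input: 'same' 3 × 3 convolutions keep the size, each 2 × 2 max-pool halves it, and each 'valid' 3 × 3 convolution shrinks it by 2, leaving a 2 × 2 map before global average pooling. Worked arithmetic only, not a model definition:

```python
def out_size(size, padding):
    """Output spatial size of a 3x3, stride-1 convolution."""
    return size if padding == "same" else size - 2

size = 32
for _ in range(3):
    size = out_size(size, "same")   # 32 -> 32 -> 32 -> 32
size //= 2                          # first 2x2 max-pool: 32 -> 16
for _ in range(3):
    size = out_size(size, "same")   # stays 16
size //= 2                          # second 2x2 max-pool: 16 -> 8
for _ in range(3):
    size = out_size(size, "valid")  # 8 -> 6 -> 4 -> 2
```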
+ +Table A2: Avg (± stddev) of final test accuracy of a regular training on clean MNIST, CIFAR-10, and CIFAR-100. + +
| Dataset | MNIST | CIFAR-10 | CIFAR-100 |
| --- | --- | --- | --- |
| Test accuracy | 99.60 ± 0.02 | 90.59 ± 0.15 | 64.38 ± 0.20 |
+ +# A.1.3 PSEUDOCODES FOR LECS + +We present three LECs with different perturbations in Section 3.3. The pseudocodes for LNEC, LSEC, LTEC, and LTEC-full are described in the following. In LTEC-full, we obtain small-loss examples utilized for filtering from the second epoch to encourage its stability. + +# Algorithm A1 LNEC + +Require: noisy dataset $\mathcal{D}$ with noise ratio $\epsilon \%$ duration of warming-up $T_{w}$ , The number of networks used for filtering $M$ +1: Initialize $\theta_{1},\theta_{2},\dots,\theta_{M}$ randomly +2: for epoch $t = 1:T_w$ do Warming-up process +3: for mini-batch index $b = 1:\frac{|D|}{\text{batchsize}}$ do +4: Sample a subset of batchsize $\mathcal{B}_b$ from a full batch $\mathcal{D}$ +5: for network index $m = 1:M$ do +6: $\theta_{m}\gets \theta_{m} - \alpha \nabla_{\theta_{m}}\frac{1}{|\mathcal{B}_{b}|}\sum_{(x,y)\in \mathcal{B}_{b}}CE(f_{\theta_{m}}(x),y)$ +7: end for +8: end for +9: end for +10: for epoch $t = T_w + 1:T_{end}$ do Filtering process +11: for mini-batch index $b = 1:\frac{|D|}{\text{batchsize}}$ do +12: Sample a subset of batchsize $\mathcal{B}_b$ from a full batch $\mathcal{D}$ +13: for network index $m = 1:M$ do +14: $S_{\epsilon ,\mathcal{B}_b,\theta_m}\coloneqq (100 - \epsilon)\%$ small-loss examples of $f_{\theta_m}$ within $\mathcal{B}_b$ +15: end for +16: $B_b^{\prime} = \cap_{m = 1}^{M}S_{\epsilon ,\mathcal{B}_b,\theta_m}$ Network-ensemble consensus filtering +17: for network index $m = 1:M$ do +18: $\theta_{m}\gets \theta_{m} - \alpha \nabla_{\theta_{m}}\frac{1}{|\mathcal{B}_{b}^{\prime}|}\sum_{(x,y)\in \mathcal{B}_{b}^{\prime}}CE(f_{\theta_{m}}(x),y)$ +19: end for +20: end for +21: end for + +# Algorithm A2 LSEC + +Require: noisy dataset $\mathcal{D}$ with noise ratio $\epsilon \%$ duration of warming-up $T_{w}$ , The number of networks used for filtering $M$ +1: Initialize $\theta$ randomly Warming-up process +2: for epoch $t = 1:T_w$ do +3: for mini-batch index $b = 
1:\frac{|D|}{\text{batchsize}}$ do
4: Sample a subset of batchsize $\mathcal{B}_b$ from a full batch $\mathcal{D}$
5: $\theta \leftarrow \theta -\alpha \nabla_{\theta}\frac{1}{|\mathcal{B}_b|}\sum_{(x,y)\in \mathcal{B}_b}CE(f_\theta (x),y)$
6: end for
7: end for
8: for epoch $t = T_w + 1:T_{end}$ do ▷ Filtering process
9: for mini-batch index $b = 1:\frac{|D|}{\text{batchsize}}$ do
10: Sample a subset of batchsize $\mathcal{B}_b$ from a full batch $\mathcal{D}$
11: for forward pass index $m = 1:M$ do
12: $\theta_{m} = \theta +\delta_{m}$ where $\delta_{m}$ comes from the stochasticity of the network architecture
13: $S_{\epsilon ,\mathcal{B}_b,\theta_m}\coloneqq (100 - \epsilon)\%$ small-loss examples of $f_{\theta_m}$ within $\mathcal{B}_b$
14: end for
15: $B_b^{\prime} = \cap_{m = 1}^{M}S_{\epsilon ,\mathcal{B}_b,\theta_m}$ ▷ Self-ensemble consensus filtering
16: $\theta \leftarrow \theta -\alpha \nabla_{\theta}\frac{1}{|\mathcal{B}_b^{\prime}|}\sum_{(x,y)\in \mathcal{B}_b^{\prime}}CE(f_\theta (x),y)$
17: end for
18: end for

Algorithm A3 LTEC
Require: noisy dataset $\mathcal{D}$ with noise ratio $\epsilon\%$, duration of warming-up $T_{w}$, the number of networks used for filtering $M$
1: Initialize $\theta$ randomly
2: for epoch $t = 1:T_{end}$ do
3: $\mathcal{P}_t = \emptyset$
4: for mini-batch index $b = 1:\frac{|D|}{\text{batchsize}}$ do
5: Sample a subset of batchsize $\mathcal{B}_b$ from a full batch $\mathcal{D}$
6: $S_{\epsilon ,\mathcal{B}_b,\theta}\coloneqq (100 - \epsilon)\%$ small-loss examples of $f_{\theta}$ within $\mathcal{B}_b$
7: $\mathcal{P}_t\gets \mathcal{P}_t\cup S_{\epsilon ,\mathcal{B}_b,\theta}$
8: if $t < T_w + 1$ then ▷ Warming-up process
9: $\theta \leftarrow \theta -\alpha \nabla_{\theta}\frac{1}{|\mathcal{B}_b|}\sum_{(x,y)\in \mathcal{B}_b}CE(f_\theta (x),y)$
10: else ▷ Filtering process
11: if $t = 1$ then
12: $B_b' = S_{\epsilon ,\mathcal{B}_b,\theta}$
13: else if $t < M$ then
14: $B_b' = \mathcal{P}_1\cap 
\mathcal{P}_2\cap \dots \cap \mathcal{P}_{t - 1}\cap \mathcal{S}_{\epsilon ,\mathcal{B}_b,\theta}$
15: else
16: $B_b' = \mathcal{P}_{t - (M - 1)}\cap \mathcal{P}_{t - (M - 2)}\cap \dots \cap \mathcal{P}_{t - 1}\cap \mathcal{S}_{\epsilon ,\mathcal{B}_b,\theta}$ ▷ Temporal-ensemble consensus filtering
17: end if
18: $\theta \leftarrow \theta -\alpha \nabla_{\theta}\frac{1}{|\mathcal{B}_b'|}\sum_{(x,y)\in \mathcal{B}_b'}CE(f_\theta (x),y)$
19: end if
20: end for
21: end for

Algorithm A4 LTEC-full
Require: noisy dataset $\mathcal{D}$ with noise ratio $\epsilon\%$, duration of warming-up $T_{w}$, the number of networks used for filtering $M$
1: Initialize $\theta$ randomly
2: for mini-batch index $b = 1:\frac{|D|}{\text{batchsize}}$ do
3: Sample a subset of batchsize $\mathcal{B}_b$ from a full batch $\mathcal{D}$
4: $\theta \leftarrow \theta -\alpha \nabla_{\theta}\frac{1}{|\mathcal{B}_b|}\sum_{(x,y)\in \mathcal{B}_b}CE(f_\theta (x),y)$
5: end for
6: for epoch $t = 2:T_{end}$ do
7: $\mathcal{P}_t\coloneqq (100 - \epsilon)\%$ small-loss examples of $f_{\theta}$ within $\mathcal{D}$ ▷ Small-loss examples are computed from the 2nd epoch
8: if $t < T_w + 1$ then ▷ Warming-up process
9: for mini-batch index $b = 1:\frac{|D|}{\text{batchsize}}$ do
10: Sample a subset of batchsize $\mathcal{B}_b$ from a full batch $\mathcal{D}$
11: $\theta \leftarrow \theta -\alpha \nabla_{\theta}\frac{1}{|\mathcal{B}_b|}\sum_{(x,y)\in \mathcal{B}_b}CE(f_\theta (x),y)$
12: end for
13: else ▷ Filtering process
14: if $t < M + 1$ then
15: $\mathcal{D}_t^\prime = \mathcal{P}_2\cap \mathcal{P}_3\cap \dots \cap \mathcal{P}_{t - 1}\cap \mathcal{P}_t$
16: else
17: $\mathcal{D}_t^\prime = \mathcal{P}_{t - (M - 1)}\cap \mathcal{P}_{t - (M - 2)}\cap \dots \cap \mathcal{P}_{t - 1}\cap \mathcal{P}_t$ ▷ Temporal-ensemble consensus filtering
18: end if
19: for mini-batch index $b = 1:\frac{|D_t'|}{\text{batchsize}}$ do
20: Sample a subset of batchsize $\mathcal{B}_b^\prime$ from 
$\mathcal{D}_t^\prime$ +21: $\theta \leftarrow \theta -\alpha \nabla_{\theta}\frac{1}{|\mathcal{B}_b^{\prime}|}\sum_{(x,y)\in \mathcal{B}_b^{\prime}}CE(f_\theta (x),y)$ +22: end for +23: end if +24: end for + +# A.1.4 COMPETING METHODS + +The competing methods include a regular training method: Standard, a method of training with corrected labels: D2L (Ma et al., 2018), a method of training with a modified loss function based on the noise distribution: Forward (Patrini et al., 2017), and a method of exploiting small-loss examples: Co-teaching (Han et al., 2018). We tune all the methods individually as follows: + +- **Standard:** The network is trained using the cross-entropy loss. +- **D2L:** The input vector of a fully connected layer in the architecture is used to measure the LID estimates. The parameter involved in identifying the turning point, the window size $W$, is set to 12. The network is trained using original labels until the turning point is found and then trained using the bootstrapping target with an adaptively tunable mixing coefficient. +- **Forward:** Prior to training, the corruption matrix $C$ where $C_{ji} = \mathbb{P}(y = i | y_{true} = j)$ is estimated based on the 97th percentile of probabilities for each class on MNIST and CIFAR-10, and the 100th percentile of probabilities for each class on CIFAR-100 as in Hendrycks et al. (2018). The network is then trained using the corrected labels for 200 epochs. +- **Co-teaching:** Two networks are employed. At every update, they select their small-loss examples within a minibatch and then provide them to each other. The ratio of selected examples based on training losses is linearly annealed from $100\%$ to $(100 - \epsilon)\%$ over the first 10 epochs. + +# A.2 COMPLEXITY ANALYSIS + +We compute space complexity as the number of network parameters and computational complexity as the number of forward and backward passes. Here we assume that early stopping is not used and the noise ratio of $\epsilon \%$ is given. 
Note that the computational complexity of each method depends on its hyperparameter values, e.g., the duration of the warming-up process $T_{w}$ and the noise ratio $\epsilon \%$. The analysis is reported in Table A3. Our proposed LTEC is the most efficient because it can be implemented with a single network based on Section A.1.3 and only a subset of the entire training set is updated after the warming-up process. + +Table A3: Complexity analysis: $M$ indicates the number of networks for filtering in LECs. + +
| Complexity | Standard | Self-training | Co-teaching | LNEC | LSEC | LTEC/LTEC-full |
| --- | --- | --- | --- | --- | --- | --- |
| **Space complexity** | | | | | | |
| # of network parameters | $m$ | $m$ | $2m$ | $Mm$ | $m$ | $m$ |
| **Computational complexity** | | | | | | |
| # of forward passes | $n$ | $n$ | $2n$ | $Mn$ | $Mn$ | $n$ |
| # of backward passes | $n$ | $\leq n$ | $\leq 2n$ | $\leq Mn$ | $\leq n$ | $\leq n$ |
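The selection-and-intersection step that makes LTEC cheap (a single network, intersecting small-loss sets recorded over the last $M$ epochs) can be sketched in a few lines. This is a hypothetical, simplified illustration, not the authors' code; the function names, the per-example loss arrays and the toy numbers are all assumptions:

```python
import numpy as np

def small_loss_set(losses, eps_pct):
    """Indices of the (100 - eps_pct)% smallest-loss examples."""
    k = int(len(losses) * (100 - eps_pct) / 100)
    return set(np.argsort(losses)[:k])

def consensus_filter(loss_history, eps_pct, M):
    """Temporal-ensemble consensus: keep only the examples that fall
    into the small-loss set in each of the last M recorded epochs."""
    sets = [small_loss_set(l, eps_pct) for l in loss_history[-M:]]
    return sorted(set.intersection(*sets))

# Toy run: 4 examples, 25% assumed noise, losses from two epochs.
# Example 2 had a large loss in epoch 1 and example 0 in epoch 2,
# so only the consistently small-loss examples 1 and 3 survive.
epoch1 = np.array([0.10, 0.20, 0.90, 0.30])
epoch2 = np.array([0.90, 0.10, 0.20, 0.30])
kept = consensus_filter([epoch1, epoch2], eps_pct=25, M=2)  # [1, 3]
```

The parameter update is then taken only over the surviving indices, which is why the number of backward passes in Table A3 is bounded by $n$ rather than $Mn$.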
+ +# A.3 ADDITIONAL RESULTS + +# A.3.1 RESULTS OF LTEC WITH $M = \infty$ + +Figure A3 shows that ensemble consensus filtering with too large $M$ removes clean examples from training batches in the early stage of the filtering process. Unlike LTEC with $M = 5$ , the recall of LTEC with $M = \infty$ does not increase as training proceeds, suggesting that its generalization performance is not enhanced. This shows that a larger $M$ does not always lead to better performance. We expect that pre-training (Hendrycks et al., 2019) prior to the filtering process helps to reduce the number of clean examples removed by ensemble consensus filtering regardless of $M$ . + +![](images/7f1ddd3d082e224ea94cb11ca3f5215a3ec22a6bb087c01cd22fd3250651c066.jpg) +Figure A3: Recall (\%) of LTECs with varying $M$ on CIFAR-10 with random label noise. \ No newline at end of file diff --git a/robusttrainingwithensembleconsensus/images.zip b/robusttrainingwithensembleconsensus/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..1baca0f88a7d2e32b42fa339f6de7dee67f6eeed --- /dev/null +++ b/robusttrainingwithensembleconsensus/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:746879b4a3c922471f6cd0df95fbc6c5533deebf6bfc447c4b7c76b2db2b87d3 +size 776949 diff --git a/robusttrainingwithensembleconsensus/layout.json b/robusttrainingwithensembleconsensus/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d7f1dac357095b84fec068b6fced99d85f1021d8 --- /dev/null +++ b/robusttrainingwithensembleconsensus/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:022edcf9fd8b599b938faff8550d64a8d377f498d26685059df1ffd4d644d4fa +size 572234 diff --git a/rotationinvariantclusteringofneuronalresponsesinprimaryvisualcortex/e03538be-9350-44a8-a7c4-971d8b2e9729_content_list.json b/rotationinvariantclusteringofneuronalresponsesinprimaryvisualcortex/e03538be-9350-44a8-a7c4-971d8b2e9729_content_list.json 
new file mode 100644 index 0000000000000000000000000000000000000000..2970ac9d81023bccccb913273ba89fbd83db05f5 --- /dev/null +++ b/rotationinvariantclusteringofneuronalresponsesinprimaryvisualcortex/e03538be-9350-44a8-a7c4-971d8b2e9729_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a8cb0b8bedb44282c737b025b552e243627aedab5133ab3c2fc2b217c679dd2e +size 65808 diff --git a/rotationinvariantclusteringofneuronalresponsesinprimaryvisualcortex/e03538be-9350-44a8-a7c4-971d8b2e9729_model.json b/rotationinvariantclusteringofneuronalresponsesinprimaryvisualcortex/e03538be-9350-44a8-a7c4-971d8b2e9729_model.json new file mode 100644 index 0000000000000000000000000000000000000000..6b379a7d8e676b58aba56ef40c0ca4de866a3bf4 --- /dev/null +++ b/rotationinvariantclusteringofneuronalresponsesinprimaryvisualcortex/e03538be-9350-44a8-a7c4-971d8b2e9729_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:be7cdb59b593e7ad220b9e2846eb572b52fd9b87046175af34946cf1ba6b8661 +size 79522 diff --git a/rotationinvariantclusteringofneuronalresponsesinprimaryvisualcortex/e03538be-9350-44a8-a7c4-971d8b2e9729_origin.pdf b/rotationinvariantclusteringofneuronalresponsesinprimaryvisualcortex/e03538be-9350-44a8-a7c4-971d8b2e9729_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..06159fde8d1b79cb68167bd4fec52b21e83ad691 --- /dev/null +++ b/rotationinvariantclusteringofneuronalresponsesinprimaryvisualcortex/e03538be-9350-44a8-a7c4-971d8b2e9729_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e152e4b54c0b16db0b696e5dbd23ff07d349fb40c2b2f4fddccaacecdc5c1c81 +size 31725050 diff --git a/rotationinvariantclusteringofneuronalresponsesinprimaryvisualcortex/full.md b/rotationinvariantclusteringofneuronalresponsesinprimaryvisualcortex/full.md new file mode 100644 index 0000000000000000000000000000000000000000..4d07b279bfc2bf7a3b8fcee33c1a2adb9c910f19 --- /dev/null +++ 
b/rotationinvariantclusteringofneuronalresponsesinprimaryvisualcortex/full.md @@ -0,0 +1,243 @@ +# ROTATION-INVARIANT CLUSTERING OF NEURONAL RESPONSES IN PRIMARY VISUAL CORTEX + +Ivan Ustyuzhaninov, $^{1-3}$ Santiago A. Cadena, $^{1-3}$ Emmanouil Froudarakis, $^{4,5}$ Paul G. Fahey, $^{4,5}$ Edgar Y. Walker, $^{4,5}$ Erick Cobos, $^{4,5}$ Jacob Reimer, $^{4,5}$ Fabian H. Sinz, $^{4,5}$ Andreas S. Tolias, $^{1,4-6}$ Matthias Bethge, $^{1-3,5,\dagger}$ Alexander S. Ecker $^{1-3,5,\dagger,\ddagger,*}$ + +1 Centre for Integrative Neuroscience, University of Tübingen, Germany +2 Bernstein Center for Computational Neuroscience, University of Tübingen, Germany +3 Institute for Theoretical Physics, University of Tübingen, Germany +$^{4}$ Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA +5 Center for Neuroscience and Artificial Intelligence, BCM, Houston, TX, USA +$^{6}$ Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA + +†Authors contributed equally + +$\ddagger$ Present address: Department of Computer Science, University of Göttingen, Germany + +* ecker@cs.uni-goettingen.de + +# ABSTRACT + +Similar to a convolutional neural network (CNN), the mammalian retina encodes visual information into several dozen nonlinear feature maps, each formed by one ganglion cell type that tiles the visual space in an approximately shift-equivariant manner. Whether such organization into distinct cell types is maintained at the level of cortical image processing is an open question. Predictive models building upon convolutional features have been shown to provide state-of-the-art performance, and have recently been extended to include rotation equivariance in order to account for the orientation selectivity of V1 neurons. 
However, generally no direct correspondence between CNN feature maps and groups of individual neurons emerges in these models, thus rendering it an open question whether V1 neurons form distinct functional clusters. Here we build upon the rotation-equivariant representation of a CNN-based V1 model and propose a methodology for clustering the representations of neurons in this model to find functional cell types independent of preferred orientations of the neurons. We apply this method to a dataset of 6000 neurons and visualize the preferred stimuli of the resulting clusters. Our results highlight the range of non-linear computations in mouse V1. + +![](images/2527eef9a2f55d614f97653d984639592833383d6909407969fde3b82383f823.jpg) +Figure 1: An overview of our approach. $①$ Fit rotation-equivariant CNN to predict neural responses and use readout vectors $\pmb{r}_i$ as proxies for neural computations. $②$ Align readouts to account for different preferred orientations. $③$ Cluster the aligned readouts. + +![](images/25d14ded753d9d2de717724984e55a267c2d4edd153a15cec6dde9f1b8e609f2.jpg) + +![](images/422d988c87e6b6993a67eebb460df505c2169c48665dd91c87a7764f8a02b1b0.jpg) + +# 1 INTRODUCTION + +A compact description of the nonlinear computations in primary visual cortex (V1) is still elusive. Like in the retina (Baden et al., 2016; Sanes & Masland, 2015), such understanding could come from a functional classification of neurons. However, it is currently unknown if excitatory neurons in V1 are organized into functionally distinct cell types. + +It has recently been proposed that predictive models of neural responses based on convolutional neural networks could help answer this question (Klindt et al., 2017; Ecker et al., 2019). These models are based on a simple principle (Fig. 1-①): learn a core (e.g. 
a convolutional network) that is shared among all neurons and provides nonlinear features $\Phi(x)$ , which are turned into predictions of neural responses by a linear readout for each neuron (Antolík et al., 2016). Models based on this basic architecture exploit aspects of our current understanding of V1 processing. First, convolutional weight sharing allows us to characterize neurons performing the same computation but with differently located receptive fields by the same feature map (Klindt et al., 2017; McIntosh et al., 2016; Kindel et al., 2019; Cadena et al., 2019). Second, V1 neurons can extract local oriented features such as edges at different orientations, and most low-level image features can appear at arbitrary orientations. Therefore, Ecker et al. (2019) proposed a rotation-equivariant convolutional neural network model of V1 that extends the convolutional weight sharing to the orientations domain. + +The basic idea of previous work (Klindt et al., 2017; Ecker et al., 2019) is that each convolutional feature map could correspond to one cell type. While this idea is conceptually appealing, it hinges on the assumption that V1 neurons are described well by individual units in the shared feature space. However, existing models do not tend to converge to such solutions. Instead, V1 neurons are better described by linearly combining units from the same spatial location in multiple different feature maps (Ecker et al., 2019). Whether or not there are distinct functional cell types in V1 is therefore still an open question. + +Here, we address this question by introducing a clustering method on rotation-equivariant spaces. We treat the feature weights (Fig. 1-①) that map convolutional features to neural responses as an approximate low-dimensional vector representation of this neuron's input-output function. 
We then split neurons into functional types using a two-stage procedure: first, because these feature weights have a rotation-equivariant structure, we find an alignment that rotates them into a canonical orientation (Fig. 1-②); in a second step, we cluster them using standard approaches such as k-means or Gaussian mixture models (Fig. 1-③). We apply our method to the published model and data of Ecker et al. (2019) that contains recordings of around 6000 neurons in mouse V1 under stimulation with natural images. Our results suggest that V1 neurons might indeed be organized into functional clusters. The dataset is best described by a GMM with around 100 clusters, which are to some extent redundant but can be grouped into a smaller number of 10–20 groups. We analyse the resulting clusters via their maximally exciting inputs (MEIs) (Walker et al., 2018) to show that many of these functional clusters do indeed correspond to distinct computations. + +# 2 RELATED WORK + +Unsupervised functional clustering via system identification As outlined in the introduction, our work builds directly upon the methods developed by Klindt et al. (2017) and Ecker et al. (2019). While these works view the feature weights as indicators assigning each neuron to its 'cell type' (feature map), we here take a different view on the same model: rather than focusing on the convolutional features and viewing them as cell types, we treat the feature weights as a low-dimensional representation of the input-output function of each neuron and perform clustering in this space. This view on the problem has the advantage that there is no one-to-one correspondence between the number of feature maps and the number of cell types and we disentangle model fitting from its interpretation. 
On the other hand, our approach comes with an additional complexity: because the feature weights obey rotational equivariance and we would like our clustering to be invariant to rotations, we require a clustering algorithm that is invariant with respect to a class of (linear) transformations. + +Invariant clustering A number of authors have developed clustering methods that are invariant to linear (Tarpey, 2007), affine (Brubaker & Vempala, 2008) or image transformations by rotations, scalings and translations (Frey & Jojic, 2002). Ju et al. (2019) cluster natural images by using a CNN to represent the space of invariant transformations rather than specifying it explicitly. + +Alignments Instead of using custom clustering algorithms that are invariant under certain transformations, we take a simpler approach: we first transform our features such that they are maximally aligned using the class of transformations the clustering should be invariant to. This approach has been used in other contexts before, usually by minimizing the distances between the transformed observations. Examples include alignment of shapes in $\mathbb{R}^d$ using rigid motions (Gower, 1975; Dryden & Mardia, 1998), alignment of temporal signals by finding monotonic input warps (Zhou & De la Torre, 2012), or alignment of manifolds with the distance between the observations being defined according to the metric on the manifold (Wang & Mahadevan, 2008; Cui et al., 2014). There is also work on alignment objectives beyond minimizing distances between transformed observations, examples of which include objectives based on generative models of observations (Kurtek et al., 2011; Duncker & Sahani, 2018) or probabilistic ones which are particularly suited for alignment with multiple groups of underlying observations (Kazlauskaite et al., 2019). + +# 3 ROTATION-EQUIVARIANT CLUSTERING + +Our goal is to cluster neurons in the dataset into groups performing similar computations. 
To do so, we use their low-dimensional representations obtained from the published rotation-equivariant CNN of Ecker et al. (2019), which predicts neural activity as a function of an external image stimulus. We briefly review this model before describing our approach to rotation-invariant clustering. + +CNN model architecture The model consists of two parts (Fig. 1-①): + +1. A convolutional core that is shared by all neurons and computes feature representations $\Phi (\pmb {x})\in \mathbb{R}^{W\times H\times K}$ , where $\pmb{x}$ is the input image, $W\times H$ is the spatial dimensionality and $K$ is the number of feature maps. +2. A separate linear readout $\pmb{w}_n = \pmb{s}_n \otimes \pmb{r}_n \in \mathbb{R}^{W \times H \times K}$ for each neuron $n = 1, \dots, N$ , factorized into a spatial mask $\pmb{s}_n$ and a vector of feature weights $\pmb{r}_n$ . + +The predicted activity of a neuron $n$ for image $x$ is + +$$ +a _ {n} (\boldsymbol {x}) = f \left(\boldsymbol {w} _ {n} \cdot \Phi (\boldsymbol {x})\right) = f \left(\boldsymbol {r} _ {n} \cdot \boldsymbol {s} _ {n} \cdot \Phi (\boldsymbol {x})\right) \in \mathbb {R} \tag {1} +$$ + +where $f(\cdot)$ is a non-linear activation function. Such a CNN therefore provides $K$ -dimensional feature weights $\mathbf{r}_n$ characterising linear combinations of spatially weighted image features $s_n \cdot \Phi(\mathbf{x})$ that are predictive of neural activity. We treat these feature weights as finite-dimensional proxies of actual computations implemented by the neurons. Because masks $s_n$ (another component of readouts $\mathbf{w}_n$ defined above) are irrelevant for our analysis, we will often refer to $\mathbf{r}_n$ simply as readout vectors. We will be referring to the matrix having $\mathbf{r}_i$ as its rows as the readout matrix $\mathbf{R} \in \mathbb{R}^{N \times K}$ . 
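As a concrete reading of Eq. (1), the factorized readout can be sketched in numpy. This is a minimal illustration only: the shapes, the softplus stand-in for $f(\cdot)$ and all variable names are assumptions, not the model's actual configuration:

```python
import numpy as np

def predict_activity(phi, s_n, r_n):
    """Eq. (1) sketch: a_n(x) = f(r_n . (s_n . Phi(x))).
    phi: (W, H, K) feature maps, s_n: (W, H) spatial mask,
    r_n: (K,) feature weights; softplus stands in for f."""
    pooled = np.einsum('whk,wh->k', phi, s_n)  # spatially weighted features
    return np.log1p(np.exp(pooled @ r_n))      # softplus nonlinearity

# Tiny example with made-up shapes and weights.
rng = np.random.default_rng(0)
phi = rng.normal(size=(8, 8, 6))   # stands in for Phi(x)
s_n = np.ones((8, 8)) / 64.0       # uniform spatial mask
r_n = rng.normal(size=6)           # the neuron's feature weights
a_n = predict_activity(phi, s_n, r_n)  # scalar predicted response
```

The `r_n` vector here is exactly the object that is later aligned and clustered: the spatial mask is divided out, so `r_n` is the low-dimensional proxy for the neuron's computation.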
+ +Rotation-equivariant core Feature representations $\Phi(\pmb{x})$ are rotation-equivariant, meaning that weight sharing is not only applied across space but also across rotations: for each convolutional filter there exist $O$ rotated copies, each rotated by $2\pi/O$ . Feature vectors $\phi_{ij}(\pmb{x})$ at position $(i,j)$ therefore consist of $F$ different features, each computed in $O$ linearly-spaced orientations (such that $F \times O = K$ ). We can think of $\phi_{ij}(\pmb{x})$ as being reshaped into an array of size $F \times O$ . Having computed $\phi_{ij}(\pmb{x})$ , we can compute $\phi_{ij}(\pmb{x}')$ with $\pmb{x}'$ being a stimulus $\pmb{x}$ rotated around $(i,j)$ by $2\pi/O$ by cyclically shifting the last axis of $\phi_{ij}(\pmb{x})$ by one step (this mechanism is illustrated in Fig. 2). + +Rotation-equivalent computations The linear readout adheres to the same rotation-equivariant structure as the core. As our goal is to cluster the neurons by the patterns of features they pool while being invariant to orientation, we need to account for the set of weight transformations that correspond to a rotation of the stimulus when clustering neurons. We illustrate this issue with a small toy example consisting of six neurons that fall into two cell types (Fig. 2B). Within each cell type (columns in Fig. 2B), the individual neurons differ only in their orientation. More formally, we define the computations performed by two neurons $n_i$ and $n_j$ to be rotation-equivalent if there exists a rotation $\psi_{ij}$ such that for any input stimulus $x$ we have $a_{n_i}(x) = a_{n_j}(\psi_{ij}(x))$ . We will refer to such neurons as rotation-equivalent as well. 
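The cyclic-shift bookkeeping described above can be checked with a few lines of numpy (an illustrative sketch; `F`, `O` and the random arrays are arbitrary): rolling both the orientation axis of the feature array and the readout leaves the response unchanged, which is exactly the rotation equivalence of the toy example.

```python
import numpy as np

# A feature vector phi_ij reshaped to (F, O): F features in O orientations.
# Rotating the stimulus by 2*pi/O corresponds to rolling the orientation
# axis, and a rotation-equivalent neuron's readout is the same roll.
F, O = 2, 3
rng = np.random.default_rng(1)
phi = rng.normal(size=(F, O))      # stands in for phi_ij(x)
phi_rot = np.roll(phi, 1, axis=1)  # phi_ij of x rotated by 2*pi/O

r_i = rng.normal(size=(F, O))      # readout of neuron n_i
r_j = np.roll(r_i, 1, axis=1)      # readout of a rotation-equivalent n_j

# n_j responds to the rotated stimulus exactly as n_i responds to x:
assert np.isclose(np.sum(r_i * phi), np.sum(r_j * phi_rot))
```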
+ +![](images/d75d6d53007469a22c8ce2124d1b0042b3583f009491efdb1d7da7200d38cb71.jpg) +A +B + +![](images/f970f998eab370245944461d0aeae9d26cbb2ff5d7716b0bf2df75d1021e27dd.jpg) +Figure 2: Toy example illustrating the computations in a rotation-equivariant CNN with two features (red and blue; cartoon feature representations are shown on top of corresponding values of $\Phi(x)$ ) in three orientations $(0, \pi/3, 2\pi/3)$ . A: Output of the rotation-equivariant CNN for an input image rotated by $\pi/3$ (base orientation) can be computed by a cyclic shift. B: Example of two distinct types of neurons (columns) in three orientations (rows). Computations for both types consist of linear combinations of the two features computed by the CNN with the same weights, but in different relative orientations. Readouts of neurons of the same type in different orientations are cyclic shifts of each other, since they produce the same outputs on correspondingly rotated inputs. + +![](images/9fda584959e62422fa94cb11ca3f5215a3ec22a6bb087c01cd22fd3250651c066.jpg) + +[Fig. 2B, numeric panel: for each of the two toy neuron types, the responses $a_n^{0}$ , $a_n^{\pi /3}$ and $a_n^{2\pi /3}$ to the three rotated stimuli are evaluated as the sum of the elementwise product between the (reshaped) feature vectors and the correspondingly cyclically shifted readout; the worked numbers are shown in the figure above.] + +Readouts of rotation-equivalent neurons Directly clustering the readout matrix $\pmb{R}$ does not respect the rotation equivalence, because readout vectors of neurons implementing rotation-equivalent computations are not identical (Fig. 2). To address that, we first modify $\pmb{R}$ to obtain a matrix $\tilde{\pmb{R}}$ with the rows corresponding to rotation-equivalent neurons aligned to a canonical form, and then cluster $\tilde{\pmb{R}}$ to obtain functional cell types. Rotating an input $\pmb{x}$ by a multiple of $2\pi /O$ corresponds to cyclically shifting $\Phi (\pmb {x})$ , so the readout vectors of two rotation-equivalent neurons whose orientation difference $\psi_{ij}$ is a multiple of $2\pi /O$ are also cyclic shifts of each other (Fig. 2). For arbitrary rotations that are not necessarily a multiple of $2\pi /O$ , we assume the readout $\pmb{r}_{n_j}$ of neuron $n_j$ to be a linear interpolation of cyclic shifts of $\pmb{r}_{n_i}$ corresponding to the two nearest rotations which are multiples of $2\pi /O$ . 
Formally, we define a cyclic rotation matrix by an angle $\alpha \in [0,2\pi)$ as follows (matrix has shape $O\times O$ ; column indices are shown above the matrix): + +$$ +S _ {\alpha} = \left( \begin{array}{c c c c c c c c} 1 & & i & i + 1 & i + 2 & i + 3 & & O \\ 0 & \dots & 0 & 1 - \gamma & \gamma & 0 & \dots & 0 \\ 0 & \dots & 0 & 0 & 1 - \gamma & \gamma & \dots & 0 \\ & \vdots & & & & & \vdots \\ 0 & \dots & 1 - \gamma & \gamma & 0 & 0 & \dots & 0 \end{array} \right), \quad \begin{array}{l} i = \left\lfloor \frac {\alpha O}{2 \pi} \right\rfloor \bmod O, \\ \gamma = \frac {\alpha O}{2 \pi} - i. \end{array} \tag {2} +$$ + +Given a readout vector $\mathbf{r}_n \in \mathbb{R}^K$ , we can think of it as a matrix $\mathbf{r}_n \in \mathbb{R}^{O \times F}$ with columns corresponding to readout coefficients for different orientations of a single feature. The cyclic rotation of a readout $\mathbf{r}_n$ by an angle $\alpha \in [0, 2\pi)$ can be expressed as a matrix multiplication $\mathbf{r}_n(\alpha) = S_\alpha \mathbf{r}_n$ . Note that by writing $\mathbf{r}_n(\alpha)$ as a function of $\alpha$ we refer to cyclically rotated (transformed) readouts, while $\mathbf{r}_n$ are the fixed ones coming from a pre-trained CNN and $\mathbf{r}_n(0) = \mathbf{r}_n$ . + +For two rotation-equivalent neurons $n_i$ and $n_j$ , the readout vector $r_{n_j}$ can be computed as a cyclic rotation of $r_{n_i}$ by $\alpha$ , where $\alpha$ is the rotation angle of $\psi_{ij}$ . This is exact if $\alpha$ is a multiple of $2\pi / O$ ; otherwise it is only an approximation which becomes increasingly accurate as $O$ increases. + +Aligning the readouts Assuming V1 neurons form discrete functional cell types, all neurons in the dataset (and hence the readouts characterising them) can be partitioned into non-overlapping classes w.r.t. the rotation equivalence relation we introduced above. 
Choosing one representative of each class and replacing the rows of $\pmb{R}$ with their class representatives, we can obtain $\tilde{\pmb{R}}$ from $\pmb{R}$ . Next, we discuss an algorithm for finding such representatives of each class. + +We claim that by minimising the sum of pairwise distances between the cyclically rotated readouts + +$$ +\left\{\alpha_ {i} ^ {*} \right\} = \underset {\left\{\alpha_ {i} \right\}} {\arg \min } \sum_ {i = 1} ^ {N} \sum_ {j = i + 1} ^ {N} \left| \left| \boldsymbol {r} _ {i} \left(\alpha_ {i}\right) - \boldsymbol {r} _ {j} \left(\alpha_ {j}\right) \right| \right|, \tag {3} +$$ + +we can transform each readout into a representative of a corresponding equivalence class (same for all readouts of a class), i.e. $\boldsymbol{r}_i(\alpha_i^*) = \boldsymbol{r}_j(\alpha_j^*)$ if neurons $i$ and $j$ are rotation-equivalent. This is indeed the case because the readouts of the same equivalence class lie on the orbit obtained by cyclically rotating any representative of that class. Angles $\{\alpha_i^*\}$ for which neurons of the same class end up on the same point on the orbit (i.e. aligned to the same class representative) clearly minimise Eq. (3), and since different orbits do not intersect (they are different equivalence classes), readouts of different equivalence classes cannot end up at the same point. Note that the resulting class representatives are not arbitrary, but those with the smallest sum of distances between each other in the Euclidean space. This mechanism is illustrated in Fig. 1-②. + +Clustering aligned readouts Having obtained $\tilde{\pmb{R}}$ with rows $\pmb{r}_i(\alpha_i^*)$ , we can cluster the rows of this matrix using any standard clustering method (e.g. K-Means, GMM, etc.) to obtain groups of neurons (cell types) performing similar computations independent of their preferred orientations. 
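Restricted to rotations that are exact multiples of $2\pi/O$, the alignment idea behind Eq. (3) can be sketched by brute force. This is a deliberately simplified illustration (aligning every readout to the first one instead of jointly minimising all pairwise distances, and searching only the $O$ discrete shifts); all names and the synthetic data are assumptions:

```python
import numpy as np

def align_discrete(R, O, F):
    """Align readouts (rows of R, each reshaped to O x F) by brute-force
    search over the O cyclic shifts, using the first readout as the
    reference orientation."""
    R = R.reshape(len(R), O, F)
    ref = R[0]
    aligned = []
    for r in R:
        candidates = [np.roll(r, k, axis=0) for k in range(O)]
        best = min(candidates, key=lambda c: np.linalg.norm(c - ref))
        aligned.append(best.reshape(-1))
    return np.stack(aligned)

# Two underlying "types" observed in random orientations collapse onto
# two distinct rows after alignment, ready for k-means or a GMM.
O, F = 8, 2
rng = np.random.default_rng(2)
base = rng.normal(size=(2, O, F))
R = np.stack([np.roll(base[i % 2], rng.integers(O), axis=0).reshape(-1)
              for i in range(16)])
R_aligned = align_discrete(R, O, F)
```

After alignment, rows of the same type coincide exactly, so any standard clustering method applied to `R_aligned` recovers the two types regardless of the random orientations.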
+ +Continuous relaxation of cyclic rotations The rotation-invariant clustering described above is based on solving the optimisation problem in Eq. (3). To do so, we would typically use a gradient-based optimisation, which is prone to local minima because of the way we define cyclic rotations in Eq. (2). According to that definition, a rotated readout is a linear combination of two nearest base rotations, or rather a linear combination of all such rotations with only two coefficients being non-zero. That means that gradients of all but two coefficients w.r.t. the angle $\alpha$ are zero, and the optimisation would converge to a local minimum corresponding to the best linear combination of the two base rotations initialised with non-zero coefficients (Fig. 3). + +To address this issue, we propose to approximate Eq. (2), such that $\boldsymbol{r}_{n_i}(\alpha)$ is a linear combination of all base rotations with nonzero coefficients, with the coefficients for the two nearest base rotations being the largest. Specifically we compute the coefficients by sampling the von Mises density at fixed orientations to ensure cyclic boundary conditions and define $\tilde{\boldsymbol{r}}_{n_i}(\alpha)$ , a continuous relaxation of $\boldsymbol{r}_{n_i}(\alpha)$ , as $\tilde{\boldsymbol{r}}_{n_i}(\alpha) = \tilde{S}_\alpha \boldsymbol{r}_{n_i}$ where + +![](images/0ea98469d87a2253458ed1a3c6286bbda43ce9d05e747a6326893bd6da6b77dd.jpg) +Figure 3: Distance between two vectors (top left corner) with first one fixed and second cyclically shifted by an angle on the x-axis. Continuous relaxation (shades of blue) of linearly interpolated (black) cyclic shifts smooths gradients and helps overcome local minima. 
+ +$$ +\tilde {S} _ {\alpha} = \left( \begin{array}{c c c c} \gamma_ {1} & \gamma_ {2} & \dots & \gamma_ {O} \\ \gamma_ {O} & \gamma_ {1} & \dots & \gamma_ {O - 1} \\ \vdots & & & \vdots \\ \gamma_ {2} & \gamma_ {3} & \dots & \gamma_ {1} \end{array} \right) \quad \text{with} \quad \gamma_ {i} = \frac {\exp (T \cos (\alpha - (i - 1) \cdot 2 \pi / O))}{\sum_ {j = 1} ^ {O} \exp (T \cos (\alpha - (j - 1) \cdot 2 \pi / O))}. \tag {4} +$$ + +The parameter $T \geq 0$ controls the sparseness of coefficients $\{\gamma_i\}$ . For small $T$ , many of the coefficients are significantly greater than zero, allowing the optimiser to propagate the gradients and reduce the effect of initialisation. As $T$ increases, $\tilde{r}_{n_i}(\alpha)$ becomes more similar to $r_{n_i}(\alpha)$ , and the rotations by multiples of $2\pi / O$ are recovered. In the limit, $\tilde{r}_{n_i}(2\pi k / O) \to r_{n_i}(2\pi k / O)$ as $T \to \infty$ (Fig. 3). Instead of fixing $T$ , we learn it by optimising the regularised alignment objective with an additional reconstruction loss preventing trivial solutions (e.g. all coordinates of $\tilde{r}_i(\alpha_i)$ being the same for $T = 0$ ): + +$$ +\left\{\alpha_ {i} ^ {*} \right\} = \underset {\left\{\alpha_ {i} \right\}, T} {\arg \min } \sum_ {i = 1} ^ {N} \sum_ {j = i + 1} ^ {N} \left| \left| \tilde {\boldsymbol {r}} _ {i} \left(\alpha_ {i}\right) - \tilde {\boldsymbol {r}} _ {j} \left(\alpha_ {j}\right) \right| \right| + \beta \sum_ {i = 1} ^ {N} \left| \left| \tilde {\boldsymbol {r}} _ {i} (0) - \boldsymbol {r} _ {i} \right| \right|. \tag {5} +$$ + +![](images/eabc3a422614bae9f8b944afb6c9004edf65d59cb4455b0689291e0f5af8f67c.jpg) +Figure 4: Synthetic data set: generation, alignment and dependence on noise. A: Panel $\pmb{R}$ shows the unaligned synthetic data set as well as the corresponding shifted GP samples for each of the two groups of neurons (see details in the main text). 
Colored boxes in $\pmb{R}$ correspond to the colors of corresponding GP samples. Panels $\tilde{R} (\alpha^{*})$ and $\tilde{R} (0)$ show aligned readouts and readouts rotated by 0 using Eq. (4) respectively. $\tilde{R} (0)$ should be similar to $\pmb{R}$ for an adequate choice of $\beta$ (and consequently optimised value of $T$ ). B: Effect of observation noise. Means of pairwise distances for each of the two groups shown in matrix $\pmb{R}$ for two levels of Gaussian noise added to the dataset. Black dashed line: expected pairwise distance due to noise only (i.e. for perfectly aligned data with added noise). Raw and aligned matrices for each of the two groups are shown below the bar plots. + +# 4 EXPERIMENTS + +Synthetic dataset We generate a small toy dataset consisting of 16 hypothetical neurons (readouts) of two cell types to illustrate the proposed alignment method. Each readout consists of just one feature in eight base orientations (linearly spaced between 0 and $7\pi /4$ ) and is generated by one of the two underlying types of readouts cyclically shifted by a random angle $\phi \in [0,2\pi)$ . To generate such a dataset, we draw two independent noiseless functions from a Gaussian process (GP) with a periodic kernel (with period $2\pi$ ), then for each readout we randomly choose one of the two GP samples, shift it by an angle $\phi$ and evaluate the shifted function at the base orientations to obtain an 8-dimensional vector modelling the observed readout values. This process is illustrated in Fig. 4A. + +Neural data We use the same dataset as in Ecker et al. (2019), consisting of simultaneous recordings of responses of 6005 excitatory neurons in mouse primary visual cortex (layers 2/3 and 4). + +Model details We analyse a rotation-equivariant CNN consisting of a three-layer core with 16 features in 8 orientations in each layer (kernel sizes 13, 5, 5) and 128-dimensional readouts ( $F = 16$ , $O = 8$ ). We use the pre-trained model provided by Ecker et al. 
(2019). We align the readout matrix $\mathbf{R}$ by minimising Eq. (5) w.r.t. the rotation angles $\alpha_{i}$ and temperature $T$ . We fit models for 20 log-spaced values of $\beta$ in [0.001, 10], and choose for analysis the one with the smallest alignment loss (Eq. (3)) among the models with optimised temperature $T > 5$ . We use Adam (Kingma & Ba, 2015) with early stopping and initial learning rate of 0.01 decreased three times. + +Clustering aligned readouts We use the Gaussian mixture model implemented in scikit-learn (Pedregosa et al., 2011) for clustering the aligned readouts $\tilde{R}$ . We use spherical covariances to reduce the number of optimised parameters. To obtain a quantitative estimate of the number of clusters in $\tilde{R}$ , we randomly split the dataset of 6005 neurons into training (4000 neurons) and test (2005 neurons) sets, fit GMMs with different numbers of clusters on the training set, and then evaluate the likelihood of the fitted model on the test set. + +# 5 RESULTS + +# 5.1 SYNTHETIC DATA SET ALIGNMENT + +We start by demonstrating on a synthetic dataset (Sec. 4) that optimising Eq. (5) can successfully align the readouts (Fig. 4A), assuming $\beta$ has been chosen appropriately. Note that readouts have been shifted by arbitrary angles (not multiples of $\pi /4$ as demonstrated for readouts in colored boxes in Fig. 4A), and they are aligned precisely via interpolation Eq. (4). We can also see the effect of the parameter $\beta$ , controlling the relative weight of the reconstruction term (i.e. similarity of readouts rotated by 0 degrees to the observations). Small values of $\beta$ incur a small cost for poor reconstructions resulting in small optimised values of $T$ and over-smoothed aligned readouts. + +We next ask whether the alignment procedure still works in the presence of observational noise (Fig. 4B). For small to moderate noise levels (Fig. 
4B, left), alignment reduces the pairwise distances to the level expected from the observation noise (shown at the top), confirming the visual impression (shown at the bottom) that alignment works well. For high noise levels (Fig. 4B, right), alignment breaks down as expected, and we observe overfitting to the noise patterns (shown by the pairwise distances after alignment dropping below the level expected from observation noise alone). + +# 5.2 MOUSE V1 DATASET + +Clustering We evaluate the GMM used to cluster $\tilde{\pmb{R}}$ for different numbers of clusters. The test likelihood starts to plateau at around 100 clusters (Fig. 5), so we use 100 clusters in the following. + +Visualization of clusters To visualize the clustering result, we compute a two-dimensional t-SNE embedding (van der Maaten & Hinton, 2008) of the matrix of aligned readouts $\tilde{\pmb{R}}$ , which is coloured according to the GMM clustering of $\tilde{\pmb{R}}$ with 100 clusters (Fig. 6). Note that we use the embedding only for visualization, but cluster 128D aligned readouts in $\tilde{\pmb{R}}$ directly. In addition to the embeddings, we also visualize the computations performed by some of the clusters by showing maximally exciting inputs (MEIs). We compute MEIs via activity maximisation (Erhan et al., 2009; Walker et al., 2018) and show the stimuli that maximally drive the 16 best-predicted neurons of each cluster. We ob- + +serve that MEIs corresponding to neurons of the same cluster are generally consistent up to rotation and receptive field location, suggesting that the proposed clustering method captures the similarities in the neural computations while ignoring the nuisances as desired. + +![](images/0300bc186cdf10fe7b9d9a80387ff7f5b56f0704e828215ae8025acb04d488a6.jpg) +Figure 5: Test set likelihood of GMMs applied to $\tilde{R}$ as a function of the number of clusters. + +Network learned redundant features We noticed a number of clusters with similar MEIs (e.g. 
Block 9 and Block 13 in Fig. 6). There could be two reasons for this observation: (a) the neural computations corresponding to these clusters could be different in some other aspect, which we cannot tell by inspecting MEIs as they represent only the maximum of a complex function, or (b) the features learned by the CNN could be redundant, i.e. the hidden layers could learn to approximate the same function in multiple different ways. To answer this question, we compute a cluster confusion matrix (Fig. 7, left), which quantifies how similar the response predictions of different clusters are across images. The element $(p,q)$ corresponds to the correlation coefficient between the predicted responses on the entire training set of hypothetical neurons with cluster means for clusters $p$ and $q$ used as readouts, accounting for potential differences in canonical orientation across clusters. By greedily re-arranging clusters in the matrix into blocks based on their correlations, we show that the 100 clusters in the model can be grouped into a much smaller number of functionally distinct clusters. Using a correlation threshold of 0.5 in this re-arrangement procedure, we obtain an example arrangement into 17 blocks (Fig. 7). Thus, the network has learned an internal representation that allows constructing very similar functions in multiple ways, suggesting that further pruning the learned network before clustering could lead to a more compact feature space for V1. + +Finally, to quantify how consistent the resulting 17 groups of clusters are, we compute an MEI confusion matrix (Fig. 7, right panel). Its $(i,j)$ element is the predicted activity of neuron $j$ for the MEI of neuron $i$ , after accounting for orientation and receptive field location (i.e. 
$a_{j}(\pmb{y}_{i})$ , where

![](images/621cd741f5fb51138d795c38b6c0bfd49174be57e5ad92fc715a93ac8403b0e2.jpg)
Figure 6: 2D t-SNE embedding of the aligned readouts $\tilde{R}$ , colored according to the GMM clustering with 100 components. Black stars show the locations of cluster centers. For some of the clusters, the MEIs of 16 best predicted neurons of that cluster are shown. The titles in the MEI subfigures show which matrix block in Fig. 7 (left) that cluster belongs to.

$y_{i}$ is the MEI of neuron $i$ moved and rotated such that it optimally drives neuron $j$ ). We show this matrix using the same grouping as for the cluster confusion matrix above and restrict it to the 542 (out of 6005) best predicted neurons (with test set correlation $\geq 0.7$ ). Note that some of the blocks from the cluster confusion matrix do not appear here, indicating that those clusters include poorly predicted neurons (e.g. block 17). The MEI confusion matrix exhibits a block-diagonal structure, with most MEIs driving neurons within the same blocks most strongly, albeit with different degrees of within-block similarity.

![](images/405111a6cae85cb92e91fca8153cc264031e655b7c9605d9d9505424ca934f16.jpg)
Figure 7: Left: Cluster confusion matrix $(100 \times 100)$ for 100 clusters shown in Fig. 6. Rows and columns have been arranged into 17 groups (blocks). Right: MEI confusion matrix for well-predicted neurons (test correlation $\geq 0.7$ ) arranged into the same 17 blocks as on the left.

![](images/48e24c5ac20799d33271500f89b7143f1b1a710e7e20b845b9bc6fb05c432458.jpg)

# 6 CONCLUSIONS AND FUTURE WORK

We have presented an approach to clustering neurons into putative functional cell types invariant to location and orientation of their receptive field. We find around 10-20 functional clusters, the boundaries of some of which are not very clear-cut.
To arrive at a systematic classification of V1 functional cell types, these proposed clusters will need to be examined further against a variety of biological criteria reflecting the different properties of the neurons and prior knowledge about the experiment.

# REFERENCES

Ján Antolík, Sonja B. Hofer, James A. Bednar, and Thomas D. Mrsic-Flogel. Model Constrained by Visual Hierarchy Improves Prediction of Neural Responses to Natural Scenes. PLoS Comput Biol, 2016.
Tom Baden, Philipp Berens, Katrin Franke, Miroslav Roman Rosón, Matthias Bethge, and Thomas Euler. The functional diversity of retinal ganglion cells in the mouse. Nature, 529, 2016.
Spencer Ch. Brubaker and Santosh Vempala. Isotropic PCA and affine-invariant clustering. In Proceedings of the 2008 49th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2008, 2008.
Santiago A. Cadena, George H. Denfield, Edgar Y. Walker, Leon A. Gatys, Andreas S. Tolias, Matthias Bethge, and Alexander S. Ecker. Deep convolutional models improve predictions of macaque V1 responses to natural images. PLoS computational biology, 15(4), 2019.
Zhen Cui, Hong Chang, Shiguang Shan, and Xilin Chen. Generalized unsupervised manifold alignment. In Advances in Neural Information Processing Systems (NIPS). 2014.
Ian L. Dryden and Kanti V. Mardia. Statistical Shape Analysis. Wiley, Chichester, 1998.
Lea Duncker and Maneesh Sahani. Temporal alignment and latent Gaussian process factor inference in population spike trains. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems (NIPS). 2018.
Alexander S. Ecker, Fabian H. Sinz, Emmanouil Froudarakis, Paul G. Fahey, Santiago A. Cadena, Edgar Y. Walker, Erick Cobos, Jacob Reimer, Andreas S. Tolias, and Matthias Bethge. A rotation-equivariant convolutional neural network model of primary visual cortex. In International Conference on Learning Representations, 2019.
+
Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing higher-layer features of a deep network. Technical report, University of Montreal, 2009. Also presented at the ICML 2009 Workshop on Learning Feature Hierarchies, Montréal, Canada.
Brendan J. Frey and Nebojsa Jojic. Fast, large-scale transformation-invariant clustering. In Advances in Neural Information Processing Systems (NIPS). MIT Press, 2002.
James C. Gower. Generalized Procrustes analysis. Psychometrika, 40(1), 1975.
Xu Ji, João F. Henriques, and Andrea Vedaldi. Invariant information clustering for unsupervised image classification and segmentation. In The IEEE International Conference on Computer Vision (ICCV), 2019.
Ieva Kazlauskaite, Carl Henrik Ek, and Neill Campbell. Gaussian process latent variable alignment learning. In AISTATS, volume 89. PMLR, 2019.
William F. Kindel, Elijah D. Christensen, and Joel Zylberberg. Using deep learning to probe the neural code for images in primary visual cortex. Journal of Vision, 19(4), 2019.
Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In 3rd International Conference on Learning Representations (ICLR), 2015.
David Klindt, Alexander S. Ecker, Thomas Euler, and Matthias Bethge. Neural system identification for large populations separating "what" and "where". In Advances in Neural Information Processing Systems (NIPS), 2017.
Sebastian A. Kurtek, Anuj Srivastava, and Wei Wu. Signal estimation under random time-warpings and nonlinear signal alignment. In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems (NIPS). 2011.
Lane McIntosh, Niru Maheswaranathan, Aran Nayebi, Surya Ganguli, and Stephen Baccus. Deep learning models of the retinal response to natural scenes. In Advances in Neural Information Processing Systems (NIPS). 2016.
+ +Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. Scikit-learn: Machine learning in python. Journal of Machine Learning Research, 2011. +Joshua R. Sanes and Richard H. Masland. The types of retinal ganglion cells: current status and implications for neuronal classification. Annual review of neuroscience, 38, 2015. +Thaddeus Tarpey. Linear Transformations and the k-Means Clustering Algorithm: Applications to Clustering Curves. The American Statistician, 61, 2007. +Laurens van der Maaten and Geoffrey E. Hinton. Visualizing high-dimensional data using t-sne. Journal of Machine Learning Research, 9, 2008. +Edgar Y. Walker, Fabian H. Sinz, Emmanouil Froudarakis, Paul G. Fahey, Taliah Muhammad, Alexander S. Ecker, Erick Cobos, Jacob Reimer, Xaq Pitkow, and Andreas S. Tolias. Inception in visual cortex: in vivo-silico loops reveal most exciting images. bioRxiv, 2018. +Chang Wang and Sridhar Mahadevan. Manifold alignment using procrustes analysis. In Proceedings of the 25th International Conference on Machine Learning (ICML), 2008. +Feng Zhou and Fernando De la Torre. Generalized time warping for multi-modal alignment of human motion. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, 2012. + +# A RANDOM PERMUTATIONS OF FEATURES + +![](images/0b6291382bc6d658e7450122e7680abbf4b3670671d1ed8c44f1527d8a07819b.jpg) +Figure A1: 2D t-SNE embedding of the aligned readouts $\tilde{R}$ with feature weights randomly permuted for each of the neurons. The colors correspond to the GMM clustering with 100 components. Black stars show the locations of cluster centers. For some of the clusters, the MEIs of 16 best predicted neurons of that cluster are shown. 
+

![](images/dd954c5239108e78dad26030b0d2d304ed10e886fc6fdbb84e0b76834f0d2129.jpg)
Figure A2: 2D t-SNE embedding of the aligned readouts $\tilde{\pmb{R}}$ with feature weights randomly permuted across the neurons. The colors correspond to the GMM clustering with 100 components. Black stars show the locations of cluster centers. For some of the clusters, the MEIs of 16 best predicted neurons of that cluster are shown.

![](images/60b30516d608c236b374aaf92a0656bccb29c570fa0bf245698e9fc3ad0d5468.jpg)
Figure A3: t-SNE embeddings for the aligned readouts (Fig. 6), and the controls with randomly permuted features for each neuron (Fig. A1) and across the neurons (Fig. A2).

# B SYNTHETIC DATASET: DEPENDENCE ON NOISE

![](images/7f47dbc63357a25837c86d13c63b7d979caa20795fa439107ca6daf6404e8f1c.jpg)
Figure B1: Alignment of a synthetic dataset of 100 observations generated using the procedure described in Sec. 4 for different amounts of i.i.d. Gaussian noise added to the observations. The panels for each noise level show 16 (out of 100) examples of the raw and aligned data as well as the t-SNE embeddings of the raw and aligned data coloured according to the GMM clustering with two components.

# C MERGES AND SPLITS OF CLUSTER CONFUSION MATRIX BLOCKS

![](images/71a57e6a10d40fa794fcb7221944d2c3dcf8184cacab9ab642810df9153d3402.jpg)
Figure C1: Sequential merges of the three pairs of blocks with the highest correlations in the cluster confusion matrix (Fig. 7, left). The merged blocks and the correlation values are shown in the titles of panels.

![](images/bae772ff98d8dd4326ef7c75bef26c55b0056ddee1d5ca23b9c02b19cf0c4914.jpg)
Figure C2: Sequential splits of three pairs of blocks in the cluster confusion matrix (Fig. 7, left). The split blocks, the correlation values, and examples of MEIs of one of the GMM clusters in each of the split blocks are shown for each splitting step.
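To make the interpolation in Eq. (4) concrete, here is a small numpy sketch (our own illustration, not the authors' released code; `gamma_coeffs` and `interp_matrix` are names we introduce). It builds the softmax coefficients and the circulant matrix, and checks that a large temperature $T$ recovers a hard cyclic shift while $T = 0$ gives uniform smoothing:

```python
import numpy as np

def gamma_coeffs(alpha, T, O=8):
    """Coefficients of Eq. (4): a softmax over base orientations, peaked near alpha."""
    i = np.arange(O)
    logits = T * np.cos(alpha - i * 2 * np.pi / O)
    e = np.exp(logits - logits.max())   # subtract the max for numerical stability
    return e / e.sum()

def interp_matrix(alpha, T, O=8):
    """Circulant interpolation matrix: row m is the coefficient vector cyclically shifted by m."""
    g = gamma_coeffs(alpha, T, O)
    return np.stack([np.roll(g, m) for m in range(O)])

O = 8
r = np.random.default_rng(0).normal(size=O)   # one readout over the 8 base orientations
k = 3
# For alpha = 2*pi*k/O and large T, the soft rotation approaches a hard cyclic shift by k bins.
r_soft = interp_matrix(2 * np.pi * k / O, T=100.0, O=O) @ r
```

With $T = 100$ the non-maximal coefficients are suppressed by a factor of roughly $e^{-29}$, so `r_soft` agrees with an exact cyclic shift of `r` to well below single-float precision.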
\ No newline at end of file diff --git a/rotationinvariantclusteringofneuronalresponsesinprimaryvisualcortex/images.zip b/rotationinvariantclusteringofneuronalresponsesinprimaryvisualcortex/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..562ab0d37241cdb2f2f6dcef85df5bf7b1b31279 --- /dev/null +++ b/rotationinvariantclusteringofneuronalresponsesinprimaryvisualcortex/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9507fbf6f5c013c1f16c597b49b19d67ed18580ed1e4be6a3113c1e88caf0880 +size 1245824 diff --git a/rotationinvariantclusteringofneuronalresponsesinprimaryvisualcortex/layout.json b/rotationinvariantclusteringofneuronalresponsesinprimaryvisualcortex/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..2d2d8c596a8395fcbe34953020eb458d84d69b37 --- /dev/null +++ b/rotationinvariantclusteringofneuronalresponsesinprimaryvisualcortex/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ad2eccc1f6af41b484cc7ca25f24b058e275466885ec5ebfe6a75e7de9d60ccb +size 409705 diff --git a/sadamavariantofadamforstronglyconvexfunctions/6ec32a71-0a73-4d62-bfe2-279e495d9c9f_content_list.json b/sadamavariantofadamforstronglyconvexfunctions/6ec32a71-0a73-4d62-bfe2-279e495d9c9f_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..779df4e183e25604a7970ad8590102fff1219ebe --- /dev/null +++ b/sadamavariantofadamforstronglyconvexfunctions/6ec32a71-0a73-4d62-bfe2-279e495d9c9f_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f51dcd2a226c9b6442492f80785b1d025551be5db37f07afb842c4187a1f1e1 +size 140697 diff --git a/sadamavariantofadamforstronglyconvexfunctions/6ec32a71-0a73-4d62-bfe2-279e495d9c9f_model.json b/sadamavariantofadamforstronglyconvexfunctions/6ec32a71-0a73-4d62-bfe2-279e495d9c9f_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..e40afe0ff9f21dc3a50ea9158bb31c7e28f6db24 --- /dev/null +++ b/sadamavariantofadamforstronglyconvexfunctions/6ec32a71-0a73-4d62-bfe2-279e495d9c9f_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:08a645bb1c0d971acd765b67d183d21bbe8d506567216533fa62a2449f03a0c5 +size 169170 diff --git a/sadamavariantofadamforstronglyconvexfunctions/6ec32a71-0a73-4d62-bfe2-279e495d9c9f_origin.pdf b/sadamavariantofadamforstronglyconvexfunctions/6ec32a71-0a73-4d62-bfe2-279e495d9c9f_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0c9de3cae9ff86fffa6b678a046d2a673da7fbf6 --- /dev/null +++ b/sadamavariantofadamforstronglyconvexfunctions/6ec32a71-0a73-4d62-bfe2-279e495d9c9f_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cd00a793b05a7da7ab7f3cbb5ecce56152aeab8095600ef821f97e8dbd2efe9d +size 562177 diff --git a/sadamavariantofadamforstronglyconvexfunctions/full.md b/sadamavariantofadamforstronglyconvexfunctions/full.md new file mode 100644 index 0000000000000000000000000000000000000000..fdcd8baaef99f13db8c22375ba3eeae7c8bcb612 --- /dev/null +++ b/sadamavariantofadamforstronglyconvexfunctions/full.md @@ -0,0 +1,703 @@ +# SADAM: A VARIANT OF ADAM FOR STRONGLY CONVEX FUNCTIONS + +Guanghui Wang $^{1}$ , Shiyin Lu $^{1}$ , Quan Cheng $^{1}$ , Wei-Wei Tu $^{2}$ and Lijun Zhang $^{1,*}$ + +$^{1}$ National Key Laboratory for Novel Software Technology, Nanjing University, China + +24Paradigm Inc., Beijing, China + +{wanggh, lusy, chengq, zhanglj} $@$ lamda.nju.edu.cn, tuwcn@gmail.com + +# ABSTRACT + +The Adam algorithm has become extremely popular for large-scale machine learning. Under convexity condition, it has been proved to enjoy a data-dependent $O(\sqrt{T})$ regret bound where $T$ is the time horizon. However, whether strong convexity can be utilized to further improve the performance remains an open problem. 
In this paper, we give an affirmative answer by developing a variant of Adam (referred to as SAdam) which achieves a data-dependent $O(\log T)$ regret bound for strongly convex functions. The essential idea is to maintain a faster-decaying yet controlled step size for exploiting strong convexity. In addition, under a special configuration of hyperparameters, our SAdam reduces to SC-RMSprop, a recently proposed variant of RMSprop for strongly convex functions, for which we provide the first data-dependent logarithmic regret bound. Empirical results on optimizing strongly convex functions and training deep networks demonstrate the effectiveness of our method.

# 1 INTRODUCTION

Online Convex Optimization (OCO) is a well-established learning framework with both theoretical and practical appeal (Shalev-Shwartz et al., 2012). It is performed in a sequence of consecutive rounds: in each round $t$ , a learner first chooses a decision $\mathbf{x}_t$ from a convex set $\mathcal{D} \subseteq \mathbb{R}^d$ ; at the same time, an adversary reveals a loss function $f_t(\cdot): \mathcal{D} \mapsto \mathbb{R}$ , and the learner consequently suffers a loss $f_t(\mathbf{x}_t)$ . The goal is to minimize regret, defined as the difference between the cumulative loss of the learner and that of the best decision in hindsight (Hazan et al., 2016):

$$
R (T) := \sum_ {t = 1} ^ {T} f _ {t} (\mathbf {x} _ {t}) - \min _ {\mathbf {x} \in \mathcal {D}} \sum_ {t = 1} ^ {T} f _ {t} (\mathbf {x}).
$$

The most classic algorithm for OCO is Online Gradient Descent (OGD) (Zinkevich, 2003), which attains an $O(\sqrt{T})$ regret. OGD iteratively performs a descent step in the gradient direction with a predetermined step size, which is oblivious to the characteristics of the data being observed. As a result, its regret bound is data-independent and cannot benefit from the structure of the data.
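As a minimal illustration of this protocol (our own toy example; the quadratic losses, the decision set $[-1, 1]$, and all variable names are assumptions for the sketch, not from the paper), projected OGD with step size $\alpha/\sqrt{t}$ keeps the regret sublinear in $T$:

```python
import numpy as np

# Convex OGD on one-dimensional quadratic losses f_t(x) = (x - z_t)^2 over D = [-1, 1].
rng = np.random.default_rng(0)
T, alpha = 2000, 1.0
z = rng.uniform(-1, 1, size=T)       # the adversary's targets, revealed one per round
x, losses = 0.0, []

for t in range(1, T + 1):
    losses.append((x - z[t - 1]) ** 2)       # suffer f_t(x_t)
    grad = 2 * (x - z[t - 1])                # gradient at the decision just played
    x = float(np.clip(x - alpha / np.sqrt(t) * grad, -1.0, 1.0))  # projected OGD step

best_fixed = float(np.clip(z.mean(), -1.0, 1.0))   # best fixed decision in hindsight
regret = sum(losses) - float(((best_fixed - z) ** 2).sum())
```

Here the gradients are bounded by $G = 4$ and the diameter by $D = 2$, so the standard analysis guarantees a regret of at most roughly $18\sqrt{T}$ regardless of the sequence `z`.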
To address this limitation, various adaptive gradient methods, such as Adagrad (Duchi et al., 2011), RMSprop (Tieleman & Hinton, 2012) and Adadelta (Zeiler, 2012), have been proposed to exploit the geometry of historical data. Among them, Adam (Kingma & Ba, 2015), which dynamically adjusts the step size and the update direction by an exponential average of the past gradients, has become extremely popular and has been successfully applied to many applications (Xu et al., 2015; Gregor et al., 2015; Kiros et al., 2015; Denkowski & Neubig, 2017; Bahar et al., 2017). Despite its outstanding performance, Reddi et al. (2018) pointed out that Adam suffers from a non-convergence issue, and developed two modified versions, namely AMSgrad and AdamNC. These variants are equipped with data-dependent regret bounds, which are $O(\sqrt{T})$ in the worst case and become tighter when gradients are sparse$^1$.

While the theoretical behavior of Adam in convex cases is now clear, it remains an open problem whether strong convexity can be exploited to achieve better performance. Such a property arises, for instance, in support vector machines as well as other regularized learning problems, and it is well known that vanilla OGD with an appropriately chosen step size enjoys a much better $O(\log T)$ regret bound for strongly convex functions (Hazan et al., 2007). In this paper, we propose a variant of Adam adapted to strongly convex functions, referred to as SAdam. Our algorithm follows the general framework of Adam, yet keeps a faster-decaying but controlled step size governed by time-variant hyperparameters to exploit strong convexity. Theoretical analysis demonstrates that SAdam achieves a data-dependent $O(\log T)$ regret bound for strongly convex functions, which means that it converges faster than AMSgrad and AdamNC in such cases, and also enjoys a huge gain in the face of sparse gradients.
+

Furthermore, under a special configuration of hyperparameters, the proposed algorithm reduces to SC-RMSprop (Mukkamala & Hein, 2017), a variant of the RMSprop algorithm for strongly convex functions. We provide an alternative proof for SC-RMSprop, and establish the first data-dependent logarithmic regret bound. Finally, we evaluate the proposed algorithm on strongly convex problems as well as deep networks, and the empirical results demonstrate the effectiveness of our method.

Notation. Throughout the paper, we use lower-case bold-face letters to denote vectors, lower-case letters to denote scalars, and upper-case letters to denote matrices. We use $\| \cdot \|$ to denote the $\ell_2$ -norm and $\| \cdot \|_{\infty}$ the $\ell_\infty$ -norm. For a positive definite matrix $H \in \mathbb{R}^{d \times d}$ , the weighted $\ell_2$ -norm is defined by $\| \mathbf{x} \|_H^2 = \mathbf{x}^\top H \mathbf{x}$ . The $H$ -weighted projection $\Pi_{\mathcal{D}}^H(\mathbf{x})$ of $\mathbf{x}$ onto $\mathcal{D}$ is defined by $\Pi_{\mathcal{D}}^H(\mathbf{x}) = \arg \min_{\mathbf{y} \in \mathcal{D}} \| \mathbf{y} - \mathbf{x} \|_H^2$ . We use $\mathbf{g}_t$ to denote the gradient of $f_t(\cdot)$ at $\mathbf{x}_t$ . For a vector sequence $\{\mathbf{v}_t\}_{t=1}^T$ , we denote the $i$ -th element of $\mathbf{v}_t$ by $v_{t,i}$ . For a diagonal matrix sequence $\{M_t\}_{t=1}^T$ , we use $m_{t,i}$ to denote the $i$ -th diagonal element of $M_t$ . We use $\mathbf{g}_{1:t,i} = [g_{1,i}, \dots, g_{t,i}]$ to denote the vector obtained by concatenating the $i$ -th elements of the gradient sequence $\{\mathbf{g}_t\}_{t=1}^T$ .

# 2 RELATED WORK

In this section, we briefly review related work in online convex and strongly convex optimization.

In the literature, most studies are devoted to the minimization of regret for convex functions.
Under the assumptions that the $\ell_\infty$ -norm of the gradients and the diameter of the decision set are bounded, OGD with step size on the order of $O(1 / \sqrt{t})$ (referred to as convex OGD) achieves a data-independent $O(d\sqrt{T})$ regret (Zinkevich, 2003), where $d$ is the dimension. To conduct more informative updates, Duchi et al. (2011) introduce the Adagrad algorithm, which adjusts the step size of OGD on a per-dimension basis according to the geometry of the past gradients. In particular, the diagonal version of the algorithm updates decisions as

$$
\mathbf {x} _ {t + 1} = \mathbf {x} _ {t} - \frac {\alpha}{\sqrt {t}} V _ {t} ^ {- 1 / 2} \mathbf {g} _ {t} \tag {1}
$$

where $\alpha > 0$ is a constant factor, $V_{t}$ is a $d \times d$ diagonal matrix, and $\forall i \in [d], v_{t,i} = (\sum_{j=1}^{t} g_{j,i}^{2}) / t$ is the arithmetic average of the squares of the $i$ -th elements of the past gradients. Intuitively, while the step size of Adagrad, i.e., $(\alpha / \sqrt{t}) V_{t}^{-1/2}$ , decreases generally on the order of $O(1 / \sqrt{t})$ as in convex OGD, the additional matrix $V_{t}^{-1/2}$ automatically increases the step sizes for sparse dimensions in order to seize the infrequent yet valuable information therein. For convex functions, Adagrad enjoys an $O(\sum_{i=1}^{d} \| \mathbf{g}_{1:T,i} \|)$ regret, which is $O(d \sqrt{T})$ in the worst case and becomes tighter when gradients are sparse.

Although Adagrad works well in sparse cases, its performance has been found to deteriorate when gradients are dense, due to the rapid decay of the step size caused by using all the past gradients in the update (Zeiler, 2012).
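For concreteness, a single step of the diagonal update (1) can be sketched as follows (our own illustration; the function name and the small `eps` guard against division by zero are our additions). Note that the combination $(\alpha/\sqrt{t})\,v_{t,i}^{-1/2}$ simplifies to $\alpha / \|\mathbf{g}_{1:t,i}\|$, the familiar per-coordinate Adagrad step size:

```python
import numpy as np

def adagrad_diag_step(x, g, sum_sq, t, alpha=0.1, eps=1e-12):
    """One diagonal-Adagrad update of Eq. (1):
    v_{t,i} = (sum_{j<=t} g_{j,i}^2)/t, then x <- x - (alpha/sqrt(t)) V_t^{-1/2} g."""
    sum_sq = sum_sq + g ** 2          # accumulate squared gradients per coordinate
    v = sum_sq / t                    # arithmetic average, as in Eq. (1)
    x = x - (alpha / np.sqrt(t)) * g / (np.sqrt(v) + eps)
    return x, sum_sq

# Two illustrative steps: a frequently updated coordinate gets a shrinking step,
# while a coordinate whose gradient is always zero is left untouched.
x, s = np.zeros(3), np.zeros(3)
grads = [np.array([1.0, 0.1, 0.0]), np.array([1.0, 0.0, 0.0])]
for t, g in enumerate(grads, start=1):
    x, s = adagrad_diag_step(x, g, s, t=t, alpha=0.1)
```

After the two steps above, the first coordinate has moved by $-0.1 - 0.1/\sqrt{2}$ while the third remains at zero, illustrating the per-dimension adaptivity described in the text.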
To tackle this issue, Tieleman & Hinton (2012) propose RMSprop, which replaces the arithmetic average with an Exponential Moving Average (EMA), i.e.,

$$
V _ {t} = \beta V _ {t - 1} + (1 - \beta) \operatorname {diag} \left(\mathbf {g} _ {t} \mathbf {g} _ {t} ^ {\top}\right)
$$

where $\beta \in [0,1]$ is a hyperparameter, and $\mathrm{diag}(\cdot)$ denotes extracting the diagonal matrix. In this way, the weights assigned to past gradients decay exponentially, so that the update relies essentially on only the few most recent gradients. Since the invention of RMSprop, many EMA variants of Adagrad have been developed (Zeiler, 2012; Kingma & Ba, 2015; Dozat, 2016). One of the most popular algorithms is Adam (Kingma & Ba, 2015), where the first-order momentum acceleration, shown in (2), is incorporated into RMSprop to boost the performance:

$$
\hat {\mathbf {g}} _ {t} = \beta_ {1} \hat {\mathbf {g}} _ {t - 1} + (1 - \beta_ {1}) \mathbf {g} _ {t} \tag {2}
$$

$$
V _ {t} = \beta_ {2} V _ {t - 1} + (1 - \beta_ {2}) \operatorname {diag} \left(\mathbf {g} _ {t} \mathbf {g} _ {t} ^ {\top}\right) \tag {3}
$$

$$
\mathbf {x} _ {t + 1} = \mathbf {x} _ {t} - \frac {\alpha}{\sqrt {t}} V _ {t} ^ {- 1 / 2} \hat {\mathbf {g}} _ {t} \tag {4}
$$

While it has been successfully applied to various practical applications, a recent study by Reddi et al. (2018) shows that Adam could fail to converge to the optimal decision even in some simple one-dimensional convex scenarios, due to the potential rapid fluctuation of the step size. To resolve this issue, they design two modified versions of Adam.
The first one is AMSgrad, + +$$ +\hat {V} _ {t} = \max \left\{\hat {V} _ {t - 1}, V _ {t} \right\} +$$ + +$$ +\mathbf {x} _ {t + 1} = \mathbf {x} _ {t} - \frac {\alpha}{\sqrt {t}} \hat {V} _ {t} ^ {- 1 / 2} \hat {\mathbf {g}} _ {t} +$$ + +where an additional element-wise maximization procedure is employed before the update of $\mathbf{x}_t$ to ensure a stable step size. The other is AdamNC, where the framework of Adam remains unchanged, yet a time-variant $\beta_{2}$ (i.e., $\beta_{2t}$ ) is adopted to keep the step size under control. Theoretically, the two algorithms achieve data-dependent $O(\sqrt{T}\sum_{i = 1}^{d}v_{T,i} + \sum_{i = 1}^{d}\| \mathbf{g}_{1:T,i}\| \log T)$ and $O(\sqrt{T}\sum_{i = 1}^{d}v_{T,i} + \sum_{i = 1}^{d}\| \mathbf{g}_{1:T,i}\|)$ regrets respectively. In the worst case, they suffer $O(d\sqrt{T}\log T)$ and $O(d\sqrt{T})$ regrets respectively, and enjoy a huge gain when data is sparse. + +Note that the aforementioned algorithms are mainly analysed in general convex settings and suffer at least $O(d\sqrt{T})$ regret in the worst case. For online strongly convex optimization, the classical OGD with step size proportional to $O(1 / t)$ (referred to as strongly convex OGD) achieves a data-independent $O(d\log T)$ regret (Hazan et al., 2007). Inspired by this, Mukkamala & Hein (2017) modify the update rule of Adagrad in (1) as follows + +$$ +\mathbf {x} _ {t + 1} = \mathbf {x} _ {t} - \frac {\alpha}{t} V _ {t} ^ {- 1} \mathbf {g} _ {t} +$$ + +so that the step size decays approximately on the order of $O(1 / t)$ , which is similar to that in strongly convex OGD. The new algorithm, named SC-Adagrad, is proved to enjoy a data-dependent regret bound of $O(\sum_{i = 1}^{d}\log (\| \mathbf{g}_{1:T,i}\| ^2))$ , which is $O(d\log T)$ in the worst case. They further extend this idea to RMSprop, and propose an algorithm named SC-RMSprop. 
However, as pointed out in Section 3, their regret bound for SC-RMSprop is in fact data-independent, and in this paper we provide the first data-dependent regret bound for this algorithm.

Very recently, several modifications of Adam adapted to non-convex settings have been developed (Chen et al., 2019; Basu et al., 2018; Zhang et al., 2018; Shazeer & Stern, 2018). However, to our knowledge, none of these algorithms is specifically designed for strongly convex functions, nor enjoys a logarithmic regret bound.

# 3 SADAM

In this section, we first describe the proposed algorithm, then state its theoretical guarantees, and finally compare it with the SC-RMSprop algorithm.

# 3.1 THE ALGORITHM

Before proceeding to our algorithm, following previous studies, we introduce some standard definitions (Boyd & Vandenberghe, 2004) and assumptions (Reddi et al., 2018).

Definition 1. A function $f(\cdot): \mathcal{D} \mapsto \mathbb{R}$ is $\lambda$ -strongly convex if $\forall \mathbf{x}_1, \mathbf{x}_2 \in \mathcal{D}$ ,

$$
f \left(\mathbf {x} _ {1}\right) \geq f \left(\mathbf {x} _ {2}\right) + \nabla f \left(\mathbf {x} _ {2}\right) ^ {\top} \left(\mathbf {x} _ {1} - \mathbf {x} _ {2}\right) + \frac {\lambda}{2} \| \mathbf {x} _ {1} - \mathbf {x} _ {2} \| ^ {2}. \tag {5}
$$

# Algorithm 1 SAdam

1: Input: $\alpha$ , $\{\beta_{1t}\}_{t=1}^{T}, \{\beta_{2t}\}_{t=1}^{T}, \delta$
2: Initialize: $\hat{\mathbf{g}}_0 = \mathbf{0}$ , $V_0 = \mathbf{0}_{d\times d},\mathbf{x}_1 = \mathbf{0}$ .
3: for $t = 1, \dots, T$ do
4: $\mathbf{g}_t = \nabla f_t(\mathbf{x}_t)$
5: $\hat{\mathbf{g}}_t = \beta_{1t}\hat{\mathbf{g}}_{t - 1} + (1 - \beta_{1t})\mathbf{g}_t$
6: $V_{t} = \beta_{2t}V_{t - 1} + (1 - \beta_{2t})\mathrm{diag}(\mathbf{g}_{t}\mathbf{g}_{t}^{\top})$
7: $\hat{V}_t = V_t + \frac{\delta}{t} I_d$
8: $\mathbf{x}_{t + 1} = \Pi_{\mathcal{D}}^{\hat{V}_t}\left(\mathbf{x}_t - \frac{\alpha}{t}\hat{V}_t^{-1}\hat{\mathbf{g}}_t\right)$
9: end for

Assumption 1.
The $\ell_\infty$ -norm of the gradients of all loss functions is bounded by $G_{\infty}$ , i.e., there exists a constant $G_{\infty} > 0$ such that $\max_{\mathbf{x} \in \mathcal{D}} \| \nabla f_t(\mathbf{x}) \|_{\infty} \leq G_{\infty}$ holds for all $t \in [T]$ .

Assumption 2. The decision set $\mathcal{D}$ is bounded. Specifically, there exists a constant $D_{\infty} > 0$ such that $\max_{\mathbf{x}_1,\mathbf{x}_2\in \mathcal{D}}\| \mathbf{x}_1 - \mathbf{x}_2\|_{\infty}\leq D_{\infty}$ .

We are now ready to present our algorithm, which follows the general framework of Adam and is summarized in Algorithm 1. In each round $t$ , we first observe the gradient at $\mathbf{x}_t$ (Step 4), then compute the first-order momentum $\hat{\mathbf{g}}_t$ (Step 5). Here $\beta_{1t}$ is a time-variant hyperparameter. Next, we calculate the second-order momentum $V_t$ by an EMA of the squares of past gradients (Step 6). This procedure is controlled by $\beta_{2t}$ , whose value will be discussed later. After that, we add a vanishing factor $\frac{\delta}{t}$ to the diagonal of $V_t$ to get $\hat{V}_t$ (Step 7), which is a standard technique for avoiding overly large steps caused by small gradients in the early iterations. Finally, we update the decision using $\hat{\mathbf{g}}_t$ and $\hat{V}_t$ , and project it onto the decision set (Step 8).

While SAdam is inspired by Adam, there are two key differences: one is the update rule of $\mathbf{x}_t$ in Step 8, and the other is the configuration of $\beta_{2t}$ in Step 6. Intuitively, both modifications stem from strongly convex OGD, and jointly result in a faster-decaying yet controlled step size, which helps exploit the strong convexity while preserving the practical benefits of Adam.
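One round of Algorithm 1 can be sketched in numpy as follows (our own unofficial transcription, not the authors' code). We assume a box decision set $[lo, hi]^d$, for which the $\hat{V}_t$-weighted projection reduces to coordinate-wise clipping because $\hat{V}_t$ is diagonal and the constraint is separable:

```python
import numpy as np

def sadam_step(x, g, g_hat, v, t, alpha, beta1t, beta2t, delta, lo=-1.0, hi=1.0):
    """One round of Algorithm 1 (Steps 5-8) for a box decision set [lo, hi]^d."""
    g_hat = beta1t * g_hat + (1 - beta1t) * g      # Step 5: first-order momentum
    v = beta2t * v + (1 - beta2t) * g ** 2         # Step 6: diagonal of V_t via EMA
    v_hat = v + delta / t                          # Step 7: vanishing regularizer delta/t
    x = x - (alpha / t) * g_hat / v_hat            # Step 8: O(1/t) step, no square root
    # For diagonal v_hat and a box set, the weighted projection is coordinate-wise clipping.
    return np.clip(x, lo, hi), g_hat, v

# One illustrative round on a 4-dimensional problem.
x, g_hat, v = np.zeros(4), np.zeros(4), np.zeros(4)
g = np.array([1.0, -2.0, 0.5, 0.0])
x, g_hat, v = sadam_step(x, g, g_hat, v, t=1, alpha=0.5, beta1t=0.9, beta2t=0.9, delta=1.0)
```

Compared with Adam's update (4), the per-coordinate step size here is $(\alpha/t)\,\hat{v}_{t,i}^{-1}$ rather than $(\alpha/\sqrt{t})\,v_{t,i}^{-1/2}$.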
Specifically, in the first modification, we remove the square root operation in (4) of Adam, and update $\mathbf{x}_t$ at Step 8 as follows

$$
\mathbf {x} _ {t + 1} = \mathbf {x} _ {t} - \frac {\alpha}{t} \hat {V} _ {t} ^ {- 1} \hat {\mathbf {g}} _ {t}. \tag {6}
$$

In this way, the step size used to update the $i$ -th element of $\mathbf{x}_t$ is $\frac{\alpha}{t}\hat{v}_{t,i}^{-1}$ , which decays in general on the order of $O(1 / t)$ , and can still be automatically tuned on a per-feature basis via the EMA of the historical gradients.

The second modification is made to $\beta_{2t}$ , which determines the value of $V_{t}$ and thus also controls the decaying rate of the step size. To help understand the motivation behind our algorithm, we first revisit Adam, where $\beta_{2t}$ is simply set to be a constant, which, however, could cause rapid fluctuation of the step size, and further leads to the non-convergence issue. To ensure convergence, Reddi et al. (2018) propose that $\beta_{2t}$ should satisfy the following two conditions:

Condition 1. $\forall t\in [T]$ and $i\in [d]$ ,

$$
\frac {\sqrt {t} v _ {t , i} ^ {1 / 2}}{\alpha} - \frac {\sqrt {t - 1} v _ {t - 1 , i} ^ {1 / 2}}{\alpha} \geq 0.
$$

Condition 2. For some $\zeta >0$ and $\forall t\in [T],i\in [d],$

$$
\sqrt {t \sum_ {j = 1} ^ {t} \Pi_ {k = 1} ^ {t - j} \beta_ {2 (t - k + 1)} (1 - \beta_ {2 j}) g _ {j , i} ^ {2}} \geq \frac {1}{\zeta} \sqrt {\sum_ {j = 1} ^ {t} g _ {j , i} ^ {2}}.
$$

The first condition implies that the difference between the inverses of the step sizes in two consecutive rounds is non-negative. It is inherently motivated by convex OGD (i.e., OGD with step size $\frac{\alpha}{\sqrt{t}}$ , where $\alpha > 0$ is a constant factor), in which

$$
\frac {\sqrt {t}}{\alpha} - \frac {\sqrt {t - 1}}{\alpha} \geq 0, \forall t \in [ T ]
$$

is a key condition used in the analysis. 
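To make the contrast with Adam concrete, here is a toy numeric comparison of the effective per-coordinate step sizes. It assumes a long stream of identical gradients $g$, so that the EMA of squared gradients settles near $g^2$ in both methods; momentum and bias correction are omitted, and the numbers are purely illustrative.

```python
import numpy as np

# Effective per-coordinate step sizes after a stream of identical gradients g,
# so both second-order EMAs are approximately g**2 (illustration only).
g, alpha = 0.5, 0.1
t = np.array([1.0, 10.0, 100.0, 1000.0])

adam_step = alpha / (np.sqrt(t) * np.sqrt(g ** 2))  # update (4): O(1/sqrt(t)) decay
sadam_step = alpha / (t * g ** 2)                   # update (6): O(1/t) decay

print(adam_step / sadam_step)  # ratio grows like sqrt(t) * |g|
```

The growing ratio shows that SAdam's step size decays faster, matching the $O(1/t)$ schedule of strongly convex OGD while remaining data-adaptive through $v_{t,i}$.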
We first modify Condition 1 by mimicking the behavior of strongly convex OGD, since our goal is to minimize regret for strongly convex functions. In strongly convex OGD (Hazan et al., 2007), the step size at each round $t$ is set as $\frac{\alpha}{t}$ with $\alpha \geq \frac{1}{\lambda}$ for $\lambda$ -strongly convex functions. Under this configuration, we have

$$
\frac {t}{\alpha} - \frac {t - 1}{\alpha} \leq \lambda , \forall t \in [ T ]. \tag {7}
$$

Motivated by this, we propose the following condition for our SAdam, which is an analog of (7).

Condition 3. There exists a constant $C > 0$ such that for any $\alpha \geq \frac{C}{\lambda}$ , we have $\forall t \in [T]$ and $i \in [d]$ ,

$$
\frac {t v _ {t , i}}{\alpha} - \frac {(t - 1) v _ {t - 1 , i}}{\alpha} \leq \lambda (1 - \beta_ {1}). \tag {8}
$$

Note that the extra $(1 - \beta_{1})$ on the right-hand side of (8) is necessary because SAdam involves the first-order momentum in its update.

Finally, since the step size of SAdam scales with $1 / t$ rather than $1 / \sqrt{t}$ as in Adam, we modify Condition 2 accordingly as follows:

Condition 4. For some $\zeta >0,\forall t\in [T]$ and $i\in [d]$ ,

$$
t \sum_ {j = 1} ^ {t} \prod_ {k = 1} ^ {t - j} \beta_ {2 (t - k + 1)} \left(1 - \beta_ {2 j}\right) g _ {j, i} ^ {2} \geq \frac {1}{\zeta} \sum_ {j = 1} ^ {t} g _ {j, i} ^ {2}. \tag {9}
$$

# 3.2 THEORETICAL GUARANTEES

In the following, we give a general regret bound when the two conditions are satisfied.

Theorem 1. Suppose Assumptions 1 and 2 hold, and all loss functions $f_{1}(\cdot), \ldots, f_{T}(\cdot)$ are $\lambda$ -strongly convex. Let $\delta > 0$ , $\beta_{1t} = \beta_{1}\nu^{t - 1}$ , where $\beta_{1} \in [0,1)$ , $\nu \in [0,1)$ , and $\{\beta_{2t}\}_{t=1}^{T} \in [0,1]^{T}$ be a parameter sequence such that Conditions 3 and 4 are satisfied. Let $\alpha \geq \frac{C}{\lambda}$ . 
The regret of SAdam satisfies

$$
R (T) \leq \frac {d D _ {\infty} ^ {2} \delta}{2 \alpha \left(1 - \beta_ {1}\right)} + \frac {d \beta_ {1} D _ {\infty} ^ {2} \left(G _ {\infty} ^ {2} + \delta\right)}{2 \alpha \left(1 - \beta_ {1}\right) (\nu - 1) ^ {2}} + \frac {\alpha \zeta}{\left(1 - \beta_ {1}\right) ^ {3}} \sum_ {i = 1} ^ {d} \log \left(\frac {1}{\zeta \delta} \sum_ {j = 1} ^ {T} g _ {j, i} ^ {2} + 1\right). \tag {10}
$$

Remark 1. The above theorem implies that our algorithm enjoys an $O(\sum_{i=1}^{d} \log (\| \mathbf{g}_{1:T,i} \|^2))$ regret bound, which is $O(d \log T)$ in the worst case, and automatically becomes tighter whenever the gradients are small or sparse such that $\| \mathbf{g}_{1:T,i} \|^2 \ll G_\infty^2 T$ for some $i \in [d]$ . The superiority of data-dependent bounds has been demonstrated in a long line of works, such as Duchi et al. (2011); Mukkamala & Hein (2017); Reddi et al. (2018). In the following, we give some concrete examples:

- Consider a one-dimensional sparse setting where a non-zero gradient appears with probability $c / T$ , where $c > 1$ is a constant. Then $\mathbb{E}\left[\log \left(\sum_{t = 1}^{T}g_{t,1}^{2}\right)\right] = O(\log (c))$ , which is a constant factor.
- Consider a high-dimensional sparse setting where, in each dimension of the gradient, a non-zero element appears with probability $p = T^{(m - d) / d}$ , with $m \in [1, d)$ being a constant. Then $\mathbb{E}\left[\sum_{i=1}^{d} \log \left(\sum_{j=1}^{T} g_{j,i}^2\right)\right] = O(m \log T)$ , which is much tighter than $O(d \log T)$ .

Remark 2. In practice, first-order momentum is a powerful technique that can significantly boost the performance (Reddi et al., 2018), and our paper is the first to show that algorithms equipped with this technique can achieve a logarithmic regret bound for strongly convex functions. 
However, since the regret bound of SAdam is data-dependent, it is difficult to rigorously analyse the influence of the first-order momentum parameter $\beta_{1}$ , as it affects all the gradients appearing in the last term of the regret bound of Theorem 1. We will further investigate this in future work. We note that the regret bounds of adaptive algorithms with first-order momentum (e.g., Reddi et al., 2018; Chen et al., 2019) all share a structure similar to ours with respect to $\beta_{1}$ .

Next, we provide an instantiation of $\{\beta_{2t}\}_{t=1}^{T}$ such that Conditions 3 and 4 hold, and derive the following corollary.

Corollary 2. Suppose Assumptions 1 and 2 hold, and all loss functions $f_{1}(\cdot), \ldots, f_{T}(\cdot)$ are $\lambda$ -strongly convex. Let $\delta > 0$ , $\beta_{1t} = \beta_{1}\nu^{t - 1}$ where $\nu, \beta_{1} \in [0,1)$ , and $1 - \frac{1}{t} \leq \beta_{2t} \leq 1 - \frac{\gamma}{t}$ , where $\gamma \in (0,1]$ . Then we have:

1. For any $\alpha \geq \frac{(2 - \gamma)G_{\infty}^2}{\lambda(1 - \beta_1)}$ , $\forall t \in [T]$ and $i \in [d]$ ,

$$
\frac {t v _ {t , i}}{\alpha} - \frac {(t - 1) v _ {t - 1 , i}}{\alpha} \leq \lambda (1 - \beta_ {1}).
$$

2. For all $t\in [T]$ and $i\in [d]$ ,

$$
t \sum_ {j = 1} ^ {t} \prod_ {k = 1} ^ {t - j} \beta_ {2 (t - k + 1)} \left(1 - \beta_ {2 j}\right) g _ {j, i} ^ {2} \geq \gamma \sum_ {j = 1} ^ {t} g _ {j, i} ^ {2}.
$$

Moreover, let $\alpha \geq \frac{(2 - \gamma)G_{\infty}^2}{\lambda(1 - \beta_1)}$ ; then the regret of SAdam satisfies:

$$
R (T) \leq \frac {d D _ {\infty} ^ {2} \delta}{2 \alpha (1 - \beta_ {1})} + \frac {d \beta_ {1} D _ {\infty} ^ {2} \left(G _ {\infty} ^ {2} + \delta\right)}{2 \alpha (1 - \beta_ {1}) (\nu - 1) ^ {2}} + \frac {\alpha}{\gamma (1 - \beta_ {1}) ^ {3}} \sum_ {i = 1} ^ {d} \log \left(\frac {\gamma}{\delta} \sum_ {j = 1} ^ {T} g _ {j, i} ^ {2} + 1\right). 
$$

Furthermore, as a special case, by setting $\beta_{1t} = 0$ and $1 - \frac{1}{t} \leq \beta_{2t} \leq 1 - \frac{\gamma}{t}$ , our algorithm reduces to SC-RMSprop (Mukkamala & Hein, 2017), a variant of RMSprop for strongly convex functions. Although Mukkamala & Hein (2017) have provided theoretical guarantees for this algorithm, we note that their regret bound is in fact data-independent. Specifically, the regret bound provided by Mukkamala & Hein (2017) takes the following form:

$$
R (T) \leq \frac {\delta d D _ {\infty} ^ {2}}{2 \alpha} + \frac {\alpha}{2 \gamma} \sum_ {i = 1} ^ {d} \log \left(\frac {T v _ {T , i}}{\delta} + 1\right) + \frac {\alpha}{\gamma} \sum_ {i = 1} ^ {d} \frac {(1 - \gamma) (1 + \log T)}{\inf _ {j \in [ 1 , T ]} j v _ {j , i} + \delta}. \tag {11}
$$

Focusing on the denominator of the last term in (11), we have

$$
\inf _ {j \in [ 1, T ]} j v _ {j, i} + \delta \leq v _ {1, i} + \delta \leq G _ {\infty} ^ {2} + \delta ,
$$

and thus

$$
\frac {\alpha}{\gamma} \sum_ {i = 1} ^ {d} \frac {(1 - \gamma) (1 + \log T)}{\inf _ {j \in [ 1 , T ]} j v _ {j , i} + \delta} \geq \frac {d \alpha}{\gamma} \frac {(1 - \gamma) (\log T + 1)}{G _ {\infty} ^ {2} + \delta} ,
$$

which implies that their regret bound is of order $O(d\log T)$ , and thus data-independent. In contrast, based on Corollary 2, we present a new regret bound for SC-RMSprop in the following, which is $O(\sum_{i=1}^{d}\log(\|\mathbf{g}_{1:T,i}\|^2))$ , and thus data-dependent.

Corollary 3. Suppose Assumptions 1 and 2 hold, and all loss functions $f_{1}(\cdot), \ldots, f_{T}(\cdot)$ are $\lambda$ -strongly convex. Let $\delta > 0$ , $\beta_{1t} = 0$ , and $1 - \frac{1}{t} \leq \beta_{2t} \leq 1 - \frac{\gamma}{t}$ , where $\gamma \in (0,1]$ . Let $\alpha \geq \frac{(2 - \gamma)G_{\infty}^2}{\lambda}$ . 
Then SAdam reduces to SC-RMSprop, and its regret satisfies

$$
R (T) \leq \frac {d D _ {\infty} ^ {2} \delta}{2 \alpha} + \frac {\alpha}{\gamma} \sum_ {i = 1} ^ {d} \log \left(\frac {\gamma}{\delta} \sum_ {j = 1} ^ {T} g _ {j, i} ^ {2} + 1\right). \tag {12}
$$

Finally, we note that Mukkamala & Hein (2017) also consider a more general version of SC-RMSprop which uses a time-variant non-increasing $\delta$ for each dimension $i$ . In Appendix D we introduce this $\delta$ -variant technique into our SAdam, and provide the corresponding theoretical guarantee.

# 4 EXPERIMENTS

In this section, we present empirical results on optimizing strongly convex functions and training deep networks. More results can be found in the Appendix.

Algorithms. In both experiments, we compare the following algorithms:

![](images/3312999bdb8b133c5039dd2bcafda55e8b608fc09551779009e93d5dc0e4d99d.jpg)
(a) MNIST

![](images/7933d89e9c6d7dda6795ad8cf83c31be093d802587fb85566eb2c08777ab62fc.jpg)
(b) CIFAR10

![](images/57cfd37bb32d27e9af0f64d2360eaeb9a207f22a2b74dd0750585a0f701bb5b1.jpg)
(c) CIFAR100
Fig. 1: Regret vs. data proportion for $\ell_2$ -regularized softmax regression

- SC-Adagrad (Mukkamala & Hein, 2017), with step size $\alpha_{t} = \alpha / t$ .
- SC-RMSprop (Mukkamala & Hein, 2017), with step size $\alpha_{t} = \alpha / t$ and $\beta_{t} = 1 - \frac{0.9}{t}$ .
- Adam (Kingma & Ba, 2015) and AMSgrad (Reddi et al., 2018), both with $\beta_{1} = 0.9$ , $\beta_{2} = 0.999$ , and $\alpha_{t} = \alpha / \sqrt{t}$ for convex problems and a time-invariant $\alpha_{t} = \alpha$ for non-convex problems.
- AdamNC (Reddi et al., 2018), with $\beta_{1} = 0.9$ , $\beta_{2t} = 1 - 1 / t$ , and $\alpha_{t} = \alpha / \sqrt{t}$ for convex problems and a time-invariant $\alpha_{t} = \alpha$ for non-convex problems. 
- Online Gradient Descent (OGD), with step size $\alpha_{t} = \alpha / t$ for strongly convex problems and a time-invariant $\alpha_{t} = \alpha$ for non-convex problems.
- Our proposed SAdam, with $\beta_{1} = 0.9$ and $\beta_{2t} = 1 - \frac{0.9}{t}$ .

For Adam, AdamNC and AMSgrad, we choose $\delta = 10^{-8}$ according to the recommendations in their papers. For SC-Adagrad and SC-RMSprop, following Mukkamala & Hein (2017), we choose a time-variant $\delta_{t,i} = \xi_2e^{-\xi_1tv_{t,i}}$ for each dimension $i$ , with $\xi_1 = 0.1$ , $\xi_2 = 1$ for convex problems and $\xi_1 = 0.1$ , $\xi_2 = 0.1$ for non-convex problems. For our SAdam, since removing the square root operation may cause overly large step sizes in the beginning iterations when gradients are very small, we use a rather large $\delta = 10^{-2}$ to avoid this problem. To conduct a fair comparison, for each algorithm, we choose $\alpha$ from the set $\{0.1, 0.01, 0.001, 0.0001\}$ and report the best results.

Datasets. In both experiments, we examine the performance of the aforementioned algorithms on three widely used datasets: MNIST (60000 training samples, 10000 test samples), CIFAR10 (50000 training samples, 10000 test samples), and CIFAR100 (50000 training samples, 10000 test samples). We refer to LeCun (1998) and Krizhevsky (2009) for more details of the three datasets.

# 4.1 OPTIMIZING STRONGLY CONVEX FUNCTIONS

In the first experiment, we consider the problem of mini-batch $\ell_2$ -regularized softmax regression, which belongs to the online strongly convex optimization framework. Let $K$ be the number of classes and $m$ be the batch size. In each round $t$ , first a mini-batch of training samples $\{(\mathbf{x}_i,y_i)\}_{i=1}^m$ arrives, where $y_i \in [K], \forall i \in [m]$ . 
Then, the algorithm predicts parameter vectors $\{\mathbf{w}_i,b_i\}_{i=1}^K$ , and suffers a loss which takes the following form:

$$
J (\mathbf {w}, b) = - \frac {1}{m} \sum_ {i = 1} ^ {m} \log \left(\frac {e ^ {\mathbf {w} _ {y _ {i}} ^ {\top} \mathbf {x} _ {i} + b _ {y _ {i}}}}{\sum_ {j = 1} ^ {K} e ^ {\mathbf {w} _ {j} ^ {\top} \mathbf {x} _ {i} + b _ {j}}}\right) + \lambda_ {1} \sum_ {k = 1} ^ {K} \| \mathbf {w} _ {k} \| ^ {2} + \lambda_ {2} \sum_ {k = 1} ^ {K} b _ {k} ^ {2}.
$$

The values of $\lambda_{1}$ and $\lambda_{2}$ are set to $10^{-2}$ for all experiments. The regret (in log scale) vs. data proportion is shown in Fig. 1. It can be seen that our SAdam outperforms the other methods across all the considered datasets. Besides, we observe that data-dependent strongly convex methods such as SC-Adagrad, SC-RMSprop and SAdam perform better than algorithms for general convex functions such as Adam, AMSgrad and AdamNC. Finally, OGD has the overall highest regret on all three datasets.

![](images/79c633ca95111438959a836fc931941015a763b99995734ebce012684c45a241.jpg)
(a) MNIST

![](images/5a20dcafa83599bf945cc81c34654f806c22e2c523e236d7624a0a399ddd8a7d.jpg)
(b) CIFAR10

![](images/9ad0629d652e2a51025fb797934635780f14d8ddd39d1d0bbb2cd41a600042c7.jpg)
(c) CIFAR100
Fig. 2: Training loss vs. number of epochs for 4-layer CNN

![](images/abcec58f04ec3ecfc7cfa01671b6eec18f959bdcbce596e61e4f62271db3874c.jpg)
(a) MNIST

![](images/4ba5e808741190e206fda7c272c711a987b88f12b3ddf6d019826b34c08b48e1.jpg)
(b) CIFAR10

![](images/356be85f4cf571f498b0492948c85064a387d44fd6c8aa8b283fb795ff3dc6a0.jpg)
(c) CIFAR100
Fig. 3: Testing accuracy vs. number of epochs for 4-layer CNN

# 4.2 TRAINING DEEP NETWORKS

Following Mukkamala & Hein (2017), we also conduct experiments on a 4-layer CNN, which consists of two convolutional layers (each with 32 filters of size $3 \times 3$ ), one max-pooling layer (with a $2 \times 2$ window and 0.25 dropout), and one fully connected layer (with 128 hidden units and 0.5 dropout). We employ the ReLU function as the activation for the convolutional layers and the softmax function as the activation for the fully connected layer. The loss function is the cross-entropy. The training loss vs. epoch is shown in Fig. 2, and the testing accuracy vs. epoch is presented in Fig. 3. As can be seen, our SAdam achieves the lowest training loss on the three datasets. Moreover, this performance gain also translates into good testing accuracy. The experimental results show that although our proposed SAdam is designed for strongly convex functions, it can lead to superior practical performance even in some highly non-convex cases such as deep learning tasks.

# 5 CONCLUSION AND FUTURE WORK

In this paper, we provide a variant of Adam adapted to strongly convex functions. The proposed algorithm, namely SAdam, follows the general framework of Adam, while keeping a step size that decays in general on the order of $O(1 / t)$ and is controlled by data-dependent hyperparameters to exploit strong convexity. Theoretical analysis shows that SAdam achieves a data-dependent $O(d\log T)$ regret bound for strongly convex functions, which means that it converges much faster than Adam, AdamNC, and AMSgrad in such cases, and can enjoy a huge gain in the face of sparse gradients. In addition, we also provide the first data-dependent logarithmic regret bound for SC-RMSprop. Finally, we test the proposed algorithm on optimizing strongly convex functions as well as training deep networks, and the empirical results demonstrate the effectiveness of our method. 
Since SAdam enjoys a data-dependent $O(d\log T)$ regret for online strongly convex optimization, it can be easily translated into a data-dependent $O(d\log T / T)$ convergence rate for stochastic strongly convex optimization (SSCO) by using the online-to-batch conversion (Kakade & Tewari, 2009). However, this rate is not optimal for SSCO, and it is still an open problem how to achieve a data-dependent $O(d / T)$ convergence rate for SSCO. A recent development on adaptive gradient methods (Chen et al., 2018) has proved that Adagrad combined with the multi-stage scheme (Hazan & Kale, 2014) can achieve this rate, but it is highly non-trivial to extend this technique to SAdam, and we leave it as future work.

# 6 ACKNOWLEDGEMENT

This work was partially supported by NSFC (61976112), the NSFC-NRF Joint Research Project (61861146001), and the Collaborative Innovation Center of Novel Software Technology and Industrialization.

# REFERENCES

Parnia Bahar, Tamer Alkhouli, Jan-Thorsten Peter, Christopher Jan-Steffen Brix, and Hermann Ney. Empirical investigation of optimization algorithms in neural machine translation. The Prague Bulletin of Mathematical Linguistics, 108(1):13-25, 2017.
Amitabh Basu, Soham De, Anirbit Mukherjee, and Enayat Ullah. Convergence guarantees for rmsprop and adam in non-convex optimization and their comparison to nesterov acceleration on autoencoders. arXiv preprint arXiv:1807.06766, 2018.
Sebastian Bock, Josef Goppold, and Martin Weiß. An improvement of the convergence proof of the adam-optimizer. arXiv preprint arXiv:1804.10587, 2019.
Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004.
Xiangyi Chen, Sijia Liu, Ruoyu Sun, and Mingyi Hong. On the convergence of a class of adam-type algorithms for non-convex optimization. In Proceedings of the 7th International Conference on Learning Representations, 2019.
Zaiyi Chen, Yi Xu, Enhong Chen, and Tianbao Yang. 
Sadagrad: Strongly adaptive stochastic gradient methods. In Proceedings of 35th International Conference on Machine Learning, pp. 912-920, 2018. +Michael Denkowski and Graham Neubig. Stronger baselines for trustable results in neural machine translation. In Proceedings of the 1st Workshop on Neural Machine Translation, pp. 18-27, 2017. +Timothy Dozat. Incorporating nesterov momentum into adam. In Proceedings of 4th International Conference on Learning Representations, Workshop Track, 2016. +John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159, 2011. +Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. Draw: a recurrent neural network for image generation. In Proceedings of the 32nd International Conference on Machine Learning, pp. 1462-1471, 2015. +Elad Hazan and Satyen Kale. Beyond the regret minimization barrier: optimal algorithms for stochastic strongly-convex optimization. Journal of Machine Learning Research, 15:2489-2512, 2014. +Elad Hazan, Amit Agarwal, and Satyen Kale. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69:169-192, 2007. +Elad Hazan et al. Introduction to online convex optimization. Foundations and Trends® in Optimization, 2(3-4):157-325, 2016. +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016. +Sham M Kakade and Ambuj Tewari. On the generalization ability of online strongly convex programming algorithms. In Advances in Neural Information Processing Systems 21, pp. 801-808, 2009. +Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of 3rd International Conference on Learning Representations, 2015. 
+ +Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Skip-thought vectors. In Advances in Neural Information Processing Systems 27, pp. 3294-3302, 2015. +Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009. +Yann LeCun. The mnist database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998. +H Brendan McMahan and Matthew Streeter. Adaptive bound optimization for online convex optimization. In Proceedings of the 23rd Annual Conference on Learning Theory, pp. 224-256, 2010. +Mahesh Chandra Mukkamala and Matthias Hein. Variants of rmsprop and adagrad with logarithmic regret bounds. In Proceedings of the 33rd International Conference on Machine Learning, pp. 2545-2553, 2017. +Sashank J Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of adam and beyond. In Proceedings of 6th International Conference on Learning Representations, 2018. +Shai Shalev-Shwartz et al. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107-194, 2012. +Noam Shazeer and Mitchell Stern. Adafactor: Adaptive learning rates with sublinear memory cost. In Proceedings of the 35th International Conference on Machine Learning, pp. 4596-4604, 2018. +Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, pp. 26-31, 2012. +Phuong Thi Tran et al. On the convergence proof of amsgrad and a new version. IEEE Access, 7: 61706-61716, 2019. +Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of 32nd International Conference on Machine Learning, pp. 2048-2057, 2015. +Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012. 
Jiawei Zhang, Limeng Cui, and Fisher B Gouza. Gadam: Genetic-evolutionary adam for deep neural network optimization. arXiv preprint arXiv:1805.07500, 2018.
Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning, pp. 928-936, 2003.

# A PROOF OF THEOREM 1

From Definition 1, we can upper bound the regret as

$$
R (T) = \sum_ {t = 1} ^ {T} f _ {t} (\mathbf {x} _ {t}) - \sum_ {t = 1} ^ {T} f _ {t} (\mathbf {x} _ {*}) \stackrel {(5)} {\leq} \sum_ {t = 1} ^ {T} \left(\mathbf {g} _ {t} ^ {\top} \left(\mathbf {x} _ {t} - \mathbf {x} _ {*}\right) - \frac {\lambda}{2} \| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| ^ {2}\right) \tag {13}
$$

where $\mathbf{x}_{*} := \operatorname{argmin}_{\mathbf{x} \in \mathcal{D}} \sum_{t=1}^{T} f_{t}(\mathbf{x})$ is the best decision in hindsight. On the other hand, by the update rule of $\mathbf{x}_{t+1}$ in Algorithm 1, we have

$$
\begin{array}{l} \left\| \mathbf {x} _ {t + 1} - \mathbf {x} _ {*} \right\| _ {\hat {V} _ {t}} ^ {2} = \left\| \Pi_ {\mathcal {D}} ^ {\hat {V} _ {t}} \left(\mathbf {x} _ {t} - \alpha_ {t} \hat {V} _ {t} ^ {- 1} \hat {\mathbf {g}} _ {t}\right) - \mathbf {x} _ {*} \right\| _ {\hat {V} _ {t}} ^ {2} \\ \leq \left\| \mathbf {x} _ {t} - \alpha_ {t} \hat {V} _ {t} ^ {- 1} \hat {\mathbf {g}} _ {t} - \mathbf {x} _ {*} \right\| _ {\hat {V} _ {t}} ^ {2} \\ = - 2 \alpha_ {t} \hat {\mathbf {g}} _ {t} ^ {\top} \left(\mathbf {x} _ {t} - \mathbf {x} _ {*}\right) + \left\| \mathbf {x} _ {t} - \mathbf {x} _ {*} \right\| _ {\hat {V} _ {t}} ^ {2} + \alpha_ {t} ^ {2} \left\| \hat {\mathbf {g}} _ {t} \right\| _ {\hat {V} _ {t} ^ {- 1}} ^ {2} \tag {14} \\ = - 2 \alpha_ {t} \left(\beta_ {1 t} \hat {\mathbf {g}} _ {t - 1} + (1 - \beta_ {1 t}) \mathbf {g} _ {t}\right) ^ {\top} \left(\mathbf {x} _ {t} - \mathbf {x} _ {*}\right) + \left\| \mathbf {x} _ {t} - \mathbf {x} _ {*} \right\| _ {\hat {V} _ {t}} ^ {2} + \alpha_ {t} ^ {2} \left\| \hat {\mathbf {g}} _ {t} \right\| _ {\hat {V} _ {t} ^ {- 1}} ^ {2} \\ \end{array}
$$

where $\alpha_{t} \coloneqq \alpha / t$ , and the inequality is due to the following lemma, which implies that the weighted projection is non-expansive.

Lemma 1. (McMahan & Streeter, 2010) Let $A \in \mathbb{R}^{d \times d}$ be a positive definite matrix and $\mathcal{D} \subseteq \mathbb{R}^d$ be a convex set. Then we have, $\forall \mathbf{x}, \mathbf{y} \in \mathbb{R}^d$ ,

$$
\left\| \Pi_ {\mathcal {D}} ^ {A} (\mathbf {x}) - \Pi_ {\mathcal {D}} ^ {A} (\mathbf {y}) \right\| _ {A} \leq \| \mathbf {x} - \mathbf {y} \| _ {A}. \tag {15}
$$

Rearranging (14), we have

$$
\begin{array}{l} \mathbf {g} _ {t} ^ {\top} (\mathbf {x} _ {t} - \mathbf {x} _ {*}) \leq \frac {\| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| _ {\hat {V} _ {t}} ^ {2} - \| \mathbf {x} _ {t + 1} - \mathbf {x} _ {*} \| _ {\hat {V} _ {t}} ^ {2}}{2 \alpha_ {t} (1 - \beta_ {1 t})} + \frac {\beta_ {1 t}}{1 - \beta_ {1 t}} \hat {\mathbf {g}} _ {t - 1} ^ {\top} (\mathbf {x} _ {*} - \mathbf {x} _ {t}) + \frac {\alpha_ {t}}{2 (1 - \beta_ {1 t})} \| \hat {\mathbf {g}} _ {t} \| _ {\hat {V} _ {t} ^ {- 1}} ^ {2} \tag {16} \\ \leq \frac {\left\| \mathbf {x} _ {t} - \mathbf {x} _ {*} \right\| _ {\hat {V} _ {t}} ^ {2} - \left\| \mathbf {x} _ {t + 1} - \mathbf {x} _ {*} \right\| _ {\hat {V} _ {t}} ^ {2}}{2 \alpha_ {t} (1 - \beta_ {1 t})} + \frac {\beta_ {1 t}}{1 - \beta_ {1 t}} \hat {\mathbf {g}} _ {t - 1} ^ {\top} (\mathbf {x} _ {*} - \mathbf {x} _ {t}) + \frac {\alpha_ {t}}{2 (1 - \beta_ {1})} \| \hat {\mathbf {g}} _ {t} \| _ {\hat {V} _ {t} ^ {- 1}} ^ {2} \\ \end{array}
$$

where the last inequality is due to $\beta_{1t} \leq \beta_1$ . We proceed to bound the second term of the inequality above. 
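For reference, the weighted form of Young's inequality invoked in the next step is the elementary fact that, for any positive definite $A$ and $\epsilon > 0$,

$$
\mathbf {a} ^ {\top} \mathbf {b} \leq \frac {\epsilon}{2} \| \mathbf {a} \| _ {A ^ {- 1}} ^ {2} + \frac {1}{2 \epsilon} \| \mathbf {b} \| _ {A} ^ {2},
$$

which follows from expanding $0 \leq \frac{1}{2}\big\|\sqrt{\epsilon}\,A^{-1/2}\mathbf{a} - \frac{1}{\sqrt{\epsilon}}A^{1/2}\mathbf{b}\big\|^{2}$; (17) applies it with $\mathbf{a} = \hat{\mathbf{g}}_{t-1}$, $\mathbf{b} = \mathbf{x}_{*} - \mathbf{x}_{t}$, $A = \hat{V}_{t-1}$, and $\epsilon = \alpha_{t-1}$.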
When $t \geq 2$ , by Young's inequality and the definition of $\beta_{1t}$ , we get

$$
\begin{array}{l} \frac {\beta_ {1 t}}{1 - \beta_ {1 t}} \hat {\mathbf {g}} _ {t - 1} ^ {\top} \left(\mathbf {x} _ {*} - \mathbf {x} _ {t}\right) \leq \frac {\alpha_ {t - 1} \beta_ {1 t}}{2 \left(1 - \beta_ {1 t}\right)} \| \hat {\mathbf {g}} _ {t - 1} \| _ {\hat {V} _ {t - 1} ^ {- 1}} ^ {2} + \frac {\beta_ {1 t}}{2 \alpha_ {t - 1} \left(1 - \beta_ {1 t}\right)} \| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| _ {\hat {V} _ {t - 1}} ^ {2} \tag {17} \\ \leq \frac {\alpha_ {t - 1}}{2 (1 - \beta_ {1})} \| \hat {\mathbf {g}} _ {t - 1} \| _ {\hat {V} _ {t - 1} ^ {- 1}} ^ {2} + \frac {\beta_ {1 t}}{2 \alpha_ {t - 1} (1 - \beta_ {1 t})} \| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| _ {\hat {V} _ {t - 1}} ^ {2}. \\ \end{array}
$$

When $t = 1$ , this term becomes 0 since $\hat{\mathbf{g}}_0 = \mathbf{0}$ in Algorithm 1. Plugging (16) and (17) into (13), we get the following inequality, whose right-hand side we divide into three parts and upper bound one by one.

$$
\begin{array}{l} R (T) \leq \underbrace {\sum_ {t = 1} ^ {T} \left(\frac {\| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| _ {\hat {V} _ {t}} ^ {2} - \| \mathbf {x} _ {t + 1} - \mathbf {x} _ {*} \| _ {\hat {V} _ {t}} ^ {2}}{2 \alpha_ {t} (1 - \beta_ {1 t})} - \frac {\lambda}{2} \| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| ^ {2}\right)} _ {P _ {1}} \\ + \underbrace {\frac {1}{2 (1 - \beta_ {1})} \sum_ {t = 2} ^ {T} \alpha_ {t - 1} \| \hat {\mathbf {g}} _ {t - 1} \| _ {\hat {V} _ {t - 1} ^ {- 1}} ^ {2} + \frac {\sum_ {t = 1} ^ {T} \alpha_ {t} \| \hat {\mathbf {g}} _ {t} \| _ {\hat {V} _ {t} ^ {- 1}} ^ {2}}{2 (1 - \beta_ {1})}} _ {P _ {2}} + \underbrace {\sum_ {t = 2} ^ {T} \frac {\beta_ {1 t}}{2 \alpha_ {t - 1} (1 - \beta_ {1 t})} \| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| _ {\hat {V} _ {t - 1}} ^ {2}} _ {P _ {3}}. \\ \end{array}
$$

To bound $P_{1}$ , we have

$$
\begin{array}{l} P _ {1} = \frac {\| \mathbf {x} _ {1} - \mathbf {x} _ {*} \| _ {\hat {V} _ {1}} ^ {2}}{2 \alpha_ {1} (1 - \beta_ {1})} \underbrace {- \frac {\| \mathbf {x} _ {T + 1} - \mathbf {x} _ {*} \| _ {\hat {V} _ {T}} ^ {2}}{2 \alpha_ {T} (1 - \beta_ {1 T})}} _ {\leq 0} - \sum_ {t = 1} ^ {T} \left(\frac {\lambda}{2} \| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| ^ {2}\right) \\ + \sum_ {t = 2} ^ {T} \left(\frac {\| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| _ {\hat {V} _ {t}} ^ {2}}{2 \alpha_ {t} (1 - \beta_ {1 t})} - \frac {\| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| _ {\hat {V} _ {t - 1}} ^ {2}}{2 \alpha_ {t - 1} (1 - \beta_ {1 (t - 1)})}\right) \\ \leq \frac {\| \mathbf {x} _ {1} - \mathbf {x} _ {*} \| _ {\hat {V} _ {1}} ^ {2}}{2 \alpha_ {1} (1 - \beta_ {1})} - \sum_ {t = 1} ^ {T} \left(\frac {\lambda}{2} \| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| ^ {2}\right) + \sum_ {t = 2} ^ {T} \frac {1}{1 - \beta_ {1 t}} \left(\frac {\| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| _ {\hat {V} _ {t}} ^ {2}}{2 \alpha_ {t}} - \frac {\| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| _ {\hat {V} _ {t - 1}} ^ {2}}{2 \alpha_ {t - 1}}\right) \\ = \sum_ {t = 2} ^ {T} \frac {1}{2 \alpha (1 - \beta_ {1 t})} \left(t \| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| _ {\hat {V} _ {t}} ^ {2} - (t - 1) \| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| _ {\hat {V} _ {t - 1}} ^ {2} - \lambda \alpha (1 - \beta_ {1 t}) \| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| ^ {2}\right) \\ + \left(\frac {\left\| \mathbf {x} _ {1} - \mathbf {x} _ {*} \right\| _ {\hat {V} _ {1}} ^ {2}}{2 \alpha_ {1} (1 - \beta_ {1})} - \frac {\lambda}{2} \left\| \mathbf {x} _ {1} - \mathbf {x} _ {*} \right\| ^ {2}\right) \tag {18} \\ \end{array}
$$

where the inequality is derived from $\beta_{1(t - 1)}\geq \beta_{1t}$ . 
For the first term in the last equality of (18), we have

$$
\begin{array}{l} t \| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| _ {\hat {V} _ {t}} ^ {2} - (t - 1) \| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| _ {\hat {V} _ {t - 1}} ^ {2} - \lambda \alpha (1 - \beta_ {1 t}) \| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| ^ {2} \\ = \sum_ {i = 1} ^ {d} \left(x _ {t, i} - x _ {* , i}\right) ^ {2} \left(t \hat {v} _ {t, i} - (t - 1) \hat {v} _ {t - 1, i} - \lambda \alpha \left(1 - \beta_ {1 t}\right)\right) \\ = \sum_ {i = 1} ^ {d} \left(x _ {t, i} - x _ {* , i}\right) ^ {2} \left(t v _ {t, i} - (t - 1) v _ {t - 1, i} - \lambda \alpha \left(1 - \beta_ {1 t}\right)\right) \tag {19} \\ \stackrel {(8)} {\leq} \sum_ {i = 1} ^ {d} (x _ {t, i} - x _ {*, i}) ^ {2} (\alpha \lambda (1 - \beta_ {1}) - \lambda \alpha (1 - \beta_ {1 t})) \leq 0 \\ \end{array}
$$

where the second equality is because $\hat{v}_{t,i} = v_{t,i} + \frac{\delta}{t}$ , so that the $\delta$ terms cancel, and the last inequality is due to $1 - \beta_{1} \leq 1 - \beta_{1t}$ .

For the second term of (18), we have

$$
\begin{array}{l} \left(\frac {\left\| \mathbf {x} _ {1} - \mathbf {x} _ {*} \right\| _ {\hat {V} _ {1}} ^ {2}}{2 \alpha_ {1} (1 - \beta_ {1})} - \frac {\lambda}{2} \left\| \mathbf {x} _ {1} - \mathbf {x} _ {*} \right\| ^ {2}\right) = \sum_ {i = 1} ^ {d} (x _ {1, i} - x _ {*, i}) ^ {2} \left(\frac {\hat {v} _ {1 , i} - \lambda \alpha (1 - \beta_ {1})}{2 \alpha (1 - \beta_ {1})}\right) \\ = \sum_ {i = 1} ^ {d} \left(x _ {1, i} - x _ {*, i}\right) ^ {2} \left(\frac {v _ {1 , i} - \lambda \alpha \left(1 - \beta_ {1}\right) + \delta}{2 \alpha \left(1 - \beta_ {1}\right)}\right) \tag {20} \\ \leq \sum_ {i = 1} ^ {d} \left(x _ {1, i} - x _ {*, i}\right) ^ {2} \left(\frac {\delta}{2 \alpha \left(1 - \beta_ {1}\right)}\right) \\ \leq \frac {d D _ {\infty} ^ {2} \delta}{2 \alpha (1 - \beta_ {1})} \\ \end{array}
$$

where the first inequality is due to Condition 3, and the second inequality follows from Assumption 2. 
+ +Combining (18), (19) and (20), we get + +$$ +P _ {1} \leq \frac {d D _ {\infty} ^ {2} \delta}{2 \alpha \left(1 - \beta_ {1}\right)}. \tag {21} +$$ + +To bound $P_{2}$ , we introduce the following lemma. + +Lemma 2. The following inequality holds + +$$ +\sum_ {t = 1} ^ {T} \alpha_ {t} \| \hat {\mathbf {g}} _ {t} \| _ {\hat {V} _ {t} ^ {- 1}} ^ {2} \leq \frac {\alpha \zeta}{(1 - \beta_ {1}) ^ {2}} \sum_ {i = 1} ^ {d} \log \left(\frac {1}{\zeta \delta} \sum_ {j = 1} ^ {T} g _ {j, i} ^ {2} + 1\right). \tag {22} +$$ + +By Lemma 2, we have + +$$ +P _ {2} = \frac {1}{2 (1 - \beta_ {1})} \sum_ {t = 2} ^ {T} \alpha_ {t - 1} \| \hat {\mathbf {g}} _ {t - 1} \| _ {\hat {V} _ {t - 1} ^ {- 1}} ^ {2} + \frac {\sum_ {t = 1} ^ {T} \alpha_ {t} \| \hat {\mathbf {g}} _ {t} \| _ {\hat {V} _ {t} ^ {- 1}} ^ {2}}{2 (1 - \beta_ {1})} +$$ + +$$ +\leq \frac {1}{(1 - \beta_ {1})} \sum_ {t = 1} ^ {T} \alpha_ {t} \| \hat {\mathbf {g}} _ {t} \| _ {\hat {V} _ {t} ^ {- 1}} ^ {2} \tag {23} +$$ + +$$ +\stackrel {(2 2)} {\leq} \frac {\alpha \zeta}{(1 - \beta_ {1}) ^ {3}} \sum_ {i = 1} ^ {d} \log \left(\frac {1}{\zeta \delta} \sum_ {j = 1} ^ {T} g _ {j, i} ^ {2} + 1\right). 
+$$ + +Finally, we turn to upper bound $P_{3}$ : + +$$ +\begin{array}{l} P _ {3} = \sum_ {t = 2} ^ {T} \frac {\beta_ {1 t}}{2 \alpha_ {t - 1} (1 - \beta_ {1 t})} \| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| _ {\hat {V} _ {t - 1}} ^ {2} \\ = \sum_ {i = 1} ^ {d} \sum_ {t = 2} ^ {T} \frac {\beta_ {1 t}}{2 \alpha (1 - \beta_ {1 t})} (x _ {t, i} - x _ {* , i}) ^ {2} (t - 1) \hat {v} _ {t - 1, i} \\ \leq \frac {D _ {\infty} ^ {2} \left(G _ {\infty} ^ {2} + \delta\right)}{2 \alpha} \sum_ {i = 1} ^ {d} \sum_ {t = 1} ^ {T} \frac {\beta_ {1 t}}{1 - \beta_ {1 t}} t \\ \leq \frac {\beta_ {1} D _ {\infty} ^ {2} \left(G _ {\infty} ^ {2} + \delta\right)}{2 \alpha} \sum_ {i = 1} ^ {d} \sum_ {t = 1} ^ {T} \frac {\nu^ {t - 1}}{1 - \beta_ {1}} t \\ = \frac{\beta_{1}D^{2}_{\infty}(G^{2}_{\infty} + \delta)}{2\alpha(1 - \beta_{1})}\sum_{i = 1}^{d}\underbrace{\sum_{t = 0}^{T - 1}\nu^{t}(t + 1)}_{P^{\prime}_{3}}. \\ \end{array} +$$ + +To further bound $P_3'$ , following Bock et al. (2019), we have + +$$ +\begin{array}{l} P _ {3} ^ {\prime} = \sum_ {t = 0} ^ {T - 1} \left(\nu^ {t} t + \nu^ {t}\right) \\ = \left(\frac {(T - 1) \nu^ {T + 1} - T \nu^ {T} + \nu}{(\nu - 1) ^ {2}} + \frac {1 - \nu^ {T}}{1 - \nu}\right) \tag {24} \\ = \frac {1 - T \left(\nu^ {T} - \nu^ {T + 1}\right) - \nu^ {T}}{(1 - \nu) ^ {2}} \\ \leq \frac {1}{(1 - \nu) ^ {2}} \\ \end{array} +$$ + +where the inequality follows from $\nu^T\geq \nu^{T + 1}$ . Thus, + +$$ +P _ {3} \leq \frac {d \beta_ {1} D _ {\infty} ^ {2} \left(G _ {\infty} ^ {2} + \delta\right)}{2 \alpha \left(1 - \beta_ {1}\right) (\nu - 1) ^ {2}}. \tag {25} +$$ + +We complete the proof by combining (21), (23) and (25).
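Both the closed form and the $T$-independent bound $1/(1-\nu)^2$ for $P_3'$ in (24) can be checked numerically; a small sketch (the parameter values are illustrative only):

```python
def p3_prime(nu: float, T: int) -> float:
    """Directly evaluate P3' = sum_{t=0}^{T-1} nu^t * (t + 1)."""
    return sum(nu ** t * (t + 1) for t in range(T))

for nu in (0.1, 0.5, 0.9, 0.99):
    for T in (1, 10, 100, 1000):
        s = p3_prime(nu, T)
        # closed form of the series, as in (24)
        closed = (1 - T * (nu ** T - nu ** (T + 1)) - nu ** T) / (1 - nu) ** 2
        assert abs(s - closed) <= 1e-8 * max(1.0, abs(closed))
        # the bound used to control P3, uniform over T
        assert s <= 1 / (1 - nu) ** 2 + 1e-9
```

The uniform bound is what makes the third regret term in Theorem 1 independent of the horizon $T$.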
+ +# B PROOF OF COROLLARY 2 + +For the first condition, we have + +$$ +\begin{array}{l} t v _ {t, i} - (t - 1) v _ {t - 1, i} = t \beta_ {2 t} v _ {t - 1, i} + t \left(1 - \beta_ {2 t}\right) g _ {t, i} ^ {2} - (t - 1) v _ {t - 1, i} \\ \leq t \left(1 - \frac {\gamma}{t}\right) v _ {t - 1, i} + t \frac {1}{t} g _ {t, i} ^ {2} - (t - 1) v _ {t - 1, i} \tag {26} \\ \leq (t - \gamma - (t - 1)) v _ {t - 1, i} + G _ {\infty} ^ {2} \\ \leq (2 - \gamma) G _ {\infty} ^ {2} \\ \end{array} +$$ + +where the first inequality is derived from the definition of $\beta_{2t}$ , and the second and the third inequalities are due to Assumption 1. Based on (26), for any $\alpha \geq \frac{(2 - \gamma)G_{\infty}^2}{\lambda(1 - \beta_1)}$ , we have that $\frac{tv_{t,i}}{\alpha} - \frac{(t - 1)v_{t - 1,i}}{\alpha} \leq \lambda (1 - \beta_1)$ holds for all $t \in [T]$ and $i \in [d]$ . + +For the second condition, we have + +$$ +\begin{array}{l} t \sum_ {j = 1} ^ {t} \prod_ {k = 1} ^ {t - j} \beta_ {2 (t - k + 1)} \left(1 - \beta_ {2 j}\right) g _ {j, i} ^ {2} \geq t \sum_ {j = 1} ^ {t} \prod_ {k = 1} ^ {t - j} \left(1 - \frac {1}{t - k + 1}\right) \frac {\gamma}{j} g _ {j, i} ^ {2} \\ = t \sum_ {j = 1} ^ {t} \prod_ {k = 1} ^ {t - j} \frac {t - k}{t - k + 1} \frac {\gamma}{j} g _ {j, i} ^ {2} \tag {27} \\ = t \sum_ {j = 1} ^ {t} \frac {j}{t} \frac {\gamma}{j} g _ {j, i} ^ {2} \\ = \gamma \sum_ {j = 1} ^ {t} g _ {j, i} ^ {2} \\ \end{array} +$$ + +where the inequality follows from $\beta_{2t} \geq 1 - \frac{1}{t}$ and $1 - \beta_{2t} \geq \frac{\gamma}{t}$ . + +# C PROOF OF LEMMA 2 + +We begin with the following lemma that is central to our analysis. + +Lemma 3. For all $i \in [d]$ and $t \in [T]$ , we have + +$$ +\sum_ {j = 1} ^ {T} \frac {g _ {j , i} ^ {2}}{\sum_ {k = 1} ^ {j} g _ {k , i} ^ {2} + \zeta \delta} \leq \log \left(\frac {\sum_ {j = 1} ^ {T} g _ {j , i} ^ {2}}{\zeta \delta} + 1\right). \tag {28} +$$ + +Proof.
For any $a \geq b > 0$ , the inequality $1 + x \leq e^{x}$ implies that + +$$ +\frac {1}{a} (a - b) \leq \log \frac {a}{b}. \tag {29} +$$ + +Let $m_0 = \zeta \delta$ , and $m_j = \sum_{k=1}^{j} g_{k,i}^2 + \zeta \delta > 0$ . By (29), we have + +$$ +\frac {g _ {j , i} ^ {2}}{\sum_ {k = 1} ^ {j} g _ {k , i} ^ {2} + \zeta \delta} = \frac {m _ {j} - m _ {j - 1}}{m _ {j}} \leq \log \frac {m _ {j}}{m _ {j - 1}}. +$$ + +Summing over $j$ from 1 to $T$ , we have + +$$ +\sum_ {j = 1} ^ {T} \frac {g _ {j , i} ^ {2}}{\sum_ {k = 1} ^ {j} g _ {k , i} ^ {2} + \zeta \delta} \leq \log \frac {m _ {T}}{m _ {0}} = \log \left(\frac {\sum_ {j = 1} ^ {T} g _ {j , i} ^ {2}}{\zeta \delta} + 1\right). +$$ + +Now we turn to the proof of Lemma 2. First, expanding the last term in the summation by the update rule of Algorithm 1, we get + +$$ +\alpha_ {T} \| \hat {\mathbf {g}} _ {T} \| _ {\hat {V} _ {T} ^ {- 1}} ^ {2} = \alpha_ {T} \sum_ {i = 1} ^ {d} \frac {\hat {g} _ {T , i} ^ {2}}{v _ {T , i} + \frac {\delta}{T}} = \alpha \sum_ {i = 1} ^ {d} \frac {\left(\sum_ {j = 1} ^ {T} (1 - \beta_ {1 j}) \prod_ {k = 1} ^ {T - j} \beta_ {1 (T - k + 1)} g _ {j , i}\right) ^ {2}}{T \sum_ {j = 1} ^ {T} (1 - \beta_ {2 j}) \Pi_ {k = 1} ^ {T - j} \beta_ {2 (T - k + 1)} g _ {j , i} ^ {2} + \delta}.
\tag {30} +$$ + +The above equality can be further bounded as + +$$ +\begin{array}{l} \alpha_ {T} \| \hat {\mathbf {g}} _ {T} \| _ {\hat {V} _ {T} ^ {- 1}} ^ {2} \leq \alpha \sum_ {i = 1} ^ {d} \frac {\left(\sum_ {j = 1} ^ {T} \prod_ {k = 1} ^ {T - j} \beta_ {1 (T - k + 1)} g _ {j , i}\right) ^ {2}}{T \sum_ {j = 1} ^ {T} \left(1 - \beta_ {2 j}\right) \Pi_ {k = 1} ^ {T - j} \beta_ {2 (T - k + 1)} g _ {j , i} ^ {2} + \delta} \\ \leq \alpha \sum_ {i = 1} ^ {d} \frac {\left(\sum_ {j = 1} ^ {T} \prod_ {k = 1} ^ {T - j} \beta_ {1 (T - k + 1)}\right) \left(\sum_ {j = 1} ^ {T} \prod_ {k = 1} ^ {T - j} \beta_ {1 (T - k + 1)} g _ {j , i} ^ {2}\right)}{T \sum_ {j = 1} ^ {T} \left(1 - \beta_ {2 j}\right) \Pi_ {k = 1} ^ {T - j} \beta_ {2 (T - k + 1)} g _ {j , i} ^ {2} + \delta} \\ \leq \alpha \sum_ {i = 1} ^ {d} \frac {\left(\sum_ {j = 1} ^ {T} \beta_ {1} ^ {T - j}\right) \left(\sum_ {j = 1} ^ {T} \prod_ {k = 1} ^ {T - j} \beta_ {1 (T - k + 1)} g _ {j , i} ^ {2}\right)}{T \sum_ {j = 1} ^ {T} \left(1 - \beta_ {2 j}\right) \Pi_ {k = 1} ^ {T - j} \beta_ {2 (T - k + 1)} g _ {j , i} ^ {2} + \delta} \tag {31} \\ \leq \frac {\alpha}{(1 - \beta_ {1})} \sum_ {i = 1} ^ {d} \frac {\sum_ {j = 1} ^ {T} \beta_ {1} ^ {T - j} g _ {j , i} ^ {2}}{T \sum_ {j = 1} ^ {T} (1 - \beta_ {2 j}) \Pi_ {k = 1} ^ {T - j} \beta_ {2 (T - k + 1)} g _ {j , i} ^ {2} + \delta} \\ \stackrel {(9)} {\leq} \frac {\alpha \zeta}{(1 - \beta_ {1})} \sum_ {i = 1} ^ {d} \frac {\sum_ {j = 1} ^ {T} \beta_ {1} ^ {T - j} g _ {j , i} ^ {2}}{\sum_ {j = 1} ^ {T} g _ {j , i} ^ {2} + \zeta \delta} \leq \frac {\alpha \zeta}{(1 - \beta_ {1})} \sum_ {i = 1} ^ {d} \sum_ {j = 1} ^ {T} \beta_ {1} ^ {T - j} \frac {g _ {j , i} ^ {2}}{\sum_ {k = 1} ^ {j} g _ {k , i} ^ {2} + \zeta \delta} \\ \end{array} +$$ + +where the first inequality is due to $1 - \beta_{1j} \leq 1$ , the second inequality follows from Cauchy-Schwarz inequality, the third inequality is due to $\beta_{1t} \leq \beta_1$ . 
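The per-coordinate ratios that remain after this chain are exactly the ones that Lemma 3 collapses into a logarithm. Inequality (28) itself is easy to sanity-check numerically; the sketch below uses random squared-gradient values and an arbitrary positive value standing in for $\zeta\delta$ (illustrative only, not part of the proof):

```python
import math
import random

random.seed(0)
zeta_delta = 0.3                                       # stands in for zeta * delta > 0
g2 = [random.uniform(0.0, 4.0) for _ in range(200)]    # squared gradient entries g_{j,i}^2

prefix, lhs = 0.0, 0.0
for g in g2:
    prefix += g                                        # running sum_{k<=j} g_{k,i}^2
    lhs += g / (prefix + zeta_delta)                   # left-hand side of (28)
rhs = math.log(sum(g2) / zeta_delta + 1)               # right-hand side of (28)
assert lhs <= rhs + 1e-12
```

This logarithmic collapse is the source of the $\log T$-type factor in the final regret bound.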
Let $r_j = g_{j,i}^2 / (\sum_{k=1}^{j} g_{k,i}^2 + \zeta \delta)$ . Using similar arguments for all time steps and summing over $t$ from 1 to $T$ , we have + +$$ +\begin{array}{l} \sum_ {t = 1} ^ {T} \alpha_ {t} \| \hat {\mathbf {g}} _ {t} \| _ {\hat {V} _ {t} ^ {- 1}} ^ {2} \leq \frac {\alpha \zeta}{(1 - \beta_ {1})} \sum_ {i = 1} ^ {d} \sum_ {t = 1} ^ {T} \sum_ {j = 1} ^ {t} \beta_ {1} ^ {t - j} r _ {j} \\ = \frac {\alpha \zeta}{(1 - \beta_ {1})} \sum_ {i = 1} ^ {d} \sum_ {j = 1} ^ {T} \sum_ {l = 0} ^ {T - j} \beta_ {1} ^ {l} r _ {j} \\ = \frac {\alpha \zeta}{\left(1 - \beta_ {1}\right)} \sum_ {i = 1} ^ {d} \sum_ {j = 1} ^ {T} \frac {\sum_ {l = 0} ^ {T - j} \beta_ {1} ^ {l} g _ {j , i} ^ {2}}{\sum_ {k = 1} ^ {j} g _ {k , i} ^ {2} + \zeta \delta} \tag {32} \\ \leq \frac {\alpha \zeta}{(1 - \beta_ {1}) ^ {2}} \sum_ {i = 1} ^ {d} \sum_ {j = 1} ^ {T} \frac {g _ {j , i} ^ {2}}{\sum_ {k = 1} ^ {j} g _ {k , i} ^ {2} + \zeta \delta} \\ \stackrel {(2 8)} {\leq} \frac {\alpha \zeta}{(1 - \beta_ {1}) ^ {2}} \sum_ {i = 1} ^ {d} \log \left(\frac {\sum_ {j = 1} ^ {T} g _ {j , i} ^ {2}}{\zeta \delta} + 1\right). \\ \end{array} +$$ + +# D SADAM WITH A DECAYING REGULARIZATION FACTOR + +Algorithm 2 SAdam with time-variant $\delta_t$ (SAdamD) + +1: Input: $\{\beta_{1t}\}_{t = 1}^{T},\{\beta_{2t}\}_{t = 1}^{T},\{\delta_t\}_{t = 1}^T$ +2: Initialize: $\hat{\mathbf{g}}_0 = \mathbf{0}$ , $\hat{V}_0 = \mathbf{0}_{d\times d},\mathbf{x}_1 = \mathbf{0}$ .
+3: for $t = 1, \dots, T$ do +4: $\mathbf{g}_t = \nabla f_t(\mathbf{x}_t)$ +5: $\hat{\mathbf{g}}_t = \beta_{1t}\hat{\mathbf{g}}_{t - 1} + (1 - \beta_{1t})\mathbf{g}_t$ +6: $V_{t} = \beta_{2t}V_{t - 1} + (1 - \beta_{2t})\mathrm{diag}(\mathbf{g}_{t}\mathbf{g}_{t}^{\top})$ +7: $\hat{V}_t = V_t + \mathrm{diag}\left(\frac{\delta_t}{t}\right)$ +8: $\mathbf{x}_{t + 1} = \Pi_{\mathcal{D}}^{\hat{V}_t}\left(\mathbf{x}_t - \frac{\alpha}{t}\hat{V}_t^{-1}\hat{\mathbf{g}}_t\right)$ +9: end for + +In this section, we establish a generalized version of SAdam, which employs a time-variant regularization factor $\delta_{t,i}$ for each dimension $i$ , instead of a fixed one for all $i \in [d]$ and $t \in [T]$ as in the original SAdam. The algorithm is referred to as SAdamD and summarized in Algorithm 2. It can be seen that our SAdamD reduces to SC-RMSprop with time-variant $\delta_t$ when $\beta_{1t} = 0$ and $1 - \frac{1}{t} \leq \beta_{2t} \leq 1 - \frac{\gamma}{t}$ . For SAdamD, we prove the following theoretical guarantee: + +Theorem 4. Suppose Assumptions 1 and 2 hold, and all loss functions $f_{1}(\cdot), \ldots, f_{T}(\cdot)$ are $\lambda$ -strongly convex. Let $\{\delta_{t,i}\}_{t=1}^{T} \in (0,1)^{T}$ be a non-increasing sequence for all $i \in [d]$ , $\beta_{1t} = \beta_{1}\nu^{t-1}$ where $\beta_{1} \in [0,1)$ , $\nu \in [0,1)$ , and $\{\beta_{2t}\}_{t=1}^{T} \in [0,1)^{T}$ be a parameter sequence such that Conditions 3 and 4 are satisfied. Let $\alpha \geq \frac{C}{\lambda}$ . The regret of SAdamD satisfies + +$$ +R (T) \leq \frac {D _ {\infty} ^ {2} \sum_ {i = 1} ^ {d} \delta_ {1 , i}}{2 \alpha \left(1 - \beta_ {1}\right)} + \frac {\alpha \zeta}{\left(1 - \beta_ {1}\right) ^ {3}} \sum_ {i = 1} ^ {d} \log \left(\frac {1}{\zeta \delta_ {T , i}} \sum_ {j = 1} ^ {T} g _ {j, i} ^ {2} + 1\right) + \frac {\beta_ {1} D _ {\infty} ^ {2} \left(d G _ {\infty} ^ {2} + \sum_ {i = 1} ^ {d} \delta_ {1 , i}\right)}{2 \alpha \left(1 - \beta_ {1}\right) (\nu - 1) ^ {2}}.
\tag {33} +$$ + +By setting $\beta_{1t} = 0$ and $1 - \frac{1}{t} \leq \beta_{2t} \leq 1 - \frac{\gamma}{t}$ , we can derive the following regret bound for SC-RMSprop: + +Corollary 5. Suppose Assumptions 1 and 2 hold, and all loss functions $f_{1}(\cdot), \ldots, f_{T}(\cdot)$ are $\lambda$ -strongly convex. Let $\{\delta_{t,i}\}_{t=1}^{T} \in (0,1)^{T}$ be a non-increasing sequence for all $i \in [d]$ , and $1 - \frac{1}{t} \leq \beta_{2t} \leq 1 - \frac{\gamma}{t}$ , where $\gamma \in (0,1]$ . Let $\alpha \geq \frac{(2 - \gamma)G_{\infty}^2}{\lambda}$ . Then SAdamD reduces to SC-RMSprop, and the regret satisfies + +$$ +R (T) \leq \frac {D _ {\infty} ^ {2} \sum_ {i = 1} ^ {d} \delta_ {1 , i}}{2 \alpha} + \frac {\alpha}{\gamma} \sum_ {i = 1} ^ {d} \log \left(\frac {\gamma}{\delta_ {T , i}} \sum_ {j = 1} ^ {T} g _ {j, i} ^ {2} + 1\right). \tag {34} +$$ + +Finally, we provide an instantiation of $\delta_t$ and derive the following Corollary. + +Corollary 6. Suppose Assumptions 1 and 2 hold, and all loss functions $f_{1}(\cdot), \ldots, f_{T}(\cdot)$ are $\lambda$ -strongly convex. Let $\delta_{t,i} = \frac{\xi_2}{1 + \xi_1\sum_{j=1}^{t}g_{j,i}^2}$ , where $\xi_{2} \in (0,1]$ and $\xi_{1} \geq 0$ are hyperparameters. Then we have $\delta_{t,i} \in (0,1]$ and is non-increasing $\forall i \in [d], t \in [T]$ . Let $1 - \frac{1}{t} \leq \beta_{2t} \leq 1 - \frac{\gamma}{t}$ , where $\gamma \in (0,1]$ , and $\alpha \geq \frac{(2 - \gamma)G_{\infty}^2}{\lambda}$ . Then SAdamD reduces to SC-RMSprop, and the regret satisfies + +$$ +R (T) \leq \frac {d D _ {\infty} ^ {2} \xi_ {2}}{2 \alpha} + \frac {\alpha}{\gamma} \sum_ {i = 1} ^ {d} \log \left(\gamma \sum_ {j = 1} ^ {T} g _ {j, i} ^ {2} + \xi_ {2}\right) + \frac {\alpha}{\gamma} \sum_ {i = 1} ^ {d} \log \left(\frac {\xi_ {1}}{\xi_ {2}} \sum_ {j = 1} ^ {T} g _ {j, i} ^ {2} + \frac {1}{\xi_ {2}}\right). 
\tag {35} +$$ + +# E PROOF OF THEOREM 4 + +By similar arguments as in the proof of Theorem 1, we can upper bound regret as + +$$ +\begin{array}{l} R (T) \leq \underbrace {\sum_ {t = 1} ^ {T} \left(\frac {\| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| _ {\hat {V} _ {t}} ^ {2} - \| \mathbf {x} _ {t + 1} - \mathbf {x} _ {*} \| _ {\hat {V} _ {t}} ^ {2}}{2 \alpha_ {t} (1 - \beta_ {1 t})} - \frac {\lambda}{2} \| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| ^ {2}\right)} _ {P _ {1}} \\ + \underbrace {\frac {1}{2 (1 - \beta_ {1})} \sum_ {t = 2} ^ {T} \alpha_ {t - 1} \| \hat {\mathbf {g}} _ {t - 1} \| _ {V _ {t - 1} ^ {- 1}} ^ {2} + \frac {\sum_ {t = 1} ^ {T} \alpha_ {t} \| \hat {\mathbf {g}} _ {t} \| _ {V _ {t} ^ {- 1}} ^ {2}}{2 (1 - \beta_ {1})}} _ {P _ {2}} + \underbrace {\sum_ {t = 2} ^ {T} \frac {\beta_ {1 t}}{2 \alpha_ {t - 1} (1 - \beta_ {1 t})} \| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| _ {V _ {t - 1}} ^ {2}} _ {P _ {3}}. \\ \end{array} +$$ + +To bound $P_{1}$ , based on (18), we have + +$$ +\begin{array}{l} P _ {1} \leq \sum_ {t = 2} ^ {T} \frac {1}{2 \alpha \left(1 - \beta_ {1 t}\right)} \left(t \| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| _ {\hat {V} _ {t}} ^ {2} - (t - 1) \| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| _ {\hat {V} _ {t - 1}} ^ {2} - \lambda \alpha \left(1 - \beta_ {1 t}\right) \| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| ^ {2}\right) \tag {36} \\ + \left(\frac {\| \mathbf {x} _ {1} - \mathbf {x} _ {*} \| _ {V _ {1}} ^ {2}}{2 \alpha_ {1} (1 - \beta_ {1})} - \frac {\lambda}{2} \| \mathbf {x} _ {1} - \mathbf {x} _ {*} \| ^ {2}\right). 
\\ \end{array} +$$ + +For the first term in (36), we have + +$$ +\begin{array}{l} t \| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| _ {\hat {V} _ {t}} ^ {2} - (t - 1) \| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| _ {\hat {V} _ {t - 1}} ^ {2} - \lambda \alpha (1 - \beta_ {1 t}) \| \mathbf {x} _ {t} - \mathbf {x} _ {*} \| ^ {2} \\ = \sum_ {i = 1} ^ {d} (x _ {t, i} - x _ {\ast , i}) ^ {2} (t \hat {v} _ {t, i} - (t - 1) \hat {v} _ {t - 1, i} - \lambda \alpha (1 - \beta_ {1 t})) \\ = \sum_ {i = 1} ^ {d} \left(x _ {t, i} - x _ {*, i}\right) ^ {2} \left(t v _ {t, i} - (t - 1) v _ {t - 1, i} - \lambda \alpha \left(1 - \beta_ {1 t}\right) + \delta_ {t, i} - \delta_ {t - 1, i}\right) \tag {37} \\ \leq \sum_ {i = 1} ^ {d} (x _ {t, i} - x _ {* , i}) ^ {2} (\underbrace {\lambda \alpha (1 - \beta_ {1}) - \lambda \alpha (1 - \beta_ {1 t})} _ {\leq 0} + \underbrace {\delta_ {t , i} - \delta_ {t - 1 , i}} _ {\leq 0}) \\ \leq 0 \\ \end{array} +$$ + +For the second term of (36), we have + +$$ +\begin{array}{l} \left(\frac {\| \mathbf {x} _ {1} - \mathbf {x} _ {*} \| _ {\hat {V} _ {1}} ^ {2}}{2 \alpha_ {1} (1 - \beta_ {1})} - \frac {\lambda}{2} \| \mathbf {x} _ {1} - \mathbf {x} _ {*} \| ^ {2}\right) = \sum_ {i = 1} ^ {d} (x _ {1, i} - x _ {* , i}) ^ {2} \left(\frac {\hat {v} _ {1 , i} - \lambda \alpha (1 - \beta_ {1})}{2 \alpha (1 - \beta_ {1})}\right) \\ = \sum_ {i = 1} ^ {d} \left(x _ {1, i} - x _ {*, i}\right) ^ {2} \left(\frac {v _ {1 , i} - \lambda \alpha \left(1 - \beta_ {1}\right) + \delta_ {1 , i}}{2 \alpha \left(1 - \beta_ {1}\right)}\right) \tag {38} \\ \leq \frac {D _ {\infty} ^ {2} \sum_ {i = 1} ^ {d} \delta_ {1 , i}}{2 \alpha (1 - \beta_ {1})} \\ \end{array} +$$ + +where the inequality follows from Condition 3. Combining (36), (37) and (38), we have + +$$ +P _ {1} \leq \frac {D _ {\infty} ^ {2} \sum_ {i = 1} ^ {d} \delta_ {1 , i}}{2 \alpha \left(1 - \beta_ {1}\right)}. \tag {39} +$$ + +To bound $P_{2}$ , we first introduce the following lemma. + +Lemma 4.
The following inequality holds + +$$ +\sum_ {t = 1} ^ {T} \alpha_ {t} \| \hat {\mathbf {g}} _ {t} \| _ {\hat {V} _ {t} ^ {- 1}} ^ {2} \leq \frac {\alpha \zeta}{(1 - \beta_ {1}) ^ {2}} \sum_ {i = 1} ^ {d} \log \left(\frac {1}{\zeta \delta_ {T , i}} \sum_ {j = 1} ^ {T} g _ {j, i} ^ {2} + 1\right). \tag {40} +$$ + +The proof of Lemma 4 can be found in Appendix F. Based on Lemma 4, we have + +$$ +\begin{array}{l} P _ {2} = \frac {1}{2 (1 - \beta_ {1})} \sum_ {t = 2} ^ {T} \alpha_ {t - 1} \| \hat {\mathbf {g}} _ {t - 1} \| _ {\hat {V} _ {t - 1} ^ {- 1}} ^ {2} + \frac {\sum_ {t = 1} ^ {T} \alpha_ {t} \| \hat {\mathbf {g}} _ {t} \| _ {\hat {V} _ {t} ^ {- 1}} ^ {2}}{2 (1 - \beta_ {1})} \\ \leq \frac {1}{(1 - \beta_ {1})} \sum_ {t = 1} ^ {T} \alpha_ {t} \| \hat {\mathbf {g}} _ {t} \| _ {\hat {V} _ {t} ^ {- 1}} ^ {2} \tag {41} \\ \stackrel {(4 0)} {\leq} \frac {\alpha \zeta}{(1 - \beta_ {1}) ^ {3}} \sum_ {i = 1} ^ {d} \log \left(\frac {1}{\zeta \delta_ {T , i}} \sum_ {j = 1} ^ {T} g _ {j, i} ^ {2} + 1\right). 
\\ \end{array} +$$ + +Finally, we turn to upper bound $P_{3}$ : + +$$ +\begin{array}{l} P _ {3} \leq \sum_ {i = 1} ^ {d} \sum_ {t = 1} ^ {T} \frac {\beta_ {1 t}}{2 \alpha (1 - \beta_ {1 t})} (x _ {t, i} - x _ {*, i}) ^ {2} t \hat {v} _ {t, i} \\ \leq \frac {D _ {\infty} ^ {2}}{2 \alpha} \sum_ {i = 1} ^ {d} \sum_ {t = 1} ^ {T} \frac {\beta_ {1 t}}{1 - \beta_ {1 t}} t \left(G _ {\infty} ^ {2} + \delta_ {1, i}\right) \\ \leq \frac {\beta_ {1} D _ {\infty} ^ {2}}{2 \alpha} \sum_ {i = 1} ^ {d} \sum_ {t = 1} ^ {T} \frac {\nu^ {t - 1}}{1 - \beta_ {1}} t \left(G _ {\infty} ^ {2} + \delta_ {1, i}\right) \tag {42} \\ = \frac {\beta_ {1} D _ {\infty} ^ {2}}{2 \alpha} \sum_ {i = 1} ^ {d} \frac {\left(G _ {\infty} ^ {2} + \delta_ {1 , i}\right)}{1 - \beta_ {1}} \sum_ {t = 0} ^ {T - 1} \nu^ {t} (t + 1) \\ \stackrel {(2 4)} {\leq} \frac {\beta_ {1} D _ {\infty} ^ {2}}{2 \alpha} \sum_ {i = 1} ^ {d} \frac {\left(G _ {\infty} ^ {2} + \delta_ {1 , i}\right)}{\left(1 - \beta_ {1}\right) (\nu - 1) ^ {2}} \\ = \frac {\beta_ {1} D _ {\infty} ^ {2} (d G _ {\infty} ^ {2} + \sum_ {i = 1} ^ {d} \delta_ {1 , i})}{2 \alpha (1 - \beta_ {1}) (\nu - 1) ^ {2}}. \\ \end{array} +$$ + +We finish the proof by combining (39), (41) and (42). + +# F PROOF OF LEMMA 4 + +Expanding the last term in the summation by using the update rule of Algorithm 2, we have + +$$ +\alpha_ {T} \| \hat {\mathbf {g}} _ {T} \| _ {\hat {V} _ {T} ^ {- 1}} ^ {2} = \alpha_ {T} \sum_ {i = 1} ^ {d} \frac {\hat {g} _ {T , i} ^ {2}}{v _ {T , i} + \frac {\delta_ {T , i}}{T}} = \alpha \sum_ {i = 1} ^ {d} \frac {\left(\sum_ {j = 1} ^ {T} (1 - \beta_ {1 j}) \prod_ {k = 1} ^ {T - j} \beta_ {1 (T - k + 1)} g _ {j , i}\right) ^ {2}}{T \sum_ {j = 1} ^ {T} (1 - \beta_ {2 j}) \Pi_ {k = 1} ^ {T - j} \beta_ {2 (T - k + 1)} g _ {j , i} ^ {2} + \delta_ {T , i}}.
\tag {43} +$$ + +The above equality can be further bounded as + +$$ +\begin{array}{l} \alpha_ {T} \| \hat {\mathbf {g}} _ {T} \| _ {\hat {V} _ {T} ^ {- 1}} ^ {2} \leq \alpha \sum_ {i = 1} ^ {d} \frac {\left(\sum_ {j = 1} ^ {T} \prod_ {k = 1} ^ {T - j} \beta_ {1 (T - k + 1)}\right) \left(\sum_ {j = 1} ^ {T} \prod_ {k = 1} ^ {T - j} \beta_ {1 (T - k + 1)} g _ {j , i} ^ {2}\right)}{T \sum_ {j = 1} ^ {T} \left(1 - \beta_ {2 j}\right) \Pi_ {k = 1} ^ {T - j} \beta_ {2 (T - k + 1)} g _ {j , i} ^ {2} + \delta_ {T , i}} \\ \leq \alpha \sum_ {i = 1} ^ {d} \frac {\left(\sum_ {j = 1} ^ {T} \beta_ {1} ^ {T - j}\right) \left(\sum_ {j = 1} ^ {T} \prod_ {k = 1} ^ {T - j} \beta_ {1 (T - k + 1)} g _ {j , i} ^ {2}\right)}{T \sum_ {j = 1} ^ {T} \left(1 - \beta_ {2 j}\right) \Pi_ {k = 1} ^ {T - j} \beta_ {2 (T - k + 1)} g _ {j , i} ^ {2} + \delta_ {T , i}} \\ \leq \frac {\alpha}{(1 - \beta_ {1})} \sum_ {i = 1} ^ {d} \frac {\sum_ {j = 1} ^ {T} \beta_ {1} ^ {T - j} g _ {j , i} ^ {2}}{T \sum_ {j = 1} ^ {T} (1 - \beta_ {2 j}) \Pi_ {k = 1} ^ {T - j} \beta_ {2 (T - k + 1)} g _ {j , i} ^ {2} + \delta_ {T , i}} \tag {44} \\ \stackrel {(9)} {\leq} \frac {\alpha \zeta}{(1 - \beta_ {1})} \sum_ {i = 1} ^ {d} \frac {\sum_ {j = 1} ^ {T} \beta_ {1} ^ {T - j} g _ {j , i} ^ {2}}{\sum_ {j = 1} ^ {T} g _ {j , i} ^ {2} + \zeta \delta_ {T , i}} \\ \leq \frac {\alpha \zeta}{(1 - \beta_ {1})} \sum_ {i = 1} ^ {d} \sum_ {j = 1} ^ {T} \beta_ {1} ^ {T - j} \frac {g _ {j , i} ^ {2}}{\sum_ {k = 1} ^ {j} g _ {k , i} ^ {2} + \zeta \delta_ {T , i}}. \\ \end{array} +$$ + +The first inequality follows from Cauchy-Schwarz inequality and $1 - \beta_{1t} \leq 1$ , and the second inequality is due to $\beta_{1t} \leq \beta_{1}$ . Let $r_{j} = \frac{g_{j,i}^{2}}{\sum_{k=1}^{j} g_{k,i}^{2} + \zeta \delta_{T,i}}$ . 
By using similar arguments as in (32), we have + +$$ +\begin{array}{l} \sum_ {t = 1} ^ {T} \alpha_ {t} \| \hat {\mathbf {g}} _ {t} \| _ {\hat {V} _ {t} ^ {- 1}} ^ {2} \leq \frac {\alpha \zeta}{(1 - \beta_ {1})} \sum_ {i = 1} ^ {d} \sum_ {t = 1} ^ {T} \sum_ {j = 1} ^ {t} \beta_ {1} ^ {t - j} \frac {g _ {j , i} ^ {2}}{\sum_ {k = 1} ^ {j} g _ {k , i} ^ {2} + \zeta \delta_ {t , i}} \\ \leq \frac {\alpha \zeta}{(1 - \beta_ {1})} \sum_ {i = 1} ^ {d} \sum_ {t = 1} ^ {T} \sum_ {j = 1} ^ {t} \beta_ {1} ^ {t - j} \frac {g _ {j , i} ^ {2}}{\sum_ {k = 1} ^ {j} g _ {k , i} ^ {2} + \zeta \delta_ {T , i}} \\ = \frac {\alpha \zeta}{\left(1 - \beta_ {1}\right)} \sum_ {i = 1} ^ {d} \sum_ {t = 1} ^ {T} \sum_ {j = 1} ^ {t} \beta_ {1} ^ {t - j} r _ {j} \tag {45} \\ \leq \frac {\alpha \zeta}{\left(1 - \beta_ {1}\right)} \sum_ {i = 1} ^ {d} \sum_ {j = 1} ^ {T} \frac {\sum_ {l = 0} ^ {T - j} \beta_ {1} ^ {l} g _ {j , i} ^ {2}}{\sum_ {k = 1} ^ {j} g _ {k , i} ^ {2} + \zeta \delta_ {T , i}} \\ \leq \frac {\alpha \zeta}{(1 - \beta_ {1}) ^ {2}} \sum_ {i = 1} ^ {d} \sum_ {j = 1} ^ {T} \frac {g _ {j , i} ^ {2}}{\sum_ {k = 1} ^ {j} g _ {k , i} ^ {2} + \zeta \delta_ {T , i}} \\ \stackrel {(2 8)} {\leq} \frac {\alpha \zeta}{(1 - \beta_ {1}) ^ {2}} \sum_ {i = 1} ^ {d} \log \left(\frac {1}{\zeta \delta_ {T , i}} \sum_ {j = 1} ^ {T} g _ {j, i} ^ {2} + 1\right). \\ \end{array} +$$ + +# G ON THE CONVERGENCE OF AMSGRAD + +In this section, we first provide the AMSgrad algorithm and its theoretical guarantees (Reddi et al., 2018), then state a theoretical flaw in their analysis revealed by Tran et al. (2019), and finally propose a simple solution to fix this problem. + +The AMSgrad algorithm developed in Reddi et al. (2018) is summarized in Algorithm 3. + +# Algorithm 3 AMSgrad + +1: Input: $\{\beta_{1t}\}_{t=1}^{T}, \beta_{2}$ +2: Initialize: $\hat{\mathbf{g}}_0 = \mathbf{0}$ , $\hat{V}_0 = \mathbf{0}_{d\times d}$ , $\mathbf{x}_1 = \mathbf{0}$ .
+3: for $t = 1, \dots, T$ do +4: $\mathbf{g}_t = \nabla f_t(\mathbf{x}_t)$ +5: $\hat{\mathbf{g}}_t = \beta_{1t}\hat{\mathbf{g}}_{t - 1} + (1 - \beta_{1t})\mathbf{g}_t$ +6: $V_{t} = \beta_{2}V_{t - 1} + (1 - \beta_{2})\mathrm{diag}(\mathbf{g}_{t}\mathbf{g}_{t}^{\top})$ +7: $\hat{V}_t = \max \{V_t, \hat{V}_{t-1}\}$ +8: $\mathbf{x}_{t + 1} = \Pi_{\mathcal{D}}^{\sqrt{\hat{V}_t}}\left(\mathbf{x}_t - \alpha_t\hat{V}_t^{-1 / 2}\hat{\mathbf{g}}_t\right)$ , where $\alpha_{t} = \frac{\alpha}{\sqrt{t}}$ +9: end for + +For AMSgrad, Reddi et al. (2018) provide the following regret bound. + +Theorem 7 (Theorem 4 in Reddi et al. (2018), problematic). Suppose Assumptions 1 and 2 hold, and all loss functions $f_{1}(\cdot), \ldots, f_{T}(\cdot)$ are convex. Let $\delta > 0$ , $\beta_{1} > 0$ , $\beta_{1t} \leq \beta_{1}$ , and $\gamma = \frac{\beta_{1}}{\sqrt{\beta_{2}}} \leq 1$ . The regret of AMSgrad satisfies + +$$ +\begin{array}{l} R (T) \leq \frac {D _ {\infty} ^ {2} \sqrt {T}}{\alpha \left(1 - \beta_ {1}\right)} \sum_ {i = 1} ^ {d} \hat {v} _ {T, i} ^ {1 / 2} + \frac {D _ {\infty} ^ {2}}{2 \left(1 - \beta_ {1}\right)} \sum_ {t = 1} ^ {T} \sum_ {i = 1} ^ {d} \frac {\beta_ {1 t} \hat {v} _ {t , i} ^ {1 / 2}}{\alpha_ {t}} \tag {46} \\ + \frac {\alpha \sqrt {1 + \log T}}{(1 - \beta_ {1}) ^ {2} (1 - \gamma) \sqrt {(1 - \beta_ {2})}} \sum_ {i = 1} ^ {d} \| g _ {1: T, i} \|. \\ \end{array} +$$ + +Recently, Tran et al. (2019) point out a mistake in the proof of Theorem 7. Specifically, in Reddi et al. (2018), the following inequality is utilized (Proof of Lemma 2, Page 18): + +$$ +\begin{array}{l} \sum_ {t = 1} ^ {T} \left[ \frac {1}{2 \alpha_ {t} (1 - \beta_ {1 t})} \left[ \| \hat {V} _ {t} ^ {1 / 4} (\mathbf {x} _ {t} - \mathbf {x} _ {*}) \| ^ {2} - \| \hat {V} _ {t} ^ {1 / 4} (\mathbf {x} _ {t + 1} - \mathbf {x} _ {*}) \| ^ {2} \right] \right. 
\\ + \frac {\beta_ {1 t}}{2 \alpha_ {t} (1 - \beta_ {1 t})} \| \hat {V} _ {t} ^ {1 / 4} (\mathbf {x} _ {t} - \mathbf {x} _ {*}) \| ^ {2} \biggr ] + \frac {\alpha \sqrt {1 + \log T}}{(1 - \beta_ {1}) ^ {2} (1 - \gamma) \sqrt {(1 - \beta_ {2})}} \sum_ {i = 1} ^ {d} \| g _ {1: T, i} \| \\ \leq \frac {1}{2 \alpha_ {1} (1 - \beta_ {1})} \| \hat {V} _ {t} ^ {1 / 4} (\mathbf {x} _ {1} - \mathbf {x} _ {*}) \| ^ {2} + \frac {1}{2 (1 - \beta_ {1})} \sum_ {t = 2} ^ {T} \left[ \frac {\| \hat {V} _ {t} ^ {1 / 4} (\mathbf {x} _ {t} - \mathbf {x} _ {*}) \| ^ {2}}{\alpha_ {t}} - \frac {\| \hat {V} _ {t - 1} ^ {1 / 4} (\mathbf {x} _ {t} - \mathbf {x} _ {*}) \| ^ {2}}{\alpha_ {t - 1}} \right] \\ + \sum_ {t = 1} ^ {T} \left[ \frac {\beta_ {1 t}}{2 \alpha_ {t} (1 - \beta_ {1})} \| \hat {V} _ {t} ^ {1 / 4} \left(\mathbf {x} _ {t} - \mathbf {x} _ {*}\right) \| ^ {2} \right] + \frac {\alpha \sqrt {1 + \log T}}{(1 - \beta_ {1}) ^ {2} (1 - \gamma) (\sqrt {1 - \beta_ {2}})} \sum_ {i = 1} ^ {d} \| g _ {1: T, i} \| \tag {47} \\ \end{array} +$$ + +which, however, may not hold. To see this, we note that essentially (47) uses + +$$ +\begin{array}{l} \sum_ {t = 1} ^ {T} \frac {1}{2 \alpha_ {t} \left(1 - \beta_ {1 t}\right)} \left[ \| \hat {V} _ {t} ^ {1 / 4} \left(\mathbf {x} _ {t} - \mathbf {x} _ {*}\right) \| ^ {2} - \| \hat {V} _ {t} ^ {1 / 4} \left(\mathbf {x} _ {t + 1} - \mathbf {x} _ {*}\right) \| ^ {2} \right] \tag {48} \\ \leq \frac {1}{(1 - \beta_ {1})} \sum_ {t = 1} ^ {T} \frac {1}{2 \alpha_ {t}} \left[ \| \hat {V} _ {t} ^ {1 / 4} (\mathbf {x} _ {t} - \mathbf {x} _ {*}) \| ^ {2} - \| \hat {V} _ {t} ^ {1 / 4} (\mathbf {x} _ {t + 1} - \mathbf {x} _ {*}) \| ^ {2} \right] \\ \end{array} +$$ + +which holds only if $\beta_{1t} \leq \beta_1$ and $\| \hat{V}_t^{1/4}(\mathbf{x}_t - \mathbf{x}_*)\|^2 - \| \hat{V}_t^{1/4}(\mathbf{x}_{t+1} - \mathbf{x}_*)\|^2$ is non-negative. However, as empirically shown by Tran et al. (2019), the latter requirement can be violated in some counterexamples.
Note that similar problems exist in many recently proposed Adam variants. To address this issue, Tran et al. (2019) establish a new convergence proof of AMSGrad, which indicates an $O(d\sqrt{T})$ data-independent regret bound. Moreover, as an alternative, they also propose a variant of AMSgrad, called AdamX, which alters the structure of AMSgrad to force the inequality to be satisfied. For AdamX, they also give a new theoretical analysis and an $O(d\sqrt{T})$ data-independent regret bound. + +In this paper, we find that the above problem can be solved by simply configuring $\beta_{1t}$ of AMSgrad in a non-increasing manner, i.e., $\forall t\geq 2$ , $\beta_{1t}\leq \beta_{1(t - 1)}$ . Specifically, when $\beta_{1t}$ is non-increasing, we can rewrite (47) as + +$$ +\begin{array}{l} \sum_ {t = 1} ^ {T} \left[ \frac {1}{2 \alpha_ {t} (1 - \beta_ {1 t})} \left[ \| \hat {V} _ {t} ^ {1 / 4} (\mathbf {x} _ {t} - \mathbf {x} _ {*}) \| ^ {2} - \| \hat {V} _ {t} ^ {1 / 4} (\mathbf {x} _ {t + 1} - \mathbf {x} _ {*}) \| ^ {2} \right] \right. \\ \left.
+ \frac {\beta_ {1 t}}{2 \alpha_ {t} (1 - \beta_ {1 t})} \left\| \hat {V} _ {t} ^ {1 / 4} \left(\mathbf {x} _ {t} - \mathbf {x} _ {*}\right) \right\| ^ {2} \right] + \frac {\alpha \sqrt {1 + \log T}}{(1 - \beta_ {1}) ^ {2} (1 - \gamma) \sqrt {(1 - \beta_ {2})}} \sum_ {i = 1} ^ {d} \| g _ {1: T, i} \| \\ \leq \frac {1}{2 \alpha_ {1} (1 - \beta_ {1})} \| \hat {V} _ {1} ^ {1 / 4} (\mathbf {x} _ {1} - \mathbf {x} _ {*}) \| ^ {2} + \frac {1}{2} \sum_ {t = 2} ^ {T} \left[ \frac {\| \hat {V} _ {t} ^ {1 / 4} (\mathbf {x} _ {t} - \mathbf {x} _ {*}) \| ^ {2}}{\alpha_ {t} (1 - \beta_ {1 t})} - \frac {\| \hat {V} _ {t - 1} ^ {1 / 4} (\mathbf {x} _ {t} - \mathbf {x} _ {*}) \| ^ {2}}{\alpha_ {t - 1} (1 - \beta_ {1 (t - 1)})} \right] \\ + \sum_ {t = 1} ^ {T} \left[ \frac {\beta_ {1 t}}{2 \alpha_ {t} (1 - \beta_ {1})} \| \hat {V} _ {t} ^ {1 / 4} (\mathbf {x} _ {t} - \mathbf {x} _ {*}) \| ^ {2} \right] + \frac {\alpha \sqrt {1 + \log T}}{(1 - \beta_ {1}) ^ {2} (1 - \gamma) (\sqrt {1 - \beta_ {2}})} \sum_ {i = 1} ^ {d} \| g _ {1: T, i} \| \\ \leq \frac {1}{2 \alpha_ {1} (1 - \beta_ {1})} \| \hat {V} _ {1} ^ {1 / 4} (\mathbf {x} _ {1} - \mathbf {x} _ {*}) \| ^ {2} + \frac {1}{2} \sum_ {t = 2} ^ {T} \left[ \frac {\| \hat {V} _ {t} ^ {1 / 4} (\mathbf {x} _ {t} - \mathbf {x} _ {*}) \| ^ {2}}{\alpha_ {t} (1 - \beta_ {1 t})} - \frac {\| \hat {V} _ {t - 1} ^ {1 / 4} (\mathbf {x} _ {t} - \mathbf {x} _ {*}) \| ^ {2}}{\alpha_ {t - 1} (1 - \beta_ {1 t})} \right] \\ + \sum_ {t = 1} ^ {T} \left[ \frac {\beta_ {1 t}}{2 \alpha_ {t} (1 - \beta_ {1})} \| \hat {V} _ {t} ^ {1 / 4} (\mathbf {x} _ {t} - \mathbf {x} _ {*}) \| ^ {2} \right] + \frac {\alpha \sqrt {1 + \log T}}{(1 - \beta_ {1}) ^ {2} (1 - \gamma) (\sqrt {1 - \beta_ {2}})} \sum_ {i = 1} ^ {d} \| g _ {1: T, i} \| \\ \end{array} +$$ + +$$ +\begin{array}{l} \leq \frac {1}{2 \alpha_ {1} (1 - \beta_ {1})} \| \hat {V} _ {1} ^ {1 / 4} (\mathbf {x} _ {1} - \mathbf {x} _ {*}) \| ^ {2} + \frac {1}{2 (1 - \beta_ {1})} \sum_ {t = 2} ^ {T} \left[ \frac
{\| \hat {V} _ {t} ^ {1 / 4} (\mathbf {x} _ {t} - \mathbf {x} _ {*}) \| ^ {2}}{\alpha_ {t}} - \frac {\| \hat {V} _ {t - 1} ^ {1 / 4} (\mathbf {x} _ {t} - \mathbf {x} _ {*}) \| ^ {2}}{\alpha_ {t - 1}} \right] \\ + \sum_ {t = 1} ^ {T} \left[ \frac {\beta_ {1 t}}{2 \alpha_ {t} (1 - \beta_ {1})} \| \hat {V} _ {t} ^ {1 / 4} (\mathbf {x} _ {t} - \mathbf {x} _ {*}) \| ^ {2} \right] + \frac {\alpha \sqrt {1 + \log T}}{(1 - \beta_ {1}) ^ {2} (1 - \gamma) (\sqrt {1 - \beta_ {2}})} \sum_ {i = 1} ^ {d} \| g _ {1: T, i} \| \\ \end{array} +$$ + +where the second inequality is derived from $\beta_{1t} \leq \beta_{1(t-1)}$ , and the last inequality is due to the fact that $\beta_{1t} \leq \beta_1$ and $\left[\frac{\|\hat{V}_t^{1/4}(\mathbf{x}_t - \mathbf{x}_*)\|^2}{\alpha_t} - \frac{\|\hat{V}_{t-1}^{1/4}(\mathbf{x}_t - \mathbf{x}_*)\|^2}{\alpha_{t-1}}\right]$ is non-negative. In this way, the proof of AMSgrad can proceed, and the algorithm structure as well as the conclusion in Theorem 7 remain unchanged. + +To summarize, we restate Theorem 7 as follows. + +Theorem 8 (Fixed theoretical guarantee of AMSgrad). Suppose Assumptions 1 and 2 hold, and all loss functions $f_{1}(\cdot), \ldots, f_{T}(\cdot)$ are convex. Let $\delta > 0$ , $\beta_{1} > 0$ , $\beta_{1t} \leq \beta_{1(t-1)}$ where $t \geq 2$ , $\beta_{11} = \beta_{1}$ , and $\gamma = \frac{\beta_{1}}{\sqrt{\beta_{2}}} \leq 1$ . The regret of AMSgrad satisfies + +$$ +\begin{array}{l} R (T) \leq \frac {D _ {\infty} ^ {2} \sqrt {T}}{\alpha \left(1 - \beta_ {1}\right)} \sum_ {i = 1} ^ {d} \hat {v} _ {T, i} ^ {1 / 2} + \frac {D _ {\infty} ^ {2}}{2 \left(1 - \beta_ {1}\right)} \sum_ {t = 1} ^ {T} \sum_ {i = 1} ^ {d} \frac {\beta_ {1 t} \hat {v} _ {t , i} ^ {1 / 2}}{\alpha_ {t}} \tag {49} \\ + \frac {\alpha \sqrt {1 + \log T}}{(1 - \beta_ {1}) ^ {2} (1 - \gamma) \sqrt {(1 - \beta_ {2})}} \sum_ {i = 1} ^ {d} \| g _ {1: T, i} \|. 
\\ \end{array}
$$

# H EXPERIMENTS ON RESNET18

![](images/bc87dc071a7be693b70c8b0a6261804fcbebf060c51897a425d6dab8498a79d6.jpg)
(a) Training loss vs. number of epochs

![](images/b4e3cf3aba1eed10c146b66f2e8bccd37c08c983802e3289816af035a988f6f0.jpg)
(b) Testing accuracy vs. number of epochs

Fig. 4: Experimental results of ResNet18 on the CIFAR10 dataset

In this section, we conduct the image classification task on the CIFAR10 dataset using ResNet18 (He et al., 2016). The parameter configuration of each algorithm is the same as that in Section 4. We repeat each experiment 10 times and report the average. The training loss vs. epoch is shown in Fig. 4a, and the testing accuracy vs. epoch is shown in Fig. 4b. As can be seen, our proposed SAdam outperforms the other algorithms. \ No newline at end of file diff --git a/sadamavariantofadamforstronglyconvexfunctions/images.zip b/sadamavariantofadamforstronglyconvexfunctions/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..a60dfd134c2f0a637e5440248b6f3287530d6f68 --- /dev/null +++ b/sadamavariantofadamforstronglyconvexfunctions/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8963a13c9028733489a50bff991705f33d2a35a2ffdc63260940c1816272fef9 +size 1368673 diff --git a/sadamavariantofadamforstronglyconvexfunctions/layout.json b/sadamavariantofadamforstronglyconvexfunctions/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..a32f02f59e651d8945935b30eb1d20dbc025320f --- /dev/null +++ b/sadamavariantofadamforstronglyconvexfunctions/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:054c5c773a044b5db4854a57b015880869ac36d478d187dca479882280213999 +size 839259 diff --git a/sampleefficientpolicygradientmethodswithrecursivevariancereduction/19a71f6d-d3cd-41fe-8c2c-c4a9068e2ada_content_list.json
b/sampleefficientpolicygradientmethodswithrecursivevariancereduction/19a71f6d-d3cd-41fe-8c2c-c4a9068e2ada_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..0a18e837f979dd1ddbcad5ae0ef54e6be98c00bf --- /dev/null +++ b/sampleefficientpolicygradientmethodswithrecursivevariancereduction/19a71f6d-d3cd-41fe-8c2c-c4a9068e2ada_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4e36f7b3b27db733be70c53f954a6bb6412b28659506824bc80b9a7a9185bc9e +size 174040 diff --git a/sampleefficientpolicygradientmethodswithrecursivevariancereduction/19a71f6d-d3cd-41fe-8c2c-c4a9068e2ada_model.json b/sampleefficientpolicygradientmethodswithrecursivevariancereduction/19a71f6d-d3cd-41fe-8c2c-c4a9068e2ada_model.json new file mode 100644 index 0000000000000000000000000000000000000000..5c6b55b801cd94f0b40957d7523c3afa421bfba5 --- /dev/null +++ b/sampleefficientpolicygradientmethodswithrecursivevariancereduction/19a71f6d-d3cd-41fe-8c2c-c4a9068e2ada_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:577e320c906d65db936daf79573b7a6d0cdbc5fa9cfb8be711647d32133dac2e +size 203798 diff --git a/sampleefficientpolicygradientmethodswithrecursivevariancereduction/19a71f6d-d3cd-41fe-8c2c-c4a9068e2ada_origin.pdf b/sampleefficientpolicygradientmethodswithrecursivevariancereduction/19a71f6d-d3cd-41fe-8c2c-c4a9068e2ada_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..255e297f640a703ac5e8e426834a08b57291c206 --- /dev/null +++ b/sampleefficientpolicygradientmethodswithrecursivevariancereduction/19a71f6d-d3cd-41fe-8c2c-c4a9068e2ada_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7e648b2a47ba972e84439b599e9fe8f6d29aa0e1bd31037b85002027514ad23d +size 727469 diff --git a/sampleefficientpolicygradientmethodswithrecursivevariancereduction/full.md b/sampleefficientpolicygradientmethodswithrecursivevariancereduction/full.md new file mode 100644 index 
0000000000000000000000000000000000000000..1530e54330da5148b2e61f37bb58a7c81ea21b70 --- /dev/null +++ b/sampleefficientpolicygradientmethodswithrecursivevariancereduction/full.md @@ -0,0 +1,777 @@ +# SAMPLE EFFICIENT POLICY GRADIENT METHODS WITH RECURSIVE VARIANCE REDUCTION + +Pan Xu, Felicia Gao, Quanquan Gu + +Department of Computer Science + +University of California, Los Angeles + +Los Angeles, CA 90094, USA + +panxu@cs.ucla.edu, fxgao1160@engineering.ucla.edu, qgu@cs.ucla.edu + +# ABSTRACT + +Improving the sample efficiency in reinforcement learning has been a long-standing research problem. In this work, we aim to reduce the sample complexity of existing policy gradient methods. We propose a novel policy gradient algorithm called SRVR-PG, which only requires $O(1 / \epsilon^{3/2})^1$ episodes to find an $\epsilon$ -approximate stationary point of the nonconcave performance function $J(\pmb{\theta})$ (i.e., $\pmb{\theta}$ such that $\| \nabla J(\pmb{\theta}) \|_2^2 \leq \epsilon$ ). This sample complexity improves the existing result $O(1 / \epsilon^{5/3})$ for stochastic variance reduced policy gradient algorithms by a factor of $O(1 / \epsilon^{1/6})$ . In addition, we also propose a variant of SRVR-PG with parameter exploration, which explores the initial policy parameter from a prior probability distribution. We conduct numerical experiments on classic control problems in reinforcement learning to validate the performance of our proposed algorithms. + +# 1 INTRODUCTION + +Reinforcement learning (RL) (Sutton & Barto, 2018) has received significant success in solving various complex problems such as learning robotic motion skills (Levine et al., 2015), autonomous driving (Shalev-Shwartz et al., 2016) and Go game (Silver et al., 2017), where the agent progressively interacts with the environment in order to learn a good policy to solve the task. 
In RL, the agent makes its decision by choosing the action based on the current state and the historical rewards it has received so far. After performing the chosen action, the agent's state will change according to some transition probability model and a new reward would be revealed to the agent by the environment based on the action and new state. Then the agent continues to choose the next action until it reaches a terminal state. The aim of the agent is to maximize its expected cumulative rewards. Therefore, the pivotal problem in RL is to find a good policy which is a function that maps the state space to the action space and thus informs the agent which action to take at each state. To optimize the agent's policy in the high dimensional continuous action space, the most popular approach is the policy gradient method (Sutton et al., 2000) that parameterizes the policy by an unknown parameter $\theta \in \mathbb{R}^d$ and directly optimizes the policy by finding the optimal $\theta$ . The objective function $J(\theta)$ is chosen to be the performance function, which is the expected return under a specific policy and is usually non-concave. Our goal is to maximize the value of $J(\theta)$ by finding a stationary point $\theta^{*}$ such that $\| \nabla J(\theta^{*})\|_{2} = 0$ using gradient based algorithms. + +Due to the expectation in the definition of $J(\theta)$ , it is usually infeasible to compute the gradient exactly. In practice, one often uses stochastic gradient estimators such as REINFORCE (Williams, 1992), PGT (Sutton et al., 2000) and GPOMDP (Baxter & Bartlett, 2001) to approximate the gradient of the expected return based on a batch of sampled trajectories. However, this approximation will introduce additional variance and slow down the convergence of policy gradient, which thus requires a huge amount of trajectories to find a good policy. 
Theoretically, these stochastic gradient (SG) based algorithms require $O(1 / \epsilon^2)$ trajectories (Robbins & Monro, 1951) to find an $\epsilon$ -approximate stationary point such that $\mathbb{E}[\| \nabla J(\pmb {\theta})\| _2^2 ]\leq \epsilon$ . In order to reduce the variance of policy gradient algorithms, Papini et al. (2018) proposed a stochastic variance-reduced policy gradient (SVRPG) + +Table 1: Comparison on sample complexities of different algorithms to achieve $\| \nabla J(\pmb {\theta})\| _2^2\leq \epsilon$ + +
| Algorithms | Complexity |
| --- | --- |
| REINFORCE (Williams, 1992) | $O(1/\epsilon^2)$ |
| PGT (Sutton et al., 2000) | $O(1/\epsilon^2)$ |
| GPOMDP (Baxter & Bartlett, 2001) | $O(1/\epsilon^2)$ |
| SVRPG (Papini et al., 2018) | $O(1/\epsilon^2)$ |
| SVRPG (Xu et al., 2019) | $O(1/\epsilon^{5/3})$ |
| SRVR-PG (This paper) | $O(1/\epsilon^{3/2})$ |
+ +algorithm by borrowing the idea from the stochastic variance reduced gradient (SVRG) (Johnson & Zhang, 2013; Allen-Zhu & Hazan, 2016; Reddi et al., 2016a) in stochastic optimization. The key idea is to use a so-called semi-stochastic gradient to replace the stochastic gradient used in SG methods. The semi-stochastic gradient combines the stochastic gradient in the current iterate with a snapshot of stochastic gradient stored in an early iterate which is called a reference iterate. In practice, SVRPG saves computation on trajectories and improves the performance of SG based policy gradient methods. Papini et al. (2018) also proved that SVRPG converges to an $\epsilon$ -approximate stationary point $\pmb{\theta}$ of the nonconcave performance function $J(\pmb{\theta})$ with $\mathbb{E}[\|\nabla J(\pmb{\theta})\|_2^2] \leq \epsilon$ after $O(1/\epsilon^2)$ trajectories, which seems to have the same sample complexity as SG based methods. Recently, the sample complexity of SVRPG has been improved to $O(1/\epsilon^{5/3})$ by a refined analysis (Xu et al., 2019), which theoretically justifies the advantage of SVRPG over SG based methods. + +This paper continues on this line of research. We propose a Stochastic Recursive Variance Reduced Policy Gradient algorithm (SRVR-PG), which provably improves the sample complexity of SVRPG. At the core of our proposed algorithm is a recursive semi-stochastic policy gradient inspired from the stochastic path-integrated differential estimator (Fang et al., 2018), which accumulates all the stochastic gradients from different iterates to reduce the variance. We prove that SRVR-PG only takes $O(1/\epsilon^{3/2})$ trajectories to converge to an $\epsilon$ -approximate stationary point $\pmb{\theta}$ of the performance function, i.e., $\mathbb{E}[\|\nabla J(\pmb{\theta})\|_2^2] \leq \epsilon$ . We summarize the comparison of SRVR-PG with existing policy gradient methods in terms of sample complexity in Table 1. 
Evidently, the sample complexity of SRVR-PG is lower than that of REINFORCE, PGT and GPOMDP by a factor of $O(1/\epsilon^{1/2})$ , and is lower than that of SVRPG (Xu et al., 2019) by a factor of $O(1/\epsilon^{1/6})$ . + +In addition, we integrate our algorithm with parameter-based exploration (PGPE) method (Sehnke et al., 2008; 2010), and propose a SRVR-PG-PE algorithm which directly optimizes the prior probability distribution of the policy parameter $\theta$ instead of finding the best value. The proposed SRVR-PG-PE enjoys the same trajectory complexity as SRVR-PG and performs even better in some applications due to its additional exploration over the parameter space. Our experimental results on classical control tasks in reinforcement learning demonstrate the superior performance of the proposed SRVR-PG and SRVR-PG-PE algorithms and verify our theoretical analysis. + +# 1.1 ADDITIONAL RELATED WORK + +We briefly review additional relevant work to ours with a focus on policy gradient based methods. For other RL methods such as value based (Watkins & Dayan, 1992; Mnih et al., 2015) and actor-critic (Konda & Tsitsiklis, 2000; Peters & Schaal, 2008a; Silver et al., 2014) methods, we refer the reader to Peters & Schaal (2008b); Kober et al. (2013); Sutton & Barto (2018) for a complete review. + +To reduce the variance of policy gradient methods, early works have introduced unbiased baseline functions (Baxter & Bartlett, 2001; Greensmith et al., 2004; Peters & Schaal, 2008b) to reduce the variance, which can be constant, time-dependent or state-dependent. Schulman et al. (2015b) proposed the generalized advantage estimation (GAE) to explore the trade-off between bias and variance of policy gradient. Recently, action-dependent baselines are also used in Tucker et al. (2018); Wu et al. (2018) which introduces bias but reduces variance at the same time. Sehnke et al. 
(2008; 2010) proposed policy gradient with parameter-based exploration (PGPE) that explores in the parameter space. It has been shown that PGPE enjoys a much smaller variance (Zhao et al., + +2011). The Stein variational policy gradient method is proposed in Liu et al. (2017). See Peters & Schaal (2008b); Deisenroth et al. (2013); Li (2017) for a more detailed survey on policy gradient. + +Stochastic variance reduced gradient techniques such as SVRG (Johnson & Zhang, 2013; Xiao & Zhang, 2014), batching SVRG (Harikandeh et al., 2015), SAGA (Defazio et al., 2014) and SARAH (Nguyen et al., 2017) were first developed in stochastic convex optimization. When the objective function is nonconvex (or nonconcave for maximization problems), nonconvex SVRG (Allen-Zhu & Hazan, 2016; Reddi et al., 2016a) and SCSG (Lei et al., 2017; Li & Li, 2018) were proposed and proved to converge to a first-order stationary point faster than vanilla SGD (Robbins & Monro, 1951) with no variance reduction. The state-of-the-art stochastic variance reduced gradient methods for nonconvex functions are the SNVRG (Zhou et al., 2018) and SPIDER (Fang et al., 2018) algorithms, which have been proved to achieve near optimal convergence rate for smooth functions. + +There are yet not many papers studying variance reduced gradient techniques in RL. Du et al. (2017) first applied SVRG in policy evaluation for a fixed policy. Xu et al. (2017) introduced SVRG into trust region policy optimization for model-free policy gradient and showed that the resulting algorithm SVRPO is more sample efficient than TRPO. Yuan et al. (2019) further applied the techniques in SARAH (Nguyen et al., 2017) and SPIDER (Fang et al., 2018) to TRPO (Schulman et al., 2015a). However, no analysis on sample complexity (i.e., number of trajectories required) was provided in the aforementioned papers (Xu et al., 2017; Yuan et al., 2019). We note that a recent work by Shen et al. 
(2019) proposed a Hessian aided policy gradient (HAPG) algorithm that converges to the stationary point of the performance function within $O(H^2/\epsilon^{3/2})$ trajectories, which is worse than our result by a factor of $O(H^2)$ where $H$ is the horizon length of the environment. Moreover, they need additional samples to approximate the Hessian vector product, and cannot handle the policy in a constrained parameter space. Another related work pointed out by the anonymous reviewer is Yang & Zhang (2019), which extended the stochastic mirror descent algorithm (Ghadimi et al., 2016) in the optimization field to policy gradient methods and achieved $O(H^2/\epsilon^2)$ sample complexity. After the ICLR conference submission deadline, Yang & Zhang (2019) revised their paper by adding a new variance reduction algorithm that achieves $O(H^2/\epsilon^{3/2})$ sample complexity, which is also worse than our result by a factor of $O(H^2)$ . + +Apart from the convergence analysis of the general nonconcave performance functions, there has emerged a line of work (Cai et al., 2019; Liu et al., 2019; Yang et al., 2019; Wang et al., 2019) that studies the global convergence of (proximal/trust-region) policy optimization with neural network function approximation, which applies the theory of overparameterized neural networks (Du et al., 2019b;a; Allen-Zhu et al., 2019; Zou et al., 2019; Cao & Gu, 2019) to reinforcement learning. + +Notation $\| \mathbf{v}\| _2$ denotes the Euclidean norm of a vector $\mathbf{v}\in \mathbb{R}^d$ and $\| \mathbf{A}\| _2$ denotes the spectral norm of a matrix $\mathbf{A}\in \mathbb{R}^{d\times d}$ . We write $a_{n} = O(b_{n})$ if $a_{n}\leq Cb_{n}$ for some constant $C > 0$ . The Dirac delta function $\delta (x)$ satisfies $\delta (0) = +\infty$ and $\delta (x) = 0$ if $x\neq 0$ . Note that $\delta (x)$ satisfies $\int_{-\infty}^{+\infty}\delta (x)\mathrm{d}x = 1$ . 
For any $\alpha >0$ , we define the Rényi divergence (Rényi, 1961) between distributions $P$ and $Q$ as

$$
D _ {\alpha} (P | | Q) = \frac {1}{\alpha - 1} \log_ {2} \int_ {x} P (x) \left(\frac {P (x)}{Q (x)}\right) ^ {\alpha - 1} \mathrm {d} x,
$$

which is non-negative for all $\alpha > 0$ . The exponentiated Rényi divergence is $d_{\alpha}(P||Q) = 2^{D_{\alpha}(P||Q)}$ .

# 2 BACKGROUND ON POLICY GRADIENT

Markov Decision Process: A discrete-time Markov Decision Process (MDP) is a tuple $\mathcal{M} = \{\mathcal{S},\mathcal{A},\mathcal{P},r,\gamma ,\rho \}$ . $\mathcal{S}$ and $\mathcal{A}$ are the state and action spaces respectively. $\mathcal{P}(s'|s,a)$ is the transition probability of transitioning to state $s^{\prime}$ after taking action $a$ at state $s$ . The function $r(s,a): \mathcal{S}\times \mathcal{A}\to [-R,R]$ emits a bounded reward after the agent takes action $a$ at state $s$ , where $R > 0$ is a constant. $\gamma \in (0,1)$ is the discount factor. $\rho$ is the distribution of the starting state. A policy at state $s$ is a probability function $\pi (a|s)$ over the action space $\mathcal{A}$ . In episodic tasks, following any stationary policy, the agent can observe and collect a sequence of state-action pairs $\tau = \{s_0,a_0,s_1,a_1,\dots ,s_{H - 1},a_{H - 1},s_H\}$ , which is called a trajectory or episode. $H$ is called the trajectory horizon or episode length. In practice, we can set $H$ to be the maximum value among all
Then + +$$ +p (\tau | \boldsymbol {\theta}) = \rho (s _ {0}) \prod_ {h = 0} ^ {H - 1} \pi_ {\boldsymbol {\theta}} \left(a _ {h} \mid s _ {h}\right) P \left(s _ {h + 1} \mid s _ {h}, a _ {h}\right). \tag {2.1} +$$ + +We define the expected return under policy $\pi_{\theta}$ as $J(\pmb{\theta}) = \mathbb{E}_{\tau \sim p(\cdot|\pmb{\theta})}[\mathcal{R}(\tau)|\mathcal{M}]$ , which is also called the performance function. To maximize the performance function, we can update the policy parameter $\pmb{\theta}$ by iteratively running gradient ascent based algorithms, i.e., $\pmb{\theta}_{k+1} = \pmb{\theta}_k + \eta \nabla_{\pmb{\theta}} J(\pmb{\theta}_k)$ , where $\eta > 0$ is the step size and the gradient $\nabla_{\pmb{\theta}} J(\pmb{\theta})$ is derived as follows: + +$$ +\begin{array}{l} \nabla_ {\boldsymbol {\theta}} J (\boldsymbol {\theta}) = \int_ {\tau} \mathcal {R} (\tau) \nabla_ {\boldsymbol {\theta}} p (\tau | \boldsymbol {\theta}) d \tau = \int_ {\tau} \mathcal {R} (\tau) \left(\nabla_ {\boldsymbol {\theta}} p (\tau | \boldsymbol {\theta}) / p (\tau | \boldsymbol {\theta})\right) p (\tau | \boldsymbol {\theta}) d \tau \\ = \mathbb {E} _ {\tau \sim p (\cdot | \boldsymbol {\theta})} \left[ \nabla_ {\boldsymbol {\theta}} \log p (\tau | \boldsymbol {\theta}) \mathcal {R} (\tau) | \mathcal {M} \right]. \tag {2.2} \\ \end{array} +$$ + +However, it is intractable to calculate the exact gradient in (2.2) since the trajectory distribution $p(\tau|\theta)$ is unknown. In practice, policy gradient algorithm samples a batch of trajectories $\{\tau_i\}_{i=1}^N$ to approximate the exact gradient based on the sample average over all sampled trajectories: + +$$ +\widehat {\nabla} _ {\boldsymbol {\theta}} J (\boldsymbol {\theta}) = \frac {1}{N} \sum_ {i = 1} ^ {N} \nabla_ {\boldsymbol {\theta}} \log p \left(\tau_ {i} \mid \boldsymbol {\theta}\right) \mathcal {R} \left(\tau_ {i}\right). 
\tag {2.3} +$$ + +At the $k$ -th iteration, the policy is then updated by $\pmb{\theta}_{k + 1} = \pmb{\theta}_k + \eta \widehat{\nabla}_{\pmb{\theta}}J(\pmb{\theta}_k)$ . According to (2.1), we know that $\nabla_{\pmb{\theta}}\log p(\tau_i|\pmb{\theta})$ is independent of the transition probability matrix $P$ . Recall the definition of $\mathcal{R}(\tau)$ , we can rewrite the approximate gradient as follows + +$$ +\begin{array}{l} \widehat {\nabla} _ {\boldsymbol {\theta}} J (\boldsymbol {\theta}) = \frac {1}{N} \sum_ {i = 1} ^ {N} \left(\sum_ {h = 0} ^ {H - 1} \nabla_ {\boldsymbol {\theta}} \log \pi_ {\boldsymbol {\theta}} \left(a _ {h} ^ {i} \mid s _ {h} ^ {i}\right)\right) \left(\sum_ {h = 0} ^ {H - 1} \gamma^ {h} r \left(s _ {h} ^ {i}, a _ {h} ^ {i}\right)\right) \\ \stackrel {\text {d e f}} {=} \frac {1}{N} \sum_ {i = 1} ^ {N} g (\tau_ {i} | \boldsymbol {\theta}), \tag {2.4} \\ \end{array} +$$ + +where $\tau_{i} = \{s_{0}^{i},a_{0}^{i},s_{1}^{i},a_{1}^{i},\ldots ,s_{H - 1}^{i},a_{H - 1}^{i},s_{H}^{i}\}$ for all $i = 1,\dots ,N$ and $g(\tau_i|\theta)$ is an unbiased gradient estimator computed based on the $i$ -th trajectory $\tau_{i}$ . The gradient estimator in (2.4) is based on the likelihood ratio methods and is often referred to as the REINFORCE gradient estimator (Williams, 1992). Since $\mathbb{E}[\nabla_{\theta}\log \pi_{\theta}(a|s)] = 0$ , we can add any constant baseline $b_{t}$ to the reward that is independent of the current action and the gradient estimator still remains unbiased. 
With the observation that future actions do not depend on past rewards, another famous policy gradient theorem (PGT) estimator (Sutton et al., 2000) removes the rewards from previous states: + +$$ +g \left(\tau_ {i} \mid \boldsymbol {\theta}\right) = \sum_ {h = 0} ^ {H - 1} \nabla_ {\boldsymbol {\theta}} \log \pi_ {\boldsymbol {\theta}} \left(a _ {h} ^ {i} \mid s _ {h} ^ {i}\right) \left(\sum_ {t = h} ^ {H - 1} \gamma^ {t} r \left(s _ {t} ^ {i}, a _ {t} ^ {i}\right) - b _ {t}\right), \tag {2.5} +$$ + +where $b_{t}$ is a constant baseline. It has been shown (Peters & Schaal, 2008b) that the PGT estimator is equivalent to the commonly used GPOMDP estimator (Baxter & Bartlett, 2001) defined as follows: + +$$ +g \left(\tau_ {i} \mid \boldsymbol {\theta}\right) = \sum_ {h = 0} ^ {H - 1} \left(\sum_ {t = 0} ^ {h} \nabla_ {\boldsymbol {\theta}} \log \pi_ {\boldsymbol {\theta}} \left(a _ {t} ^ {i} \mid s _ {t} ^ {i}\right)\right) \left(\gamma^ {h} r \left(s _ {h} ^ {i}, a _ {h} ^ {i}\right) - b _ {h}\right). \tag {2.6} +$$ + +All the three gradient estimators mentioned above are unbiased (Peters & Schaal, 2008b). It has been proved that the variance of the PGT/GPOMDP estimator is independent of horizon $H$ while the variance of REINFORCE depends on $H$ polynomially (Zhao et al., 2011; Pirotta et al., 2013). Therefore, we will focus on the PGT/GPOMDP estimator in this paper and refer to them interchangeably due to their equivalence. + +# 3 THE PROPOSED ALGORITHM + +The approximation in (2.3) using a batch of trajectories often causes a high variance in practice. In this section, we propose a novel variance reduced policy gradient algorithm called stochastic recursive variance reduced policy gradient (SRVR-PG), which is displayed in Algorithm 1. Our SRVR-PG algorithm consists of $S$ epochs. In the initialization, we set the parameter of a reference policy to be $\widetilde{\pmb{\theta}}^0 = \pmb{\theta}_0$ . 
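Since Algorithm 1 is built on the PGT/GPOMDP estimator, a minimal sketch of how (2.4) and (2.6) are computed from sampled trajectories may be helpful. This is an illustrative implementation and not the authors' code; `grad_log_pi` is a hypothetical callable standing in for the policy's score function $\nabla_{\pmb{\theta}}\log \pi_{\pmb{\theta}}(a|s)$.

```python
import numpy as np

def gpomdp_estimator(trajectories, grad_log_pi, gamma=0.99, baselines=None):
    """Sketch of the GPOMDP estimator in (2.6), averaged over the N
    sampled trajectories as in (2.4).

    trajectories: list of trajectories, each a list of (state, action,
                  reward) tuples of length H.
    grad_log_pi:  callable (state, action) -> numpy array, the gradient
                  of log pi_theta(a|s) (hypothetical stand-in).
    baselines:    optional length-H array of constant baselines b_h.
    """
    grads = []
    for tau in trajectories:
        H = len(tau)
        b = np.zeros(H) if baselines is None else baselines
        g = 0.0
        cum_score = 0.0  # running sum of score functions up to step h
        for h, (s, a, r) in enumerate(tau):
            cum_score = cum_score + grad_log_pi(s, a)
            g = g + cum_score * (gamma**h * r - b[h])
        grads.append(g)
    # average over the N sampled trajectories
    return sum(grads) / len(grads)
```

A toy run with a constant score function and $\gamma = 1$ recovers the expected cumulative weighting of rewards by step index.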
At the beginning of the $s$ -th epoch, where $s = 0,\dots ,S - 1$ , we set the initial policy parameter $\pmb{\theta}_0^{s + 1}$ to be the same as that of the reference policy $\widetilde{\pmb{\theta}}^s$ . The algorithm then samples $N$ episodes $\{\tau_i\}_{i = 1}^N$ from the reference policy $\pi_{\widetilde{\pmb{\theta}}^s}$ to compute a gradient estimator $\mathbf{v}_0^{s+1} = 1 / N\sum_{i = 1}^N g(\tau_i|\widetilde{\pmb{\theta}}^s)$ , where $g(\tau_i|\widetilde{\pmb{\theta}}^s)$ is the PGT/GPOMDP estimator. Then the policy is immediately updated as in Line 6 of Algorithm 1.

Within the epoch, at the $t$ -th iteration, SRVR-PG samples $B$ episodes $\{\tau_j\}_{j=1}^B$ based on the current policy $\pi_{\theta_t^{s+1}}$ . We define the following recursive semi-stochastic gradient estimator:

$$
\mathbf {v} _ {t} ^ {s + 1} = \frac {1}{B} \sum_ {j = 1} ^ {B} g \left(\tau_ {j} \mid \boldsymbol {\theta} _ {t} ^ {s + 1}\right) - \frac {1}{B} \sum_ {j = 1} ^ {B} g _ {\omega} \left(\tau_ {j} \mid \boldsymbol {\theta} _ {t - 1} ^ {s + 1}\right) + \mathbf {v} _ {t - 1} ^ {s + 1}, \tag {3.1}
$$

where the first term is a stochastic gradient based on $B$ episodes sampled from the current policy, and the second term is a stochastic gradient defined based on the step-wise importance weight between the current policy $\pi_{\pmb{\theta}_t^{s + 1}}$ and the previous policy $\pi_{\pmb{\theta}_{t-1}^{s + 1}}$ .
Taking the GPOMDP estimator as an example, for a behavior policy $\pi_{\pmb{\theta}_1}$ and a target policy $\pi_{\pmb{\theta}_2}$ , the step-wise importance weighted estimator is defined as follows

$$
g _ {\omega} \left(\tau_ {j} \mid \boldsymbol {\theta} _ {1}\right) = \sum_ {h = 0} ^ {H - 1} \omega_ {0: h} (\tau \mid \boldsymbol {\theta} _ {2}, \boldsymbol {\theta} _ {1}) \left(\sum_ {t = 0} ^ {h} \nabla_ {\boldsymbol {\theta} _ {2}} \log \pi_ {\boldsymbol {\theta} _ {2}} \left(a _ {t} ^ {j} \mid s _ {t} ^ {j}\right)\right) \gamma^ {h} r \left(s _ {h} ^ {j}, a _ {h} ^ {j}\right), \tag {3.2}
$$

where $\omega_{0:h}(\tau|\pmb{\theta}_2,\pmb{\theta}_1) = \prod_{h'=0}^h\pi_{\pmb{\theta}_2}(a_{h'}|s_{h'})/\pi_{\pmb{\theta}_1}(a_{h'}|s_{h'})$ is the importance weight from $p(\tau_h|\pmb{\theta}_t^{s+1})$ to $p(\tau_h|\pmb{\theta}_{t-1}^{s+1})$ and $\tau_h$ is a truncated trajectory $\{(a_t,s_t)\}_{t=0}^h$ from the full trajectory $\tau$ . It is easy to verify that $\mathbb{E}_{\tau \sim p(\tau|\pmb{\theta}_1)}[g_\omega(\tau_j|\pmb{\theta}_1)] = \mathbb{E}_{\tau \sim p(\tau|\pmb{\theta}_2)}[g(\tau|\pmb{\theta}_2)]$ .

The difference between the last two terms in (3.1) can be viewed as a control variate to reduce the variance of the stochastic gradient. In many practical applications, the policy parameter space is a subset of $\mathbb{R}^d$ , i.e., $\pmb{\theta} \in \Theta$ with $\Theta \subseteq \mathbb{R}^d$ being a convex set. In this case, we need to project the updated policy parameter onto the constraint set.
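The importance-weighted estimator (3.2) and the recursive update (3.1) can be sketched in a few lines. This is an illustrative implementation, not the paper's code; the callables for scores and log-probabilities (`grad_log_pi_prev`, `log_pi_prev`, `log_pi_cur`) are hypothetical stand-ins supplied by the policy class.

```python
import numpy as np

def g_omega(tau, grad_log_pi_prev, log_pi_prev, log_pi_cur, gamma=0.99):
    """Sketch of the step-wise importance-weighted GPOMDP estimator in
    (3.2): tau is sampled from the current policy, while the estimator
    targets the previous iterate theta_{t-1}."""
    g, cum_score, log_w = 0.0, 0.0, 0.0
    for h, (s, a, r) in enumerate(tau):
        # omega_{0:h} = prod_{h' <= h} pi_prev(a|s) / pi_cur(a|s), in log space
        log_w += log_pi_prev(s, a) - log_pi_cur(s, a)
        cum_score = cum_score + grad_log_pi_prev(s, a)
        g = g + np.exp(log_w) * cum_score * (gamma**h) * r
    return g

def recursive_gradient(v_prev, taus, g_cur, g_w):
    """The recursive update (3.1):
    v_t = v_{t-1} + (1/B) sum_j [g(tau_j|theta_t) - g_omega(tau_j|theta_{t-1})]."""
    return v_prev + np.mean([g_cur(tau) - g_w(tau) for tau in taus], axis=0)
```

When the two policies coincide, every weight $\omega_{0:h}$ equals one and `g_omega` reduces to the plain GPOMDP estimator, which matches the unbiasedness identity stated above.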
Based on the semi-stochastic gradient (3.1), we can update the policy parameter using projected gradient ascent along the direction of $\mathbf{v}_t^{s+1}$ : $\pmb{\theta}_{t+1}^{s+1} = \mathcal{P}_{\Theta}(\pmb{\theta}_t^{s+1} + \eta \mathbf{v}_t^{s+1})$ , where $\eta > 0$ is the step size and the projection operator associated with $\Theta$ is defined as

$$
\mathcal {P} _ {\Theta} (\boldsymbol {\theta}) = \underset {\mathbf {u} \in \Theta} {\operatorname {argmin}} \| \boldsymbol {\theta} - \mathbf {u} \| _ {2} ^ {2} = \underset {\mathbf {u} \in \mathbb {R} ^ {d}} {\operatorname {argmin}} \left\{\mathbb {1} _ {\Theta} (\mathbf {u}) + \frac {1}{2 \eta} \| \boldsymbol {\theta} - \mathbf {u} \| _ {2} ^ {2} \right\}, \tag {3.3}
$$

where $\mathbb{1}_{\Theta}(\mathbf{u})$ is the set indicator function on $\Theta$ , i.e., $\mathbb{1}_{\Theta}(\mathbf{u}) = 0$ if $\mathbf{u} \in \Theta$ and $\mathbb{1}_{\Theta}(\mathbf{u}) = +\infty$ otherwise. $\eta > 0$ is any finite real value and is chosen as the step size in our paper. It is easy to see that $\mathbb{1}_{\Theta}(\cdot)$ is nonsmooth. At the end of the $s$ -th epoch, we update the reference policy as $\widetilde{\theta}^{s+1} = \theta_m^{s+1}$ , where $\theta_m^{s+1}$ is the last iterate of this epoch.

The goal of our algorithm is to find a point $\pmb{\theta} \in \Theta$ that maximizes the performance function $J(\pmb{\theta})$ subject to the constraint, namely, $\max_{\pmb{\theta} \in \Theta} J(\pmb{\theta}) = \max_{\pmb{\theta} \in \mathbb{R}^d} \{J(\pmb{\theta}) - \mathbb{1}_{\Theta}(\pmb{\theta})\}$ . The gradient norm $\| \nabla J(\pmb{\theta}) \|_2$ is not sufficient to characterize the convergence of the algorithm due to the additional constraint.
Following the literature on nonsmooth optimization (Reddi et al., 2016b; Ghadimi et al., 2016; Nguyen et al., 2017; Li & Li, 2018; Wang et al., 2018), we use the generalized first-order stationary condition: $\mathcal{G}_{\eta}(\pmb{\theta}) = \mathbf{0}$ , where the gradient mapping $\mathcal{G}_{\eta}$ is defined as follows

$$
\mathcal {G} _ {\eta} (\boldsymbol {\theta}) = \frac {1}{\eta} \left(\mathcal {P} _ {\boldsymbol {\Theta}} (\boldsymbol {\theta} + \eta \nabla J (\boldsymbol {\theta})) - \boldsymbol {\theta}\right). \tag {3.4}
$$

We can view $\mathcal{G}_{\eta}$ as a generalized projected gradient at $\pmb{\theta}$ . By definition, if $\Theta = \mathbb{R}^d$ , we have $\mathcal{G}_{\eta}(\pmb{\theta}) \equiv \nabla J(\pmb{\theta})$ . Therefore, the policy update is displayed in Line 10 of Algorithm 1, where

Algorithm 1 Stochastic Recursive Variance Reduced Policy Gradient (SRVR-PG)
1: Input: number of epochs $S$ , epoch size $m$ , step size $\eta$ , batch size $N$ , mini-batch size $B$ , gradient estimator $g$ , initial parameter $\widetilde{\pmb{\theta}}^0 = \pmb{\theta}_0 \in \Theta$
2: for $s = 0, \dots, S - 1$ do
3: $\pmb{\theta}_0^{s + 1} = \widetilde{\pmb{\theta}}^s$
4: Sample $N$ trajectories $\{\tau_i\}$ from $p(\cdot|\widetilde{\pmb{\theta}}^s)$
5: $\mathbf{v}_0^{s + 1} = \widehat{\nabla}_{\pmb{\theta}}J(\widetilde{\pmb{\theta}}^s) := 1/N\sum_{i=1}^{N} g(\tau_i|\widetilde{\pmb{\theta}}^s)$
6: $\pmb{\theta}_1^{s + 1} = \mathcal{P}_{\Theta}(\pmb{\theta}_0^{s + 1} + \eta \mathbf{v}_0^{s + 1})$
7: for $t = 1, \dots, m - 1$ do
8: Sample $B$ trajectories $\{\tau_j\}$ from $p(\cdot|\pmb{\theta}_t^{s + 1})$
9: $\mathbf{v}_t^{s + 1} = \mathbf{v}_{t - 1}^{s + 1} + \frac{1}{B}\sum_{j = 1}^{B}\left(g(\tau_j|\pmb{\theta}_t^{s + 1}) - g_\omega(\tau_j|\pmb{\theta}_{t - 1}^{s + 1})\right)$
10: $\pmb{\theta}_{t + 1}^{s + 1} = \mathcal{P}_{\Theta}(\pmb{\theta}_t^{s + 1} + \eta \mathbf{v}_t^{s + 1})$
11: end for
12:
$\widetilde{\pmb{\theta}}^{s + 1} = \pmb{\theta}_m^{s + 1}$
13: end for
14: return $\pmb{\theta}_{\mathrm{out}}$ , which is uniformly picked from $\{\pmb{\theta}_t^s\}_{t = 0,\dots,m - 1;s = 0,\dots,S}$

$\mathcal{P}_{\Theta}$ is the projection operator defined in (3.3). Similar recursive semi-stochastic gradients to (3.1) were first proposed in stochastic optimization for finite-sum problems, leading to the stochastic recursive gradient algorithm (SARAH) (Nguyen et al., 2017; 2019) and the stochastic path-integrated differential estimator (SPIDER) (Fang et al., 2018; Wang et al., 2018). However, our gradient estimator in (3.1) is noticeably different from that in Nguyen et al. (2017); Fang et al. (2018); Wang et al. (2018); Nguyen et al. (2019) due to the gradient estimator $g_{\omega}(\tau_j|\pmb{\theta}_{t-1}^{s+1})$ defined in (3.2), which is equipped with step-wise importance weights. This term is essential to deal with the non-stationarity of the distribution of the trajectory $\tau$ . Specifically, $\{\tau_j\}_{j=1}^B$ are sampled from policy $\pi_{\pmb{\theta}_t^{s+1}}$ while the PGT/GPOMDP estimator $g(\cdot|\pmb{\theta}_{t-1}^{s+1})$ is defined based on policy $\pi_{\pmb{\theta}_{t-1}^{s+1}}$ according to (2.6). This inconsistency introduces extra challenges in the convergence analysis of SRVR-PG. Using importance weighting, we can obtain

$$
\mathbb {E} _ {\tau \sim p (\tau | \pmb {\theta} _ {t} ^ {s + 1})} [ g _ {\omega} (\tau | \pmb {\theta} _ {t - 1} ^ {s + 1}) ] = \mathbb {E} _ {\tau \sim p (\tau | \pmb {\theta} _ {t - 1} ^ {s + 1})} [ g (\tau | \pmb {\theta} _ {t - 1} ^ {s + 1}) ],
$$

which eliminates the inconsistency caused by the varying trajectory distribution.
It is worth noting that the semi-stochastic gradient in (3.1) also differs from the one used in SVRPG (Papini et al., 2018) because we recursively update $\mathbf{v}_t^{s + 1}$ using $\mathbf{v}_{t - 1}^{s + 1}$ from the previous iteration, while SVRPG uses a reference gradient that is only updated at the beginning of each epoch. Moreover, SVRPG wastes $N$ trajectories without updating the policy at the beginning of each epoch, while Algorithm 1 updates the policy immediately after this sampling process (Line 6), which saves computation in practice.

We notice that very recently another algorithm called SARAPO (Yuan et al., 2019) was proposed, which also uses a recursive gradient update in trust region policy optimization (Schulman et al., 2015a). Our Algorithm 1 differs from their algorithm at least in the following ways: (1) our recursive gradient $\mathbf{v}_t^s$ defined in (3.1) has an importance weight from the snapshot gradient while SARAPO does not; (2) we are optimizing the expected return while Yuan et al. (2019) optimizes the total advantage over the state visitation distribution and actions under a Kullback-Leibler divergence constraint; and most importantly (3) there is no convergence or sample complexity analysis for SARAPO.

# 4 MAIN THEORY

In this section, we present the theoretical analysis of Algorithm 1. We first introduce some common assumptions used in the convergence analysis of policy gradient methods.

Assumption 4.1. Let $\pi_{\theta}(a|s)$ be the policy parameterized by $\pmb{\theta}$ .
There exist constants $G, M > 0$ such that the gradient and Hessian matrix of $\log \pi_{\theta}(a|s)$ with respect to $\pmb{\theta}$ satisfy

$$
\left\| \nabla_ {\boldsymbol {\theta}} \log \pi_ {\boldsymbol {\theta}} (a | s) \right\| \leq G, \quad \left\| \nabla_ {\boldsymbol {\theta}} ^ {2} \log \pi_ {\boldsymbol {\theta}} (a | s) \right\| _ {2} \leq M,
$$

for all $a\in \mathcal{A}$ and $s\in S$.

The above boundedness assumption is reasonable since we usually require the policy function to be twice differentiable and easy to optimize in practice. Similarly, in Papini et al. (2018), the authors assume that $\frac{\partial}{\partial\theta_i}\log \pi_\pmb {\theta}(a|s)$ and $\frac{\partial^2}{\partial\theta_i\partial\theta_j}\log \pi_\pmb {\theta}(a|s)$ are upper bounded element-wise, which is actually stronger than our Assumption 4.1.

In the following proposition, we show that Assumption 4.1 directly implies that the Hessian matrix of the performance function, $\nabla^2 J(\pmb{\theta})$, is bounded, which is often referred to as the smoothness assumption and is crucial in analyzing the convergence of nonconvex optimization (Reddi et al., 2016a; Allen-Zhu & Hazan, 2016).

Proposition 4.2. Let $g(\tau|\theta)$ be the PGT estimator defined in (2.5). Assumption 4.1 implies:

(1). $\| g(\tau |\pmb {\theta}_1) - g(\tau |\pmb {\theta}_2)\| _2\leq L\| \pmb {\theta}_1 - \pmb {\theta}_2\| _2,\forall \pmb {\theta}_1,\pmb {\theta}_2\in \mathbb{R}^d$, where the smoothness parameter is $L = MR / (1 - \gamma)^2 + 2G^2 R / (1 - \gamma)^3$;
(2). $J(\pmb{\theta})$ is $L$-smooth, namely $\| \nabla_{\pmb{\theta}}^{2}J(\pmb{\theta})\|_{2}\leq L$;
(3). $\| g(\tau |\pmb {\theta})\|_{2}\leq C_{g}$ for all $\pmb {\theta}\in \mathbb{R}^d$, with $C_g = GR / (1 - \gamma)^2$.

Similar properties are also proved in Xu et al. (2019). However, in contrast to their results, the smoothness parameter $L$ and the bound on the gradient norm here do not rely on the horizon $H$.
When $H \approx 1 / (1 - \gamma)$ and $\gamma$ is sufficiently close to 1, we can see that the order of the smoothness parameter is $O(1 / (1 - \gamma)^3)$, which matches the order $O(H^2 /(1 - \gamma))$ in Xu et al. (2019). The next assumption requires that the variance of the gradient estimator be bounded.

Assumption 4.3. There exists a constant $\xi >0$ such that $\mathrm{Var}\big(g(\tau |\pmb {\theta})\big)\leq \xi^2$ for all policies $\pi_{\pmb{\theta}}$.

In Algorithm 1, we have used importance sampling to connect the trajectories between two different iterations. The following assumption ensures that the variance of the importance weight is bounded, which is also made in Papini et al. (2018); Xu et al. (2019).

Assumption 4.4. Let $\omega (\cdot |\pmb {\theta}_1,\pmb {\theta}_2) = p(\cdot |\pmb {\theta}_1) / p(\cdot |\pmb {\theta}_2)$. There is a constant $W < \infty$ such that for each policy pair encountered in Algorithm 1,

$$
\operatorname{Var} (\omega (\tau | \boldsymbol{\theta}_{1}, \boldsymbol{\theta}_{2})) \leq W, \quad \forall \boldsymbol{\theta}_{1}, \boldsymbol{\theta}_{2} \in \mathbb{R}^{d}, \tau \sim p (\cdot | \boldsymbol{\theta}_{2}).
$$

# 4.1 CONVERGENCE RATE AND SAMPLE COMPLEXITY OF SRVR-PG

Now we are ready to present the convergence result of SRVR-PG to a stationary point:

Theorem 4.5. Suppose that Assumptions 4.1, 4.3 and 4.4 hold. In Algorithm 1, we choose the step size $\eta \leq 1 / (4L)$ and the epoch size $m$ and mini-batch size $B$ such that

$$
B \geq \frac{72 m \eta G^{2} (2 G^{2} / M + 1) (W + 1) \gamma}{(1 - \gamma)^{2}}.
+$$ + +Then the generalized projected gradient of the output of Algorithm 1 satisfies + +$$ +\mathbb {E} \left[ \left\| \mathcal {G} _ {\eta} \left(\boldsymbol {\theta} _ {\text {o u t}}\right) \right\| _ {2} ^ {2} \right] \leq \frac {8 \left[ J \left(\boldsymbol {\theta} ^ {*}\right) - J \left(\boldsymbol {\theta} _ {0}\right) - \mathbb {1} _ {\Theta} \left(\boldsymbol {\theta} ^ {*}\right) + \mathbb {1} _ {\Theta} \left(\boldsymbol {\theta} _ {0}\right) \right]}{\eta S m} + \frac {6 \xi^ {2}}{N}, +$$ + +where $\pmb{\theta}^{*} = \operatorname{argmax}_{\pmb{\theta} \in \Theta} J(\pmb{\theta})$ + +Remark 4.6. Theorem 4.5 states that under a proper choice of step size, batch size and epoch length, the expected squared gradient norm of the performance function at the output of SRVR-PG is in the order of + +$$ +O \left(\frac {1}{S m} + \frac {1}{N}\right). +$$ + +Recall that $S$ is the number of epochs and $m$ is the epoch length of SRVR-PG, so $Sm$ is the total number of iterations of SRVR-PG. Thus the first term $O(1/(Sm))$ characterizes the convergence rate of SRVR-PG. The second term $O(1/N)$ comes from the variance of the stochastic gradient used in the outer loop, where $N$ is the batch size used in the snapshot gradient $\mathbf{v}_0^{s+1}$ in Line 5 of SRVR-PG. Compared with the $O(1/(Sm) + 1/N + 1/B)$ convergence rate in Papini et al. (2018), + +our analysis avoids the additional term $O(1 / B)$ that depends on the mini-batch size within each epoch. + +Compared with Xu et al. (2019), our mini-batch size $B$ is independent of the horizon length $H$ . This enables us to choose a smaller mini-batch size $B$ while maintaining the same convergence rate. As we will show in the next corollary, this improvement leads to a lower sample complexity. + +Corollary 4.7. Suppose the same conditions as in Theorem 4.5 hold. 
Set the step size as $\eta = 1 / (4L)$, the batch size parameters as $N = O(1 / \epsilon)$ and $B = O(1 / \epsilon^{1 / 2})$ respectively, the epoch length as $m = O(1 / \epsilon^{1 / 2})$ and the number of epochs as $S = O(1 / \epsilon^{1 / 2})$. Then Algorithm 1 outputs a point $\pmb{\theta}_{\mathrm{out}}$ that satisfies $\mathbb{E}[\| \mathcal{G}_\eta (\pmb{\theta}_{\mathrm{out}})\| _2^2 ]\leq \epsilon$ within $O(1 / \epsilon^{3 / 2})$ trajectories in total.

Note that the results in Papini et al. (2018); Xu et al. (2019) are for $\| \nabla_{\pmb{\theta}}J(\pmb {\theta})\| _2^2\leq \epsilon$, while our result in Corollary 4.7 is more general. In particular, when the policy parameter $\pmb{\theta}$ is defined on the whole space $\mathbb{R}^d$ instead of $\Theta$, our result reduces to the case for $\| \nabla_{\pmb{\theta}}J(\pmb {\theta})\| _2^2\leq \epsilon$ since $\Theta = \mathbb{R}^{d}$ and $\mathcal{G}_{\eta}(\pmb {\theta}) = \nabla_{\pmb{\theta}}J(\pmb {\theta})$. In Xu et al. (2019), the authors improved the sample complexity of SVRPG (Papini et al., 2018) from $O(1 / \epsilon^2)$ to $O(1 / \epsilon^{5 / 3})$ by a sharper analysis. According to Corollary 4.7, SRVR-PG needs only $O(1 / \epsilon^{3 / 2})$ trajectories to achieve $\| \nabla_{\pmb{\theta}}J(\pmb {\theta})\| _2^2\leq \epsilon$, which is lower than the sample complexity of SVRPG by a factor of $O(1 / \epsilon^{1 / 6})$. This improvement is more pronounced when the required precision $\epsilon$ is very small.

# 4.2 IMPLICATION FOR GAUSSIAN POLICY

Now, we consider the Gaussian policy model and present the sample complexity of SRVR-PG in this setting.
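Before specializing to the Gaussian case, the $O(1/\epsilon^{3/2})$ total in Corollary 4.7 can be sanity-checked by simply counting trajectories under the stated parameter choices (a back-of-envelope sketch with all hidden constants set to 1):

```python
# Back-of-envelope check of Corollary 4.7's total trajectory count; the
# constants are all set to 1, so only the epsilon-scaling is meaningful.
eps = 1e-4
N = 1.0 / eps      # snapshot batch size, O(1/eps)
B = eps ** -0.5    # inner mini-batch size, O(1/eps^{1/2})
m = eps ** -0.5    # epoch length, O(1/eps^{1/2})
S = eps ** -0.5    # number of epochs, O(1/eps^{1/2})

total = S * N + S * m * B  # outer-loop samples + inner-loop samples
# Both terms scale as eps**-1.5, so total / eps**-1.5 stays constant
# (here approximately 2) as eps shrinks.
print(total / eps ** -1.5)
```

Both the snapshot cost $SN$ and the inner-loop cost $SmB$ contribute $O(\epsilon^{-3/2})$, which is exactly the claimed total.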
For bounded action space $\mathcal{A} \subset \mathbb{R}$, a Gaussian policy parameterized by $\theta$ is defined as

$$
\pi_ {\boldsymbol {\theta}} (a | s) = \frac {1}{\sqrt {2 \pi} \sigma} \exp \left(- \frac {\left(\boldsymbol {\theta} ^ {\top} \phi (s) - a\right) ^ {2}}{2 \sigma^ {2}}\right), \tag {4.1}
$$

where $\sigma^2$ is a fixed variance parameter and $\phi : S \mapsto \mathbb{R}^d$ is a mapping from the state space to the feature space. For the Gaussian policy, under the mild condition that the actions and the state feature vectors are bounded, we can verify that Assumptions 4.1 and 4.3 hold, which can be found in Appendix D. It is worth noting that Assumption 4.4 does not hold trivially for all Gaussian distributions. In particular, Cortes et al. (2010) showed that for two Gaussian distributions $\pi_{\pmb{\theta}_1}(a|s) \sim N(\mu_1, \sigma_1^2)$ and $\pi_{\pmb{\theta}_2}(a|s) \sim N(\mu_2, \sigma_2^2)$, if $\sigma_2 > (\sqrt{2}/2)\,\sigma_1$, then the variance of $\omega(\tau|\pmb{\theta}_1, \pmb{\theta}_2)$ is bounded. For our Gaussian policy defined in (4.1), where the variance $\sigma^2$ is fixed, $\sigma > (\sqrt{2}/2)\,\sigma$ holds trivially, and therefore Assumption 4.4 holds for some finite constant $W > 0$ according to (2.1).

Recall that Theorem 4.5 holds for any general model under Assumptions 4.1, 4.3 and 4.4. Based on the above arguments, we know that the convergence analysis in Theorem 4.5 applies to the Gaussian policy. In the following corollary, we present the sample complexity of Algorithm 1 for the Gaussian policy with detailed dependency on the precision parameter $\epsilon$, the horizon $H$ and the discount factor $\gamma$.

Corollary 4.8. Given the Gaussian policy defined in (4.1), suppose Assumption 4.4 holds and we have $|a| \leq C_a$ for all $a \in \mathcal{A}$ and $\| \phi(s) \|_2 \leq M_\phi$ for all $s \in S$, where $C_a, M_\phi > 0$ are constants.
If we set step size as $\eta = O((1 - \gamma)^3)$ , the mini-batch sizes and epoch length as $N = O((1 - \gamma)^{-3}\epsilon^{-1})$ , $B = O((1 - \gamma)^{-1}\epsilon^{-1/2})$ and $m = O((1 - \gamma)^{-2}\epsilon^{-1/2})$ , then the output of Algorithm 1 satisfies $\mathbb{E}[\| \mathcal{G}_\eta(\pmb{\theta}_{\mathrm{out}}) \|_2^2] \leq \epsilon$ after $O(1 / ((1 - \gamma)^4\epsilon^{3/2}))$ trajectories in total. + +Remark 4.9. For Gaussian policy, the number of trajectories Algorithm 1 needs to find an $\epsilon$ -approximate stationary point, i.e., $\mathbb{E}[\| \mathcal{G}_{\eta}(\pmb{\theta}_{\mathrm{out}})\|_2^2 ]\leq \epsilon$ , is also in the order of $O(\epsilon^{-3 / 2})$ , which is faster than PGT and SVRPG. Additionally, we explicitly show that the sample complexity does not depend on the horizon $H$ , which is in sharp contrast with the results in Papini et al. (2018); Xu et al. (2019). The dependence on $1 / (1 - \gamma)$ comes from the variance of PGT estimator. + +# 5 EXPERIMENTS + +In this section, we provide experiment results of the proposed algorithm on benchmark reinforcement learning environments including the Cartpole, Mountain Car and Pendulum problems. In all + +![](images/498f215b3d48ec30c0552a7704f045e3200ca06b5db2ea0353b0909466c43055.jpg) +(a) Cartpole + +![](images/6b766140d542e5f3d67d70d0bc57f8d7ae0ad1c70a03381ff1a1ae2e618060f3.jpg) +(b) Mountain Car + +![](images/44cb061a09c2b9b1e6d10946712e86b0d2fea9e1cc996eb29e9b699333aa126c.jpg) +(c) Pendulum + +![](images/1624eabf23ad70a82052b7b7abdc5c5c459bc50207ae22d36085a5894645e95c.jpg) +(d) Cartpole + +![](images/bf1123ef2277b3ef5d756fe6063c2c724599b259dd961564446d39212a5cbe78.jpg) +(e) Mountain Car + +![](images/eb340bee62a03163f3b0c25ee0b1963da843ba7950388b8e86e135a73a21fa37.jpg) +(f) Pendulum +Figure 1: (a)-(c): Comparison of different algorithms. Experimental results are averaged over 10 repetitions. (d)-(f): Comparison of different batch size $B$ on the performance of SRVR-PG. 
the experiments, we use the Gaussian policy defined in (4.1). In addition, we found that the proposed algorithm works well without the extra projection step; therefore, we did not use projection in our experiments. For baselines, we compare the proposed SRVR-PG algorithm with the most relevant methods: GPOMDP (Baxter & Bartlett, 2001) and SVRPG (Papini et al., 2018). The learning rate $\eta$ in all of our experiments is tuned directly by grid search. For instance, for the Cartpole problem we searched over 20 points evenly spaced in log-space over the interval $[10^{-5}, 10^{-1}]$. For the batch size parameters $N$ and $B$ and the epoch length $m$, according to Corollary 4.7, we choose $N = O(1 / \epsilon)$, $B = O(1 / \epsilon^{1/2})$ and thus $m = O(1 / \epsilon^{1/2})$, where $\epsilon > 0$ is a user-defined precision parameter. In our experiments, we set $N = C_0 / \epsilon$, $B = C_1 / \epsilon^{1/2}$ and $m = C_2 / \epsilon^{1/2}$ and tune the constant parameters $C_0, C_1, C_2$ using grid search. The detailed parameters used in the experiments are presented in Appendix E.

We evaluate the performance of different algorithms in terms of the total number of trajectories they require to achieve a certain threshold of cumulative rewards. We repeat each experiment 10 times and plot the averaged returns with standard deviation. For a given environment, all experiments are initialized from the same random initialization. Figures 1(a), 1(b) and 1(c) show the comparison of GPOMDP, SVRPG, and our proposed SRVR-PG algorithm across three different RL environments. In all environments, GPOMDP is significantly outperformed by the variance-reduced algorithms SVRPG and SRVR-PG. Furthermore, SRVR-PG outperforms SVRPG in all experiments, which is consistent with the comparison of the sample complexities of GPOMDP, SVRPG and SRVR-PG in Table 1.
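The parameter schedule described above can be written down directly. This is a hypothetical sketch, not the authors' code: the function name and the default constants $C_0, C_1, C_2$ are illustrative, while the log-space grid mirrors the Cartpole search described in the text.

```python
import numpy as np

# Hypothetical sketch of the experimental hyperparameter schedule: batch
# sizes follow Corollary 4.7, and the step-size candidates mirror the
# grid search described in the text. C0, C1, C2 are illustrative defaults.

def srvr_pg_schedule(eps, C0=1.0, C1=1.0, C2=1.0):
    """Return (N, B, m) = (C0/eps, C1/sqrt(eps), C2/sqrt(eps)), rounded up."""
    N = int(np.ceil(C0 / eps))
    B = int(np.ceil(C1 / np.sqrt(eps)))
    m = int(np.ceil(C2 / np.sqrt(eps)))
    return N, B, m

# 20 step-size candidates evenly spaced in log-space over [1e-5, 1e-1],
# as in the Cartpole grid search.
eta_grid = np.logspace(-5, -1, num=20)
```

In practice one would tune $C_0, C_1, C_2$ and $\eta$ jointly over such grids, as described in Appendix E.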
Corollaries 4.7 and 4.8 suggest that when the mini-batch size $B$ is in the order of $O(\sqrt{N})$, SRVR-PG achieves the best performance. Here $N$ is the number of episodes sampled in the outer loop of Algorithm 1 and $B$ is the number of episodes sampled at each inner loop iteration. To validate our theoretical result, we conduct a sensitivity study on how the batch size $B$ used within each epoch of SRVR-PG affects its performance. The results on different environments are displayed in Figures 1(d), 1(e) and 1(f) respectively. To interpret these results, we take the Pendulum problem as an example. In this setting, we choose the outer loop batch size of Algorithm 1 to be $N = 250$. By Corollary 4.8, the optimal choice of batch size in the inner loop of Algorithm 1 is $B = C\sqrt{N}$, where $C > 1$ is a constant depending on the horizon $H$ and the discount factor $\gamma$. Figure 1(f) shows that $B = 50 \approx 3\sqrt{N}$ yields the best convergence results for SRVR-PG on Pendulum, which validates our theoretical analysis. It also implies that a larger batch size $B$ does not necessarily improve the sample complexity, since each update then requires more trajectories, while a smaller batch size $B$ pushes SRVR-PG to behave more similarly to GPOMDP. Moreover, comparing with the outer loop batch size $N$ presented in Table 2 for SRVR-PG in the Cartpole and Mountain Car environments, we found that the results in Figures 1(d) and 1(e) are again in alignment with our theory. Due to the space limit, additional experiment results are included in Appendix E.

# 6 CONCLUSIONS

We propose a novel policy gradient method called SRVR-PG, which is built on a recursively updated stochastic policy gradient estimator. We prove that the sample complexity of SRVR-PG is lower than the sample complexity of the state-of-the-art SVRPG (Papini et al., 2018; Xu et al., 2019) algorithm.
We also extend the new variance reduction technique to policy gradient with parameter-based exploration and propose the SRVR-PG-PE algorithm, which outperforms the original PGPE algorithm both in theory and practice. Experiments on the classic reinforcement learning benchmarks validate the advantage of our proposed algorithms.

# ACKNOWLEDGMENTS

We would like to thank the anonymous reviewers for their helpful comments. We would also like to thank Rui Yuan for pointing out an error in the calculation of the smoothness parameter for the performance function in the previous version. This research was sponsored in part by the National Science Foundation IIS-1904183, IIS-1906169 and an Adobe Data Science Research Award. The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agencies.

# REFERENCES

Zeyuan Allen-Zhu and Elad Hazan. Variance reduction for faster non-convex optimization. In International Conference on Machine Learning, pp. 699-707, 2016.
Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via overparameterization. In International Conference on Machine Learning, pp. 242-252, 2019.
Jonathan Baxter and Peter L Bartlett. Infinite-horizon policy-gradient estimation. Journal of Artificial Intelligence Research, 15:319-350, 2001.
Qi Cai, Zhuoran Yang, Jason D Lee, and Zhaoran Wang. Neural temporal-difference learning converges to global optima. In Advances in Neural Information Processing Systems, 2019.
Yuan Cao and Quanquan Gu. A generalization theory of gradient descent for learning overparameterized deep relu networks. arXiv preprint arXiv:1902.01384, 2019.
Corinna Cortes, Yishay Mansour, and Mehryar Mohri. Learning bounds for importance weighting. In Advances in Neural Information Processing Systems, pp. 442-450, 2010.
Aaron Defazio, Francis Bach, and Simon Lacoste-Julien.
Saga: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems, pp. 1646-1654, 2014. +Marc Peter Deisenroth, Gerhard Neumann, Jan Peters, et al. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1-2):1-142, 2013. +Simon Du, Jason Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent finds global minima of deep neural networks. In International Conference on Machine Learning, pp. 1675-1685, 2019a. +Simon S Du, Jianshu Chen, Lihong Li, Lin Xiao, and Dengyong Zhou. Stochastic variance reduction methods for policy evaluation. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1049-1058. JMLR.org, 2017. + +Simon S. Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. In International Conference on Learning Representations, 2019b. URL https://openreview.net/forum?id=S1eK3i09YQ. +Cong Fang, Chris Junchi Li, Zhouchen Lin, and Tong Zhang. Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator. In Advances in Neural Information Processing Systems, pp. 686-696, 2018. +Saeed Ghadimi, Guanghui Lan, and Hongchao Zhang. Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization. Mathematical Programming, 155(1-2): 267-305, 2016. +Evan Greensmith, Peter L Bartlett, and Jonathan Baxter. Variance reduction techniques for gradient estimates in reinforcement learning. Journal of Machine Learning Research, 5(Nov):1471-1530, 2004. +Reza Harikandeh, Mohamed Osama Ahmed, Alim Virani, Mark Schmidt, Jakub Konečny, and Scott Sallinen. Stopwasting my gradients: Practical svrg. In Advances in Neural Information Processing Systems, pp. 2251-2259, 2015. +Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. 
In Advances in Neural Information Processing Systems, pp. 315-323, 2013. +Jens Kober, J Andrew Bagnell, and Jan Peters. Reinforcement learning in robotics: A survey. The International Journal of Robotics Research, 32(11):1238-1274, 2013. +Vijay R Konda and John N Tsitsiklis. Actor-critic algorithms. In Advances in Neural Information Processing Systems, pp. 1008-1014, 2000. +Lihua Lei, Cheng Ju, Jianbo Chen, and Michael I Jordan. Non-convex finite-sum optimization via scsg methods. In Advances in Neural Information Processing Systems, pp. 2348-2358, 2017. +Sergey Levine, Nolan Wagener, and Pieter Abbeel. Learning contact-rich manipulation skills with guided policy search. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 156-163. IEEE, 2015. +Yuxi Li. Deep reinforcement learning: An overview. CoRR, abs/1701.07274, 2017. URL http://arxiv.org/abs/1701.07274. +Zhize Li and Jian Li. A simple proximal stochastic gradient method for nonsmooth nonconvex optimization. In Advances in Neural Information Processing Systems, pp. 5569-5579, 2018. +Boyi Liu, Qi Cai, Zhuoran Yang, and Zhaoran Wang. Neural proximal/trust region policy optimization attains globally optimal policy. In Advances in Neural Information Processing Systems, 2019. +Yang Liu, Prajit Ramachandran, Qiang Liu, and Jian Peng. Stein variational policy gradient. CoRR, abs/1704.02399, 2017. URL http://arxiv.org/abs/1704.02399. +Alberto Maria Metelli, Matteo Papini, Francesco Faccio, and Marcello Restelli. Policy optimization via importance sampling. In Advances in Neural Information Processing Systems, pp. 5447-5459, 2018. +Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015. +Lam M Nguyen, Jie Liu, Katya Scheinberg, and Martin Takáč. 
Sarah: A novel method for machine learning problems using stochastic recursive gradient. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2613-2621. JMLR.org, 2017. +Lam M Nguyen, Marten van Dijk, Dzung T Phan, Phuong Ha Nguyen, Tsui-Wei Weng, and Jayant R Kalagnanam. Optimal finite-sum smooth non-convex optimization with sarah. CoRR, abs/1901.07648, 2019. URL http://arxiv.org/abs/1901.07648. + +Matteo Papini, Damiano Binaghi, Giuseppe Canonaco, Matteo Pirotta, and Marcello Restelli. Stochastic variance-reduced policy gradient. In International Conference on Machine Learning, pp. 4023-4032, 2018. +Jan Peters and Stefan Schaal. Natural actor-critic. Neurocomputing, 71(7-9):1180-1190, 2008a. +Jan Peters and Stefan Schaal. Reinforcement learning of motor skills with policy gradients. *Neural Networks*, 21(4):682-697, 2008b. +Matteo Pirotta, Marcello Restelli, and Luca Bascetta. Adaptive step-size for policy gradient methods. In Advances in Neural Information Processing Systems, pp. 1394-1402, 2013. +Sashank J Reddi, Ahmed Hefny, Suvrit Sra, Barnabas Poczos, and Alex Smola. Stochastic variance reduction for nonconvex optimization. In International Conference on Machine Learning, pp. 314-323, 2016a. +Sashank J Reddi, Suvrit Sra, Barnabas Poczos, and Alexander J Smola. Proximal stochastic methods for nonsmooth nonconvex finite-sum optimization. In Advances in Neural Information Processing Systems, pp. 1145-1153, 2016b. +Alfréd Rényi et al. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics. The Regents of the University of California, 1961. +Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pp. 400-407, 1951. +John Schulman, Sergey Levine, Pieter Abbeel, Michael I Jordan, and Philipp Moritz. Trust region policy optimization. 
In International Conference on Machine Learning, volume 37, pp. 1889-1897, 2015a. +John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. CoRR, abs/1506.02438, 2015b. URL https://arxiv.org/abs/1506.02438. +Frank Sehnke, Christian Osendorfer, Thomas Rückstieß, Alex Graves, Jan Peters, and Jürgen Schmidhuber. Policy gradients with parameter-based exploration for control. In International Conference on Artificial Neural Networks, pp. 387-396. Springer, 2008. +Frank Sehnke, Christian Osendorfer, Thomas Rückstieß, Alex Graves, Jan Peters, and Jürgen Schmidhuber. Parameter-exploring policy gradients. *Neural Networks*, 23(4):551-559, 2010. +Shai Shalev-Shwartz, Shaked Shammah, and Amnon Shashua. Safe, multi-agent, reinforcement learning for autonomous driving. CoRR, abs/1610.03295, 2016. URL http://arxiv.org/abs/1610.03295. +Zebang Shen, Alejandro Ribeiro, Hamed Hassani, Hui Qian, and Chao Mi. Hessian aided policy gradient. In International Conference on Machine Learning, pp. 5729-5738, 2019. +David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In International Conference on Machine Learning, 2014. +David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. Nature, 550(7676):354, 2017. +Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 2018. +Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, pp. 1057-1063, 2000. + +George Tucker, Surya Bhupatiraju, Shixiang Gu, Richard Turner, Zoubin Ghahramani, and Sergey Levine. 
The mirage of action-dependent baselines in reinforcement learning. In International Conference on Machine Learning, pp. 5022-5031, 2018. +Lingxiao Wang, Qi Cai, Zhuoran Yang, and Zhaoran Wang. Neural policy gradient methods: Global optimality and rates of convergence. arXiv preprint arXiv:1909.01150, 2019. +Zhe Wang, Kaiyi Ji, Yi Zhou, Yingbin Liang, and Vahid Tarokh. Spiderboost: A class of faster variance-reduced algorithms for nonconvex optimization. CoRR, abs/1810.10690, 2018. URL http://arxiv.org/abs/1810.10690. +Christopher JCH Watkins and Peter Dayan. Q-learning. Machine learning, 8(3-4):279-292, 1992. +Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992. +Cathy Wu, Aravind Rajeswaran, Yan Duan, Vikash Kumar, Alexandre M Bayen, Sham Kakade, Igor Mordatch, and Pieter Abbeel. Variance reduction for policy gradient with action-dependent factorized baselines. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=H1tSsb-AW. +Lin Xiao and Tong Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057-2075, 2014. +Pan Xu, Felicia Gao, and Quanquan Gu. An improved convergence analysis of stochastic variance-reduced policy gradient. In International Conference on Uncertainty in Artificial Intelligence, 2019. +Tianbing Xu, Qiang Liu, and Jian Peng. Stochastic variance reduction for policy gradient estimation. CoRR, abs/1710.06034, 2017. URL http://arxiv.org/abs/1710.06034. +Long Yang and Yu Zhang. Policy optimization with stochastic mirror descent. arXiv preprint arXiv:1906.10462, 2019. +Zhuoran Yang, Yongxin Chen, Mingyi Hong, and Zhaoran Wang. On the global convergence of actor-critic: A case for linear quadratic regulator with ergodic cost. In Advances in Neural Information Processing Systems, 2019. +Huizhuo Yuan, Chris Junchi Li, Yuhao Tang, and Yuren Zhou. 
Policy optimization via stochastic recursive gradient algorithm, 2019. URL https://openreview.net/forum?id=rJl3S2A9t7. +Tingting Zhao, Hirotaka Hachiya, Gang Niu, and Masashi Sugiyama. Analysis and improvement of policy gradient estimation. In Advances in Neural Information Processing Systems, pp. 262-270, 2011. +Tingting Zhao, Hirotaka Hachiya, Voot Tangkaratt, Jun Morimoto, and Masashi Sugiyama. Efficient sample reuse in policy gradients with parameter-based exploration. Neural computation, 25(6): 1512-1547, 2013. +Dongruo Zhou, Pan Xu, and Quanquan Gu. Stochastic nested variance reduced gradient descent for nonconvex optimization. In Advances in Neural Information Processing Systems, pp. 3922-3933, 2018. +Difan Zou, Yuan Cao, Dongruo Zhou, and Quanquan Gu. Stochastic gradient descent optimizes over-parameterized deep relu networks. Machine Learning, 2019. + +# A EXTENSION TO PARAMETER-BASED EXPLORATION + +Although SRVR-PG is proposed for action-based policy gradient, it can be easily extended to the policy gradient algorithm with parameter-based exploration (PGPE) (Sehnke et al., 2008). Unlike action-based policy gradient in previous sections, PGPE does not directly optimize the policy parameter $\theta$ but instead assumes that it follows a prior distribution with hyper-parameter $\rho$ : $\theta \sim p(\theta|\rho)$ . The expected return under the policy induced by the hyper-parameter $\rho$ is formulated as follows2 + +$$ +J (\boldsymbol {\rho}) = \int \int p (\boldsymbol {\theta} | \boldsymbol {\rho}) p (\tau | \boldsymbol {\theta}) \mathcal {R} (\tau) \mathrm {d} \tau \mathrm {d} \boldsymbol {\theta}. \tag {A.1} +$$ + +PGPE aims to find the hyper-parameter $\rho^{*}$ that maximizes the performance function $J(\pmb {\rho})$ . 
Since $p(\pmb {\theta}|\pmb {\rho})$ is stochastic and can provide sufficient exploration, we can choose $\pi_{\pmb{\theta}}(a|s) = \delta (a - \mu_{\pmb{\theta}}(s))$ to be a deterministic policy, where $\delta$ is the Dirac delta function and $\mu_{\pmb{\theta}}(\cdot)$ is a deterministic function. For instance, a linear deterministic policy is defined as $\pi_{\pmb{\theta}}(a|s) = \delta (a - \pmb{\theta}^{\top}s)$ (Zhao et al., 2011; Metelli et al., 2018). Given the policy parameter $\pmb{\theta}$, a trajectory $\tau$ is determined only by the initial state distribution and the transition probability. Therefore, PGPE is called a parameter-based exploration approach. Similar to action-based policy gradient methods, we can apply gradient ascent to find $\rho^{*}$. In the $k$-th iteration, we update $\rho_{k}$ by $\rho_{k + 1} = \rho_{k} + \eta \nabla_{\rho}J(\rho_{k})$. The exact gradient of $J(\rho)$ with respect to $\pmb{\rho}$ is given by

$$
\nabla_ {\boldsymbol {\rho}} J (\boldsymbol {\rho}) = \int \int p (\boldsymbol {\theta} | \boldsymbol {\rho}) p (\tau | \boldsymbol {\theta}) \nabla_ {\boldsymbol {\rho}} \log p (\boldsymbol {\theta} | \boldsymbol {\rho}) \mathcal {R} (\tau) \mathrm {d} \tau \mathrm {d} \boldsymbol {\theta}.
$$

To approximate $\nabla_{\pmb{\rho}}J(\pmb{\rho})$, we first sample $N$ policy parameters $\{\pmb{\theta}_i\}$ from $p(\pmb{\theta}|\pmb{\rho})$.
Then we sample one trajectory $\tau_i$ for each $\pmb{\theta}_i$ and use the following empirical average to approximate $\nabla_{\pmb{\rho}}J(\pmb{\rho})$:

$$
\widehat {\nabla} _ {\boldsymbol {\rho}} J (\boldsymbol {\rho}) = \frac {1}{N} \sum_ {i = 1} ^ {N} \nabla_ {\boldsymbol {\rho}} \log p \left(\boldsymbol {\theta} _ {i} \mid \boldsymbol {\rho}\right) \sum_ {h = 0} ^ {H} \gamma^ {h} r \left(s _ {h} ^ {i}, a _ {h} ^ {i}\right) := \frac {1}{N} \sum_ {i = 1} ^ {N} g \left(\tau_ {i} \mid \boldsymbol {\rho}\right), \tag {A.2}
$$

where $\gamma \in [0,1)$ is the discount factor. Compared with the PGT/GPOMDP estimator in Section 2, the likelihood term $\nabla_{\rho}\log p(\pmb{\theta}_i|\pmb{\rho})$ in (A.2) for PGPE is independent of the horizon $H$.

Algorithm 1 can be directly applied to the PGPE setting, where we replace the policy parameter $\pmb{\theta}$ with the hyper-parameter $\pmb{\rho}$. When we need to sample $N$ trajectories, we first sample $N$ policy parameters $\{\pmb{\theta}_i\}$ from $p(\pmb{\theta}|\pmb{\rho})$. Since the policy is deterministic for a given $\pmb{\theta}_i$, we sample one trajectory $\tau_i$ from each policy $p(\tau|\pmb{\theta}_i)$. The recursive semi-stochastic gradient is given by

$$
\mathbf {v} _ {t} ^ {s + 1} = \frac {1}{B} \sum_ {j = 1} ^ {B} g \left(\tau_ {j} \mid \boldsymbol {\rho} _ {t} ^ {s + 1}\right) - \frac {1}{B} \sum_ {j = 1} ^ {B} g _ {\omega} \left(\tau_ {j} \mid \boldsymbol {\rho} _ {t - 1} ^ {s + 1}\right) + \mathbf {v} _ {t - 1} ^ {s + 1}, \tag {A.3}
$$

where $g_{\omega}(\tau_j|\pmb{\rho}_{t - 1}^{s + 1})$ is the gradient estimator with importance weight defined in the same way as in (3.2). We call this variance reduced parameter-based algorithm SRVR-PG-PE, which is displayed in Algorithm 2.
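The PGPE estimate (A.2) is straightforward to write down. The sketch below assumes a Gaussian prior $p(\pmb{\theta}|\pmb{\rho}) = N(\pmb{\rho}, \sigma_p^2 I)$ for concreteness (the prior choice, $\sigma_p$, and all names are illustrative, not from the paper's code); for this prior the score is $\nabla_{\pmb{\rho}}\log p(\pmb{\theta}|\pmb{\rho}) = (\pmb{\theta} - \pmb{\rho})/\sigma_p^2$.

```python
import numpy as np

# Sketch of the PGPE gradient estimate (A.2), assuming an isotropic
# Gaussian prior p(theta|rho) = N(rho, sigma_p^2 I), for which
# grad_rho log p(theta|rho) = (theta - rho) / sigma_p^2.
# rewards[i] is the reward sequence of the single trajectory drawn with
# policy parameter thetas[i].

def pgpe_gradient(rho, thetas, rewards, sigma_p=1.0, gamma=0.99):
    """(1/N) * sum_i grad_rho log p(theta_i|rho) * discounted return of tau_i."""
    grad = np.zeros_like(rho)
    for theta_i, reward_seq in zip(thetas, rewards):
        score = (theta_i - rho) / sigma_p ** 2  # independent of the horizon H
        ret = sum(gamma ** h * r for h, r in enumerate(reward_seq))
        grad += score * ret
    return grad / len(thetas)
```

Note that the score term depends only on the sampled parameter $\pmb{\theta}_i$ and not on the trajectory length, which is exactly the horizon-independence of the likelihood term noted above.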
+ +Under similar assumptions on the parameter distribution $p(\theta|\rho)$ , as Assumptions 4.1, 4.3 and 4.4, we can easily prove that SRVR-PG-PE converges to a stationary point of $J(\pmb{\rho})$ with $O(1/\epsilon^{3/2})$ sample complexity. In particular, we assume the policy parameter $\pmb{\theta}$ follows the distribution $p(\pmb{\theta}|\pmb{\rho})$ and we update our estimation of $\pmb{\rho}$ based on the semi-stochastic gradient in (A.3). Recall the gradient $\widehat{\nabla}_{\pmb{\rho}}J(\pmb{\rho})$ derived in (A.2). Since the policy in SRVR-PG-PE is deterministic, we only need to make the boundedness assumption on $p(\pmb{\theta}|\pmb{\rho})$ . In particular, we assume that + +1. $\| \nabla_{\pmb{\rho}}\log p(\pmb {\theta}|\pmb {\rho})\| _2$ and $\| \nabla_{\pmb{\rho}}^{2}\log p(\pmb {\theta}|\pmb {\rho})\|_{2}$ are bounded by constants in a similar way to Assumption 4.1; +2. the gradient estimator $g(\tau |\pmb {\rho}) = \nabla_{\pmb{\rho}}\log p(\pmb {\theta}|\pmb {\rho})\sum_{h = 0}^{H}\gamma^{h}r(s_{h},a_{h})$ has bounded variance; +3. and the importance weight $\omega (\tau_j|\pmb{\rho}_{t - 1}^{s + 1},\pmb{\rho}_t^{s + 1}) = p(\pmb {\theta}_j|\pmb{\rho}_{t - 1}^{s + 1}) / p(\pmb {\theta}_j|\pmb {\rho}_t^{s + 1})$ has bounded variance in a similar way to Assumption 4.4. + +Then the same gradient complexity $O(1 / \epsilon^{3 / 2})$ for SRVR-PG-PE can be proved in the same way as the proof of Theorem 4.5 and Corollary 4.7. Since the analysis is almost the same as that of SRVR-PG, we omit the proof of the convergence of SRVR-PG-PE. In fact, according to the analysis in Zhao et al. (2011); Metelli et al. (2018), all the three assumptions listed above can be easily verified under a Gaussian prior for $\theta$ and a linear deterministic policy. 
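For assumption (3) in particular, under a Gaussian prior both the behavioral and the target parameter distributions share the same variance, so the condition of Cortes et al. (2010) discussed in Section 4.2 applies and the weight has bounded variance. A minimal sketch of the weight itself, assuming an isotropic Gaussian prior $p(\pmb{\theta}|\pmb{\rho}) = N(\pmb{\rho}, \sigma_p^2 I)$ (an illustrative choice; the function name and fixed $\sigma_p$ are not from the paper):

```python
import numpy as np

# Closed form of the parameter-based importance weight
# omega = p(theta|rho1) / p(theta|rho2) under an isotropic Gaussian prior
# p(theta|rho) = N(rho, sigma_p^2 I). Computed via the log-density
# difference for numerical stability; the normalizing constants cancel.

def param_importance_weight(theta, rho1, rho2, sigma_p=1.0):
    log_w = (np.sum((theta - rho2) ** 2)
             - np.sum((theta - rho1) ** 2)) / (2 * sigma_p ** 2)
    return np.exp(log_w)
```

Because the two densities share $\sigma_p$, the quadratic terms in $\pmb{\theta}$ cancel and only a shift in the exponent remains, which is why the weight stays well behaved as $\pmb{\rho}_{t-1}^{s+1}$ and $\pmb{\rho}_t^{s+1}$ approach each other.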
Algorithm 2 Stochastic Recursive Variance Reduced Policy Gradient with Parameter-based Exploration (SRVR-PG-PE)
1: Input: number of epochs $S$ , epoch size $m$ , step size $\eta$ , batch size $N$ , mini-batch size $B$ , gradient estimator $g$ , initial parameter $\rho_{m}^{0} \coloneqq \widetilde{\rho}^{0} \coloneqq \rho_{0}$
2: for $s = 0, \dots, S - 1$ do
3: $\rho_{0}^{s + 1} = \widetilde{\rho}^{s}$
4: Sample $N$ policy parameters $\{\theta_{i}\}$ from $p(\cdot | \widetilde{\rho}^{s})$
5: Sample one trajectory $\tau_{i}$ from each policy $\pi_{\theta_{i}}$
6: $\mathbf{v}_{0}^{s + 1} = \widehat{\nabla}_{\rho} J(\widetilde{\rho}^{s}) \coloneqq \frac{1}{N} \sum_{i = 1}^{N} g(\tau_{i} | \widetilde{\rho}^{s})$
7: $\rho_{1}^{s + 1} = \rho_{0}^{s + 1} + \eta \mathbf{v}_{0}^{s + 1}$
8: for $t = 1, \dots, m - 1$ do
9: Sample $B$ policy parameters $\{\theta_{j}\}$ from $p(\cdot | \rho_{t}^{s + 1})$
10: Sample one trajectory $\tau_{j}$ from each policy $\pi_{\theta_{j}}$
11: $\mathbf{v}_{t}^{s + 1} = \mathbf{v}_{t - 1}^{s + 1} + \frac{1}{B} \sum_{j = 1}^{B} (g(\tau_{j} | \rho_{t}^{s + 1}) - g_{\omega}(\tau_{j} | \rho_{t - 1}^{s + 1}))$
12: $\rho_{t + 1}^{s + 1} = \rho_{t}^{s + 1} + \eta \mathbf{v}_{t}^{s + 1}$
13: end for
14: end for
15: return $\rho_{\mathrm{out}}$ , which is uniformly picked from $\{\rho_{t}^{s}\}_{t = 0, \dots, m; s = 0, \dots, S}$

# B PROOF OF THE MAIN THEORY

In this section, we provide the proofs of the theoretical results for SRVR-PG (Algorithm 1). Before we start the proof of Theorem 4.5, we first lay down the following key lemma that controls the variance of the importance sampling weight $\omega$ .

Lemma B.1. For any $\pmb{\theta}_1, \pmb{\theta}_2 \in \mathbb{R}^d$ , let $\omega_{0:h}\big(\tau|\pmb{\theta}_1, \pmb{\theta}_2\big) = p(\tau_h|\pmb{\theta}_1)/p(\tau_h|\pmb{\theta}_2)$ , where $\tau_h$ is a truncated trajectory of $\tau$ up to step $h$ .
Under Assumptions 4.1 and 4.4, it holds that + +$$ +\operatorname {V a r} \left(\omega_ {0: h} (\tau | \boldsymbol {\theta} _ {1}, \boldsymbol {\theta} _ {2})\right) \leq C _ {\omega} \| \boldsymbol {\theta} _ {1} - \boldsymbol {\theta} _ {2} \| _ {2} ^ {2}, +$$ + +where $C_{\omega} = h(2hG^{2} + M)(W + 1)$ + +Recall that in Assumption 4.4 we assume the variance of the importance weight is upper bounded by a constant $W$ . Based on this assumption, Lemma B.1 further bounds the variance of the importance weight via the distance between the behavioral and the target policies. As the algorithm converges, these two policies will be very close and the bound in Lemma B.1 could be much tighter than the constant bound. + +Proof of Theorem 4.5. By plugging the definition of the projection operator in (3.3) into the update rule $\pmb{\theta}_{t + 1}^{s + 1} = \mathcal{P}_{\Theta}\bigl (\pmb{\theta}_t^{s + 1} + \eta \mathbf{v}_t^{s + 1}\bigr)$ , we have + +$$ +\boldsymbol {\theta} _ {t + 1} ^ {s + 1} = \underset {\mathbf {u} \in \mathbb {R} ^ {d}} {\operatorname {a r g m i n}} \mathbb {1} _ {\boldsymbol {\Theta}} (\mathbf {u}) + 1 / (2 \eta) \left\| \mathbf {u} - \boldsymbol {\theta} _ {t} ^ {s + 1} \right\| _ {2} ^ {2} - \langle \mathbf {v} _ {t} ^ {s + 1}, \mathbf {u} \rangle . \tag {B.1} +$$ + +Similar to the generalized projected gradient $\mathcal{G}_{\eta}(\pmb{\theta})$ defined in (3.4), we define $\widetilde{\mathcal{G}}_t^{s+1}$ to be a (stochastic) gradient mapping based on the recursive gradient estimator $\mathbf{v}_t^{s+1}$ : + +$$ +\widetilde {\mathcal {G}} _ {t} ^ {s + 1} = \frac {1}{\eta} \left(\boldsymbol {\theta} _ {t + 1} ^ {s + 1} - \boldsymbol {\theta} _ {t} ^ {s + 1}\right) = \frac {1}{\eta} \left(\mathcal {P} _ {\boldsymbol {\Theta}} \left(\boldsymbol {\theta} _ {t} ^ {s + 1} + \eta \mathbf {v} _ {t} ^ {s + 1}\right) - \boldsymbol {\theta} _ {t} ^ {s + 1}\right). 
\tag {B.2}
$$

The definition of $\widetilde{\mathcal{G}}_t^{s + 1}$ differs from $\mathcal{G}_{\eta}(\pmb{\theta}_t^{s + 1})$ only in the semi-stochastic gradient term $\mathbf{v}_t^{s + 1}$, while the latter uses the full gradient $\nabla J(\pmb{\theta}_t^{s + 1})$. Note that $\mathbb{1}_{\Theta}(\cdot)$ is convex but not smooth. We assume that $\mathbf{p} \in \partial \mathbb{1}_{\Theta}(\boldsymbol{\theta}_{t+1}^{s+1})$ is a sub-gradient of $\mathbb{1}_{\Theta}(\cdot)$. According to the optimality condition of (B.1), we have $\mathbf{p} + 1 / \eta (\boldsymbol{\theta}_{t+1}^{s+1} - \boldsymbol{\theta}_t^{s+1}) - \mathbf{v}_t^{s+1} = \mathbf{0}$. Further by the convexity of $\mathbb{1}_{\Theta}(\cdot)$, we have

$$
\begin{array}{l} \mathbb {1} _ {\boldsymbol {\Theta}} \left(\boldsymbol {\theta} _ {t + 1} ^ {s + 1}\right) \leq \mathbb {1} _ {\boldsymbol {\Theta}} \left(\boldsymbol {\theta} _ {t} ^ {s + 1}\right) + \langle \mathbf {p}, \boldsymbol {\theta} _ {t + 1} ^ {s + 1} - \boldsymbol {\theta} _ {t} ^ {s + 1} \rangle \\ = \mathbb {1} _ {\Theta} \left(\boldsymbol {\theta} _ {t} ^ {s + 1}\right) - \langle 1 / \eta \left(\boldsymbol {\theta} _ {t + 1} ^ {s + 1} - \boldsymbol {\theta} _ {t} ^ {s + 1}\right) - \mathbf {v} _ {t} ^ {s + 1}, \boldsymbol {\theta} _ {t + 1} ^ {s + 1} - \boldsymbol {\theta} _ {t} ^ {s + 1} \rangle . \tag {B.3} \\ \end{array}
$$

By Proposition 4.2, $J(\pmb{\theta})$ is $L$ -smooth, which by definition directly implies

$$
J \left(\boldsymbol {\theta} _ {t + 1} ^ {s + 1}\right) \geq J \left(\boldsymbol {\theta} _ {t} ^ {s + 1}\right) + \left\langle \nabla J \left(\boldsymbol {\theta} _ {t} ^ {s + 1}\right), \boldsymbol {\theta} _ {t + 1} ^ {s + 1} - \boldsymbol {\theta} _ {t} ^ {s + 1} \right\rangle - \frac {L}{2} \| \boldsymbol {\theta} _ {t + 1} ^ {s + 1} - \boldsymbol {\theta} _ {t} ^ {s + 1} \| _ {2} ^ {2}.
+$$ + +For the simplification of presentation, let us define the notation $\Phi (\pmb {\theta}) = J(\pmb {\theta}) - \mathbb{1}_{\Theta}(\pmb {\theta})$ . Then according to the definition of $\mathbb{1}_{\Theta}$ we have $\mathrm{argmax}_{\pmb {\theta}\in \mathbb{R}^d}\Phi (\pmb {\theta}) = \mathrm{argmax}_{\pmb {\theta}\in \Theta}J(\pmb {\theta}):= \pmb{\theta}^*$ . Combining the above inequality with (B.3), we have + +$$ +\begin{array}{l} \Phi \left(\boldsymbol {\theta} _ {t + 1} ^ {s + 1}\right) \geq \Phi \left(\boldsymbol {\theta} _ {t} ^ {s + 1}\right) + \left\langle \nabla J \left(\boldsymbol {\theta} _ {t} ^ {s + 1}\right) - \mathbf {v} _ {t} ^ {s + 1}, \boldsymbol {\theta} _ {t + 1} ^ {s + 1} - \boldsymbol {\theta} _ {t} ^ {s + 1} \right\rangle + \left(\frac {1}{\eta} - \frac {L}{2}\right) \| \boldsymbol {\theta} _ {t + 1} ^ {s + 1} - \boldsymbol {\theta} _ {t} ^ {s + 1} \| _ {2} ^ {2} \\ = \Phi (\boldsymbol {\theta} _ {t} ^ {s + 1}) + \left\langle \nabla J (\boldsymbol {\theta} _ {t} ^ {s + 1}) - \mathbf {v} _ {t} ^ {s + 1}, \eta \widetilde {\mathcal {G}} _ {t} ^ {s + 1} \right\rangle + \eta \| \widetilde {\mathcal {G}} _ {t} ^ {s + 1} \| _ {2} ^ {2} - \frac {L}{2} \| \boldsymbol {\theta} _ {t + 1} ^ {s + 1} - \boldsymbol {\theta} _ {t} ^ {s + 1} \| _ {2} ^ {2} \\ \geq \Phi (\boldsymbol {\theta} _ {t} ^ {s + 1}) - \frac {\eta}{2} \| \nabla J (\boldsymbol {\theta} _ {t} ^ {s + 1}) - \mathbf {v} _ {t} ^ {s + 1} \| _ {2} ^ {2} + \frac {\eta}{2} \| \widetilde {\mathcal {G}} _ {t} ^ {s + 1} \| _ {2} ^ {2} - \frac {L}{2} \| \boldsymbol {\theta} _ {t + 1} ^ {s + 1} - \boldsymbol {\theta} _ {t} ^ {s + 1} \| _ {2} ^ {2} \\ = \Phi (\boldsymbol {\theta} _ {t} ^ {s + 1}) - \frac {\eta}{2} \| \nabla J (\boldsymbol {\theta} _ {t} ^ {s + 1}) - \mathbf {v} _ {t} ^ {s + 1} \| _ {2} ^ {2} + \frac {\eta}{4} \| \widetilde {\mathcal {G}} _ {t} ^ {s + 1} \| _ {2} ^ {2} + \left(\frac {1}{4 \eta} - \frac {L}{2}\right) \| \boldsymbol {\theta} _ {t + 1} ^ {s + 1} - \boldsymbol 
{\theta} _ {t} ^ {s + 1} \| _ {2} ^ {2} \\ \geq \Phi (\boldsymbol {\theta} _ {t} ^ {s + 1}) - \frac {\eta}{2} \| \nabla J (\boldsymbol {\theta} _ {t} ^ {s + 1}) - \mathbf {v} _ {t} ^ {s + 1} \| _ {2} ^ {2} + \frac {\eta}{8} \| \mathcal {G} _ {\eta} (\boldsymbol {\theta} _ {t} ^ {s + 1}) \| _ {2} ^ {2} \\ - \frac {\eta}{4} \| \mathcal {G} _ {\eta} \left(\boldsymbol {\theta} _ {t} ^ {s + 1}\right) - \widetilde {\mathcal {G}} _ {t} ^ {s + 1} \| _ {2} ^ {2} + \left(\frac {1}{4 \eta} - \frac {L}{2}\right) \| \boldsymbol {\theta} _ {t + 1} ^ {s + 1} - \boldsymbol {\theta} _ {t} ^ {s + 1} \| _ {2} ^ {2}, \tag {B.4} \\ \end{array} +$$ + +where the second inequality holds due to Young's inequality and the third inequality holds due to the fact that $\| \mathcal{G}_{\eta}(\pmb{\theta}_t^{s + 1})\| _2^2\leq 2\| \widetilde{\mathcal{G}}_t^{s + 1}\| _2^2 +2\| \mathcal{G}_{\eta}(\pmb{\theta}_t^{s + 1}) - \widetilde{\mathcal{G}}_t^{s + 1}\| _2^2$ . Denote $\bar{\pmb{\theta}}_{t + 1}^{s + 1} = \mathrm{prox}_{\eta \mathbb{1}_{\Theta}}(\pmb{\theta}_t^{s + 1} + \eta \nabla J(\pmb{\theta}_t^{s + 1}))$ . 
By a similar argument to (B.3), we have

$$
\begin{array}{l} \mathbb {1} _ {\Theta} \left(\boldsymbol {\theta} _ {t + 1} ^ {s + 1}\right) \leq \mathbb {1} _ {\Theta} \left(\bar {\boldsymbol {\theta}} _ {t + 1} ^ {s + 1}\right) - \langle 1 / \eta \left(\boldsymbol {\theta} _ {t + 1} ^ {s + 1} - \boldsymbol {\theta} _ {t} ^ {s + 1}\right) - \mathbf {v} _ {t} ^ {s + 1}, \boldsymbol {\theta} _ {t + 1} ^ {s + 1} - \bar {\boldsymbol {\theta}} _ {t + 1} ^ {s + 1} \rangle , \\ \mathbb {1} _ {\Theta} \left(\bar {\boldsymbol {\theta}} _ {t + 1} ^ {s + 1}\right) \leq \mathbb {1} _ {\Theta} \left(\boldsymbol {\theta} _ {t + 1} ^ {s + 1}\right) - \langle 1 / \eta \left(\bar {\boldsymbol {\theta}} _ {t + 1} ^ {s + 1} - \boldsymbol {\theta} _ {t} ^ {s + 1}\right) - \nabla J \left(\boldsymbol {\theta} _ {t} ^ {s + 1}\right), \bar {\boldsymbol {\theta}} _ {t + 1} ^ {s + 1} - \boldsymbol {\theta} _ {t + 1} ^ {s + 1} \rangle . \\ \end{array}
$$

Adding the above two inequalities immediately yields $\| \bar{\pmb{\theta}}_{t + 1}^{s + 1} - \pmb{\theta}_{t + 1}^{s + 1}\| _2\leq \eta \| \nabla J(\pmb {\theta}_t^{s + 1}) - \mathbf{v}_t^{s + 1}\| _2$ , which further implies $\| \mathcal{G}_{\eta}(\pmb{\theta}_t^{s + 1}) - \widetilde{\mathcal{G}}_t^{s + 1}\| _2\leq \| \nabla J(\pmb {\theta}_t^{s + 1}) - \mathbf{v}_t^{s + 1}\| _2$ . Substituting this result into (B.4), we obtain

$$
\begin{array}{l} \Phi \left(\boldsymbol {\theta} _ {t + 1} ^ {s + 1}\right) \geq \Phi \left(\boldsymbol {\theta} _ {t} ^ {s + 1}\right) - \frac {3 \eta}{4} \left\| \nabla J \left(\boldsymbol {\theta} _ {t} ^ {s + 1}\right) - \mathbf {v} _ {t} ^ {s + 1} \right\| _ {2} ^ {2} + \frac {\eta}{8} \left\| \mathcal {G} _ {\eta} \left(\boldsymbol {\theta} _ {t} ^ {s + 1}\right) \right\| _ {2} ^ {2} \\ + \left(\frac {1}{4 \eta} - \frac {L}{2}\right) \| \boldsymbol {\theta} _ {t + 1} ^ {s + 1} - \boldsymbol {\theta} _ {t} ^ {s + 1} \| _ {2} ^ {2}.
\tag {B.5} \\ \end{array} +$$ + +We denote the index set of $\{\tau_j\}_{j = 1}^B$ in the $t$ -th inner iteration by $\mathcal{B}_t$ . Note that + +$$ +\begin{array}{l} \left\| \nabla J \left(\boldsymbol {\theta} _ {t} ^ {s + 1}\right) - \mathbf {v} _ {t} ^ {s + 1} \right\| _ {2} ^ {2} \\ = \left\| \nabla J \left(\boldsymbol {\theta} _ {t} ^ {s + 1}\right) - \mathbf {v} _ {t - 1} ^ {s + 1} + \frac {1}{B} \sum_ {j \in \mathcal {B} _ {t}} \left(g _ {\omega} \left(\tau_ {j} | \boldsymbol {\theta} _ {t - 1} ^ {s + 1}\right) - g \left(\tau_ {j} | \boldsymbol {\theta} _ {t} ^ {s + 1}\right)\right) \right\| _ {2} ^ {2} \\ = \left\| \nabla J \left(\boldsymbol {\theta} _ {t} ^ {s + 1}\right) - \nabla J \left(\boldsymbol {\theta} _ {t - 1} ^ {s + 1}\right) + \frac {1}{B} \sum_ {j \in \mathcal {B} _ {t}} \left(g _ {\omega} \left(\tau_ {j} \mid \boldsymbol {\theta} _ {t - 1} ^ {s + 1}\right) - g \left(\tau_ {j} \mid \boldsymbol {\theta} _ {t} ^ {s + 1}\right)\right) + \nabla J \left(\boldsymbol {\theta} _ {t - 1} ^ {s + 1}\right) - \mathbf {v} _ {t - 1} ^ {s + 1} \right\| _ {2} ^ {2} \\ = \left\| \nabla J \left(\boldsymbol {\theta} _ {t} ^ {s + 1}\right) - \nabla J \left(\boldsymbol {\theta} _ {t - 1} ^ {s + 1}\right) + \frac {1}{B} \sum_ {j \in \mathcal {B} _ {t}} \left(g _ {\omega} \left(\tau_ {j} | \boldsymbol {\theta} _ {t - 1} ^ {s + 1}\right) - g \left(\tau_ {j} | \boldsymbol {\theta} _ {t} ^ {s + 1}\right)\right) \right\| ^ {2} \\ + \frac {2}{B} \sum_ {j \in \mathcal {B} _ {t}} \left\langle \nabla J \left(\boldsymbol {\theta} _ {t} ^ {s + 1}\right) - \nabla J \left(\boldsymbol {\theta} _ {t - 1} ^ {s + 1}\right) + g _ {\omega} \left(\tau_ {j} \mid \boldsymbol {\theta} _ {t - 1} ^ {s + 1}\right) - g \left(\tau_ {j} \mid \boldsymbol {\theta} _ {t} ^ {s + 1}\right), \nabla J \left(\boldsymbol {\theta} _ {t - 1} ^ {s + 1}\right) - \mathbf {v} _ {t - 1} ^ {s + 1} \right\rangle \\ \end{array} +$$ + +$$ ++ \left\| \nabla J \left(\boldsymbol {\theta} _ {t - 1} ^ {s + 1}\right) 
- \mathbf {v} _ {t - 1} ^ {s + 1} \right\| _ {2} ^ {2}. \tag {B.6} +$$ + +Conditional on $\theta_t^{s + 1}$ , taking the expectation over $\mathcal{B}_t$ yields + +$$ +\mathbb {E} \left[ \left\langle \nabla J \left(\boldsymbol {\theta} _ {t} ^ {s + 1}\right) - g \left(\tau_ {j} \mid \boldsymbol {\theta} _ {t} ^ {s + 1}\right), \nabla J \left(\boldsymbol {\theta} _ {t - 1} ^ {s + 1}\right) - \mathbf {v} _ {t - 1} ^ {s + 1} \right\rangle \right] = 0. +$$ + +Similarly, taking the expectation over $\theta_t^{s+1}$ and the choice of $\mathcal{B}_t$ yields + +$$ +\mathbb {E} \left[ \left\langle \nabla J (\boldsymbol {\theta} _ {t - 1} ^ {s + 1}) - g _ {\omega} \left(\tau_ {j} | \boldsymbol {\theta} _ {t - 1} ^ {s + 1}\right), \nabla J (\boldsymbol {\theta} _ {t - 1} ^ {s + 1}) - \mathbf {v} _ {t - 1} ^ {s + 1} \right\rangle \right] = 0. +$$ + +Combining the above equations with (B.6), we obtain + +$$ +\begin{array}{l} \mathbb {E} \left[ \left\| \nabla J \left(\boldsymbol {\theta} _ {t} ^ {s + 1}\right) - \mathbf {v} _ {t} ^ {s + 1} \right\| _ {2} ^ {2} \right] \\ = \mathbb {E} \left\| \nabla J \left(\boldsymbol {\theta} _ {t} ^ {s + 1}\right) - \nabla J \left(\boldsymbol {\theta} _ {t - 1} ^ {s + 1}\right) + \frac {1}{B} \sum_ {j \in \mathcal {B} _ {t}} \left(g _ {\omega} \left(\tau_ {j} | \boldsymbol {\theta} _ {t - 1} ^ {s + 1}\right) - g \left(\tau_ {j} | \boldsymbol {\theta} _ {t} ^ {s + 1}\right)\right) \right\| _ {2} ^ {2} \\ + \mathbb {E} \| \nabla J (\boldsymbol {\theta} _ {t - 1} ^ {s + 1}) - \mathbf {v} _ {t - 1} ^ {s + 1} \| _ {2} ^ {2} \\ = \frac {1}{B ^ {2}} \sum_ {j \in \mathcal {B} _ {t}} \mathbb {E} \| \nabla J (\boldsymbol {\theta} _ {t} ^ {s + 1}) - \nabla J (\boldsymbol {\theta} _ {t - 1} ^ {s + 1}) + g _ {\omega} (\tau_ {j} | \boldsymbol {\theta} _ {t - 1} ^ {s + 1}) - g (\tau_ {j} | \boldsymbol {\theta} _ {t} ^ {s + 1}) \| _ {2} ^ {2} \\ + \mathbb {E} \| \nabla J \left(\boldsymbol {\theta} _ {t - 1} ^ {s + 1}\right) - \mathbf {v} _ {t - 1} ^ {s + 1} 
\| _ {2} ^ {2}, (B.7) \\ \leq \frac {1}{B ^ {2}} \sum_ {j \in \mathcal {B} _ {t}} \mathbb {E} \left\| g _ {\omega} \left(\tau_ {j} \mid \boldsymbol {\theta} _ {t - 1} ^ {s + 1}\right) - g \left(\tau_ {j} \mid \boldsymbol {\theta} _ {t} ^ {s + 1}\right) \right\| _ {2} ^ {2} + \left\| \nabla J \left(\boldsymbol {\theta} _ {t - 1} ^ {s + 1}\right) - \mathbf {v} _ {t - 1} ^ {s + 1} \right\| _ {2} ^ {2}, (B.8) \\ \end{array}
$$

where (B.7) is due to the fact that $\mathbb{E}\| \mathbf{x}_1 + \ldots +\mathbf{x}_n\| _2^2 = \mathbb{E}\| \mathbf{x}_1\| _2^2 + \ldots +\mathbb{E}\| \mathbf{x}_n\| _2^2$ for independent zero-mean random variables $\mathbf{x}_1,\dots ,\mathbf{x}_n$, and (B.8) holds due to the fact that $\mathbb{E}\| \mathbf{x} - \mathbb{E}\mathbf{x}\| _2^2\leq \mathbb{E}\| \mathbf{x}\| _2^2$. For the first term, we have $\| g_{\omega}(\tau_j|\pmb{\theta}_{t - 1}^{s + 1}) - g(\tau_j|\pmb{\theta}_t^{s + 1})\| _2\leq$ $\left\| g_{\omega}(\tau_j|\pmb{\theta}_{t - 1}^{s + 1}) - g(\tau_j|\pmb{\theta}_{t - 1}^{s + 1})\right\| _2 + L\left\| \pmb{\theta}_{t - 1}^{s + 1} - \pmb{\theta}_t^{s + 1}\right\| _2$ by the triangle inequality and Proposition 4.2.
+ +$$ +\begin{array}{l} \mathbb {E} \big [ \big \| g _ {\omega} \big (\tau_ {j} | \pmb {\theta} _ {t - 1} ^ {s + 1} \big) - g \big (\tau_ {j} | \pmb {\theta} _ {t - 1} ^ {s + 1} \big) \big \| _ {2} ^ {2} \big ] = \mathbb {E} \bigg [ \bigg \| \sum_ {h = 0} ^ {H - 1} (\omega_ {0: h} - 1) \bigg [ \sum_ {t = 0} ^ {h} \nabla_ {\pmb {\theta}} \log \pi_ {\pmb {\theta}} (a _ {t} ^ {i} | s _ {t} ^ {i}) \bigg ] \gamma^ {h} r (s _ {h} ^ {i}, a _ {h} ^ {i}) \bigg \| _ {2} ^ {2} \bigg ] \\ = \sum_ {h = 0} ^ {H - 1} \mathbb {E} \left[ \left\| (\omega_ {0: h} - 1) \left[ \sum_ {t = 0} ^ {h} \nabla_ {\pmb {\theta}} \log \pi_ {\pmb {\theta}} (a _ {t} ^ {i} | s _ {t} ^ {i}) \right] \gamma^ {h} r (s _ {h} ^ {i}, a _ {h} ^ {i}) \right\| _ {2} ^ {2} \right] \\ \leq \sum_ {h = 0} ^ {H - 1} h ^ {2} (2 G ^ {2} + M) (W + 1) \left\| \boldsymbol {\theta} _ {t - 1} ^ {s + 1} - \boldsymbol {\theta} _ {t} ^ {s + 1} \right\| _ {2} ^ {2} \cdot h ^ {2} G ^ {2} \gamma^ {h} R \\ \leq \frac {2 4 R G ^ {2} (2 G ^ {2} + M) (W + 1) \gamma}{(1 - \gamma) ^ {5}} \left\| \boldsymbol {\theta} _ {t - 1} ^ {s + 1} - \boldsymbol {\theta} _ {t} ^ {s + 1} \right\| _ {2} ^ {2}, \tag {B.9} \\ \end{array} +$$ + +where in the second equality we used the fact that $\mathbb{E}[\nabla \log \pi_{\pmb{\theta}}(a|s)] = 0$ , the first inequality is due to Lemma B.1 and in the last inequality we use the fact that $\sum_{h=0}^{\infty} h^{4}\gamma^{h} = \gamma(\gamma^{3} + 11\gamma^{2} + 11\gamma + 1) / (1 - \gamma)^{5}$ for $|\gamma| < 1$ . 
Combining the results in (B.8) and (B.9), we get

$$
\begin{array}{l} \mathbb {E} \left\| \nabla J \left(\boldsymbol {\theta} _ {t} ^ {s + 1}\right) - \mathbf {v} _ {t} ^ {s + 1} \right\| _ {2} ^ {2} \leq \frac {C _ {\gamma}}{B} \left\| \boldsymbol {\theta} _ {t} ^ {s + 1} - \boldsymbol {\theta} _ {t - 1} ^ {s + 1} \right\| _ {2} ^ {2} + \left\| \nabla J \left(\boldsymbol {\theta} _ {t - 1} ^ {s + 1}\right) - \mathbf {v} _ {t - 1} ^ {s + 1} \right\| _ {2} ^ {2} \\ \leq \frac {C _ {\gamma}}{B} \sum_ {l = 1} ^ {t} \left\| \boldsymbol {\theta} _ {l} ^ {s + 1} - \boldsymbol {\theta} _ {l - 1} ^ {s + 1} \right\| _ {2} ^ {2} + \left\| \nabla J \left(\boldsymbol {\theta} _ {0} ^ {s + 1}\right) - \mathbf {v} _ {0} ^ {s + 1} \right\| _ {2} ^ {2}, \tag {B.10} \\ \end{array}
$$

which holds for $t = 1, \ldots, m - 1$, where $C_{\gamma} = 24RG^{2}(2G^{2} + M)(W + 1)\gamma / (1 - \gamma)^{5}$. According to Algorithm 1 and Assumption 4.3, we have

$$
\mathbb {E} \left\| \nabla J \left(\boldsymbol {\theta} _ {0} ^ {s + 1}\right) - \mathbf {v} _ {0} ^ {s + 1} \right\| _ {2} ^ {2} \leq \frac {\xi^ {2}}{N}. \tag {B.11}
$$

Substituting the above result into (B.5) yields

$$
\mathbb {E} _ {N, B} \left[ \Phi \left(\boldsymbol {\theta} _ {t + 1} ^ {s + 1}\right) \right] \geq \mathbb {E} _ {N, B} \left[ \Phi \left(\boldsymbol {\theta} _ {t} ^ {s + 1}\right) \right] + \frac {\eta}{8} \left\| \mathcal {G} _ {\eta} \left(\boldsymbol {\theta} _ {t} ^ {s + 1}\right) \right\| _ {2} ^ {2} + \left(\frac {1}{4 \eta} - \frac {L}{2}\right) \left\| \boldsymbol {\theta} _ {t + 1} ^ {s + 1} - \boldsymbol {\theta} _ {t} ^ {s + 1} \right\| _ {2} ^ {2}
$$

$$
- \frac {3 \eta C _ {\gamma}}{4 B} \mathbb {E} _ {N, B} \left[ \sum_ {l = 1} ^ {t} \left\| \boldsymbol {\theta} _ {l} ^ {s + 1} - \boldsymbol {\theta} _ {l - 1} ^ {s + 1} \right\| _ {2} ^ {2} \right] - \frac {3 \eta \xi^ {2}}{4 N}, \tag {B.12}
$$

for $t = 1,\dots ,m - 1$.
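The mechanism driving (B.8)-(B.10) is that the variance of the correction term $g(\tau_j|\pmb{\theta}_t^{s+1}) - g_\omega(\tau_j|\pmb{\theta}_{t-1}^{s+1})$ shrinks quadratically with the step $\|\pmb{\theta}_t^{s+1} - \pmb{\theta}_{t-1}^{s+1}\|_2$. The effect is easy to see on a toy per-sample gradient (importance weighting omitted; the objective and all constants below are illustrative, not the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(1)

def g(x, theta):
    # per-sample gradient of the toy objective f(theta) = E[x] * theta^2 / 2
    return x * theta

theta_prev, theta_curr = 1.0, 1.05  # two nearby inner-loop iterates
B, trials = 10, 20000

x = rng.normal(1.0, 1.0, size=(trials, B))  # fresh mini-batches of data
# correction term of the recursion v_t = v_{t-1} + mean_j (g(x_j, theta_t) - g(x_j, theta_prev))
corr = (g(x, theta_curr) - g(x, theta_prev)).mean(axis=1)
# vanilla mini-batch gradient estimate at theta_t, for comparison
plain = g(x, theta_curr).mean(axis=1)

# the correction's variance scales with |theta_t - theta_prev|^2, so it is far
# smaller than the variance of a fresh mini-batch gradient estimate
assert corr.var() < 0.01 * plain.var()
```

This is why the recursion only accumulates a sum of squared step lengths, as in (B.10), on top of the initial batch error (B.11).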
Recall Line 6 in Algorithm 1, where we update $\pmb{\theta}_1^{s + 1}$ with the average of a mini-batch of gradients $\mathbf{v}_0^{s + 1} = 1 / N\sum_{i = 1}^N g(\tau_i|\widetilde{\pmb{\theta}}^s)$ . Similar to (B.5), by smoothness of $J(\pmb {\theta})$ , we have

$$
\begin{array}{l} \Phi \left(\boldsymbol {\theta} _ {1} ^ {s + 1}\right) \geq \Phi \left(\boldsymbol {\theta} _ {0} ^ {s + 1}\right) - \frac {3 \eta}{4} \left\| \nabla J \left(\boldsymbol {\theta} _ {0} ^ {s + 1}\right) - \mathbf {v} _ {0} ^ {s + 1} \right\| _ {2} ^ {2} + \frac {\eta}{8} \left\| \mathcal {G} _ {\eta} \left(\boldsymbol {\theta} _ {0} ^ {s + 1}\right) \right\| _ {2} ^ {2} \\ + \left(\frac {1}{4 \eta} - \frac {L}{2}\right) \left\| \boldsymbol {\theta} _ {1} ^ {s + 1} - \boldsymbol {\theta} _ {0} ^ {s + 1} \right\| _ {2} ^ {2}. \\ \end{array}
$$

Further by (B.11), it holds that

$$
\mathbb {E} \left[ \Phi \left(\boldsymbol {\theta} _ {1} ^ {s + 1}\right) \right] \geq \mathbb {E} \left[ \Phi \left(\boldsymbol {\theta} _ {0} ^ {s + 1}\right) \right] - \frac {3 \eta \xi^ {2}}{4 N} + \frac {\eta}{8} \left\| \mathcal {G} _ {\eta} \left(\boldsymbol {\theta} _ {0} ^ {s + 1}\right) \right\| _ {2} ^ {2} + \left(\frac {1}{4 \eta} - \frac {L}{2}\right) \left\| \boldsymbol {\theta} _ {1} ^ {s + 1} - \boldsymbol {\theta} _ {0} ^ {s + 1} \right\| _ {2} ^ {2}.
\tag {B.13} +$$ + +Telescoping inequality (B.12) from $t = 1$ to $m - 1$ and combining the result with (B.13), we obtain + +$$ +\begin{array}{l} \mathbb {E} _ {N, B} \left[ \Phi \left(\boldsymbol {\theta} _ {m} ^ {s + 1}\right) \right] \geq \mathbb {E} _ {N, B} \left[ \Phi \left(\boldsymbol {\theta} _ {0} ^ {s + 1}\right) \right] + \frac {\eta}{8} \sum_ {t = 0} ^ {m - 1} \mathbb {E} _ {N} \left[ \left\| \mathcal {G} _ {\eta} \left(\boldsymbol {\theta} _ {t} ^ {s + 1}\right) \right\| _ {2} ^ {2} \right] - \frac {3 m \eta \xi^ {2}}{4 N} \\ + \left(\frac {1}{4 \eta} - \frac {L}{2}\right) \sum_ {t = 0} ^ {m - 1} \left\| \boldsymbol {\theta} _ {t + 1} ^ {s + 1} - \boldsymbol {\theta} _ {t} ^ {s + 1} \right\| _ {2} ^ {2} \\ - \frac {3 \eta C _ {\gamma}}{2 B} \mathbb {E} _ {N, B} \left[ \sum_ {t = 0} ^ {m - 1} \sum_ {l = 1} ^ {t} \left\| \boldsymbol {\theta} _ {l} ^ {s + 1} - \boldsymbol {\theta} _ {l - 1} ^ {s + 1} \right\| _ {2} ^ {2} \right] \\ \geq \mathbb {E} _ {N, B} \big [ \Phi \big (\pmb {\theta} _ {0} ^ {s + 1} \big) \big ] + \frac {\eta}{8} \sum_ {t = 0} ^ {m - 1} \mathbb {E} _ {N} \big [ \big \| \mathcal {G} _ {\eta} \big (\pmb {\theta} _ {t} ^ {s + 1} \big) \big \| _ {2} ^ {2} \big ] - \frac {3 m \eta \xi^ {2}}{4 N} \\ + \left(\frac {1}{4 \eta} - \frac {L}{2} - \frac {3 m \eta C _ {\gamma}}{2 B}\right) \sum_ {t = 0} ^ {m - 1} \left\| \boldsymbol {\theta} _ {t + 1} ^ {s + 1} - \boldsymbol {\theta} _ {t} ^ {s + 1} \right\| _ {2} ^ {2}. 
\tag {B.14} \\ \end{array}
$$

If we choose the step size $\eta$ and the mini-batch size $B$ such that

$$
\eta \leq \frac {1}{4 L}, \quad \frac {B}{m} \geq \frac {3 \eta C _ {\gamma}}{L} = \frac {72 \eta G ^ {2} (2 G ^ {2} + M) (W + 1) \gamma}{M (1 - \gamma) ^ {2}}, \tag {B.15}
$$

and note that $\theta_0^{s + 1} = \widetilde{\theta}^s$ and $\theta_{m}^{s + 1} = \widetilde{\theta}^{s + 1}$ , then (B.14) leads to

$$
\mathbb {E} _ {N} \left[ \Phi \left(\widetilde {\boldsymbol {\theta}} ^ {s + 1}\right) \right] \geq \mathbb {E} _ {N} \left[ \Phi \left(\widetilde {\boldsymbol {\theta}} ^ {s}\right) \right] + \frac {\eta}{8} \sum_ {t = 0} ^ {m - 1} \mathbb {E} _ {N} \left[ \left\| \mathcal {G} _ {\eta} \left(\boldsymbol {\theta} _ {t} ^ {s + 1}\right) \right\| _ {2} ^ {2} \right] - \frac {3 m \eta \xi^ {2}}{4 N}. \tag {B.16}
$$

Summing up the above inequality over $s = 0,\dots ,S - 1$ yields

$$
\frac {\eta}{8} \sum_ {s = 0} ^ {S - 1} \sum_ {t = 0} ^ {m - 1} \mathbb {E} \big [ \| \mathcal {G} _ {\eta} (\pmb {\theta} _ {t} ^ {s + 1}) \| _ {2} ^ {2} \big ] \leq \mathbb {E} \big [ \Phi (\widetilde {\pmb {\theta}} ^ {S}) \big ] - \mathbb {E} \big [ \Phi (\widetilde {\pmb {\theta}} ^ {0}) \big ] + \frac {3 S m \eta \xi^ {2}}{4 N},
$$

which immediately implies

$$
\mathbb {E} \left[ \left\| \mathcal {G} _ {\eta} \left(\boldsymbol {\theta} _ {\text {o u t}}\right) \right\| _ {2} ^ {2} \right] \leq \frac {8 \left(\mathbb {E} \left[ \Phi \left(\widetilde {\boldsymbol {\theta}} ^ {S}\right) \right] - \mathbb {E} \left[ \Phi \left(\widetilde {\boldsymbol {\theta}} ^ {0}\right) \right]\right)}{\eta S m} + \frac {6 \xi^ {2}}{N} \leq \frac {8 \left(\Phi \left(\boldsymbol {\theta} ^ {*}\right) - \Phi \left(\boldsymbol {\theta} _ {0}\right)\right)}{\eta S m} + \frac {6 \xi^ {2}}{N}.
$$

This completes the proof.

Proof of Corollary 4.7.
Based on the convergence results in Theorem 4.5, in order to ensure $\mathbb{E}\left[\left\|\nabla J(\theta_{\mathrm{out}})\right\|_2^2\right] \leq \epsilon$, we can choose $S, m$ and $N$ such that

$$
\frac {8 (J (\boldsymbol {\theta} ^ {*}) - J (\boldsymbol {\theta} _ {0}))}{\eta S m} = \frac {\epsilon}{2}, \quad \frac {6 \xi^ {2}}{N} = \frac {\epsilon}{2},
$$

which implies $Sm = O(1 / \epsilon)$ and $N = O(1 / \epsilon)$. Note that we have set $m = O(B)$. The total number of stochastic gradient evaluations $\mathcal{T}_g$ we need is

$$
\mathcal {T} _ {g} = S N + S m B = O \left(\frac {N}{B \epsilon} + \frac {B}{\epsilon}\right) = O \left(\frac {1}{\epsilon^ {3 / 2}}\right),
$$

where we set $B = 1 / \epsilon^{1 / 2}$.

# C PROOF OF TECHNICAL LEMMAS

In this section, we provide the proofs of the technical lemmas. We first prove the smoothness of the performance function $J(\theta)$.

Proof of Proposition 4.2. Recall the definition of PGT in (2.5). We first show the Lipschitzness of $g(\tau |\pmb{\theta})$ with baseline $b = 0$ as follows:

$$
\begin{array}{l} \| \nabla g (\tau | \boldsymbol {\theta}) \| _ {2} = \left\| \sum_ {h = 0} ^ {H - 1} \nabla_ {\boldsymbol {\theta}} ^ {2} \log \pi_ {\boldsymbol {\theta}} (a _ {h} | s _ {h}) \left(\sum_ {t = h} ^ {H - 1} \gamma^ {t} r (s _ {t}, a _ {t})\right) \right\| _ {2} \\ \leq \left(\sum_ {h = 0} ^ {H - 1} \gamma^ {h} \left\| \nabla_ {\boldsymbol {\theta}} ^ {2} \log \pi_ {\boldsymbol {\theta}} \left(a _ {h} \mid s _ {h}\right) \right\| _ {2}\right) \frac {R}{1 - \gamma} \\ \leq \frac {M R}{(1 - \gamma) ^ {2}}, \\ \end{array}
$$

where we used the fact that $0 < \gamma < 1$. When we have a nonzero baseline $b_{h}$, we can simply scale it with $\gamma^{h}$ and the above result still holds up to a constant multiplier.
+ +Since the PGT estimator is an unbiased estimator of the policy gradient $\nabla_{\pmb{\theta}}J(\pmb{\theta})$ , we have $\nabla_{\pmb{\theta}}J(\pmb{\theta}) = \mathbb{E}_{\tau}[g(\tau |\pmb{\theta})]$ and thus + +$$ +\begin{array}{l} \nabla_ {\boldsymbol {\theta}} ^ {2} J (\boldsymbol {\theta}) = \int_ {\tau} p (\tau | \boldsymbol {\theta}) \nabla_ {\boldsymbol {\theta}} g (\tau | \boldsymbol {\theta}) d \tau + \int_ {\tau} p (\tau | \boldsymbol {\theta}) g (\tau | \boldsymbol {\theta}) \nabla_ {\boldsymbol {\theta}} \log p (\tau | \boldsymbol {\theta}) d \tau \\ = \mathbb {E} _ {\tau} [ \nabla_ {\boldsymbol {\theta}} g (\tau | \boldsymbol {\theta}) ] + \mathbb {E} _ {\tau} [ g (\tau | \boldsymbol {\theta}) \nabla_ {\boldsymbol {\theta}} \log p (\tau | \boldsymbol {\theta}) ]. \tag {C.1} \\ \end{array} +$$ + +We have already bounded the norm of the first term by $MR / (1 - \gamma)^2$ . Now we take a look at the second term. Plugging the equivalent definition of $g(\tau |\pmb{\theta})$ in (2.6) yields + +$$ +\begin{array}{l} \mathbb {E} _ {\tau} \left[ g (\tau | \boldsymbol {\theta}) \nabla_ {\boldsymbol {\theta}} \log p (\tau | \boldsymbol {\theta}) \right] \\ = \int_ {\tau} \sum_ {h = 0} ^ {H - 1} \left(\sum_ {t = 0} ^ {h} \nabla_ {\boldsymbol {\theta}} \log \pi_ {\boldsymbol {\theta}} \left(a _ {t} \mid s _ {t}\right)\right) \gamma^ {h} r \left(s _ {h}, a _ {h}\right) \nabla_ {\boldsymbol {\theta}} \log p (\tau \mid \boldsymbol {\theta}) \cdot p (\tau \mid \boldsymbol {\theta}) d \tau \\ = \int_ {\tau} \sum_ {h = 0} ^ {H - 1} \left(\sum_ {t = 0} ^ {h} \nabla_ {\boldsymbol {\theta}} \log \pi_ {\boldsymbol {\theta}} \left(a _ {t} \mid s _ {t}\right)\right) \gamma^ {h} r \left(s _ {h}, a _ {h}\right) \sum_ {t ^ {\prime} = 0} ^ {H - 1} \nabla_ {\boldsymbol {\theta}} \log \pi_ {\boldsymbol {\theta}} \left(a _ {t ^ {\prime}} \mid s _ {t ^ {\prime}}\right) \cdot p (\tau \mid \boldsymbol {\theta}) d \tau \\ = \int_ {\tau} \sum_ {h = 0} ^ {H - 1} \left(\sum_ {t = 0} 
^ {h} \nabla_ {\boldsymbol {\theta}} \log \pi_ {\boldsymbol {\theta}} \left(a _ {t} \mid s _ {t}\right)\right) \gamma^ {h} r \left(s _ {h}, a _ {h}\right) \sum_ {t ^ {\prime} = 0} ^ {h} \nabla_ {\boldsymbol {\theta}} \log \pi_ {\boldsymbol {\theta}} \left(a _ {t ^ {\prime}} \mid s _ {t ^ {\prime}}\right) \cdot p (\tau | \boldsymbol {\theta}) d \tau , \tag {C.2} \\ \end{array} +$$ + +where the second equality is due to $\nabla_{\pmb{\theta}}P(s_{t^{\prime} + 1}|s_{t^{\prime}},a_{t^{\prime}}) = 0$ , and the last equality is due to the fact that for all $t^\prime >h$ it holds that + +$$ +\int_ {\tau} \sum_ {t = 0} ^ {h} \nabla_ {\boldsymbol {\theta}} \log \pi_ {\boldsymbol {\theta}} (a _ {t} | s _ {t}) \gamma^ {h} r (s _ {h}, a _ {h}) \nabla_ {\boldsymbol {\theta}} \log \pi_ {\boldsymbol {\theta}} (a _ {t ^ {\prime}} | s _ {t ^ {\prime}}) \cdot p (\tau | \boldsymbol {\theta}) d \tau = 0. +$$ + +Therefore, we have + +$$ +\| \mathbb {E} _ {\tau} [ g (\tau | \boldsymbol {\theta}) \nabla_ {\boldsymbol {\theta}} \log p (\tau | \boldsymbol {\theta}) ] \| _ {2} \leq \mathbb {E} _ {\tau} \left[ \sum_ {h = 0} ^ {H - 1} \sum_ {t = 0} ^ {h} G \gamma^ {h} R \times (h + 1) G \right] +$$ + +$$ +\begin{array}{l} = \sum_ {h = 0} ^ {H - 1} G ^ {2} R (h + 1) ^ {2} \gamma^ {h} \\ \leq \frac {2 G ^ {2} R}{(1 - \gamma) ^ {3}}. \\ \end{array} +$$ + +Putting the above pieces together, we can obtain + +$$ +\left\| \nabla_ {\pmb {\theta}} ^ {2} J (\pmb {\theta}) \right\| _ {2} \leq \frac {M R}{(1 - \gamma) ^ {2}} + \frac {2 G ^ {2} R}{(1 - \gamma) ^ {3}} := L, +$$ + +which implies that $J(\pmb{\theta})$ is $L$ -smooth with $L = MR / (1 - \gamma)^2 + 2G^2 R / (1 - \gamma)^3$ . 
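The second-to-last step above sums $(h + 1)^2\gamma^h$; the bound follows from the closed form $\sum_{h\geq 0}(h + 1)^2\gamma^h = (1 + \gamma)/(1 - \gamma)^3 \leq 2/(1 - \gamma)^3$, which a quick numeric check confirms:

```python
gamma = 0.95
# partial sum of sum_{h >= 0} (h+1)^2 * gamma^h; the tail beyond h = 5000 is negligible
lhs = sum((h + 1) ** 2 * gamma ** h for h in range(5000))
closed = (1 + gamma) / (1 - gamma) ** 3        # exact value of the series
assert abs(lhs - closed) < 1e-6 * closed
assert closed <= 2 / (1 - gamma) ** 3          # the bound used in the proof
```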
Similarly, we can bound the norm of the gradient estimator as follows:

$$
\| g (\tau | \boldsymbol {\theta}) \| _ {2} \leq \left\| \sum_ {h = 0} ^ {H - 1} \nabla_ {\boldsymbol {\theta}} \log \pi_ {\boldsymbol {\theta}} \left(a _ {h} \mid s _ {h}\right) \frac {\gamma^ {h} R (1 - \gamma^ {H - h})}{1 - \gamma} \right\| _ {2} \leq \frac {G R}{(1 - \gamma) ^ {2}},
$$

which completes the proof.

Lemma C.1 (Lemma 1 in Cortes et al. (2010)). Let $\omega(x) = P(x)/Q(x)$ be the importance weight for distributions $P$ and $Q$. Then $\mathbb{E}[\omega] = 1, \mathbb{E}[\omega^2] = d_2(P||Q)$, where $d_2(P||Q) = 2^{D_2(P||Q)}$ and $D_2(P||Q)$ is the Rényi divergence between distributions $P$ and $Q$. Note that this immediately implies $\mathrm{Var}(\omega) = d_2(P||Q) - 1$.

Proof of Lemma B.1. According to the property of the importance weight in Lemma C.1, we know

$$
\operatorname {V a r} \left(\omega_ {0: h} \left(\tau | \widetilde {\boldsymbol {\theta}} ^ {s}, \boldsymbol {\theta} _ {t} ^ {s + 1}\right)\right) = d _ {2} \left(p \left(\tau_ {h} | \widetilde {\boldsymbol {\theta}} ^ {s}\right) | | p \left(\tau_ {h} | \boldsymbol {\theta} _ {t} ^ {s + 1}\right)\right) - 1.
$$

To simplify the presentation, we denote $\theta_{1} = \widetilde{\theta}^{s}$ and $\theta_{2} = \theta_{t}^{s + 1}$ in the rest of this proof. By definition, we have

$$
d _ {2} \left(p \left(\tau_ {h} \mid \boldsymbol {\theta} _ {1}\right) \mid \mid p \left(\tau_ {h} \mid \boldsymbol {\theta} _ {2}\right)\right) = \int_ {\tau} p \left(\tau_ {h} \mid \boldsymbol {\theta} _ {1}\right) \frac {p \left(\tau_ {h} \mid \boldsymbol {\theta} _ {1}\right)}{p \left(\tau_ {h} \mid \boldsymbol {\theta} _ {2}\right)} d \tau = \int_ {\tau} p \left(\tau_ {h} \mid \boldsymbol {\theta} _ {1}\right) ^ {2} p \left(\tau_ {h} \mid \boldsymbol {\theta} _ {2}\right) ^ {- 1} d \tau .
+$$ + +Taking the gradient of $d_{2}(p(\tau_{h}|\pmb{\theta}_{1})||p(\tau_{h}|\pmb{\theta}_{2}))$ with respect to $\pmb{\theta}_{1}$ , we have + +$$ +\nabla_ {\boldsymbol {\theta} _ {1}} d _ {2} \left(p \left(\tau_ {h} \mid \boldsymbol {\theta} _ {1}\right) \mid \mid p \left(\tau_ {h} \mid \boldsymbol {\theta} _ {2}\right)\right) = 2 \int_ {\tau} p \left(\tau_ {h} \mid \boldsymbol {\theta} _ {1}\right) \nabla_ {\boldsymbol {\theta} _ {1}} p \left(\tau_ {h} \mid \boldsymbol {\theta} _ {1}\right) p \left(\tau_ {h} \mid \boldsymbol {\theta} _ {2}\right) ^ {- 1} \mathrm {d} \tau . +$$ + +In particular, if we set the value of $\theta_{1}$ to be $\theta_{1} = \theta_{2}$ in the above formula of the gradient, we get + +$$ +\left. \nabla_ {\boldsymbol {\theta} _ {1}} d _ {2} \left(p \left(\tau_ {h} \mid \boldsymbol {\theta} _ {1}\right) \right\| p \left(\tau_ {h} \mid \boldsymbol {\theta} _ {2}\right)\right) \big | _ {\boldsymbol {\theta} _ {1} = \boldsymbol {\theta} _ {2}} = 2 \int_ {\tau} \nabla_ {\boldsymbol {\theta} _ {1}} p \left(\tau_ {h} \mid \boldsymbol {\theta} _ {1}\right) \mathrm {d} \tau \big | _ {\boldsymbol {\theta} _ {1} = \boldsymbol {\theta} _ {2}} = 0. +$$ + +Applying mean value theorem with respect to the variable $\theta_{1}$ , we have + +$$ +d _ {2} \left(p \left(\tau_ {h} \mid \boldsymbol {\theta} _ {1}\right) \mid \mid p \left(\tau_ {h} \mid \boldsymbol {\theta} _ {2}\right)\right) = 1 + 1 / 2 \left(\boldsymbol {\theta} _ {1} - \boldsymbol {\theta} _ {2}\right) ^ {\top} \nabla_ {\boldsymbol {\theta}} ^ {2} d _ {2} \left(p \left(\tau_ {h} \mid \boldsymbol {\theta}\right) \mid \mid p \left(\tau_ {h} \mid \boldsymbol {\theta} _ {2}\right)\right) \left(\boldsymbol {\theta} _ {1} - \boldsymbol {\theta} _ {2}\right), \tag {C.3} +$$ + +where $\pmb{\theta} = t\pmb{\theta}_1 + (1 - t)\pmb{\theta}_2$ for some $t \in [0, 1]$ and we used the fact that $d_2(p(\tau_h|\pmb{\theta}_2)||p(\tau_h|\pmb{\theta}_2)) = 1$ . 
To bound the above exponentiated Rényi divergence, we need to compute the Hessian matrix. Taking the derivative of $\nabla_{\pmb{\theta}_1}d_2(p(\tau_h|\pmb{\theta}_1)||p(\tau_h|\pmb{\theta}_2))$ with respect to $\pmb{\theta}_1$ further yields

$$
\begin{array}{l} \nabla_ {\boldsymbol {\theta}} ^ {2} d _ {2} \left(p \left(\tau_ {h} | \boldsymbol {\theta}\right) | | p \left(\tau_ {h} | \boldsymbol {\theta} _ {2}\right)\right) = 2 \int_ {\tau} \nabla_ {\boldsymbol {\theta}} \log p \left(\tau_ {h} | \boldsymbol {\theta}\right) \nabla_ {\boldsymbol {\theta}} \log p \left(\tau_ {h} | \boldsymbol {\theta}\right) ^ {\top} \frac {p \left(\tau_ {h} | \boldsymbol {\theta}\right) ^ {2}}{p \left(\tau_ {h} | \boldsymbol {\theta} _ {2}\right)} d \tau \\ + 2 \int_ {\tau} \nabla_ {\boldsymbol {\theta}} ^ {2} p (\tau_ {h} | \boldsymbol {\theta}) p (\tau_ {h} | \boldsymbol {\theta}) p (\tau_ {h} | \boldsymbol {\theta} _ {2}) ^ {- 1} d \tau . \tag {C.4} \\ \end{array}
$$

Thus we need to compute the Hessian matrix of the trajectory distribution function, i.e., $\nabla_{\pmb{\theta}}^{2}p(\tau_{h}|\pmb{\theta})$ , which can be derived from the Hessian matrix of the log-density function:

$$
\nabla_ {\boldsymbol {\theta}} ^ {2} \log p (\tau_ {h} | \boldsymbol {\theta}) = - p (\tau_ {h} | \boldsymbol {\theta}) ^ {- 2} \nabla_ {\boldsymbol {\theta}} p (\tau_ {h} | \boldsymbol {\theta}) \nabla_ {\boldsymbol {\theta}} p (\tau_ {h} | \boldsymbol {\theta}) ^ {\top} + p (\tau_ {h} | \boldsymbol {\theta}) ^ {- 1} \nabla_ {\boldsymbol {\theta}} ^ {2} p (\tau_ {h} | \boldsymbol {\theta}).
\tag {C.5}
$$

Substituting (C.5) into (C.4) yields

$$
\begin{array}{l} \| \nabla_ {\boldsymbol {\theta}} ^ {2} d _ {2} (p (\tau_ {h} | \boldsymbol {\theta}) \| p (\tau_ {h} | \boldsymbol {\theta} _ {2})) \| _ {2} = \left\| 4 \int_ {\tau} \nabla_ {\boldsymbol {\theta}} \log p (\tau_ {h} | \boldsymbol {\theta}) \nabla_ {\boldsymbol {\theta}} \log p (\tau_ {h} | \boldsymbol {\theta}) ^ {\top} \frac {p (\tau_ {h} | \boldsymbol {\theta}) ^ {2}}{p (\tau_ {h} | \boldsymbol {\theta} _ {2})} d \tau \right. \\ + 2 \int_ {\tau} \nabla_ {\boldsymbol {\theta}} ^ {2} \log p (\tau_ {h} | \boldsymbol {\theta}) \frac {p (\tau_ {h} | \boldsymbol {\theta}) ^ {2}}{p (\tau_ {h} | \boldsymbol {\theta} _ {2})} d \tau \Bigg \| _ {2} \\ \leq \int_ {\tau} \frac {p \left(\tau_ {h} | \boldsymbol {\theta}\right) ^ {2}}{p \left(\tau_ {h} | \boldsymbol {\theta} _ {2}\right)} \left(4 \| \nabla_ {\boldsymbol {\theta}} \log p (\tau_ {h} | \boldsymbol {\theta}) \| _ {2} ^ {2} + 2 \| \nabla_ {\boldsymbol {\theta}} ^ {2} \log p (\tau_ {h} | \boldsymbol {\theta}) \| _ {2}\right) d \tau \\ \leq \left(4 h ^ {2} G ^ {2} + 2 h M\right) \mathbb {E} \left[ \omega (\tau | \boldsymbol {\theta}, \boldsymbol {\theta} _ {2}) ^ {2} \right] \\ \leq 2 h \left(2 h G ^ {2} + M\right) (W + 1), \\ \end{array}
$$

where the second inequality comes from Assumption 4.1 and the last inequality is due to Assumption 4.4 and Lemma C.1.
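As a quick sanity check of identity (C.5) in one dimension (illustrative code, not part of the proof), the right-hand side $-p^{-2}(\nabla p)(\nabla p)^{\top} + p^{-1}\nabla^{2}p$ can be assembled from finite-difference derivatives of a Gaussian density in the parameter $\theta$, for which the exact Hessian of $\log p$ is the constant $-1$:

```python
import math

def p(theta, x=0.7):
    # density of one observation x under a 1-D Gaussian model with mean theta (sigma = 1)
    return math.exp(-((x - theta) ** 2) / 2) / math.sqrt(2 * math.pi)

theta0, eps = 0.2, 1e-4
dp = (p(theta0 + eps) - p(theta0 - eps)) / (2 * eps)                  # finite-difference grad of p
d2p = (p(theta0 + eps) - 2 * p(theta0) + p(theta0 - eps)) / eps ** 2  # finite-difference Hessian of p
# Right-hand side of (C.5): -p^{-2} (grad p)^2 + p^{-1} Hessian(p)
hess_logp = -(dp / p(theta0)) ** 2 + d2p / p(theta0)
print(hess_logp)  # ≈ -1, the exact value of (log p)'' for this example
```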
Combining the above result with (C.3), we have

$$
\operatorname {V a r} \left(\omega_ {0: h} \left(\tau | \widetilde {\boldsymbol {\theta}} ^ {s}, \boldsymbol {\theta} _ {t} ^ {s + 1}\right)\right) = d _ {2} \left(p \left(\tau_ {h} | \widetilde {\boldsymbol {\theta}} ^ {s}\right) \| p \left(\tau_ {h} | \boldsymbol {\theta} _ {t} ^ {s + 1}\right)\right) - 1 \leq C _ {\omega} \| \widetilde {\boldsymbol {\theta}} ^ {s} - \boldsymbol {\theta} _ {t} ^ {s + 1} \| _ {2} ^ {2},
$$

where $C_{\omega} = h(2hG^{2} + M)(W + 1)$ .

# D PROOF OF THEORETICAL RESULTS FOR GAUSSIAN POLICY

In this section, we prove the sample complexity for the Gaussian policy. According to (4.1), we can calculate the gradient and Hessian matrix of the logarithm of the policy:

$$
\nabla \log \pi_ {\boldsymbol {\theta}} (a | s) = \frac {(a - \boldsymbol {\theta} ^ {\top} \phi (s)) \phi (s)}{\sigma^ {2}}, \quad \nabla^ {2} \log \pi_ {\boldsymbol {\theta}} (a | s) = - \frac {\phi (s) \phi (s) ^ {\top}}{\sigma^ {2}}. \tag {D.1}
$$

It is easy to see that Assumption 4.1 holds with $G = C_{a}M_{\phi} / \sigma^{2}$ and $M = M_{\phi}^{2} / \sigma^{2}$ . Based on this observation, Proposition 4.2 also holds for the Gaussian policy with parameters defined as follows:

$$
L = \frac {R M _ {\phi} ^ {2}}{\sigma^ {2} (1 - \gamma) ^ {3}}, \quad \text{and} \quad C _ {g} = \frac {R C _ {a} M _ {\phi}}{\sigma^ {2} (1 - \gamma) ^ {2}}. \tag {D.2}
$$

The following lemma gives the variance $\xi^2$ of the PGT estimator, which verifies Assumption 4.3.

Lemma D.1 (Lemma 5.5 in Pirotta et al. (2013)).
Given a Gaussian policy $\pi_{\pmb{\theta}}(a|s) \sim N(\pmb{\theta}^{\top}\phi(s), \sigma^{2})$ , if $|r(s,a)| \leq R$ and $\| \phi(s)\|_2 \leq M_{\phi}$ for all $s \in \mathcal{S}, a \in \mathcal{A}$ , where $R, M_{\phi} > 0$ are constants, then the variance of the PGT estimator defined in (2.5) can be bounded as follows:

$$
\mathbf {V a r} (g (\tau | \pmb {\theta})) \leq \xi^ {2} = \frac {R ^ {2} M _ {\phi} ^ {2}}{(1 - \gamma) ^ {2} \sigma^ {2}} \bigg (\frac {1 - \gamma^ {2 H}}{1 - \gamma^ {2}} - H \gamma^ {2 H} - 2 \gamma^ {H} \frac {1 - \gamma^ {H}}{1 - \gamma} \bigg).
$$

Proof of Corollary 4.8. The proof is similar to that of Corollary 4.7. By Theorem 4.5, to ensure that $\mathbb{E}[\|\nabla J(\pmb{\theta}_{\mathrm{out}})\|_2^2] \leq \epsilon$ , we can set

$$
\frac {8 (J (\boldsymbol {\theta} ^ {*}) - J (\boldsymbol {\theta} _ {0}))}{\eta S m} = \frac {\epsilon}{2}, \quad \frac {6 \xi^ {2}}{N} = \frac {\epsilon}{2}.
$$

Plugging the value of $\xi^2$ from Lemma D.1 into the second equation above yields $N = O(\epsilon^{-1}(1 - \gamma)^{-3})$ . For the first equation, we have $S = O(1 / (\eta m\epsilon))$ . Therefore, the total number of stochastic gradient evaluations $\mathcal{T}_g$ required by Algorithm 1 is

$$
\mathcal {T} _ {g} = S N + S m B = O \left(\frac {N}{\eta m \epsilon} + \frac {B}{\eta \epsilon}\right).
$$

A good choice of batch size $B$ and epoch length $m$ should therefore satisfy $Bm = N$ . Combining this with the requirement on $B$ in Theorem 4.5, we can set

$$
m = \sqrt {\frac {L N}{\eta C _ {\gamma}}}, \quad \text{and} \quad B = \sqrt {\frac {N \eta C _ {\gamma}}{L}}.
$$

Note that $C_{\gamma} = 24RG^{2}(2G^{2} + M)(W + 1)\gamma /(1 - \gamma)^{5}$ . Plugging the values of $G, N$ and $L$ into the above equations yields

$$
m = O \bigg (\frac {1}{(1 - \gamma) ^ {2} \sqrt {\epsilon}} \bigg), \quad B = O \bigg (\frac {1}{(1 - \gamma) \sqrt {\epsilon}} \bigg).
+
$$

The corresponding sample complexity is

$$
\mathcal {T} _ {g} = O \bigg (\frac {1}{(1 - \gamma) ^ {4} \epsilon^ {3 / 2}} \bigg).
$$

This completes the proof for the Gaussian policy.

# E ADDITIONAL DETAILS ON EXPERIMENTS

Now, we provide more details of our experiments presented in Section 5. We first present the parameters for all algorithms used in all our experiments in Tables 2 and 3. Among the parameters, the neural network structure and the RL environment parameters are shared across all the algorithms. As mentioned in Section 5, the orders of the batch size parameters of our algorithm are chosen according to Corollary 4.7, and we multiply them by a tuning constant chosen via grid search. Similarly, the orders of the batch size parameters of SVRPG and GPOMDP are chosen based on the theoretical results suggested by Papini et al. (2018); Xu et al. (2019). Moreover, the learning rates for the different methods are tuned by grid search.

We then present the results of PGPE and SRVR-PG-PE on Cartpole, Mountain Car and Pendulum in Figure 2. In all three environments, our SRVR-PG-PE algorithm shows improvement over PGPE (Sehnke et al., 2010) in terms of the number of trajectories. It is worth noting that in all these environments both PGPE and SRVR-PG-PE seem to solve the problem very quickly, which is consistent with the results reported in (Zhao et al., 2011; 2013; Metelli et al., 2018). Our primary goal in this experiment is to show that our proposed variance reduced policy gradient algorithm can be easily extended to the PGPE framework. To keep the focus on the sample complexity of the variance reduction algorithm, we do not thoroughly compare the performance of parameter-based policy gradient methods such as PGPE and SRVR-PG-PE with action-based policy gradient methods.
We refer interested readers to the valuable empirical studies of PGPE based algorithms presented in Zhao et al. (2011; 2013); Metelli et al. (2018). + +![](images/97d277e078752f482656c27a7c3a1f77b2a945cd8f8158802132b0e5a15cbb6a.jpg) +(a) Cartpole +Figure 2: Performance of SRVR-PG-PE compared with PGPE. Experiment results are averaged over 10 runs. + +![](images/02369b0c15ad28772b473b775a75f37db4f1afb96c8e0d6e4941f512f1991a10.jpg) +(b) Mountain Car + +![](images/9715464c25c682075a1bc25664e3008a8316fe3ce07d89d0118605af3ec03770.jpg) +(c) Pendulum + +Table 2: Parameters used in the SRVR-PG experiments. + +
| Parameters | Algorithm | Cartpole | Mountain Car | Pendulum |
| --- | --- | --- | --- | --- |
| NN size | - | 64 | 64 | 8×8 |
| NN activation function | - | Tanh | Tanh | Tanh |
| Task horizon | - | 100 | 1000 | 200 |
| Total trajectories | - | 2500 | 3000 | $2 \times 10^5$ |
| Discount factor γ | GPOMDP | 0.99 | 0.999 | 0.99 |
|  | SVRPG | 0.999 | 0.999 | 0.995 |
|  | SRVR-PG | 0.995 | 0.999 | 0.995 |
| Learning rate η | GPOMDP | 0.005 | 0.005 | 0.01 |
|  | SVRPG | 0.0075 | 0.0025 | 0.01 |
|  | SRVR-PG | 0.005 | 0.0025 | 0.01 |
| Batch size N | GPOMDP | 10 | 10 | 250 |
|  | SVRPG | 25 | 10 | 250 |
|  | SRVR-PG | 25 | 10 | 250 |
| Batch size B | GPOMDP | - | - | - |
|  | SVRPG | 10 | 5 | 50 |
|  | SRVR-PG | 5 | 3 | 50 |
| Epoch size m | GPOMDP | - | - | - |
|  | SVRPG | 3 | 2 | 1 |
|  | SRVR-PG | 3 | 2 | 1 |
+ +Table 3: Parameters used in the SRVR-PG-PE experiments. + +
| Parameters | Cartpole | Mountain Car | Pendulum |
| --- | --- | --- | --- |
| NN size | - | 64 | 8×8 |
| NN activation function | Tanh | Tanh | Tanh |
| Task horizon | 100 | 1000 | 200 |
| Total trajectories | 2000 | 500 | 1750 |
| Discount factor γ | 0.99 | 0.999 | 0.99 |
| Learning rate η | 0.01 | 0.0075 | 0.01 |
| Batch size N | 10 | 5 | 50 |
| Batch size B | 5 | 3 | 10 |
| Epoch size m | 2 | 1 | 2 |
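The batch-size orders behind the tables above come from the corollaries; as an illustrative sketch with all constants set to 1 (the actual experiments multiply these orders by tuned constants), the orders derived in the proof of Corollary 4.8, $N = \epsilon^{-1}(1-\gamma)^{-3}$, $m = (1-\gamma)^{-2}\epsilon^{-1/2}$, and $B = (1-\gamma)^{-1}\epsilon^{-1/2}$, indeed satisfy the requirement $Bm = N$:

```python
# Order-of-magnitude sketch (constants set to 1; not the tuned experimental values).
def batch_orders(eps, gamma):
    N = 1.0 / (eps * (1.0 - gamma) ** 3)          # outer batch size
    m = 1.0 / ((1.0 - gamma) ** 2 * eps ** 0.5)   # epoch length
    B = 1.0 / ((1.0 - gamma) * eps ** 0.5)        # inner mini-batch size
    return N, m, B

N, m, B = batch_orders(eps=0.01, gamma=0.99)
print(N, m, B)  # m * B matches N at the order level, i.e. B*m = N holds
```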
\ No newline at end of file diff --git a/sampleefficientpolicygradientmethodswithrecursivevariancereduction/images.zip b/sampleefficientpolicygradientmethodswithrecursivevariancereduction/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..6403c5412e9251e595e02ad1b646f1f7c8afd9ae --- /dev/null +++ b/sampleefficientpolicygradientmethodswithrecursivevariancereduction/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:690d535c93d7513da377f6ad00a119e889c99ebf99b7ed8f0bcb5f68231ea172 +size 1152104 diff --git a/sampleefficientpolicygradientmethodswithrecursivevariancereduction/layout.json b/sampleefficientpolicygradientmethodswithrecursivevariancereduction/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..99ef9ea6f27512c77f7ce91212c075c6980bd06d --- /dev/null +++ b/sampleefficientpolicygradientmethodswithrecursivevariancereduction/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4e8078eb23d4fdac72985a6812af401c3ce0bfc9cf5e76944b9a124b5c5907fa +size 1062754 diff --git a/samplingfreelearningofbayesianquantizedneuralnetworks/ed38c360-d8aa-4aaf-92af-3b80d11e6c9d_content_list.json b/samplingfreelearningofbayesianquantizedneuralnetworks/ed38c360-d8aa-4aaf-92af-3b80d11e6c9d_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..fa9bb6ab7b493bb5c28bf5f26ef89f4159e20fe3 --- /dev/null +++ b/samplingfreelearningofbayesianquantizedneuralnetworks/ed38c360-d8aa-4aaf-92af-3b80d11e6c9d_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e5340f177ca041f41b617e8be9b1a1209cc0328124219e18897cabd30ea31e62 +size 188733 diff --git a/samplingfreelearningofbayesianquantizedneuralnetworks/ed38c360-d8aa-4aaf-92af-3b80d11e6c9d_model.json b/samplingfreelearningofbayesianquantizedneuralnetworks/ed38c360-d8aa-4aaf-92af-3b80d11e6c9d_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..b88cc840cdca6aff5f440c572d1552b4a3d4bcde --- /dev/null +++ b/samplingfreelearningofbayesianquantizedneuralnetworks/ed38c360-d8aa-4aaf-92af-3b80d11e6c9d_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b18809fc422099c218552123341090ed6288b07cdd7b07d9ab71d66c0f12850c +size 219520 diff --git a/samplingfreelearningofbayesianquantizedneuralnetworks/ed38c360-d8aa-4aaf-92af-3b80d11e6c9d_origin.pdf b/samplingfreelearningofbayesianquantizedneuralnetworks/ed38c360-d8aa-4aaf-92af-3b80d11e6c9d_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..062103d42118bc652a31e50f9a9bcac3640e5135 --- /dev/null +++ b/samplingfreelearningofbayesianquantizedneuralnetworks/ed38c360-d8aa-4aaf-92af-3b80d11e6c9d_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:caaf71d42e60792e0a1d11c109be9a255178b40b1294cb5cf814eb07786b2b70 +size 665462 diff --git a/samplingfreelearningofbayesianquantizedneuralnetworks/full.md b/samplingfreelearningofbayesianquantizedneuralnetworks/full.md new file mode 100644 index 0000000000000000000000000000000000000000..f18958e20efb41399841ebdcb355c280da4e19ec --- /dev/null +++ b/samplingfreelearningofbayesianquantizedneuralnetworks/full.md @@ -0,0 +1,930 @@ +# SAMPLING-FREE LEARNING OF BAYESIAN QUANTIZED NEURAL NETWORKS + +Jiahao Su + +Department of Electrical and Computer Engineering + +University of Maryland + +College Park, MD 20740 + +jiahaosu@umd.edu + +Milan Cvitkovic + +Amazon Web Services + +Seattle, WA, USA + +cvitkom@amazon.com + +Furong Huang + +Department of Computer Science + +University of Maryland + +College Park, MD 20740 + +furongh@cs.umd.edu + +# ABSTRACT + +Bayesian learning of model parameters in neural networks is important in scenarios where estimates with well-calibrated uncertainty are important. 
In this paper, we propose Bayesian quantized networks (BQNs), quantized neural networks (QNNs) for which we learn a posterior distribution over their discrete parameters. We provide a set of efficient algorithms for learning and prediction in BQNs without the need to sample from their parameters or activations, which not only allows for differentiable learning in QNNs, but also reduces the variance in gradients. We evaluate BQNs on the MNIST, Fashion-MNIST, KMNIST and CIFAR10 image classification datasets, compared against a bootstrap ensemble of QNNs (E-QNN). We demonstrate that BQNs achieve both lower predictive errors and better-calibrated uncertainties than E-QNN (with less than $20\%$ of the negative log-likelihood).

# 1 INTRODUCTION

A Bayesian approach to deep learning considers the network's parameters to be random variables and seeks to infer their posterior distribution given the training data. Models trained this way, called Bayesian neural networks (BNNs) (Wang & Yeung, 2016), in principle have well-calibrated uncertainties when they make predictions, which is important in scenarios such as active learning and reinforcement learning (Gal, 2016). Furthermore, the posterior distribution over the model parameters provides valuable information for evaluation and compression of neural networks.

There are three main challenges in using BNNs: (1) Intractable posterior: Computing and storing the exact posterior distribution over the network weights is intractable due to the complexity and high-dimensionality of deep networks. (2) Prediction: Performing a forward pass (a.k.a. probabilistic propagation) in a BNN to compute a prediction for an input cannot be performed exactly, since the distribution of hidden activations at each layer is intractable to compute.
(3) Learning: The classic evidence lower bound (ELBO) learning objective for training BNNs is not amenable to backpropagation as the ELBO is not an explicit function of the output of probabilistic propagation.

These challenges are typically addressed either by making simplifying assumptions about the distributions of the parameters and activations, or by using sampling-based approaches, which are expensive and unreliable (likely to overestimate the uncertainties in predictions). Our goal is to propose a sampling-free method which uses probabilistic propagation to deterministically learn BNNs.

A seemingly unrelated area of deep learning research is that of quantized neural networks (QNNs), which offer advantages of computational and memory efficiency compared to continuous-valued models. QNNs, like BNNs, face challenges in training, though for different reasons: (4.1) The non-differentiable activation function is not amenable to backpropagation. (4.2) Gradient updates cease to be meaningful, since the model parameters in QNNs are coarsely quantized.

In this work, we combine the ideas of BNNs and QNNs in a novel way that addresses the aforementioned challenges (1)(2)(3)(4) in training both models. We propose Bayesian quantized networks (BQNs), models that (like QNNs) have quantized parameters and activations over which they learn (like BNNs) categorical posterior distributions. BQNs have several appealing properties:

- BQNs solve challenge (1) due to their use of categorical distributions for their model parameters.
- BQNs can be trained via sampling-free backpropagation and stochastic gradient ascent of a differentiable lower bound to the ELBO, which addresses challenges (2), (3) and (4) above.
- BQNs leverage efficient tensor operations for probabilistic propagation, further addressing challenge (2).
We show the equivalence between probabilistic propagation in BQNs and tensor contractions (Kolda & Bader, 2009), and introduce a rank-1 CP tensor decomposition (mean-field approximation) that speeds up the forward pass in BQNs.
- BQNs provide a tunable trade-off between computational resources and model complexity: using a refined quantization allows for more complex distributions at the cost of more computation.
- Sampling from a learned BQN provides an alternative way to obtain deterministic QNNs.

In our experiments, we demonstrate the expressive power of BQNs. We show that BQNs trained using our sampling-free method have much better-calibrated uncertainty compared with the state-of-the-art bootstrap ensemble of quantized neural networks (E-QNN) trained by Courbariaux et al. (2016). More impressively, our trained BQNs achieve comparable log-likelihood to a Gaussian Bayesian neural network (BNN) trained with stochastic gradient variational Bayes (SGVB) (Shridhar et al., 2019) (the performance of Gaussian BNNs is expected to be better than that of BQNs since they allow for continuous random variables). We further verify that BQNs can be easily used to compress (Bayesian) neural networks and to obtain deterministic QNNs. Finally, we evaluate the effect of the mean-field approximation in BQNs by comparing with its Monte-Carlo realizations, where no approximation is used. We show that our sampling-free probabilistic propagation achieves similar accuracy and log-likelihood, justifying the use of the mean-field approximation in BQNs.

Related Works.
In Appendix A, we survey different approaches for training Bayesian neural networks, including sampling-free assumed density filtering (Minka, 2001; Soudry et al., 2014; Hernández-Lobato & Adams, 2015; Ghosh et al., 2016), sampling-based variational inference (Graves, 2011; Blundell et al., 2015; Shridhar et al., 2019), as well as sampling-free variational inference (Wu et al., 2018), probabilistic neural networks (Wang et al., 2016; Shekhotsov & Flach, 2018; Gast & Roth, 2018), quantized neural networks (Han et al., 2015; Courbariaux et al., 2015; Zhu et al., 2016; Kim & Smaragdis, 2016; Zhou et al., 2016; Rastegari et al., 2016; Hubara et al., 2017; Esser et al., 2015; Peters & Welling, 2018; Shayer et al., 2017), and tensor networks and tensorial neural networks (Grasedyck et al., 2013; Orús, 2014; Cichocki et al., 2016; Su et al., 2018; Newman et al., 2018; Robeva & Seigal, 2017).

# Contributions:

- We propose an alternative evidence lower bound (ELBO) for Bayesian neural networks such that optimization of the variational objective is compatible with the backpropagation algorithm.
- We introduce Bayesian quantized networks (BQNs), establish a duality between BQNs and hierarchical tensor networks, and show that prediction in a BQN is equivalent to a series of tensor contractions.
- We derive a sampling-free approach for both learning and inference in BQNs using probabilistic propagation (analytical inference), achieving better-calibrated uncertainty for the learned models.
- We develop a set of fast algorithms to enable efficient learning and prediction for BQNs.

# 2 BAYESIAN NEURAL NETWORKS

Notation. We use bold letters such as $\pmb{\theta}$ to denote random variables, and non-bold letters such as $\theta$ to denote their realizations. We abbreviate $\mathbf{Pr}[\pmb{\theta} = \theta]$ as $\mathbf{Pr}[\theta]$ and use bold letters in an equation if the equality holds for arbitrary realizations.
For example, $\mathbf{Pr}[x,y] = \mathbf{Pr}[y|x]\mathbf{Pr}[x]$ means $\mathbf{Pr}[x = x,y = y] = \mathbf{Pr}[y = y|x = x]\mathbf{Pr}[x = x],\forall x\in \mathcal{X},y\in \mathcal{Y}$ .

# 2.1 PROBLEM SETTING

Given a dataset $\mathcal{D} = \{(x_n, y_n)\}_{n=1}^N$ of $N$ data points, we aim to learn a neural network with model parameters $\theta$ that predicts the output $y \in \mathcal{Y}$ based on the input $x \in \mathcal{X}$ . (1) We first solve the learning problem to find an approximate posterior distribution $Q(\theta; \phi)$ over $\theta$ with parameters $\phi$ such that $Q(\theta; \phi) \approx \operatorname{Pr}[\theta | \mathcal{D}]$ . (2) We then solve the prediction problem to compute the predictive distribution $\operatorname{Pr}[y|x, \mathcal{D}]$ for an arbitrary input $x = x$ given $Q(\theta; \phi)$ . For notational simplicity, we will omit the conditioning on $\mathcal{D}$ and write $\operatorname{Pr}[y|x, \mathcal{D}]$ as $\operatorname{Pr}[y|x]$ in what follows.

In order to address the prediction and learning problems in BNNs, we analyze these models in their general form as probabilistic graphical models (shown in Figure 3b in Appendix B). Let $\pmb{h}^{(l)}$ , $\pmb{\theta}^{(l)}$ and $\pmb{h}^{(l+1)}$ denote the inputs, model parameters, and (hidden) outputs of the $l$ -th layer respectively. We assume that the $\pmb{\theta}^{(l)}$ 's are layer-wise independent, i.e., $Q(\pmb{\theta};\phi) = \prod_{l=0}^{L-1} Q(\pmb{\theta}^{(l)};\phi^{(l)})$ , and $\pmb{h}^{(l)}$ follows the Markovian property, i.e., $\mathbf{Pr}[h^{(l+1)}|\pmb{h}^{(:l)},\pmb{\theta}^{(:l)}] = \mathbf{Pr}[h^{(l+1)}|\pmb{h}^{(l)},\pmb{\theta}^{(l)}]$ .
In Appendix B, we show that the predictive distribution of $h^{(l + 1)}$ given input $\pmb{x} = \pmb{x}$ can be obtained from its preceding layer $h^{(l)}$ by + +$$ +\underbrace {\mathbf {P r} [ \boldsymbol {h} ^ {(l + 1)} | x ]} _ {P (\boldsymbol {h} ^ {(l + 1)}; \psi^ {(l + 1)})} = \int_ {h ^ {(l)}, \theta^ {(l)}} \mathbf {P r} [ \boldsymbol {h} ^ {(l + 1)} | h ^ {(l)}, \theta^ {(l)} ] Q \left(\theta^ {(l)}; \phi^ {(l)}\right) \underbrace {\mathbf {P r} [ h ^ {(l)} | x ]} _ {P (h ^ {(l)}; \psi^ {(l)})} d h ^ {(l)} d \theta^ {(l)} \tag {1} +$$ + +This iterative process to compute the predictive distributions layer-by-layer sequentially is known as probabilistic propagation (Soudry et al., 2014; Hernández-Lobato & Adams, 2015; Ghosh et al., 2016). With this approach, we need to explicitly compute and store each intermediate result $\mathbf{Pr}[h^{(l)}|x]$ in its parameterized form $P(h^{(l)};\psi^{(l)})$ (the conditioning on $x$ is hidden in $\psi^{(l)}$ , i.e. $\psi^{(l)}$ is a function of $x$ ). Therefore, probabilistic propagation is a deterministic process that computes $\psi^{(l+1)}$ as a function of $\psi^{(l)}$ and $\phi^{(l)}$ , which we denote as $\psi^{(l+1)} = g^{(l)}(\psi^{(l)},\phi^{(l)})$ . + +Challenge in Sampling-Free Probabilistic Propagation. If the hidden variables $\pmb{h}^{(l)}$ 's are continuous, Equation (1) generally can not be evaluated in closed form as it is difficult to find a family of parameterized distributions $P$ for $\pmb{h}^{(l)}$ such that $\pmb{h}^{(l+1)}$ remains in $P$ under the operations of a neural network layer. Therefore most existing methods consider approximations at each layer of probabilistic propagation. In Section 4, we will show that this issue can be (partly) addressed if we consider the $\pmb{h}^{(l)}$ 's to be discrete random variables, as in a BQN. + +# 2.3 THE LEARNING PROBLEM + +Objective Function. 
A standard approach to finding a good approximation $Q(\pmb{\theta}; \phi)$ is variational inference, which finds $\phi^*$ such that the KL-divergence $\mathbf{KL}(Q(\pmb{\theta}; \phi) || \mathbf{Pr}[\pmb{\theta} | \mathcal{D}])$ from $Q(\pmb{\theta}; \phi)$ to $\mathbf{Pr}[\pmb{\theta} | \mathcal{D}]$ is minimized. In Appendix B, we prove that minimizing the KL-divergence is equivalent to maximizing an objective function known as the evidence lower bound (ELBO), denoted as $\mathcal{L}(\phi)$ :

$$
\max _ {\phi} \mathcal {L} (\phi) = - \mathbf {K L} (Q (\boldsymbol {\theta}; \phi) | | \mathbf {P r} [ \boldsymbol {\theta} | \mathcal {D} ]) = \sum_ {n = 1} ^ {N} \mathcal {L} _ {n} (\phi) + \mathcal {R} (\phi) \tag {2}
$$

where $\mathcal{L}_n(\phi) = \mathbb{E}_Q[\log \mathbf{Pr}[y_n|x_n,\pmb {\theta}]]$ and $\mathcal{R}(\phi) = \mathbb{E}_Q[\log (\mathbf{Pr}[\pmb {\theta}])] + H(Q)$ .

Probabilistic Backpropagation. Optimization in neural networks heavily relies on gradient-based methods, where the partial derivatives $\partial \mathcal{L}(\phi) / \partial \phi$ of the objective $\mathcal{L}(\phi)$ w.r.t. the parameters $\phi$ are obtained by backpropagation. Formally, if the output produced by a neural network is given by a (sub-)differentiable function $g(\phi)$ , and the objective $\mathcal{L}(g(\phi))$ is an explicit function of $g(\phi)$ (and not just an explicit function of $\phi$ ), then the partial derivatives can be computed by the chain rule:

$$
\partial \mathcal {L} (g (\phi)) / \partial \phi = \partial \mathcal {L} (g (\phi)) / \partial g (\phi) \cdot \partial g (\phi) / \partial \phi \tag {3}
$$

The learning problem can then be (approximately) solved by first-order methods, typically stochastic gradient descent/ascent.
Notice that (1) for classification, the function $g(\phi)$ returns the probabilities after the softmax function, not the categorical label; (2) an additional regularizer $\mathcal{R}(\phi)$ on the parameters will not cause difficulty in backpropagation, given that $\partial \mathcal{R}(\phi) / \partial \phi$ is easily computed.

Challenge in Sampling-Free Probabilistic Backpropagation. Learning BNNs is not amenable to standard backpropagation because the ELBO objective function $\mathcal{L}(\phi)$ in (4b) is only an implicit function of the predictive distribution $g(\phi)$ in (4a):

$$
g _ {n} (\phi) = \mathbb {E} _ {Q} \left[ \Pr \left[ y _ {n} \mid x _ {n}, \boldsymbol {\theta} \right] \right] = \int_ {\theta} \Pr \left[ y _ {n} \mid x _ {n}, \theta \right] Q (\theta ; \phi) d \theta \tag {4a}
$$

$$
\mathcal {L} _ {n} (\phi) = \mathbb {E} _ {Q} \left[ \log \left(\mathbf {P r} \left[ y _ {n} \mid x _ {n}, \boldsymbol {\theta} \right]\right) \right] = \int_ {\theta} \log \left(\mathbf {P r} \left[ y _ {n} \mid x _ {n}, \theta \right]\right) Q (\theta ; \phi) d \theta \tag {4b}
$$

Although $\mathcal{L}_n(\phi)$ is a function of $\phi$ , it is not an explicit function of $g_{n}(\phi)$ . Consequently, the chain rule in Equation (3), on which backpropagation is based, is not directly applicable.

# 3 PROPOSED LEARNING METHOD FOR BAYESIAN NEURAL NETWORKS

Alternative Evidence Lower Bound. We make learning in BNNs amenable to backpropagation by developing a lower bound $\overline{\mathcal{L}}_n(\phi) \leq \mathcal{L}_n(\phi)$ such that $\partial \overline{\mathcal{L}}_n(\phi) / \partial \phi$ can be obtained by the chain rule (i.e. $\overline{\mathcal{L}}_n(\phi)$ is an explicit function of the results of the forward pass).
With $\overline{\mathcal{L}}_n(\phi)$ in hand, we can (approximately) find $\phi^{\star}$ by maximizing the alternative objective via a gradient-based method:

$$
\phi^ {\star} = \arg \max _ {\phi} \bar {\mathcal {L}} (\phi) = \arg \max _ {\phi} \left(\mathcal {R} (\phi) + \sum_ {n = 1} ^ {N} \bar {\mathcal {L}} _ {n} (\phi)\right) \tag {5}
$$

In Appendix C.1, we prove one feasible choice of $\overline{\mathcal{L}}_n(\phi)$ , which depends only on the second-to-last output $h^{(L - 1)}$ .

Theorem 3.1 (Alternative Evidence Lower Bound). Define each term $\overline{\mathcal{L}}_n(\phi)$ in $\overline{\mathcal{L}} (\phi)$ as

$$
\overline {{\mathcal {L}}} _ {n} (\phi) := \mathbb {E} _ {\boldsymbol {h} ^ {(L - 1)} \sim P; \boldsymbol {\theta} ^ {(L - 1)} \sim Q} \left[ \log \left(\mathbf {P r} \left[ y _ {n} \mid \boldsymbol {h} ^ {(L - 1)}, \boldsymbol {\theta} ^ {(L - 1)} \right]\right) \right] \tag {6}
$$

Then $\overline{\mathcal{L}}_n(\phi)$ is a lower bound of $\mathcal{L}_n(\phi)$ , i.e. $\overline{\mathcal{L}}_n(\phi) \leq \mathcal{L}_n(\phi)$ . The equality $\overline{\mathcal{L}}_n(\phi) = \mathcal{L}_n(\phi)$ holds if $\pmb{h}^{(L-1)}$ is deterministic given the input $x$ and all parameters before the last layer, $\theta^{(:L-2)}$ .

Analytic Forms of $\overline{\mathcal{L}}_n(\phi)$ . While the lower bound in Theorem 3.1 applies to BNNs with arbitrary distributions $P$ on the hidden variables $h$ , $Q$ on the model parameters $\theta$ , and any problem setting (e.g. classification or regression), in practice sampling-free probabilistic backpropagation requires that $\overline{\mathcal{L}}_n(\phi)$ can be analytically evaluated (or further lower bounded) in terms of $\phi^{(L-1)}$ and $\theta^{(L-1)}$ . This task is nontrivial since it requires a redesign of the output layer, i.e. the function $\mathbf{Pr}[y|\pmb{h}^{(L-1)},\pmb{\theta}^{(L-1)}]$ .
In this paper, we develop two layers for classification and regression tasks, and present the classification case in this section due to space limitations. Since $\overline{\mathcal{L}}_n(\phi)$ involves the last layer only, we omit the superscripts/subscripts of $\pmb{h}^{(L-1)}$ , $\psi^{(L-1)}$ , $\phi^{(L-1)}$ , $x_n$ , $y_n$ , and denote them as $\pmb{h}$ , $\psi$ , $\phi$ , $x$ , $y$ .

Theorem 3.2 (Analytic Form of $\overline{\mathcal{L}}_n(\phi)$ for Classification). Let $\pmb{h} \in \mathbb{R}^K$ (with $K$ the number of classes) be the pre-activations of a softmax layer (a.k.a. logits), and $\phi = s \in \mathbb{R}^+$ be a scaling factor that adjusts its scale such that $\mathbf{Pr}[\pmb{y} = c|\pmb{h}, s] = \exp(\pmb{h}_c / s) / \sum_{k=1}^{K} \exp(\pmb{h}_k / s)$ . Suppose the logits $\{\pmb{h}_k\}_{k=1}^K$ are pairwise independent (which holds under the mean-field approximation), each $\pmb{h}_k$ follows a Gaussian distribution $\pmb{h}_k \sim \mathcal{N}(\mu_k, \nu_k)$ (therefore $\psi = \{\mu_k, \nu_k\}_{k=1}^K$ ), and $s$ is a deterministic parameter. Then $\overline{\mathcal{L}}_n(\phi)$ is further lower bounded as $\overline{\mathcal{L}}_n(\phi) \geq \frac{\mu_c}{s} - \log\left(\sum_{k=1}^{K} \exp\left(\frac{\mu_k}{s} + \frac{\nu_k}{2s^2}\right)\right)$ .

The regression case and the proofs for both layers are deferred to Appendix C.

# 4 BAYESIAN QUANTIZED NETWORKS (BQNS)

While Section 3 provides a general solution to learning in BNNs, the solution relies on the ability to perform probabilistic propagation efficiently. To address this, we introduce Bayesian quantized networks (BQNs) — BNNs where both the hidden units $\pmb{h}^{(l)}$ 's and the model parameters $\pmb{\theta}^{(l)}$ 's take discrete values — along with a set of novel algorithms for efficient sampling-free probabilistic propagation in BQNs.
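The classification bound of Theorem 3.2 above is easy to check numerically. The sketch below (illustrative only, with hypothetical values for $\mu_k$, $\nu_k$ and $s$) compares a Monte-Carlo estimate of $\mathbb{E}[\log \mathrm{softmax}_c(\pmb{h}/s)]$ for independent Gaussian logits against the closed-form lower bound $\mu_c/s - \log \sum_k \exp(\mu_k/s + \nu_k/(2s^2))$:

```python
import math
import random

random.seed(1)

mu = [1.0, -0.5, 0.2]      # hypothetical logit means mu_k
nu = [0.3, 0.5, 0.2]       # hypothetical logit variances nu_k
s, c, n = 1.0, 0, 100_000  # scale s, target class c, Monte-Carlo sample count

# Monte-Carlo estimate of E[log softmax_c(h / s)] with independent Gaussian logits
mc = 0.0
for _ in range(n):
    h = [random.gauss(m, math.sqrt(v)) for m, v in zip(mu, nu)]
    mc += h[c] / s - math.log(sum(math.exp(hk / s) for hk in h))
mc /= n

# Closed-form lower bound from Theorem 3.2
bound = mu[c] / s - math.log(sum(math.exp(mu[k] / s + nu[k] / (2 * s * s)) for k in range(len(mu))))
print(mc, bound)  # the Monte-Carlo estimate stays above the bound
```

The gap between the two quantities is the Jensen slack of $\log \mathbb{E}[\sum_k \exp(\pmb{h}_k/s)]$ versus $\mathbb{E}[\log \sum_k \exp(\pmb{h}_k/s)]$, which shrinks as the logit variances $\nu_k$ shrink.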
For simplicity of exposition, we assume activations and model parameters take values from the same set $\mathbb{Q}$, and denote the degree of quantization as $D = |\mathbb{Q}|$ (e.g. $\mathbb{Q} = \{-1, 1\}$, $D = 2$).

# 4.1 PROBABILISTIC PROPAGATION AS TENSOR CONTRACTIONS

Lemma 4.1 (Probabilistic Propagation in BQNs). After quantization, the iterative step of probabilistic propagation in Equation (1) is computed with a finite sum instead of an integral:

$$
P\left(\boldsymbol{h}^{(l+1)}; \psi^{(l+1)}\right) = \sum_{h^{(l)}, \theta^{(l)}} \mathbf{Pr}\left[ \boldsymbol{h}^{(l+1)} \mid h^{(l)}, \theta^{(l)} \right] Q\left(\theta^{(l)}; \phi^{(l)}\right) P\left(h^{(l)}; \psi^{(l)}\right) \tag{7}
$$

and a categorically distributed $\pmb{h}^{(l)}$ results in $\pmb{h}^{(l+1)}$ being categorical as well. The equation holds without any assumption on the operation $\mathbf{Pr}[\pmb{h}^{(l+1)}|\pmb{h}^{(l)},\theta^{(l)}]$ performed in the neural network.

Notice that all distributions in Equation (7) are represented as high-order tensors: suppose there are $I$ input units, $J$ output units, and $K$ model parameters at the $l$-th layer; then $\pmb{h}^{(l)} \in \mathbb{Q}^I$, $\pmb{\theta}^{(l)} \in \mathbb{Q}^K$, and $\pmb{h}^{(l+1)} \in \mathbb{Q}^J$, and their distributions are characterized by $P(\pmb{h}^{(l)}; \pmb{\psi}^{(l)}) \in \mathbb{R}^{D^I}$, $Q(\pmb{\theta}^{(l)}; \pmb{\phi}^{(l)}) \in \mathbb{R}^{D^K}$, $P(\pmb{h}^{(l+1)}; \pmb{\psi}^{(l+1)}) \in \mathbb{R}^{D^J}$, and $\mathbf{Pr}[h^{(l+1)} | h^{(l)}, \pmb{\theta}^{(l)}] \in \mathbb{R}^{D^J \times D^I \times D^K}$, respectively. Therefore, each step in probabilistic propagation is a tensor contraction of three tensors, which establishes the duality between BQNs and hierarchical tensor networks (Robeva & Seigal, 2017).

Since tensor contractions are differentiable w.r.t.
all inputs, BQNs thus circumvent the difficulties in training QNNs (Courbariaux et al., 2015; Rastegari et al., 2016), whose outputs are not differentiable w.r.t. the discrete parameters. This result is not surprising: if we consider learning in QNNs as an integer programming (IP) problem, solving its Bayesian counterpart is analogous to relaxing the problem into a continuous optimization problem (Williamson & Shmoys, 2011).

Complexity of Exact Propagation. The computational complexity of evaluating Equation (7) is exponential in the numbers of random variables, $O(D^{I+J+K})$, which is intractable for a quantized neural network of any reasonable size. We thus turn to approximations.

# 4.2 APPROXIMATE PROPAGATION VIA RANK-1 TENSOR CP DECOMPOSITION

We propose a principled approximation that reduces the computational complexity of probabilistic propagation in BQNs using tensor CP decomposition, which factors an intractable high-order probability tensor into tractable lower-order factors (Grasedyck et al., 2013). In this paper, we consider the simplest rank-1 tensor CP decomposition, where the joint distributions $P$ and $Q$ are fully factorized into products of their marginal distributions; this is equivalent to the mean-field approximation (Wainwright et al., 2008). With rank-1 CP decomposition on $P(h^{(l)};\psi^{(l)})$, $\forall l\in [L]$, the tensor contraction in (7) reduces to a standard Tucker contraction (Kolda & Bader, 2009)

$$
P\left(\boldsymbol{h}_{j}^{(l+1)}; \psi_{j}^{(l+1)}\right) \approx \sum_{h^{(l)}, \theta^{(l)}} \mathbf{Pr}\left[ \boldsymbol{h}_{j}^{(l+1)} \mid \theta^{(l)}, h^{(l)} \right] \prod_{k} Q\left(\theta_{k}^{(l)}; \phi_{k}^{(l)}\right) \prod_{i} P\left(h_{i}^{(l)}; \psi_{i}^{(l)}\right) \tag{8}
$$

where each term $\psi_i^{(l)}$ or $\phi_k^{(l)}$ parameterizes a single categorical variable. In our implementation, we store the parameters in log-space, i.e.
$Q(\theta_k^{(l)} = \mathbb{Q}(d); \phi_k^{(l)}) = \exp(\phi_k^{(l)}(d)) / \sum_{q=1}^{D} \exp(\phi_k^{(l)}(q))$, where $\mathbb{Q}(d)$ denotes the $d$-th quantized value.

Fan-in Number $E$. In a practical model, for the $l$-th layer, an output unit $\pmb{h}_j^{(l+1)}$ only (conditionally) depends on a subset of the input units $\{\pmb{h}_i^{(l)}\}$ and model parameters $\{\pmb{\theta}_k^{(l)}\}$, according to the connectivity pattern of the layer. We denote the sets of dependent input units and parameters for $\pmb{h}_j^{(l+1)}$ as $\mathcal{I}_j^{(l+1)}$ and $\mathcal{M}_j^{(l+1)}$, and define the fan-in number $E$ for the layer as $\max_j \left( \left|\mathcal{I}_j^{(l+1)}\right| + \left|\mathcal{M}_j^{(l+1)}\right| \right)$.

Complexity of Approximate Propagation. The approximate propagation reduces the computational complexity from $O(D^{I+J+K})$ to $O(JD^E)$, which is linear in the number of output units $J$ if we assume the fan-in number $E$ to be a constant (i.e. $E$ is not proportional to $I$).

# 4.3 FAST ALGORITHMS FOR APPROXIMATE PROPAGATION

Different types of network layers have different fan-in numbers $E$, and for layers with $E$ greater than a small constant, Equation (8) is inefficient since the complexity grows exponentially in $E$. Therefore, in this part we devise fast(er) algorithms to further lower the complexity.

Small Fan-in Layers: Direct Tensor Contraction. If $E$ is small, we implement the approximate propagation through the tensor contraction in Equation (8). The computational complexity is $O(JD^{E})$ as discussed previously. See Appendix D.1 for a detailed discussion.

Medium Fan-in Layers: Discrete Fourier Transform. If $E$ is medium, we implement approximate propagation through the fast Fourier transform, since summation of discrete random variables is equivalent to convolution of their probability mass functions. See Appendix D.2 for details. With the fast Fourier transform, the computational complexity is reduced to $O(JE^2D\log(ED))$.
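For a single output unit with small fan-in, the Tucker contraction of Equation (8) above can be sketched directly with tensor contractions (a toy illustration; the sizes, the random conditional tensor, and the axis ordering convention are assumptions, not a layer from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
D, I, K = 2, 3, 3          # quantization levels, fan-in units, fan-in weights (toy sizes)

# Mean-field factors: one categorical marginal per input unit / weight.
P_h = rng.dirichlet(np.ones(D), size=I)    # P(h_i^(l); psi_i^(l)),     shape (I, D)
Q_w = rng.dirichlet(np.ones(D), size=K)    # Q(theta_k^(l); phi_k^(l)), shape (K, D)

# Conditional tensor Pr[h_j^(l+1) | theta^(l), h^(l)] for ONE output unit j.
# Convention (assumed): first K axes index theta, next I axes index h,
# last axis indexes h_j^(l+1); each fiber along the last axis sums to 1.
cond = rng.dirichlet(np.ones(D), size=(D,) * (K + I))   # shape (D,)*(K+I) + (D,)

# Tucker-style contraction of Equation (8): marginalize weights, then inputs.
out = cond
for q in Q_w:                              # contract each weight marginal
    out = np.tensordot(q, out, axes=([0], [0]))
for p in P_h:                              # contract each input marginal
    out = np.tensordot(p, out, axes=([0], [0]))

assert out.shape == (D,) and np.isclose(out.sum(), 1.0)  # valid P(h_j^(l+1))
```

With fan-in $E = I + K = 6$ and $D = 2$, this costs $O(D^{E})$ work per output unit, matching the $O(JD^{E})$ figure above for $J = 1$.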
Large Fan-in Layers: Lyapunov Central Limit Theorem. In a typical linear layer the fan-in $E$ is large, and a super-quadratic algorithm using the fast Fourier transform is still computationally expensive. Therefore, we derive an even faster algorithm based on the Lyapunov central limit theorem (see Appendix D.3). With the CLT, the computational complexity is further reduced to $O(JED)$.

Remarks: Depending on the fan-in number $E$, we adopt the CLT for linear layers with sufficiently large $E$, such as fully connected layers and convolutional layers; the DFT for those with medium $E$, such as average pooling layers and depth-wise layers; and direct tensor contraction for those with small $E$, such as shortcut layers and nonlinear layers.

# 5 EXPERIMENTS

In this section, we demonstrate the effectiveness of BQNs on the MNIST, Fashion-MNIST, KMNIST and CIFAR10 classification datasets. We evaluate our BQNs with both multi-layer perceptron (MLP) and convolutional neural network (CNN) models. In training, each image is augmented by a random shift within 2 pixels (with an additional random flip for CIFAR10); no augmentation is used at test time. In the experiments, we consider a class of quantized neural networks with both binary weights and activations (i.e. $\mathbb{Q} = \{-1,1\}$) and sign activation function $\sigma(\cdot) = \mathrm{sign}(\cdot)$. For BQNs, the distribution parameters $\phi$ are initialized by the Xavier uniform initializer, and all models are trained with the Adam optimizer (Kingma & Ba, 2014) for 100 epochs (300 epochs for CIFAR10) with batch size 100 and initial learning rate $10^{-2}$, which decays by a factor of 0.98 per epoch.
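As a concrete illustration of the CLT-based propagation described in Section 4.3, the pre-activation of one large-fan-in binary unit is summarized by a mean and a variance, which can be checked against brute-force sampling (a sketch under the mean-field and $\mathbb{Q} = \{-1,+1\}$ assumptions; the fan-in and marginals below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
E = 256                                  # fan-in of one output unit (toy value)

p_h = rng.uniform(size=E)                # P(h_i = +1) for each input unit
p_w = rng.uniform(size=E)                # Q(w_i = +1) for each weight

mu_h, mu_w = 2 * p_h - 1, 2 * p_w - 1    # means of the {-1,+1} variables
# For x in {-1,+1}: E[x^2] = 1, hence Var[w_i h_i] = 1 - (mu_w_i mu_h_i)^2.
mu_z = np.sum(mu_w * mu_h)               # mean of preactivation z = sum_i w_i h_i
var_z = np.sum(1 - (mu_w * mu_h) ** 2)   # variance of z (independent terms)
# CLT: z ~ N(mu_z, var_z); a subsequent sign activation would then give
# P(out = +1) ~ 1 - Phi(-mu_z / sqrt(var_z)), at O(ED) cost per layer.

# Brute-force Monte Carlo check of the two moments:
h = (rng.uniform(size=(100_000, E)) < p_h) * 2 - 1
w = (rng.uniform(size=(100_000, E)) < p_w) * 2 - 1
z = (h * w).sum(axis=1)

assert abs(z.mean() - mu_z) < 0.05 * np.sqrt(var_z)
assert abs(z.std() - np.sqrt(var_z)) < 0.05 * np.sqrt(var_z)
```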
| Methods | MNIST NLL ($10^{-3}$) | MNIST % Err. | KMNIST NLL ($10^{-3}$) | KMNIST % Err. | Fashion-MNIST NLL ($10^{-3}$) | Fashion-MNIST % Err. | CIFAR10 NLL ($10^{-3}$) | CIFAR10 % Err. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| E-QNN on MLP | 546.6±157.9 | 3.30±0.65 | 2385.6±432.3 | 17.88±1.86 | 2529.4±276.7 | 13.02±0.81 | N/A | N/A |
| BQN on MLP | 130.0±3.5 | 2.49±0.08 | 457.7±13.8 | 13.41±0.12 | 417.3±8.1 | 9.99±0.20 | N/A | N/A |
| E-QNN on CNN | 425.3±61.8 | 0.85±0.13 | 3755.7±465.1 | 11.49±1.16 | 1610.7±158.4 | 3.02±0.37 | 7989.7±600.2 | 15.92±0.72 |
| BQN on CNN | 41.8±1.6 | 0.85±0.06 | 295.5±1.4 | 9.95±0.15 | 209.5±2.8 | 4.65±0.15 | 530.6±23.0 | 13.74±0.47 |
Table 1: Comparison of the performance of BQNs against the baseline E-QNN. Each E-QNN is an ensemble of 10 networks, which are trained individually but make predictions jointly. We report both NLL (which accounts for prediction uncertainty) and 0-1 test error (which does not). All numbers are averages over 10 runs with different seeds; standard deviations are shown after the $\pm$ sign.

Training Objective of BQNs. To allow for a customized level of uncertainty in the learned Bayesian models, we introduce a regularization coefficient $\lambda$ into the alternative ELBO proposed in Equation (5) (i.e. a lower bound of the likelihood), and train the BQNs by maximizing the following objective:

$$
\overline{\mathcal{L}}(\phi) = \sum_{n=1}^{N} \overline{\mathcal{L}}_{n}(\phi) + \lambda \mathcal{R}(\phi) = \lambda \left( \frac{1}{\lambda} \sum_{n=1}^{N} \overline{\mathcal{L}}_{n}(\phi) + \mathcal{R}(\phi) \right) \tag{9}
$$

where $\lambda$ controls the uncertainty level, i.e. the importance weight of the prior relative to the training set.

Baselines. (1) We compare our BQN against the baseline: a bootstrap ensemble of quantized neural networks (E-QNN). Each member of the ensemble is trained in a non-Bayesian way (Courbariaux et al., 2016), and the members jointly make predictions by averaging over their logits.
Note that Courbariaux et al. (2016) is chosen over other QNN training methods as the baseline since it trains QNNs from random initialization, making it a fair comparison to our approach. Details are discussed in Appendix A. (2) To exhibit the effectiveness of our BQN, we further compare against a continuous-valued Bayesian neural network (abbreviated as BNN) with Gaussian parameters. The model is trained with stochastic gradient variational Bayes (SGVB) augmented by the local reparameterization trick (Shridhar et al., 2019). Since the BNN allows for continuous parameters (unlike the BQN with quantized parameters), its predictive error is expected to be lower than that of the BQN.

![](images/e7104071fe6057ce0a1f8a97119eaccf4c625645e1f1bafa2923fa4a133cb0cc.jpg)
![](images/6fd014e51ccbd2762f3660cb2ca2d7ef74ea18200428b5fd87deedf24541265e.jpg)
![](images/fd0c24ca80b608eb53fb51d4e7b9e2131b181d52f77c6ba6dc1412879711e405.jpg)
![](images/af7dd1b40f49e3d12caef00a1467d52dcb591b5c273b44cbd8c7f580431c41b1.jpg)
![](images/f4456c874d174bdedd7aeaef7945f480d810e656a227631dcfbb61c4ad0244c7.jpg)
![](images/d619baf5d252f7bc4d03821075d865904a97f882d9754a86ea1c9aceb6035704.jpg)
![](images/c2c8ee5bb5303e8829c9c01b381dd2e1c824fac90f4651e9719cce2435c61bb5.jpg)
![](images/0c5d678904523d68be515cb207d07ea993884f816326a2687892ea60021ec764.jpg)

Figure 1: Comparison of the predictive performance of our BQNs against the E-QNN as well as the non-quantized BNN trained by SGVB on a CNN. We display negative log-likelihood (NLL), which accounts for uncertainty, and 0-1 test error, which does not. Panels: (a) NLL MNIST; (b) NLL FMNIST; (c) NLL KMNIST; (d) NLL CIFAR10; (e) Error MNIST; (f) Error FMNIST; (g) Error KMNIST; (h) Error CIFAR10.

Evaluation of BQNs.
While 0-1 test error is a popular metric for predictive performance, it is too coarse a metric to assess uncertainty in decision making (for example, it does not account for how bad the wrong predictions are). Therefore, we mainly use the negative log-likelihood (NLL) to measure predictive performance in the experiments.

Once a BQN is trained (i.e. an approximate posterior $Q(\theta)$ is learned), we consider three modes to evaluate the behavior of the model: (1) analytic inference (AI), (2) Monte Carlo (MC) sampling and (3) maximum a posteriori (MAP) estimation:

1. In analytic inference (AI, i.e. our proposed method), we analytically integrate over $Q(\theta)$ to obtain the predictive distribution, as in the training phase. Since the exact NLL is not accessible with probabilistic propagation (which is why we propose an alternative ELBO in Equation (5)), we report an upper bound of the NLL in this mode.
2. In MC sampling, $S$ sets of model parameters are drawn independently from the posterior, $\theta_s \sim Q(\pmb{\theta}), \forall s \in [S]$; forward propagation is performed as in a (non-Bayesian) quantized neural network for each set $\theta_s$, followed by an average over the model outputs. The difference between analytic inference and MC sampling will be used to evaluate (a) the effect of the mean-field approximation and (b) the tightness of our proposed alternative ELBO.
3. MAP estimation is similar to MC sampling, except that only one set of model parameters $\theta^{\star} = \arg \max_{\theta} Q(\theta)$ is used. We will exhibit our model's ability to compress a Bayesian neural network by comparing the MAP estimate of our BQN with a non-Bayesian QNN.

# 5.1 ANALYSIS OF RESULTS

Expressive Power and Uncertainty Calibration in BQNs. We report the performance of our BQN models under all evaluation modes against the Ensemble-QNN in Table 1 and Figure 1.
(1) Compared to E-QNNs, our BQNs have significantly lower NLL and smaller predictive error (except for Fashion-MNIST with the CNN architecture). (2) As we can observe in Figure 1, BQNs impressively achieve NLL comparable to the continuous-valued BNN, with slightly higher test error. As our model parameters only take values in $\{-1, 1\}$, a small degradation in predictive accuracy is expected.

![](images/77cbb35b07ac52c7316243a9edc1fa5930a189b38053239ae39b04ee2ec016b5.jpg)
![](images/f363f79ac45144fb9cc5e6916fd616251ed8deda73754186120d0d84a570dccf.jpg)
![](images/514405bdc5b877e8a6ca8c5dca045ed74518914c00f43da404bf4937658e406d.jpg)
![](images/a1f1e2c91b0679e9134814216ba85d883c11ad91830bfc62cde900bc41ff43c0.jpg)
![](images/f6819abb37362424e0e8acddb97f1bbf640cb148deed31f5284b11bdcf30c68d.jpg)
![](images/4987415aaa51da6727ca77d022268851321c6423dca22d041fffa6f5c88c88cb.jpg)

Figure 2: Illustration of the mean-field approximation and the tightness of the alternative ELBO on a CNN. The performance gap between our analytic inference and Monte Carlo sampling is displayed. Panels: (a) NLL MNIST; (b) NLL FMNIST; (c) NLL KMNIST; (d) Error MNIST; (e) Error FMNIST; (f) Error KMNIST.

Evaluations of the Mean-field Approximation and Tightness of the Alternative ELBO. If analytic inference (by probabilistic propagation) were computed exactly, the evaluation metrics would equal those obtained with MC sampling (in the limit of infinitely many samples). Therefore, we can evaluate the approximations in probabilistic propagation, namely the mean-field approximation in Equation (8) and the relaxation of the original ELBO in Equation (5), by measuring the gap between analytic inference and MC sampling. As shown in Figure 2, such gaps are small in all scenarios, which justifies the approximations we use in BQNs.

To further decouple these two factors, the mean-field approximation and the relaxation of the original ELBO, we vary the regularization coefficient $\lambda$ in the learning objective.
(1) For $\lambda = 0$ (where the prior term is removed), the models are forced to become deterministic during training. Since deterministic models involve no mean-field approximation in the forward pass, the gap between analytic inference and MC sampling reflects the tightness of our alternative ELBO. (2) As $\lambda$ increases, the gaps increase slightly as well, which shows that the mean-field approximation becomes slightly less accurate with higher learned uncertainty in the model.
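The MC-sampling and MAP evaluation modes described above can be sketched for a single toy binary layer (an illustrative assumption, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
phi = rng.normal(size=5)                   # log-odds parameters of Q(w_i = +1)
p = 1 / (1 + np.exp(-phi))                 # posterior marginals Q(w_i = +1)
x = np.array([1.0, -1.0, 1.0, 1.0, -1.0])  # a toy input

def forward(w):                            # stand-in deterministic QNN forward pass
    return np.sign(w @ x)

# MC sampling: draw S parameter sets from Q and average the predictions.
S = 1000
w_samples = np.where(rng.uniform(size=(S, 5)) < p, 1.0, -1.0)
mc_pred = np.mean([forward(w) for w in w_samples])

# MAP estimation: a single deterministic QNN with w_i = argmax Q(w_i),
# i.e. the "free" compression of a BQN down to one QNN.
w_map = np.where(p > 0.5, 1.0, -1.0)
map_pred = forward(w_map)

assert -1.0 <= mc_pred <= 1.0
assert map_pred in (-1.0, 1.0)             # 5 odd terms: w @ x is never 0
```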
| Methods | MNIST NLL ($10^{-3}$) | MNIST % Err. | KMNIST NLL ($10^{-3}$) | KMNIST % Err. | Fashion-MNIST NLL ($10^{-3}$) | Fashion-MNIST % Err. |
| --- | --- | --- | --- | --- | --- | --- |
| QNN on MLP | 522.4±42.2 | 4.14±0.25 | 2019.1±281.2 | 19.56±1.97 | 2427.1±193.5 | 15.67±1.19 |
| MAP of BQN on MLP | 137.60±4.40 | 3.69±0.09 | 464.60±12.80 | 14.79±0.21 | 461.30±13.40 | 12.89±0.17 |
| QNN on CNN | 497.4±139.5 | 1.08±0.24 | 734.5±1697.2 | 14.2±2.29 | 1878.3±223.8 | 3.88±0.33 |
| MAP of BQN on CNN | 30.3±1.6 | 0.92±0.07 | 293.6±4.4 | 10.82±0.37 | 179.1±4.4 | 5.00±0.11 |
Table 2: Deterministic model compression through direct training of a QNN (Courbariaux et al., 2016) vs. MAP estimation in our proposed BQN. All numbers are averages over 10 runs with different seeds; standard deviations are shown after the $\pm$ sign.

Compression of Neural Networks via BQNs. One advantage of BQNs over continuous-valued BNNs is that deterministic QNNs can be obtained for free, since a BQN can be interpreted as an ensemble of infinitely many QNNs (each a realization of the posterior distribution). (1) One simple approach is to set the model parameters to their MAP estimates, which compresses a given BQN to 1/64 of its original size (with the same number of bits as a single QNN). (2) MC sampling can be
| Methods | MNIST NLL ($10^{-3}$) | MNIST % Err. | KMNIST NLL ($10^{-3}$) | KMNIST % Err. | Fashion-MNIST NLL ($10^{-3}$) | Fashion-MNIST % Err. |
| --- | --- | --- | --- | --- | --- | --- |
| E-QNN on MLP | 546.60±157.90 | 3.30±0.65 | 2385.60±432.30 | 17.88±1.86 | 2529.40±276.70 | 13.02±0.81 |
| MC of BQN on MLP | 108.9±2.6 | 2.73±0.09 | 429.50±11.60 | 13.83±0.12 | 385.30±5.10 | 10.81±0.44 |
| E-QNN on CNN | 425.3±61.80 | 0.85±0.13 | 3755.70±465.10 | 11.49±1.16 | 1610.70±158.40 | 3.02±0.37 |
| MC of BQN on CNN | 29.2±0.6 | 0.87±0.04 | 286.3±2.7 | 10.56±0.14 | 174.5±3.6 | 4.82±0.13 |
Table 3: Bayesian model compression through direct training of an Ensemble-QNN vs. Monte Carlo sampling from our proposed BQN. Each ensemble consists of 5 quantized neural networks, and for a fair comparison we use 5 samples for the Monte Carlo evaluation. All numbers are averages over 10 runs with different seeds; standard deviations are shown after the $\pm$ sign.

interpreted as another approach to compress a BQN, which reduces the original size to $S/64$ of it (with the same number of bits as an ensemble of $S$ QNNs). In Tables 2 and 3, we compare the models obtained by both approaches to their counterparts (a single QNN for MAP, and an E-QNN for MC sampling) trained from scratch as in Courbariaux et al. (2016). For both approaches, our compressed models outperform their counterparts (in NLL). We attribute this to two factors: (a) QNNs are not trained in a Bayesian way, therefore their uncertainty is not well calibrated; and (b) non-differentiable QNNs are unstable to train. Our compression approaches via BQNs simultaneously solve both problems.

# 6 CONCLUSION

We present a sampling-free, backpropagation-compatible, variational-inference-based approach for learning Bayesian quantized neural networks (BQNs). We develop a suite of algorithms for efficient inference in BQNs such that our approach scales to large problems. We evaluate our BQNs by Monte Carlo sampling, which demonstrates that our approach is able to learn a proper posterior distribution over QNNs. Furthermore, we show that our approach can also be used to learn (ensembles of) QNNs by taking the maximum a posteriori estimate of (or sampling from) the posterior distribution.

# REFERENCES

Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.

Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural networks. arXiv preprint arXiv:1505.05424, 2015.
Andrzej Cichocki, Namgil Lee, Ivan V Oseledets, Anh Huy Phan, Qibin Zhao, and D Mandic. Low-rank tensor networks for dimensionality reduction and large-scale optimization problems: Perspectives and challenges part 1. arXiv preprint arXiv:1609.00893, 2016.

Andrzej Cichocki, Anh-Huy Phan, Qibin Zhao, Namgil Lee, Ivan Oseledets, Masashi Sugiyama, Danilo P Mandic, et al. Tensor networks for dimensionality reduction and large-scale optimization: Part 2 applications and future perspectives. Foundations and Trends® in Machine Learning, 9(6):431-673, 2017.

Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems, pp. 3123-3131, 2015.

Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016.

Steve K Esser, Rathinakumar Appuswamy, Paul Merolla, John V Arthur, and Dharmendra S Modha. Backpropagation for energy-efficient neuromorphic computing. In Advances in Neural Information Processing Systems, pp. 1117-1125, 2015.

Yarin Gal. Uncertainty in Deep Learning. PhD thesis, University of Cambridge, 2016.

Jochen Gast and Stefan Roth. Lightweight probabilistic deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3369-3378, 2018.

Soumya Ghosh, Francesco Maria Delle Fave, and Jonathan S Yedidia. Assumed density filtering methods for learning Bayesian neural networks. In AAAI, pp. 1589-1595, 2016.

Lars Grasedyck, Daniel Kressner, and Christine Tobler. A literature survey of low-rank tensor approximation techniques. GAMM-Mitteilungen, 36(1):53-78, 2013.

Alex Graves. Practical variational inference for neural networks. In Advances in Neural Information Processing Systems, pp. 2348-2356, 2011.
Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015.

Jose Miguel Hernández-Lobato and Ryan Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In International Conference on Machine Learning, pp. 1861-1869, 2015.

Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Quantized neural networks: Training neural networks with low precision weights and activations. Journal of Machine Learning Research, 18:187-1, 2017.

Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with Gumbel-softmax. arXiv preprint arXiv:1611.01144, 2016.

Mohammad Khan. Variational learning for latent Gaussian model of discrete data. PhD thesis, University of British Columbia, 2012.

Minje Kim and Paris Smaragdis. Bitwise neural networks. arXiv preprint arXiv:1601.06071, 2016.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Tamara G Kolda and Brett W Bader. Tensor decompositions and applications. SIAM Review, 51(3):455-500, 2009.

Chris J Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712, 2016.

Thomas Peter Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, Massachusetts Institute of Technology, 2001.

Elizabeth Newman, Lior Horesh, Haim Avron, and Misha Kilmer. Stable tensor neural networks for rapid deep learning, 2018.

Román Orús. A practical introduction to tensor networks: Matrix product states and projected entangled pair states. Annals of Physics, 349:117-158, 2014.

Jorn WT Peters and Max Welling. Probabilistic binary neural networks. arXiv preprint arXiv:1809.03368, 2018.

Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi.
Xnor-net: Imagenet classification using binary convolutional neural networks. In European Conference on Computer Vision, pp. 525-542. Springer, 2016.

Elina Robeva and Anna Seigal. Duality of graphical models and tensor networks. Information and Inference: A Journal of the IMA, 2017.

Oran Shayer, Dan Levi, and Ethan Fetaya. Learning discrete weights using the local reparameterization trick. arXiv preprint arXiv:1710.07739, 2017.

Alexander Shekhotsov and Boris Flach. Feed-forward propagation in probabilistic neural networks with categorical and max layers. 2018.

Kumar Shridhar, Felix Laumann, and Marcus Liwicki. A comprehensive guide to Bayesian convolutional neural network with variational inference. arXiv preprint arXiv:1901.02731, 2019.

Daniel Soudry, Itay Hubara, and Ron Meir. Expectation backpropagation: Parameter-free training of multilayer neural networks with continuous or discrete weights. In Advances in Neural Information Processing Systems, pp. 963-971, 2014.

Jiahao Su, Jingling Li, Bobby Bhattacharjee, and Furong Huang. Tensorized spectrum preserving compression for neural networks. arXiv preprint arXiv:1805.10352, 2018.

Martin J Wainwright, Michael I Jordan, et al. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1-305, 2008.

Hao Wang and Dit-Yan Yeung. Towards Bayesian deep learning: A survey. arXiv preprint arXiv:1604.01662, 2016.

Hao Wang, Xingjian Shi, and Dit-Yan Yeung. Natural-parameter networks: A class of probabilistic neural networks. In Advances in Neural Information Processing Systems, pp. 118-126, 2016.

David P Williamson and David B Shmoys. The design of approximation algorithms. Cambridge University Press, 2011.

Anqi Wu, Sebastian Nowozin, Edward Meeds, Richard E Turner, José Miguel Hernández-Lobato, and Alexander L Gaunt. Fixing variational Bayes: Deterministic variational inference for Bayesian neural networks. arXiv preprint arXiv:1810.03958, 2018.
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016.

Chenzhuo Zhu, Song Han, Huizi Mao, and William J Dally. Trained ternary quantization. arXiv preprint arXiv:1612.01064, 2016.

# Appendix: Sampling-Free Learning of Bayesian Quantized Neural Networks

# A RELATED WORK

Probabilistic Neural Networks and Bayesian Neural Networks These models consider weights to be random variables and aim to learn their distributions. To distinguish two families of such models, we call a model a Bayesian neural network if the distributions are learned using a prior-posterior framework (i.e. via Bayesian inference) (Soudry et al., 2014; Hernández-Lobato & Adams, 2015; Ghosh et al., 2016; Graves, 2011; Blundell et al., 2015; Shridhar et al., 2019), and a probabilistic neural network otherwise (Wang et al., 2016; Shekhotsov & Flach, 2018; Gast & Roth, 2018). In particular, our work is closely related to natural-parameter networks (NPN) (Wang et al., 2016), which consider both weights and activations to be random variables from the exponential family. Since the categorical distribution (over quantized values) belongs to the exponential family, our BQN can be interpreted as a categorical NPN, but we learn the distributions via Bayesian inference.

For Bayesian neural networks, various types of approaches have been proposed to learn the posterior distribution over model parameters.

(1) Sampling-free assumed density filtering (ADF), including EBP (Soudry et al., 2014) and PBP (Hernández-Lobato & Adams, 2015), is an online algorithm which (approximately) updates the posterior distribution by Bayes' rule for each observation. If the model parameters $\theta$ are Gaussian distributed, Minka (2001) shows that the Bayes' rule can be computed in analytic form based on $\partial \log(g_n(\phi)) / \partial \phi$, and EBP Soudry et al.
(2014) derives a similar rule for Bernoulli parameters in binary classification. Notice that ADF is compatible with backpropagation:

$$
\frac{\partial \log\left(g_{n}(\phi)\right)}{\partial \phi} = \frac{1}{g_{n}(\phi)} \cdot \frac{\partial g_{n}(\phi)}{\partial \phi} \tag{10}
$$

assuming $g_{n}(\phi)$ can be (approximately) computed by sampling-free probabilistic propagation as in Section 2. However, this approach has two major limitations: (a) the Bayes' rule needs to be derived case by case, and analytic rules for most common cases are not yet known; and (b) it is not compatible with modern optimization methods (such as SGD or Adam), since the optimization is solved analytically for each data point, making it difficult to scale to large models.

(2) Sampling-based variational inference (SVI) formulates an optimization problem and solves it approximately via stochastic gradient descent (SGD). The most popular such method, stochastic gradient variational Bayes (SGVB), approximates $\mathcal{L}_n(\phi)$ by an average over multiple samples (Graves, 2011; Blundell et al., 2015; Shridhar et al., 2019). Before each step of learning or prediction, a number of independent samples of the model parameters $\{\theta_s\}_{s=1}^S$ are drawn according to the current estimate of $Q$, i.e.
$\theta_s \sim Q$ , by which the predictive function $g_n(\phi)$ and the loss $\mathcal{L}_n(\phi)$ can be approximated by + +$$ +g _ {n} (\phi) \approx \frac {1}{S} \sum_ {s = 1} ^ {S} \Pr \left[ y _ {n} \mid x _ {n}, \theta_ {s} \right] = \frac {1}{S} \sum_ {s = 1} ^ {S} f _ {n} \left(\theta_ {s}\right) \tag {11a} +$$ + +$$ +\mathcal {L} _ {n} (\phi) \approx \frac {1}{S} \sum_ {s = 1} ^ {S} \log \left(\Pr \left[ y _ {n} \mid x _ {n}, \theta_ {s} \right]\right) = \frac {1}{S} \sum_ {s = 1} ^ {S} \log \left(f _ {n} \left(\theta_ {s}\right)\right) \tag {11b} +$$ + +where $f_{n}(\theta) = \mathbf{Pr}[y_{n}|x_{n},\theta ]$ denotes the predictive function given a specific realization $\theta$ of the model parameters. The gradients of $\mathcal{L}_n(\phi)$ can now be approximated as + +$$ +\frac {\partial \mathcal {L} _ {n} (\phi)}{\partial \phi} \approx \frac {1}{S} \sum_ {s = 1} ^ {S} \frac {\partial \mathcal {L} _ {n} (\phi)}{\partial f _ {n} \left(\theta_ {s}\right)} \cdot \frac {\partial f _ {n} \left(\theta_ {s}\right)}{\partial \theta_ {s}} \cdot \frac {\partial \theta_ {s}}{\partial \phi} \tag {12} +$$ + +This approach has multiple drawbacks: (a) Repeated sampling suffers from high variance, besides being computationally expensive in both learning and prediction phases; (b) While $g_{n}(\phi)$ is differentiable w.r.t. $\phi$ , $f_{n}(\theta)$ may not be differentiable w.r.t. $\theta$ . One such example is quantized neural networks, whose backpropagation is approximated by straight through estimator (Bengio et al., + +2013). (3) The partial derivatives $\partial \theta_{s} / \partial \phi$ are difficult to compute with complicated reparameterization tricks (Maddison et al., 2016; Jang et al., 2016). +(3) Deterministic Variational inference (DVI) Our approach is most similar to Wu et al. (2018), which observes that if the underlying model is deterministic, i.e. 
$\mathbf{Pr}[h^{(l + 1)}|h^{(l)},\pmb{\theta}^{(l)}]$ is a Dirac delta function, then

$$
\mathcal{L}_{n}(\phi) = \mathbb{E}_{\boldsymbol{h}^{(L-1)} \sim P;\, \boldsymbol{\theta}^{(L-1)} \sim Q} \left[ \log\left( \mathbf{Pr}\left[ y_{n} \mid \boldsymbol{h}^{(L-1)}, \boldsymbol{\theta}^{(L-1)} \right] \right) \right] \tag{13}
$$

Our approach considers a wider scope of problem settings, where the model may be stochastic, i.e. $\mathbf{Pr}[h^{(l + 1)}|h^{(l)},\theta^{(l)}]$ is an arbitrary function. Furthermore, Wu et al. (2018) considers the case where all parameters $\pmb{\theta}$ are Gaussian distributed, whose sampling-free probabilistic propagation requires complicated approximations (Shekhotsov & Flach, 2018).

Quantized Neural Networks These models can be categorized into two classes: (1) partially quantized networks, where only the weights are discretized (Han et al., 2015; Zhu et al., 2016); and (2) fully quantized networks, where both weights and hidden units are quantized (Courbariaux et al., 2015; Kim & Smaragdis, 2016; Zhou et al., 2016; Rastegari et al., 2016; Hubara et al., 2017). While both classes provide compact, low-precision neural network models, fully quantized networks further enjoy fast computation provided by specialized bit-wise operations. In general, quantized neural networks are difficult to train due to their non-differentiability. Gradient descent by backpropagation is approximated by either straight-through estimators (Bengio et al., 2013) or probabilistic methods (Esser et al., 2015; Shayer et al., 2017; Peters & Welling, 2018). Unlike these papers, we focus on Bayesian learning of fully quantized networks. Optimization of quantized neural networks typically requires dedicated loss functions, learning scheduling, and initialization. For example, Peters & Welling (2018) considers pre-training of a continuous-valued neural network as the initialization.
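The straight-through estimator mentioned above can be illustrated in a few lines: the forward pass uses binarized weights, while the backward pass pretends the sign function has derivative 1 (a toy sketch; the weights, input, and target below are arbitrary assumptions):

```python
import numpy as np

w = np.array([0.3, -0.2])                # latent real-valued weights
x = np.array([1.0, 2.0])
target = 3.0

y = np.sign(w) @ x                       # forward with binarized weights: y = 1 - 2 = -1
# Backward with STE: treat d sign(w)/dw as 1 (clipped to |w| <= 1),
# even though the true derivative is 0 almost everywhere.
grad_w = 2 * (y - target) * x * (np.abs(w) <= 1)
w_new = w - 0.1 * grad_w                 # one gradient step on the latent weights

assert np.sign(w_new) @ x == target      # the flipped sign now hits the target
```

The update flips the sign of the second latent weight (from $-0.2$ to $1.4$), so the binarized forward pass reaches the target even though the loss surface over the discrete weights provides no usable gradient.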
Since our approach considers learning from scratch (with a uniform initialization), its performance could be inferior to prior works in terms of absolute accuracy.

Tensor Networks and Tensorial Neural Networks Tensor networks (TNs) are widely used in numerical analysis (Grasedyck et al., 2013), quantum physics (Orus, 2014), and recently machine learning (Cichocki et al., 2016; 2017) to model interactions among multi-dimensional random objects. Various tensorial neural networks (TNNs) (Su et al., 2018; Newman et al., 2018) have been proposed that reduce the size of neural networks by replacing the linear layers with TNs. Recently, Robeva & Seigal (2017) pointed out the duality between probabilistic graphical models (PGMs) and TNs, i.e. that there exists a bijection between PGMs and TNs. Our paper advances this line of thinking by connecting hierarchical Bayesian models (e.g. Bayesian neural networks) and hierarchical TNs.

# B SUPERVISED LEARNING WITH BAYESIAN NEURAL NETWORKS (BNNS)

The problem settings of the general Bayesian model and of Bayesian neural networks for supervised learning are illustrated in Figures 3a and 3b using graphical models.

![](images/ab232d238daac85f03bf2e3a1d4c3283d59073c7e01eb1f64a3fc063c2165353.jpg)
(a) Graphical model depiction of the problem setting in Bayesian neural networks.
Figure 3: Graphical models.

![](images/4d9a1d1ddafcfd7d90e0b35d7cf8b1066981d4f3f5210c3cd2f11956b97808cd.jpg)
(b) Graphical model depiction of a Bayesian neural network as a hierarchical model, where predicting $\mathbf{y}$ from $\mathbf{x}$ can be performed iteratively through the hidden variables $\mathbf{h}^{(l)}$ 's.

General Bayesian model Formally, the graphical model in Figure 3a implies that the joint distribution of the model parameters $\pmb{\theta}$ , the observed dataset $\mathcal{D} = \{(x_n,y_n)\}_{n = 1}^N$ and any unseen data point $(\pmb {x},\pmb {y})$ factorizes as follows:

$$
\begin{array}{l} \mathbf {Pr} [ \boldsymbol {x}, \boldsymbol {y}, \mathcal {D}, \boldsymbol {\theta} ] = \left(\mathbf {Pr} [ \boldsymbol {y} | \boldsymbol {x}, \boldsymbol {\theta} ] \, \mathbf {Pr} [ \boldsymbol {x} ] \, \mathbf {Pr} [ \mathcal {D} | \boldsymbol {\theta} ]\right) \mathbf {Pr} [ \boldsymbol {\theta} ] (14) \\ = \left(\mathbf {Pr} [ \boldsymbol {y} | \boldsymbol {x}, \boldsymbol {\theta} ] \, \mathbf {Pr} [ \boldsymbol {x} ] \left(\prod_ {n = 1} ^ {N} \mathbf {Pr} [ y _ {n} | x _ {n}, \boldsymbol {\theta} ] \, \mathbf {Pr} [ x _ {n} ]\right)\right) \mathbf {Pr} [ \boldsymbol {\theta} ] (15) \\ \end{array}
$$

where the $\mathbf{Pr}[x_n]$ 's and $\mathbf{Pr}[\pmb{x}]$ are identically distributed, and so are the conditional distributions $\mathbf{Pr}[y_n|x_n,\pmb{\theta}]$ 's and $\mathbf{Pr}[\pmb{y}|\pmb{x},\pmb{\theta}]$ . In other words, we assume that (1) the samples $(x_n,y_n)$ 's (and the unseen data point $(\pmb{x},\pmb{y})$ ) are independently and identically distributed according to the same data distribution; and (2) $x_{n}$ (or $\pmb{x}$ ) and $\pmb{\theta}$ together predict the output $y_{n}$ (or $\pmb{y}$ ) according to the same conditional distribution.
Notice that the factorization above also implies the following equations:

$$
\Pr [ \boldsymbol {y} | \boldsymbol {x}, \mathcal {D}, \boldsymbol {\theta} ] = \Pr [ \boldsymbol {y} | \boldsymbol {x}, \boldsymbol {\theta} ] \tag {16a}
$$

$$
\Pr [ \boldsymbol {\theta} | \boldsymbol {x}, \mathcal {D} ] = \Pr [ \boldsymbol {\theta} | \mathcal {D} ] \tag {16b}
$$

With these implications, the posterior predictive distribution $\mathbf{Pr}[y|x,\mathcal{D}]$ can now be expanded as:

$$
\mathbf {Pr} [ \boldsymbol {y} | \boldsymbol {x}, \mathcal {D} ] = \int_ {\theta} \mathbf {Pr} [ \boldsymbol {y} | x, \theta , \mathcal {D} ] \, \mathbf {Pr} [ \theta | x, \mathcal {D} ] d \theta = \int_ {\theta} \mathbf {Pr} [ \boldsymbol {y} | x, \theta ] \underbrace {\mathbf {Pr} [ \theta | \mathcal {D} ]} _ {\approx Q (\theta ; \phi)} d \theta \tag {17}
$$

where we approximate the posterior distribution $\mathbf{Pr}[\pmb {\theta}|\mathcal{D}]$ by a parameterized distribution $Q(\pmb {\theta};\phi)$ .

Variational Learning The reason we learn an approximate posterior $Q$ and not the exact distribution $\mathbf{Pr}[\pmb{\theta}|\mathcal{D}]$ is that, for complex models, the latter is intractable to compute. The exact posterior $\mathbf{Pr}[\pmb{\theta}|\mathcal{D}]$ generally does not take the form of $Q(\pmb{\theta};\phi)$ even if its prior $\mathbf{Pr}[\pmb{\theta}]$ does.

A standard approach to finding a good approximation $Q(\pmb{\theta}; \phi)$ is variational inference, which finds $\phi^{\star}$ such that the KL-divergence $\mathbf{KL}(Q(\pmb{\theta}; \phi) || \mathbf{Pr}[\pmb{\theta} | \mathcal{D}])$ of $Q(\pmb{\theta}; \phi)$ from $\mathbf{Pr}[\pmb{\theta} | \mathcal{D}]$ is minimized (or, equivalently, the negative KL-divergence is maximized):
+ +$$ +\begin{array}{l} \phi^ {\star} = \arg \max _ {\phi} (- \mathbf {K L} (Q (\boldsymbol {\theta}; \phi) | | \mathbf {P r} [ \boldsymbol {\theta} | \mathcal {D} ])) (18) \\ = \arg \max _ {\phi} \left(- \int_ {\theta} Q (\theta ; \phi) \log \left(\frac {Q (\theta ; \phi)}{\mathbf {P r} [ \theta | \mathcal {D} ]}\right) d \theta\right) (19) \\ \end{array} +$$ + +where $\mathbf{Pr}[\pmb {\theta}|\mathcal{D}]$ is obtained via standard Bayes' rule, i.e. $\mathbf{Pr}[\pmb {\theta}|\mathcal{D}] = \mathbf{Pr}[\mathcal{D}|\pmb {\theta}]\mathbf{Pr}[\pmb {\theta}] / \mathbf{Pr}[\mathcal{D}]$ . Now we are able to decompose the maximization objective into two terms by plugging the rule into (19): + +$$ +\begin{array}{l} \mathcal {L} (\phi) = - \int_ {\theta} Q (\theta ; \phi) \log \left(Q (\theta ; \phi) \cdot \frac {\mathbf {P r} [ \mathcal {D} ]}{\mathbf {P r} [ \theta ] \mathbf {P r} [ \mathcal {D} | \theta ]}\right) d \boldsymbol {\theta} (20) \\ = \sum_ {n = 1} ^ {N} \int_ {\theta} \log \left(\mathbf {P r} \left[ y _ {n} \mid x _ {n}, \theta \right]\right) Q (\theta ; \phi) d \theta + \int_ {\theta} Q (\theta ; \phi) \log \left(\frac {Q (\theta ; \phi)}{\mathbf {P r} [ \theta ]}\right) d \theta + \text {c o n s t .} (21) \\ = \sum_ {n = 1} ^ {N} \underbrace {\mathbb {E} _ {Q} [ \log (\mathbf {P r} [ y _ {n} | x _ {n} , \theta ]) ]} _ {\mathcal {L} _ {n} (\phi)} + \underbrace {\mathbf {K L} (Q (\boldsymbol {\theta} ; \phi) | | \mathbf {P r} [ \boldsymbol {\theta} ])} _ {\mathcal {R} (\phi)} - \underbrace {\log (\mathbf {P r} [ \mathcal {D} ])} _ {\text {c o n s t .}} (22) \\ \end{array} +$$ + +where (1) $\mathcal{L}_n(\phi)$ is the expected log-likelihood, which reflects the predictive performance of the Bayesian model on the data point $(x_{n},y_{n})$ ; and (2) $\mathcal{R}(\phi)$ is the KL-divergence between $Q(\pmb {\theta};\phi)$ and its prior $\mathbf{Pr}[\pmb {\theta}]$ , which reduces to entropy $H(Q)$ if the prior of $\pmb{\theta}$ follows a uniform distribution. 

Hierarchical Bayesian Model A Bayesian neural network can be considered as a hierarchical Bayesian model depicted in Figure 3b, which further satisfies the following two assumptions:

Assumption B.1 (Independence of Model Parameters $\theta^{(l)}$ ). The approximate posterior $Q(\pmb{\theta};\phi)$ over the model parameters $\pmb{\theta}$ is factorized into $L$ disjoint and statistically independent layers $\{\pmb{\theta}^{(l)}\}_{l = 0}^{L - 1}$ (where each $\phi^{(l)}$ parameterizes $\pmb{\theta}^{(l)}$ in the $l$ -th layer) such that:

$$
Q (\boldsymbol {\theta}; \phi) = \prod_ {l = 0} ^ {L - 1} Q \left(\boldsymbol {\theta} ^ {(l)}; \phi^ {(l)}\right) \tag {23}
$$

Assumption B.2 (Markovianity of Hidden Units $\pmb{h}^{(l)}$ ). The hidden variables $\pmb{h} = \{\pmb{h}^{(l)}\}_{l=0}^{L}$ satisfy the Markov property that $\pmb{h}^{(l+1)}$ depends on the input $\pmb{x}$ only through its previous layer $\pmb{h}^{(l)}$ :

$$
\Pr [ \boldsymbol {h} ^ {(l + 1)} | \boldsymbol {h} ^ {(: l)}, \boldsymbol {\theta} ^ {(: l)} ] = \Pr [ \boldsymbol {h} ^ {(l + 1)} | \boldsymbol {h} ^ {(l)}, \boldsymbol {\theta} ^ {(l)} ] \tag {24}
$$

where we use the short-hand notations $\pmb{h}^{(:l)}$ and $\pmb{\theta}^{(:l)}$ to represent the sets of previous layers $\{\pmb{h}^{(k)}\}_{k=0}^l$ and $\{\pmb{\theta}^{(k)}\}_{k=0}^l$ . For consistency, we denote $\pmb{h}^{(0)} = \pmb{x}$ and $\pmb{h}^{(L)} = \pmb{y}$ .
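These two assumptions are what make exact layer-wise marginalization possible. A toy numeric sketch, assuming small synthetic discrete state spaces and random conditional tables: propagating the marginal one layer at a time matches brute-force summation over all hidden units and parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sizes: each h^{(l)} and theta^{(l)} takes D discrete values.
D = 3
L = 2  # number of stochastic layers

# Random conditional tables Pr[h^{(l+1)} | h^{(l)}, theta^{(l)}],
# indexed [h_next, h_prev, theta]; normalized over h_next.
T = [rng.uniform(size=(D, D, D)) for _ in range(L)]
T = [t / t.sum(axis=0, keepdims=True) for t in T]

# Independent variational factors Q(theta^{(l)}), as in Assumption B.1.
Q = [rng.dirichlet(np.ones(D)) for _ in range(L)]

# Input distribution P(h^{(0)}): a point mass on the observed x (index 0).
p0 = np.zeros(D)
p0[0] = 1.0

# (a) Layer-wise probabilistic propagation: one contraction per layer.
p = p0
for t, q in zip(T, Q):
    p = np.einsum('npt,t,p->n', t, q, p)

# (b) Brute force: sum the joint over all h^{(1)} and theta^{(0)}, theta^{(1)}.
brute = np.zeros(D)
for h1 in range(D):
    for t0 in range(D):
        for t1 in range(D):
            for h2 in range(D):
                brute[h2] += (T[1][h2, h1, t1] * Q[1][t1]
                              * T[0][h1, 0, t0] * Q[0][t0])

assert np.allclose(p, brute)
```

The `einsum` line performs the per-layer contraction; its cost grows linearly with depth, whereas the brute-force sum is exponential in depth.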

Proof of probabilistic propagation Based on the two assumptions above, we provide a proof for probabilistic propagation in Equation (1) as follows:

$$
\begin{array}{l} \overbrace {\mathbf {Pr} [ \boldsymbol {h} ^ {(l + 1)} | x ]} ^ {P \left(\boldsymbol {h} ^ {(l + 1)}; \psi^ {(l + 1)}\right)} = \int_ {\theta^ {(: l)}} \mathbf {Pr} [ \boldsymbol {h} ^ {(l + 1)} | x, \theta^ {(: l)} ] \, Q \left(\theta^ {(: l)}; \phi^ {(: l)}\right) d \theta^ {(: l)} (25) \\ = \int_ {\theta^ {(: l)}} \left(\int_ {h ^ {(l)}} \mathbf {Pr} [ h ^ {(l + 1)} | h ^ {(l)}, \theta^ {(l)} ] \, \mathbf {Pr} [ h ^ {(l)} | x, \theta^ {(: l - 1)} ] \, d h ^ {(l)}\right) Q \left(\theta^ {(: l)}; \phi^ {(: l)}\right) d \theta^ {(: l)} (26) \\ = \int_ {h ^ {(l)}, \theta^ {(l)}} \mathbf {Pr} [ h ^ {(l + 1)} | h ^ {(l)}, \theta^ {(l)} ] \, Q \left(\theta^ {(l)}; \phi^ {(l)}\right) \left(\int_ {\theta^ {(: l - 1)}} \mathbf {Pr} [ h ^ {(l)} | x, \theta^ {(: l - 1)} ] \, Q \left(\theta^ {(: l - 1)}; \phi^ {(: l - 1)}\right) d \theta^ {(: l - 1)}\right) d h ^ {(l)} d \theta^ {(l)} (27) \\ = \int_ {h ^ {(l)}, \theta^ {(l)}} \mathbf {Pr} [ h ^ {(l + 1)} | h ^ {(l)}, \theta^ {(l)} ] \, Q \left(\theta^ {(l)}; \phi^ {(l)}\right) \underbrace {\mathbf {Pr} [ h ^ {(l)} | x ]} _ {P \left(h ^ {(l)} ; \psi^ {(l)}\right)} d h ^ {(l)} d \theta^ {(l)} (28) \\ \end{array}
$$

# C ALTERNATIVE EVIDENCE LOWER BOUND AND ITS ANALYTIC FORMS

# C.1 ALTERNATIVE EVIDENCE LOWER BOUND (PROOF FOR THEOREM 3.1)

The steps to prove the inequality (6) almost follow the ones for probabilistic propagation above:

$$
\begin{array}{l} \mathcal {L} _ {n} (\phi) = \mathbb {E} _ {Q} \left[ \log \left(\mathbf {Pr} \left[ y _ {n} \mid x _ {n}, \boldsymbol {\theta} \right]\right) \right] (29) \\ = \int_ {\theta} \log \left(\mathbf {Pr} \left[ y _ {n} \mid x _ {n}, \theta \right]\right) Q (\theta ; \phi) d \theta (30) \\ = \int_ {\theta} \log \left(\int_ {h ^ {(L - 1)}} \mathbf {Pr} \left[ y _ {n}, h ^ {(L - 1)} \mid x _ {n}, \theta \right] d h ^ {(L - 1)}\right) Q (\theta ; \phi) d \theta (31) \\ = \int_ {\theta} \log \left(\int_ {h ^ {(L - 1)}} \mathbf {Pr} \left[ y _ {n} \mid h ^ {(L - 1)}, \theta^ {(L - 1)} \right] \mathbf {Pr} \left[ h ^ {(L - 1)} \mid x _ {n}, \theta^ {(0: L - 2)} \right] d h ^ {(L - 1)}\right) Q (\theta ; \phi) d \theta (32) \\ \geq \int_ {\theta} \left(\int_ {h ^ {(L - 1)}} \log \left(\mathbf {Pr} \left[ y _ {n} \mid h ^ {(L - 1)}, \theta^ {(L - 1)} \right]\right) \mathbf {Pr} \left[ h ^ {(L - 1)} \mid x _ {n}, \theta^ {(0: L - 2)} \right] d h ^ {(L - 1)}\right) Q (\theta ; \phi) d \theta (33) \\ = \int_ {h ^ {(L - 1)}, \theta^ {(L - 1)}} \log \left(\mathbf {Pr} \left[ y _ {n} \mid h ^ {(L - 1)}, \theta^ {(L - 1)} \right]\right) Q \left(\theta^ {(L - 1)}; \phi^ {(L - 1)}\right) \left(\int_ {\theta^ {(0: L - 2)}} \mathbf {Pr} [ h ^ {(L - 1)} | x _ {n}, \theta^ {(0: L - 2)} ] \, Q \left(\theta^ {(0: L - 2)}; \phi^ {(0: L - 2)}\right) d \theta^ {(0: L - 2)}\right) d h ^ {(L - 1)} d \theta^ {(L - 1)} (34) \\ = \int_ {h ^ {(L - 1)}, \theta^ {(L - 1)}} \log \left(\mathbf {Pr} \left[ y _ {n} \mid h ^ {(L - 1)}, \theta^ {(L - 1)} \right]\right) Q \left(\theta^ {(L - 1)}; \phi^ {(L - 1)}\right) \mathbf {Pr} \left[ h ^ {(L - 1)} \mid x _ {n} \right] d h ^ {(L - 1)} d \theta^ {(L - 1)} (35) \\ = \mathbb {E} _ {\boldsymbol {h} ^ {(L - 1)} \sim P; \boldsymbol {\theta} ^ {(L - 1)} \sim Q} \left[ \log \left(\mathbf {Pr} \left[ y _ {n} \mid h ^ {(L - 1)}, \theta^ {(L - 1)} \right]\right) \right] = \overline {{\mathcal {L}}} _ {n} (\phi) (36) \\ \end{array}
$$

where the key step is Jensen's inequality $\log (\mathbb{E}_Q[\cdot ])\geq \mathbb{E}_Q[\log (\cdot)]$ , applied in Equation (33).
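A minimal numeric illustration of this Jensen step, with a synthetic discrete $\theta^{(L-1)}$ and synthetic likelihood values: swapping the logarithm and the expectation can only decrease the objective.

```python
import numpy as np

rng = np.random.default_rng(2)

# A discrete last-layer parameter with variational weights q, and the
# conditional likelihoods Pr[y_n | h, theta] for a fixed h (synthetic).
q = rng.dirichlet(np.ones(5))
lik = rng.uniform(0.05, 1.0, size=5)

exact = np.log(np.dot(q, lik))     # log E_Q[Pr[y_n | .]], as in (32)
lower = np.dot(q, np.log(lik))     # E_Q[log Pr[y_n | .]], as in (33)

# Jensen's inequality: the alternative objective lower-bounds the exact one.
assert lower <= exact
```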
Notice that if $\pmb{\theta}^{(L - 1)}$ is not a random variable (typical for an output layer), $\overline{\mathcal{L}}_n(\phi)$ can be simplified as:

$$
\bar {\mathcal {L}} _ {n} (\phi) = \int_ {h ^ {(L - 1)}} \log \left(\mathbf {Pr} \left[ y _ {n} \mid h ^ {(L - 1)}; \phi^ {(L - 1)} \right]\right) P \left(h ^ {(L - 1)}; \psi^ {(L - 1)}\right) d h ^ {(L - 1)} \tag {37}
$$

where we write $\mathbf{Pr}[h^{(L - 1)}|x]$ in its parameterized form $P(h^{(L - 1)};\psi^{(L - 1)})$ . Now, the gradient $\partial \overline{\mathcal{L}}_n(\phi) / \partial \phi^{(L - 1)}$ can be obtained by differentiating Equation (37), while the other gradients $\partial \overline{\mathcal{L}}_n(\phi) / \partial \phi^{(:L - 2)}$ are further obtained by the chain rule:

$$
\frac {\partial \overline {{\mathcal {L}}} _ {n} (\phi)}{\partial \phi^ {(: L - 2)}} = \frac {\partial \overline {{\mathcal {L}}} _ {n} (\phi)}{\partial \psi^ {(L - 1)}} \cdot \frac {\partial \psi^ {(L - 1)}}{\partial \phi^ {(: L - 2)}} \tag {38}
$$

which requires us to compute $\partial \overline{\mathcal{L}}_n(\phi) / \partial \psi^{(L - 1)}$ and $\partial \psi^{(L - 1)} / \partial \phi^{(:L - 2)}$ . While $\partial \overline{\mathcal{L}}_n(\phi) / \partial \psi^{(L - 1)}$ can be derived from Equation (37), $\partial \psi^{(L - 1)} / \partial \phi^{(:L - 2)}$ can be obtained by backpropagating through the probabilistic propagation of Equation (1). In other words: since $P(h^{(L - 1)};\psi^{(L - 1)})$ is an intermediate step of the forward pass, $\psi^{(L - 1)}$ is a function of all parameters from previous layers $\phi^{(:L - 2)}$ , and if each step $\psi^{(l + 1)} = g^{(l)}(\psi^{(l)},\phi^{(l)})$ is differentiable w.r.t. $\psi^{(l)}$ and $\phi^{(l)}$ , the partial derivatives $\partial \psi^{(L - 1)} / \partial \phi^{(:L - 2)}$ can be obtained by iterating the chain rule.
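This chain rule can be sketched on a single assumed propagation step; the moment map and the expected-log-likelihood objective below are illustrative stand-ins (a scalar weight $w$ playing the role of $\phi$ , moments $\psi = (\mu, \nu)$), and the hand-derived gradient is checked against central finite differences:

```python
import numpy as np

# Toy setup: input moments, target, and output noise (synthetic values).
mu0, nu0, y, s = 0.7, 0.2, 1.0, 0.5

def propagate(w):
    # psi^{(L-1)} as a function of phi: mean and variance of w * h.
    return w * mu0, w * w * nu0

def objective(mu, nu):
    # Expected Gaussian log-likelihood as a function of psi = (mu, nu).
    return -((y - mu) ** 2 + nu) / (2 * s) - 0.5 * np.log(2 * np.pi * s)

def loss(w):
    return objective(*propagate(w))

w = 0.9
mu, nu = propagate(w)
# Chain rule: dL/dw = (dL/dmu)(dmu/dw) + (dL/dnu)(dnu/dw).
grad = (y - mu) / s * mu0 + (-1.0 / (2 * s)) * 2 * w * nu0

# Central finite-difference check.
eps = 1e-6
fd = (loss(w + eps) - loss(w - eps)) / (2 * eps)
assert np.isclose(grad, fd, atol=1e-6)
```

The same pattern extends layer by layer: each differentiable moment map contributes one factor to the product of Jacobians in Equation (38).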

# C.2 SOFTMAX LAYER FOR CLASSIFICATION PROBLEM

In this part, we first prove the alternative evidence lower bound (ELBO) for Bayesian neural networks with a softmax function as their last layer. Subsequently, we derive the corresponding backpropagation rule for the softmax layer. Finally, we show a method based on Taylor expansion to approximately evaluate a softmax layer without Monte Carlo sampling.

Theorem C.1 (Analytic Form of $\overline{\mathcal{L}}_n(\phi)$ for Classification). Let $\pmb{h} \in \mathbb{R}^K$ (with $K$ the number of classes) be the pre-activations of a softmax layer (a.k.a. logits), and $\phi = s \in \mathbb{R}^+$ be a scaling factor that adjusts its scale such that $\mathbf{Pr}[y = c|\pmb{h}, s] = \exp(\pmb{h}_c / s) / \sum_{k=1}^{K} \exp(\pmb{h}_k / s)$ . Suppose the logits $\{\pmb{h}_k\}_{k=1}^K$ are pairwise independent (which holds under the mean-field approximation), $\pmb{h}_k$ follows a Gaussian distribution $\pmb{h}_k \sim \mathcal{N}(\mu_k, \nu_k)$ (therefore $\psi = \{\mu_k, \nu_k\}_{k=1}^K$ ), and $s$ is a deterministic parameter. Then $\overline{\mathcal{L}}_n(\phi)$ can be further lower-bounded by the following analytic form:

$$
\bar {\mathcal {L}} _ {n} (\phi) \geq \frac {\mu_ {c}}{s} - \log \left(\sum_ {k = 1} ^ {K} \exp \left(\frac {\mu_ {k}}{s} + \frac {\nu_ {k}}{2 s ^ {2}}\right)\right) \triangleq \hat {\mathcal {L}} (\phi) \tag {39}
$$

Proof. The lower bound follows by plugging $\mathbf{Pr}[y|h,s]$ and $\mathbf{Pr}[h_k|x]$ into Equation (6).
+ +$$ +\begin{array}{l} \bar {\mathcal {L}} _ {n} (\phi) = \int_ {h} \log \left(\Pr [ y _ {n} = c | h; s ]\right) \Pr [ h | x ] d h (40) \\ = \int_ {h} \left(\frac {h _ {c}}{s} - \log \left(\sum_ {k = 1} ^ {K} \exp \left(\frac {h _ {k}}{s}\right)\right)\right) \left(\prod_ {k = 1} ^ {K} \mathbf {P r} \left[ h _ {k} \mid x \right]\right) d h (41) \\ = \frac {1}{s} \int_ {h _ {c}} h _ {c} \mathbf {P r} [ h _ {c} | x _ {n} ] d h _ {c} - \int_ {h} \log \left(\sum_ {k = 1} ^ {K} \exp \left(\frac {h _ {k}}{s}\right)\right) \left(\prod_ {k = 1} ^ {K} \mathbf {P r} [ h _ {k} | x ]\right) d h (42) \\ = \frac {\mu_ {c}}{s} - \int_ {h} \log \left(\sum_ {k = 1} ^ {K} \exp \left(\frac {h _ {k}}{s}\right)\right) \left(\prod_ {k = 1} ^ {K} \mathbf {P r} [ h _ {k} | x ]\right) d h (43) \\ \geq \frac {\mu_ {c}}{s} - \log \left(\int_ {h} \sum_ {k = 1} ^ {K} \exp \left(\frac {h _ {k}}{s}\right) \left(\prod_ {k = 1} ^ {K} \mathbf {P r} [ h _ {k} | x ]\right) d h\right) (44) \\ = \frac {\mu_ {c}}{s} - \log \left(\sum_ {k = 1} ^ {K} \int_ {h _ {k}} \exp \left(\frac {h _ {k}}{s}\right) \mathbf {P r} [ h _ {k} | x ] d h _ {k}\right) (45) \\ = \frac {\mu_ {c}}{s} - \log \left(\sum_ {k = 1} ^ {K} \int_ {h _ {k}} \exp \left(\frac {h _ {k}}{s}\right) \cdot \frac {1}{\sqrt {2 \pi \nu_ {k}}} \exp \left(- \frac {\left(h _ {k} - \mu_ {k}\right) ^ {2}}{2 \nu_ {k}}\right) d h _ {k}\right) (46) \\ = \frac {\mu_ {c}}{s} - \log \left(\sum_ {k = 1} ^ {K} \exp \left(\frac {\mu_ {k}}{s} + \frac {\nu_ {k}}{2 s ^ {2}}\right)\right) = \hat {\mathcal {L}} (\phi) (47) \\ \end{array} +$$ + +where the last equation follows + +$$ +\begin{array}{l} \int_ {h _ {k}} \exp \left(\frac {h _ {k}}{s}\right) \cdot \frac {1}{\sqrt {2 \pi \nu_ {k}}} \exp \left(- \frac {\left(h _ {k} - \mu_ {k}\right) ^ {2}}{2 \nu_ {k}}\right) d h _ {k} (48) \\ = \int_ {h _ {k}} \frac {1}{\sqrt {2 \pi \nu_ {k}}} \exp \left(- \frac {h _ {k} ^ {2} - 2 \left(\mu_ {k} + \nu_ {k} / s\right) h _ {k} + \mu_ {k} ^ {2}}{2 \nu_ {k}}\right) d h _ {k} 
(49) \\ = \underbrace {\int_ {h _ {k}} \frac {1}{\sqrt {2 \pi \nu_ {k}}} \exp \left(- \frac {\left(h _ {k} - \left(\mu_ {k} + \nu_ {k}\right)\right) ^ {2}}{2 \nu_ {k}}\right) d h _ {k}} \cdot \exp \left(\frac {\mu_ {k}}{s} + \frac {\nu_ {k}}{2 s ^ {2}}\right) (50) \\ \end{array} +$$ + +where the under-braced term is unity since it takes the form of Gaussian distribution. + +![](images/fdcd948f915dfc0af969fdeb62fb5da6ba914577e28ff976d8d12bdd914bb258.jpg) + +From Equation (43) to (44), we use the Jensen's inequality to achieve a lower bound for integral of log-sum-exp. The bound can be tighten with advanced techniques in Khan (2012). + +Derivatives of $\overline{\mathcal{L}}_n(\phi)$ in (39) To use probabilistic backpropagation to obtain the gradients w.r.t. the parameters from previous layers, we first need to obtain the derivatives w.r.t. $\psi^{(L - 1)} = \{\mu_k,\nu_k\}_{k = 1}^K$ + +$$ +\frac {\partial \hat {\mathcal {L}} _ {n} (\phi)}{\partial \mu_ {k}} = - \frac {1}{s} \left(\frac {\exp \left(\mu_ {k} / s + \nu_ {k} / 2 s ^ {2}\right)}{\sum_ {k = 1} ^ {K} \exp \left(\mu_ {k} / s + \nu_ {k} / 2 s ^ {2}\right)} - \mathbf {1} [ k = c ]\right) \tag {51a} +$$ + +$$ +\frac {\partial \hat {\mathcal {L}} _ {n} (\phi)}{\partial \nu_ {k}} = - \frac {1}{2 s ^ {2}} \left(\frac {\exp \left(\mu_ {k} / s + \nu_ {k} / 2 s ^ {2}\right)}{\sum_ {k = 1} ^ {K} \exp \left(\mu_ {k} / s + \nu_ {k} / 2 s ^ {2}\right)}\right) \tag {51b} +$$ + +Furthermore, the scale $s$ can be (optionally) updated along with other parameters using the gradient + +$$ +\frac {\partial \hat {\mathcal {L}} _ {n} (\phi)}{\partial s} = - \frac {\mu_ {c}}{s ^ {2}} + \frac {\sum_ {k = 1} ^ {K} \left(\mu_ {k} / s ^ {2} + \nu_ {k} / s ^ {3}\right) \exp \left(\mu_ {k} / s + \nu_ {k} / 2 s ^ {2}\right)}{\sum_ {k = 1} ^ {K} \exp \left(\mu_ {k} / s + \nu_ {k} / 2 s ^ {2}\right)} \tag {52} +$$ + +Prediction with Softmax Layer Once we learn the parameters for the Bayesian neural network, in principle we can compute 
the predictive distribution of $\pmb{y}$ by evaluating the following equation:

$$
\mathbf {Pr} [ \boldsymbol {y} = c | \boldsymbol {x} ] = \int_ {h} \mathbf {Pr} [ \boldsymbol {y} = c | \boldsymbol {h}, s ] \, \mathbf {Pr} [ h | \boldsymbol {x} ] d h = \int_ {h} \ell_ {c} (h) \, \mathbf {Pr} [ h | \boldsymbol {x} ] d h \tag {53}
$$

$$
\text {(mean-field assumption)} = \int_ {h _ {1}} \dots \int_ {h _ {K}} \ell_ {c} (h) \left(\prod_ {k = 1} ^ {K} \mathbf {Pr} [ h _ {k} | x ]\right) d h _ {1} \dots d h _ {K} \tag {54}
$$

where we denote the softmax function as $\ell_c(h) = \exp (h_c / s) / [\sum_k\exp (h_k / s)]$ . Unfortunately, the equation above cannot be computed in closed form. The most straightforward workaround is to approximate the integral by Monte Carlo sampling: for each $h_k$ we draw $S$ samples $\{h_k^s\}_{s = 1}^S$ independently and compute the prediction:

$$
\mathbf {Pr} [ \boldsymbol {y} = c | x ] \approx \frac {1}{S} \sum_ {s = 1} ^ {S} \ell_ {c} \left(h ^ {s}\right), \forall c \in [ K ] \tag {55}
$$

Despite its conceptual simplicity, the Monte Carlo method suffers from expensive computation and high variance in estimation. Instead, we propose an economical estimate based on Taylor expansion.
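Both quantities above can be sketched numerically, assuming synthetic Gaussian logits: the analytic lower bound of Equation (39) on one hand, and Monte Carlo estimates of the expected log-likelihood and of the predictive distribution (Equation (55)) on the other:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic Gaussian logits h_k ~ N(mu_k, nu_k), scale s, true class c.
K, s, c = 4, 1.0, 2
mu = rng.normal(size=K)
nu = rng.uniform(0.1, 0.5, size=K)

# Analytic lower bound of Equation (39).
bound = mu[c] / s - np.log(np.sum(np.exp(mu / s + nu / (2 * s ** 2))))

# Monte Carlo estimates from S joint samples of the logits.
S = 200_000
z = (mu + np.sqrt(nu) * rng.normal(size=(S, K))) / s
log_lik = z[:, c] - np.log(np.exp(z).sum(axis=1))
mc = log_lik.mean()                        # estimate of the expected log-lik
probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
pred = probs.mean(axis=0)                  # Equation (55), for all classes

assert bound <= mc                         # the bound holds, up to MC noise
assert np.isclose(pred.sum(), 1.0)
```

The analytic bound costs one log-sum-exp; the Monte Carlo estimate costs $S$ softmax evaluations and still carries sampling noise, which is the motivation for the deterministic approximation developed next.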
First, we expand the function $\ell_c(h)$ in a Taylor series around the point $\mu$ (up to the second order):

$$
\begin{array}{l} \ell_ {c} (h) = \ell_ {c} (\mu) + \left[ \frac {\partial \ell_ {c}}{\partial h} (\mu) \right] ^ {\top} (h - \mu) + \frac {1}{2} (h - \mu) ^ {\top} \left[ \frac {\partial^ {2} \ell_ {c}}{\partial h ^ {2}} (\mu) \right] (h - \mu) + O \left(\| h - \mu \| ^ {3}\right) (56) \\ = \ell_ {c} (\mu) + \sum_ {k = 1} ^ {K} \left[ \frac {\partial \ell_ {c}}{\partial h _ {k}} (\mu) \right] \left(h _ {k} - \mu_ {k}\right) + \frac {1}{2} \sum_ {i = 1} ^ {K} \sum_ {j = 1} ^ {K} \left[ \frac {\partial^ {2} \ell_ {c}}{\partial h _ {i} \partial h _ {j}} (\mu) \right] \left(h _ {i} - \mu_ {i}\right) \left(h _ {j} - \mu_ {j}\right) + O \left(\| h - \mu \| ^ {3}\right) (57) \\ \end{array}
$$

Before we derive the forms of these derivatives, we first show that the odd-order terms do not contribute to the expectation. For example, if $\ell_c(h)$ is approximated by its first two terms (i.e. a linear function), Equation (54) can be written as

$$
\begin{array}{l} \mathbf {Pr} [ \boldsymbol {y} = c | x ] \approx \int_ {h _ {1}} \dots \int_ {h _ {K}} \left(\ell_ {c} (\mu) + \sum_ {k = 1} ^ {K} \left[ \frac {\partial \ell_ {c}}{\partial h _ {k}} (\mu) \right] (h _ {k} - \mu_ {k})\right) \left(\prod_ {k = 1} ^ {K} \mathbf {Pr} [ h _ {k} | x ]\right) d h _ {1} \dots d h _ {K} (58) \\ = \ell_ {c} (\mu) + \sum_ {k = 1} ^ {K} \left[ \frac {\partial \ell_ {c}}{\partial h _ {k}} (\mu) \right] \left(\int_ {h _ {k}} \left(h _ {k} - \mu_ {k}\right) \mathbf {Pr} [ h _ {k} | x ] d h _ {k}\right) = \ell_ {c} (\mu) (59) \\ \end{array}
$$

where the second term is zero since $\int_{h_k} (h_k - \mu_k)\mathbf{Pr}[h_k|x] dh_k = 0$ by the definition of the $\mu_{k}$ 's. Therefore, the first-order approximation results exactly in a (deterministic) softmax function of the mean vector $\mu$ .
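A quick Monte Carlo check of this first-order result, with synthetic means and (unequal) variances: the gap between the exact predictive probability and the deterministic softmax of the means shrinks as the variances shrink.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic logit means; the true class index c is arbitrary.
s, c = 1.0, 0
mu = np.array([1.0, 0.2, -0.5])

def softmax(h):
    e = np.exp(h / s)
    return e / e.sum(axis=-1, keepdims=True)

first_order = softmax(mu)[c]               # the deterministic approximation

# Monte Carlo ground truth at two variance levels.
S = 400_000
errs = []
for scale in (0.1, 0.01):
    nu = scale * np.array([1.0, 2.0, 3.0])     # unequal variances
    h = mu + np.sqrt(nu) * rng.normal(size=(S, 3))
    errs.append(abs(softmax(h)[:, c].mean() - first_order))

# The residual is small and shrinks with the variances (second-order effect).
assert errs[0] < 0.05 and errs[1] < errs[0]
```

The leftover error is exactly what the second-order terms derived next are designed to capture.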
In order to incorporate the variance into the approximation, we need the exact forms of the derivatives of $\ell_c(h)$ . Specifically, the first-order derivatives are obtained from the definition of $\ell_c(h)$ :

$$
\frac {\partial \ell_ {c}}{\partial h _ {c}} (h) = \frac {1}{s} \cdot \frac {\exp \left(h _ {c} / s\right) \sum_ {j = 1} ^ {K} \exp \left(h _ {j} / s\right) - \exp \left(2 h _ {c} / s\right)}{\left(\sum_ {j = 1} ^ {K} \exp \left(h _ {j} / s\right)\right) ^ {2}} = \frac {1}{s} \left(\ell_ {c} (h) - \ell_ {c} ^ {2} (h)\right) \tag {60a}
$$

$$
\frac {\partial \ell_ {c}}{\partial h _ {k}} (h) = - \frac {1}{s} \cdot \frac {\exp \left(h _ {c} / s\right) \cdot \exp \left(h _ {k} / s\right)}{\left(\sum_ {j = 1} ^ {K} \exp \left(h _ {j} / s\right)\right) ^ {2}} = - \frac {1}{s} \ell_ {c} (h) \ell_ {k} (h), \forall k \neq c \tag {60b}
$$

and subsequently the second-order derivatives from the first ones:

$$
\frac {\partial^ {2} \ell_ {c}}{\partial h _ {c} ^ {2}} (h) = \frac {1}{s} \left(\frac {\partial \ell_ {c}}{\partial h _ {c}} (h) - 2 \ell_ {c} (h) \frac {\partial \ell_ {c}}{\partial h _ {c}} (h)\right) = \frac {1}{s ^ {2}} \left(\ell_ {c} (h) - 3 \ell_ {c} ^ {2} (h) + 2 \ell_ {c} ^ {3} (h)\right) \tag {61a}
$$

$$
\frac {\partial^ {2} \ell_ {c}}{\partial h _ {k} ^ {2}} (h) = - \frac {1}{s} \left(\frac {\partial \ell_ {c}}{\partial h _ {k}} (h) \ell_ {k} (h) + \ell_ {c} (h) \frac {\partial \ell_ {k}}{\partial h _ {k}} (h)\right) = \frac {1}{s ^ {2}} \left(2 \ell_ {c} (h) \ell_ {k} ^ {2} (h) - \ell_ {c} (h) \ell_ {k} (h)\right), \forall k \neq c \tag {61b}
$$

With these derivatives, we can compute the second-order approximation as

$$
\begin{array}{l} \mathbf {Pr} [ \boldsymbol {y} = c | x ] \approx \int_ {h _ {1}, \dots , h _ {K}} \left(\ell_ {c} (\mu) + \frac {1}{2} \sum_ {i = 1} ^ {K} \sum_ {j = 1} ^ {K} \frac {\partial^ {2} \ell_ {c}}{\partial h _ {i} \partial h _ {j}} (\mu) (h _ {i} - \mu_ {i}) (h _ {j} - \mu_ {j})\right) \left(\prod_ {k = 1} ^ {K} \mathbf {Pr} [ h _ {k} | x ]\right) d h _ {1} \dots d h _ {K} (62) \\ = \ell_ {c} (\mu) + \frac {1}{2} \frac {\partial^ {2} \ell_ {c}}{\partial h _ {c} ^ {2}} (\mu) \int_ {h _ {c}} \left(h _ {c} - \mu_ {c}\right) ^ {2} \mathbf {Pr} [ h _ {c} | x ] d h _ {c} + \frac {1}{2} \sum_ {k \neq c} \frac {\partial^ {2} \ell_ {c}}{\partial h _ {k} ^ {2}} (\mu) \int_ {h _ {k}} \left(h _ {k} - \mu_ {k}\right) ^ {2} \mathbf {Pr} [ h _ {k} | x ] d h _ {k} (63) \\ = \ell_ {c} (\mu) + \frac {1}{2 s ^ {2}} \left(\ell_ {c} (\mu) - 3 \ell_ {c} ^ {2} (\mu) + 2 \ell_ {c} ^ {3} (\mu)\right) \nu_ {c} + \frac {1}{2 s ^ {2}} \sum_ {k \neq c} \left(2 \ell_ {c} (\mu) \ell_ {k} ^ {2} (\mu) - \ell_ {c} (\mu) \ell_ {k} (\mu)\right) \nu_ {k} (64) \\ = \ell_ {c} (\mu) + \frac {1}{2 s ^ {2}} \, \ell_ {c} (\mu) \left(\left(1 - 2 \ell_ {c} (\mu)\right) \nu_ {c} - \sum_ {k = 1} ^ {K} \ell_ {k} (\mu) \left(1 - 2 \ell_ {k} (\mu)\right) \nu_ {k}\right) (65) \\ \end{array}
$$

where the cross terms $i \neq j$ in Equation (62) vanish because the $h_i$ 's are independent with means $\mu_i$ . The equation above can be further written in vector form as:

$$
\mathbf {Pr} [ \boldsymbol {y} | x ] \approx \ell (\mu) + \frac {1}{2 s ^ {2}} \left(\left(\ell (\mu) - 2 \ell (\mu) ^ {\circ 2}\right) \circ \nu - \left(\left(\ell (\mu) - 2 \ell (\mu) ^ {\circ 2}\right) ^ {\top} \nu\right) \ell (\mu)\right) \tag {66}
$$

# C.3 GAUSSIAN OUTPUT LAYER FOR REGRESSION PROBLEM

In this part, we develop an alternative evidence lower bound (ELBO) for Bayesian neural networks with Gaussian output layers, and derive the corresponding gradients for backpropagation. Although the predictive distribution of the output does not admit an analytic form, we show that its central moments can be easily computed given the learned parameters.

Theorem C.2 (Analytic Form of $\overline{\mathcal{L}}_n(\phi)$ for Regression).
Let $h\in \mathbb{R}^I$ be the output of the last hidden layer (with $I$ the number of hidden units), and $\phi = (w,s)\in \mathbb{R}^{I}\times \mathbb{R}^{+}$ be the parameters that define the predictive distribution over the output $\pmb{y}$ as

$$
\mathbf {Pr} [ \boldsymbol {y} | \boldsymbol {h}; w, s ] = \frac {1}{\sqrt {2 \pi s}} \exp \left(- \frac {\left(\boldsymbol {y} - w ^ {\top} \boldsymbol {h}\right) ^ {2}}{2 s}\right) \tag {67}
$$

Suppose the hidden units $\{\pmb{h}_i\}_{i=1}^I$ are pairwise independent (which holds under the mean-field approximation), and each $\pmb{h}_i$ has mean $\mu_i$ and variance $\nu_i$ ; then $\overline{\mathcal{L}}_n(\phi)$ takes an analytic form:

$$
\overline {{\mathcal {L}}} _ {n} (\phi) = - \frac {(y - w ^ {\top} \mu) ^ {2} + (w ^ {\circ 2}) ^ {\top} \nu}{2 s} - \frac {\log (2 \pi s)}{2} \tag {68}
$$

where $(w^{\circ 2})_i = w_i^2$ , and $\mu = [\mu_1,\dots ,\mu_I]^\top \in \mathbb{R}^I$ and $\nu = [\nu_{1},\dots ,\nu_{I}]^{\top}\in \mathbb{R}^{I}$ are the vectors of means and variances of the hidden units $\pmb{h}$ .

Proof. Equation (68) is obtained by plugging $\mathbf{Pr}[\pmb{y}|\pmb{h};w,s]$ into Equation (6).
+ +$$ +\begin{array}{l} \overline {{\mathcal {L}}} _ {n} (\phi) = \sum_ {h _ {1}} \dots \sum_ {h _ {I}} \log \left(\mathbf {P r} [ y | h _ {1}, \dots , h _ {I}; w, s ]\right) \binom {I} {\prod_ {i = 1}} \mathbf {P r} [ h _ {i} | x _ {n} ] (69) \\ = - \sum_ {h _ {1}} \dots \sum_ {h _ {I}} \left(\frac {\left(y - \sum_ {i = 1} ^ {I} w _ {i} h _ {i}\right) ^ {2}}{2 s} + \frac {\log (2 \pi s)}{2}\right) \left(\prod_ {i = 1} ^ {I} \Pr [ h _ {i} | x _ {n} ]\right) (70) \\ = - \frac {1}{2 s} \sum_ {h _ {1}} \dots \sum_ {h _ {I}} \left(y - \sum_ {i = 1} ^ {I} w _ {i} h _ {i}\right) ^ {2} \left(\prod_ {i = 1} ^ {I} \Pr [ h _ {i} | x _ {n} ]\right) - \frac {\log (2 \pi s)}{2} (71) \\ \end{array} +$$ + +where the long summation in the first term can be further simplified with notations of $\mu$ and $\nu$ : + +$$ +\begin{array}{l} \sum_ {h _ {1}} \dots \sum_ {h _ {I}} \left(y - \sum_ {i = 1} ^ {I} w _ {i} h _ {i}\right) ^ {2} \left(\prod_ {i = 1} ^ {I} \Pr [ h _ {i} | x _ {n} ]\right) (72) \\ = \sum_ {h _ {1}} \dots \sum_ {h _ {I}} \left(y ^ {2} - 2 y \sum_ {i = 1} ^ {I} w _ {i} h _ {i} + \sum_ {i = 1} ^ {I} w _ {i} ^ {2} h _ {i} ^ {2} + \sum_ {j = 1} ^ {I} \sum_ {k \neq j} w _ {j} w _ {k} h _ {j} h _ {k}\right) \left(\prod_ {i = 1} ^ {I} \Pr [ h _ {i} | x _ {n} ]\right) (73) \\ = y ^ {2} - 2 y \sum_ {i = 1} ^ {I} w _ {i} \left(\sum_ {h _ {i}} h _ {i} \mathbf {P r} [ h _ {i} | x ]\right) + \sum_ {i = 1} ^ {I} w _ {i} ^ {2} \left(\sum_ {h _ {i}} h _ {i} ^ {2} \mathbf {P r} [ h _ {i} | x _ {n} ]\right) (74) \\ + \sum_ {j = 1} ^ {I} \sum_ {k \neq j} w _ {j} w _ {k} \left(\sum_ {h _ {j}} h _ {j} \mathbf {P r} [ h _ {j} | x _ {n} ]\right) \left(\sum_ {h _ {k}} h _ {k} \mathbf {P r} [ h _ {k} | x _ {n} ]\right) \\ = y ^ {2} - 2 y \sum_ {i = 1} ^ {I} w _ {i} \mu_ {i} + \sum_ {i = 1} ^ {I} w _ {i} ^ {2} \left(\mu_ {i} ^ {2} + \nu_ {i}\right) + \sum_ {j = 1} ^ {I} \sum_ {k \neq j} w _ {j} w _ {k} \mu_ {j} \mu_ {k} (75) \\ = y ^ {2} - 2 y \sum_ {i = 1} ^ {I} w _ {i} \mu_ {i} + \sum_ {i = 
1} ^ {I} w _ {i} ^ {2} \nu_ {i} + \left(\sum_ {j = 1} ^ {I} w _ {j} \mu_ {j}\right) \left(\sum_ {k = 1} ^ {I} w _ {k} \mu_ {k}\right) (76) \\ = y ^ {2} - 2 y w ^ {\top} \mu + \left(w ^ {\circ 2}\right) ^ {\top} \nu + \left(w ^ {\top} \mu\right) ^ {2} (77) \\ = (y - w ^ {\top} \mu) ^ {2} + (w ^ {\circ 2}) ^ {\top} \nu (78) \\ \end{array} +$$ + +where $w^{\circ 2}$ denotes element-wise square, i.e. $w^{\circ 2} = [w_1^2,\dots ,w_I^2 ]^\top$ + +Derivatives of $\overline{\mathcal{L}}_n(\phi)$ in Equation (68) It is not difficult to show that the gradient of $\overline{\mathcal{L}}_n(\phi)$ can be backpropagated through the last layer. by computing derivatives of $\overline{\mathcal{L}}_n(\phi)$ w.r.t. $\mu$ and $\nu$ : + +$$ +\frac {\partial \overline {{\mathcal {L}}} _ {n} (\phi)}{\partial \mu} = - \frac {(y - w ^ {\top} \mu) w}{s} \tag {79a} +$$ + +$$ +\frac {\partial \overline {{\mathcal {L}}} _ {n} (\phi)}{\partial \nu} = - \frac {w ^ {\circ 2}}{2 s} \tag {79b} +$$ + +Furthermore, the parameters $\{w,s\}$ can be updated along with other parameters with their gradients: + +$$ +\frac {\partial \overline {{\mathcal {L}}} _ {n} (\phi)}{\partial w} = - \frac {(y - w ^ {\top} \mu) \mu + (w \circ \nu)}{s} \tag {80a} +$$ + +$$ +\frac {\partial \overline {{\mathcal {L}}} _ {n} (\phi)}{\partial s} = - \frac {1}{2 s} + \frac {(y - w ^ {\top} \mu) ^ {2} + (w ^ {\circ 2}) ^ {\top} \nu}{2 s ^ {2}} \tag {80b} +$$ + +Prediction with Gaussian Layer Once we determine the parameters for the last layer, in principle we can compute the predictive distribution $\mathbf{Pr}[y|x]$ for the output $y$ given the input $x$ according to + +$$ +\begin{array}{l} \mathbf {P r} [ \boldsymbol {y} | x ] = \sum_ {h} \mathbf {P r} [ \boldsymbol {y} | h; w, s ] \mathbf {P r} [ h | \boldsymbol {x} ] = \sum_ {h _ {1}} \dots \sum_ {h _ {I}} \mathbf {P r} [ \boldsymbol {y} | h; w, s ] \left(\prod_ {i = 1} ^ {I} \mathbf {P r} [ h _ {i} | x ]\right) \\ = \sum_ {h _ {1}} \dots \sum_ {h _ {I}} \frac {1}{\sqrt 
{2 \pi s}} \exp \left(- \frac {\left(\boldsymbol {y} - \sum_ {i = 1} ^ {I} w _ {i} h _ {i}\right) ^ {2}}{2 s}\right) \left(\prod_ {i = 1} ^ {I} \mathbf {P r} [ h _ {i} | x ]\right) \tag {81} \\ \end{array} +$$ + +Unfortunately, exact computation of the equation above for arbitrary output value $y$ is intractable in general. However, the central moments of the predictive distribution $\mathbf{Pr}[y|x]$ are easily evaluated. Consider we interpret the prediction as $\pmb{y} = w^{\top}\pmb{h} + \pmb{\epsilon}$ , where $\epsilon \sim \mathcal{N}(0,s)$ , its mean and variance can be easily computed as + +$$ +\mathbb {E} [ \boldsymbol {y} | x ] = w ^ {\top} \mathbb {E} [ \boldsymbol {h} ] = w ^ {\top} \mu \tag {82a} +$$ + +$$ +\mathbb {V} [ \boldsymbol {y} | x ] = (w ^ {\circ 2}) ^ {\top} \mathbb {V} [ \boldsymbol {h} ] + \mathbb {V} [ \boldsymbol {\epsilon} ] = (w ^ {\circ 2}) ^ {\top} \nu + s \tag {82b} +$$ + +Furthermore, if we denote the (normalized) skewness and kurtosis of $\pmb{h}_i$ as $\gamma_i$ and $\kappa_i$ : + +$$ +\gamma_ {i} = \mathbb {E} \left[ \left(\boldsymbol {h} _ {i} - \mu_ {i}\right) ^ {3} | x \right] / \nu_ {i} ^ {3 / 2} = \sum_ {h _ {i}} \left(h _ {i} - \mu_ {i}\right) ^ {3} \Pr [ h _ {i} | x ] / \nu_ {i} ^ {3 / 2} \tag {83a} +$$ + +$$ +\kappa_ {i} = \mathbb {E} \left[ \left(\boldsymbol {h} _ {i} - \mu_ {i}\right) ^ {4} | x \right] / \nu_ {i} ^ {2} = \sum_ {h _ {i}} \left(h _ {i} - \mu_ {i}\right) ^ {4} \Pr \left[ h _ {i} | x \right] / \nu_ {i} ^ {2} \tag {83b} +$$ + +Then the (normalized) skewness and kurtosis of the prediction $\pmb{y}$ are also easily computed with the vectors of $\gamma = [\gamma_{1},\dots ,\gamma_{I}]^{\top}\in \mathbb{R}^{I}$ and $\kappa = [\kappa_{1},\dots ,\kappa_{I}]\in \mathbb{R}^{I}$ . 

$$
\gamma [ \boldsymbol {y} | x ] = \frac {\mathbb {E} \left[ (\boldsymbol {y} - w ^ {\top} \mu) ^ {3} | x \right]}{\mathbb {V} [ \boldsymbol {y} | x ] ^ {3 / 2}} = \frac {(w ^ {\circ 3}) ^ {\top} (\gamma \circ \nu^ {\circ 3 / 2})}{[ (w ^ {\circ 2}) ^ {\top} \nu + s ] ^ {3 / 2}} \tag {84a}
$$

$$
\kappa [ \boldsymbol {y} | x ] = \frac {\mathbb {E} \left[ (\boldsymbol {y} - w ^ {\top} \mu) ^ {4} | x \right]}{\mathbb {V} [ \boldsymbol {y} | x ] ^ {2}} = \frac {(w ^ {\circ 4}) ^ {\top} (\kappa \circ \nu^ {\circ 2}) + s (w ^ {\circ 2}) ^ {\top} \nu}{[ (w ^ {\circ 2}) ^ {\top} \nu + s ] ^ {2}} \tag {84b}
$$

# D PROBABILISTIC PROPAGATION IN BAYESIAN QUANTIZED NETWORKS

In this section, we present fast(er) algorithms for sampling-free probabilistic propagation (i.e. evaluating Equation (8)). Following Section 4, we divide this section into three parts, each covering a specific range of fan-in numbers $E$.

# D.1 SMALL FAN-IN LAYERS: DIRECT TENSOR CONTRACTION

If $E$ is small, tensor contraction in Equation (8) is immediately applicable. Representative layers with small $E$ are shortcut layers (a.k.a. skip connections) and what we call depth-wise layers.

Shortcut Layer With a skip connection, the output $\pmb{h}^{(l + 1)}$ is an addition of two previous layers $\pmb{h}^{(l)}$ and $\pmb{h}^{(m)}$. Therefore, the distribution of $\pmb{h}^{(l + 1)}$ can be directly computed as

$$
P \left(\boldsymbol {h} _ {i} ^ {(l + 1)}; \psi_ {i} ^ {(l + 1)}\right) = \sum_ {h _ {i} ^ {(l)}, h _ {i} ^ {(m)}} \delta \left[ \boldsymbol {h} _ {i} ^ {(l + 1)} = h _ {i} ^ {(l)} + h _ {i} ^ {(m)} \right] P \left(h _ {i} ^ {(l)}; \psi_ {i} ^ {(l)}\right) P \left(h _ {i} ^ {(m)}; \psi_ {i} ^ {(m)}\right) \tag {85}
$$

Depth-wise Layers In a depth-wise layer, each output unit $h_i^{(l + 1)}$ is a transformation (parameterized by $\theta_i^{(l)}$ ) of its corresponding input $h_i^{(l)}$ , i.e.

$$
P \left(\boldsymbol {h} _ {i} ^ {(l + 1)}; \psi_ {i} ^ {(l + 1)}\right) = \sum_ {h _ {i} ^ {(l)}, \theta_ {i} ^ {(l)}} \Pr \left[ \boldsymbol {h} _ {i} ^ {(l + 1)} \mid h _ {i} ^ {(l)}, \theta_ {i} ^ {(l)} \right] Q \left(\theta_ {i} ^ {(l)}; \phi_ {i} ^ {(l)}\right) P \left(h _ {i} ^ {(l)}; \psi_ {i} ^ {(l)}\right) \tag {86}
$$

Depth-wise layers include dropout layers (where $\theta^{(l)}$ are dropout rates), nonlinear layers (where $\theta^{(l)}$ are threshold values) and element-wise product layers (where $\theta^{(l)}$ are the weights). For both shortcut and depth-wise layers, the time complexity is $O(JD^2)$ since $E \leq 2$.

# D.2 MEDIUM FAN-IN LAYERS: DISCRETE FOURIER TRANSFORM

In neural networks, representative layers with a medium fan-in number $E$ are pooling layers, where each output unit depends on a medium number of input units. Typically, the special structure of pooling layers allows for faster algorithms than computing Equation (8) directly.

Max and Probabilistic Pooling For each output, (1) a max pooling layer picks the maximum value from the corresponding inputs, i.e. $h_j^{(l+1)} = \max_{i \in \mathcal{I}(j)} h_i^{(l)}$ , while (2) a probabilistic pooling layer selects a value from the inputs following a categorical distribution, i.e. $\operatorname{Pr}[h_j^{(l+1)} = h_i^{(l)}] = \theta_i$ . For both cases, the predictive distribution of $h_j^{(l+1)}$ can be computed as

$$
\text {Max:} \quad P \left(\boldsymbol {h} _ {j} ^ {(l + 1)} \leq q\right) = \prod_ {i \in \mathcal {I} (j)} P \left(\boldsymbol {h} _ {i} ^ {(l)} \leq q\right) \tag {87}
$$

$$
\text {Prob:} \quad P \left(\boldsymbol {h} _ {j} ^ {(l + 1)} = q\right) = \sum_ {i \in \mathcal {I} (j)} \theta_ {i} P \left(\boldsymbol {h} _ {i} ^ {(l)} = q\right) \tag {88}
$$

where $P(\pmb{h}_i^{(l)} \leq q)$ is the cumulative mass function of $P$ . The complexity for both layers is $O(ID)$ .
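As a concrete illustration of Equations (87) and (88), the following is a small numerical sketch (not the paper's code; the value grid, the two example distributions, and the mixture weights are assumptions) of propagating categorical distributions through max and probabilistic pooling with NumPy:

```python
import numpy as np

# Each row of P is the distribution of one pooled input h_i over D quantized
# values, assumed sorted in increasing order.

def max_pool_dist(P):
    """P: (E, D) probabilities. Returns the (D,) distribution of max_i h_i."""
    cdf = np.cumsum(P, axis=1)        # P[h_i <= q_d] for each input i
    cdf_max = np.prod(cdf, axis=0)    # Eq. (87): product over pooled inputs
    # Recover the pmf by differencing the cumulative mass function.
    return np.diff(np.concatenate(([0.0], cdf_max)))

def prob_pool_dist(P, theta):
    """Eq. (88): mixture of the input distributions with weights theta."""
    return theta @ P

P = np.array([[0.2, 0.5, 0.3],
              [0.6, 0.3, 0.1]])       # two inputs over D = 3 values
p_max = max_pool_dist(P)
p_prob = prob_pool_dist(P, np.array([0.5, 0.5]))
assert np.isclose(p_max.sum(), 1.0) and np.isclose(p_prob.sum(), 1.0)
```

Both outputs are again categorical distributions over the same value grid, so probabilistic propagation can continue to the next layer.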
+ +Average Pooling and Depth-wise Convolutional Layer Both layers require additions of a medium number of inputs. We prove a convolution theorem for discrete random variables and show that discrete Fourier transform (DFT) (with fast Fourier transform (FFT)) can accelerate the additive computation. We also derive its backpropagation rule for compatibility of gradient-based learning. + +Theorem D.1 (Fast summation via discrete Fourier transform). Suppose $\mathbf{u}_i$ take values in $\{b_i, b_i + 1, \dots, B_i\}$ between integers $b_i$ and $B_i$ , then the summation $\mathbf{v} = \sum_{i=1}^{E} \mathbf{u}_i$ takes values between $b$ and $B$ , where $b = \sum_{i=1}^{E} b_i$ and $B = \sum_{i=1}^{E} B_i$ . Let $C^{\mathbf{v}}$ , $C^{\mathbf{u}_i}$ be the discrete Fourier transforms of $P^{\mathbf{v}}$ , $P^{\mathbf{u}_i}$ respectively, i.e. + +$$ +C ^ {\boldsymbol {v}} (f) = \sum_ {v = b} ^ {B} P ^ {\boldsymbol {v}} (v) \exp (- \mathrm {j} 2 \pi (v - b) f / (B - b + 1)) \tag {89a} +$$ + +$$ +C ^ {\boldsymbol {u} _ {i}} (f) = \sum_ {u _ {i} = b _ {i}} ^ {B _ {i}} P ^ {\boldsymbol {u} _ {i}} \left(u _ {i}\right) \exp \left(- \mathrm {j} 2 \pi \left(u _ {i} - b _ {i}\right) f / \left(B _ {i} - b _ {i} + 1\right)\right) \tag {89b} +$$ + +Then $C^{\pmb{v}}(f)$ is the element-wise product of all Fourier transforms $C^{u_i}(f)$ , i.e. + +$$ +C ^ {\boldsymbol {v}} (f) = \prod_ {i = 1} ^ {E} C ^ {\boldsymbol {u} _ {i}} (f), \forall f \tag {90} +$$ + +Proof. We only prove the theorem for two discrete random variables, and the extension to multiple variables can be proved using induction. Now consider $\mathbf{u}_1 \in [b_1, B_1]$ , $\mathbf{u}_2 \in [b_2, B_2]$ and their sum $\mathbf{v} = \mathbf{u}_1 + \mathbf{u}_2 \in [b, B]$ , where $b = b_1 + b_2$ and $B = B_1 + B_2$ . 
Denote the probability vectors of $\mathbf{u}_1, \mathbf{u}_2$ and $\mathbf{v}$ as $P_1 \in \triangle^{B_1 - b_1}, P_2 \in \triangle^{B_2 - b_2}$ and $P \in \triangle^{B - b}$ respectively, then the entries in $P$ are computed with $P_1$ and $P_2$ by standard convolution as follows: + +$$ +P (v) = \sum_ {u _ {1} = b _ {1}} ^ {B _ {1}} P _ {1} \left(u _ {1}\right) P _ {2} \left(v - u _ {1}\right) = \sum_ {u _ {2} = b _ {2}} ^ {B _ {2}} P _ {1} \left(v - u _ {2}\right) P _ {2} \left(u _ {2}\right), \forall v \in \{b, \dots , B \} \tag {91} +$$ + +The relation above is usually denoted as $P = P_{1} * P_{2}$ , where $*$ is the symbol for convolution. Now define the characteristic functions $C, C_{1}$ , and $C_{2}$ as the discrete Fourier transform (DFT) of the probability vectors $P, P_{1}$ and $P_{2}$ respectively: + +$$ +C (f) = \sum_ {v = b} ^ {B} P (v) \exp \left(- \mathrm {j} \frac {2 \pi}{R} (v - b) f\right), f \in [ R ] \tag {92a} +$$ + +$$ +C _ {i} (f) = \sum_ {u _ {i} = b _ {i}} ^ {B _ {i}} P _ {i} \left(u _ {i}\right) \exp \left(- \mathrm {j} \frac {2 \pi}{R} \left(u _ {i} - b _ {i}\right) f\right), f \in [ R ] \tag {92b} +$$ + +where $R$ controls the resolution of the Fourier transform (typically chosen as $R = B - b + 1$ , i.e. the range of possible values). In this case, the characteristic functions are complex vectors of same length $R$ , i.e. $C, C_1, C_2 \in \mathbb{C}^R$ , and we denote the (functional) mappings as $C = \mathcal{F}(P)$ and $C_i = \mathcal{F}_i(P_i)$ . 
Given a characteristic function, its original probability vector can be recovered by the inverse discrete Fourier transform (IDFT):

$$
P (v) = \frac {1}{R} \sum_ {f = 0} ^ {R - 1} C (f) \exp \left(\mathrm {j} \frac {2 \pi}{R} (v - b) f\right), \forall v \in \{b, \dots , B \} \tag {93a}
$$

$$
P _ {i} \left(u _ {i}\right) = \frac {1}{R} \sum_ {f = 0} ^ {R - 1} C _ {i} (f) \exp \left(\mathrm {j} \frac {2 \pi}{R} \left(u _ {i} - b _ {i}\right) f\right), \forall u _ {i} \in \left\{b _ {i}, \dots , B _ {i} \right\} \tag {93b}
$$

and we denote the inverse mappings as $P = \mathcal{F}^{-1}(C)$ and $P_{i} = \mathcal{F}_{i}^{-1}(C_{i})$ . Now we plug the convolution in Equation (91) into the characteristic function $C(f)$ in (92a) and rearrange accordingly:

$$
C (f) = \sum_ {v = b} ^ {B} \left(\sum_ {u _ {1} = b _ {1}} ^ {B _ {1}} P _ {1} \left(u _ {1}\right) P _ {2} \left(v - u _ {1}\right)\right) \exp \left(- \mathrm {j} \frac {2 \pi}{R} (v - b) f\right) \tag {94}
$$

$$
\left(\text {Let } u _ {2} = v - u _ {1}\right) = \sum_ {u _ {1} = b _ {1}} ^ {B _ {1}} \sum_ {u _ {2} = b _ {2}} ^ {B _ {2}} P _ {1} \left(u _ {1}\right) P _ {2} \left(u _ {2}\right) \exp \left(- \mathrm {j} \frac {2 \pi}{R} \left(u _ {1} + u _ {2} - b\right) f\right) \tag {95}
$$

$$
\begin{array}{l} \left(\text {Since } b = b _ {1} + b _ {2}\right) = \left[ \sum_ {u _ {1} = b _ {1}} ^ {B _ {1}} P _ {1} \left(u _ {1}\right) \exp \left(- \mathrm {j} \frac {2 \pi}{R} \left(u _ {1} - b _ {1}\right) f\right) \right] \left[ \sum_ {u _ {2} = b _ {2}} ^ {B _ {2}} P _ {2} \left(u _ {2}\right) \exp \left(- \mathrm {j} \frac {2 \pi}{R} \left(u _ {2} - b _ {2}\right) f\right) \right] (96) \\ = C _ {1} (f) \cdot C _ {2} (f) (97) \\ \end{array}
$$

The equation above can therefore be written as $C = C_1 \circ C_2$ , where we use $\circ$ to denote the element-wise product.
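As a quick numerical sanity check of the convolution theorem derived above (a sketch with made-up distributions, not part of the paper), the direct convolution of Equation (91) matches the inverse DFT of the element-wise product of characteristic functions:

```python
import numpy as np

# u1 takes values {0, 1, 2}, u2 takes values {0, 1}; v = u1 + u2 in {0,...,3}.
P1 = np.array([0.2, 0.3, 0.5])
P2 = np.array([0.4, 0.6])

P_direct = np.convolve(P1, P2)        # Eq. (91): standard convolution

R = len(P1) + len(P2) - 1             # R = B - b + 1 = 4
C1 = np.fft.fft(P1, n=R)              # zero-padded DFTs of common length R
C2 = np.fft.fft(P2, n=R)
P_fft = np.real(np.fft.ifft(C1 * C2)) # C = C1 ∘ C2, then invert and take Re

assert np.allclose(P_direct, P_fft)
```

Zero-padding both DFTs to length $R$ makes the circular convolution computed by the DFT coincide with the linear convolution of Equation (91).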
Thus, we have shown that the summation of discrete random variables corresponds to the element-wise product of their characteristic functions.

With the theorem, the addition of $E$ discrete random variables can be computed efficiently as follows:

$$
P ^ {\boldsymbol {v}} = P ^ {\boldsymbol {u} _ {1}} * P ^ {\boldsymbol {u} _ {2}} * \dots * P ^ {\boldsymbol {u} _ {E}} \tag {98}
$$

$$
= \mathcal {F} ^ {- 1} \left(\mathcal {F} \left(P ^ {\boldsymbol {u} _ {1}}\right) \circ \mathcal {F} \left(P ^ {\boldsymbol {u} _ {2}}\right) \circ \dots \circ \mathcal {F} \left(P ^ {\boldsymbol {u} _ {E}}\right)\right) \tag {99}
$$

where $\mathcal{F}$ denotes the Fourier transforms in Equations (93a) and (93b). If FFT is used in computing all DFTs, the computational complexity of Equation (99) is $O(ER\log R) = O(E^2D\log (ED))$ (since $R = O(ED)$ ), compared to $O(D^{E})$ with direct tensor contraction.

Backpropagation When the fast Fourier transform is used to accelerate additions in a Bayesian quantized network, we need to derive the corresponding backpropagation rule, i.e. equations that relate $\partial \mathcal{L} / \partial P$ to $\{\partial \mathcal{L} / \partial P_i\}_{i = 1}^I$ . For this purpose, we break the computation in Equation (99) into three steps, and compute the derivative for each of these steps.

$$
C _ {i} = \mathcal {F} _ {i} \left(P _ {i}\right) \Longrightarrow \frac {\partial \mathcal {L}}{\partial P _ {i}} = R \cdot \mathcal {F} _ {i} ^ {- 1} \left(\frac {\partial \mathcal {L}}{\partial C _ {i}}\right) \tag {100a}
$$

$$
C = C _ {1} \circ \dots \circ C _ {I} \Longrightarrow \frac {\partial \mathcal {L}}{\partial C _ {i}} = \frac {C}{C _ {i}} \circ \frac {\partial \mathcal {L}}{\partial C} \tag {100b}
$$

$$
P = \mathcal {F} ^ {- 1} (C) \Longrightarrow \frac {\partial \mathcal {L}}{\partial C} = R ^ {- 1} \cdot \mathcal {F} \left(\frac {\partial \mathcal {L}}{\partial P}\right) \tag {100c}
$$

where in (100b) we use $C / C_i$ to denote element-wise division. Since $P_i$ lies in the real domain, we need to project the gradients back to real numbers. Putting all steps together:

$$
\frac {\partial \mathcal {L}}{\partial P _ {i}} = \Re \left\{\mathcal {F} _ {i} ^ {- 1} \left(\frac {C}{C _ {i}} \circ \mathcal {F} \left(\frac {\partial \mathcal {L}}{\partial P}\right)\right) \right\}, \forall i \in [ I ] \tag {101}
$$

# D.3 LARGE FAN-IN LAYERS: LYAPUNOV CENTRAL LIMIT APPROXIMATION

In this part, we show that the Lyapunov central limit approximation (Lyapunov CLT) accelerates probabilistic propagation in linear layers. For simplicity, we consider fully-connected layers in the derivations, but the results can be easily extended to convolutional layers. We conclude this part by deriving the corresponding backpropagation rules for the algorithm.

Linear Layers Linear layers (followed by a nonlinear transformation $\sigma(\cdot)$ ) are the most important building blocks in neural networks, and include fully-connected and convolutional layers.
A linear layer is parameterized by a set of vectors $\pmb{\theta}^{(l)}$ 's, and maps $\pmb{h}^{(l)} \in \mathbb{R}^I$ to $\pmb{h}^{(l+1)} \in \mathbb{R}^J$ as

$$
\boldsymbol {h} _ {j} ^ {(l + 1)} = \sigma \left(\sum_ {i \in \mathcal {I} (j)} \boldsymbol {\theta} _ {j i} ^ {(l)} \cdot \boldsymbol {h} _ {i} ^ {(l)}\right) = \sigma \left(\sum_ {i \in \mathcal {I} (j)} \boldsymbol {u} _ {j i} ^ {(l)}\right) = \sigma \left(\boldsymbol {v} _ {j} ^ {(l + 1)}\right) \tag {102}
$$

where $\pmb{u}_{ji}^{(l)} = \pmb{\theta}_{ji}^{(l)}\cdot \pmb{h}_{i}^{(l)}$ and $\pmb{v}_j^{(l + 1)} = \sum_{i\in \mathcal{I}(j)}\pmb{u}_{ji}^{(l)}$ . The key difficulty here is to compute the distribution of $\pmb{v}_j^{(l + 1)}$ from those of $\{\pmb{u}_{ji}^{(l)}\}_{i = 1}^I$ , i.e. the addition of a large number of random variables.

Theorem D.2 (Fast summation via Lyapunov Central Limit Theorem). Let $v_{j} = \sigma(\tilde{v}_{j}) = \sigma(\sum_{i \in \mathcal{I}(j)} \theta_{ji} u_{i})$ be an activation of a linear layer followed by a nonlinearity $\sigma$ . Suppose both the inputs $\{\pmb{u}_{i}\}_{i \in \mathcal{I}(j)}$ and the parameters $\{\pmb{\theta}_{ji}\}_{i \in \mathcal{I}(j)}$ have bounded variance; then for sufficiently large $|\mathcal{I}(j)|$ , the distribution of $\tilde{v}_{j}$ converges to a Gaussian distribution $\mathcal{N}(\tilde{\mu}_j, \tilde{\nu}_j)$ with mean and variance

$$
\tilde {\mu} _ {j} = \sum_ {i = 1} ^ {I} m _ {j i} \mu_ {i} \tag {103a}
$$

$$
\tilde {\nu} _ {j} = \sum_ {i = 1} ^ {I} \left(m _ {j i} ^ {2} \nu_ {i} + v _ {j i} \mu_ {i} ^ {2} + v _ {j i} \nu_ {i}\right) \tag {103b}
$$

where $m_{ji} = \mathbb{E}[\theta_{ji}]$ , $v_{ji} = \mathbb{V}[\theta_{ji}]$ and $\mu_i = \mathbb{E}[u_i]$ , $\nu_i = \mathbb{V}[u_i]$ .
Moreover, if the nonlinear transform $\sigma$ is a sign function, each activation $\pmb{v}_j$ follows a Bernoulli distribution $P(\pmb{v}_j = 1) = \Phi(\tilde{\mu}_j / \sqrt{\tilde{\nu}_j})$ , where $\Phi$ is the cumulative distribution function of the standard Gaussian distribution $\mathcal{N}(0,1)$ .

Proof. The proof follows directly from the definitions of mean and variance:

$$
\begin{array}{l} \tilde {\mu} _ {j} = \mathbb {E} \left[ \sum_ {i = 1} ^ {I} \boldsymbol {\theta} _ {j i} \boldsymbol {h} _ {i} \right] = \sum_ {i = 1} ^ {I} \mathbb {E} \left[ \boldsymbol {\theta} _ {j i} \boldsymbol {h} _ {i} \right] (104) \\ = \sum_ {i = 1} ^ {I} \mathbb {E} \left[ \boldsymbol {\theta} _ {j i} \right] \mathbb {E} \left[ \boldsymbol {h} _ {i} \right] = \sum_ {i = 1} ^ {I} m _ {j i} \mu_ {i} (105) \\ \end{array}
$$

$$
\begin{array}{l} \tilde {\nu} _ {j} = \mathbb {V} \left[ \sum_ {i = 1} ^ {I} \boldsymbol {\theta} _ {j i} \boldsymbol {h} _ {i} \right] = \sum_ {i = 1} ^ {I} \mathbb {V} \left[ \boldsymbol {\theta} _ {j i} \boldsymbol {h} _ {i} \right] (106) \\ = \sum_ {i = 1} ^ {I} \left(\mathbb {E} \left[ \boldsymbol {\theta} _ {j i} ^ {2} \right] \mathbb {E} \left[ \boldsymbol {h} _ {i} ^ {2} \right] - \mathbb {E} \left[ \boldsymbol {\theta} _ {j i} \right] ^ {2} \mathbb {E} \left[ \boldsymbol {h} _ {i} \right] ^ {2}\right) (107) \\ = \sum_ {i = 1} ^ {I} \left[ \left(m _ {j i} ^ {2} + v _ {j i}\right) \left(\mu_ {i} ^ {2} + \nu_ {i}\right) - m _ {j i} ^ {2} \mu_ {i} ^ {2} \right] (108) \\ = \sum_ {i = 1} ^ {I} \left(m _ {j i} ^ {2} \nu_ {i} + v _ {j i} \mu_ {i} ^ {2} + v _ {j i} \nu_ {i}\right) (109) \\ \end{array}
$$

For fully-connected layers, these two equations can be written concisely in matrix form:

$$
\tilde {\mu} = M \mu \tag {110a}
$$

$$
\tilde {\nu} = \left(M ^ {\circ 2}\right) \nu + V \left(\mu^ {\circ 2} + \nu\right) \tag {110b}
$$

where $M^{\circ 2}$ and $\mu^{\circ 2}$ are the element-wise squares of $M$ and $\mu$ respectively.
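The matrix-form propagation of Equations (110a) and (110b), together with the Bernoulli activation probability for the sign nonlinearity, can be sketched as follows (a minimal NumPy illustration with randomly chosen moments; the dimensions and values are assumptions, not the paper's setup):

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)
I_dim, J_dim = 8, 4
M = rng.normal(size=(J_dim, I_dim))               # means  E[theta_ji]
V = rng.uniform(0.01, 0.1, size=(J_dim, I_dim))   # variances V[theta_ji]
mu = rng.normal(size=I_dim)                       # means  E[h_i]
nu = rng.uniform(0.01, 0.1, size=I_dim)           # variances V[h_i]

mu_t = M @ mu                                     # Eq. (110a)
nu_t = (M**2) @ nu + V @ (mu**2 + nu)             # Eq. (110b)

# Sign activation: Pr[v_j = 1 | x] = Phi(mu_t / sqrt(nu_t)), with Phi the
# standard Gaussian CDF (written via erf to stay in the standard library).
Phi = lambda z: 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))
p_active = Phi(mu_t / np.sqrt(nu_t))
assert np.all(nu_t > 0)
```

Each CLT step thus costs only two matrix-vector products per moment, which is what makes probabilistic propagation through large fan-in layers tractable.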

![](images/aee381545934eca809c1d8ac1957de3c9e32d442d39c041c725359d4bd124024.jpg)

Backpropagation With the matrix forms, the backpropagation rules that relate $\partial \mathcal{L} / \partial \tilde{\psi}^{(l + 1)} = \{\partial \mathcal{L} / \partial \tilde{\mu},\partial \mathcal{L} / \partial \tilde{\nu}\}$ to $\partial \mathcal{L} / \partial \phi^{(l)} = \{\partial \mathcal{L} / \partial M,\partial \mathcal{L} / \partial V\}$ and $\partial \mathcal{L} / \partial \psi^{(l)} = \{\partial \mathcal{L} / \partial \mu ,\partial \mathcal{L} / \partial \nu \}$ can be derived with matrix calculus:

$$
\begin{array}{l} \frac {\partial \mathcal {L}}{\partial M} = \left(\frac {\partial \mathcal {L}}{\partial \tilde {\mu}}\right) \mu^ {\top} + 2 M \circ \left[ \left(\frac {\partial \mathcal {L}}{\partial \tilde {\nu}}\right) \nu^ {\top} \right] (111a) \\ \frac {\partial \mathcal {L}}{\partial V} = \left(\frac {\partial \mathcal {L}}{\partial \tilde {\nu}}\right) \left(\mu^ {\circ 2} + \nu\right) ^ {\top} (111b) \\ \end{array}
$$

$$
\frac {\partial \mathcal {L}}{\partial \mu} = M ^ {\top} \left(\frac {\partial \mathcal {L}}{\partial \tilde {\mu}}\right) + 2 \mu \circ \left[ V ^ {\top} \left(\frac {\partial \mathcal {L}}{\partial \tilde {\nu}}\right) \right] \tag {111c}
$$

$$
\frac {\partial \mathcal {L}}{\partial \nu} = \left(M ^ {\circ 2} + V\right) ^ {\top} \left(\frac {\partial \mathcal {L}}{\partial \tilde {\nu}}\right) \tag {111d}
$$

Notice that these equations do not take into account the fact that $V$ is implicitly defined through $M$ (i.e. $v_{ji}$ is defined upon $m_{ji}$ ).
Therefore, we adjust the backpropagation rule for the probabilities: denote $Q_{ji}(d) = Q(\pmb{\theta}_{ji} = \mathbb{Q}(d); \phi_{ji}^{(l)})$ ; then the backpropagation rule can be written in matrix form as

$$
\begin{array}{l} \frac {\partial \mathcal {L}}{\partial Q (d)} = \left(\frac {\partial \mathcal {L}}{\partial M} + \frac {\partial \mathcal {L}}{\partial V} \cdot \frac {\partial V}{\partial M}\right) \frac {\partial M}{\partial Q (d)} + \frac {\partial \mathcal {L}}{\partial V} \cdot \frac {\partial V}{\partial Q (d)} (112) \\ = \mathbb {Q} (d) \cdot \frac {\partial \mathcal {L}}{\partial M} + 2 (\mathbb {Q} (d) - M) \circ \left(\frac {\partial \mathcal {L}}{\partial V}\right) (113) \\ \end{array}
$$

Lastly, we derive the backpropagation rules for sign activations. Let $p_j$ denote the probability that the hidden unit $\pmb{v}_j$ is activated, $p_j = \mathbf{Pr}[\pmb{v}_j = 1|x]$ ; then $\partial \mathcal{L} / \partial p_j$ relates to $\{\partial \mathcal{L} / \partial \tilde{\mu}_j, \partial \mathcal{L} / \partial \tilde{\nu}_j\}$ as:

$$
\frac {\partial p _ {j}}{\partial \tilde {\mu} _ {j}} = \frac {1}{\sqrt {\tilde {\nu} _ {j}}} \cdot \mathcal {N} \left(\frac {\tilde {\mu} _ {j}}{\sqrt {\tilde {\nu} _ {j}}}\right) \tag {114a}
$$

$$
\frac {\partial p _ {j}}{\partial \tilde {\nu} _ {j}} = - \frac {\tilde {\mu} _ {j}}{2 \tilde {\nu} _ {j} ^ {3 / 2}} \cdot \mathcal {N} \left(\frac {\tilde {\mu} _ {j}}{\sqrt {\tilde {\nu} _ {j}}}\right) \tag {114b}
$$

# E SUPPLEMENTARY MATERIAL FOR EXPERIMENTS

# E.1 NETWORK ARCHITECTURES

(1) For MNIST, Fashion-MNIST and KMNIST, we evaluate our models on both MLP and CNN. For the MLP, we use a 3-layer network with 512 units in the first layer and 256 units in the second; for the CNN, we use a 4-layer network with two $5 \times 5$ convolutional layers with 64 channels followed by $2 \times 2$ average pooling, and two fully-connected layers with 1024 hidden units.
(2) For CIFAR10, we evaluate our models on a smaller version of VGG (Peters & Welling, 2018), which consists of 6 convolutional layers and 2 fully-connected layers: $2 \times 128\mathrm{C3} - \mathrm{MP2} - 2 \times 256\mathrm{C3} - \mathrm{MP2} - 2 \times 512\mathrm{C3} - \mathrm{MP2} - 1024\mathrm{FC} - \mathrm{SM10}$ . + +# E.2 MORE RESULTS FOR MULTI-LAYER PERCEPTRON (MLP) + +![](images/f09933685d1c729580d1eef4a7b03a846e007867b2501b7876074f650b3579cb.jpg) +(a) NLL MNIST + +![](images/3db4c8064d8806b870918fd7540592b090ffc739302f9f06c98df4652fc3e876.jpg) +(b) NLL FMNIST + +![](images/6f706dff45cbae809c3a31017fb5868bb7aaa783aa24992b3e5618c97a48b103.jpg) +(c) NLL KMNIST + +![](images/6f4f11d882e7f7903c15223f6147a298e313291b4da24cab27142b75a3d9804b.jpg) +(d) Error MNIST + +![](images/cea33459a31e7eb16b6b6160fe34fe12cb1c046cacb07038c06f47234da9117e.jpg) +(e) Error FMNIST + +![](images/3d401b51c78d87f909f6e4c6e43872994840ded216889b3496168703729d886f.jpg) +(f) Error KMNIST + +![](images/1b268d8d9d0a74d822a33ae9d2e2754d3b83af48e9683af19302ba52c3ad2ef8.jpg) +Figure 4: Comparison of the predictive performance of our BQNs against the E-QNN as well as the non-quantized BNN trained by SGVB on a MLP. Negative log-likelihood (NLL) which accounts for uncertainty and 0-1 test error which doesn't account for uncertainty are displayed. + +![](images/03c01399ffa06bf8134b67ae3dd91a7ca2d3c463186b5db66a3bb09dcf541a77.jpg) + +![](images/6f0724302be0bbd35f58ff23c4daca3d50ade4af83fb43825acf3c5421680969.jpg) + +![](images/4c934b75016dba50e168c0854b38f0ff9cb986a4616c793fa535fe517c5df5dc.jpg) +(a) NLL MNIST +(d) Error MNIST +Figure 5: Illustration of mean-field approximation and tightness of alternative ELBO on a MLP. The performance gap between our analytical inference and the Monte Carlo Sampling is displayed. 

![](images/68fafdf34594b4ecdc94cc521149ce487ae6ca66fa7d44f2c10440c2f1e74299.jpg)
(b) NLL FMNIST
(e) Error FMNIST

![](images/abe6d9da0490a6717770076f94487103fe38b98294bf2e91f5f9d7b9596b39ee.jpg)
(c) NLL KMNIST
(f) Error KMNIST

# E.3 REGRESSION ON BOSTON HOUSING DATASET

In this part, we evaluate our proposed BQN on the Boston housing dataset, a regression benchmark widely used in testing Bayesian neural networks (Hernández-Lobato & Adams, 2015; Ghosh et al., 2016) and probabilistic neural networks (Wang et al., 2016). The dataset consists of 456 training and 50 test samples, each with 13 features as input and a scalar (housing) price as output. Following Hernández-Lobato & Adams (2015); Ghosh et al. (2016); Wang et al. (2016), we train a two-layer network with 50 hidden units, and report the performance in terms of root mean square error (RMSE) in Table 4. The results show that our BQN achieves lower RMSE compared to other models trained in a probabilistic/Bayesian way.
| Dataset | BQN | PBP (Ghosh et al., 2016) | EBP (Soudry et al., 2014) | NPN (Wang et al., 2016) |
| --- | --- | --- | --- | --- |
| Boston | 2.04 ± 0.07 | 2.79 ± 0.16 | 3.14 ± 0.93 | 2.57 ± NA |
+ +Table 4: Performance of different networks in terms of RMSE. The numbers for BQN are averages over 10 runs with different seeds, the standard deviation are exhibited following the $\pm$ sign. The results for PBP, EBP are from Ghosh et al. (2016), and the one for NPN is from (Wang et al., 2016). \ No newline at end of file diff --git a/samplingfreelearningofbayesianquantizedneuralnetworks/images.zip b/samplingfreelearningofbayesianquantizedneuralnetworks/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..45a202a24b265de1dd06515be4d08ac398584f49 --- /dev/null +++ b/samplingfreelearningofbayesianquantizedneuralnetworks/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:898ae3f6b4c2c185f819893e749d481881ec2056e21cd9a7f6243856c8a969ed +size 1597615 diff --git a/samplingfreelearningofbayesianquantizedneuralnetworks/layout.json b/samplingfreelearningofbayesianquantizedneuralnetworks/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..f699eac0faa9d6252ef61b2d67b38c8c832240ae --- /dev/null +++ b/samplingfreelearningofbayesianquantizedneuralnetworks/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:99dec8c9d7d6bfa3e824968a0caaea46bb40235143e8dd28eac16207b7cc2af6 +size 1127954 diff --git a/scalableandorderrobustcontinuallearningwithadditiveparameterdecomposition/b7fce80e-1676-47ad-ac59-5e6b612f8a9e_content_list.json b/scalableandorderrobustcontinuallearningwithadditiveparameterdecomposition/b7fce80e-1676-47ad-ac59-5e6b612f8a9e_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..c6b5d0d39610faac086da2475be6f4f5355adefc --- /dev/null +++ b/scalableandorderrobustcontinuallearningwithadditiveparameterdecomposition/b7fce80e-1676-47ad-ac59-5e6b612f8a9e_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:24a67725becffe56d42f75cc1d08ddd0ca5c142011390c08377bc3ab017a1648 +size 97043 diff 
--git a/scalableandorderrobustcontinuallearningwithadditiveparameterdecomposition/b7fce80e-1676-47ad-ac59-5e6b612f8a9e_model.json b/scalableandorderrobustcontinuallearningwithadditiveparameterdecomposition/b7fce80e-1676-47ad-ac59-5e6b612f8a9e_model.json new file mode 100644 index 0000000000000000000000000000000000000000..3fd7d391111788113d9546065b416485ccbe3083 --- /dev/null +++ b/scalableandorderrobustcontinuallearningwithadditiveparameterdecomposition/b7fce80e-1676-47ad-ac59-5e6b612f8a9e_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:30b96ba554ecddb9f4c52cbe6bb41688b6a64bc9f9764b04c832a7bd3fe4b188 +size 117987 diff --git a/scalableandorderrobustcontinuallearningwithadditiveparameterdecomposition/b7fce80e-1676-47ad-ac59-5e6b612f8a9e_origin.pdf b/scalableandorderrobustcontinuallearningwithadditiveparameterdecomposition/b7fce80e-1676-47ad-ac59-5e6b612f8a9e_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..e2730166e3e4bfe174192331d93e3a83f1406b8a --- /dev/null +++ b/scalableandorderrobustcontinuallearningwithadditiveparameterdecomposition/b7fce80e-1676-47ad-ac59-5e6b612f8a9e_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ea6485e4c63594a9e6df810eae4f1553e977b0ebba0fbe30eff87d24db2d6133 +size 1292934 diff --git a/scalableandorderrobustcontinuallearningwithadditiveparameterdecomposition/full.md b/scalableandorderrobustcontinuallearningwithadditiveparameterdecomposition/full.md new file mode 100644 index 0000000000000000000000000000000000000000..af6dd83ec580f94444543ab3d65ecc6daf622555 --- /dev/null +++ b/scalableandorderrobustcontinuallearningwithadditiveparameterdecomposition/full.md @@ -0,0 +1,355 @@ +# SCALABLE AND ORDER-ROBUST CONTINUAL LEARNING WITH ADDITIVE PARAMETER DECOMPOSITION + +Jaehong Yoon $^{1}$ , Saehoon Kim $^{2}$ , Eunho Yang $^{1,2}$ , and Sung Ju Hwang $^{1,2}$ + +KAIST1, AITRICS2, South Korea + +{jaehong.yoon, eunhoy, sjhwang82}@kaist.ac.kr, 
shkim@aitrics.com

# ABSTRACT

While recent continual learning methods largely alleviate the catastrophic forgetting problem on toy-sized datasets, some issues remain to be tackled to apply them to real-world problem domains. First, a continual learning model should effectively handle catastrophic forgetting and be efficient to train even with a large number of tasks. Secondly, it needs to tackle the problem of order-sensitivity, where the performance of the tasks largely varies based on the order of the task arrival sequence, as it may cause serious problems where fairness plays a critical role (e.g. medical diagnosis). To tackle these practical challenges, we propose a novel continual learning method that is scalable as well as order-robust, which, instead of learning a completely shared set of weights, represents the parameters for each task as a sum of task-shared and sparse task-adaptive parameters. With our Additive Parameter Decomposition (APD), the task-adaptive parameters for earlier tasks remain mostly unaffected, where we update them only to reflect the changes made to the task-shared parameters. This decomposition of parameters effectively prevents catastrophic forgetting and order-sensitivity, while being computation- and memory-efficient. Further, we can achieve even better scalability with APD using hierarchical knowledge consolidation, which clusters the task-adaptive parameters to obtain hierarchically shared parameters. We validate our network with APD, APD-Net, on multiple benchmark datasets against state-of-the-art continual learning methods, which it largely outperforms in accuracy, scalability, and order-robustness.

# 1 INTRODUCTION

Continual learning (Thrun, 1995), or lifelong learning, is a learning scenario where a model is incrementally updated over a sequence of tasks, potentially performing knowledge transfer from earlier tasks to later ones.
Building a successful continual learning model may lead us one step further towards developing a general artificial intelligence, since learning numerous tasks over a long-term time period is an important aspect of human intelligence. Continual learning is often formulated as an incremental / online multi-task learning that models complex task-to-task relationships, either by sharing basis vectors in linear models (Kumar & Daume III, 2012; Ruvolo & Eaton, 2013) or weights in neural networks (Li & Hoiem, 2016). One problem that arises here is that as the model learns on the new tasks, it could forget what it learned for the earlier tasks, which is known as the problem of catastrophic forgetting. Many recent works in continual learning of deep networks (Li & Hoiem, 2016; Lee et al., 2017; Shin et al., 2017; Kirkpatrick et al., 2017; Riemer et al., 2019; Chaudhry et al., 2019) tackle this problem by introducing advanced regularizations to prevent drastic change of network weights. Yet, when the model should adapt to a large number of tasks, the interference between task-specific knowledge is inevitable with fixed network capacity. Recently introduced expansion-based approaches handle this problem by expanding the network capacity as they adapt to new tasks (Rusu et al., 2016; Fang et al., 2017; Yoon et al., 2018; Li et al., 2019). These recent advances have largely alleviated the catastrophic forgetting, at least with a small number of tasks. + +However, to deploy continual learning to real-world systems, there are a number of issues that should be resolved. First, in practical scenarios, the number of tasks that the model should train on may be large. In the lifelong learning setting, the model may even have to continuously train on an unlimited number of tasks. 
Yet, conventional continual learning methods have not been verified for their scalability to a large number of tasks, both in terms of effectiveness in the prevention of catastrophic forgetting, and efficiency in memory usage and computation (see Figure 1 (a) and (b)).

![](images/1ffcbc5df282d0c424eb09c578e783caa3ba5e9e3b66b6c410116d227ea10e7d.jpg)
(a) Catastrophic forgetting

![](images/5fe5667a3f021f42847dbf5ded0269dff89cf2bf2c1b6bb3db28851a7b5707bd.jpg)
(b) Scalability

![](images/1341f068df360ef46dd36ea6f41a33723613fe488dce18847b5a572a2a3df0c2e4.jpg)
(c) Task-order sensitivity

Figure 1: Description of crucial challenges for continual learning with the Omniglot dataset experiment. Catastrophic forgetting: the model should not forget what it has learned about previous tasks. Scalability: the increase in network capacity with respect to the number of tasks should be minimized. Order sensitivity: the model should have similar final performance regardless of the task order. Our model with Additive Parameter Decomposition effectively solves these three problems.

Another important but relatively less explored problem is that of task order sensitivity, which describes the performance discrepancy with respect to the task arrival sequence (see Figure 1 (c)). The task order that the model trains on has a large impact on the individual task performance as well as the final performance, not only because of the model drift coming from catastrophic forgetting but also due to the unidirectional knowledge transfer from earlier tasks to later ones. This order-sensitivity could be highly problematic when fairness across tasks is important (e.g. disease diagnosis).

To handle these practical challenges, we propose a novel continual learning model with Additive Parameter Decomposition (APD). APD decomposes the network parameters at each layer of the target network into task-shared and sparse task-specific parameters with small mask vectors.
At each arrival of a task to a network with APD, which we refer to as *APD-Net*, it will try to maximally utilize the task-shared parameters and will learn the incremental difference that cannot be explained by the shared parameters using sparse task-adaptive parameters. Moreover, since having a single set of shared parameters may not effectively utilize the varying degree of knowledge sharing structure among the tasks, we further cluster the task-adaptive parameters to obtain hierarchically shared parameters (see Figure 2).

This decomposition of generic and task-specific knowledge has clear advantages in tackling the previously mentioned problems. First, APD will largely alleviate catastrophic forgetting, since learning on later tasks will have no effect on the task-adaptive parameters for the previous tasks, and will update the task-shared parameters only with generic knowledge. Secondly, since APD does not change the network topology as existing expansion-based approaches do, APD-Net is memory-efficient, and even more so with hierarchically shared parameters. It also trains fast since it does not require multiple rounds of retraining. Moreover, it is order-robust since the task-shared parameters can stay relatively static and will converge to a solution rather than drift away upon the arrival of each task. With the additional mechanism to retroactively update task-adaptive parameters, it can further alleviate the order-sensitivity from unidirectional knowledge transfer as well.

We validate our methods on several benchmark datasets for continual learning while comparing against state-of-the-art continual learning methods to obtain significantly superior performance with minimal increase in network capacity while being scalable and order-robust.

The contribution of this paper is threefold:

- We tackle practically important and novel problems in continual learning that have been overlooked thus far, such as scalability and order robustness.
- We introduce a novel framework for continual deep learning that effectively prevents catastrophic forgetting and is highly scalable and order-robust, based on the decomposition of the network parameters into shared and sparse task-adaptive parameters with small mask vectors.
- We perform extensive experimental validation of our model on multiple datasets against recent continual learning methods, whose results show that our method is significantly superior to them in terms of accuracy, efficiency, scalability, and order-robustness.

![](images/05fd26513b9eea64305d663844716b912602049d841cd3aa24b4d85abdc4e938.jpg)
Figure 2: An illustration of Additive Parameter Decomposition (APD) for continual learning. APD effectively prevents catastrophic forgetting and suppresses order-sensitivity by decomposing the model parameters into shared parameters $\sigma$ and sparse task-adaptive parameters $\tau_{t}$, so that later tasks update only the shared knowledge. $\mathcal{M}_t$ is the task-adaptive mask on $\sigma$ used to access only the relevant knowledge. Sparsity on $\tau_{t}$ and hierarchical knowledge consolidation, which hierarchically rearranges the shared parameters, greatly enhance scalability.

# 2 RELATED WORK

Continual Learning The literature on continual (lifelong) learning (Thrun, 1995) is vast (Ruvolo & Eaton, 2013) as it is a long-studied topic, so we only mention the most recent and relevant works. Most continual deep learning approaches focus on preventing catastrophic forgetting, where retraining the network for new tasks shifts the distribution of the learned representations. A simple yet effective regularization is to enforce the representations learned at the current task to be close to those from the network trained on previous tasks (Li & Hoiem, 2016).
A more advanced approach is to employ deep generative models to compactly encode task knowledge (Shin et al., 2017) and generate samples from the model later when learning a novel task. Kirkpatrick et al. (2017) and Schwarz et al. (2018) proposed to regularize the model parameters for the current task toward the parameters for the previous tasks via a Fisher information matrix, to find a solution that works well for both, and Lee et al. (2017) introduced a moment-matching technique with a similar objective. Serrà et al. (2018) proposed a binary masking approach to minimize the drift of important prior knowledge: the model learns a pseudo-step function to promote hard attention and builds a compact network with marginal forgetting, but it cannot expand the network capacity and performs only unidirectional knowledge transfer, and thus suffers from order-sensitivity. Lopez-Paz & Ranzato (2017) and Chaudhry et al. (2019) introduced an efficient continual learning approach that weights updates according to the gradients on episodic memory under a single-epoch learning scenario. Nguyen et al. (2018) formulated continual learning as a sequential Bayesian update and used coresets, which contain important samples for each observed task, to mitigate forgetting when estimating the posterior distribution over weights for the new task. Riemer et al. (2019) addressed the stability-plasticity dilemma by maximizing knowledge transfer to later tasks while minimizing their interference on earlier tasks, using optimization-based meta-learning with experience replay.

Dynamic Network Expansion Even with well-defined regularizers, it is nearly impossible to completely avoid catastrophic forgetting, since in practice the model may encounter an unlimited number of tasks. An effective way to tackle this challenge is to dynamically expand the network capacity to handle new tasks. Dynamic network expansion approaches were introduced in earlier work such as Zhou et al.
(2012), which proposed an iterative algorithm to train a denoising autoencoder while adding new neurons one by one and merging similar units. Rusu et al. (2016) proposed to expand the network by augmenting each layer with a fixed number of neurons for each task, while keeping the old weights fixed to avoid catastrophic forgetting; yet this approach often results in a network of excessive size. Yoon et al. (2018) proposed to overcome these limitations via selective retraining of the old network while expanding each of its layers with only the necessary number of neurons, and to further alleviate catastrophic forgetting by splitting and duplicating neurons. Xu & Zhu (2018) proposed to use reinforcement learning to decide how many neurons to add. Li et al. (2019) proposed to perform an explicit network architecture search to decide how much of the existing network weights to reuse and how much to add. Our model also performs dynamic network expansion like the previous expansion-based methods, but instead of adding new units, it additively decomposes the network parameters into task-shared and task-specific parameters. Further, the capacity increase at the arrival of each task is kept minimal by the sparsity on the task-specific parameters, and the growth is logarithmic thanks to the hierarchical structuring of shared parameters.

# 3 CONTINUAL LEARNING WITH ADDITIVE PARAMETER DECOMPOSITION

In a continual learning setting, we assume that we have a sequence of tasks $\{\mathcal{T}_1,\dots ,\mathcal{T}_T\}$ arriving to a deep network in a random order. We denote the dataset of the $t^{th}$ task as $\mathcal{D}_t = \{\mathbf{x}_t^i,\mathbf{y}_t^i\}_{i = 1}^{N_t}$, where $\mathbf{x}_t^i$ and $\mathbf{y}_t^i$ are the $i^{th}$ instance and label among the $N_{t}$ examples. We further assume that they become inaccessible after step $t$.
The set of parameters for the network at step $t$ is then given as $\Theta_{t} = \{\pmb{\theta}_{t}^{l}\}$, where $\pmb{\theta}_t^l$ is the set of weights for layer $l$; we omit the layer index $l$ when the context is clear. The training objective at the arrival of task $t$ can then be defined as $\operatorname*{minimize}_{\Theta_{t}}\ \mathcal{L}(\Theta_{t};\Theta_{t - 1},\mathcal{D}_{t}) + \lambda \mathcal{R}(\Theta_{t})$, where $\mathcal{R}(\cdot)$ is a regularization term on the model parameters. In the following paragraphs, we introduce our continual learning framework with task-adaptive parameter decomposition and hierarchical knowledge consolidation.

Additive Parameter Decomposition To minimize the effect of catastrophic forgetting and the number of newly introduced parameters under network expansion, we propose to decompose $\pmb{\theta}$ into a task-shared parameter matrix $\pmb{\sigma}$ and a task-adaptive parameter matrix $\pmb{\tau}$, that is, $\pmb{\theta}_t = \pmb{\sigma} \otimes \mathcal{M}_t + \pmb{\tau}_t$ for task $t$, where the masking variable $\mathcal{M}_t$ acts as an attention on the task-shared parameters, guiding the learner to focus only on the parts relevant to each task. This decomposition allows us to easily control the trade-off between semantic drift and the predictive performance on a new task by imposing separate regularizations on the decomposed parameters. When a new task arrives, we encourage the shared parameters $\pmb{\sigma}$ to be properly updated, but not to deviate far from the previous shared parameters $\pmb{\sigma}^{(t-1)}$. At the same time, we enforce the capacity of $\pmb{\tau}_t$ to be as small as possible by making it sparse.
The objective function for this decomposed parameter model is given as follows:

$$
\underset {\boldsymbol {\sigma}, \boldsymbol {\tau} _ {t}, \mathrm {v} _ {t}} {\text {m i n i m i z e}} \quad \mathcal {L} \left(\left\{\boldsymbol {\sigma} \otimes \mathcal {M} _ {t} + \boldsymbol {\tau} _ {t} \right\}; \mathcal {D} _ {t}\right) + \lambda_ {1} \| \boldsymbol {\tau} _ {t} \| _ {1} + \lambda_ {2} \| \boldsymbol {\sigma} - \boldsymbol {\sigma} ^ {(t - 1)} \| _ {2} ^ {2}, \tag {1}
$$

where $\mathcal{L}$ denotes a loss function, $\sigma^{(t - 1)}$ denotes the shared parameters before the arrival of the current task $t$, $\| \cdot \|_1$ indicates the element-wise $\ell_1$ norm on a matrix, and $\lambda_1, \lambda_2$ are hyperparameters balancing efficiency and the prevention of catastrophic forgetting. We use $\ell_2$ transfer regularization to prevent catastrophic forgetting, but other types of regularization could be used as well, such as Elastic Weight Consolidation (Kirkpatrick et al., 2017). The masking variable $\mathcal{M}_t$ is a sigmoid function of a learnable parameter $\mathbf{v}_t$, applied to the output channels or neurons of $\sigma$ in each layer. We name our model with decomposed network parameters Additive Parameter Decomposition (APD).

The proposed decomposition in (1) makes continual learning efficient, since for each task we only need to learn a very sparse $\pmb{\tau}_{t}$ that accounts for task-specific knowledge which cannot be explained by the transformed shared knowledge $\sigma \otimes \mathcal{M}_t$; in a sense, we are doing residual learning with $\pmb{\tau}_{t}$. Further, it helps the model achieve robustness to the task arrival order, because semantic drift occurs only through the task-shared parameters that correspond to generic knowledge, while the task-specific knowledge learned from previous tasks is kept intact.
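As a minimal sketch (not the authors' implementation), the decomposition $\pmb{\theta}_t = \pmb{\sigma} \otimes \mathcal{M}_t + \pmb{\tau}_t$ and the two regularizers of Eq. (1) can be written in plain NumPy; the layer shapes and the toy values below are assumptions for illustration only:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def effective_weight(sigma, v_t, tau_t):
    """theta_t = sigma * M_t + tau_t, with the channel-wise mask
    M_t = sigmoid(v_t) broadcast over each output unit's weights."""
    mask = sigmoid(v_t)[:, None]  # one gate per output unit
    return sigma * mask + tau_t

# Toy layer: 4 output units, 3 inputs.
rng = np.random.default_rng(0)
sigma = rng.standard_normal((4, 3))  # task-shared parameters
v_t = rng.standard_normal(4)         # learnable mask logits for task t
tau_t = np.zeros((4, 3))             # sparse task-adaptive parameters
tau_t[0, 1] = 0.5                    # only one nonzero entry

theta_t = effective_weight(sigma, v_t, tau_t)

# The two regularization terms of Eq. (1).
sigma_prev = sigma.copy()                     # shared params before task t
l1_tau = np.abs(tau_t).sum()                  # promotes sparsity of tau_t
l2_drift = ((sigma - sigma_prev) ** 2).sum()  # penalizes drift of sigma
```

With an untouched $\sigma$ the drift term is exactly zero, and the $\ell_1$ term grows only with the few nonzero entries of $\tau_t$, which is what keeps the per-task capacity increase small.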
In the next section, we introduce additional techniques to achieve even more task-order robustness and efficiency.

Order Robust Continual Learning with Retroactive Parameter Updates We observe that a naive update of the shared parameters may induce semantic drift in the parameters for previously trained tasks, which yields an order-sensitive model, since we do not have access to previous task data. To provide a high degree of order-robustness, we impose an additional regularization to further prevent parameter-level drift without explicitly training on the previous tasks.

To achieve order-robustness in (1), we need to retroactively update the task-adaptive parameters of past tasks to reflect the updates in the shared parameters at each training step, so that all previous tasks can maintain their original solutions. Toward this objective, when a new task $t$ arrives, we first recover all previous parameters ($\theta_{i}$ for task $i < t$): $\pmb{\theta}_{i}^{*} = \pmb{\sigma}^{(t - 1)}\otimes \mathcal{M}_{i}^{(t - 1)} + \pmb{\tau}_{i}^{(t - 1)}$, and then update $\pmb{\tau}_{1:t - 1}$ by constraining the combined parameters $\pmb{\sigma}\otimes \mathcal{M}_i + \pmb{\tau}_i$ to be close to $\pmb{\theta}_i^*$. The learning objective for the current task $t$ is then described as follows:

$$
\underset {\boldsymbol {\sigma}, \boldsymbol {\tau} _ {1: t}, \mathbf {v} _ {1: t}} {\text {m i n i m i z e}} \mathcal {L} \left(\left\{\boldsymbol {\sigma} \otimes \mathcal {M} _ {t} + \boldsymbol {\tau} _ {t} \right\}; \mathcal {D} _ {t}\right) + \lambda_ {1} \sum_ {i = 1} ^ {t} \| \boldsymbol {\tau} _ {i} \| _ {1} + \lambda_ {2} \sum_ {i = 1} ^ {t - 1} \| \boldsymbol {\theta} _ {i} ^ {*} - (\boldsymbol {\sigma} \otimes \mathcal {M} _ {i} + \boldsymbol {\tau} _ {i}) \| _ {2} ^ {2}. \tag {2}
$$

Compared to (1), the task-adaptive parameters of previous tasks can now be retroactively updated to minimize the parameter-level drift.
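A hedged sketch of the mechanics behind Eq. (2): before training on task $t$, the combined parameters $\pmb{\theta}_i^*$ of every previous task are recovered and frozen, and the drift penalty then measures how far $\pmb{\sigma}\otimes\mathcal{M}_i + \pmb{\tau}_i$ moves away from them. The shapes and helper names below are illustrative, not from the paper's code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def combine(sigma, v, tau):
    # theta = sigma * M + tau, with channel-wise mask M = sigmoid(v)
    return sigma * sigmoid(v)[:, None] + tau

rng = np.random.default_rng(1)
sigma_prev = rng.standard_normal((4, 3))
tasks = [  # (v_i, tau_i) for previously seen tasks i < t
    (rng.standard_normal(4), rng.standard_normal((4, 3)) * 0.1)
    for _ in range(2)
]

# Step 1: recover theta_i* with the *old* shared parameters and masks.
theta_star = [combine(sigma_prev, v, tau) for v, tau in tasks]

# Step 2: the drift penalty of Eq. (2) measures how far each recovered
# solution moves once sigma is updated for the new task.
def drift_penalty(sigma, tasks, theta_star):
    return sum(
        ((ts - combine(sigma, v, tau)) ** 2).sum()
        for (v, tau), ts in zip(tasks, theta_star)
    )

sigma = sigma_prev + 0.05  # pretend sigma drifted while training on task t
pen0 = drift_penalty(sigma_prev, tasks, theta_star)  # zero: nothing moved
pen1 = drift_penalty(sigma, tasks, theta_star)       # positive after drift
```

Minimizing Eq. (2) drives the positive penalty back down by adjusting each $\pmb{\tau}_i$ to absorb the change in $\pmb{\sigma}$, which is exactly the retroactive update described above.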
This formulation also constrains the update of the task-shared parameters to take order-robustness into account.

Algorithm 1 Continual learning with Additive Parameter Decomposition
input Dataset $\mathcal{D}_{1:T}$ and hyperparameters $\lambda, m, s, K = k$
output $\sigma^{(T)}$ , $\mathbf{v}_{1:T}$ , $\widetilde{\sigma}_{1:K}$ , and $\tau_{1:T}$
1: Let $\sigma^{(1)} = \theta_1$ , and optimize for task 1
2: for $t = 2, \dots, T$ do
3: for $i = 1, \dots, t - 1$ do
4: Restore $\boldsymbol{\theta}_i^* = \boldsymbol{\sigma}^{(t - 1)} \otimes \mathcal{M}_i^{(t - 1)} + \widetilde{\tau}_i^{(t - 1)}$
5: end for
6: Minimize (3) to update $\sigma$ and $\{\pmb{\tau}_i, \mathbf{v}_i\}_{i=1}^t$
7: if $t \mod s = 0$ then
8: Initialize $k$ new random centroids, $\{\pmb{\mu}_g\}_{g = K - k + 1}^K$
9: Group all tasks into $K$ disjoint sets, $\{\mathcal{G}_g\}_{g = 1}^K$
10: for $g = 1, \dots, K$ do
11: Decompose $\{\widetilde{\tau}_i\}_{i \in \mathcal{G}_g}$ into $\widetilde{\sigma}_g$ and $\{\pmb{\tau}_i\}_{i \in \mathcal{G}_g}$
12: end for
13: Delete the old $\widetilde{\sigma}$ and set $K = K + k$
14: end if
15: end for

Hierarchical Knowledge Consolidation The objective function in (2) does not directly consider local sharing among the tasks, and thus it will inevitably result in redundant information across the task-adaptive parameters. To further minimize the capacity increase, we perform a process called hierarchical knowledge consolidation, which groups relevant task-adaptive parameters into task-shared parameters (see Figure 2).
We first group all tasks into $K$ disjoint sets $\{\mathcal{G}_g\}_{g = 1}^K$ using $K$ -means clustering on $\{\tau_i\}_{i = 1}^t$ , then decompose the task-adaptive parameters in the same group into locally-shared parameters $\widetilde{\sigma}_g$ and task-adaptive parameters $\{\tau_i\}_{i\in \mathcal{G}_g}$ (with higher sparsity) by simply computing the amount of value discrepancy in each parameter as follows: + +- If $\max \{\pmb{\tau}_{i,j}\}_{i\in \mathcal{G}_g} - \min \{\pmb{\tau}_{i,j}\}_{i\in \mathcal{G}_g} \leq \beta$ , then $\{\pmb{\tau}_{i,j}\}_{i\in \mathcal{G}_g} = 0$ and $\widetilde{\sigma}_{g,j} = \pmb{\mu}_{g,j}$ +- Else, $\widetilde{\sigma}_{g,j} = 0$ + +where $\pmb{\tau}_{i,j}$ denotes the $j$ th element of the $i$ th task-adaptive parameter matrix, and $\pmb{\mu}_g$ is the cluster center of group $\mathcal{G}_g$ . We update the locally-shared parameters $\widetilde{\sigma}_g$ after the arrival of every $s$ tasks for efficiency, by performing $K$ -means clustering while initializing the cluster centers with the previous locally-shared parameters $\widetilde{\sigma}_g$ for each group. At the same time, we increase the number of centroids to $K + k$ to account for the increase in the variance among the tasks. + +Our final objective function is then given as follows: + +$$ +\underset {\boldsymbol {\sigma}, \boldsymbol {\tau} _ {1: t}, \mathbf {v} _ {1: t}} {\text {m i n i m i z e}} \mathcal {L} \left(\left\{\boldsymbol {\sigma} \otimes \mathcal {M} _ {t} + \boldsymbol {\tau} _ {t} \right\}; \mathcal {D} _ {t}\right) + \lambda_ {1} \sum_ {i = 1} ^ {t} \| \boldsymbol {\tau} _ {i} \| _ {1} + \lambda_ {2} \sum_ {i = 1} ^ {t - 1} \| \boldsymbol {\theta} _ {i} ^ {*} - (\boldsymbol {\sigma} \otimes \mathcal {M} _ {i} + \widetilde {\boldsymbol {\tau}} _ {i}) \| _ {2} ^ {2}, \tag {3} +$$ + +where $\widetilde{\pmb{\tau}}_i = \pmb {\tau}_i + \widetilde{\pmb{\sigma}}_g$ for $i\in \mathcal{G}_g$ + +Algorithm 1 describes the training of our APD model. 
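The per-group consolidation rule above admits a compact sketch: within one cluster, wherever all $\pmb{\tau}_i$ agree to within $\beta$, the value moves into the locally-shared $\widetilde{\sigma}_g$ (set to the cluster center) and the residuals are zeroed. The toy parameters and $\beta$ below are made up, and a real run would obtain the groups via $K$-means over the $\pmb{\tau}_i$:

```python
import numpy as np

def consolidate(taus, mu, beta):
    """Split task-adaptive matrices {tau_i} of one group into a
    locally-shared matrix sigma_g and higher-sparsity residuals.
    Where max - min over the group is within beta, the cluster
    center mu moves into sigma_g and the residuals become zero."""
    taus = np.stack(taus)                        # (tasks_in_group, ...)
    agree = (taus.max(0) - taus.min(0)) <= beta  # elementwise max-min test
    sigma_g = np.where(agree, mu, 0.0)           # locally-shared part
    new_taus = np.where(agree, 0.0, taus)        # sparser residuals
    return sigma_g, new_taus

# Two tasks in one group: they agree on the first element only.
tau_a = np.array([0.50, 0.90, 0.00])
tau_b = np.array([0.52, 0.10, 0.30])
mu = (tau_a + tau_b) / 2  # cluster center of the group
sigma_g, (res_a, res_b) = consolidate([tau_a, tau_b], mu, beta=0.05)
# The shared first element lands in sigma_g; both residuals are zero there,
# while the disagreeing elements stay in the (now sparser) residuals.
```

After this step, each task's effective adaptive parameter is $\widetilde{\pmb{\tau}}_i = \pmb{\tau}_i + \widetilde{\pmb{\sigma}}_g$ as in Eq. (3), so the consolidation changes only how the values are stored, not the recovered solution (up to the $\beta$ tolerance).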
Selective task forgetting In practical scenarios, some of the earlier learned tasks may become irrelevant as we continually train the model. For example, when training a product identification model, recognition of discontinued products becomes unnecessary. In such situations, we may want to forget the earlier tasks in order to secure network capacity for learning later tasks. Unfortunately, existing continual learning methods cannot effectively handle this problem, since removing some features or parameters will also negatively affect the remaining tasks, whose parameters are entangled with them. With APD, however, forgetting a task $t$ can be done simply by dropping the task-adaptive parameters $\pmb{\tau}_{t}$. Trivially, this has absolutely no effect on the task-adaptive parameters of the remaining tasks.

# 4 EXPERIMENT

We now validate APD-Net on multiple datasets against state-of-the-art continual learning methods.

# 4.1 DATASETS

1) CIFAR-100 Split (Krizhevsky & Hinton, 2009) consists of images from 100 generic object classes. We split the classes into 10 groups and consider 10-way multi-class classification within each group as a single task. We use 5 random training/validation/test splits of 4,000/1,000/1,000 samples.
2) CIFAR-100 Superclass consists of images from the 20 superclasses of the CIFAR-100 dataset, where each superclass consists of 5 different but semantically related classes. For each task, we use 5 random training/validation/test splits of 2,000/500/500 samples.
3) Omniglot-rotation (Lake et al., 2015) contains OCR images of 1,200 characters (we only use the training set) from various writing systems, where each class has 80 images, including 0, 90, 180, and 270 degree rotations of the original images. We use this dataset for large-scale continual learning experiments, considering the classification of 12 classes as a single task and obtaining 100 tasks in total.
For each class, we use 5 random training/test splits of 60/20 samples.

We use a modified version of LeNet-5 (LeCun et al., 1998) and the VGG16 network (Simonyan & Zisserman, 2015) with batch normalization as base networks. For experiments on more datasets and detailed descriptions of the architectures and task order sequences, please see the supplementary file.

# 4.2 BASELINES AND OUR MODELS

1) L2-Transfer. Deep neural networks trained with the $L2$-transfer regularizer $\lambda \| \pmb{\theta}_t - \pmb{\theta}_{t - 1}\| _F^2$ when training for task $t$. 2) EWC. Deep neural networks regularized with Elastic Weight Consolidation (Kirkpatrick et al., 2017). 3) P&C. Deep neural networks with two-step training: Progress and Compress (Schwarz et al., 2018). 4) PGN. Progressive Neural Networks (Rusu et al., 2016), which constantly increase the network size by $k$ neurons with each task. 5) DEN. Dynamically Expandable Networks (Yoon et al., 2018), which selectively retrain and dynamically expand the network size by introducing new units and duplicating neurons upon semantic drift. 6) RCL. Reinforced Continual Learning (Xu & Zhu, 2018), which adaptively expands units at each layer using reinforcement learning. 7) APD-Fixed. APD-Net without the retroactive update of the previous task-adaptive parameters (Eq. (1)). 8) APD(1). Additive Parameter Decomposition Networks with depth 1, whose parameters are decomposed into task-shared and task-adaptive parameters. 9) APD(2). APD-Net with depth 2, which also has locally-shared parameters from hierarchical knowledge consolidation.

# 4.3 QUANTITATIVE EVALUATION

Task-average performance We first validate the final task-average performance after the completion of continual learning. To perform a fair evaluation that is not order-dependent, we report the performance over three random trials for each of 5 different task sequences in all experiments.
Table 1 shows that APD-Nets outperform all baselines in accuracy by large margins. We attribute this performance gain to two features. First, an APD-Net uses neuron/filter-wise masking on the shared parameters, which allows it to focus only on the parts relevant to the task at the current training stage. Second, an APD-Net updates the previous task-adaptive parameters to reflect the changes made to the shared parameters, performing retroactive knowledge transfer. APD-Fixed, which lacks these retroactive updates, performs slightly worse. APD(2) outperforms APD(1) since it further allows local knowledge transfer through hierarchically shared parameters. Moreover, when compared with the expansion-based baselines, our methods yield considerably higher accuracy with lower capacity (Figure 3). This efficiency comes from the task-adaptive parameters performing only residual learning for each task with minimal capacity increase, while maximally utilizing the task-shared parameters.

We further validate the efficiency of our methods in terms of training time. Existing approaches with network expansion are slow to train. DEN must be trained in multiple steps, namely selective retraining, dynamic network expansion, and split/duplication, each of which requires retraining of the network. RCL is trained with reinforcement learning, which is inherently slow since the agent must determine exactly how many neurons to add at each layer in a discrete space. PGN trains much faster, but it adds a fixed number of neurons at each layer whenever a new task arrives, resulting in overly large networks. On the contrary, APD-Net, although it requires updates to the previous task-adaptive parameters, can be trained in a single training step. Figure 3 shows that both APD(1) and APD(2) have training time comparable to the base model, with only a marginal increase.

Table 1: Experiment results on the CIFAR-100 Split and CIFAR-100 Superclass datasets.
The results are the mean accuracies over 3 runs of experiments with random splits, performed with 5 different task order sequences. STL is the single-task learning model that trains a separate network for each task independently. Standard deviations for accuracy are given in Table A.3 in the Appendix. + +
(Left block of result columns: CIFAR-100 Split; right block: CIFAR-100 Superclass.)

| Methods | Capacity | Accuracy | AOPD | MOPD | Capacity | Accuracy | AOPD | MOPD |
|---|---|---|---|---|---|---|---|---|
| STL | 1,000% | 63.75% | 0.98% | 2.23% | 2,000% | 61.00% | 2.31% | 3.33% |
| L2T | 100% | 48.73% | 8.62% | 17.77% | 100% | 41.40% | 8.59% | 20.08% |
| EWC | 100% | 53.72% | 7.06% | 15.37% | 100% | 47.78% | 9.83% | 16.87% |
| P&C | 100% | 53.54% | 6.59% | 11.80% | 100% | 48.42% | 9.05% | 20.93% |
| PGN | 171% | 54.90% | 8.08% | 14.63% | 271% | 50.76% | 8.69% | 16.80% |
| DEN | 181% | 57.38% | 8.33% | 13.67% | 191% | 51.10% | 5.35% | 10.33% |
| RCL | 181% | 55.26% | 5.90% | 11.50% | 184% | 51.99% | 4.98% | 14.13% |
| APD-Fixed | 132% | 59.32% | 2.43% | 4.03% | 128% | 55.75% | 3.16% | 6.80% |
| APD(1) | 134% | 59.93% | 2.12% | 3.43% | 133% | 56.76% | 3.02% | 6.20% |
| APD(2) | 135% | 60.74% | 1.79% | 3.43% | 130% | 56.81% | 2.85% | 5.73% |
![](images/14095f47b3405a1538d3fa990645d2185c1e953d0b4b7aa65611c8cd56f9b285.jpg)
(a) CIFAR-100 Split $(\mathrm{T} = 10)$

![](images/e5b1c8933716f068b6c126dc8314537f8902e257ae4bd5eff8a8ae1ad4b4c4e5.jpg)
Figure 3: Accuracy over efficiency of expansion-based continual learning methods and our methods. We report performance over capacity and performance over training time on both datasets.

![](images/b1f42457c8ad1562e621a99d1693ae08a37fa923551a332a7f87e1bbb2645ace.jpg)
(b) CIFAR-100 Superclass $(\mathrm{T} = 20)$

![](images/91f7b37f9bb2f9dda91ea206a9db9b8caca5c9355b405932aa23308c0ada4e26.jpg)

Order fairness in continual learning We now evaluate the order-robustness of our model in comparison to the existing approaches. We first define an evaluation metric of order-sensitivity for each task $t$, which we name the Order-normalized Performance Disparity (OPD), as the disparity between its performance on $R$ random task sequences:

$$
O P D _ {t} = \max \left(\bar {P} _ {t} ^ {1}, \dots , \bar {P} _ {t} ^ {R}\right) - \min \left(\bar {P} _ {t} ^ {1}, \dots , \bar {P} _ {t} ^ {R}\right) \tag {4}
$$

where $\overline{P}_t^r$ denotes the performance on task $t$ for task sequence $r$. We then define the Maximum OPD as $MOPD = \max(OPD_1, \dots, OPD_t)$ and the Average OPD as $AOPD = \frac{1}{T}\sum_{t=1}^{T}OPD_t$ to evaluate order-robustness on the entire task set. A model that is sensitive to the task sequence order will have high MOPD and AOPD, while an order-robust model will have low values for both metrics.

In Table 1, we show the experimental results on order-robustness for all models, obtained on 5 random sequences. We observe that expansion-based continual learning methods are more order-robust than fixed-capacity methods, owing to their ability to introduce task-specific units, but they still suffer from a large degree of performance disparity due to the asymmetric direction of knowledge transfer from earlier tasks to later ones.
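For concreteness, the metrics of Eq. (4) reduce to a few lines over a tasks-by-orders accuracy matrix; the numbers below are illustrative, not results from the paper:

```python
import numpy as np

def opd_metrics(perf):
    """perf[t, r] = accuracy of task t under task sequence r.
    Returns per-task OPD (Eq. 4), MOPD, and AOPD."""
    perf = np.asarray(perf, dtype=float)
    opd = perf.max(axis=1) - perf.min(axis=1)  # disparity across orders
    return opd, opd.max(), opd.mean()

# 3 tasks evaluated under R = 2 random task orders (made-up accuracies).
perf = [[0.60, 0.55],
        [0.58, 0.58],
        [0.50, 0.62]]
opd, mopd, aopd = opd_metrics(perf)
# opd is approximately [0.05, 0.00, 0.12], so MOPD is about 0.12
```

An order-robust model keeps every row of `perf` nearly constant across columns, driving both MOPD and AOPD toward zero.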
On the other hand, APD-Nets obtain significantly lower MOPD and AOPD than the baseline models, which show high performance disparity between task sequences given in different orders. APD(1) and APD(2) are more order-robust than APD-Fixed, which suggests the effectiveness of the retroactive updates of $\tau_{1:t - 1}$. Figure 4 further shows how the per-task performance of each model changes across task sequences of three different orders. We observe that our models show the least disparity in performance with respect to the order of the task sequence.

Preventing catastrophic forgetting We show the effectiveness of APD in preventing catastrophic forgetting by examining how the model performance on earlier tasks changes as new tasks arrive. Figure 5 (a)-(c) shows the results on tasks 1, 6, and 11 from CIFAR-100 Superclass, which has 20 tasks in total. APD-Nets do not show any sign of catastrophic forgetting, although their performance changes marginally with the arrival of each task. In fact, APD(2) even improves on task 6 (by $0.40\% p$ ) as it learns on later tasks, which is possible both because of the update of the shared parameters and the retroactive update of the task-adaptive parameters for earlier tasks, which leads to better solutions.

![](images/6a164ac8272eba9a0a422472364abad7d7dea93519fe1bc70a817a740d7b8d93.jpg)
(a) L2T

![](images/144be3fa5cf2ab6e3f05aca8c580f25e9982d72df400ecbcd5bcd51ef38134d5.jpg)
(b) EWC

![](images/08a24bccede78ddef27238880c5f3260a67e33806f8f3c9cb142e7bb0506de0a.jpg)
(c) DEN

![](images/20cf617d1640ce42575e614863dd69a65ae8460f2d8eb67f81dcf3e99505b32b.jpg)
(d) RCL

![](images/4b7971b2cb3daef6d1ced3936d910747473515ebc73d98aa2ec96a4717dca6b6.jpg)
(e) APD(1)

![](images/b1837da1fcc36e6e1f9126b55d3631d88e824f6c508c24b2f6fbeb0b3b9c8044.jpg)
(f) APD(2)

![](images/ba7f07ff0992baca1907c488af428272ab8046c52d028351729cbaec88270acd.jpg)
Figure 4: Performance disparity of continual learning baselines and our models on CIFAR-100 Split.
Plots show per-task accuracy for 3 task sequences of different orders. The performance disparity of all methods for 5 task sequences of different orders is given in Figure A.8 in the Appendix.
(a) Task 1
Figure 5: (a)-(c) Catastrophic forgetting on CIFAR-100 Superclass: performance of our models on the $1^{st}$ , $6^{th}$ , and $11^{th}$ task during continual learning. (d)-(e) Task forgetting on CIFAR-100 Split: per-task performance of APD(1) ( $T_{1:5}$ ) when the $1^{st}$ task is dropped during continual learning.

![](images/03aba7ee392bea4422a888709a3e6ef00788e60ace517544ba66d4a3f624746e.jpg)
(b) Task 6

![](images/e34935b5304f804e667435e13f6386555bfb8085964bb25048a215577fc24896.jpg)
(c) Task 11

![](images/8f529987c0e43e87111507690d78546bfac54246da091e69f6294bdfdb89af5f.jpg)
(d) F. in Step 5

![](images/663f6f79bef326d6d0b1fe2722123f38332886bdada2c91aa524594369cf2d91.jpg)
(e) F. in Step 3

Selective task forgetting To show that APD-Net can perform selective task forgetting without any harm to the performance of non-target tasks, in Figure 5 (d)-(e) we report the performance change on Tasks 1-5 when removing the parameters for Task 3 and Task 5. As shown, there is no performance degeneration on the non-target tasks, which is expected since dropping the task-adaptive parameters for a specific task does not affect the task-adaptive parameters of the remaining tasks. This ability to selectively forget is another important advantage of our model that makes it practical in lifelong learning scenarios.

Scalability to a large number of tasks We further validate the scalability of our model with large-scale continual learning experiments on the Omniglot-rotation dataset, which has 100 tasks. Regardless of the random rotations, tasks may share specific features such as circles, curves, and straight lines. Gidaris et al.
(2018) showed that generic representations can be learned even from rotated images; they proposed a popular self-supervised learning technique that trains the model to predict the rotation angle of randomly rotated images. We do not compare against DEN or RCL in this experiment since they are impractically slow to train. Figure 6 (Left) shows the results. For PGN, we restrict the maximum number of links to the adapter to 3 in order to prevent it from establishing exponentially many connections. We observe that the continual learning baselines achieve significantly lower performance and higher OPDs than single-task learning. On the contrary, our model outperforms them by a large margin, obtaining performance almost equal to that of STL, which uses 100 times more network parameters. To show that our model scales well, we plot the number of parameters of our models as a function of the number of tasks in Figure 6 (Right). The plot shows that our APD-Net scales well, exhibiting logarithmic growth in network capacity (the number of parameters), while PGN shows linear growth. This result suggests that our model is highly efficient, especially in large-scale continual learning scenarios.

Continual learning with heterogeneous datasets We further consider a more challenging continual learning scenario in which we train on a series of heterogeneous datasets. For this experiment, we use CIFAR-10 (Krizhevsky & Hinton, 2009), CIFAR-100, and the Street View House Numbers (SVHN) (Netzer et al., 2011) datasets, in two different task arrival sequences (SVHN $\rightarrow$ CIFAR-10 $\rightarrow$ CIFAR-100, and CIFAR-100 $\rightarrow$ CIFAR-10 $\rightarrow$ SVHN).
We use VGG-16 as the base network, and compare against an additional baseline, Piggyback (Mallya et al., 2018), which handles a newly arrived task by learning a task-specific binary mask on a network pretrained on ImageNet; since we cannot assume the availability of such large-scale datasets for pretraining in a general setting, we pretrain it on the initial task. Table 2 shows the results, which show that existing models obtain + +
| Models | Capacity | Accuracy | AOPD | MOPD |
|---|---|---|---|---|
| STL | 10,000% | 82.13% (0.08) | 2.79% | 5.70% |
| L2T | 100% | 63.46% (1.58) | 13.35% | 24.43% |
| L2T | 1,599% | 64.65% (1.76) | 11.35% | 27.23% |
| EWC | 100% | 67.48% (1.39) | 14.92% | 32.93% |
| EWC | 1,599% | 68.66% (1.92) | 15.19% | 40.43% |
| PGN | 1,045% | 73.65% (0.27) | 6.79% | 19.27% |
| PGN | 1,543% | 79.35% (0.12) | 4.52% | 10.37% |
| APD(2) | 649% | 81.20% (0.62) | 4.09% | 9.44% |
| APD(2) | 943% | 81.60% (0.53) | 3.78% | 8.19% |
![](images/6fbbb484b3c59e448b41b93b68a5ede881609d52bbcafc4a56a0a2deafac05e9.jpg)
Figure 6: Left: Performance comparison with several benchmarks on Omniglot-rotation (standard deviations in parentheses). Right: The number of parameters obtained during the course of training on Omniglot-rotation.

Table 2: Accuracy comparison on diverse datasets according to the two opposite task orders (arrows). The results are the mean accuracies over 3 runs of experiments. VGG16 with batch normalization is used as the base network.
(→: SVHN → CIFAR-10 → CIFAR-100; ←: CIFAR-100 → CIFAR-10 → SVHN. STL is order-independent.)

| Task | STL | L2T (→) | L2T (←) | Piggyback (→) | Piggyback (←) | PGN (→) | PGN (←) | APD(1) (→) | APD(1) (←) |
|---|---|---|---|---|---|---|---|---|---|
| SVHN | 96.8% | 10.7% | 88.4% | 96.8% | 96.4% | 96.8% | 96.2% | 96.8% | 96.8% |
| CIFAR-10 | 91.3% | 41.4% | 35.8% | 83.6% | 90.8% | 85.8% | 87.7% | 90.1% | 91.0% |
| CIFAR-100 | 67.2% | 29.6% | 12.2% | 41.2% | 67.2% | 41.6% | 67.2% | 61.1% | 67.2% |
| Average | 85.1% | 27.2% | 45.5% | 73.9% | 84.8% | 74.7% | 83.7% | 83.0% | 85.0% |
| Model Size | 171MB | 57MB | 57MB | 59MB | 59MB | 64MB | 64MB | 63MB | 65MB |
suboptimal performance in this setting and are order-sensitive. While Piggyback and PGN are immune to catastrophic forgetting since they freeze the binary masks and hidden units trained on previous tasks, they still suffer from performance degeneration, since their performance largely depends on the pretrained network and on the similarity of the later tasks to earlier ones. On the contrary, APD obtains performance close to STL without much increase in model size, and is also order-robust.

![](images/f2863e2fdd55bcf2ceae0fa473be9d83bde96e7c19f93488f22e14978f07c0c9.jpg)
Figure 7: Visualizations of the model parameters during continual learning. The colored markers denote the parameters for each task $i$ , and the empty markers with black outlines denote the task-shared parameters. Dashed arrows indicate the drift in the parameter space as the model trains on a sequence of tasks.

![](images/87ecdb3c3790ff9b06cb83343164a9c4e414b41d0e2a27794cb5ebba2dcc086c.jpg)

# 4.4 QUALITATIVE ANALYSIS

As a further qualitative analysis of the effect of APD, we visualize the parameters obtained by our method and the baselines by projecting them onto a 2D space (Figure 7). For this experiment, we use a modified MNIST-split dataset whose images are center-cropped to $8 \times 8$ pixels, and create 5 tasks, where each task is a binary classification between two classes. As the base network, we use a 2-layer multi-layer perceptron with 10 units at each layer. We then use Principal Component Analysis (PCA) to reduce the dimensionality of the parameters to two, and visualize the 2D projections of both the task-shared and task-adaptive parameters at each step of continual learning. For example, for task 3 we plot three green markers, which visualize the parameters when training on tasks 3, 4, and 5. For the last task (Task 5), we only have a single marker.
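A rough sketch of this projection step (plain NumPy SVD-based PCA on random stand-in parameter vectors, not the exact experimental code):

```python
import numpy as np

def pca_2d(params):
    """Project flattened parameter vectors onto their top-2
    principal components (plain SVD-based PCA)."""
    X = np.asarray(params, dtype=float)
    Xc = X - X.mean(axis=0)  # center each coordinate
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T     # (n_points, 2) coordinates to scatter-plot

# One point per (task, training step): e.g. the theta of task 3 is
# recorded after training on tasks 3, 4, and 5, giving three markers.
rng = np.random.default_rng(42)
snapshots = rng.standard_normal((15, 40))  # 15 snapshots of a 40-dim theta
xy = pca_2d(snapshots)
```

Plotting `xy` with one color per task then gives the kind of drift trajectories shown in Figure 7, where a small marker-to-marker distance indicates little semantic drift.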
We observe that the model parameters under L2-Transfer drift in a new direction as the model trains on a sequence of tasks, which results in catastrophic forgetting. APD-Fixed (Figure 7(b)) largely alleviates this semantic drift, as updates on later tasks only affect the task-shared parts while the task-adaptive parameters are kept intact. However, updates to the task-shared parameters can still cause a small drift in the combined task-specific parameters. On the other hand, APD-Net with retroactive updates of the task-adaptive parameters successfully prevents drift in the task-specific parameters (Figure 7(c)).

# 5 CONCLUSION

We proposed a novel continual learning model with Additive Parameter Decomposition, in which the task-shared parameters capture knowledge generic across tasks, and the task-adaptive parameters capture the incremental differences over them to model task-specific idiosyncrasies. This knowledge decomposition naturally solves the catastrophic forgetting problem, since the task-adaptive parameters for earlier tasks remain intact, and it is significantly more efficient than expansion-based approaches, since the task-adaptive parameters are additive and do not increase the number of neurons or filters. Moreover, we introduced and tackled a novel problem we refer to as task-order sensitivity, where the performance on each task varies sensitively with the order of the task arrival sequence; with our model, the shared parameters stay relatively static regardless of the task order, and retroactive updates of the task-adaptive parameters prevent them from semantic drift. With extensive experimental validation, we showed that our model obtains impressive accuracy gains over existing continual learning approaches, while being memory- and computation-efficient, scalable to a large number of tasks, and order-robust.
We hope that our paper initiates new research directions for continual learning on the relatively unexplored problems of scalability, task-order sensitivity, and selective task forgetting.

Acknowledgements This work was supported by Samsung Advanced Institute of Technology, Samsung Research Funding Center of Samsung Electronics (No. SRFC-IT1502-51), the Engineering Research Center Program through the National Research Foundation of Korea (NRF) funded by the Korean Government MSIT (NRF-2018R1A5A1059921), the National Research Foundation of Korea (No. NRF-2016M3C4A7952634, Development of Machine Learning Framework for Peta Flops Scale), the Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2016-0-00563, Research on Adaptive Machine Learning Technology Development for Intelligent Autonomous Digital Companion), and the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)).

# REFERENCES

Yaroslav Bulatov. Not-MNIST dataset, 2011.
Arslan Chaudhry, Marc'Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. Efficient lifelong learning with A-GEM. In Proceedings of the International Conference on Learning Representations (ICLR), 2019.
Yuchun Fang, Zhengyan Ma, Zhaoxiang Zhang, Xu-Yao Zhang, and Xiang Bai. Dynamic multi-task learning with convolutional neural network. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2017.
Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. In Proceedings of the International Conference on Learning Representations (ICLR), 2018.
James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al.
Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, pp. 201611835, 2017.
Alex Krizhevsky and Geoffrey E. Hinton. Learning multiple layers of features from tiny images. Technical report, Computer Science Department, University of Toronto, 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), pp. 1097-1105, 2012.
Abhishek Kumar and Hal Daume III. Learning task grouping and overlap in multi-task learning. In Proceedings of the International Conference on Machine Learning (ICML), 2012.
Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 2015.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
Sang-Woo Lee, Jin-Hwa Kim, Jaehyun Jun, Jung-Woo Ha, and Byoung-Tak Zhang. Overcoming catastrophic forgetting by incremental moment matching. In Advances in Neural Information Processing Systems (NIPS), 2017.
Xilai Li, Yingbo Zhou, Tianfu Wu, Richard Socher, and Caiming Xiong. Learn to grow: A continual structure learning framework for overcoming catastrophic forgetting. In Proceedings of the International Conference on Machine Learning (ICML), 2019.
Zhizhong Li and Derek Hoiem. Learning without forgetting. In Proceedings of the European Conference on Computer Vision (ECCV), 2016.
David Lopez-Paz and Marc'Aurelio Ranzato. Gradient episodic memory for continual learning. In Advances in Neural Information Processing Systems (NIPS), 2017.
Arun Mallya, Dillon Davis, and Svetlana Lazebnik. Piggyback: Adapting a single network to multiple tasks by learning to mask weights. In Proceedings of the European Conference on Computer Vision (ECCV), 2018.
Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
Hong-Wei Ng and Stefan Winkler. A data-driven approach to cleaning large face datasets. In 2014 IEEE International Conference on Image Processing (ICIP), pp. 343-347. IEEE, 2014.
Cuong V. Nguyen, Yingzhen Li, Thang D. Bui, and Richard E. Turner. Variational continual learning. In Proceedings of the International Conference on Learning Representations (ICLR), 2018.
Matthew Riemer, Ignacio Cases, Robert Ajemian, Miao Liu, Irina Rish, Yuhai Tu, and Gerald Tesauro. Learning to learn without forgetting by maximizing transfer and minimizing interference. In Proceedings of the International Conference on Learning Representations (ICLR), 2019.
Andrei Rusu, Neil Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. In Advances in Neural Information Processing Systems (NIPS), 2016.
Paul Ruvolo and Eric Eaton. ELLA: An efficient lifelong learning algorithm. In Proceedings of the International Conference on Machine Learning (ICML), 2013.
Jonathan Schwarz, Jelena Luketina, Wojciech M Czarnecki, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu, and Raia Hadsell. Progress & compress: A scalable framework for continual learning. arXiv preprint arXiv:1805.06370, 2018.
Joan Serrà, Dídac Surís, Marius Miron, and Alexandros Karatzoglou. Overcoming catastrophic forgetting with hard attention to the task. In Proceedings of the International Conference on Machine Learning (ICML), 2018.
Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. In Advances in Neural Information Processing Systems (NIPS), 2017.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition.
In Proceedings of the International Conference on Learning Representations (ICLR), 2015.
Johannes Stallkamp, Marc Schlipsing, Jan Salmen, and Christian Igel. The German traffic sign recognition benchmark: a multi-class classification competition. In The 2011 International Joint Conference on Neural Networks, 2011.
Sebastian Thrun. A Lifelong Learning Perspective for Mobile Robot Control. Elsevier, 1995.
Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
Ju Xu and Zhanxing Zhu. Reinforced continual learning. In Advances in Neural Information Processing Systems (NIPS), 2018.
Jaehong Yoon, Eunho Yang, Jeongtae Lee, and Sung Ju Hwang. Lifelong learning with dynamically expandable networks. In Proceedings of the International Conference on Learning Representations (ICLR), 2018.
Guanyu Zhou, Kihyuk Sohn, and Honglak Lee. Online incremental feature learning with denoising autoencoders. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 1453-1461, 2012.

# A APPENDIX

We introduce the detailed experiment settings for our Additive Parameter Decomposition (APD), and provide additional experimental results, including further quantitative analysis and an ablation study of our model.

# A.1 EXPERIMENT SETTINGS

In this section, we describe the experimental details for our models. We used exponential learning rate decay at each epoch, and weight decay with $\lambda = 1e^{-4}$ is applied to all models. All hyperparameters are determined on a validation set, and all experiments are performed without data preprocessing. For MNIST-Variation, we used two-layer feedforward networks with 312 and 128 neurons. Training epochs are 50 for all baselines and APDs, with $\lambda_{1} = [2e^{-4}, 1e^{-4}]$ for APD.

For CIFAR-100 Split and CIFAR-100 Superclass, we used LeNet with 20-50-800-500 neurons. Training epochs are 20 for all models, with $\lambda_{1} = [6e^{-4}, 4e^{-4}]$ . We equally set $\lambda_{2} = 100$ , $K = 2$ per 5 tasks, and $\beta = 1e^{-2}$ for hierarchical knowledge consolidation on MNIST-Variation, CIFAR-100 Split, and CIFAR-100 Superclass.

For Omniglot, we used LeNet with 10-20-500-300 neurons as the default, and, to show the performance of EWC with a larger network capacity, LeNet with 64-128-2500-1500 neurons. Training epochs are 100 for all models, with $\lambda_{1} = [4e^{-4}, 2e^{-4}]$ , and $\lambda_{2} = 100$ and $1K$ for APD. We set $K = 3$ per 10 tasks, and $\beta = 1e^{-4}$ for hierarchical knowledge consolidation. Note that we use an additional technique that updates only the $\theta_{i}$ ($i < t$) that changed substantially, bypassing the retroactive parameter update for tasks that are barely relevant to learning the current task $t$ . This selective update rule lets the model skip these unnecessary updates, so that it trains much faster in large-scale continual learning.

To estimate order-robustness, we used 5 different orders in all experiments. For MNIST-Variation and CIFAR-100 Split, we use the following randomly generated orders:

orderA: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
orderB: [1, 7, 4, 5, 2, 0, 8, 6, 9, 3]
orderC: [7, 0, 5, 1, 8, 4, 3, 6, 2, 9]
orderD: [5, 8, 2, 9, 0, 4, 3, 7, 6, 1]
orderE: [2, 9, 5, 4, 8, 0, 6, 1, 3, 7]

For CIFAR-100 Superclass, we use the following randomly generated orders:

orderA: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
orderB: [15, 12, 5, 9, 7, 16, 18, 17, 1, 0, 3, 8, 11, 14, 10, 6, 2, 4, 13, 19]
orderC: [17, 1, 19, 18, 12, 7, 6, 0, 11, 15, 10, 5, 13, 3, 9, 16, 4, 14, 2, 8]
orderD: [11, 9, 6, 5, 12, 4, 0, 10, 13, 7, 14, 3, 15, 16, 8, 1, 2, 19, 18, 17]
orderE: [6, 14, 0, 11, 12, 17, 13, 4, 9, 1, 7, 19, 8, 10, 3, 15, 18, 5, 2, 16]

For the Omniglot dataset, we omit the randomly generated order sequences for readability.

Table A.3: Ablation study results on APD(1), averaged over the five different orders described in A.1. We validate the design of APD by comparing it against several architectural variants. All experiments are performed on the CIFAR-100 Split dataset.
| Models | Capacity | Accuracy | AOPD | MOPD |
| --- | --- | --- | --- | --- |
| STL | 1,000% | 63.75% | 0.98% | 2.23% |
| APD(1) | 170% | 61.30% | 1.57% | 2.77% |
| w/o Sparsity | 1,084% | 63.47% | 3.20% | 5.40% |
| w/o Adaptive Mask | 168% | 59.09% | 1.83% | 3.47% |
| Fixed $\sigma$ | 167% | 58.55% | 2.31% | 3.53% |

# A.2 ARCHITECTURAL CHOICES FOR ADDITIVE PARAMETER DECOMPOSITION

We also run several ablation experiments on Additive Parameter Decomposition. First, we build a dense APD without the sparsity-inducing constraints on the task-adaptive parameters while maintaining the essential architecture, denoted as w/o Sparsity. It significantly outperforms APD in terms of accuracy, but is impractical since it requires a huge capacity. We also measure the performance of APD without the adaptive masking variables, referred to as w/o Adaptive Mask in the table, to observe how much performance degrades when the flexibility of APD for newly arriving tasks is limited. Naturally, it underperforms with respect to both accuracy and OPDs. Freezing $\sigma$ after training on the first task, referred to as Fixed $\sigma$ in the table, is designed to observe the performance when the task-shared knowledge is not properly captured by $\sigma$ . Interestingly, this shows much lower performance than the other variants, suggesting that properly learning the task-shared knowledge during continual learning is crucial.

Table A.4: Comparison with GEM variants on the Permuted-MNIST dataset. We followed all experimental settings from A-GEM (Chaudhry et al., 2019): we report single-epoch training performance on 17 randomly permuted MNIST tasks (the remaining 3 of the 20 tasks are used for cross-validation), with a mini-batch size of 10 and an episodic memory of size 256 for the GEM variants. The results for the GEM variants are taken from Chaudhry et al. (2019).
| Methods | Network Capacity | Accuracy | Average Forgetting | Worst-case Forgetting |
| --- | --- | --- | --- | --- |
| STL | 1,700% | 0.9533 | 0.00 | 0.00 |
| GEM | 100% | 0.8950 | 0.060 | 0.100 |
| S-GEM | 100% | 0.8820 | 0.080 | - |
| A-GEM | 100% | 0.8910 | 0.060 | 0.130 |
| APD(1) | 103% | 0.9067 | 0.020 | 0.051 |
| APD(1) | 115% | 0.9283 | 0.018 | 0.047 |

Table A.5: Comparison with HAT (Serrà et al., 2018) on a sequence of 8 heterogeneous datasets. We follow all experimental settings from HAT and reproduce its performance directly from the authors' code. We perform the experiments with 5 different (randomly generated) task order sequences. We report the Average Forgetting and Worst-case Forgetting measures from Chaudhry et al. (2019).
| Methods | Network Capacity | Accuracy | Average F. | Worst-case F. | AOPD | MOPD |
| --- | --- | --- | --- | --- | --- | --- |
| HAT | 100% | 0.8036 (0.012) | 0.0014 | 0.0050 | 0.0795 | 0.2315 |
| HAT-Large | 182% | 0.8183 (0.011) | 0.0013 | 0.0057 | 0.0678 | 0.1727 |
| APD-Fixed | 181% | 0.8242 (0.005) | 0.0003 | 0.0006 | 0.0209 | 0.0440 |
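For reference, the two forgetting measures reported in Tables A.4 and A.5 can be computed from a matrix of per-stage task accuracies. This is a minimal sketch assuming the definitions of Chaudhry et al. (2019): the forgetting of an earlier task is the gap between its best accuracy at any previous stage and its accuracy after the final task; the accuracy matrix below is hypothetical.

```python
import numpy as np

def forgetting_measures(acc):
    """acc[l, j]: accuracy on task j after training up to task l (0-based).

    For each earlier task j, forgetting is its best accuracy at any
    previous stage minus its accuracy after the final task.
    Returns (average forgetting, worst-case forgetting).
    """
    T = acc.shape[0]
    f = acc[:-1, :T - 1].max(axis=0) - acc[-1, :T - 1]
    return f.mean(), f.max()

# Hypothetical accuracy matrix for a 3-task sequence.
acc = np.array([
    [0.95, 0.10, 0.10],
    [0.90, 0.93, 0.10],
    [0.85, 0.91, 0.94],
])
avg_f, worst_f = forgetting_measures(acc)  # 0.06 and 0.10
```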
# A.3 COMPARISON WITH OTHER CONTINUAL LEARNING METHODS

We additionally compare our APD with GEM-based approaches (Lopez-Paz & Ranzato, 2017; Chaudhry et al., 2019). As the backbone network, we use a two-layer perceptron with 256 neurons at each layer. The results in Table A.4 show that the GEM variants obtain reasonable performance with marginal forgetting, since these models store data instances of previous tasks in an episodic memory and use them to compute gradients when training on later tasks. Note that we do not count the size of the episodic memory toward the network capacity.

Furthermore, we compare APD-Net against HAT (Serrà et al., 2018) on a sequence of 8 heterogeneous datasets: CIFAR-10, CIFAR-100, FaceScrub (Ng & Winkler, 2014), MNIST (LeCun et al., 1998), NotMNIST (Bulatov, 2011), FashionMNIST (Xiao et al., 2017), SVHN, and TrafficSign (Stallkamp et al., 2011). We used a modified version of AlexNet (Krizhevsky et al., 2012) as the backbone network and reproduce the performance of HAT directly from the authors' code. Table A.5 shows that APD-Fixed largely outperforms HAT.

Both the GEM variants and HAT are strong continual learning approaches, but they cannot expand the network capacity and/or perform unidirectional knowledge transfer, and thus suffer from capacity limitations and order-sensitivity. On the other hand, our APD adaptively increases the network capacity by introducing task-adaptive parameters, which learn task-specific features not captured in the task-shared parameters. Therefore, APD can learn richer representations than fixed-capacity continual learning approaches. APD also exhibits several unique properties, such as task-order robustness and trivial task forgetting.

Table A.6: Full experiment results on the CIFAR-100 Split and CIFAR-100 Superclass datasets. The results are mean accuracies over 3 runs with random splits, performed with 5 different task order sequences (standard deviations in parentheses).
CIFAR-100 Split

| Methods | Capacity | Accuracy | AOPD | MOPD |
| --- | --- | --- | --- | --- |
| STL | 1,000% | 63.75% (0.14) | 0.98% | 2.23% |
| L2T | 100% | 48.73% (0.66) | 8.62% | 17.77% |
| EWC | 100% | 53.72% (0.56) | 7.06% | 15.37% |
| P&C | 100% | 53.54% (1.70) | 6.59% | 11.80% |
| PGN | 171% | 54.90% (0.92) | 8.08% | 14.63% |
| DEN | 181% | 57.38% (0.56) | 8.33% | 13.67% |
| RCL | 181% | 55.26% (0.13) | 5.90% | 11.50% |
| APD-Fixed | 132% | 59.32% (0.44) | 2.43% | 4.03% |
| APD-Fixed | 175% | 61.02% (0.31) | 2.26% | 2.87% |
| APD(1) | 134% | 59.93% (0.41) | 2.12% | 3.43% |
| APD(1) | 170% | 61.30% (0.37) | 1.57% | 2.77% |
| APD(2) | 135% | 60.74% (0.21) | 1.79% | 3.43% |
| APD(2) | 153% | 61.18% (0.20) | 1.86% | 3.13% |

CIFAR-100 Superclass
| Methods | Capacity | Accuracy | AOPD | MOPD |
| --- | --- | --- | --- | --- |
| STL | 2,000% | 61.00% (0.20) | 2.31% | 3.33% |
| L2T | 100% | 41.40% (0.99) | 8.59% | 20.08% |
| EWC | 100% | 47.78% (0.74) | 9.83% | 16.87% |
| P&C | 100% | 48.42% (1.39) | 9.05% | 20.93% |
| PGN | 271% | 50.76% (0.39) | 8.69% | 16.80% |
| DEN | 191% | 51.10% (0.77) | 5.35% | 10.33% |
| RCL | 184% | 51.99% (0.25) | 4.98% | 14.13% |
| APD-Fixed | 128% | 55.75% (1.01) | 3.16% | 6.80% |
| APD-Fixed | 191% | 57.98% (0.65) | 2.58% | 4.53% |
| APD(1) | 133% | 56.76% (0.27) | 3.02% | 6.20% |
| APD(1) | 191% | 58.37% (0.22) | 2.64% | 5.47% |
| APD(2) | 130% | 56.81% (0.33) | 2.85% | 5.73% |
| APD(2) | 182% | 58.53% (0.31) | 2.75% | 5.67% |

![](images/977cad759fe7776a5b36567b755a5817b664aabc14ecfbabe0241e837d35d0a7.jpg)
(a) L2T

![](images/e79c4fcfbc3999d91653c1c988672693597a23953180d39f9ec1057340c8cab2.jpg)
(b) EWC

![](images/4c44a8d2f2d99f0adfe33a47187032f4f506dd097fe8f774795aa49902b69f25.jpg)
(c) P&C

![](images/c83829841b77d73f1f650b13cd0170d3eb8dfe771d51030967602a7e0d7b4f57.jpg)
(d) PGN

![](images/53dcc9fa29a90a5e4f42dec7a448b1ef71085cd8f436708d3691318b459d5b30.jpg)
(e) DEN

![](images/b6c8037e6d8b34015a388a381fee1e0b255808c79a0fa2a1aff2abcc2e3f515e.jpg)
(f) RCL

![](images/a5e46d4f95543265b0e54ee08cbe9b1b89c30f040c6f2f083e4744d00c30533f.jpg)
(g) APD(1)

![](images/43e863724ac746ebe486419cc995e552f52303876d6dcd187d353081e489efd6.jpg)
(h) APD(2)

Figure A.8: Per-task accuracy for each task sequence of continual learning baselines and our models on CIFAR-100 Split, over 5 task sequences of different orders. A large disparity in per-task performance across different orders implies that the model is task-order sensitive, and thus less fair in continual learning.
# SCALABLE MODEL COMPRESSION BY ENTROPY PENALIZED REPARAMETERIZATION

Deniz Oktay*

Princeton University

Princeton, NJ, USA

doktay@cs.princeton.edu

Johannes Ballé

Google Research

Mountain View, CA, USA

jballe@google.com

Saurabh Singh

Google Research

Mountain View, CA, USA

saurabhsingh@gmail.com

Abhinav Shrivastava

University of Maryland, College Park

College Park, MD, USA

abhinav@cs.umd.edu

# ABSTRACT

We describe a simple and general neural network weight compression approach, in which the network parameters (weights and biases) are represented in a "latent" space, amounting to a reparameterization. This space is equipped with a learned probability model, which is used to impose an entropy penalty on the parameter representation during training, and to compress the representation using a simple arithmetic coder after training. Classification accuracy and model compressibility are maximized jointly, with the bitrate-accuracy trade-off specified by a hyperparameter. We evaluate the method on the MNIST, CIFAR-10 and ImageNet classification benchmarks using six distinct model architectures. Our results show that state-of-the-art model compression can be achieved in a scalable and general way without requiring complex procedures such as multi-stage training.
# 1 INTRODUCTION

Artificial neural networks (ANNs) have proven to be highly successful on a variety of tasks, and as a result, there is an increasing interest in their practical deployment. However, ANN parameters tend to require a large amount of space compared to manually designed algorithms. This can be problematic, for instance, when deploying models onto devices over the air, where the bottleneck is often network speed, or onto devices holding many stored models, with only a few used at a time. To make these models more practical, several authors have proposed to compress model parameters (Han et al., 2016; Louizos, Ullrich, et al., 2017; Molchanov et al., 2017; Havasi et al., 2019). While other desiderata often exist, such as minimizing the number of layers or filters of the network, we focus here simply on model compression algorithms that (1) minimize compressed size while maintaining an acceptable classification accuracy, (2) are conceptually simple and easy to implement, and (3) can be scaled easily to large models.

Classical data compression in a Shannon sense (Shannon, 1948) requires discrete-valued data (i.e., the data can only take on a countable number of states) and a probability model on that data known to both sender and receiver. Practical compression algorithms are often lossy, and consist of two steps. First, the data is subjected to (re-)quantization. Then, a Shannon-style entropy coding method such as arithmetic coding (Rissanen and Langdon, 1981) is applied to the discrete values, bringing them into a binary representation which can be easily stored or transmitted. Shannon's source coding theorem establishes the entropy of the discrete representation as a lower bound on the average length of this binary sequence (the bit rate), and arithmetic coding achieves this bound asymptotically. Thus, entropy is an excellent proxy for the expected model size.
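As a concrete toy illustration of entropy as a size proxy (not the paper's pipeline): the empirical entropy of a set of quantized weights, in bits per weight, lower-bounds the average code length any entropy coder such as an arithmetic coder can achieve on them. The weight distribution below is a hypothetical example.

```python
import numpy as np

# Hypothetical quantized weights drawn from a peaked discrete distribution,
# as is typical after quantizing trained network parameters.
rng = np.random.default_rng(0)
weights = rng.choice([-2, -1, 0, 1, 2], size=10000,
                     p=[0.05, 0.2, 0.5, 0.2, 0.05])

# Empirical entropy in bits/weight: a lower bound on the average code
# length of any entropy coder (e.g. arithmetic coding) on these weights.
_, counts = np.unique(weights, return_counts=True)
p = counts / counts.sum()
entropy_bits = -(p * np.log2(p)).sum()
estimated_size_bits = entropy_bits * weights.size
```

Because the distribution is peaked rather than uniform, `entropy_bits` comes out well below the naive `log2(5)` bits per weight that enumerating the five states would cost.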
![](images/81d0f7da97c763dcf86442846b7ae35641af5dcdd614a7600b260dca2c4d13f6.jpg)
Figure 1: Visualization of representatives in scalar quantization vs. reparameterized quantization. The axes represent two different model parameters (e.g., linear filter coefficients). Dots are samples of the model parameters, discs are the representatives. Left: in scalar quantization, the representatives must be given by a Kronecker product of scalar representatives along the cardinal axes, even though the distribution of samples may be skewed. Right: in reparameterized scalar quantization, the representatives are still given by a Kronecker product, but in a transformed (here, rotated) space. This allows a better adaptation of the representatives to the parameter distribution.

![](images/9b1d1e99be1dfd5ca7c853c8d38047ec8c52acb3f42636a67bbb09e1d634aee5.jpg)

The type of quantization scheme affects both the fidelity of the representation (in this case, the precision of the model parameters, which in turn affects the prediction accuracy) and the bit rate, since a reduced number of states coincides with reduced entropy. ANN parameters are typically represented as floating point numbers. While these technically have a finite (but large) number of states, the best results in terms of both accuracy and bit rate are typically achieved for a significantly reduced number of states. Existing approaches to model compression often acknowledge this by quantizing each individual linear filter coefficient in an ANN to a small number of pre-determined values (Louizos, Reisser, et al., 2019; Baskin et al., 2018; F. Li et al., 2016). This is known as scalar quantization (SQ). Other methods explore vector quantization (VQ), closely related to $k$ -means clustering, in which each vector of filter coefficients is quantized jointly (Chen, J. Wilson, et al., 2015; Ullrich et al., 2017).
This is equivalent to enumerating a finite set of representatives (representable vectors), while in SQ the set of representatives is given by the Kronecker product of representable scalar elements. VQ is much more general than SQ, in the sense that representatives can be placed arbitrarily: if the set of useful filter vectors all live in a subregion of the entire space, there is no benefit in having representatives outside of that region, which may be unavoidable with SQ (fig. 1, left). Thus, VQ has the potential to yield better results, but it also suffers from the "curse of dimensionality": the number of necessary states grows exponentially with the number of dimensions, making it computationally infeasible to enumerate them explicitly, hence limiting VQ to only a handful of dimensions in practice. One of the key insights leading to this paper is that the strengths of SQ and VQ can be combined by representing the data in a "latent" space. This space can be an arbitrary rescaling, rotation, or otherwise warping of the original data space. SQ in this space, while making quantization computationally feasible, can provide substantially more flexibility in the choice of representatives compared to the SQ in the data space (fig. 1, right). This is in analogy to recent image compression methods based on autoencoders (Balle, Laparra, et al., 2017; Theis et al., 2017). + +The contribution of this paper is two-fold. First, we propose a novel end-to-end trainable model compression method that uses scalar quantization and entropy penalization in a reparameterized space of model parameters. The reparameterization allows us to use efficient SQ, while achieving flexibility in representing the model parameters. Second, we provide state-of-the-art results on a variety of network architectures on several datasets. 
This demonstrates that more complicated strategies involving pretraining, multi-stage training, sparsification, adaptive coding, etc., as employed by many previous methods, are not necessary to achieve good performance. Our method scales to modern large image datasets and neural network architectures such as ResNet-50 on ImageNet.

# 2 ENTROPY PENALIZED REPARAMETERIZATION

We consider the classification setup, where we are given a dataset $D = \{(\pmb{x}_1,y_1),\dots,(\pmb{x}_N,y_N)\}$ consisting of pairs of examples $\pmb{x}_i$ and corresponding labels $y_{i}$ . We wish to minimize the expected negative log-likelihood on $D$ , or cross-entropy classification loss, over $\Theta$ , the set of model parameters:

$$
\Theta^{*} = \underset{\Theta}{\arg\min} \sum_{(\boldsymbol{x}, y) \sim D} -\log p(y \mid \boldsymbol{x}; \Theta), \tag{1}
$$

where $p(y \mid x; \Theta)$ is the likelihood our model assigns to a dataset sample $(x, y)$ . The likelihood function is implemented using an ANN with parameters $\Theta = \{W^1, b^1, W^2, b^2, \ldots, W^N\}$ , where $W^k$ and $b^k$ denote the weight (including convolutional) and bias terms at layer $k$ , respectively.

![](images/b3bfeb99119a7cf5d17db2daa9198f709fda693fbfbba2dbd932d41ff1a9d708.jpg)
Figure 2: Classifier architecture. The $\Phi$ tensors (annotated with a tilde) are stored in their compressed form. During inference, they are read from storage, uncompressed, and transformed via $f$ into $\Theta$ , the usual parameters of a convolutional or dense layer (denoted without a tilde).

![](images/7ab2c8543459a2ea94095e17c9410de27a9d0702f5eecd073d73c1fe4b29cafe.jpg)
Figure 3: The internals of $f_{\mathrm{conv}}$ and $f_{\mathrm{dense}}$ in our experiments for layer $k$ , annotated with the dimensionalities. In $f_{\mathrm{conv}}$ , $H$ , $W$ , $I$ , $O$ refer to the convolutional height, width, input channels, and output channels, respectively. For $f_{\mathrm{dense}}$ , $I$ and $O$ refer to the number of input and output activations. For $f_{\mathrm{conv}}$ , we use an affine transform, while for $f_{\mathrm{dense}}$ we use a scalar shift and scale, whose parameters are captured in $\Psi$ . Note that in both cases, the number of parameters of $f$ itself (labeled as $\psi$ ) is significantly smaller than the size of the model parameters it decodes.

![](images/bb8f11c6f15a51b541ad44349b2f94e19824d24ee2ce47cdad3a177b5daec5b7.jpg)

Compressing the model amounts to compressing each parameter in the set $\Theta$ . Instead of compressing each parameter directly, we compress reparameterized forms of them. To be precise, we introduce the reparameterizations $\Phi = \{\widetilde{W}^1,\widetilde{b}^1,\widetilde{W}^2,\widetilde{b}^2,\ldots,\widetilde{W}^N\}$ and parameter decoders $f_{\mathrm{conv}}$ , $f_{\mathrm{dense}}$ , $f_{\mathrm{bias}}$ such that

$$
\boldsymbol{W}^{k} = f_{\mathrm{conv}}\bigl(\widetilde{\boldsymbol{W}}^{k}\bigr) \quad \text{if layer } k \text{ is convolutional}, \tag{2}
$$

$$
\boldsymbol{W}^{k} = f_{\mathrm{dense}}\bigl(\widetilde{\boldsymbol{W}}^{k}\bigr) \quad \text{if layer } k \text{ is fully connected}, \tag{3}
$$

$$
\boldsymbol{b}^{k} = f_{\mathrm{bias}}\bigl(\widetilde{\boldsymbol{b}}^{k}\bigr) \quad \text{if layer } k \text{ has a bias}. \tag{4}
$$

We can think of each parameter decoder $f$ as a mapping from reparameterization space to parameter space. For ease of notation, we write $\mathcal{F} = \{f_{\mathrm{conv}}, f_{\mathrm{dense}}, f_{\mathrm{bias}}\}$ and $\Theta = \mathcal{F}(\Phi)$ . The parameter decoders themselves may have learnable parameters, which we denote $\Psi$ . Our method is visually summarized in figs. 2 and 3.
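A minimal sketch of the decoding step for a dense layer, using the scalar shift-and-scale form of $f_{\mathrm{dense}}$ described in fig. 3. The layer shape and the values of the decoder parameters $\Psi$ here are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
I, O = 784, 256  # hypothetical dense-layer fan-in / fan-out

# Integer-valued reparameterization W-tilde: one entry per weight.
W_tilde = rng.integers(-8, 9, size=(I, O))

# Psi: the decoder's own learnable parameters -- for f_dense, just a
# scalar scale and shift (illustrative values).
psi_scale, psi_shift = 1e-2, 0.0

def f_dense(w_tilde):
    """Decode the integer reparameterization into real-valued weights."""
    return psi_scale * w_tilde + psi_shift

W = f_dense(W_tilde)  # the Theta-side weights used in the forward pass
```

Only `W_tilde` (entropy-coded) and the two scalars in $\Psi$ need to be stored; `W` is reconstructed on the fly at inference time.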
+ +# 2.1 COMPRESSING $\Phi$ WITH ENTROPY CODING + +In order to apply a Shannon-style entropy coder efficiently to the reparameterizations $\Phi$ , we need a discrete alphabet of representatives and associated probabilities for each representative. Rather than handling an expressive set of representatives, as in VQ, we choose to fix them to the integers, and + +achieve expressivity via the parameter decoders $\mathcal{F}$ instead. + +Each reparameterization $\phi \in \Phi$ (i.e. a $\widetilde{W}$ or $\widetilde{b}$ representing a weight or bias, respectively) is a matrix in $\mathbb{Z}^{d\times \ell}$ interpreted as consisting of $d$ samples from a discrete probability distribution producing vectors of dimension $\ell$ . We fit a factorized probability model + +$$ +q (\phi) = \prod_ {j = 1} ^ {d} \prod_ {i = 1} ^ {\ell} q _ {i} \left(\phi_ {j, i}\right) \tag {5} +$$ + +to each column $i$ of $\phi$ , using $\ell$ different probability models $q_{i}$ for each corresponding parameter decoder (the form of $q_{i}$ is described in the next section). Fitting of probability models is often done by minimizing the negative log-likelihood. Assuming $\phi$ follows the distribution $q$ , Shannon's source coding theorem states that the minimal length of a bit sequence encoding $\phi$ is the self-information of $\phi$ under $q$ : + +$$ +I (\phi) = - \log_ {2} q (\phi), \tag {6} +$$ + +which is identical to Shannon cross entropy up to an expectation operator, and identical to the negative log likelihood up to a constant factor. By minimizing $I$ over $q$ and $\phi$ during training, we thus achieve two goals: 1) we fit $q$ to the model parameters in a maximum likelihood sense, and 2) we directly optimize the parameters for compressibility. + +After training, we design an arithmetic code for $q$ , and use it to compress the model parameters. 
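To make the role of eq. (6) as a size proxy concrete, here is a toy sketch (synthetic data, not the paper's code) that fits one maximum-likelihood PMF per column of an integer matrix and sums the resulting self-information in bits:

```python
import numpy as np

rng = np.random.default_rng(1)
d, ell = 1000, 9
phi = rng.integers(-4, 5, size=(d, ell))     # toy integer reparameterization

support = np.arange(phi.min(), phi.max() + 1)
bits = 0.0
for i in range(ell):
    counts = np.array([(phi[:, i] == v).sum() for v in support], dtype=float)
    q_i = counts / counts.sum()              # maximum-likelihood PMF for column i
    # self-information contribution of column i: -sum_j log2 q_i(phi_{j,i})
    bits += -np.log2(q_i[phi[:, i] - support[0]]).sum()

print(f"estimated code length: {bits:.0f} bits ({bits / 8:.0f} bytes)")
```

An arithmetic coder driven by these PMFs would produce a bit sequence whose length is within a small overhead of this estimate.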
This method incurs only a small overhead over the theoretical bound due to the finite length of the bit sequence (arithmetic coding is asymptotically optimal). Practically, the overhead amounts to less than $1\%$ of the size of the bit sequence; thus, self-information is an excellent proxy for model size. Further overhead results from including a description of $\Psi$, the parameters of the parameter decoders, as well as of $q$ itself (in the form of a table) in the model size. However, these can be considered constant and small compared to the total model size, and thus do not need to be explicitly optimized for.

The overall loss function is simply the additive combination of the original cross-entropy classification loss under reparameterization with the self-information of all reparameterizations:

$$
L(\Phi, \Psi) = \sum_{(\boldsymbol{x}, y) \sim D} -\log p(y \mid \boldsymbol{x}; \mathcal{F}(\Phi)) + \lambda \sum_{\phi \in \Phi} I(\phi). \tag{7}
$$

We refer to the second term (excluding the constant $\lambda$) as the rate loss. By varying $\lambda$ across different experiments, we can explore the Pareto frontier of compressed model size vs. model accuracy. To compare our method to other work, we varied $\lambda$ such that our method produced similar accuracy, and then compared the resulting model size.

# 2.2 DISCRETE OPTIMIZATION

Since $\Phi$ is discrete-valued, $L$ cannot be optimized over it directly using stochastic gradient descent. To get around this, we maintain continuous surrogates $\hat{\Phi}$.

For optimizing the classification loss, we use the "straight-through" gradient estimator (Bengio et al., 2013), which provides a biased gradient estimate but has shown good results in practice. This consists of rounding the continuous surrogate to the nearest integer during training, and ignoring the rounding for purposes of backpropagation.
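The straight-through trick can be written in one line; a framework-agnostic sketch follows (the stop-gradient idioms in the comments are the standard way to express it in autodiff libraries):

```python
import numpy as np

def round_ste(x):
    """Straight-through rounding: the value equals round(x), but when the
    second term is detached from the graph, gradients see the identity."""
    return x + (np.round(x) - x)

# In an autodiff framework the detachment is explicit, e.g.
#   TensorFlow:  q = x + tf.stop_gradient(tf.round(x) - x)
#   PyTorch:     q = x + (torch.round(x) - x).detach()

x = np.array([0.2, 1.7, -0.5001])
print(round_ste(x))  # [ 0.  2. -1.]
```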
After training, we only keep the discretized values. + +In order to obtain good estimates for both the rate term and its gradient during training, we adopt a relaxation approach previously described by Balle, Minnen, et al. (2018, appendix 6.1); the code is provided as an open source library1. In a nutshell, the method replaces the probability mass functions $q_{i}$ with a set of non-parametric continuous density functions, which are based on small ANNs. These density models are fitted to $\hat{\phi}_{j,i} + n_{j,i}$ , where $n_{j,i} \sim \mathcal{U}(-\frac{1}{2},\frac{1}{2})$ is i.i.d. uniformly distributed additive noise. This turns out to work well in practice, because the negative log likelihood of these noise-affected variates under the continuous densities approximates the self-information $I$ : + +$$ +I (\phi) \approx \sum_ {j = 1} ^ {d} \sum_ {i = 1} ^ {\ell} - \log_ {2} \tilde {q} _ {i} \left(\phi_ {j, i} + n _ {j, i}\right), \tag {8} +$$ + +where $\tilde{q}_i$ denote the density functions. Once the density models are trained, the values of the probability mass functions modeling $\phi$ are derived from the substitutes $\tilde{q}_i$ and stored in a table, which is included in the model description. The parameters of $\tilde{q}_i$ are no longer needed after training. + +# 2.3 MODEL PARTITIONING + +A central component of our approach is partitioning the set of model parameters into groups. For the purpose of creating a model compression method, we interpret entire groups of model parameters as samples from the same learned distribution. We define a fully factorized distribution $q(\Phi) = \prod_{\phi \in \Phi} q_{\phi}(\phi)$ , and introduce parameter sharing within the factors $q_{\phi}$ of the distribution that correspond to the same group, as well as within the corresponding decoders. These group assignments are fixed a priori. For instance, in fig. 
2, $\widetilde{W}^{1}$ and $\widetilde{W}^{2}$ can be assumed to be samples of the same distribution, that is $q_{\widetilde{W}^{1}} = q_{\widetilde{W}^{2}}$ . We also use the same parameter decoder $f_{\mathrm{conv}}$ to decode them. Further, each of the reparameterizations $\phi$ is defined as a rank-2 tensor (a matrix), where each row corresponds to a "sample" from the learned distribution. The operations in $f$ apply the same transformation to each row (fig. 3). As an example, in $f_{\mathrm{conv}}$ , each spatial $H \times W$ matrix of filter coefficients is assumed to be a sample from the same distribution. + +Our method can be applied analogously to various model partitionings. In fact, in our experiments, we vary the size of the groups, i.e., the number of parameters assumed i.i.d., depending on the total number of parameters of the model $(\Theta)$ . The size of the groups parameterizes a trade-off between compressibility and overhead: if groups consisted of just one scalar parameter each, compressibility would be maximal, since $q$ would degenerate (i.e., would capture the value of the parameter with certainty). However, the overhead would be maximal, since $\mathcal{F}$ and $q$ would have a large number of parameters that would need to be included in the model size (defeating the purpose of compression). On the other hand, encoding all parameters of the model with one and the same decoder and scalar distribution would minimize overhead, but may be overly restrictive by failing to capture distributional differences amongst all the parameters, and hence lead to suboptimal compressibility. We describe the group structure of each network that we use in more detail in the following section. 
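The group-size trade-off can be illustrated numerically. In the toy sketch below (synthetic integer parameters, an assumed 16-bit cost per probability-table entry, and decoder overhead ignored), per-scalar groups achieve zero rate but pay maximal table overhead, while a single coarse group amortizes it:

```python
import numpy as np

rng = np.random.default_rng(3)
n_params = 2**14
theta = rng.integers(-4, 5, size=n_params)

def total_bits(theta, n_groups, table_entry_bits=16):
    """Rate under one PMF per group, plus the cost of storing each group's
    probability table. (Decoder parameters Psi are ignored for brevity.)"""
    total = 0.0
    for chunk in np.array_split(theta, n_groups):
        vals, counts = np.unique(chunk, return_counts=True)
        pmf = counts / counts.sum()
        total += -(counts * np.log2(pmf)).sum()    # empirical code length
        total += len(vals) * table_entry_bits      # per-group table overhead
    return total

for g in (1, 16, 1024, n_params):
    print(g, round(total_bits(theta, g)))
```

With one group per scalar, each PMF is degenerate (rate zero) but the tables cost 16 bits per parameter, which is far worse here than a single shared model.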
+ +# 3 EXPERIMENTS + +For our MNIST and CIFAR-10 experiments, we evaluate our method by applying it to four distinct image classification networks: LeNet300-100 (Lecun et al., 1998) and LeNet-5-Caffe² on MNIST (LeCun and Cortes, 2010), as well as VGG-16³ (Simonyan and Zisserman, 2015) and ResNet-20 (He et al., 2016b; Zagoruyko and Komodakis, 2016) with width multiplier 4 (ResNet-20-4) on CIFAR-10 (Zagoruyko and Komodakis, 2016). For our ImageNet experiments, we evaluate our method on the ResNet-18 and ResNet-50 (He et al., 2016a) networks. We train all our models from scratch and compare them with recent state-of-the-art methods by quoting performance from their respective papers. Compared to many previous approaches, we do not initialize the network with pre-trained or pre-sparsified weights. + +We found it useful to use two separate optimizers: one to optimize the variables of the probability models $\tilde{q}_i$ , and one to optimize the reparameterizations $\Phi$ and variables of the parameter decoders $\Psi$ . While the latter is chosen to be the same optimizer typically used for the task/architecture, the former is always Adam (Kingma and Ba, 2015) with a learning rate of $10^{-4}$ . We chose to always use Adam, because the parameter updates used by Adam are independent of any scaling of the objective (when its hyper-parameter $\epsilon$ is sufficiently small). In our method, the probability model variables only get gradients from the entropy loss which is scaled by the rate penalty $\lambda$ . Adam normalizes out this scale and makes the learning rate of the probability model independent of $\lambda$ and of other hyperparameters such as the model partitioning. + +# 3.1 MNIST EXPERIMENTS + +We apply our method to two LeNet variants: LeNet300-100 and LeNet5-Caffe and report results in table 1. We train the networks using Adam with a constant learning rate of 0.001 for 200,000 + +iterations. 
To remedy some of the training noise from quantization, we maintain an exponential moving average (EMA) of the weights and evaluate using those. Note that this does not affect the quantization, as quantization is performed after the EMA variables are restored. + +LeNet300-100 consists of 3 fully connected layers. We partitioned this network into three parameter groups: one for the first two fully connected layers, one for the classifier layer, and one for biases. LeNet5-Caffe consists of two $5 \times 5$ convolutional layers followed by two fully connected layers, with max pooling following each convolutional layer. We partitioned this network into four parameter groups: One for both of the convolutional layers, one for the penultimate fully connected layer, one for the final classifier layer, and one for the biases. + +As evident from table 1, for the larger LeNet300-100 model, our method outperforms all the baselines while maintaining a comparable error rate. For the smaller LeNet5-Caffe model, our method is second only to Minimal Random Code Learning (Havasi et al., 2019). Note that in both of the MNIST models, the number of probability distributions $\ell = 1$ in every parameter group, including in the convolutional layers. To be precise, the $\widetilde{W}^k$ for the convolutional weights $W^k$ will be $H \cdot W \cdot I \cdot O \times 1$ . This is a good trade-off, since the model is small to begin with, and having $\ell = 5 \cdot 5 = 25$ scalar probability models for $5 \times 5$ convolutional layers would have too much overhead. + +For both of the MNIST models, we found that letting each subcomponent of $\mathcal{F}$ be a simple dimension-wise scalar affine transform (similar to $f_{\mathrm{dense}}$ in fig. 3), was sufficient. Since each $\phi$ is quantized to integers, having a flexible scale and shift leads to flexible SQ, similar to Louizos, Reisser, et al. (2019). 
Due to the small size of the networks, more complex transformation functions would lead to too much overhead.

# 3.2 CIFAR-10 EXPERIMENTS

We apply our method to VGG-16 (Simonyan and Zisserman, 2015) and ResNet-20-4 (He et al., 2016b; Zagoruyko and Komodakis, 2016) and report the results in table 1. For both VGG-16 and ResNet-20-4, we use momentum of 0.9 with an initial learning rate of 0.1, and decay it by a factor of 0.2 at iterations 256,000, 384,000, and 448,000, for a total of 512,000 iterations. This learning rate schedule was fixed from the beginning and was not tuned in any way other than verifying that our models' training loss had converged.

VGG-16 consists of 13 convolutional layers of size $3 \times 3$ followed by 3 fully connected layers. We split this network into four parameter groups: one for all convolutional layers, and one for each of the three fully connected layers. We did not compress biases. We found that the biases in 32-bit floating point format add up to about $20\mathrm{KB}$, which we add to our reported numbers.

ResNet-20-4 consists of 3 ResNet groups with 3 residual blocks each, plus an initial convolutional layer and a final fully connected classification layer. We partition this network into two parameter groups: one for all convolutional layers and one for the final classification layer. We again did not compress biases and include them in our results; they add up to about $11\mathrm{KB}$.

For convolutions in both CIFAR-10 models, $\ell = H \cdot W = 9$; $f_{\mathrm{conv}}$ and $f_{\mathrm{dense}}$ are exactly as pictured in fig. 3. To speed up training, we fixed $\psi_{W}$ to a diagonal scaling matrix multiplied by the inverse real-valued discrete Fourier transform (DFT). We found that this particular choice performs much better than SQ, or than choosing a random but fixed orthogonal matrix in place of the DFT (fig. 4). From the error vs. rate plots, the benefit of reparameterization in the high compression regime is evident.
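A sketch of such a fixed-transform decoder follows. Assumptions for illustration: the diagonal scale values are invented, and an orthonormal DCT-II matrix stands in for the real-valued inverse DFT used in the paper (any fixed orthogonal basis occupies the same slot):

```python
import numpy as np

rng = np.random.default_rng(4)
H = W = 3
d = 256                                    # number of 3x3 kernels in the group
phi = rng.integers(-8, 9, size=(d, H * W)).astype(float)

# Fixed orthonormal DCT-II basis (stand-in for the real-valued inverse DFT).
n = H * W
k, m = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
basis = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
basis[0] /= np.sqrt(2.0)                   # rows of `basis` are orthonormal

scale = np.full(n, 0.05)                   # learned diagonal scaling (invented values)

def f_conv(phi):
    # psi_W = diag(scale) followed by the fixed inverse transform,
    # applied identically to every row of phi
    return (phi * scale) @ basis

kernels = f_conv(phi).reshape(d, H, W)
print(kernels.shape)  # (256, 3, 3)
```

Because only the diagonal `scale` is learned, the decoder adds negligible overhead while still letting the model allocate more precision to some frequency components than others.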
VGG-16 and ResNet-20-4 both contain batch normalization (Ioffe and Szegedy, 2015) layers that include a moving average for the mean and variance. Following Havasi et al. (2019), we do not include the moving averages in our reported numbers. We do, however, include the batch normalization bias term $\beta$ and let it function as the bias for each layer ( $\gamma$ is set to a constant 1).

# 3.3 IMAGENET EXPERIMENTS

For the ImageNet dataset (Russakovsky et al., 2015), we reproduce the training setup and hyperparameters from He et al. (2016a). We put all $3 \times 3$ convolutional kernels in a single parameter group, similar to our CIFAR-10 experiments. In the case of ResNet-50, we also group all $1 \times 1$ convolutional kernels together. We put all the remaining layers in their own groups. This gives a
| Model | Algorithm | Size | Error (Top-1) |
| --- | --- | --- | --- |
| LeNet300-100 (MNIST) | Uncompressed | 1.06 MB | 1.6% |
| | Bayesian Compression (GNJ) (Louizos, Ullrich, et al., 2017) | 18.2 KB (58x) | 1.8% |
| | Bayesian Compression (GHS) (Louizos, Ullrich, et al., 2017) | 18.0 KB (59x) | 2.0% |
| | Sparse Variational Dropout (Molchanov et al., 2017) | 9.38 KB (113x) | 1.8% |
| | Our Method (SQ) | 8.56 KB (124x) | 1.9% |
| LeNet5-Caffe (MNIST) | Uncompressed | 1.72 MB | 0.7% |
| | Sparse Variational Dropout (Molchanov et al., 2017) | 4.71 KB (365x) | 1.0% |
| | Bayesian Compression (GHS) (Louizos, Ullrich, et al., 2017) | 2.23 KB (771x) | 1.0% |
| | Minimal Random Code Learning (Havasi et al., 2019) | 1.52 KB (1110x) | 1.0% |
| | Our Method (SQ) | 2.84 KB (606x) | 0.9% |
| VGG-16 (CIFAR-10) | Uncompressed | 60 MB | 6.6% |
| | Bayesian Compression (Louizos, Ullrich, et al., 2017) | 525 KB (116x) | 9.2% |
| | DeepCABAC (Wiedemann, Kirchhoffer, et al., 2019) | 960 KB (62.5x) | 9.0% |
| | Minimal Random Code Learning (Havasi et al., 2019) | 417 KB (159x) | 6.6% |
| | Minimal Random Code Learning (Havasi et al., 2019) | 168 KB (452x) | 10.0% |
| | Our Method (DFT) | 101 KB (590x) | 10.0% |
| ResNet-20-4 (CIFAR-10) | Uncompressed | 17.2 MB | 5% |
| | Our Method (SQ) | 176 KB (97x) | 10.3% |
| | Our Method (DFT) | 128 KB (134x) | 8.8% |
| ResNet-18 (ImageNet) | Uncompressed | 46.7 MB | 30.0% |
| | AP + Coreset-S (Dubey et al., 2018) | 3.11 MB (15x) | 32.0% |
| | Our Method (SQ) | 2.78 MB (17x) | 30.0% |
| | Our Method (DFT) | 1.97 MB (24x) | 30.0% |
| ResNet-50 (ImageNet) | Uncompressed | 102 MB | 25% |
| | AP + Coreset-S (Dubey et al., 2018) | 6.46 MB (16x) | 26.0% |
| | DeepCABAC (Wiedemann, Kirchhoffer, et al., 2019) | 6.06 MB (17x) | 25.9% |
| | Our Method (SQ) | 5.91 MB (17x) | 26.5% |
| | Our Method (DFT) | 5.49 MB (19x) | 26.0% |
Table 1: Our compression results compared to the existing state of the art. Our method is able to achieve higher compression than previous approaches on LeNet300-100, VGG-16, and ResNet-18/50, while maintaining comparable prediction accuracy. We report the models that have the closest accuracy to the baselines; for the complete view of the trade-off, refer to figs. 4a and 4b. For the VGG-16 and ImageNet experiments, we report the median of three runs with a fixed entropy penalty. For ResNet-20-4, we report the SQ and DFT points closest to $10\%$ error from fig. 4b. Note that the values we reproduce here for MRC (Minimal Random Code Learning) are the corrected values found in the OpenReview version of the publication.

total of 4 parameter groups for ResNet-50 and 3 groups for ResNet-18. Analogously to the CIFAR experiments, we compare SQ to using random orthogonal or DFT matrices for reparameterizing the convolution kernels (fig. 4a).

# 4 DISCUSSION

Existing model compression methods are typically built on a combination of pruning, quantization, or coding. Pruning involves sparsifying the network, either by removing individual parameters or higher-level structures such as convolutional filters, layers, activations, etc. Various strategies for pruning weights include looking at the Hessian (Cun et al., 1990) or just their $\ell_p$ norm (Han et al., 2016). Srinivas and Babu (2015) focus on pruning individual units, and H. Li et al. (2017) prune convolutional filters. Louizos, Ullrich, et al. (2017) and Molchanov et al. (2017), which we compare to in our compression experiments, also prune parts of the network. Dubey et al. (2018) describe a dimensionality reduction technique specialized for CNN architectures. Pruning is a simple approach to reduce memory requirements as well as computational complexity, but does not inherently tackle the problem of efficiently representing the parameters that are left.
Here, we primarily focus on the latter: given a model architecture and a task, we are interested in finding a set of parameters which can be described in a compact form and yield good prediction accuracy. Our work is largely orthogonal to the pruning literature, and could be combined with pruning if reducing the number of units is desired.

![](images/e6ac6044070006019a5463a0ee426d24eb70beb18021f20030dbdd9b462f386e.jpg)
(a) ResNet-18 (ImageNet)

![](images/c0aed291234c2719fd71ea66197e5c0dc70ec219cce4ed4336d70250738bf38d.jpg)
(b) ResNet-20-4 (CIFAR-10)
Figure 4: Error vs. rate plots for ResNet-18 on ImageNet and ResNet-20-4 on CIFAR-10 using SQ, the DFT transform, and random but fixed orthogonal matrices. The DFT is clearly beneficial in comparison to the other two transforms. All experiments were trained with the same hyper-parameters (including the set of entropy penalties), differing only in the transformation matrix.

Quantization involves restricting the parameters to a small discrete set of values. There is work in binarizing or ternarizing networks (Courbariaux et al., 2015; F. Li et al., 2016; Zhou et al., 2018) via either straight-through gradient approximation (Bengio et al., 2013) or stochastic rounding (Gupta et al., 2015). Recently, Louizos, Reisser, et al. (2019) introduced a new differentiable quantization procedure that relaxes quantization. We use the straight-through heuristic, but could possibly use other stochastic approaches to improve our methods. While most of these works focus on uniform quantization, Baskin et al. (2018) also extend to non-uniform quantization, which is what our generalized transformation function amounts to. Han et al. (2016) and Ullrich et al. (2017) share weights and quantize by clustering; Chen, J. Wilson, et al. (2015) randomly enforce weight sharing, and thus effectively perform VQ with a pre-determined assignment of parameters to representatives.
Other works also make the observation that representing weights in the frequency domain helps compression; Chen, J. T. Wilson, et al. (2016) randomly enforce weight sharing in the frequency domain and Wang et al. (2016) use K-means clustering in the frequency domain. + +Coding (entropy coding, or Shannon-style compression) methods produce a bit sequence that can allow convenient storage or transmission of a trained model. This generally involves quantization as a first step, followed by methods such as Huffman coding (Huffman, 1952), arithmetic coding (Rissanen and Langdon, 1981), etc. Entropy coding methods exploit a known probabilistic structure of the data to produce optimized binary sequences whose length ideally closely approximates the cross entropy of the data under the probability model. In many cases, authors represent the quantized values directly as binary numbers with few digits (Courbariaux et al., 2015; F. Li et al., 2016; Louizos, Reisser, et al., 2019), which effectively leaves the probability distribution over the values unexploited for minimizing model size; others do exploit it (Han et al., 2016). Wiedemann, Marban, et al. (2018) formulate model compression with an entropy constraint, but use (non-reparameterized) scalar quantization. Their model significantly underperforms all the state-of-the-art models that we compare with (table 1). Some recent work has claimed improved compression performance by skipping quantization altogether (Havasi et al., 2019). Our work focuses on coding with quantization. + +Han et al. (2016) defined their method using a four-stage training process: 1. training the original network, 2. pruning and re-training, 3. quantization and re-training, and 4. entropy coding. This approach has influenced many follow-up publications. In the same vein, many current high-performing methods have significant complexity in implementation or require a multi-stage training process. Havasi et al. 
(2019) requires several stages of training and retraining while keeping parts of the network fixed. Wiedemann, Kirchhoffer, et al. (2019) require pre-sparsification of the network, which is computationally expensive, and use a more complex (context-adaptive) variant of arithmetic coding which may be affected by MPEG patents. These complexities can prevent methods from scaling to larger architectures or decrease their practical usability. In contrast, our method requires only a single training stage followed by a royalty-free version of arithmetic coding. In addition, our code is publicly available4.

Our method has parallels to recent work in learned image compression (Balle, Laparra, et al., 2017; Theis et al., 2017) that uses end-to-end trained deep models for significant performance improvements in lossy image compression. These models operate in an autoencoder framework, where scalar quantization is applied in the latent space. Our method can be viewed as having just a decoder that is used to transform the latent representation into the model parameters, but no encoder.

# 5 CONCLUSION

We describe a simple model compression method built on two ingredients: joint (i.e., end-to-end) optimization of compressibility and task performance in only a single training stage, and reparameterization of model parameters, which increases the flexibility of the representation over scalar quantization, and is applicable to arbitrary network architectures. We demonstrate that state-of-the-art model compression performance can be achieved with this simple framework, outperforming methods that rely on complex, multi-stage training procedures. Due to its simplicity, the approach is particularly suitable for larger models, such as VGG and especially ResNets. In future work, we may consider the potential benefits of even more flexible (deeper) parameter decoders.

# REFERENCES

Balle, Johannes, Valero Laparra, and Eero P. Simoncelli (2017).
"End-to-end Optimized Image Compression". In: Proc. of 5th Int. Conf. on Learning Representations. URL: https://openreview.net/forum?id=rJxdO3jeq.
Balle, Johannes, David Minnen, et al. (2018). "Variational image compression with a scale hyperprior". In: Proc. of 6th Int. Conf. on Learning Representations. URL: https://openreview.net/forum?id=rkcQFMZRb.
Baskin, Chaim et al. (2018). "UNIQ: Uniform Noise Injection for the Quantization of Neural Networks". In: arXiv preprint arXiv:1804.10969.
Bengio, Yoshua, Nicholas Léonard, and Aaron Courville (2013). "Estimating or propagating gradients through stochastic neurons for conditional computation". In: arXiv preprint arXiv:1308.3432.
Chen, Wenlin, James Wilson, et al. (July 2015). "Compressing Neural Networks with the Hashing Trick". In: Proceedings of the 32nd International Conference on Machine Learning. Ed. by Francis Bach and David Blei. Vol. 37. Proceedings of Machine Learning Research. Lille, France: PMLR, pp. 2285-2294. URL: http://proceedings.mlr.press/v37/chenc15.html.
Chen, Wenlin, James T. Wilson, et al. (2016). "Compressing Convolutional Neural Networks in the Frequency Domain". In: KDD.
Courbariaux, Matthieu, Yoshua Bengio, and Jean-Pierre David (2015). "BinaryConnect: Training Deep Neural Networks with binary weights during propagations". In: Advances in Neural Information Processing Systems 28. Ed. by C. Cortes et al. Curran Associates, Inc., pp. 3123-3131.
Cun, Yann Le, John S. Denker, and Sara A. Solla (1990). "Optimal Brain Damage". In: Advances in Neural Information Processing Systems. Morgan Kaufmann, pp. 598-605.
Dubey, Abhimanyu, Moitreya Chatterjee, and Narendra Ahuja (2018). "Coreset-Based Neural Network Compression". In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 454-470.
Gupta, Suyog et al. (2015). "Deep Learning with Limited Numerical Precision".
In: Proceedings of the 32nd International Conference on Machine Learning - Volume 37. ICML'15. Lille, France: JMLR.org, pp. 1737-1746. URL: http://dl.acm.org/citation.cfm?id=3045118.3045303.
Han, Song, Huizi Mao, and William J. Dally (2016). "Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding". In: 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings. Ed. by Yoshua Bengio and Yann LeCun. URL: http://arxiv.org/abs/1510.00149.

Havasi, Marton, Robert Peharz, and José Miguel Hernández-Lobato (2019). "Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters". In: International Conference on Learning Representations. URL: https://openreview.net/forum?id=rlf0YiCctm.
He, Kaiming et al. (2016a). "Deep residual learning for image recognition". In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778.
- (2016b). "Identity Mappings in Deep Residual Networks". In: Lecture Notes in Computer Science, pp. 630-645. DOI: 10.1007/978-3-319-46493-0_38.
Huffman, David A. (Sept. 1952). "A Method for the Construction of Minimum-Redundancy Codes". In: Proceedings of the Institute of Radio Engineers 40.9, pp. 1098-1101.
Ioffe, Sergey and Christian Szegedy (2015). "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift". In: Proceedings of the 32nd International Conference on Machine Learning - Volume 37. ICML'15. Lille, France: JMLR.org, pp. 448-456.
Kingma, Diederik P. and Jimmy Ba (2015). "Adam: A Method for Stochastic Optimization". In: arXiv e-prints. Presented at the 3rd Int. Conf. on Learning Representations. arXiv: 1412.6980.
LeCun, Yann and Corinna Cortes (2010). "MNIST handwritten digit database". URL: http://yann.lecun.com/exdb/mnist/.
Lecun, Yann et al. (1998).
"Gradient-based learning applied to document recognition". In: Proceedings of the IEEE, pp. 2278-2324. +Li, Fengfu, Bo Zhang, and Bin Liu (2016). Ternary Weight Networks. arXiv: 1605.04711 [cs.CV]. +Li, Hao et al. (2017). "Pruning filters for efficient convnets". In: International Conference on Learning Representations (ICLR). +Louizos, Christos, Matthias Reisser, et al. (2019). "Relaxed Quantization for Discretized Neural Networks". In: International Conference on Learning Representations. URL: https://openreview.net/forum?id=HkxjYoCqKX. +Louizos, Christos, Karen Ullrich, and Max Welling (2017). "Bayesian compression for deep learning". In: Advances in Neural Information Processing Systems, pp. 3288-3298. +Molchanov, Dmitry, Arsenii Ashukha, and Dmitry Vetrov (2017). "Variational Dropout Sparsifies Deep Neural Networks". In: Proceedings of the 34th International Conference on Machine Learning - Volume 70. ICML'17. Sydney, NSW, Australia: JMLR.org, pp. 2498-2507. +Rissanen, Jorma and Glen G. Langdon Jr. (1981). "Universal modeling and coding". In: IEEE Transactions on Information Theory 27.1. DOI: 10.1109/TIT.1981.1056282. +Russakovsky, Olga et al. (2015). "ImageNet Large Scale Visual Recognition Challenge". In: International Journal of Computer Vision (IJCV) 115.3, pp. 211-252. DOI: 10.1007/s11263-015-0816-y. +Shannon, Claude E. (1948). "A Mathematical Theory of Communication". In: The Bell System Technical Journal 27.3. DOI: 10.1002/j.1538-7305.1948.tb01338.x. +Simonyan, K. and A. Zisserman (2015). "Very Deep Convolutional Networks for Large-Scale Image Recognition". In: International Conference on Learning Representations. +Srinivas, Suraj and R. Venkatesh Babu (2015). "Data-free Parameter Pruning for Deep Neural Networks". In: Proceedings of the British Machine Vision Conference 2015, BMVC 2015, Swansea, UK, September 7-10, 2015. Ed. by Xianghua Xie, Mark W. Jones, and Gary K. L. Tam. BMVA Press, pp. 31.1-31.12. DOI: 10.5244/C.29.31. 
URL: https://doi.org/10.5244/C.29.31. +Theis, Lucas et al. (2017). "Lossy Image Compression with Compressive Autoencoders". In: Proc. of 5th Int. Conf. on Learning Representations. URL: https://openreview.net/forum?id=rJiNwv9gg. +Ullrich, Karen, Edward Meeds, and Max Welling (2017). "Soft Weight-Sharing for Neural Network Compression". In: International Conference on Learning Representations (ICLR). +Wang, Yunhe et al. (2016). "CNNpack: Packing Convolutional Neural Networks in the Frequency Domain". In: Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pp. 253-261. URL: http://papers.nips.cc/paper/6390-cnnpack-packing-convolutional-neural-networks-in-the-frequency-domain. +Wiedemann, Simon, Heiner Kirchhoffer, et al. (2019). DeepCABAC: Context-adaptive binary arithmetic coding for deep neural network compression. arXiv: 1905.08318 [cs.LG]. + +Wiedemann, Simon, Arturo Marban, et al. (2018). Entropy-Constrained Training of Deep Neural Networks. arXiv: 1812.07520 [cs.LG]. +Zagoruyko, Sergey and Nikos Komodakis (2016). "Wide Residual Networks". In: Proceedings of the British Machine Vision Conference 2016. DOI: 10.5244/c.30.87. URL: http://dx.doi.org/10.5244/C.30.87. +Zhou, Aojun et al. (June 2018). "Explicit Loss-Error-Aware Quantization for Low-Bit Deep Neural Networks". In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 
# SCALABLE NEURAL METHODS FOR REASONING WITH A SYMBOLIC KNOWLEDGE BASE

William W. Cohen & Haitian Sun & R. Alex Hofer & Matthew Siegler

Google, Inc

{wcohen, haitiansun, rofer, msiegler}@google.com

# ABSTRACT

We describe a novel way of representing a symbolic knowledge base (KB) called a sparse-matrix reified KB. This representation enables neural KB inference modules that are fully differentiable, faithful to the original semantics of the KB, expressive enough to model multi-hop inferences, and scalable enough to use with realistically large KBs.
The sparse-matrix reified KB can be distributed across multiple GPUs, can scale to tens of millions of entities and facts, and is orders of magnitude faster than naive sparse-matrix implementations. The reified KB enables very simple end-to-end architectures to obtain competitive performance on several benchmarks representing two families of tasks: KB completion, and learning semantic parsers from denotations. + +# 1 INTRODUCTION + +There has been much prior work on using neural networks to generalize the contents of a KB (Xiong et al., 2017; Bordes et al., 2013; Dettmers et al., 2018), typically by constructing low-dimensional embeddings of the entities and relations in the KB, which are then used to score potential triples as plausible or implausible elements of the KB. We consider here the related but different problem of incorporating a symbolic KB into a neural system, so as to inject knowledge from an existing KB directly into a neural model. More precisely, we consider the problem of designing neural KB inference modules that are (1) fully differentiable, so that any loss based on their outputs can be backpropagated to their inputs; (2) accurate, in that they are faithful to the original semantics of the KB; (3) expressive, so they can perform non-trivial inferences; and (4) scalable, so that realistically large KBs can be incorporated into a neural model. + +To motivate the goal of incorporating a symbolic KB into a neural network, consider the task of learning neural semantic parsers from denotations. Many questions—e.g., what's the most recent movie that Quentin Tarantino directed? or which nearby restaurants have vegetarian entrees and take reservations?—are best answered by knowledge-based question-answering (KBQA) methods, where an answer is found by accessing a KB. 
Within KBQA, a common approach is neural semantic parsing—i.e., using neural methods to translate a natural-language question into a structured query against the KB (Zhong et al., 2017; Finegan-Dollak et al., 2018; Shaw et al., 2019), which is subsequently executed with a symbolic KB query engine. While this approach can be effective, it requires training data pairing natural-language questions with structured queries, which is difficult to obtain. Hence researchers have also considered learning semantic parsers from denotations (Berant et al., 2013; Yih et al., 2015), where training data consists of pairs $(q, A)$, where $q$ is a natural-language question and $A$ is the desired answer. Typically $A$ is a set of KB entities—e.g., if $q$ is the first sample question above, $A$ would be$^1$ the singleton set containing Once Upon a Time in Hollywood.

Learning semantic parsers from denotations is difficult because the end-to-end process to be learned includes a non-differentiable operation—i.e., reasoning with the symbolic KB that contains the answers. To circumvent this difficulty, prior systems have used three different approaches. Some have used heuristic search to infer structured queries from denotations (Pasupat & Liang, 2016; Dasigi et al., 2019): this works in some cases, but often an answer could be associated with many possible structured queries, introducing noise. Others have supplemented gradient approaches with
| $x$: an entity | $X$: weighted set of entities | $\mathbf{x}$: vector encoding $X$ | $N_E$: # entities in KB |
|---|---|---|---|
| $r$: a relation | $R$: weighted set of relations | $\mathbf{r}$: vector encoding $R$ | $N_R$: # relations in KB |
| $\mathbf{M}_r$: matrix for $r$ | $\mathbf{M}_R$: weighted sum of $\mathbf{M}_r$'s, see Eq 1 | $\text{follow}(\mathbf{x}, \mathbf{r})$: see Eq 2 | $N_T$: # triples in KB |
| $\mathbf{M}_{\mathrm{subj}}$, $\mathbf{M}_{\mathrm{obj}}$, $\mathbf{M}_{\mathrm{rel}}$: the reified KB, encoded as matrices mapping triple id $\ell$ to subject, object, and relation ids | | | |
+ +Table 1: Summary of notation used in the paper. (This excludes notation used in defining models for the KB completion and QA tasks of Section 3.) + +reinforcement learning (e.g., (Misra et al., 2018)). Some systems have also "neuralized" KB reasoning, but to date only over small KBs: this approach is natural when answers are naturally constrained to depend on a small set of facts (e.g., a single table (Zhong et al., 2017; Gupta & Lewis, 2018)), but more generally requires coupling a learner with some (non-differentiable) mechanism to retrieve an appropriate small question-dependent subset of the KB as in (Sun et al., 2018; 2019). + +In this paper, we introduce a novel scheme for incorporating reasoning on a large question-independent KB into a neural network, by representing a symbolic KB with an encoding called a sparse-matrix reified KB. A sparse-matrix reified KB is very compact, can be distributed across multiple GPUs if necessary, and is well-suited to modern GPU architecture. For KBs with many relations, a reified KB can be up to four orders of magnitude faster than alternative implementations (even alternatives based on sparse-matrix representations), and in our experiments we demonstrate scalability to a KB with over 13 million entities and nearly 44 million facts. This new architectural component leads to radically simpler architectures for neural semantic parsing from denotations—architectures based on a single end-to-end differentiable process, rather than cascades of retrieval and neural processes. + +We show that very simple instantiations of these architectures are still highly competitive with the state of the art for several benchmark tasks. To our knowledge these models are the first fully end-to-end neural parsers from denotations that have been applied to these benchmark tasks. 
We also demonstrate that these architectures scale to long chains of reasoning on synthetic tasks, and present similarly simple architectures for a second task, KB completion.

# 2 NEURAL REASONING WITH A SYMBOLIC KB

# 2.1 BACKGROUND

KBs, entities, and relations. A KB consists of entities and relations. We use $x$ to denote an entity and $r$ to denote a relation. Each entity has an integer index between 1 and $N_{E}$, where $N_{E}$ is the number of entities in the KB, and we write $x_{i}$ for the entity that has index $i$. A relation is a set of entity pairs, and represents a relationship between entities: for instance, if $x_{i}$ represents "Quentin Tarantino" and $x_{j}$ represents "Pulp Fiction" then $(x_{i}, x_{j})$ would be a member of the relation director_of. A relation $r$ can thus be represented as a subset of $\{1, \dots, N_{E}\} \times \{1, \dots, N_{E}\}$. Finally, a KB consists of a set of relations and a set of entities.

Weighted sets as "k-hot" vectors. Our differentiable operations are based on weighted sets, where each element $x$ of a weighted set $X$ is associated with a non-negative real number. It is convenient to define this weight to be zero for all $x \notin X$, while for $x \in X$, a weight less than 1 is a confidence that the set contains $x$, and weights more than 1 make $X$ a multiset. If all elements of $X$ have weight 1, we say $X$ is a hard set. A weighted set $X$ can be encoded as an entity-set vector $\mathbf{x} \in \mathbb{R}^{N_E}$, where the $i$-th component of $\mathbf{x}$ is the weight of $x_i$ in $X$. If $X$ is a hard entity set, then this will be a "k-hot" vector, for $k = |X|$. The set of indices of $\mathbf{x}$ with non-zero values is called the support of $\mathbf{x}$.

Sets of relations, and relations as matrices. Often we would like to reason about sets of relations$^2$, so we also assume every relation $r$ in a KB is associated with an entity and hence an integer index.
We write $r_k$ for the relation with index $k$ , and we assume that relation entities are listed first in the index of entities, so the index $k$ for $r_k$ is between 1 and $N_R$ , where $N_R$ is the number of relations in the KB. We use $R$ for a set of relations, e.g., $R = \{\text{writer_of}, \text{director_of}\}$ might be such a set, and use $\mathbf{r}$ for a vector encoding of a set. A relation $r$ can be encoded as a relation matrix $\mathbf{M}_r \in \mathbb{R}^{N_E \times N_E}$ , where the value for $\mathbf{M}_r[i,j]$ is (in general) the weight of the assertion $r(x_i, x_j)$ in the KB. In the experiments of this paper, all KB relations are hard sets, so $\mathbf{M}_r[i,j] \in \{0,1\}$ . + +Sparse vs. dense matrices for relations. Scalably representing a large KB requires careful consideration of the implementation. One important issue is that for all but the smallest KBs, a relation matrix must be implemented using a sparse matrix data structure, as explicitly storing all $N_{E}^{2}$ values is impractical. For instance, consider a KB containing 10,000 movie entities and 100,000 person entities. A relationship like writer_of would have only a few tens of thousands of facts (since most movies have only one or two writers), but a dense matrix would have 1 billion values. + +We thus model relations as sparse matrices. Let $N_r$ be the number of entity pairs in the relation $r$ : common sparse matrix data structures require space $O(N_r)$ . One common sparse matrix data structure is a sparse coordinate pair (COO) encoding: with a COO encoding, each KB fact requires storing only two integers and one float. + +Our implementations are based on Tensorflow (Abadi et al., 2016), which offers limited support for sparse matrices. 
In particular, driven by the limitations of GPU architecture, Tensorflow only supports matrix multiplication between a sparse COO matrix and a dense matrix, but not between two sparse matrices, or between sparse higher-rank tensors and dense tensors.

Entity types. It is often possible to easily group entities into disjoint sets by some notion of "type": for example, in a movie domain, all entities might be either of the type "movie", "person", or "movie studio". It is straightforward to extend the formalism above to typed sets of entities, and doing this can lead to some useful optimizations. We use these optimizations below where appropriate: in particular, relation-set vectors $\mathbf{r}$ are of dimension $N_{R}$, not $N_{E}$, in the sections below. The full formal extension to typed entities and relations is given in Appendix A.

# 2.2 REASONING IN A KB

The relation-set following operation. Note that relations can also be viewed as labeled edges in a knowledge graph, the vertices of which are entities. Adopting this view, we define the $r$-neighbors of an entity $x_{i}$ to be the set of entities $x_{j}$ that are connected to $x_{i}$ by an edge labeled $r$, i.e., $r$-neighbors$(x_i) \equiv \{x_{j} : (x_{i}, x_{j}) \in r\}$. Extending this to relation sets, we define

$$
R\text{-neighbors}(X) \equiv \left\{ x_{j} : \exists r \in R, x_{i} \in X \text{ so that } (x_{i}, x_{j}) \in r \right\}
$$

Computing the $R$-neighbors of an entity is a single-step reasoning operation: e.g., the answer to the question $q =$ "what movies were produced or directed by Quentin Tarantino" is precisely the set $R$-neighbors$(X)$ for $R = \{\text{producer\_of}, \text{director\_of}\}$ and $X = \{\text{Quentin\_Tarantino}\}$. "Multi-hop" reasoning operations require nested $R$-neighborhoods, e.g. if $R' = \{\text{actor\_of}\}$ then $R'$-neighbors$(R$-neighbors$(X))$ is the set of actors in movies produced or directed by Quentin Tarantino.
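To make the definition concrete, here is a minimal set-level sketch of $R$-neighbors in Python. The KB facts and entity names are hypothetical stand-ins for the movie example, and `actor_of` is assumed here to run from a movie to its actors:

```python
def r_neighbors(X, R, kb):
    """R-neighbors(X): all entities x_j reachable from some x_i in X
    by an edge labeled with some relation r in R."""
    return {x_j for r in R for (x_i, x_j) in kb[r] if x_i in X}

# Hypothetical hard-set KB: each relation is a set of entity pairs.
kb = {
    "director_of": {("tarantino", "pulp_fiction")},
    "producer_of": {("tarantino", "from_dusk_till_dawn")},
    "actor_of": {("pulp_fiction", "travolta"),
                 ("from_dusk_till_dawn", "clooney")},
}

# Single-step reasoning: movies produced or directed by Tarantino ...
movies = r_neighbors({"tarantino"}, {"producer_of", "director_of"}, kb)
# ... and the nested, multi-hop case: the actors in those movies.
actors = r_neighbors(movies, {"actor_of"}, kb)
```

The differentiable relation-set following operation defined next approximates exactly this computation, but on k-hot vector encodings of $X$ and $R$.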
We would like to approximate the $R$-neighbors computation with differentiable operations that can be performed on the vectors encoding the sets $X$ and $R$. Let $\mathbf{x}$ encode a weighted set of entities $X$, and let $\mathbf{r}$ encode a weighted set of relations $R$. We first define $\mathbf{M}_R$ to be a weighted mixture of the relation matrices for all relations in $R$, i.e.,

$$
\mathbf{M}_{R} \equiv \left( \sum_{k=1}^{N_R} \mathbf{r}[k] \cdot \mathbf{M}_{r_k} \right) \tag{1}
$$

We then define the relation-set following operation for $\mathbf{x}$ and $\mathbf{r}$ as:

$$
\text{follow}(\mathbf{x}, \mathbf{r}) \equiv \mathbf{x}\mathbf{M}_{R} = \mathbf{x} \left( \sum_{k=1}^{N_R} \mathbf{r}[k] \cdot \mathbf{M}_{r_k} \right) \tag{2}
$$

As we will show below, this differentiable numerical relation-set following operation can be used as a neural component to perform certain types of logical reasoning. In particular, Eq 2 corresponds closely to the logical $R$-neighborhood operation, as shown by the claim below.

Claim 1 The support of $\text{follow}(\mathbf{x}, \mathbf{r})$ is exactly the set $R$-neighbors$(X)$.

A proof and the implications of this are discussed in Appendix B.

# 2.3 SCALABLE RELATION-SET FOLLOWING WITH A REIFIED KB

Baseline implementations. Suppose the KB contains $N_R$ relations, $N_E$ entities, and $N_T$ triples. Typically $N_R < N_E < N_T \ll N_E^2$. As noted above, we implement each $\mathbf{M}_r$ as a sparse COO matrix,
| Strategy | Definition | Batch? | Space complexity | # sp-dense matmul | # dense + or ⊙ | # sparse + |
|---|---|---|---|---|---|---|
| naive mixing | Eq 1-2 | no | $O(N_T + N_E + N_R)$ | 1 | 0 | $N_R$ |
| late mixing | Eq 3 | yes | $O(N_T + bN_E + bN_R)$ | $N_R$ | $N_R$ | 0 |
| reified KB | Eq 4 | yes | $O(bN_T + bN_E)$ | 3 | 1 | 0 |
Table 2: Complexity of implementations of relation-set following, where $N_T$ is the number of KB triples, $N_E$ the number of entities, $N_R$ the number of relations, and $b$ is the batch size.

so collectively these matrices require space $O(N_T)$. Each triple appears in only one relation, so $\mathbf{M}_R$ in Eq 1 is also size $O(N_T)$. Since sparse-sparse matrix multiplication is not supported in Tensorflow, we implement $\mathbf{x}\mathbf{M}_R$ using dense-sparse multiplication, so $\mathbf{x}$ must be a dense vector of size $O(N_E)$, as is the output of relation-set following. Thus the space complexity of $\text{follow}(\mathbf{x},\mathbf{r})$ is $O(N_T + N_E + N_R)$, if implemented as suggested by Eq 2. We call this the naive mixing implementation, and its complexity is summarized in Table 2.

Because Tensorflow does not support general sparse tensor contractions, it is not always possible to extend sparse-matrix computations to minibatches. Thus we also consider a variant of naive mixing called late mixing, which mixes the output of many single-relation following steps, rather than mixing the KB itself:

$$
\text{follow}(\mathbf{x}, \mathbf{r}) = \sum_{k=1}^{N_R} \left( \mathbf{r}[k] \cdot \mathbf{x}\mathbf{M}_{r_k} \right) \tag{3}
$$

Unlike naive mixing, late mixing can be extended easily to minibatches (see Appendix C). Let $b$ be the batch size and $\mathbf{X}$ be a minibatch of $b$ examples $[\mathbf{x}_1; \ldots; \mathbf{x}_b]$: then this approach leads to $N_R$ matrices $\mathbf{X}\mathbf{M}_{r_k}$, each of size $O(bN_E)$. However, they need not all be stored at once, so the space complexity becomes $O(bN_E + bN_R + N_T)$. An additional cost of late mixing is that we must now sum up $N_R$ dense matrices.

A reified knowledge base.
While semantic parses for natural questions often use small sets of relations (often singleton ones), in learning there is substantial uncertainty about what the members of these small sets should be. Furthermore, realistic wide-coverage KBs have many relations—typically hundreds or thousands. This leads to a situation where, at least during early phases of learning, it is necessary to evaluate the result of mixing very large sets of relations. When many relations are mixed, late mixing becomes quite expensive (as experiments below show).

An alternative is to represent each KB assertion $r_k(x_i, x_j)$ as a tuple $(i, j, k)$ where $i, j, k$ are the indices of $x_i$, $x_j$, and $r_k$. There are $N_T$ such triples, so for $\ell = 1, \ldots, N_T$, let $(i_\ell, j_\ell, k_\ell)$ denote the $\ell$-th triple. We define these sparse matrices:

$$
\mathbf{M}_{\mathrm{subj}}[\ell, m] \equiv \begin{cases} 1 & \text{if } m = i_\ell \\ 0 & \text{else} \end{cases} \qquad \mathbf{M}_{\mathrm{obj}}[\ell, m] \equiv \begin{cases} 1 & \text{if } m = j_\ell \\ 0 & \text{else} \end{cases} \qquad \mathbf{M}_{\mathrm{rel}}[\ell, m] \equiv \begin{cases} 1 & \text{if } m = k_\ell \\ 0 & \text{else} \end{cases}
$$

Conceptually, $\mathbf{M}_{\mathrm{subj}}$ maps the index $\ell$ of the $\ell$-th triple to its subject entity; $\mathbf{M}_{\mathrm{obj}}$ maps $\ell$ to the object entity; and $\mathbf{M}_{\mathrm{rel}}$ maps $\ell$ to the relation.
We can now implement relation-set following as below, where $\odot$ is the Hadamard product:

$$
\text{follow}(\mathbf{x}, \mathbf{r}) = \left( \mathbf{x}\mathbf{M}_{\mathrm{subj}}^{T} \odot \mathbf{r}\mathbf{M}_{\mathrm{rel}}^{T} \right) \mathbf{M}_{\mathrm{obj}} \tag{4}
$$

Notice that $\mathbf{x}\mathbf{M}_{\mathrm{subj}}^T$ are the triples with an entity in $\mathbf{x}$ as their subject, $\mathbf{r}\mathbf{M}_{\mathrm{rel}}^T$ are the triples with a relation in $\mathbf{r}$, and the Hadamard product is the intersection of these. The final multiplication by $\mathbf{M}_{\mathrm{obj}}$ finds the object entities of the triples in the intersection. These operations naturally extend to minibatches (see Appendix). The reified KB has size $O(N_T)$, the sets of triples that are intersected have size $O(bN_T)$, and the final result is size $O(bN_E)$, giving a final size of $O(bN_T + bN_E)$, with no dependence on $N_R$.

Table 2 summarizes the complexity of these three mathematically equivalent but computationally different implementations. The analysis suggests that the reified KB is preferable if there are many relations, which is the case for most realistic KBs$^3$.

![](images/195262bb9058075008306c01a5b7631c0b53ad629113f13dd58ff219d781f528.jpg)
Figure 1: Left and middle: inference time in queries/sec on a synthetic KB as size and number of relations is varied. Queries/sec is given as zero when GPU memory of 12Gb is exceeded. Right: speedups of reified KBs over the baseline implementations.

![](images/a59a4c880f2afe9a4498da4e13e3ae6903ab0413d592fcc701644c63c8432958.jpg)

![](images/f7e39dba9bae11d3bfd4040dbd93a71567e9896104440527770eab861a5156ae.jpg)

Distributing a large reified KB. The reified KB representation is quite compact, using only six integers and three floats for each KB triple. However, since GPU memory is often limited, it is important to be able to distribute a KB across multiple GPUs.
Although to our knowledge prior implementations of distributed matrix operations (e.g., (Shazeer et al., 2018)) do not support sparse matrices, sparse-dense matrix multiplication can be distributed across multiple machines. We thus implemented a distributed sparse-matrix implementation of reified KBs. We distributed the matrices that define a reified KB "horizontally", so that different triple ids $\ell$ are stored on different GPUs. Details are provided in Appendix D. + +# 3 EXPERIMENTS + +# 3.1 SCALABILITY + +Like prior work (Cohen et al., 2017; De Raedt et al., 2007), we used a synthetic KB based on an $n$ -by- $n$ grid to study scalability of inference. Every grid cell is an entity, related to its immediate neighbors via relations north, south, east, and west. The KB for an $n$ -by- $n$ grid thus has $O(n^{2})$ entities and $O(n^{2})$ triples. We measured the time to compute the 2-hop inference follow(follow(x, r), r) for minibatches of $b = 128$ one-hot vectors, and report it as queries per second (qps) on a single GPU (e.g., qps=1280 would mean a single minibatch requires 100ms). We also compare to a key-value memory network (Miller et al., 2016), using an embedding size of 64 for entities and relations, where there is one memory entry for every triple in the KB. Further details are given in Appendix E. + +The results are shown in Figure 1 (left and middle), on a log-log scale because some differences are very large. With only four relations (the leftmost plot), late mixing is about $3\mathrm{x}$ faster than the reified KB method, and about $250\mathrm{x}$ faster than the naive approach. However, for more than around 20 relations, the reified KB is faster (middle plot). As shown in the rightmost plot, the reified KB is $50\mathrm{x}$ faster than late mixing with 1000 relations, and nearly $12,000\mathrm{x}$ faster than the naive approach. 
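The three mathematically equivalent implementations can be checked against each other on a tiny version of this grid KB. Below is a dense numpy sketch with toy sizes; the real implementations use sparse COO matrices and minibatches:

```python
import numpy as np

# Toy n-by-n grid KB: cell (i, j) is entity i*n + j, connected to its
# neighbors by the relations north, south, east, and west.
n = 4
deltas = {"north": (-1, 0), "south": (1, 0), "east": (0, 1), "west": (0, -1)}
rels = list(deltas)
N_E, N_R = n * n, len(rels)

triples = []  # (subject id, object id, relation id)
for i in range(n):
    for j in range(n):
        for k, rel in enumerate(rels):
            di, dj = deltas[rel]
            if 0 <= i + di < n and 0 <= j + dj < n:
                triples.append((i * n + j, (i + di) * n + (j + dj), k))
N_T = len(triples)

# Relation matrices M_r (for naive and late mixing) and the reified KB.
M = np.zeros((N_R, N_E, N_E))
M_subj, M_obj = np.zeros((N_T, N_E)), np.zeros((N_T, N_E))
M_rel = np.zeros((N_T, N_R))
for l, (i_l, j_l, k_l) in enumerate(triples):
    M[k_l, i_l, j_l] = 1.0
    M_subj[l, i_l] = M_obj[l, j_l] = M_rel[l, k_l] = 1.0

def follow_naive(x, r):    # Eq 1-2: mix the relation matrices, multiply once
    return x @ np.einsum("k,kij->ij", r, M)

def follow_late(x, r):     # Eq 3: one product per relation, mixed at the end
    return sum(r[k] * (x @ M[k]) for k in range(N_R))

def follow_reified(x, r):  # Eq 4: intersect triples by subject and relation
    return ((x @ M_subj.T) * (r @ M_rel.T)) @ M_obj

x = np.zeros(N_E); x[0] = 1.0                   # one-hot set: cell (0, 0)
r = np.zeros(N_R); r[rels.index("east")] = 1.0  # hard relation set {east}
two_hop = follow_reified(follow_reified(x, r), r)
assert np.allclose(two_hop, follow_naive(follow_naive(x, r), r))
assert np.allclose(two_hop, follow_late(follow_late(x, r), r))
# The support of the 2-hop result is cell (0, 2), two steps east of (0, 0).
```

The three variants differ only in how the same computation is factored, which is why their outputs agree while their space and time costs in Table 2 differ.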
With this embedding size, the speed of the key-value network is similar to that of the reified KB for only four relations; however, it is about 7x slower for 50 relations and 10k entities. Additionally, the space needed to store a triple is much larger in a key-value network than in the reified KB, so memory is exhausted when the KB exceeds 200,000 entities (with four relations), or when the KB exceeds 100 relations (with 10,000 entities). The reified KB scales much better, and can handle 10x as many entities and 20x as many relations.

# 3.2 MODELS USING REIFIED KBS

As discussed below in Section 4, the reified KB is closely related to key-value memory networks, so it can be viewed as a more efficient implementation of existing neural modules, optimized for reasoning with symbolic KBs. However, being able to include an entire KB in a model can lead to a qualitative difference in model complexity, since it is not necessary to build machinery to retrieve from the KB. To illustrate this, below we present simple models for several tasks, each using the reified KB in different ways, as appropriate to the task. We consider two families of tasks: learning semantic parsers from denotations over a large KB, and learning to complete a KB.

KBQA for multi-hop questions. MetaQA (Zhang et al., 2018) consists of 1.2M questions, evenly distributed into one-hop, two-hop, and three-hop questions. (E.g., the question "who acted in a movie directed by Quentin Tarantino?" is a two-hop question.) The accompanying KB (Miller et al., 2016) contains 43k entities and 186k triples. Past work treated one-hop, two-hop and three-hop questions separately, and the questions are labeled with the entity ids for the "seed entities" that begin the reasoning chains (e.g., the question above would be tagged with the id of the entity for Quentin Tarantino).

Using a reified KB for reasoning means the neural model only needs to predict the relations used at each stage in the reasoning process.
For each step of inference we thus compute relation sets $\mathbf{r}^t$ using a differentiable function of the question, and then chain them together with relation-set following steps. Letting $\mathbf{x}^0$ be the set of entities associated with $q$, the model we use is:

$$
\text{for } t = 1, 2, 3: \quad \mathbf{r}^{t} = f^{t}(q); \quad \mathbf{x}^{t} = \text{follow}(\mathbf{x}^{t-1}, \mathbf{r}^{t})
$$

where $\text{follow}(\mathbf{x}^{t-1}, \mathbf{r}^t)$ is implemented with a reified KB as described in Eq. 4.

To predict an answer on a $T$-hop subtask, we compute the softmax of the appropriate set $\mathbf{x}^T$. We used cross entropy loss of this set against the desired answer, represented as a uniform distribution over entities in the target set. Each function $f^t(q)$ is a different linear projection of a common encoding for $q$, specifically a mean-pooling of the tokens in $q$ encoded with a pre-trained 128-dimensional word2vec model (Mikolov et al., 2013). The full KB was loaded into a single GPU in our experiments.

It is interesting to contrast this simple model with the one proposed by Zhang et al. (2018). The "module for logic reasoning" they propose in Section 3.4 is fairly complex, with a description that requires a figure, three equations, and a page of text; furthermore, training this model requires constructing an example-dependent subgraph for each training instance. In our model, the "logic reasoning" (and all interaction with the KB) has been encapsulated completely in the follow(x, r) operation—which, as we will demonstrate below, can be re-used for many other problems. Encapsulating all KB reasoning with a single scalable differentiable neural module greatly simplifies modeling: in particular, the problem of learning a structured KB query has been reduced to learning a few differentiable functions of the question, one for each reasoning "hop".
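The loop above can be sketched end to end with a dense numpy stand-in for the reified follow operation and hard-coded relation vectors standing in for the learned projections $f^t(q)$; all entities and relations below are toy assumptions:

```python
import numpy as np

# Toy chain KB: e0 -r0-> e1 -r0-> e2 -r0-> e3, plus an unused relation r1.
N_E, N_R = 4, 2
M = np.zeros((N_R, N_E, N_E))
for i in range(3):
    M[0, i, i + 1] = 1.0

def follow(x, r):
    # dense stand-in for the reified-KB implementation of Eq 4
    return x @ np.einsum("k,kij->ij", r, M)

def predict(x0, rel_vectors):
    """x^t = follow(x^{t-1}, r^t) for each hop, then softmax over x^T."""
    x = x0
    for r_t in rel_vectors:   # in the model, r^t = f^t(q)
        x = follow(x, r_t)
    e = np.exp(x - x.max())
    return e / e.sum()

x0 = np.array([1.0, 0.0, 0.0, 0.0])   # seed entity e0 from the question
r = np.array([1.0, 0.0])              # hard relation set {r_0}
pred = predict(x0, [r, r, r])         # a 3-hop question: mass lands on e3
```

With hard one-hot relation vectors the chain reduces to exact multi-hop graph traversal; during learning the $\mathbf{r}^t$ are soft mixtures, and the loss on the predicted denotation back-propagates to whatever produces them.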
The learned functions are also interpretable: they are mixtures of relation identifiers which correspond to soft weighted sets of relations, which in turn softly specify which KB relation should be used in each stage of the reasoning process. Finally, optimization is simple, as the loss on predicted denotations can be back-propagated to the relation-prediction functions. + +A similar modeling strategy is used in all the other models presented below. + +KBQA on FreeBase. WebQuestionsSP (Yih et al., 2016) contains 4737 natural language questions, all of which are answerable using FreeBase (Bollacker et al., 2008), a large open-domain KB. Each question $q$ is again labeled with the entities $\mathbf{x}$ that appear in it. + +FreeBase contains two kinds of nodes: real-world entities, and compound value types (CVTs), which represent non-binary relationships or events (e.g., a movie release event, which includes a movie id, a date, and a place.) Real-world entity nodes can be related to each other or to a CVT node, but CVT nodes are never directly related to each other. In this dataset, all questions can be answered with 1- or 2-hop chains, and all 2-hop reasoning chains pass through a CVT entity; however, unlike MetaQA, the number of hops is not known. 
Our model thus derives from $q$ three relation sets and then uniformly mixes both potential types of inferences:

$$
\begin{aligned}
&\mathbf{r}_{\mathrm{E}\rightarrow\mathrm{E}} = f_{\mathrm{E}\rightarrow\mathrm{E}}(q); \quad \mathbf{r}_{\mathrm{E}\rightarrow\mathrm{CVT}} = f_{\mathrm{E}\rightarrow\mathrm{CVT}}(q); \quad \mathbf{r}_{\mathrm{CVT}\rightarrow\mathrm{E}} = f_{\mathrm{CVT}\rightarrow\mathrm{E}}(q) \\
&\hat{\mathbf{a}} = \text{follow}(\text{follow}(\mathbf{x}, \mathbf{r}_{\mathrm{E}\rightarrow\mathrm{CVT}}), \mathbf{r}_{\mathrm{CVT}\rightarrow\mathrm{E}}) + \text{follow}(\mathbf{x}, \mathbf{r}_{\mathrm{E}\rightarrow\mathrm{E}})
\end{aligned}
$$

We again apply a softmax to $\hat{\mathbf{a}}$ and use cross entropy loss, and $f_{\mathrm{E}\rightarrow \mathrm{E}}$, $f_{\mathrm{E}\rightarrow \mathrm{CVT}}$, and $f_{\mathrm{CVT}\rightarrow \mathrm{E}}$ are again linear projections of a word2vec encoding of $q$. We used a subset of Freebase with 43.7 million facts and 12.9 million entities, containing all facts in Freebase within 2 hops of entities mentioned in any question, excluding paths through some very common entities. We split the KB across three 12-Gb GPUs, and used a fourth GPU for the rest of the model.

This dataset is a good illustration of the scalability issues associated with prior approaches to including a KB in a model, such as key-value memory networks. A key-value network can be trained to implement something similar to relation-set following, if it stores all the KB triples in memory. If we assume 64-float embeddings for the 12.9M entities, the full KB of 43.7M facts would be 67Gb in size, which is impractical. Additionally, performing a softmax over the 43.7M keys would be prohibitively expensive, as shown by the experiments of Figure 1.
This is the reason why, in standard practice with key-value memory networks for KBs, the memory is populated with a heuristically chosen subset of the KB, rather than the full KB. We compare experimentally to this approach in Table 3.

Knowledge base completion. Following Yang et al. (2017) we treat KB completion as an inference task, analogous to KBQA: a query $q$ is a relation name and a head entity $\mathbf{x}$, and from this we predict a set of tail entities. We assume the answers are computed with the disjunction of multiple inference chains of varying length. Each inference chain has a maximum length of $T$ and we build $N$ distinct inference chains in total, using this model (where $\mathbf{x}_i^0 = \mathbf{x}$ for every chain $i$):

$$
\text{for } i = 1, \ldots, N \text{ and } t = 1, \ldots, T: \quad \mathbf{r}_{i}^{t} = f_{i}^{t}(q); \quad \mathbf{x}_{i}^{t} = \text{follow}(\mathbf{x}_{i}^{t-1}, \mathbf{r}_{i}^{t}) + \mathbf{x}_{i}^{t-1}
$$

The final output is a softmax of the mix of all the $\mathbf{x}_i^T$'s: i.e., we let $\hat{\mathbf{a}} = \text{softmax}(\sum_{i \in \{1 \dots N\}} \mathbf{x}_i^T)$. The update $\mathbf{x}_i^{t} = \text{follow}(\mathbf{x}_i^{t-1}, \mathbf{r}_i^{t}) + \mathbf{x}_i^{t-1}$ gives the model access to the outputs of all chains of length less than $t$ (for more intuition see Appendix E). The encoding of $q$ is based on a lookup table, and each $f_i^t$ is a learned linear transformation of $q$'s embedding.

An encoder-decoder architecture for varying inferential structures. To explore performance on more complex reasoning tasks, we generated simple artificial natural-language sentences describing longer chains of relationships on a 10-by-10 grid. For this task we used an encoder-decoder model which emits chains of relation-set following operations. The question is encoded with the final hidden state of an LSTM, written here $\mathbf{h}^0$.
We then generate a reasoning chain of length up to $T$ using a decoder LSTM. At iteration $t$, the decoder emits a scalar probability of "stopping", $p^t$, and a distribution over relations to follow, $\mathbf{r}^t$, and then, as we did for the KBQA tasks, sets $\mathbf{x}^t = \text{follow}(\mathbf{x}^{t-1}, \mathbf{r}^t)$. Finally the decoder updates its hidden state to $\mathbf{h}^t$ using an LSTM cell that "reads" the "input" $\mathbf{r}^{t-1}$. For each step $t$, the model thus contains the steps

$$
p^{t} = f_{p}(\mathbf{h}^{t-1}); \quad \mathbf{r}^{t} = f_{r}(\mathbf{h}^{t-1}); \quad \mathbf{x}^{t} = \text{follow}(\mathbf{x}^{t-1}, \mathbf{r}^{t}); \quad \mathbf{h}^{t} = \mathrm{LSTM}(\mathbf{h}^{t-1}, \mathbf{r}^{t-1})
$$

The final predicted location is a mixture of all the $\mathbf{x}^t$'s weighted by the probability of stopping $p^t$ at iteration $t$, i.e., $\hat{\mathbf{a}} = \text{softmax}(\sum_{t=1}^{T} \mathbf{x}^t \cdot p^t \prod_{t' < t} (1 - p^{t'}))$. The function $f_r$ is a softmax over a linear projection, and $f_p$ is a logistic function. In the experiments, we trained on 360,000 sentences requiring between 1 and $T$ hops and tested on an additional 12,000 sentences.

Experimental results. We next consider the performance of these models relative to strong baselines for each task. We emphasize that our goal here is not to challenge the current state of the art on any particular benchmark, and clearly there are many ways the models of this paper could be improved. (For instance, our question encodings are based on word2vec, rather than contextual encodings (Devlin et al., 2018), and likewise relations are predicted with simple linear classifiers, rather than, say, attention queries over some semantically meaningful space, such as might be produced with language models or KB embedding approaches (Bordes et al., 2013).)
Rather, our contribution is to present a generally useful scheme for including symbolic KB reasoning in a model, and we have thus focused on describing simple, easily understood models that do this for several tasks. However, it is important to confirm experimentally that the reified KB models "work", e.g., that they are amenable to use of standard optimizers.

Performance (using Hits@1) of our models on the KBQA tasks is shown in Table 3. For the non-synthetic tasks we also compare to a Key-Value Memory Network (KV-Mem) baseline (Miller et al., 2016). For the smaller MetaQA dataset, KV-Mem is initialized with all facts within 3 hops of the query entities, and for WebQuestionsSP it is initialized by a random-walk process seeded by the query entities (see (Sun et al., 2018; Zhang et al., 2018) for details). ReifKB consistently outperforms the baseline, dramatically so for longer reasoning chains. The synthetic grid task shows that there is very little degradation as chain length increases, with Hits@1 for 10 hops still $89.7\%$. It also illustrates the ability to predict entities in a KB, as well as relations.

We also compare these results to three much more complex architectures that perform end-to-end question answering in the same setting used here: VRN (Zhang et al., 2018), GRAFT-Net (Sun et al., 2018), and PullNet (Sun et al., 2019). All three systems build question-dependent subgraphs of the
| | ReifKB (ours) | ReifKB + mask | KV-Mem (baseline) | VRN | GRAFT-Net | PullNet |
|---|---|---|---|---|---|---|
| WebQSP | 52.7 | – | 46.7 | – | 67.8 | 68.1 |
| MetaQA 1-hop | 96.2 | – | 95.8 | 97.5 | 97.0 | 97.0 |
| MetaQA 2-hop | 81.1 | 95.4 | 25.1 | 89.9 | 94.8 | 99.9 |
| MetaQA 3-hop | 72.3 | 79.7 | 10.1 | 62.5 | 77.2 | 91.4 |
| Grid 5-hop | 98.4 | – | – | – | – | – |
| Grid 10-hop | 89.7 | – | – | – | – | – |

| Method | Non-differentiable components of architecture |
|---|---|
| KV-Mem | initial memory retrieval |
| VRN | question-specific subgraph retrieval |
| GRAFT-Net | question-specific subgraph retrieval |
| PullNet | all iterative retrievals |
| ReifKB (ours) | none |
KB, and then use graph CNN-like methods (Kipf & Welling, 2016) to "reason" with these graphs. Although not superior, the ReifKB model is competitive with these approaches, especially on the most difficult 3-hop setting.

A small extension to this model is to mask the seed entities out of the answers (see Appendix E). This model (given as ReifKB + mask) has better performance than GRAFT-Net on 2-hop and 3-hop questions.

For KB completion, we evaluated the model on the NELL-995 dataset (Xiong et al., 2017), which is paired with a KB with 154k facts, 75k entities, and 200 relations. On the left of Table 4 we compare our model with three popular embedding approaches (results are from Das et al. (2017)). The reified KB model outperforms DistMult (Yang et al., 2014), is slightly worse than ConvE (Dettmers et al., 2018), and is comparable to ComplEx (Trouillon et al., 2017).

The competitive performance of the ReifKB model is perhaps surprising, since it has many fewer parameters than the baseline models: only one float and two integers per KB triple, plus a small number of parameters to define the $f_i^t$ functions for each relation. The ability to use fewer parameters is directly related to the fact that our model performs inference directly on the existing symbolic KB, rather than having to learn embeddings that approximate this inference. Of course, since the KB is incomplete, some learning is still required, but the learning is quite different: the system learns logical inference chains in the incomplete KB that approximate a target relation. In this setting for KBC, the ability to perform logical inference "out of the box" appears to be very advantageous.

Another relative disadvantage of KB embedding methods is that KB embeddings are generally transductive: they only make predictions for entities seen in training.
As a non-transductive baseline, we also compared to the MINERVA model, which uses reinforcement learning (RL) methods to learn how to traverse a KB to find a desired answer. Although RL methods are less suitable as "neural modules", MINERVA is arguably a plausible competitor to end-to-end learning with a reified KB.

MINERVA slightly outperforms our simple KB completion model on the NELL-995 task. However, unlike our model, MINERVA is trained to find a single answer, rather than to infer a set of answers. To explore this difference, we compared to MINERVA on the grid task under two conditions: (1) the KB relations are the grid directions north, south, east and west, so the output of the target chain is always a single grid location, and (2) the KB relations also include a "vertical move" (north or south) and a "horizontal move" (east or west), so the result of the target chain can be a set of locations. As expected, MINERVA's performance drops dramatically in the second case, from $99.3\%$

Table 3: Hits@1 on the KBQA datasets. Results for KV-Mem and VRN on MetaQA are from (Zhang et al., 2018); results for GRAFT-Net, PullNet and KV-Mem on WebQSP are from (Sun et al., 2018) and (Sun et al., 2019).
| NELL-995 | H@1 | H@10 |
|---|---|---|
| ReifKB (Ours) | 64.1 | 82.4 |
| DistMult* | 61.0 | 79.5 |
| ComplEx* | 61.2 | 82.7 |
| ConvE* | 67.2 | 86.4 |

| | ReifKB (Ours) | MINERVA |
|---|---|---|
| NELL-995 | 64.1 | 66.3 |
| Grid with seed entity: 10-hop NSEW | 98.9 | 99.3 |
| Grid with seed entity: 10-hop NSEW-VH | 73.6 | 34.4 |
| MetaQA 3-hop | 72.3 | 41.7 |
Table 4: Left: Hits@1 and Hits@10 for KB completion on NELL-995. Starred KB completion methods are transductive, and do not generalize to entities not seen in training. Right: Comparison to MINERVA on several tasks for Hits@1.
| | NELL-995 | MetaQA-3hop | WebQuestionsSP |
|---|---|---|---|
| # Facts | 154,213 | 196,453 | 43,724,175 |
| # Entities | 75,492 | 43,230 | 12,942,798 |
| # Relations | 200 | 96 | 616 |
| Time (seconds) | 44.3 | 72.6 | 1820 |
![](images/c70f98c410bb2b1fc3be684a44c85c15eadf6758032a1c40f98e39b31fe4c6de.jpg)

Table 5: Left, time to run 10K examples for KBs of different size. Right, time for 10k examples vs Hits@1 performance for ReifKB compared to three baselines on MetaQA-3hop questions.

Hits@1 to $34.4\%$, while our model's performance is more robust. MetaQA answers can also be sets, so we also modified MetaQA so that MINERVA could be used (by making the non-entity part of the sentence the "relation" input and the seed entity the "start node" input) and noted a similarly poor performance for MINERVA. These results are shown on the right of Table 4.

In Table 5 we compare the training time of our model with minibatch size of 10 on NELL-995, MetaQA, and WebQuestionsSP. With over 40 million facts and nearly 13 million entities from Freebase, it takes less than 10 minutes to run one epoch over WebQuestionsSP (with 3097 training examples) on four P100 GPUs. In the accompanying plot, we also summarize the tradeoffs between accuracy and training time for our model and three baselines on the MetaQA 3-hop task. (Here ideal performance is toward the upper left of the plot.) The state-of-the-art PullNet (Sun et al., 2019) system, which uses a learned method to incrementally retrieve from the KB, is about 15 times slower than the reified KB system. GRAFT-Net is only slightly less accurate, but also only slightly faster: recall that GRAFT-Net uses a heuristically selected subset (of up to 500 triples) from the KB for each query, while our system uses the full KB. Here the full KB is about 400 times as large as the question-specific subset used by GRAFT-Net. A key-value memory baseline including the full KB is nearly three times as slow as our system, while also performing quite poorly.

# 4 RELATED WORK

The relation-set following operation using reified KBs is implemented in an open-source package called NQL, for neural query language.
NQL implements a broader range of operations for manipulating KBs, which are described in a companion paper (Cohen et al., 2019). This paper focuses on the implementation and evaluation of the relation-set following operation with different KB representations, issues not covered in the companion paper.

TensorLog (Cohen et al., 2017) is a probabilistic logic which can also be compiled to TensorFlow, and hence is another differentiable approach to neuralizing a KB. TensorLog is also based on sparse matrices, but it does not support relation sets, making it unnatural to express the models shown in this paper, and it does not use the more efficient reified KB representation. The differentiable theorem prover (DTP) is another differentiable logic (Rocktäschel & Riedel, 2017), but DTP appears to be much less scalable: it has not been applied to KBs larger than a few thousand triples. The Neural ILP system (Yang et al., 2017) uses approaches related to late mixing together with an LSTM controller to perform KB completion and some simple QA tasks, but it is a monolithic architecture focused on rule-learning; in contrast we propose a re-usable neural component, which can be used as a component in many different architectures, together with a scalable implementation of it. It has also been reported that Neural ILP does not scale to the size of the NELL-995 task (Das et al., 2017).

The goals of this paper are related to KB embedding methods, but distinct. In KB embedding, models are generally fully differentiable, but it is not considered necessary (or even desirable) to accurately match the behavior of inference in the original KB. Being able to construct a learned approximation of a symbolic KB is undeniably useful in some contexts, but embedded KBs also have many disadvantages. In particular, they are much larger than a reified KB, with many more learned parameters, typically a long dense vector for every KB entity.
Embedded models are typically evaluated by their ability to score a single triple accurately, and many models are not capable of executing multi-step KB inferences efficiently; further, models that do allow multi-step inference are known to produce cascaded errors on long reasoning chains (Guu et al., 2015; Hamilton et al., 2018). In contrast we focus on accurate models of reasoning in a symbolic KB, which requires consideration of novel scalability issues associated with sparse matrix representations.

Mathematically, our definition of relation-set following is much like the bilinear model for path following from Guu et al. (2015); however, we generalize this to path queries that include weighted sets of relations, allowing the relations in paths to be learned. Similar differences apply to the work of Hamilton et al. (2018), which extends the work of Guu et al. (2015) to include intersection operations. The vector representation used here for weighted sets in a reified KB makes intersection trivial to implement, as intersection corresponds to the Hadamard product. Conveniently, set union corresponds to vector sum, and the complement of $X$ is $\mathbf{1} - \mathbf{x}$, which is perhaps why only a single additional neural operation is needed to support the KB reasoning required by the five benchmark tasks considered here.

Neural architectures like memory networks (Weston et al., 2014), or other architectures that use attention over some data structure approximating assertions (Andreas et al., 2016; Gupta & Lewis, 2018), can be used to build soft versions of relation-set following; however, they also do not scale well to large KBs, so they are typically used either with a non-differentiable ad hoc retrieval mechanism, or else in cases where only a small amount of information is relevant to a question (Weston et al., 2015; Zhong et al., 2017).
Similarly, graph CNNs (Kipf & Welling, 2016) can also be used for reasoning, and often do use sparse matrix multiplication, but again existing implementations have not been scaled to tens of millions of triples/edges or millions of entities/graph nodes. Additionally, while graph CNNs have been used for reasoning tasks, the formal connection between them and logical reasoning remains unclear, whereas there is a precise connection between relation-set following and inference.

Reinforcement learning (RL) methods have been used to learn mappings from natural-language questions to non-differentiable logical representations (Liang et al., 2016; 2018) and have also been applied to KB completion tasks (Das et al., 2017; Xiong et al., 2017). Above we compared experimentally to MINERVA, one such method; however, the gradient-based approaches enabled by our methods are generally preferred as being easier to implement and tune on new problems, and easier to combine in a modular way with other architectural elements.

# 5 CONCLUSIONS

We introduced here a novel way of representing a symbolic knowledge base (KB), called a sparse-matrix reified KB. This representation enables neural modules that are fully differentiable, faithful to the original semantics of the KB, expressive enough to model multi-hop inferences, and scalable enough to use with realistically large KBs. In a reified KB, all KB relations are represented with three sparse matrices, which can be distributed across multiple GPUs, and symbolic reasoning on realistic KBs with many relations is much faster than with naive implementations, more than four orders of magnitude faster on synthetic-data experiments compared to naive sparse-matrix implementations.
This new architectural component leads to radically simpler architectures for neural semantic parsing from denotations and KB completion. In particular, it makes it possible to learn neural KBQA models in a completely end-to-end way, mapping from text to KB entity sets, for KBs with tens of millions of triples and entities and hundreds of relations.

# ACKNOWLEDGMENTS

The authors are grateful for comments and suggestions from Fernando Pereira, Bhuwan Dhingra, and many other colleagues on earlier versions of this work.

# REFERENCES

Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265-283, 2016.
Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Neural module networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 39-48, 2016.
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1533-1544, 2013.
Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, pp. 1247-1250. ACM, 2008.
Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems, pp. 2787-2795, 2013.
William W Cohen, Fan Yang, and Kathryn Rivard Mazaitis. TensorLog: Deep learning meets probabilistic DBs. arXiv preprint arXiv:1707.05390, 2017.
William W. Cohen, Matthew Siegler, and R. Alex Hofer.
Neural query language: A knowledge base query language for Tensorflow. CoRR, abs/1905.06209, 2019. URL http://arxiv.org/abs/1905.06209. +Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, Luke Vilnis, Ishan Durugkar, Akshay Krishnamurthy, Alex Smola, and Andrew McCallum. Go for a walk and arrive at the answer: Reasoning over paths in knowledge bases using reinforcement learning. arXiv preprint arXiv:1711.05851, 2017. +Pradeep Dasigi, Matt Gardner, Shikhar Murty, Luke Zettlemoyer, and Eduard Hovy. Iterative search for weakly supervised semantic parsing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2669-2680, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1273. URL https://www.aclweb.org/anthology/N19-1273. +Luc De Raedt, Angelika Kimmig, and Hannu Toivonen. Problog: A probabilistic Prolog and its application in link discovery. In *IJCAI*, volume 7, pp. 2462-2467. Hyderabad, 2007. +Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. Convolutional 2d knowledge graph embeddings. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. +Catherine Finegan-Dollak, Jonathan K Kummerfeld, Li Zhang, Karthik Ramanathan, Sesh Sadasivam, Rui Zhang, and Dragomir Radev. Improving text-to-SQL evaluation methodology. arXiv preprint arXiv:1806.09029, 2018. +Nitish Gupta and Mike Lewis. Neural compositional denotational semantics for question answering. CoRR, abs/1808.09942, 2018. URL http://arxiv.org/abs/1808.09942. +Kelvin Guu, John Miller, and Percy Liang. Traversing knowledge graphs in vector space. arXiv preprint arXiv:1506.01094, 2015. 
+Will Hamilton, Payal Bajaj, Marinka Zitnik, Dan Jurafsky, and Jure Leskovec. Embedding logical queries on knowledge graphs. In Advances in Neural Information Processing Systems, pp. 2026-2037, 2018. + +Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016. +Chen Liang, Jonathan Berant, Quoc Le, Kenneth D Forbus, and Ni Lao. Neural symbolic machines: Learning semantic parsers on freebase with weak supervision. arXiv preprint arXiv:1611.00020, 2016. +Chen Liang, Mohammad Norouzi, Jonathan Berant, Quoc V Le, and Ni Lao. Memory augmented policy optimization for program synthesis and semantic parsing. In Advances in Neural Information Processing Systems, pp. 9994-10006, 2018. +Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pp. 3111-3119, 2013. +Alexander H. Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. CoRR, abs/1606.03126, 2016. URL http://arxiv.org/abs/1606.03126. +Dipendra Misra, Ming-Wei Chang, Xiaodong He, and Wen-tau Yih. Policy shaping and generalized update equations for semantic parsing from denotations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2442-2452, 2018. +Panupong Pasupat and Percy Liang. Inferring logical forms from denotations. arXiv preprint arXiv:1606.06900, 2016. +Tim Rocktäschel and Sebastian Riedel. End-to-end differentiable proving. In Advances in Neural Information Processing Systems, pp. 3788-3800, 2017. +Peter Shaw, Philip Massey, Angelica Chen, Francesco Piccinno, and Yasemin Altun. Generating logical forms from graph representations of text and entities. arXiv preprint arXiv:1905.08407, 2019. 
+Noam Shazeer, Youlong Cheng, Niki Parmar, Dustin Tran, Ashish Vaswani, Penporn Koanantakool, Peter Hawkins, HyoukJoong Lee, Mingsheng Hong, Cliff Young, Ryan Sepassi, and Blake A. Hechtman. Mesh-tensorflow: Deep learning for supercomputers. CoRR, abs/1811.02084, 2018. URL http://arxiv.org/abs/1811.02084. +Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, and William W Cohen. Open domain question answering using early fusion of knowledge bases and text. EMNLP, 2018. +Haitian Sun, Tania Bedrax-Weiss, and William W Cohen. Pullnet: Open domain question answering with iterative retrieval on knowledge bases and text. arXiv preprint arXiv:1904.09537, 2019. +Théo Trouillon, Christopher R Dance, Éric Gaussier, Johannes Welbl, Sebastian Riedel, and Guillaume Bouchard. Knowledge graph completion via complex tensor factorization. The Journal of Machine Learning Research, 18(1):4735-4772, 2017. +Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprint arXiv:1410.3916, 2014. +Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merrienboer, Armand Joulin, and Tomas Mikolov. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015. +Wenhan Xiong, Thien Hoang, and William Yang Wang. Deeppath: A reinforcement learning method for knowledge graph reasoning. arXiv preprint arXiv:1707.06690, 2017. +Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575, 2014. +Fan Yang, Zhilin Yang, and William W Cohen. Differentiable learning of logical rules for knowledge base reasoning. In Advances in Neural Information Processing Systems, pp. 2319-2328, 2017. + +Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. Semantic parsing via staged query graph generation: Question answering with knowledge base. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pp. 1321-1331, 2015.
Wen-tau Yih, Matthew Richardson, Chris Meek, Ming-Wei Chang, and Jina Suh. The value of semantic parse labeling for knowledge base question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pp. 201-206, 2016.
Yuyu Zhang, Hanjun Dai, Zornitsa Kozareva, Alexander J Smola, and Le Song. Variational reasoning for question answering with knowledge graph. In AAAI, 2018.
Victor Zhong, Caiming Xiong, and Richard Socher. Seq2SQL: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103, 2017.

# A ADDITIONAL BACKGROUND AND EXTENSIONS

KBs, entities, relations, and types. In the more general case, a KB consists of entities, relations, and types. We again use $x$ to denote an entity and $r$ to denote a relation. We also assume each entity $x$ has a type, written $type(x)$, and let $N_{\tau}$ denote the number of entities of type $\tau$. Each entity $x$ of type $\tau$ has a unique index $index_{\tau}(x)$, which is an integer between 1 and $N_{\tau}$. We write $x_{\tau,i}$ for the entity that has index $i$ in type $\tau$, or $x_i$ if the type is clear from context.

Every relation $r$ has a subject type $\tau_{\text{subj}}$ and an object type $\tau_{\text{obj}}$, which constrain the types of $x$ and $x'$ for any pair $(x, x') \in r$. Hence $r$ can be encoded as a subset of $\{1, \dots, N_{\tau_{\text{subj}}}\} \times \{1, \dots, N_{\tau_{\text{obj}}}\}$. Relations with the same subject and object types are called type-compatible.
Our differentiable operations are based on typed weighted sets, where again each element $x$ of a weighted set $X$ is associated with a non-negative real number, written $\omega[[x \in X]]$, and we define $\omega[[x \in X]] \equiv 0$ for all $x \notin X$. A set $X$ has a type $type(X) = \tau$, and all members of $X$ must be entities of type $\tau$.

We also assume every relation $r$ in a KB is associated with an entity $x_r$, and hence with an index and a type. Sets of relations $R$ are allowed only if all members are type-compatible. For example $R = \{ \text{writer\_of}, \text{director\_of} \}$ might be a set of type-compatible relations.

A weighted set $X$ of type $\tau$ can be encoded as an entity-set vector $\mathbf{x} \in \mathbb{R}^{N_{\tau}}$, where the $i$-th component of $\mathbf{x}$ is the weight of the $i$-th entity of that type in the set $X$: i.e., $\mathbf{x}[index_{\tau}(x)] = \omega[[x \in X]]$. We also use $type(\mathbf{x})$ to denote the type $\tau$ of the set encoded by $\mathbf{x}$.

A relation $r$ with subject type $\tau_1$ and object type $\tau_2$ can be encoded as a relation matrix $\mathbf{M}_r \in \mathbb{R}^{N_{\tau_1} \times N_{\tau_2}}$.

Background on sparse matrices. A COO encoding consists of an $N_r \times 2$ matrix $\mathbf{Ind}_r$ containing pairs of entity indices, and a parallel vector $\mathbf{w}_r \in \mathbb{R}^{N_r}$ containing the weights of the corresponding entity pairs. In this encoding, if $(i,j)$ is row $k$ of $\mathbf{Ind}_r$, then $\mathbf{M}_r[i,j] = \mathbf{w}_r[k]$, and if $(i,j)$ does not appear in $\mathbf{Ind}_r$, then $\mathbf{M}_r[i,j]$ is zero.

Extension to soft KBs. In the paper, we assume the non-zero weights in a relation matrix $\mathbf{M}_r$ are all equal to 1.0. This can be relaxed: if assertions in a KB are associated with confidences, then this confidence can be stored in $\mathbf{M}_r$.
In this case, the reified KB must be extended to encode the weight for a triple: we find it convenient to redefine $\mathbf{M}_{rel}$ to hold that weight. In particular, if the weight for the $\ell$-th triple $r_{k_\ell}(x_i, x_j)$ is $w_\ell$, then we let

$$
\mathbf{M}_{rel}[\ell, m] \equiv \begin{cases} w_\ell & \text{if } m = k_\ell \\ 0 & \text{else} \end{cases}
$$

# B PROOF OF CLAIM 1

Claim 1 The support of $\text{follow}(\mathbf{x}, \mathbf{r})$ is exactly the set $R$-neighbors$(X)$.

To better understand this claim, let $\mathbf{z} = \text{follow}(\mathbf{x}, \mathbf{r})$. The claim states that $\mathbf{z}$ can approximate the $R$-neighborhood of any hard sets $R, X$ by setting to zero the appropriate components of $\mathbf{x}$ and $\mathbf{r}$. It is also clear that $\mathbf{z}[j]$ decreases when one decreases the weights in $\mathbf{r}$ of the relations that link $x_j$ to entities in $X$, and likewise, $\mathbf{z}[j]$ decreases if one decreases the weights of the entities in $X$ that are linked to $x_j$ via relations in $R$, so there is a smooth, differentiable path to reach this approximation.

More formally, consider first a matrix $\mathbf{M}_r$ encoding a single binary relation $r$, and consider the vector $\mathbf{x}' = \mathbf{x}\mathbf{M}_r$. As weighted sets, $X$ and $r$ have non-negative entries, so clearly for all $j$,

$$
\mathbf{x}'[j] \neq 0 \ \text{iff} \ \exists i: \mathbf{M}_r[i, j] \neq 0 \wedge \mathbf{x}[i] \neq 0 \ \text{iff} \ \exists x_i \in X \ \text{so that} \ (x_i, x_j) \in r
$$

and so if $\mathbf{r}$ is a one-hot vector for the set $\{r\}$, then the support of $\text{follow}(\mathbf{x}, \mathbf{r})$ is exactly the set $r$-neighbors$(X)$. Finally, note that the mixture $\mathbf{M}_R = \sum_k \mathbf{r}[k] \mathbf{M}_k$ has the property that $\mathbf{M}_R[i(e_1), i(e_2)] > 0$ exactly when $e_1$ is related to $e_2$ by some relation $r \in R$.
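This claim is easy to check numerically on a small example. Below is a minimal sketch (the four-entity, two-relation KB here is hypothetical) that mixes single-relation matrices into $\mathbf{M}_R$ and takes $\mathbf{x}\mathbf{M}_R$, then inspects the support of the result:

```python
import numpy as np

# Hypothetical toy KB: 4 entities, 2 relations.
# r0 holds for pairs (0,1) and (1,2); r1 holds for (0,3).
N_E = 4
M = np.zeros((2, N_E, N_E))
M[0, 0, 1] = M[0, 1, 2] = 1.0
M[1, 0, 3] = 1.0

def follow(x, r):
    # Mix the relation matrices, M_R = sum_k r[k] M_k, then take x M_R.
    return x @ np.tensordot(r, M, axes=1)

x = np.array([1.0, 0.0, 0.0, 0.0])  # X = {x_0}
r = np.array([1.0, 1.0])            # R = {r0, r1}
z = follow(x, r)
print(np.nonzero(z)[0])             # [1 3], i.e. R-neighbors(X)
```

Zeroing out $\mathbf{r}[1]$ removes entity 3 from the support, matching the smooth path to the hard-set case described above.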
# C MINIBATCHED COMPUTATIONS OF NAIVE AND LATE MIXING

The major problem with naive mixing is that, in the absence of general sparse tensor contractions, it is difficult to adapt to minibatches, i.e., a setting in which $\mathbf{x}$ and $\mathbf{r}$ are replaced with matrices $\mathbf{X}$ and $\mathbf{R}$ with minibatch size $b$. An alternative strategy is late mixing, which mixes the output of many single-relation following steps, rather than mixing the KB itself:

$$
\text{follow}(\mathbf{X}, \mathbf{R}) = \sum_{k=1}^{N_R} (\mathbf{R}[:, k] \cdot \mathbf{X}\mathbf{M}_k)
$$

Here $\mathbf{R}[:, k]$, the $k$-th column of $\mathbf{R}$, is "broadcast" to each element of the matrix $\mathbf{X}\mathbf{M}_k$. As noted in the body of the text, while there are $N_R$ matrices $\mathbf{X}\mathbf{M}_k$, each of size $O(bN_E)$, they need not all be stored at once, so the space complexity becomes $O(bN_E + bN_R + N_T)$; however, we must now sum up $N_R$ dense matrices.

The implementation of relation-set following for the reified KB can be straightforwardly extended to a minibatch:

$$
\text{follow}(\mathbf{X}, \mathbf{R}) = \left(\mathbf{X}\mathbf{M}_{subj}^T \odot \mathbf{R}\mathbf{M}_{rel}^T\right)\mathbf{M}_{obj}
$$

# D DISTRIBUTED MATRIX MULTIPLICATION

Matrix multiplication $\mathbf{x}\mathbf{M}$ was distributed as follows: $\mathbf{x}$ can be split into a "horizontal stacking" of $m$ submatrices, which we write as $[\mathbf{x}_1; \ldots; \mathbf{x}_m]$, and $\mathbf{M}$ can be similarly partitioned into $m^2$ submatrices.
We then have the result that

$$
\mathbf{x}\mathbf{M} = \left[\mathbf{x}_1; \mathbf{x}_2; \ldots; \mathbf{x}_m\right] \left[\begin{array}{cccc} \mathbf{M}_{1,1} & \mathbf{M}_{1,2} & \ldots & \mathbf{M}_{1,m} \\ \vdots & \vdots & & \vdots \\ \mathbf{M}_{m,1} & \mathbf{M}_{m,2} & \ldots & \mathbf{M}_{m,m} \end{array}\right] = \left[\left(\sum_{i=1}^{m} \mathbf{x}_i \mathbf{M}_{i,1}\right); \ldots; \left(\sum_{i=1}^{m} \mathbf{x}_i \mathbf{M}_{i,m}\right)\right]
$$

This can be computed without storing either $\mathbf{x}$ or $\mathbf{M}$ on a single machine, and mathematically it applies to both dense and sparse matrices. In our experiments we distributed the matrices that define a reified KB "horizontally", so that different triple ids $\ell$ are stored on different GPUs.

Specifically, we shard the "triple index" dimension $N_T$ of the matrices $\mathbf{M}_{subj}$, $\mathbf{M}_{rel}$ and $\mathbf{M}_{obj}$ in Eq. 4 to perform distributed relation-set following on the reified KB. Let $\mathbf{M}_{subj,i}$ be the $i$-th shard of the matrix $\mathbf{M}_{subj}$, so that $\mathbf{M}_{subj} = [\mathbf{M}_{subj,1}^T; \ldots; \mathbf{M}_{subj,m}^T]^T \in \mathbb{R}^{N_T \times N_E}$; $\mathbf{M}_{obj}$ and $\mathbf{M}_{rel}$ are represented in a similar way. A distributed relation-set following is then computed as a combination of relation-set following results on all shards of the KB:
$$
\begin{array}{l}
\text{follow}(\mathbf{x}, \mathbf{r}) = \left(\mathbf{x}\mathbf{M}_{subj}^T \odot \mathbf{r}\mathbf{M}_{rel}^T\right)\mathbf{M}_{obj} \\
\quad = \left(\left[\mathbf{x}\mathbf{M}_{subj,1}^T; \ldots; \mathbf{x}\mathbf{M}_{subj,m}^T\right] \odot \left[\mathbf{r}\mathbf{M}_{rel,1}^T; \ldots; \mathbf{r}\mathbf{M}_{rel,m}^T\right]\right) \left[\begin{array}{c} \mathbf{M}_{obj,1} \\ \vdots \\ \mathbf{M}_{obj,m} \end{array}\right] \quad (5) \\
\quad = \displaystyle\sum_{i=1}^{m} \left(\mathbf{x}\mathbf{M}_{subj,i}^T \odot \mathbf{r}\mathbf{M}_{rel,i}^T\right)\mathbf{M}_{obj,i} \quad (6)
\end{array}
$$

This method can be easily extended to a minibatch of examples $\mathbf{X}$.

# E EXPERIMENTAL DETAILS

Reproducing experiments. To reproduce these experiments, first download and install the Google language package. Many of the experiments in this paper can be reproduced using scripts stored in subdirectories of the source directory language/nql/demos: for example, the scalability experiments of Figure 1 can be performed using scripts in language/nql/demos/gridworld_scaling/.

Grid experiments. In the grid experiments, the entity vector $\mathbf{x}$ is a randomly-chosen singleton set, and the relation vector $\mathbf{r}$ weights relations roughly uniformly: more specifically, each relation has weight $1 + \epsilon$, where $\epsilon$ is drawn uniformly at random between 0 and 0.001. We vary the number of relations by inventing $m$ new relation names and assigning existing grid edges to each new relation. These experiments were conducted on a Titan Xp GPU with 12GB of memory.

For key-value networks, the key is the concatenation of a relation and a subject entity, and the value is the object entity.
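This key-value encoding can be sketched as follows (the toy embeddings and triples below are hypothetical; the real model learns its embeddings, and the lookup is soft attention over all memory slots):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                   # embedding size (hypothetical)
ent = rng.normal(size=(4, d))           # entity embeddings
rel = rng.normal(size=(2, d))           # relation embeddings

# One memory slot per (subject, relation, object) triple:
# key = [relation ; subject], value = object.
triples = [(0, 0, 1), (1, 0, 2), (0, 1, 3)]
keys = np.stack([np.concatenate([rel[r], ent[s]]) for s, r, _ in triples])
vals = np.stack([ent[o] for _, _, o in triples])

def kv_read(query):
    # Soft retrieval: softmax attention over keys, weighted sum of values.
    scores = keys @ query
    att = np.exp(scores - scores.max())
    att /= att.sum()
    return att @ vals

# Querying with [rel 0 ; entity 0] softly retrieves the object of (0, 0, 1).
out = kv_read(np.concatenate([rel[0], ent[0]]))
```

Note that every KB triple occupies one memory slot, which is why memory cost, not just accuracy, limits how much of the KB such a baseline can hold.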
We considered only the run-time for queries on an untrained, randomly-initialized network (since run-time performance on a trained network would be the same); however, it should be noted that considerable time might be needed to train the key-value memory to approximate the KB. (In fact, it is not obvious under what conditions a KB can be approximated well by a key-value memory.)

We do not show results on the grid task for smaller minibatch sizes, but both reified and late mixing are about $40\mathrm{x}$ slower with $b = 1$ than with $b = 128$.

WebQuestionsSP experiments. For efficiency, on this problem we exploit the type structure of the problem (see Appendix A). Our model uses two types of nodes: CVT nodes and entity nodes. The model also uses three types of relations: relations mapping entities to entities, relations mapping entities to CVT nodes, and relations mapping CVT nodes to entity nodes.

MetaQA experiments. An example of a 2-hop question in MetaQA could be "Who co-starred with Robert Downey Jr. in their movies?", and the answer would be a set of actor entities, e.g., "Chris Hemsworth", "Thomas Stanley", etc. Triples in the knowledge base are represented as (subject, relation, object) triples, e.g., ("Robert Downey Jr.", "act_in", "Avengers: Endgame"), ("Avengers: Endgame", "stars", "Thomas Stanley"), etc. The quoted strings here all indicate KB entities.

We also observed that the MetaQA 2-hop and 3-hop questions often exclude the seed entities (e.g., "other movies with the same director as Pulp Fiction"). This can be modeled by masking out seed entities from the predictions after the second hop (ReifKB + mask in the table).

Timing on MetaQA and other natural problems. The raw data for the bubble plot of Table 5 is below.
| Time (seconds) | Accuracy (hits@1) | Method |
| --- | --- | --- |
| 72.6 | 79.7 | Reif KB |
| 189.8 | 10.1 | KV-mem |
| 28.9 | 77.2 | GRAFT-Net |
| 1131.0 | 91.4 | PullNet |
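The sharded computation of relation-set following in Equations 5 and 6 is plain linear algebra and can be checked directly. Below is a minimal NumPy sketch in which small dense random matrices stand in for the large, sparse reified KB matrices $\mathbf{M}_{subj}$, $\mathbf{M}_{rel}$, $\mathbf{M}_{obj}$; all sizes are hypothetical:

```python
import numpy as np

# Dense stand-ins for the reified KB matrices of Equations 5-6.
# In the paper these are large and sparse; the sizes here are hypothetical.
rng = np.random.default_rng(0)
n_triples, n_entities, n_relations, m = 12, 5, 3, 4

M_subj = rng.random((n_triples, n_entities))
M_rel = rng.random((n_triples, n_relations))
M_obj = rng.random((n_triples, n_entities))

x = rng.random(n_entities)   # weighted set of entities
r = rng.random(n_relations)  # weighted set of relations

def follow(x, r):
    # Equation 5: (x M_subj^T  ⊙  r M_rel^T) M_obj
    return (x @ M_subj.T * (r @ M_rel.T)) @ M_obj

def follow_sharded(x, r):
    # Equation 6: the same computation, split over m horizontal shards
    out = np.zeros(n_entities)
    for Ms, Mr, Mo in zip(np.array_split(M_subj, m),
                          np.array_split(M_rel, m),
                          np.array_split(M_obj, m)):
        out += (x @ Ms.T * (r @ Mr.T)) @ Mo
    return out

assert np.allclose(follow(x, r), follow_sharded(x, r))
```

The sharded form distributes the triple dimension across $m$ partitions while producing exactly the same entity vector, which is what makes the horizontal partitioning in Equation 6 safe.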
Discussion of the KB completion model. The KB completion model is

$$
\text{for } i = 1, \ldots, N \text{ and } t = 1, \ldots, T: \quad \mathbf{r}_i^t = f_i^t(q); \quad \mathbf{x}_i^t = follow(\mathbf{x}_i^{t-1}, \mathbf{r}_i^t) + \mathbf{x}_i^{t-1}
$$

It may not be immediately obvious why we used

$$
\mathbf{x}_i^t = follow(\mathbf{x}_i^{t-1}, \mathbf{r}_i^t) + \mathbf{x}_i^{t-1}
$$

instead of the simpler

$$
\mathbf{x}_i^t = follow(\mathbf{x}_i^{t-1}, \mathbf{r}_i^t)
$$

In the main text, we say that this "gives the model access to outputs of all chains of length less than $t$". This statement is probably easiest to understand by considering a concrete example. Let us simplify notation slightly by dropping the subscripts and writing $follow(\mathbf{x}^{t-1}, \mathbf{r}^t)$ as $f^t(\mathbf{x}^{t-1})$. Now expand the definition of $\mathbf{x}^t$ for a few small values of $t$, using the linearity of relation-set following where appropriate to simplify:

$$
\begin{array}{rl}
\mathbf{x}^1 &= f^1(\mathbf{x}^0) + \mathbf{x}^0 \\
\mathbf{x}^2 &= f^2(\mathbf{x}^1) + \mathbf{x}^1 \\
&= f^2\left(f^1(\mathbf{x}^0) + \mathbf{x}^0\right) + \left(f^1(\mathbf{x}^0) + \mathbf{x}^0\right) \\
&= f^2(f^1(\mathbf{x}^0)) + f^2(\mathbf{x}^0) + f^1(\mathbf{x}^0) + \mathbf{x}^0 \\
\mathbf{x}^3 &= f^3(\mathbf{x}^2) + \mathbf{x}^2 \\
&= f^3\left(f^2(f^1(\mathbf{x}^0)) + f^2(\mathbf{x}^0) + f^1(\mathbf{x}^0) + \mathbf{x}^0\right) + f^2(f^1(\mathbf{x}^0)) + f^2(\mathbf{x}^0) + f^1(\mathbf{x}^0) + \mathbf{x}^0 \\
&= f^3(f^2(f^1(\mathbf{x}^0))) + f^3(f^2(\mathbf{x}^0)) + f^3(f^1(\mathbf{x}^0)) + f^3(\mathbf{x}^0) + f^2(f^1(\mathbf{x}^0)) + f^2(\mathbf{x}^0) + f^1(\mathbf{x}^0) + \mathbf{x}^0
\end{array}
$$

A pattern is now clear: with this recursive definition, $\mathbf{x}^t$ expands into a mixture of many paths, each of which applies a different subset of $f^1, \ldots, f^t$ (in order) to the initial input $\mathbf{x}^0$. Since the weights of the mixture can to a large extent be controlled by varying the norms of the relation vectors $\mathbf{r}^1, \ldots, \mathbf{r}^t$, this "kernel-like trick" increases the expressive power of the model without introducing new parameters. The final mixture of the $\mathbf{x}^t$'s seems to provide a bias towards accepting the outputs of shorter paths, which appears to be useful in practice.
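Since relation-set following is linear in its first argument, the path-mixture expansion above can be verified numerically. A minimal sketch, with hypothetical random linear maps A1, A2 standing in for $f^1$ and $f^2$:

```python
import numpy as np

# Numerical check of the expansion above: with linear maps A1, A2 standing
# in for f^1 and f^2, the recursion x^t = f^t(x^{t-1}) + x^{t-1} expands
# into a sum over all ordered subsets of {f^1, ..., f^t} applied to x^0.
rng = np.random.default_rng(1)
d = 4  # hypothetical entity-vector dimension
A1, A2 = rng.random((d, d)), rng.random((d, d))
x0 = rng.random(d)

x1 = A1 @ x0 + x0          # x^1 = f^1(x^0) + x^0
x2 = A2 @ x1 + x1          # x^2 = f^2(x^1) + x^1

# Expanded form: x^2 = f^2(f^1(x^0)) + f^2(x^0) + f^1(x^0) + x^0
expanded = A2 @ (A1 @ x0) + A2 @ x0 + A1 @ x0 + x0
assert np.allclose(x2, expanded)
```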
diff --git a/scaleequivariantsteerablenetworks/full.md b/scaleequivariantsteerablenetworks/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a4d9db8df5a48ef2dd56c4cd89c6a19752d5ff5a --- /dev/null +++ b/scaleequivariantsteerablenetworks/full.md

# SCALE-EQUIVARIANT STEERABLE NETWORKS

Ivan Sosnovik*, Michal Szmaja†, Arnold Smeulders

UvA-Bosch Delta Lab

University of Amsterdam

# ABSTRACT

The effectiveness of Convolutional Neural Networks (CNNs) has been substantially attributed to their built-in property of translation equivariance. However, CNNs do not have embedded mechanisms to handle other types of transformations. In this work, we pay attention to scale changes, which regularly appear in various tasks due to the changing distances between the objects and the camera. First, we introduce the general theory for building scale-equivariant convolutional networks with steerable filters. We develop scale-convolution and generalize other common blocks to be scale-equivariant. We demonstrate the computational efficiency and numerical stability of the proposed method. We compare the proposed models to the previously developed methods for scale equivariance and local scale invariance.
We demonstrate state-of-the-art results on the MNIST-scale dataset and on the STL-10 dataset in the supervised learning setting. + +# 1 INTRODUCTION + +Scale transformations occur in many image and video analysis tasks. They are a natural consequence of the variable distances among objects, or between objects and the camera. Such transformations result in significant changes in the input space which are often difficult for models to handle appropriately without careful consideration. At a high level, there are two modeling paradigms which allow a model to deal with scale changes: models can be endowed with an internal notion of scale and transform their predictions accordingly, or instead, models can be designed to be specifically invariant to scale changes. In image classification, when scale changes are commonly a factor of 2, it is often sufficient to make class prediction independent of scale. However, in tasks such as image segmentation, visual tracking, or object detection, scale changes can reach factors of 10 or more. In these cases, it is intuitive that the ideal prediction should scale proportionally to the input. For example, the segmentation map of a nearby pedestrian should be easily converted to that of a distant person simply by downscaling. + +Convolutional Neural Networks (CNNs) demonstrate state-of-the-art performance in a wide range of tasks. Yet, despite their built-in translation equivariance, they do not have a particular mechanism for dealing with scale changes. One way to make CNNs account for scale is to train them with data augmentation Barnard & Casasent (1991). This is, however, suitable only for global transformations. As an alternative, Henriques & Vedaldi (2017) and Tai et al. (2019) use the canonical coordinates of scale transformations to reduce scaling to well-studied translations. While these approaches do allow for scale equivariance, they consequently break translation equivariance. 
+ +Several attempts have thus been made to extend CNNs to both scale and translation symmetry simultaneously. Some works use input or filter resizing to account for scaling in deep layers Xu et al. (2014); Kanazawa et al. (2014). Such methods are suboptimal due to the time complexity of tensor resizing and the need for interpolation. In Ghosh & Gupta (2019) the authors pre-calculate filters defined on several scales to build scale-invariant networks, while ignoring the important case of scale equivariance. In contrast, Worrall & Welling (2019) employ the theory of semigroup equivariant networks with scale-space as an example; however, this method is only suitable for integer downscale factors and therefore limited. + +In this paper we develop a theory of scale-equivariant networks. We demonstrate the concept of steerable filter parametrization which allows for scaling without the need for tensor resizing. Then we derive scale-equivariant convolution and demonstrate a fast algorithm for its implementation. Furthermore, we experiment to determine to what degree the mathematical properties actually hold true. Finally, we conduct a set of experiments comparing our model with other methods for scale equivariance and local scale invariance. + +The proposed model has the following advantages compared to other scale-equivariant models: + +1. It is equivariant to scale transformations with arbitrary discrete scale factors and is not limited to either integer scales or scales tailored by the image pixel grid. +2. It does not rely on any image resampling techniques during training, and therefore, produces deep scale-equivariant representations free of any interpolation artifacts. +3. The algorithm is based on the combination of tensor expansion and 2-dimensional convolution, and demonstrates the same computation time as the general CNN with a comparable filter bank. 
# 2 PRELIMINARIES

Before we move into scale-equivariant mappings, we discuss some aspects of equivariance, scaling transformations, symmetry groups, and the functions defined on them. For simplicity, in this section, we consider only 1-dimensional functions. The generalization to higher-dimensional cases is straightforward.

Equivariance Let us consider some mapping $g$. It is equivariant under $L_{\theta}$ if and only if there exists $L_{\theta}'$ such that $g \circ L_{\theta} = L_{\theta}' \circ g$. In case $L_{\theta}'$ is the identity mapping, the function $g$ is invariant.

In this paper we consider scaling transformations. In order to guarantee the equivariance of the predictions to such transformations, and to improve the performance of the model, we seek to incorporate this property directly inside CNNs.

Scaling Given a function $f: \mathbb{R} \to \mathbb{R}$, a scale transformation is defined as follows:

$$
L_{s}[f](x) = f\left(s^{-1}x\right), \quad \forall s > 0 \tag{1}
$$

We refer to cases with $s > 1$ as upscale and to cases with $s < 1$ as downscale. If we convolve the downscaled function with an arbitrary filter $\psi$ and perform a simple change of variables inside the integral, we get the following property:

$$
\begin{array}{rl}
\left[L_{s}[f] \star \psi\right](x) &= \int_{\mathbb{R}} L_{s}[f](x')\, \psi(x' - x)\, dx' = \int_{\mathbb{R}} f(s^{-1}x')\, \psi(x' - x)\, dx' \\
&= s \int_{\mathbb{R}} f(s^{-1}x')\, \psi\left(s(s^{-1}x' - s^{-1}x)\right)\, d(s^{-1}x') = s L_{s}\left[f \star L_{s^{-1}}[\psi]\right](x)
\end{array} \tag{2}
$$
Equation 2 shows us that the standard convolution is not scale-equivariant.

Steerable Filters In order to make computations simpler, we reparametrize $\psi_{\sigma}(x) = \sigma^{-1}\psi(\sigma^{-1}x)$, which has the following property:

$$
L_{s^{-1}}\left[\psi_{\sigma}\right](x) = \psi_{\sigma}(sx) = s^{-1}\psi_{s^{-1}\sigma}(x) \tag{3}
$$

This gives a shorter version of Equation 2:

$$
L_{s}[f] \star \psi_{\sigma} = L_{s}[f \star \psi_{s^{-1}\sigma}] \tag{4}
$$

We will refer to such a parameterization of filters as steerable filters, because scaling these filters amounts to a transformation of their parameters. Note that we may construct steerable filters from any function. This has the important consequence that it does not restrict our approach; rather, it will make the analysis easier for discrete data. Moreover, note that any linear combination of steerable filters is still steerable.

Scale-Translation Group All possible scales form the scaling group $S$. Here we consider the discrete scale group, i.e. scales of the form $\ldots, a^{-2}, a^{-1}, 1, a, a^{2}, \ldots$ with base $a$ as a parameter of our method. Analysis of this group by itself breaks the translation equivariance of CNNs. Thus we seek to incorporate scale and translation symmetries into CNNs, and, therefore, consider the Scale-Translation Group $H$. It is a semidirect product of the scaling group $S$ and the group of translations $T \cong \mathbb{R}$. In other words: $H = \{(s, t) \mid s \in S, t \in T\}$. For multiplication of group elements we have $(s_2, t_2) \cdot (s_1, t_1) = (s_2 s_1, s_2 t_1 + t_2)$, and for the inverse $(s_2, t_2)^{-1} \cdot (s_1, t_1) = (s_2^{-1} s_1, s_2^{-1}(t_1 - t_2))$. Additionally, for the corresponding scaling and translation transformations we have $L_{st} = L_s L_t \neq L_t L_s$, which means that the order of the operations matters.

From now on, we will work with functions defined on groups, i.e.
mappings $H \to \mathbb{R}$. Note that a simple function $f: \mathbb{R} \to \mathbb{R}$ may be considered as a function on $H$ with constant value along the $S$ axis. Therefore, Equation 4 holds true for functions on $H$ as well. One thing we should keep in mind is that when we apply $L_{s}$ to functions on $H$ and on $\mathbb{R}$ we use different notations. For example, $L_{s}[f](x') = f(s^{-1}x')$ and $L_{s}[f](s', t') = f((s, 0)^{-1}(s', t')) = f(s^{-1}s', s^{-1}t')$.

Group-Equivariant Convolution Given a group $G$ and two functions $f$ and $\psi$ defined on it, $G$-equivariant convolution is given by

$$
[f \star_{G} \psi](g) = \int_{G} f(g')\, L_{g}[\psi](g')\, d\mu(g') = \int_{G} f(g')\, \psi(g^{-1}g')\, d\mu(g') \tag{5}
$$

Here $\mu(g')$ is the Haar measure, also known as the invariant measure Folland (2016). For $T \cong \mathbb{R}$ we have $d\mu(g') = dg'$. For discrete groups, the Haar measure is the counting measure, and integration becomes a discrete sum. This formula tells us that the output of the convolution evaluated at point $g$ is the inner product between the function $f$ and the transformed filter $L_g[\psi]$.

# 3 SCALE-EQUIVARIANT MAPPINGS

Now we define the main building blocks of scale-equivariant models.

Scale Convolution In order to derive scale convolution, we start from group-equivariant convolution with $G = H$. We first use the property of the semidirect product of groups which splits the integral, then choose the appropriate Haar measures, and finally use the properties of steerable filters.
Given the function $f(s, t)$ and a steerable filter $\psi_{\sigma}(s, t)$ defined on $H$, a scale convolution is given by:

$$
\begin{array}{rl}
[f \star_{H} \psi_{\sigma}](s, t) &= \int_{S} \int_{T} f(s', t')\, L_{st}[\psi_{\sigma}](s', t')\, d\mu(s')\, d\mu(t') \\
&= \sum_{s'} \int_{T} f(s', t')\, \psi_{s\sigma}(s^{-1}s', t' - t)\, dt' = \sum_{s'} \left[f(s', \cdot) \star \psi_{s\sigma}(s^{-1}s', \cdot)\right](t)
\end{array} \tag{6}
$$

And for the case of $C_{\mathrm{in}}$ input and $C_{\mathrm{out}}$ output channels we have:

$$
\left[f \star_{H} \psi_{\sigma}\right]_{m}(s, t) = \sum_{n=1}^{C_{\mathrm{in}}} \sum_{s'} \left[f_{n}(s', \cdot) \star \psi_{n, m, s\sigma}(s^{-1}s', \cdot)\right](t), \quad m = 1 \dots C_{\mathrm{out}} \tag{7}
$$

The proof of the equivariance of this convolution to transformations from $H$ is given in Appendix A.

Kondor & Trivedi (2018) prove that a feed-forward neural network is equivariant to transformations from $G$ if and only if it is constructed from $G$-equivariant convolutional layers. Thus Equation 7 gives the most general form of scale-equivariant layers, which allows for building scale-equivariant convolutional networks with this choice of $S$. We will refer to models using scale-equivariant layers with steerable filters as Scale-Equivariant Steerable Networks, or shortly SESN$^1$.

Nonlinearities In order to guarantee the equivariance of the network to scale transformations, we use scale-equivariant nonlinearities. We are free to use simple point-wise nonlinearities.
Indeed, point-wise nonlinearities $\nu$, like ReLU, commute with scaling transformations:

$$
\begin{array}{rl}
[\nu \circ L_{s}[f]](s', x') &= \nu\left(L_{s}[f](s', x')\right) = \nu\left(f(s^{-1}s', s^{-1}x')\right) \\
&= \nu[f](s^{-1}s', s^{-1}x') = \left[L_{s} \circ \nu[f]\right](s', x')
\end{array} \tag{8}
$$

Pooling Until now we did not discuss how to convert an equivariant mapping into an invariant one. One way to do this is to calculate an invariant measure of the signal. In the case of translation, such a measure could be, for example, the maximum value.

First, we propose the maximum scale projection, defined as $f(s, x) \to \max_s f(s, x)$. This transformation projects the function $f$ from $H$ to $T$. Therefore, the representation stays equivariant to scaling, but loses all information about the scale itself.

Second, we are free to use spatial max-pooling with a moving window or global max-pooling. The transformation $f(s, x) \to \max_x f(s, x)$ projects the function $f$ from $H$ to $S$. The obtained representation is invariant to scaling in the spatial domain; however, it stores the information about scale.

Finally, we can combine both of these pooling mechanisms in any order. The obtained transformation produces a scale-invariant function. It is useful to apply this transformation closer to the end of the network, when the deep representation must be invariant to nuisance input variations but already has a very rich semantic meaning.

# 4 IMPLEMENTATION

In this section we discuss an efficient implementation of Scale-Equivariant Steerable Networks. We illustrate all algorithms in Figure 1. For simplicity we assume that zero padding is applied where needed, for both the spatial axes and the scale axis.
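The pooling mechanisms described above reduce, for a discretized feature map, to max-reductions over the scale and spatial axes. A minimal NumPy sketch on an array of shape $[S, X]$ (all sizes hypothetical):

```python
import numpy as np

# Sketch of the pooling mechanisms of Section 3 on a discretized
# function f of shape [S, X] (scale axis, spatial axis).
rng = np.random.default_rng(2)
f = rng.random((4, 8))  # hypothetical: 4 scales, 8 spatial positions

scale_projection = f.max(axis=0)  # H -> T: equivariant, drops scale info
spatial_pooling = f.max(axis=1)   # H -> S: spatially invariant, keeps scale
scale_invariant = f.max()         # both combined: fully invariant scalar

assert scale_projection.shape == (8,)
assert spatial_pooling.shape == (4,)
# the two reductions commute, so the combined result is order-independent
assert scale_projection.max() == spatial_pooling.max() == scale_invariant
```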
Filter Basis A direct implementation of Equation 7 is impossible due to several limitations. First, the infinite number of scales in $S$ calls for a discrete approximation. We truncate the scale group, limit ourselves to $N_{S}$ scales, and use discrete translations instead of continuous ones. Training of SESN involves searching for the optimal filter in functional space, which is a problem by itself. Rather than solving it directly, we choose a complete basis of $N_{b}$ steerable functions $\Psi = \{\psi_{s^{-1}\sigma, i}\}_{i=1}^{N_b}$ and represent each convolutional filter as a linear combination of basis functions with trainable parameters $w = \{w_{i}\}_{i=1}^{N_{b}}$. In other words, we make the following substitution in Equation 7: $\psi_{\sigma} \rightarrow \kappa = \sum_{i} w_{i}\Psi_{i}$.

In our experiments we use a basis of 2D Hermite polynomials with a 2D Gaussian envelope, as it demonstrates good results. The basis is pre-calculated for all scales and fixed. For filters of size $V \times V$, the basis is stored as an array of shape $[N_b, S, V, V]$. See Appendix C for more details.

Conv $T \rightarrow H$ If the input signal is just a function on $T$ with spatial size $U \times U$, stored as an array of shape $[C_{\mathrm{in}}, U, U]$, then Equation 7 can be simplified. The summation over $S$ degenerates, and the final result can be written in the following form:

$$
\mathrm{convTH}(f, w, \Psi) = \mathrm{squeeze}(\mathrm{conv2d}(f, \mathrm{expand}(w \times \Psi))) \tag{9}
$$

Here $w$ is an array of shape $[C_{\mathrm{out}}, C_{\mathrm{in}}, N_b]$. We compute the filter $w \times \Psi$ of shape $[C_{\mathrm{out}}, C_{\mathrm{in}}, S, V, V]$ and expand it to shape $[C_{\mathrm{out}}S, C_{\mathrm{in}}, V, V]$. Then we use standard 2D convolution to produce the output with $C_{\mathrm{out}}S$ channels and squeeze it to shape $[C_{\mathrm{out}}, S, U, U]$.
Note that the output can be viewed as a stack of feature maps, where the features in each spatial position are vectors of $S$ components, instead of being scalars as in standard CNNs.

Conv $H \to H$ A function on $H$ has a scale axis, and therefore there are two options for choosing the weights of the convolutional filter. The filter may have just one scale and, therefore, not capture the correlations between different scales of the input function; or it may have a non-unitary extent $K_{S}$ in the scale axis and capture the correlation between $K_{S}$ neighboring scales. We refer to the second case as interscale interaction.

In the first case, $w$ has shape $[C_{\mathrm{out}}, C_{\mathrm{in}}, N_b]$ and Equation 7 degenerates in the same way as before:

$$
\mathrm{convHH}(f, w, \Psi) = \mathrm{squeeze}(\mathrm{conv2d}(\mathrm{expand}(f), \mathrm{expand}(w \times \Psi))) \tag{10}
$$

We expand $f$ to an array of shape $[C_{\mathrm{in}}S, U, U]$ and expand $w \times \Psi$ to shape $[C_{\mathrm{out}}S, C_{\mathrm{in}}S, V, V]$. The result of the convolution is then squeezed in the same way as before.

In the case of interscale interaction, $w$ has shape $[C_{\mathrm{out}}, C_{\mathrm{in}}, K_S, N_b]$. We iterate over all scales in the interaction, shift $f$ for each scale, choose the corresponding part of $w$, and apply convHH to them. We sum the obtained $K_S$ results afterwards.

![](images/87cdd1884edf4f6ea8ea99e5f14fb408860e8f708ad09826518d4021ad11df60.jpg)
Figure 1: Left: the way steerable filters are computed using the steerable filter basis. Middle and right: a representation of scale-convolution using Equation 9 and Equation 10. As an example we use an input signal $f$ with 3 channels. It has 1 scale on $T$ and 4 scales on $H$. It is convolved with the filter $\kappa = w \times \Psi$ without scale interaction, which produces an output with 2 channels and 4 scales as well.
Here we represent only the channels of the signals and the filter. Spatial components are hidden for simplicity.

# 5 RELATED WORK

Various works on group-equivariant convolutional networks have been published recently. These works have considered roto-translation groups in 2D Cohen & Welling (2016a); Hoogeboom et al. (2018); Worrall et al. (2017); Weiler & Cesa (2019) and 3D Worrall & Brostow (2018); Kondor (2018); Thomas et al. (2018), and rotation-equivariant networks in 3D Cohen et al. (2017); Esteves et al. (2018); Cohen et al. (2019). In Freeman & Adelson (1991) the authors describe an algorithm for designing steerable filters for rotations. Rotation-steerable filters are used in Cohen & Welling (2016b); Weiler et al. (2018a;b) for building equivariant networks. In Jacobsen et al. (2017) the authors build convolutional blocks locally equivariant to an arbitrary $k$-parameter Lie group by using a steerable basis, and in Murugan et al. the authors discuss an approach for learning steerable filters from data. To date, the majority of papers on group-equivariant networks have considered rotations in 2D and 3D, but have not paid attention to scale symmetry. As we have argued above, it is a fundamentally different case.

Many papers and even conferences have been dedicated to image scale-space: a concept where the image is analyzed together with all its downscaled versions. Initially introduced in Iijima (1959) and later developed by Witkin (1987); Perona & Malik (1990); Lindeberg (2013), scale-space relies on the scale symmetry of images. The differential structure of the image Koenderink (1984) allows one to make a connection between image formation mechanisms and the space of solutions of the 2-dimensional heat equation, which significantly improved image analysis models in the pre-deep-learning era.

One of the first works on scale equivariance and local scale invariance in the framework of CNNs is SiCNN, proposed by Xu et al. (2014).
The authors describe a model built from siamese CNN instances, where the filters of each instance are rescaled using interpolation techniques. This is the simplest case of equivariance, where no interaction between different scales happens in intermediate layers. In SI-ConvNet by Kanazawa et al. (2014) the original network is modified such that, in each layer, the input is first rescaled, then convolved, and rescaled back to the original size. Finally, the response with the maximum value is chosen among the scales. Thus, the model is locally scale-invariant.
| Method | Equivariance | Admissible Scales | Approach | Interscale |
| --- | --- | --- | --- | --- |
| SiCNN | ✓ | Grid | Filter Rescaling | ✗ |
| SI-ConvNet | ✗ | Grid | Input Rescaling | ✗ |
| SEVF | ✓ | Grid | Input Rescaling | ✓ |
| DSS | ✓ | Integer | Filter Dilation | ✓ |
| SS-CNN | ✗ | Any | Steerable Filters | ✗ |
| SESN, Ours | ✓ | Any | Steerable Filters | ✓ |
Table 1: Comparing SESN to SiCNN Xu et al. (2014), SI-ConvNet Kanazawa et al. (2014), SEVF Marcos et al. (2018), DSS Worrall & Welling (2019) and SS-CNN Ghosh & Gupta (2019). "Interscale" refers to the ability of capturing interscale interactions with kernels of non-unitary scale extent. "Grid" stands for the scales which generate images lying exactly on the initial pixel grid.

In Marcos et al. (2018), in the SEVF model, the input of the layers is rescaled and convolved multiple times to form vector features instead of scalar ones. The length of the vector in each position is the maximum magnitude of the convolution, while the angle encodes the scale of the image which gave this response. These scale-equivariant networks rely on image rescaling, which is quite slow. Worrall & Welling (2019) (DSS) generalize the concept of scale-space to deep networks. They use filter dilation to analyze the images at different scales. While this approach is as fast as a standard CNN, it is restricted to integer downscale factors 2, 4, 8, and so on. In SS-CNN, Ghosh & Gupta (2019) use scale-steerable filters to deal with scale changes; however, the paper does not discuss equivariance, which is an important aspect for scale.

We summarize the information about these models in Table 1. In contrast to other scale-equivariant models, SESN uses steerable filters, which allows for fast scale-convolution with no limitation of flexibility. Within the framework of scale-equivariant convolutional networks we are free to build both equivariant and invariant models of different kinds.

# 6 EXPERIMENTS

![](images/9927fca6dfe90ae37188589d2142c0587d0643b4b223cd81cbecb48d854032cb.jpg)
Figure 2: Equivariance error $\Delta$ as a function of the number of layers (left), downscaling applied to the input image (middle), and as a function of the number of scales in interscale interactions (right). The bars indicate standard deviation.
![](images/e31bed07ec0ec0c07ce3c8a11c43cdda9bd54c0f94677fe74de37f467adf8cc3.jpg)

![](images/c8347e91127dafe58a8004e27b2b4aeb6190a9f873a0f7baa0b624756f416d6c.jpg)

In this section we conduct experiments and compare various methods for handling scale variations in the input data. Alongside SESN, we test the locally scale-invariant SI-ConvNet and SS-CNN, and the scale-equivariant SiCNN, SEVF and DSS. For SEVF, DSS and SS-CNN we use the code provided by the authors, while for the others we reimplement the main building blocks.

We provide additional experimental results on the time performance of all these methods in Appendix B. Thanks to the algorithm proposed in Section 4, SESN allows for training several times faster than other methods, which rely on image rescaling.
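The comparisons in this section rest on measuring how closely a mapping commutes with a transformation. A minimal sketch of the relative equivariance error $\Delta$ defined in Section 6.1, with hypothetical callables standing in for the mapping $\Phi$ and the transformation $L_s$:

```python
import numpy as np

# Relative equivariance error of Section 6.1:
# Delta = ||L_s Phi(f) - Phi L_s(f)||_2^2 / ||L_s Phi(f)||_2^2.
# phi and L_s are hypothetical callables passed in by the caller.
def equivariance_error(phi, L_s, f):
    a = L_s(phi(f))  # transform applied after the mapping
    b = phi(L_s(f))  # transform applied before the mapping
    return np.sum((a - b) ** 2) / np.sum(a ** 2)

# Example: a cyclic shift commutes with any point-wise mapping,
# so the measured error is exactly zero.
f = np.arange(16, dtype=float)
shift = lambda x: np.roll(x, 3)
phi = lambda x: x ** 2  # point-wise, hence shift-equivariant

assert equivariance_error(phi, shift, f) == 0.0
```

For scale transformations on discretized signals the error is generally nonzero, which is exactly what Section 6.1 quantifies for scale-convolution.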
| Method | (28 × 28) | (28 × 28) + | (56 × 56) | (56 × 56) + | # Params |
| --- | --- | --- | --- | --- | --- |
| CNN | 2.56 ± 0.04 | 1.96 ± 0.07 | 2.02 ± 0.07 | 1.60 ± 0.09 | 495 K |
| SiCNN | 2.40 ± 0.03 | 1.86 ± 0.10 | 2.02 ± 0.14 | 1.59 ± 0.03 | 497 K |
| SI-ConvNet | 2.40 ± 0.12 | 1.94 ± 0.07 | 1.82 ± 0.11 | 1.59 ± 0.10 | 495 K |
| SEVF Scalar | 2.30 ± 0.06 | 1.96 ± 0.07 | 1.87 ± 0.09 | 1.62 ± 0.07 | 494 K |
| SEVF Vector | 2.63 ± 0.09 | 2.23 ± 0.09 | 2.12 ± 0.13 | 1.81 ± 0.09 | 475 K |
| DSS Scalar | 2.53 ± 0.10 | 2.04 ± 0.08 | 1.92 ± 0.08 | 1.57 ± 0.08 | 494 K |
| DSS Vector | 2.58 ± 0.11 | 1.95 ± 0.07 | 1.97 ± 0.08 | 1.57 ± 0.09 | 494 K |
| SS-CNN | 2.32 ± 0.15 | 2.10 ± 0.15 | 1.84 ± 0.10 | 1.76 ± 0.07 | 494 K |
| SESN Scalar | 2.10 ± 0.10 | 1.79 ± 0.09 | 1.74 ± 0.09 | 1.50 ± 0.07 | 495 K |
| SESN Vector | **2.08 ± 0.09** | **1.76 ± 0.08** | **1.68 ± 0.06** | **1.42 ± 0.07** | 495 K |
Table 2: Classification error of different methods on the MNIST-scale dataset; lower is better. In the experiments we use image resolutions of ${28} \times {28}$ and ${56} \times {56}$. We test both the regime without data augmentation and the regime with scaling data augmentation, denoted with "+". All results are reported as mean $\pm$ std over 6 different fixed realizations of the dataset. The best results are bold.

# 6.1 EQUIVARIANCE ERROR

We have presented scale-convolution, which is equivariant to scale transformation and translation for continuous signals. While translation equivariance holds true even for discretized signals and filters, scale equivariance may not be exact. Therefore, before starting any experiments, we check to which degree the predicted properties of scale-convolution hold true. We do so by measuring the difference $\Delta = \|L_s\Phi(f) - \Phi L_s(f)\|_2^2 / \|L_s\Phi(f)\|_2^2$, where $\Phi$ is scale-convolution with randomly initialized weights.

In the case of perfect equivariance the difference is equal to zero. We calculate the error on randomly sampled images from the STL-10 dataset Coates et al. (2011). The results are presented in Figure 2. The networks on the left and middle plots do not have interscale interactions. The networks on the middle and right plots consist of just one layer. We use $N_S = 5, 13, 5$ scales for the networks on the left, middle, and right plots, respectively. While discretization introduces some error, it stays very low, not much higher than $6\%$ even for networks with 50 layers. The difference, however, increases if the input image is downscaled more than 16 times. Therefore, we are free to use deep networks; however, we should pay extra attention to extreme cases where scale changes are of very large magnitude. These are quite rare but still appear in practice.
Finally, we see that using SESN with interscale interaction introduces an extra equivariance error due to the truncation of $S$. We therefore build the networks with either no interscale interaction or an interaction of 2 scales.

# 6.2 MNIST-SCALE

Following Kanazawa et al. (2014); Marcos et al. (2018); Ghosh & Gupta (2019), we conduct experiments on the MNIST-scale dataset. We rescale the images of the MNIST dataset LeCun et al. (1998) to $0.3 - 1.0$ of the original size and pad them with zeros to retain the initial resolution. The scaling factors are sampled uniformly and independently for each image. The obtained dataset is then split into 10,000 images for training, 2,000 for evaluation and 50,000 for testing. We generate 6 different realizations and fix them for all experiments.

As a baseline model we use the model described in Ghosh & Gupta (2019), which currently holds the state-of-the-art result on this dataset. It consists of 3 convolutional and 2 fully-connected layers. Each layer has filters of size $7 \times 7$. We keep the number of trainable parameters almost the same for all tested methods, which is achieved by varying the number of channels. For scale-equivariant models we add a scale projection at the end of the convolutional block.

For SiCNN, DSS, SEVF and our model, we additionally train counterparts where an extra projection layer is inserted after each convolution. Projection layers transform the vector features in each spatial position of each channel into scalar ones, so all layers receive scalar inputs instead of vector inputs. We therefore denote these models with "Scalar", while the original models are denoted as "Vector". The exact type of projection depends on the way the vector features are constructed. For SiCNN, DSS, and SESN, we use maximum pooling along the scale dimension, while for SEVF, it is the $L_{2}$-norm of the vector.
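The dataset construction described above (uniform scaling factors in $[0.3, 1.0]$, zero padding back to $28 \times 28$) can be sketched as follows. The nearest-neighbor resampling, the centering of the digit, and the function name `make_mnist_scale` are simplifying assumptions, not the authors' exact preprocessing.

```python
import numpy as np

def make_mnist_scale(images, rng, lo=0.3, hi=1.0):
    """Rescale each 28x28 image by a factor drawn uniformly from [lo, hi] and
    zero-pad it back to 28x28 (nearest-neighbor resampling, digit kept centered)."""
    out = np.zeros_like(images, dtype=np.float32)
    for k, img in enumerate(images):
        s = rng.uniform(lo, hi)
        h = max(1, int(round(28 * s)))                   # side length after rescaling
        idx = np.clip((np.arange(h) / s).astype(int), 0, 27)
        small = img[np.ix_(idx, idx)]                    # nearest-neighbor resampled digit
        off = (28 - h) // 2                              # zero padding keeps it centered
        out[k, off:off + h, off:off + h] = small
    return out

rng = np.random.default_rng(0)
digits = rng.random((4, 28, 28)).astype(np.float32)      # stand-in for MNIST images
scaled = make_mnist_scale(digits, rng)
```

Each output image keeps the original $28 \times 28$ resolution, so only the digit's apparent size varies, exactly the nuisance factor the tested models must handle.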
All models are trained with the Adam optimizer Kingma & Ba (2014) for 60 epochs with a batch size of 128. The initial learning rate is set to 0.01 and divided by 10 after 20 and 40 epochs. We conduct the experiments with 4 different settings. Following the idea discussed in Ghosh & Gupta (2019), in addition to the standard setting we train the networks with input images upscaled to $56 \times 56$ using bilinear interpolation. This makes all image transformations performed by the network more stable and produces fewer interpolation artifacts. For both input sizes we conduct the experiments without data augmentation and with scaling augmentation, which results in 4 setups in total. We run the experiments on 6 different realizations of MNIST-scale and report mean $\pm$ std calculated over these runs.

The obtained results are summarized in Table 2. The reported errors may differ slightly from the ones in the original papers because of variations in the generated datasets and a slightly different training procedure. Nevertheless, we keep our configuration as close as possible to Ghosh & Gupta (2019), which currently demonstrates the best classification accuracy on MNIST-scale. For example, SS-CNN reports an error of $1.91 \pm 0.04$ in Ghosh & Gupta (2019), while it attains $1.84 \pm 0.10$ in our experiments.

SESN significantly outperforms the other methods in all 4 regimes. Its "Scalar" versions already outperform all previous methods, and the "Vector" versions make the gain even more significant. The global architecture is the same for all rows, which indicates that the way scale-convolution is performed plays an important role.

# 6.3 STL-10

In order to evaluate the role of scale equivariance in natural image classification, we conduct experiments on the STL-10 dataset Coates et al. (2011). This dataset consists of 8,000 training and 5,000 testing labeled images. Additionally, it includes 100,000 unlabeled images.
The images have a resolution of $96 \times 96$ pixels and RGB channels. Labeled images belong to 10 classes such as bird, horse or car. We use only the labeled subset to demonstrate the performance of the models in the low-data regime.

The dataset is normalized by subtracting the per-channel mean and dividing by the per-channel standard deviation. During training, we augment the dataset by applying 12-pixel zero padding and randomly cropping the images to size $96 \times 96$. Additionally, random horizontal flips with probability $50\%$ and Cutout DeVries & Taylor (2017) with 1 hole of 32 pixels are used.

As a baseline we choose a WideResNet Zagoruyko & Komodakis (2016) with 16 layers and a widening factor of 8. We set the dropout probability to 0.3 in all blocks. We train SESN-A with vector features only. For SESN-B we additionally use the maximum scalar projection several times in the intermediate layers, and for SESN-C we use interscale interaction.
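The maximum scalar projection used in the "Scalar" models and in SESN-B (maximum pooling along the scale dimension, as described in Section 6.2) reduces to a single pooling operation. A minimal sketch, assuming a (C, S, H, W) feature layout with S the scale axis; the function name is illustrative:

```python
import numpy as np

def max_scale_projection(feat):
    """Project vector features of shape (C, S, H, W) to scalar features (C, H, W)
    by max-pooling over the scale axis S."""
    return feat.max(axis=1)

# Toy feature map: 2 channels, 3 scales, 4x4 spatial grid.
x = np.arange(2 * 3 * 4 * 4, dtype=np.float32).reshape(2, 3, 4, 4)
y = max_scale_projection(x)   # shape (2, 4, 4)
```

Because the result no longer carries a scale axis, the following layer sees a scalar response per spatial position whose receptive field spans all scales.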
| Method | Error, % | # Params |
| --- | --- | --- |
| WRN | 11.48 | 11.0 M |
| SiCNN | 11.62 | 11.0 M |
| SI-ConvNet | 12.48 | 11.0 M |
| DSS | 11.28 | 11.0 M |
| SS-CNN | 25.47 | 10.8 M |
| SESN-A | 10.83 | 11.0 M |
| SESN-B | **8.51** | 11.0 M |
| SESN-C | 14.08 | 11.0 M |
| Harm WRN | 9.55 | 11.0 M |
Table 3: Classification error on STL-10. The best result is in bold. We additionally report the current best result, achieved by Harm WRN from Ulicny et al. (2019).

All models are trained for 1000 epochs with a batch size of 128. We use the SGD optimizer with Nesterov momentum of 0.9 and weight decay of $5 \cdot 10^{-4}$. The initial learning rate is set to 0.1 and divided by 5 after 300, 400, 600 and 800 epochs.

The results are summarized in Table 3. We found SEVF training unstable and therefore do not include it in the table. The purely scale-invariant SI-ConvNet and SS-CNN demonstrate significantly worse results than the baseline, which underlines the importance of equivariance for deep networks. We also find that SESN-C performs significantly worse than SESN-A and SESN-B due to the high equivariance error caused by interscale interaction. SESN-B significantly improves on the results of both WRN and DSS due to the projection between scales. The maximum scale projection lets the weights of the next layer have a maximal receptive field in the space of scales. This is an easy yet effective method for capturing the correlations between different scales. This experiment shows that scale equivariance is a very useful inductive bias for natural image classification with deep neural networks.

To the best of our knowledge, the proposed method achieves a new state-of-the-art result on the STL-10 dataset in the supervised learning setting. The previous lowest error was demonstrated in Ulicny et al. (2019). The authors propose Harm WRN, a network where the convolutional kernels are represented as a linear combination of Discrete Cosine Transform filters.

# 7 DISCUSSION

In this paper, we have presented the theory of Scale-Equivariant Steerable Networks. We started from the scaling transformation and its application to continuous functions. We obtained the exact formula for scale-equivariant mappings and demonstrated how it can be implemented for discretized signals.
We have demonstrated that this approach outperforms other methods for scale-equivariant and local scale-invariant CNNs, setting new state-of-the-art results on MNIST-scale and on the STL-10 dataset in the supervised learning setting.

We believe that the most exciting potential application of SESN is computer vision for autonomous vehicles: rapidly changing distances between objects cause significant scale variations, which makes this setting well suited for our method. We especially highlight the direction of siamese visual tracking, where equivariance to the principal transformations plays an important role.

# ACKNOWLEDGMENTS

We thank Daniel Worrall for insightful discussion, and Thomas Andy Keller, Victor Garcia, Artem Moskalev and Konrad Groh for valuable comments and feedback.

# REFERENCES

Etienne Barnard and David Casasent. Invariance and neural nets. IEEE Transactions on Neural Networks, 2(5):498-508, 1991.
Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 215-223, 2011.
Taco Cohen and Max Welling. Group equivariant convolutional networks. In International Conference on Machine Learning, pp. 2990-2999, 2016a.
Taco Cohen, Mario Geiger, Jonas Kohler, and Max Welling. Convolutional networks for spherical signals. arXiv preprint arXiv:1709.04893, 2017.
Taco S Cohen and Max Welling. Steerable CNNs. arXiv preprint arXiv:1612.08498, 2016b.
Taco S Cohen, Maurice Weiler, Berkay Kicanaoglu, and Max Welling. Gauge equivariant convolutional networks and the icosahedral CNN. arXiv preprint arXiv:1902.04615, 2019.
Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
Carlos Esteves, Christine Allen-Blanchette, Ameesh Makadia, and Kostas Daniilidis.
Learning SO(3) equivariant representations with spherical CNNs. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 52-68, 2018.
Gerald B Folland. A Course in Abstract Harmonic Analysis. Chapman and Hall/CRC, 2016.
William T Freeman and Edward H Adelson. The design and use of steerable filters. IEEE Transactions on Pattern Analysis & Machine Intelligence, (9):891-906, 1991.
Rohan Ghosh and Anupam K Gupta. Scale steerable filters for locally scale-invariant convolutional neural networks. arXiv preprint arXiv:1906.03861, 2019.
Joao F Henriques and Andrea Vedaldi. Warped convolutions: Efficient invariance to spatial transformations. In Proceedings of the 34th International Conference on Machine Learning, pp. 1461-1469. JMLR.org, 2017.
Emiel Hoogeboom, Jorn WT Peters, Taco S Cohen, and Max Welling. HexaConv. arXiv preprint arXiv:1803.02108, 2018.
Taizo Iijima. Basic theory of pattern observation. Technical Group on Automata and Automatic Control, pp. 3-32, 1959.
Jörn-Henrik Jacobsen, Bert De Brabandere, and Arnold WM Smeulders. Dynamic steerable blocks in deep residual networks. arXiv preprint arXiv:1706.00598, 2017.
Angjoo Kanazawa, Abhishek Sharma, and David Jacobs. Locally scale-invariant convolutional neural networks. arXiv preprint arXiv:1412.5104, 2014.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Jan J Koenderink. The structure of images. Biological Cybernetics, 50(5):363-370, 1984.
Risi Kondor. N-body networks: a covariant hierarchical neural network architecture for learning atomic potentials. arXiv preprint arXiv:1803.01588, 2018.
Risi Kondor and Shubhendu Trivedi. On the generalization of equivariance and convolution in neural networks to the action of compact groups. arXiv preprint arXiv:1802.03690, 2018.
Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. Gradient-based learning applied to document recognition.
Proceedings of the IEEE, 86(11):2278-2324, 1998.
Tony Lindeberg. Scale-Space Theory in Computer Vision, volume 256. Springer Science & Business Media, 2013.
Diego Marcos, Benjamin Kellenberger, Sylvain Lobry, and Devis Tuia. Scale equivariance in CNNs with vector fields. arXiv preprint arXiv:1807.11783, 2018.
Muthuvel Murugan and KV Subrahmanyam. SO(2)-equivariance in neural networks using tensor nonlinearity.
Pietro Perona and Jitendra Malik. Scale-space and edge detection using anisotropic diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(7):629-639, 1990.
Kai Sheng Tai, Peter Bailis, and Gregory Valiant. Equivariant transformer networks. arXiv preprint arXiv:1901.11399, 2019.
Nathaniel Thomas, Tess Smidt, Steven Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, and Patrick Riley. Tensor field networks: Rotation- and translation-equivariant neural networks for 3D point clouds. arXiv preprint arXiv:1802.08219, 2018.
Matej Ulicny, Vladimir A Krylov, and Rozenn Dahyot. Harmonic networks with limited training samples. arXiv preprint arXiv:1905.00135, 2019.
Maurice Weiler and Gabriele Cesa. General E(2)-equivariant steerable CNNs. In Advances in Neural Information Processing Systems, pp. 14334-14345, 2019.
Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, and Taco Cohen. 3D steerable CNNs: Learning rotationally equivariant features in volumetric data. In Advances in Neural Information Processing Systems, pp. 10381-10392, 2018a.
Maurice Weiler, Fred A Hamprecht, and Martin Storath. Learning steerable filters for rotation equivariant CNNs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 849-858, 2018b.
Andrew P Witkin. Scale-space filtering. In Readings in Computer Vision, pp. 329-332. Elsevier, 1987.
Daniel Worrall and Gabriel Brostow. CubeNet: Equivariance to 3D rotation and translation. In Proceedings of the European Conference on Computer Vision (ECCV), pp.
567-584, 2018.
Daniel E Worrall and Max Welling. Deep scale-spaces: Equivariance over scale. arXiv preprint arXiv:1905.11697, 2019.
Daniel E Worrall, Stephan J Garbin, Daniyar Turmukhambetov, and Gabriel J Brostow. Harmonic networks: Deep translation and rotation equivariance. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5028-5037, 2017.
Yichong Xu, Tianjun Xiao, Jiaxing Zhang, Kuiyuan Yang, and Zheng Zhang. Scale-invariant convolutional neural networks. arXiv preprint arXiv:1411.6369, 2014.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.

# A PROOF OF EQUIVARIANCE

Let us first show that the scale-convolution defined in Equation 6 is equivariant to translations:

$$
\begin{array}{l} [ L_{\hat{t}} [f] \star_H \psi_\sigma ] (s,t) = \sum_{s'} [ L_{\hat{t}}[f](s',\cdot) \star \psi_{s\sigma}(s^{-1}s',\cdot) ](t) \\ = \sum_{s'} L_{\hat{t}} \left[ f(s',\cdot) \star \psi_{s\sigma}(s^{-1}s',\cdot) \right](t) \tag{11} \\ = L_{\hat{t}} \left\{ \sum_{s'} \left[ f(s',\cdot) \star \psi_{s\sigma}(s^{-1}s',\cdot) \right] \right\}(t) \\ = L_{\hat{t}} [ f \star_H \psi_\sigma ](s,t) \end{array}
$$

Now we show that scale-convolution is equivariant to scale transformations:

$$
\begin{array}{l} [ L_{\hat{s}} [f] \star_H \psi_\sigma ](s,t) = \sum_{s'} [ L_{\hat{s}}[f](s',\cdot) \star \psi_{s\sigma}(s^{-1}s',\cdot) ](t) \\ = \sum_{s'} L_{\hat{s}} \left[ f(\hat{s}^{-1}s',\cdot) \star \psi_{\hat{s}^{-1}s\sigma}(s^{-1}s',\cdot) \right](t) \\ = \sum_{s''} \left[ f(s'',\cdot) \star \psi_{\hat{s}^{-1}s\sigma}(\hat{s}s^{-1}s'',\cdot) \right](\hat{s}^{-1}t) \tag{12} \\ = [ f \star_H \psi_\sigma ](\hat{s}^{-1}s, \hat{s}^{-1}t) \\ = L_{\hat{s}} [ f \star_H \psi_\sigma ](s,t) \end{array}
$$

Finally, we can use the property of the semidirect product of groups:

$$
L_{\hat{s}\hat{t}} [f] \star_H \psi_\sigma = L_{\hat{s}} L_{\hat{t}} [f] \star_H \psi_\sigma = L_{\hat{s}} \left[ L_{\hat{t}} [f] \star_H \psi_\sigma \right] = L_{\hat{s}} L_{\hat{t}} [ f \star_H \psi_\sigma ] = L_{\hat{s}\hat{t}} [ f \star_H \psi_\sigma ] \tag{13}
$$

# B TIME PERFORMANCE

We report the average time per epoch of different methods for scale equivariance and local scale invariance in Table 4. The experimental setups from Section 6.2 are used. We used 1 Nvidia GeForce GTX 1080Ti GPU for training the models.

The methods relying on image rescaling during training (SiCNN, SI-ConvNet, SEVF) demonstrate significantly worse time performance than those using either steerable filters or filter dilation. Additionally, our method outperforms SS-CNN by a wide margin. Despite the similar filter sizes and comparable numbers of parameters of SS-CNN and SESN Scalar, the latter demonstrates significantly better results due to the algorithm proposed in Section 4. Finally, DSS is slightly faster in some cases than our method, as each of its convolutions involves fewer FLOPs: dilated filters are sparse, while steerable filters are dense.
| Method | 28 × 28, s | 56 × 56, s |
| --- | --- | --- |
| CNN | 3.8 | 3.8 |
| SiCNN Scalar | 13.5 | 18.9 |
| SiCNN Vector | 15.3 | 22.8 |
| SI-ConvNet | 18.4 | 33.1 |
| SEVF Scalar | 21.0 | 38.4 |
| SEVF Vector | 25.4 | 46.0 |
| DSS Scalar | 3.9 | 5.0 |
| DSS Vector | 3.9 | 4.8 |
| SS-CNN | 14.8 | 16.6 |
| SESN Scalar | 3.8 | 5.1 |
| SESN Vector | 3.8 | 6.8 |
Table 4: Average time per epoch (in seconds) during training on input data with resolution ${28} \times {28}$ and ${56} \times {56}$.

# C BASIS

Assuming that the center of the filter is the point $(0,0)$ in coordinates $(x,y)$, we use filters of the following form:

$$
\psi_{\sigma}(x, y) = A \frac{1}{\sigma^{2}} H_{n}\left(\frac{x}{\sigma}\right) H_{m}\left(\frac{y}{\sigma}\right) \exp\left[-\frac{x^{2} + y^{2}}{2\sigma^{2}}\right] \tag{14}
$$

Here $A$ is a constant independent of $\sigma$, and $H_{n}$ is the Hermite polynomial of order $n$. We iterate over increasing pairs of $n, m$ to generate the required number of basis functions.

# D MODEL CONFIGURATION

# D.1 MNIST-SCALE
| Method | Conv 1 | Conv 2 | Conv 3 | FC 1 | # Scales |
| --- | --- | --- | --- | --- | --- |
| CNN | 32 | 63 | 95 | – | 1 |
| SiCNN | 32 | 63 | 95 | – | 7 |
| SI-ConvNet | 32 | 63 | 95 | – | 7 |
| SEVF Scalar | 32 | 63 | 95 | – | 8 |
| SEVF Vector | 23 | 45 | 68 | 256 | 8 |
| DSS | 32 | 63 | 95 | – | 4 |
| SS-CNN | 30 | 60 | 90 | – | 6 |
| SESN | 32 | 63 | 95 | – | 4 |
Table 5: Number of channels in convolutional layers, number of units in fully-connected layers, and number of scales used by different models in Section 6.2.

# D.2 STL-10
| Method | Block 1 | Block 2 | Block 3 | # Scales |
| --- | --- | --- | --- | --- |
| CNN | 16 | 32 | 64 | 1 |
| SiCNN | 16 | 32 | 64 | 3 |
| SI-ConvNet | 16 | 32 | 64 | 3 |
| SEVF | 11 | 23 | 45 | 3 |
| DSS | 16 | 32 | 64 | 4 |
| SS-CNN | 11 | 22 | 44 | 3 |
| SESN | 16 | 32 | 64 | 3 |
Table 6: Number of channels in convolutional blocks and number of scales used by different models in Section 6.3. We report the number of channels up to the widening factor.
# SCALOR: GENERATIVE WORLD MODELS WITH SCALABLE OBJECT REPRESENTATIONS

Jindong Jiang†, Sepehr Janghorbani†, Gerard de Melo & Sungjin Ahn

Rutgers University

# ABSTRACT

Scalability in terms of object density in a scene is a primary challenge in unsupervised sequential object-oriented representation learning. Most of the previous models have been shown to work only on scenes with a few objects. In this paper, we propose SCALOR, a probabilistic generative world model for learning SCALable Object-oriented Representation of a video.
With the proposed spatially-parallel attention and proposal-rejection mechanisms, SCALOR can deal with orders of magnitude larger numbers of objects than previous state-of-the-art models. Additionally, we introduce a background module that allows SCALOR to model complex dynamic backgrounds as well as many foreground objects in the scene. We demonstrate that SCALOR can deal with crowded scenes containing up to a hundred objects while jointly modeling complex dynamic backgrounds. Importantly, SCALOR is the first unsupervised object representation model shown to work for natural scenes containing several tens of moving objects. https://sites.google.com/view/scalor/home

# 1 INTRODUCTION

Unsupervised structured representation learning for visual scenes is a key challenge in machine learning. When a scene is properly decomposed into meaningful entities such as foreground objects and background, we can benefit from numerous advantages of abstract symbolic representation. These include interpretability, sample efficiency, the ability to perform reasoning and causal inference, as well as compositionality and transferability for better generalization. In addition to symbols, another essential dimension is time. Objects, agents, and spaces all operate under the governance of time. Without accounting for temporal developments, it is often much harder, if not impossible, to discover certain relationships in a scene.
Among the few methods that have been proposed for unsupervised learning of object-oriented representations in temporal scenes, SQAIR (Kosiorek et al., 2018) is by far the most complete model. As a probabilistic temporal generative model, it can learn object-wise structured representations while modeling the underlying stochastic temporal transitions in the observed data. By introducing the propagation-discovery model, SQAIR can also handle dynamic scenes where objects may disappear or be introduced in the middle of a sequence. Although SQAIR provides promising ideas and shows the potential of this important direction, a few key challenges remain, limiting its applicability to synthetic toy tasks that are far simpler than typical natural scenes.

The first and foremost limitation is scalability. Sequentially processing every object in an image, SQAIR has a fundamental limitation in scaling up to scenes with a large number of objects. As such, the state-of-the-art remains at the level of modeling videos containing only a few objects, such as MNIST digits, per image. Considering the complexity of typical natural scenes as well as the importance of scalable unsupervised object perception for applications such as self-driving systems, it is thus a challenge of the highest priority to scale robustly to scenes with a large number of objects. Scaling up the object-attention capacity is important because it allows us to exploit modern parallel computation to maximize search capacity. This is contrary to humans, who can attend only to a few objects at a time in a time-consuming sequential manner. The AlphaGo system (Silver et al., 2017) is an example demonstrating the power of such parallel search (attention) beyond human capacity.

The second limitation is that previous models including SQAIR lack any form of background modeling and thus only cope with scenes without background, whereas natural scenes usually have a dynamic background. Thus, a temporal generative model that can deal with dynamic backgrounds along with many foreground objects is an important step toward natural video scene understanding.

In this paper, we propose a model called SCALable Sequential Object-Oriented Representation (SCALOR). SCALOR resolves the aforementioned key limitations and hence can model complex videos with several tens of moving objects along with dynamic backgrounds, eventually making the model applicable to natural videos.
In SCALOR, we achieve scalability with respect to object density by parallelizing both the propagation and discovery processes, reducing the time complexity of processing each image from $\mathcal{O}(N)$ to $\mathcal{O}(1)$, with $N$ being the number of objects in an image. We also observe that the sequential object processing in SQAIR, which is based on an RNN, not only increases the computation time but also deteriorates discovery performance. To this end, we propose a parallel discovery model with superior discovery capacity and performance. SCALOR can also be regarded as a generative tracking model, since it not only detects object trajectories but is also able to predict trajectories into the future. In our experiments, we demonstrate that SCALOR can model videos with nearly one hundred moving objects along with a dynamic background on synthetic datasets. Furthermore, we showcase the ability of SCALOR to operate on natural-scene videos containing tens of objects with a dynamic background.

The contributions of this work are: (i) We propose the SCALOR model, which significantly improves (by two orders of magnitude) the scalability in terms of object density; it is applicable to nearly a hundred objects while providing more efficient computation than SQAIR. (ii) We propose parallelizing the propagation-discovery process by introducing the propose-reject model, reducing the time complexity to $\mathcal{O}(1)$. (iii) SCALOR can model scenes with a dynamic background. (iv) SCALOR is the first probabilistic model demonstrated to work on a significantly more complex task, i.e., natural scenes containing tens of objects as well as background.
# 2 PRELIMINARIES: SEQUENTIAL ATTEND INFER REPEAT (SQAIR)

SQAIR models a sequence of images $\mathbf{x} = \mathbf{x}_{1:T}$ by assuming that the observation $\mathbf{x}_t$ at time $t$ is generated from a set of object latent variables $\mathbf{z}_t^{\mathcal{O}} = \{\mathbf{z}_{t,n}\}_{n\in \mathcal{O}_t}$, with $\mathcal{O}_t$ the set of objects present at time $t$. The latent variable $\mathbf{z}_{t,n}$ corresponding to object $n$ consists of three factors $(z_{t,n}^{\mathrm{pres}},\mathbf{z}_{t,n}^{\mathrm{where}},\mathbf{z}_{t,n}^{\mathrm{what}})$, which represent the existence, pose, and appearance of the object, respectively. SQAIR also assumes that an object can disappear or be introduced in the middle of a sequence. To model this, it introduces the propagation-discovery model. In propagation, a subset of the currently existing objects is propagated to the next time-step, and those not propagated (e.g., moved out of the scene) are deleted. In discovery, after deciding how many objects $D_t$ will be discovered, $D_t$ objects are newly introduced into the scene. Combining the propagated set $\mathcal{P}_t$ and the discovered set $\mathcal{D}_t$, we obtain the set of currently existing objects $\mathcal{O}_t$. The overall process can be formalized as:

$$
p\left(\mathbf{x}_{1:T}, \mathbf{z}_{1:T}^{\mathrm{fg}}, D_{1:T}\right) = p\left(D_1, \mathbf{z}_1^{\mathcal{D}}\right) \prod_{t=2}^{T} p\left(\mathbf{x}_t \mid \mathbf{z}_t^{\mathrm{fg}}\right) p\left(D_t, \mathbf{z}_t^{\mathcal{D}} \mid \mathbf{z}_t^{\mathcal{P}}\right) p\left(\mathbf{z}_t^{\mathcal{P}} \mid \mathbf{z}_{t-1}^{\mathrm{fg}}\right). \tag{1}
$$

For SQAIR, we use $\mathbf{z}_t^{\mathrm{fg}}$ ("fg" standing for foreground) to denote $\mathbf{z}_t^{\mathcal{O}}$, because SQAIR does not have any latent variables for the background.
Due to the intractable posterior, SQAIR is trained through variational inference with the following posterior approximation:

$$
q\left(D_{1:T}, \mathbf{z}_{1:T}^{\mathrm{fg}} \mid \mathbf{x}_{1:T}\right) = \prod_{t=1}^{T} q\left(D_t, \mathbf{z}_t^{\mathcal{D}} \mid \mathbf{x}_t, \mathbf{z}_t^{\mathcal{P}}\right) \prod_{n \in \mathcal{O}_{t-1}} q\left(\mathbf{z}_{t,n} \mid \mathbf{z}_{t-1,n}, \mathbf{x}_{\leq t}\right). \tag{2}
$$

SQAIR is trained using the importance-weighted autoencoder (IWAE) objective (Burda et al., 2015). The VIMCO estimator (Mnih & Rezende, 2016) is used to backpropagate through the discrete random variables, while the reparameterization trick (Kingma & Welling, 2013; Williams, 1992) is used for continuous variables. SQAIR has two main limitations in terms of scalability. First, for propagation, SQAIR relies on an RNN, which sequentially processes each object by conditioning on previously processed objects. Second, the discovery is also sequential because it uses RNN-based discovery based on AIR (Eslami et al., 2016). Consequently, SQAIR has a time complexity of $O(|\mathcal{O}_t|)$ per step $t$. In Crawford & Pineau (2019b), the authors demonstrated that this sequential discovery can easily fail beyond the scale of a few objects. Moreover, SQAIR lacks any model for the background and its temporal transitions, which is important in modeling natural scenes.

# 3 SCALOR

In this section, we describe the proposed model, SCALOR. We first describe the generative process along with the proposal-rejection mechanism, which is designed to prevent propagation collapse, and then the inference process and learning.

# 3.1 GENERATIVE PROCESS

SCALOR assumes that an image $\mathbf{x}_t$ is generated by a background latent $\mathbf{z}_t^{\mathrm{bg}}$ and a foreground latent $\mathbf{z}_t^{\mathrm{fg}}$.
The foreground is further factorized into a set of object representations $\mathbf{z}_t^{\mathrm{fg}} = \{\mathbf{z}_{t,n}\}_{n\in \mathcal{O}_t}$. In SCALOR, we represent an object by $\mathbf{z}_{t,n} = (z_{t,n}^{\mathrm{pres}},\mathbf{z}_{t,n}^{\mathrm{where}},\mathbf{z}_{t,n}^{\mathrm{what}})$, similarly to SQAIR. The appearance representation $\mathbf{z}_{t,n}^{\mathrm{what}}$ is a continuous vector representation, and $\mathbf{z}_{t,n}^{\mathrm{where}}$ is further decomposed into the center position $\mathbf{z}_{t,n}^{\mathrm{pos}}$, scale $\mathbf{z}_{t,n}^{\mathrm{scale}}$, and depth $z_{t,n}^{\mathrm{depth}}$. The depth representation, which is missing in SQAIR, encodes the relative depth between objects from the camera viewpoint; this depth modeling helps deal with object occlusion. The foreground mask $\mathbf{m}_{t,n}$ obtained from $\mathbf{z}_{t,n}^{\mathrm{what}}$ is used to distinguish background from foreground. We adopt the propagation-discovery model from SQAIR, but improve it in such a way as to resolve the scalability problem. The generative process of SCALOR can be written as:

$$
p\left(\mathbf{x}_{1:T}, \mathbf{z}_{1:T}\right) = p\left(\mathbf{z}_1^{\mathcal{D}}\right) p\left(\mathbf{z}_1^{\mathrm{bg}}\right) \prod_{t=2}^{T} \underbrace{p\left(\mathbf{x}_t \mid \mathbf{z}_t\right)}_{\text{rendering}} \underbrace{p\left(\mathbf{z}_t^{\mathrm{bg}} \mid \mathbf{z}_{<t}^{\mathrm{bg}}, \mathbf{z}_t^{\mathrm{fg}}\right)}_{\text{background transition}} \underbrace{p\left(\mathbf{z}_t^{\mathcal{D}} \mid \mathbf{z}_t^{\mathcal{P}}\right)}_{\text{discovery}} \underbrace{p\left(\mathbf{z}_t^{\mathcal{P}} \mid \mathbf{z}_{<t}\right)}_{\text{propagation}}, \tag{3}
$$

where $\mathbf{z}_t = (\mathbf{z}_t^{\mathrm{bg}},\mathbf{z}_t^{\mathrm{fg}})$.
As shown, the generation process is decomposed into four modules: (i) propagation, (ii) discovery, (iii) background transition, and (iv) rendering.

Propagation. The propagation in SCALOR is modeled as follows:

$$
p\left(\mathbf{z}_{t}^{\mathcal{P}} \mid \mathbf{z}_{<t}\right) = \prod_{n \in \mathcal{O}_{t}} p\left(z_{t,n}^{\mathrm{pres}} \mid \mathbf{z}_{<t,n}\right) \left\{p\left(\mathbf{z}_{t,n}^{\mathrm{where}} \mid \mathbf{z}_{<t,n}\right) p\left(\mathbf{z}_{t,n}^{\mathrm{what}} \mid \mathbf{z}_{<t,n}\right)\right\}^{z_{t,n}^{\mathrm{pres}}}, \tag{4}
$$

where $p(z_{t,n}^{\mathrm{pres}}|\mathbf{z}_{< t,n})$ is a Bernoulli distribution with parameter $\beta_{t,n}$ . The distributions of "what" and "where" are defined only when the object is propagated. To implement this, for each object $n$ we assign an object-tracker RNN denoted by its hidden state $\mathbf{h}_{t,n}$ . The RNN is updated by input $\mathbf{z}_{t,n}$ for all $t$ where object $n$ is present in the scene. The parameter $\beta_{t,n}$ is obtained as $\beta_{t,n} = f_{\mathrm{mlp}}(\mathbf{h}_{t,n})$ . If $z_{t,n}^{\mathrm{pres}} = 0$ , object $n$ is not propagated and its tracker RNN is deleted. Importantly, unlike the RNN-based sequential propagation in SQAIR, the propagation in SCALOR is fully parallel.

Discovery by Proposal-Rejection. The main contribution in making our model scalable with respect to the number of objects is our new discovery model, which consists of two phases: proposal and rejection. In the proposal phase, we assume that the target image can be covered by $H \times W$ latent grid cells, and we propose an object latent variable $\tilde{\mathbf{z}}_{t,h,w}$ per grid cell.
This proposal phase can be written as:

$$
p\left(\tilde{\mathbf{z}}_{t}^{\mathcal{D}} \mid \mathbf{z}_{t}^{\mathcal{P}}\right) = \prod_{h,w=1}^{HW} p\left(\tilde{\mathbf{z}}_{t,h,w}^{\mathcal{D}} \mid \mathbf{z}_{t}^{\mathcal{P}}\right) = \prod_{h,w=1}^{HW} p\left(\tilde{z}_{t,h,w}^{\mathrm{pres}} \mid \mathbf{z}_{t}^{\mathcal{P}}\right) \left\{p\left(\tilde{\mathbf{z}}_{t,h,w}^{\mathrm{where}} \mid \mathbf{z}_{t}^{\mathcal{P}}\right) p\left(\tilde{\mathbf{z}}_{t,h,w}^{\mathrm{what}} \mid \mathbf{z}_{t}^{\mathcal{P}}\right)\right\}^{\tilde{z}_{t,h,w}^{\mathrm{pres}}}. \tag{5}
$$

In the rejection phase, our goal is to reject a proposed object if it largely overlaps with a propagated object. We realize this by using the mask variable $\mathbf{m}_{t,n}$ . Specifically, if the overlapping area between the mask of a proposed object and that of a propagated object exceeds a threshold $\tau$ , we reject the proposed object. This procedure can be described as (i) proposal: $\tilde{\mathbf{z}}_t^{\mathcal{D}} \sim p(\tilde{\mathbf{z}}_t^{\mathcal{D}} | \mathbf{z}_t^{\mathcal{P}})$ and (ii) accept-reject: $\mathbf{z}_t^{\mathcal{D}} = f_{\mathrm{accept - reject}}(\tilde{\mathbf{z}}_t^{\mathcal{D}}, \mathbf{z}_t^{\mathcal{P}}, \tau)$ . In this way, the final discovery set $\mathbf{z}_t^{\mathcal{D}}$ is always a subset of the proposal set $\tilde{\mathbf{z}}_t^{\mathcal{D}}$ , i.e., $\mathbf{z}_t^{\mathcal{D}} \subseteq \tilde{\mathbf{z}}_t^{\mathcal{D}}$ . Although we use a deterministic function for the rejection, it could also be implemented as a stochastic decision. While one rationale behind this design is to reflect an inductive bias of the Gestalt principle that two objects cannot coexist in the same position, we shall see further reasons later why this design is effective. The final discovery model can then be written as: $p(\mathbf{z}_t^{\mathcal{D}}|\mathbf{z}_t^{\mathcal{P}}) = p(\tilde{\mathbf{z}}_t^{\mathcal{D}}|\mathbf{z}_t^{\mathcal{P}})\prod_{h,w = 1}^{HW}p(\mathbf{z}_{t,h,w}^{\mathcal{D}}|\mathbf{z}_t^{\mathcal{P}},\tilde{\mathbf{z}}_t^{\mathcal{D}},\tau)$ , where the acceptance model is:

$$
p\left(\mathbf{z}_{t,h,w}^{\mathcal{D}} \mid \mathbf{z}_{t}^{\mathcal{P}}, \tilde{\mathbf{z}}_{t}^{\mathcal{D}}, \tau\right) = f\left(z_{t,h,w}^{\mathrm{accept}} \mid \mathbf{z}_{t}^{\mathcal{P}}, \tilde{\mathbf{z}}_{t}^{\mathcal{D}}, \tau\right) p\left(\tilde{\mathbf{z}}_{t,h,w}^{\mathrm{where}}\right)^{z_{t,h,w}^{\mathrm{accept}}} p\left(\tilde{\mathbf{z}}_{t,h,w}^{\mathrm{what}}\right)^{z_{t,h,w}^{\mathrm{accept}}}. \tag{6}
$$

![](images/ae8dfe3788cd0366bba2b8c80b2f81b9164046521c7f6e18ff74956d753a3047.jpg)
Figure 1: SCALOR inference procedure: (A) Proposal, (B) Accept-Reject, (C) Propagation, (D) Background Module and Rendering Process. (A) The proposal module takes the input image and the propagation mask and combines them to make the proposal representation. (B) From the proposal representation, the proposal mask is generated and then compared to the propagation mask to make an accept-reject decision. Only the accepted proposals are considered discovered objects. (C) The tracker RNNs decide what and where to propagate after looking at the input image. The gray boxes represent what is not propagated. (D) Given the inferred foreground objects and the input image, the background module infers the background representation. The rendering process combines the foreground and background representations according to the foreground mask assignment.

Background Transition. Unlike SQAIR, SCALOR is endowed with a background model.
The background image is encoded into a $D$ -dimensional continuous vector $\mathbf{z}_t^{\mathrm{bg}}$ from the background transition $p(\mathbf{z}_t^{\mathrm{bg}}|\mathbf{z}_{< t}^{\mathrm{bg}},\mathbf{z}_t^{\mathrm{fg}})$ . The background RNN encodes the temporal transition of background images.

**Rendering.** The implementation of the rendering process is the same as in SPAIR (Crawford & Pineau, 2019b), except that we process the objects in parallel. Implementation details are given in Appendix A.

# 3.2 LEARNING AND INFERENCE

Due to the intractability of the true posterior distribution $p(\mathbf{z}_{1:T}|\mathbf{x}_{1:T})$ , we train our model using variational inference with the following posterior approximation:

$$
q\left(\mathbf{z}_{1:T} \mid \mathbf{x}_{1:T}\right) = \prod_{t=1}^{T} q\left(\mathbf{z}_{t} \mid \mathbf{z}_{<t}, \mathbf{x}_{\leq t}\right) = \prod_{t=1}^{T} q\left(\mathbf{z}_{t}^{\mathrm{bg}} \mid \mathbf{z}_{t}^{\mathrm{fg}}, \mathbf{x}_{t}\right) q\left(\mathbf{z}_{t}^{\mathcal{D}} \mid \mathbf{z}_{t}^{\mathcal{P}}, \mathbf{x}_{\leq t}\right) q\left(\mathbf{z}_{t}^{\mathcal{P}} \mid \mathbf{z}_{<t}, \mathbf{x}_{\leq t}\right). \tag{7}
$$

Posterior Propagation. $q(\mathbf{z}_t^{\mathcal{P}}|\mathbf{z}_{< t},\mathbf{x}_{\leq t})$ is similar to the propagation in generation, except that we now provide the observations $\mathbf{x}_{\leq t}$ through an RNN encoding. Here, the propagation for each object $n$ is done by $q(\mathbf{z}_{t,n}|\mathbf{z}_{< t,n},\mathbf{x}_{\leq t})$ using an attention $\mathbf{a}_{t,n} = f_{\mathrm{att}}(\mathbf{x}_{\leq t})$ on the feature map for object $n$ . To compute the attention, we take the previous position $\mathbf{z}_{t - 1,n}^{\mathrm{pos}}$ as the center and extract a glimpse covering half the width and height of the convolutional feature map using bilinear interpolation.
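As a concrete, non-differentiable sketch of this glimpse extraction, a plain NumPy version is given below. The shapes, function names, and the fixed 3x3 output grid are our assumptions for illustration; the paper's implementation is a learned, batched, differentiable operation.

```python
import numpy as np

def bilinear_sample(fmap, ys, xs):
    """Sample fmap (H, W, D) at fractional coordinates via bilinear interpolation."""
    H, W, _ = fmap.shape
    ys = np.clip(ys, 0.0, H - 1.0)
    xs = np.clip(xs, 0.0, W - 1.0)
    y0 = np.minimum(np.floor(ys).astype(int), H - 2)
    x0 = np.minimum(np.floor(xs).astype(int), W - 2)
    wy, wx = ys - y0, xs - x0
    top = fmap[y0, x0] * (1 - wx)[..., None] + fmap[y0, x0 + 1] * wx[..., None]
    bot = fmap[y0 + 1, x0] * (1 - wx)[..., None] + fmap[y0 + 1, x0 + 1] * wx[..., None]
    return top * (1 - wy)[..., None] + bot * wy[..., None]

def extract_glimpse(fmap, center_yx, half_hw, out_hw=(3, 3)):
    """Crop a glimpse centered at the object's previous position, spanning
    half_hw cells in each direction, resampled onto a fixed out_hw grid."""
    ys = np.linspace(center_yx[0] - half_hw[0], center_yx[0] + half_hw[0], out_hw[0])
    xs = np.linspace(center_yx[1] - half_hw[1], center_yx[1] + half_hw[1], out_hw[1])
    gy, gx = np.meshgrid(ys, xs, indexing="ij")
    return bilinear_sample(fmap, gy, gx)
```

Setting `half_hw` to half the feature-map height and width gives the half-size glimpse described above; coordinates falling outside the map are clamped to the border.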
This attention mechanism is motivated by the observation that only part of the image contains information for tracking an object and an inductive bias that objects cannot move a large distance within a short time span (i.e., objects do not teleport). + +Posterior Discovery. The posterior discovery also consists of proposal and rejection phases. The main difference is that we now compute the proposal in spatially-parallel manner by conditioning on the observations $\mathbf{x}_{\leq t}$ , i.e., $q(\tilde{\mathbf{z}}_t^{\mathcal{D}}|\mathbf{z}_t^{\mathcal{P}},\mathbf{x}_{\leq t}) = \prod_{h,w = 1}^{HW}q(\tilde{\mathbf{z}}_{t,h,w}^{\mathcal{D}}|\mathbf{z}_t^{\mathcal{P}},\mathbf{x}_{\leq t})$ . Here, the observation $\mathbf{x}_{\leq t}$ is encoded into the feature map of dimensionality $H\times W\times D$ using a Convolutional LSTM (Xingjian et al., 2015). Then, from each feature we obtain $\tilde{\mathbf{z}}_{t,h,w}^{\mathcal{D}}$ . Importantly, this is done over all the feature cells $h,w$ in parallel. A similar approach is used in SPAIR (Crawford & Pineau, 2019b), but it infers the object latent representations sequentially and thus is difficult to scale to a large number + +of objects (Lin et al., 2020). Even if this spatially-parallel proposal plays a key role in making our model scalable, we also observe another challenge due to this high capacity of the discovery module. The problem is that the discovery module tends to dominate the propagation module and thus most of the objects in an image are explained by the discovery module, i.e., objects are rediscovered at every time-step while nothing is propagated. We call this problem propagation collapse. + +Why would the model tend to explain an image through discovery while suppressing propagation? First, the model does not care where—either from discovery or propagation—an object is sourced from as long as it can make an accurate reconstruction. 
Second, the propagation step performs a much harder task than the discovery. For the propagation to properly predict, tracker $n$ needs to learn to find the matching object from an image containing many objects. Although the propagation attention plays an important role in balancing the discovery and propagation, we found that it does not eliminate the problem of re-discovery, and without rejection, its effectiveness varies across different experiment settings. On the contrary, the discovery module does not need to solve such a difficult association problem because it only performs local image-to-latents encoding without associating latents of the previous time-step. Therefore, it is much easier for the discovery encoder to produce latents that are more accurate than those inferred from propagation. If we limit the capacity of the discovery module and sequentially process objects like in SQAIR, we may mitigate this problem because the propagation module is naturally encouraged to explain what this weakened discovery module cannot. This approach, however, cannot scale. + +We employ two techniques to resolve the aforementioned problem. First, we simply bias the initial network parameter so that it has a high propagation probability at the beginning of the training. This helps the model prefer to explain the observation first through propagation. The second technique is our proposal-rejection mechanism, which is implemented the same way as in the generation process. This prevents the discovery model from redundantly explaining what is already explained by the propagation module. 
The posterior for the discovery model can be written as: + +$$ +q \left(\mathbf {z} _ {t} ^ {\mathcal {D}} \mid \mathbf {z} _ {t} ^ {\mathcal {P}}, \mathbf {x} _ {\leq t}\right) = q \left(\tilde {\mathbf {z}} _ {t} ^ {\mathcal {D}} \mid \mathbf {z} _ {t} ^ {\mathcal {P}}, \mathbf {x} _ {\leq t}\right) \prod_ {h, w = 1} ^ {H W} p _ {\text {a c c e p t}} \left(\mathbf {z} _ {t, h, w} ^ {\mathcal {D}} \mid \mathbf {z} _ {t} ^ {\mathcal {P}}, \tilde {\mathbf {z}} _ {t} ^ {\mathcal {D}}\right), \tag {8} +$$ + +where the acceptance model is $p_{\mathrm{accept}}(\mathbf{z}_{t,h,w}^{\mathcal{D}}|\mathbf{z}_t^\mathcal{P},\tilde{\mathbf{z}}_t^\mathcal{D}) = p(z_{t,h,w}^{\mathrm{pres}}|\mathbf{z}_t^\mathcal{P},\tilde{\mathbf{z}}_t^\mathcal{D})(p(\tilde{\mathbf{z}}_{t,h,w}^{\mathrm{where}})p(\tilde{\mathbf{z}}_{t,h,w}^{\mathrm{what}}))^{z_{t,h,w}^{\mathrm{pres}}}$ . + +Posterior Background. The posterior of the background $q(\mathbf{z}_t^{\mathrm{bg}}|\mathbf{z}_t^{\mathrm{fg}},\mathbf{x}_t)$ is conditioned on the input image and currently existing objects. Here, we provide the foreground latents so that the remaining parts in the image can be explained by the background module. + +Training. We train our model by maximizing the following evidence lower bound $\mathcal{L}(\theta, \phi) =$ + +$$ +\sum_ {t = 1} ^ {T} \mathbb {E} _ {q _ {\phi} (\mathbf {z} _ {< t} | \mathbf {x} _ {< t})} \left[ \mathbb {E} _ {q _ {\phi} (\mathbf {z} _ {t} | \mathbf {z} _ {< t}, \mathbf {x} _ {\leq t})} \left[ \log p _ {\theta} (\mathbf {x} _ {t} | \mathbf {z} _ {t}) \right] - \mathbb {K L} \left[ q _ {\phi} (\mathbf {z} _ {t} | \mathbf {z} _ {< t}, \mathbf {x} _ {\leq t}) \| p _ {\theta} (\mathbf {z} _ {t} | \mathbf {z} _ {< t}) \right] \right]. 
\tag{9}
$$

We use the reparameterization trick (Williams, 1992; Kingma & Welling, 2013) for continuous random variables such as $\mathbf{z}^{\mathrm{what}}$ , and the Gumbel-Softmax trick (Jang et al., 2016) for discrete variables such as $z^{\mathrm{pres}}$ . We found that our proposed model works well and robustly with these training methods, which are simpler than those used in SQAIR (i.e., VIMCO and IWAE).

# 4 RELATED WORK

Different approaches have been taken to tackle the problem of unsupervised scene representation learning. Object-oriented models such as AIR (Eslami et al., 2016) and SPAIR (Crawford & Pineau, 2019b) decompose scenes into latent variables representing the appearance, position, and size of the underlying objects. While AIR makes use of a recurrent neural network, SPAIR applies spatially invariant attention to extract local feature maps. Although the latter provides better scalability than AIR, it is still limited as it performs sequential inference on objects. SQAIR (Kosiorek et al., 2018), discussed in Section 2, extends the ideas proposed in AIR to temporal sequences. On the other hand, scene-mixture models (Greff et al., 2017; Van Steenkiste et al., 2018; Burgess et al., 2019; Greff et al., 2019; Engelcke et al., 2019) decompose scenes into a collection of components, each being a full-image-level representation. Although such models allow decomposition of the input image into components, they are not object-wise disentangled, as multiple objects can be in the same component. Furthermore, the obtained representation does not contain explicit interpretable features such as position and scale. SPACE (Lin et al., 2020) combines both of the above approaches by using object detection for the foreground and mixture decomposition for the background. It improves upon SPAIR by parallelizing the latent inference process.

DDPAE (Hsieh et al., 2018) is another object-oriented sequential generative model that models each object with an appearance and a position vector. The model assumes the appearance of an object to be fixed, and thus shares the content vector across different time-steps. NEM (Greff et al., 2017) and RNEM (Van Steenkiste et al., 2018) introduce a spatial mixture model to disentangle the scene into multiple components representing each entity. Since each component generates a full scene image, the latent representations are not interpretable. Tracking-By-Animation (He et al., 2019) introduces a deterministic model to tackle the task of object tracking in an unsupervised fashion. Furthermore, there is a substantial amount of work on object tracking from the computer vision community using the same "bounding box" representation approach as SCALOR (Kosiorek et al., 2017; Ning et al., 2017; Nam & Han, 2016; Tao et al., 2016), but such methods use provided labels to tackle the problem of object tracking and thus are usually fully or semi-supervised and not probabilistic object-oriented models.

We also note that Crawford & Pineau (2019a) have independently and concurrently developed a similar architecture to ours. This model also emphasizes the scalability problem with a similar idea motivated by parallelizing SPAIR and extending it to sequential modeling. The main differences are the usage of the proposal-rejection mechanism and the background modeling, which make our model work on complex natural scenes.

![](images/2bf1c2b236e1f40bb8ec812907c280bdf9a0422878fddba6574a5141781763d9.jpg)
(a)

![](images/9ed65e26a57485809ceaf719278415e4bb692669a57485809ceaf719278415e4.jpg)
(b)
Figure 2: Quantitative results, plotted against dataset density, showing the superior performance of SCALOR (SC) compared to SQAIR (SQ): (a) Tracking Accuracy, (b) Object Count and Reconstruction Error.

# 5 EXPERIMENTS

In this section, we describe the experiments conducted to empirically evaluate the performance of SCALOR.
We propose two tasks, (i) synthetic MNIST/dSprites shapes and (ii) natural-scene CCTV footage of walking pedestrians. We will show SCALOR's abilities to detect and track objects, to generate future trajectories, and to generalize to unseen settings. Furthermore, we provide a quantitative comparison to state-of-the-art baselines.

# 5.1 TASK 1: LARGE-SCALE MNIST AND DSPRITES SHAPES

We first evaluate our model on datasets of moving dSprites shapes as well as moving MNIST digits. In all experiments, the image sequence covers a $64 \times 64$ partial view of the center of the whole environment. Therefore, while there is a fixed number of objects in the environment at each time-step, only a subset of them are visible in the observed image. The environment size is customized for each setting in such a way that objects can completely move out of view and re-enter within a few time-steps. We test on five different scale settings. In each setting, the number of objects in each trajectory is sampled uniformly from an interval [min, max]. Each scale setting is specified with a triplet (min, avg, max), where min and max are as mentioned and avg represents the average number of visible objects in the trajectories of that setting. The five settings are referred to as Very Low Density (VLD) [(2, 2.9, 4)], Low Density (LD) [(8, 8, 11)], Medium Density (MD) [(18, 20, 24)], High Density (HD) [(50, 55, 64)] and Very High Density (VHD) [(90, 90, 110)]. For example, in the MD setting, there are always between 18 and 24 objects in the overall environment, while only about 20 of them are visible on average in each frame.

![](images/f33fde141ecac432b8d06186c646263217269c1b25ad25d5547d3ed123af9ded.jpg)
Figure 3: Qualitative results of SCALOR on the Moving dSprites and Moving MNIST (HD) tasks: a) Inferred bounding boxes superimposed on the original image sequence; white circles indicate discovery at that time-step. b) Reconstructed sequence. c) Per-object reconstruction.

![](images/5fb7d8fe423b4471bb6f8c78a6a7d89c0917905a65f11af4beeafac563680bbd.jpg)
Figure 4: Qualitative samples of tracking on the Moving dSprites task with dynamic background: a) Original image sequence with inferred bounding boxes. b) Reconstructed sequence.

Experiment 1 - Tracking Performance. This experiment evaluates the tracking performance in an environment without background. We use the following tracking metrics: Multi-Object Tracking Accuracy (MOTA), precision-recall of the inferred bounding boxes (Bernardin & Stiefelhagen, 2008), reconstruction mean squared error, and normalized counting mean absolute error (CountMAE). CountMAE measures the difference between the number of predicted objects and the actual number of objects, normalized by the latter. MOTA measures the tracking accuracy and consistency of the bounding boxes. Precision-recall measures the number of false positives and false negatives, regardless of the associated IDs. For computing the MOT metrics, we choose the Euclidean distance threshold to be twice the actual object size in each setting. Figure 2 shows the quantitative results of SCALOR compared to baselines. More detailed quantitative results are given in Appendix B.

We compare the performance of the proposed model with SQAIR in the VLD setting for both the MNIST and dSprites datasets, and in the LD setting for dSprites. Note that we were not able to make SQAIR work in the other settings due to the high object density. As shown in Figure 2, SCALOR outperforms SQAIR in all these settings, obtaining significantly higher accuracy and recall. SQAIR either misses some objects or misidentifies distinct objects as one.
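For concreteness, the counting metric described above might be computed as follows. The per-frame averaging is our reading of the normalization, not necessarily the paper's exact protocol:

```python
def count_mae(pred_counts, true_counts):
    """Normalized counting MAE: absolute difference between the predicted
    and actual object counts, normalized by the actual count, averaged
    over frames. Assumes true_counts are strictly positive."""
    pairs = list(zip(pred_counts, true_counts))
    return sum(abs(p - t) / t for p, t in pairs) / len(pairs)

count_mae([10, 20], [10, 25])  # -> 0.1 (perfect first frame, 20% short on the second)
```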
In addition, SCALOR has lower values of CountMAE in comparison to SQAIR, showing that SCALOR can infer the number of objects in the scene more accurately. Furthermore, we observe that increasing the number of objects in the scene does not significantly impede SCALOR's tracking ability, which demonstrates the strength of SCALOR when applied to images with a high number of objects. SCALOR can achieve relatively high precision-recall even for scenes containing about 100 objects. Note that in the VHD case, the number of objects (about 90) in the first time-step exceeds the number of detection grid cells $(8\times 8)$ of the discovery model. Thus, the model can only detect up to 64 objects at the first time-step, and detects the rest in the following time-steps. This results in lower tracking and detection accuracy in the VHD case, as demonstrated in Figure 10 in Appendix C.

Figure 3 demonstrates the qualitative performance of SCALOR on dSprites and MNIST (HD). To clarify tracking consistency, bounding boxes with distinct IDs are represented by distinct colors. Discovered objects are emphasized by white circles. As shown in Figure 3(a), the discovery module of SCALOR can identify newly introduced objects and put them in the propagation list while the propagation module keeps tracking existing objects. Figure 3(c) shows object-wise rendering of the inferred $\mathbf{z}^{\mathrm{what}}$ latent variables. For clear visualization, object-wise rendering is shown only for a
Objects are sampled from discovery prior and propagated in the next time frames. a) Generated sequence. b) Generated dynamic background + +subset of objects present in all the time-steps. SCALOR infers consistent object-wise representation even when objects are largely occluded. + +Experiment 2 - Dynamic Background. Another interesting yet challenging setting is the presence of a dynamic background. As we can see in Figure 4, SCALOR can decompose the video sequence to a set of foreground objects as well as a dynamic background. While the background module can model the background dynamics, it can also model the dynamics of the whole environment including the moving objects. This usually brings competition between background module and foreground module on explaining the foreground part. However, the background module and foreground module cooperate properly in SCALOR. We believe this is because of our modeling that the background module obtains information from the foreground part by conditioning on the foreground latent variables. Table 1 also provides the tracking performance of the model for this setting to show how complex images affect the tracking quality. We observe that it achieves comparable performance to the default no-background setting. These results are provided in Table 1 under "SCALOR - BG". + +Experiment 3 - Future Time Prediction/Generation. This section showcases the generation ability of SCALOR via two experiments. The first experiment is aimed at showing conditional generation. Here, the model is provided with the first 5 frames, and then tested to generate the next 5 frames. Latent variables at each time-step are sampled independently from the prior distribution of the propagation step conditioned on latent variables from previous time-step. Because the focus is on conditional generation of background and objects, discovery prior of $\mathbf{z}^{\mathrm{pres}}$ is manually set to zero. Figure 5 shows a generated sample. 
The first 5 frames (before the red line) correspond to reconstruction while the next 5 frames represent conditional generation. As we can see in Figure 5, SCALOR can generate dynamic background and object trajectories that are reasonably consistent.

The second experiment showcases video generation from scratch. In this setting, the model generates all latent variables from the discovery prior at the first time-step. For the following time-steps, latent variables are sampled by conditioning on the latent variables of the previous time-steps. The object generation probability for each grid cell at the first time-step, i.e., $\mathbf{z}_{1,h,w}^{\mathrm{pres}}$ , is set to 0.2. Figure 6 shows one generated sample. At the first time-step, where all variables are sampled from a fixed prior, the background is generated reasonably, but the objects are generated only partially. This behavior is expected because newly introduced objects are mostly partially observed at image boundaries. This makes the inference network learn a $\mathbf{z}^{\mathrm{what}}$ representation corresponding to the partial views. Furthermore, since the $\mathbf{z}^{\mathrm{what}}$ prior is a standard Gaussian, it is hard to model the multi-modal object appearance. Yet, it is interesting to see that the conditional generation of each object's $\mathbf{z}^{\mathrm{what}}$ gradually converges to a consistent complete view of an object. This demonstrates that in our model, the distribution of $\mathbf{z}_{t\geq 2}^{\mathrm{what}}$ in propagation can actually achieve multi-modality when conditioned on $\mathbf{z}_1^{\mathrm{what}}$ from the first time-step. Furthermore, it shows that the propagation prior network is capable of maintaining a consistent representation of the object along the sequence, as is reflected in the dataset. Therefore, SCALOR is capable of generating objects with independent appearance and behavior.

# 5.2 TASK 2: REAL-WORLD SCENARIO

This section considers the performance of SCALOR on real-world natural video, which previous models have not been able to handle due to the scalability issue and the lack of a background model. Compared to synthetic data, the challenges in this setting are significantly more difficult. Specifically, we consider the Crowded Grand Central Station dataset (Zhou et al., 2012), which was collected from CCTV cameras at New York City's Grand Central Station. Due to the complex pedestrian behavior, the density of the dataset can be considered a mix of LD and HD.

![](images/0f53b3ad1748edbf9f29c91b33c3367164d7c01ea7d2e42eff23ed1874fdcbe2.jpg)
Figure 7: SCALOR's performance on the Grand Central Station dataset. (a) input sequence, (b) overall reconstruction of objects and background, (c) reconstruction of the extracted background, (d) segmentation for each object, where colors indicate tracking ID, (e) extracted object trajectories.

Figure 7 shows the tracking result on a sample sequence. We can see that SCALOR performs reasonably well on this pedestrian tracking dataset by maintaining consistent temporal trajectories. As shown in Figure 7(c), the background module infers the background component and reconstructs the extracted background correctly. As for object detection, SCALOR succeeds in accurate pedestrian detection and tracking. Furthermore, the foreground mask produced by $\mathbf{z}^{\mathrm{what}}$ provides the segmentation of each individual pedestrian, as shown in Figure 7(d). We draw tracking trajectories in Figure 7(e) for each pedestrian in the natural scene.
Trajectories in different colors correspond to different pedestrian ids inferred by the identity of the latent variable of each object. Additional figures and the dataset details are provided in Appendix D. + +Figure 14 in Appendix D shows the conditional generation. The first 5 frames are the inference reconstruction while the last 5 frames are model generation. Starting from the 6th frame, the latent transition of the propagation trackers is modeled by the sequential prior network. In the generation process, a different prior of $z^{\mathrm{pres}}$ is introduced in the discovery module to introduce new objects emerging in the scene (see Appendix E for more details). As shown in the figure, the model tends to generate movement aligned with the previous frames. This also applies to newly generated objects from the discovery phase as shown in Figure 14(f). Although the trajectories are consistent, the appearance of generated objects fails to maintain its consistency across different time frames. Noticeably, in the generated sequence the segmentation mask for each object tends to deform into a different shape. This may stem from imperfections during the inference that makes the learning of appearance transition difficult. Additional figures for generation are provided in Appendix D. + +The ground truth trajectories of the Grand Central Station Dataset were not available when this experiment was conducted. We instead use negative log-likelihood (NLL) to compare our model with two baselines, a sequential VAE and a Recurrent Latent Variable Model (VRNN). The sequential VAE has one latent variable $\mathbf{z}$ of dimensionality 64, and a sequential prior $p(\mathbf{z}_t|\mathbf{z}_{< t})$ . It is similar to our background module with the same number of latent variables for encoder and decoder. 
As for the VRNN, we implement a VRNN with 128 dimensions of the LSTM hidden state and latent variable $\mathbf{z}$ + +![](images/ae3cdf4953b73e3883e45403ab8ad597935df88c120b70832b707ad92e3fb0af.jpg) +(a) +Figure 8: Attention-Rejection Ablation study and computational efficiency comparison. (a) Propagation rate, (b) Number of discovered objects, (c) Inference time and, (d) Training convergence time + +![](images/3e03cf7e33494132d7150ff19f3a8e5b06a111fad96361e62df404cc24f5a70a.jpg) +(b) + +![](images/cce95ec688b8612a324205a0cd2fecaa2beeab21317c9b7185cdf1b1daeb459f.jpg) +(c) + +![](images/e92fc4acb42dff46e8be92a335d096d3e17f965a16d4ab450f2a5c2bd7646ef0.jpg) +(d) + +and choose a convolutional network as the image encoder and decoder. The NLL value for our model is 28.30, and for the VAE and VRNN baseline it is 27.59 and 27.79 respectively. While learning a highly structured representation, SCALOR can still obtain a comparable generation quality. + +# 5.3 ABLATION STUDY AND COMPUTATIONAL EFFICIENCY COMPARISON + +Ablation Study. We perform an ablation study on the rejection mechanism and the propagation attention mechanism. Here, we train our model on the moving dSprites dataset where the number of objects varies from 6 to 36. We use two metrics to evaluate different architectures. The first metric is the propagation rate, which measures how long an object is propagated. Since, by design, objects always stay in a scene for more than one frame, the propagation rate can be regarded as a proxy to measure the success rate of tracking through propagation. The second metric counts the number of discoveries that occurred in the whole sequence. If the discovery module dominates, a high value will be observed. As we can see in Figure 8 (a), with no rejection (Rej) or attention (Att) in propagation, the propagation rate goes down to $30\%$ in early training iterations and keeps decreasing as the training progresses. 
In this setting, we found that the model re-discovers objects in every frame without tracking them properly. This is also shown in Figure 8 (b), where it has a significantly larger number of discoveries. With the propagation attention mechanism, the propagation rate increases to $95\%$ . This is because the attention mechanism reduces the cost of finding the matching object between time-steps in propagation. The model thus favors tracking through propagation over re-discovery, which also helps prevent propagation collapse. However, as we can see in Figure 8 (b), the re-discovery problem still exists in this setting. The rejection mechanism, in contrast, makes the propagation rate converge to 1 even without propagation attention. The propagation attention together with the rejection produced more accurate boxes.

Computational Efficiency. In Figure 8 (c) and (d), we measure the inference time (c) and the training convergence time (d). For measuring the inference time, we consider a hypothetical situation with $N$ objects in the first frame. The model is expected to discover and then propagate them accordingly. Furthermore, we set the discovery capacity of SQAIR to be the same as the number of discovery grid cells in SCALOR. Figure 8(c) demonstrates how much time one forward step takes as the number of objects/discovery capacity increases from 4 to 64. SQAIR's speed decreases linearly because both its discovery and propagation mechanisms process objects sequentially. In contrast, SCALOR does not suffer from this, as discovery and propagation are done in parallel. Figure 8 (d) shows MSE convergence vs. training time when both models are trained on the MNIST VLD setting. SCALOR converges to a lower MSE than SQAIR and does so orders of magnitude faster.

# 6 CONCLUSION

We introduce SCALOR, a probabilistic generative world model aimed at visually modeling an environment of crowded dynamic objects.
With the proposed parallel discovery-propagation and proposal-rejection mechanisms, we increase the number of objects the model can handle from a few to around a hundred. Unlike previous models, the proposed model can also deal with dynamic backgrounds. These contributions consequently make our model applicable, for the first time in this line of research, to natural scenes. An interesting future direction is to introduce more structure to the background and to model interactions among objects and the background.

# ACKNOWLEDGMENTS

SA thanks Kakao Brain and Center for Super Intelligence (CSI) for their support.

# REFERENCES

Keni Bernardin and Rainer Stiefelhagen. Evaluating multiple object tracking performance: the CLEAR MOT metrics. Journal on Image and Video Processing, 2008:1, 2008.
Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015.
Christopher P Burgess, Loic Matthey, Nicholas Watters, Rishabh Kabra, Irina Higgins, Matt Botvinick, and Alexander Lerchner. MONet: Unsupervised scene decomposition and representation. arXiv preprint arXiv:1901.11390, 2019.
Eric Crawford and Joelle Pineau. Exploiting spatial invariance for scalable unsupervised object tracking. arXiv preprint arXiv:1911.09033, 2019a.
Eric Crawford and Joelle Pineau. Spatially invariant unsupervised object detection with convolutional neural networks. In Proceedings of AAAI, 2019b.
Martin Engelcke, Adam R. Kosiorek, Oiwi Parker Jones, and Ingmar Posner. GENESIS: Generative scene inference and sampling with object-centric latent representations, 2019.
SM Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, David Szepesvari, and Geoffrey E Hinton. Attend, infer, repeat: Fast scene understanding with generative models. In Advances in Neural Information Processing Systems, pp. 3225-3233, 2016.
Klaus Greff, Sjoerd van Steenkiste, and Jürgen Schmidhuber. Neural expectation maximization.
In Advances in Neural Information Processing Systems, pp. 6691-6701, 2017.
Klaus Greff, Raphaël Lopez Kaufmann, Rishabh Kabra, Nick Watters, Chris Burgess, Daniel Zoran, Loic Matthey, Matthew Botvinick, and Alexander Lerchner. Multi-object representation learning with iterative variational inference. arXiv preprint arXiv:1903.00450, 2019.
Zhen He, Jian Li, Daxue Liu, Hangen He, and David Barber. Tracking by animation: Unsupervised learning of multi-object attentive trackers. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1318-1327, 2019.
Jun-Ting Hsieh, Bingbin Liu, De-An Huang, Li F Fei-Fei, and Juan Carlos Niebles. Learning to decompose and disentangle representations for video prediction. In Advances in Neural Information Processing Systems, pp. 517-526, 2018.
Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144, 2016.
Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
Adam Kosiorek, Alex Bewley, and Ingmar Posner. Hierarchical attentive recurrent tracking. In Advances in Neural Information Processing Systems, pp. 3053-3061, 2017.
Adam Kosiorek, Hyunjik Kim, Yee Whye Teh, and Ingmar Posner. Sequential attend, infer, repeat: Generative modelling of moving objects. In Advances in Neural Information Processing Systems, pp. 8606-8616, 2018.
Zhixuan Lin, Yi-Fu Wu, Skand Vishwanath Peri, Weihao Sun, Gautam Singh, Fei Deng, Jindong Jiang, and Sungjin Ahn. Space: Unsupervised object-oriented scene representation via spatial attention and decomposition. In International Conference on Learning Representations, 2020.
Andriy Mnih and Danilo J Rezende. Variational inference for Monte Carlo objectives. arXiv preprint arXiv:1602.06725, 2016.
Hyeonseob Nam and Bohyung Han. Learning multi-domain convolutional neural networks for visual tracking.
In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4293-4302, 2016.

Guanghan Ning, Zhi Zhang, Chen Huang, Xiaobo Ren, Haohong Wang, Canhui Cai, and Zhihai He. Spatially supervised recurrent convolutional neural networks for visual object tracking. In 2017 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1-4. IEEE, 2017.
Wenzhe Shi, Jose Caballero, Ferenc Huszár, Johannes Totz, Andrew P Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1874-1883, 2016.
David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. Nature, 550(7676):354, 2017.
Ran Tao, Efstratios Gavves, and Arnold WM Smeulders. Siamese instance search for tracking. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1420-1429, 2016.
Sjoerd Van Steenkiste, Michael Chang, Klaus Greff, and Jürgen Schmidhuber. Relational neural expectation maximization: Unsupervised discovery of objects and their interactions. arXiv preprint arXiv:1802.10353, 2018.
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229-256, 1992.
SHI Xingjian, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In Advances in neural information processing systems, pp. 802-810, 2015.
Bolei Zhou, Xiaogang Wang, and Xiaoou Tang. Understanding collective crowd behaviors: Learning a mixture model of dynamic pedestrian-agents. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2871-2878. IEEE, 2012.
# A ALGORITHMS

Algorithm 1: Discovery Proposal-Rejection Inference

Input: $\mathbf{x}_{1:t}$ - image sequence up to the current time-step; $\mathbf{M}_t^P$ - propagated mask at the current time-step; $\tau$ - rejection hyper-parameter

/\* Feature encoding \*/
$\mathbf{e}_t^{\mathrm{img}} = \mathrm{ConvLSTM}(\mathbf{x}_{1:t})$;
$\mathbf{e}_t^{\mathrm{mask}} = \mathrm{MaskEncoder}(\mathbf{M}_t^P)$;
$\mathbf{e}_t^{\mathrm{agg}} = \mathrm{Concat}[\mathbf{e}_t^{\mathrm{img}}, \mathbf{e}_t^{\mathrm{mask}}]$

/\* Proposal-Rejection (done in parallel) \*/
AcceptedList$_t$ = [ ]
for $h \gets 1$ to $H$, $w \gets 1$ to $W$ do:
/\* Object proposal \*/
$\tilde{z}_{t,h,w}^{\mathrm{pres}} \sim \mathrm{Bern}(\cdot\,|\,f_{nn}^{\mathrm{pres}}(\mathbf{e}_{t,h,w}^{\mathrm{agg}}))$;
if $\tilde{z}_{t,h,w}^{\mathrm{pres}} == 0$ then continue;
$\tilde{\mathbf{z}}_{t,h,w}^{\mathrm{where}} \sim \mathcal{N}(\cdot\,|\,f_{nn}^{\mathrm{where}}(\mathbf{e}_{t,h,w}^{\mathrm{agg}}))$;
$\mathbf{g}_{t,h,w}^{\mathrm{att}} = \mathrm{STN}(\mathbf{x}_t, \tilde{\mathbf{z}}_{t,h,w}^{\mathrm{where}})$ /\* attended glimpse \*/;
$\tilde{\mathbf{z}}_{t,h,w}^{\mathrm{what}} \sim \mathcal{N}(\cdot\,|\,\mathrm{GlimpseEnc}(\mathbf{g}_{t,h,w}^{\mathrm{att}}))$;
$\mathbf{o}_{t,h,w}, \mathbf{m}_{t,h,w} = \mathrm{STN}^{-1}(\mathrm{GlimpseDec}(\tilde{\mathbf{z}}_{t,h,w}^{\mathrm{what}}), \tilde{\mathbf{z}}_{t,h,w}^{\mathrm{where}})$;
/\* Accept-reject test \*/
$\delta = \mathcal{A}(\mathbf{m}_{t,h,w}^D \cap \mathbf{M}_t^P) / \mathcal{A}(\mathbf{m}_{t,h,w}^D)$ /\* $\mathcal{A}$ - pixel area \*/;
if $\delta < \tau$ then AcceptedList$_t$.Add($\tilde{\mathbf{z}}_{t,h,w}^{\mathrm{what}}, \tilde{\mathbf{z}}_{t,h,w}^{\mathrm{where}}, \tilde{z}_{t,h,w}^{\mathrm{pres}}$)

Output: AcceptedList$_t$

Algorithm 2: Propagation Inference

Input: $\mathbf{x}_{1:t}$ - image sequence up to the current time-step; PropList$_{t-1} = \{\mathbf{z}_{t-1,n}, \mathbf{h}_{t-1,n}\}_{n\in\mathcal{O}_{t-1}}$ - latent variables from the previous time-step

/\* Feature encoding \*/
$\mathbf{e}_t^{\mathrm{img}} = \mathrm{ConvLSTM}(\mathbf{x}_{1:t})$

/\* Object tracking (done in parallel) \*/
PropList$_t$ = PropList$_{t-1}$
for $n \gets 1$ to $|\mathcal{O}_{t-1}|$ do:
$\mathbf{e}_{t,n}^{\mathrm{att}} = f_{nn}^{\mathrm{att}}(\mathrm{STN}(\mathbf{e}_t^{\mathrm{img}}, (\mathbf{z}_{t-1,n}^{\mathrm{pos}}, \mathbf{z}_{t-1,n}^{\mathrm{scale}})))$ /\* feature-map attention \*/;
$\mathbf{h}_{t,n} = \mathrm{GRU}(f_{nn}([\mathbf{e}_{t,n}^{\mathrm{att}}, \mathbf{z}_{t-1,n}^{\mathrm{what}}, \mathbf{z}_{t-1,n}^{\mathrm{pos}}, \mathbf{z}_{t-1,n}^{\mathrm{scale}}, z_{t-1,n}^{\mathrm{pres}}]), \mathbf{h}_{t-1,n})$;
$\mathbf{e}_{t,n}^{\mathrm{agg}} = f_{nn}^{\mathrm{agg}}([\mathbf{e}_{t,n}^{\mathrm{att}}, \mathbf{z}_{t-1,n}^{\mathrm{what}}, \mathbf{z}_{t-1,n}^{\mathrm{pos}}, \mathbf{z}_{t-1,n}^{\mathrm{scale}}, \mathbf{h}_{t,n}])$;
$\mathbf{z}_{t,n}^{\mathrm{pos}}, \mathbf{z}_{t,n}^{\mathrm{scale}} \sim \mathcal{N}(\cdot\,|\,f_{nn}^{\mathrm{where}}(\mathbf{e}_{t,n}^{\mathrm{agg}}))$;
$\mathbf{g}_{t,n}^{\mathrm{att}} = \mathrm{STN}(\mathbf{x}_t, (\mathbf{z}_{t,n}^{\mathrm{pos}}, \mathbf{z}_{t,n}^{\mathrm{scale}}))$ /\* attended glimpse \*/;
$\mathbf{z}_{t,n}^{\mathrm{what}} \sim \mathcal{N}(\cdot\,|\,\mathrm{GlimpseEnc}(\mathbf{g}_{t,n}^{\mathrm{att}}))$;
$z_{t,n}^{\mathrm{depth}} \sim \mathcal{N}(\cdot\,|\,f_{nn}(\mathbf{e}_{t,n}^{\mathrm{att}}, \mathbf{z}_{t,n}^{\mathrm{what}}, \mathbf{h}_{t,n}))$;
$z_{t,n}^{\mathrm{pres}} \sim \mathrm{Bern}(\cdot\,|\,f_{nn}^{\mathrm{pres}}(\mathbf{z}_{t,n}^{\mathrm{pos}}, \mathbf{z}_{t,n}^{\mathrm{scale}}, \mathbf{z}_{t,n}^{\mathrm{what}}, \mathbf{h}_{t,n}))$;
$\mathbf{z}_{t,n}^{\mathrm{where}} = (\mathbf{z}_{t,n}^{\mathrm{pos}}, \mathbf{z}_{t,n}^{\mathrm{scale}}, z_{t,n}^{\mathrm{depth}})$;
if $z_{t,n}^{\mathrm{pres}} == 1$ then PropList$_{t,n}$.Update($\mathbf{z}_{t,n}^{\mathrm{what}}, \mathbf{z}_{t,n}^{\mathrm{where}}, z_{t,n}^{\mathrm{pres}}$) else PropList$_{t,n}$.Delete()

Output: PropList$_t$

Algorithm 3: Background Module and Rendering

Input: $\mathbf{x}_t$ - image at the current time-step; $\mathbf{o}_t = \{\mathbf{o}_{t,n}\}_{n\in\mathcal{O}_t}$ - object RGB glimpses; $\mathbf{m}_t = \{\mathbf{m}_{t,n}\}_{n\in\mathcal{O}_t}$ - object mask glimpses; $\{(\mathbf{z}_{t,n}^{\mathrm{pos}}, \mathbf{z}_{t,n}^{\mathrm{scale}}), z_{t,n}^{\mathrm{pres}}\}_{n\in\mathcal{O}_t}$ - object position and presence latents

/\* Foreground object rendering (done in parallel) \*/
for $n \gets 1$ to $|\mathcal{O}_t|$ do:
$\mathbf{x}_{t,n}^{\mathrm{fg}} = \mathrm{STN}^{-1}(\mathbf{o}_{t,n}, (\mathbf{z}_{t,n}^{\mathrm{pos}}, \mathbf{z}_{t,n}^{\mathrm{scale}}))$;
$\gamma_{t,n} = \mathrm{STN}^{-1}(\mathbf{m}_{t,n} \cdot z_{t,n}^{\mathrm{pres}} \sigma(-z_{t,n}^{\mathrm{depth}}), (\mathbf{z}_{t,n}^{\mathrm{pos}}, \mathbf{z}_{t,n}^{\mathrm{scale}}))$
$\gamma_{t,n} = \mathrm{normalize}(\gamma_{t,n}, \forall n)$;
$\mathbf{x}_t^{\mathrm{fg}} = \sum_{n\in\mathcal{O}_t} \mathbf{x}_{t,n}^{\mathrm{fg}} \gamma_{t,n}$

/\* Foreground mask rendering \*/
for $n \gets 1$ to $|\mathcal{O}_t|$ do: $\mathbf{M}_{t,n} = \mathrm{STN}^{-1}(\mathbf{m}_{t,n}, (\mathbf{z}_{t,n}^{\mathrm{pos}}, \mathbf{z}_{t,n}^{\mathrm{scale}}))$
$\mathbf{M}_t = \min(\sum_{n\in\mathcal{O}_t} \mathbf{M}_{t,n}, 1)$

/\* Background rendering \*/
$\mathbf{e}^{\mathrm{bg}} = \mathrm{BackgroundEncoder}(\mathrm{Concat}[\mathbf{x}_t, (1 - \mathbf{M}_t)])$;
$\mathbf{z}^{\mathrm{bg}} \sim \mathcal{N}(\cdot\,|\,f_{nn}^{\mathrm{bg}}(\mathbf{e}^{\mathrm{bg}}))$;
$\mathbf{x}_t^{\mathrm{bg}} = \mathrm{BackgroundDecoder}(\mathbf{z}^{\mathrm{bg}})$

/\* Foreground-background combination \*/
$\mathbf{x}_t = \mathbf{x}_t^{\mathrm{fg}} + (1 - \mathbf{M}_t) \odot \mathbf{x}_t^{\mathrm{bg}}$

Output: $\mathbf{x}_t$

# B ADDITIONAL QUANTITATIVE RESULTS

Table 1 reports Multi-Object Tracking Accuracy (MOTA) as well as precision and recall of the inferred bounding boxes (Bernardin & Stiefelhagen, 2008).

Table 2 compares SCALOR to SQAIR and VRNN in terms of reconstruction error (MSE) and negative log-likelihood (NLL). NLL is computed per pixel across the whole sequence.
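As a point of reference, MOTA aggregates misses (false negatives), false positives, and identity switches over all frames, normalized by the total number of ground-truth objects (Bernardin & Stiefelhagen, 2008). A minimal sketch, assuming the per-frame error counts have already been obtained by matching tracker output against ground truth (the function and field names are hypothetical):

```python
def mota(frames):
    """MOTA = 1 - (misses + false positives + id switches) / ground-truth count,
    accumulated over all frames (Bernardin & Stiefelhagen, 2008)."""
    misses = sum(f["misses"] for f in frames)
    false_pos = sum(f["false_positives"] for f in frames)
    id_switches = sum(f["id_switches"] for f in frames)
    num_gt = sum(f["num_gt"] for f in frames)  # ground-truth objects per frame, summed
    return 1.0 - (misses + false_pos + id_switches) / num_gt
```

Note that MOTA can be negative when a tracker makes more errors than there are ground-truth objects.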
| Dataset | Experimental Setting | MOTA ↑ | Precision ↑ | Recall ↑ | MAE ↓ |
|---|---|---|---|---|---|
| dSprites | SCALOR - VHD | 84.6% | 96.8% | 88.1% | 0.091 |
| dSprites | SCALOR - HD | 92.8% | 97.6% | 95.4% | 0.033 |
| dSprites | SCALOR - MD | 93.5% | 97.8% | 95.9% | 0.041 |
| dSprites | SCALOR - LD | 92.5% | 97.9% | 94.9% | 0.059 |
| dSprites | SCALOR - VLD | 93.8% | 96.3% | 98.9% | 0.067 |
| dSprites | SQAIR - LD | 20.2% | 97.6% | 64.1% | 0.335 |
| dSprites | SQAIR - VLD | 72.5% | 98.4% | 75.4% | 0.239 |
| MNIST | SCALOR - MD | 86.9% | 99.0% | 87.9% | 0.113 |
| MNIST | SCALOR - LD | 85.9% | 99.0% | 87.1% | 0.124 |
| MNIST | SCALOR - VLD | 94.8% | 97.7% | 98.8% | 0.040 |
| MNIST | SQAIR - VLD | 78.5% | 99.9% | 78.8% | 0.214 |
| | SCALOR - BG | 91.9% | 97.2% | 95.3% | 0.034 |
| | SCALOR - LG | 92.3% | 97.0% | 95.7% | 0.032 |

Table 1: Quantitative results of SCALOR for different experimental settings. "SCALOR - BG" refers to the dynamic-background task of Experiment 3. "SCALOR - LG" refers to the length-generalization experiment in Section C.4.
| Metric | Method | dSprites VLD | dSprites LD | dSprites MD | dSprites HD | MNIST VLD | MNIST LD | MNIST MD |
|---|---|---|---|---|---|---|---|---|
| NLL | SCALOR | 22.16 | 22.41 | 22.56 | 22.88 | 7.45 | 7.54 | 7.82 |
| NLL | VRNN | 22.33 | 22.80 | 23.28 | 23.30 | 7.62 | 7.70 | 7.71 |
| NLL | SQAIR | 22.63 | 40.50 | - | - | 7.75 | - | - |
| MSE | SCALOR | 0.0010 | 0.0030 | 0.0024 | 0.0015 | 0.0029 | 0.0026 | 0.0010 |
| MSE | VRNN | 0.0018 | 0.0071 | 0.0105 | 0.0190 | 0.0022 | 0.0054 | 0.0035 |
| MSE | SQAIR | 0.0135 | 0.0918 | - | - | 0.0084 | - | - |
Table 2: Negative log-likelihood and mean squared error for SCALOR vs. baselines.

# C ADDITIONAL EXPERIMENTS ON DSPRITES AND MNIST

# C.1 FREQUENT DENSE DISCOVERY

![](images/3174db884255a37ac5f9963343778629ec454b36f6991b0b7b4e6d1ec07160e8.jpg)
Figure 9: Frequent Dense Discovery experiment: a) First row: inferred bounding boxes superimposed on the original sequence. b) Second row: discovery bounding boxes. c) Third row: discovery reconstruction. d) Last row: propagation reconstruction.

This experiment evaluates the ability to discover many newly introduced objects across time-steps. This is important because in many applications only key-frames of a video are available, i.e., frames at which significant changes happen. An example of a key-frame is a sudden change in the observer's viewpoint, which introduces many new objects into the frame. Figure 9 shows one such instance. In this setting, 10-15 objects are introduced at the first, fifth, and ninth time-steps, respectively. As shown, SCALOR is able to discover many new objects in each frame.

# C.2 VERY HIGH DENSITY

![](images/612085b7e08b379663580f45696c2323aee20a6ee5608fdbf6fd1ae646f37835.jpg)
Figure 10: Sample from the Very High Density setting: a) First row: inferred bounding boxes, b) Second row: overall reconstruction, c) Third row: discovery bounding boxes, d) Fourth row: discovery reconstruction, e) Fifth row: propagation bounding boxes, f) Sixth row: propagation reconstruction.

Figure 10 shows samples from the Very High Density experiment. This experiment places 90-110 objects in the overall environment, around 90 of which are visible at every time-step on average. The discovery module contains $8 \times 8$ detection grid cells and thus can only detect up to 64 objects. Interestingly, as shown, the model identifies as many objects as possible in the first time-step and discovers the remaining ones in the second time-step.
Furthermore, if too many objects are densely packed in one local region of the space, SCALOR behaves similarly, detecting them over multiple frames.

# C.3 ABILITY TO HANDLE OVERLAP AND OCCLUSION

![](images/8a2dd5d4371503078ee1e8b477518aef15752edc7a931843f60f113755507e38.jpg)
Figure 11: Highly occluded setting: sample sequences in which objects move towards each other more aggressively, so overlap and occlusion happen more frequently.

Figure 11 shows a sample of the MNIST setting where objects move towards each other more aggressively. In this setting, there is a higher chance of overlap and occlusion. As shown, the identities of the objects are preserved when they overlap with each other.

# C.4 GENERALIZATION TEST

![](images/5ee2047b42eb3a1c8e930566af0da4ba3669c872a5f66f1e88c248338c99a8e0.jpg)
Figure 12: Generalization with respect to longer sequences. a) First row: bounding boxes for the first 10 time-steps. b) Second row: bounding boxes for the last 10 time-steps.

![](images/0188e2c7e946773de497c5a70895f8e2ce352a0cc428aa9104fc3f09c2aab1cc.jpg)
Figure 13: Generalization experiment: a) First row: generalization to unseen shapes. b) Second row: generalization to a larger number of objects.

We conduct three sets of experiments on generalization. In the first experiment, we investigate generalization to longer sequences: the model is trained on trajectories of length 10 while being tested on trajectories of length 20. In the second experiment, we evaluate generalization to more crowded scenes: the model is trained on 15-25 objects and tested on 50-60 objects. The third experiment tests generalization to unseen objects: the model is trained only on moving MNIST images containing digits 0 to 5 while being tested on images containing digits 6 to 9. Figures 12 and 13 show samples from these experiments.
"SCALOR - LG" in Table 1 also provides tracking results for the "length generalization" setting. + +# D EXPERIMENT DETAIL AND ADDITIONAL QUALITATIVE RESULTS ON GRAND CENTRAL STATION DATASET + +For natural-scene experiments, we spatially split the video into 8 parts and create a dataset of $400\mathrm{k}$ frames in total. We choose the first $360\mathrm{k}$ frames for training and $40\mathrm{k}$ frames for testing. Since the movement of pedestrians is too slow under 25 fps in the original video, we treat every other 7 frames as two consecutive frames in the dataset sequence. The length of the input sequence is 10, and each image is resized to $128 \times 128$ . + +![](images/3d176f00d0117ffde1a5fb330c0c32a602b0acc60133aef2c8b4091aa8384fc4.jpg) +Figure 14: Conditional generation on Grand Central Station Dataset. The first 5 frames are observed, the next 5 frames are generated (after the red line). (a) Overall reconstruction and generation, (b) Conditional generation of background, (c) Conditional generation of segmented objects, (d) Conditional generation of movement trajectories + +The generation result mentioned in Section 5.2 is provided in Figure 14. We provide additional qualitative results in Figures 15 to 20. Rows from top to bottom represent input sequence, overall reconstruction, extracted background, foreground segmentation mask with IDs, center position from each $\mathbf{z}_{t,n}^{\mathrm{pos}}$ latent, and extracted trajectories based on the transition of center position. Additionally, Figures 18 to 20 show examples of conditional generation, where images starting from time-step 6 (after the red line) are generated sequences. Rows from top to bottom are conditional generations of the overall images, conditional generations of background, extracted conditional generation of each segmentation mask, and conditional generation of trajectories. 
![](images/5041685ac63b75402836101fee690d5ffc37ddecba7423b35583bd53cf86118b.jpg)
Figure 15: Tracking sample 1

![](images/1cbcb0dc72c741cbe278b5d6af84b513eec3e75e2905a819b2ef2d41eb9a26e1.jpg)
Figure 16: Tracking sample 2

![](images/c2689dae2e9ce965b52b66452d8f10a8436c21c77aa5e7ffc398ba2c8597ffea.jpg)
Figure 17: Tracking sample 3

![](images/1665d6bbe9db49b5941614ce0d496869088af6ddecdb5b0b82fe36709feb8f5f.jpg)
Figure 18: Conditional generation sample 1

![](images/180b28201609b584893723d2084a030d2b6da431c3b7f93398413c6e024bbcaf.jpg)
Figure 19: Conditional generation sample 2

![](images/48ecba639532a6f9abf3bf4662652339449d51dea5f12bc342ef03174468cd5f.jpg)
Figure 20: Conditional generation sample 3

# E MODEL ARCHITECTURE DETAILS

In this section, we provide additional details of the architecture and hyperparameters used for pedestrian detection. When a new frame is provided, the network uses a fully convolutional encoder to obtain an $H \times W$ feature map. The feature map is fed into a convolutional LSTM to model the sequential information along the sequence. The convolutional LSTM is shared across the discovery and propagation modules. The discovery and propagation modules also share the $\mathbf{z}^{\mathrm{what}}$ encoder and decoder. The encoder is a convolutional network followed by one fully connected layer, while the glimpse decoder uses a fully convolutional network with a sub-pixel layer (Shi et al., 2016) for upsampling. The background module shares a similar structure with the $\mathbf{z}^{\mathrm{what}}$ encoder and decoder. It takes a 4-dimensional input, i.e., RGB and the foreground mask, and outputs a 3-dimensional image. We use GRUs in the propagation trackers and in the prior transition networks.

We choose a batch size of 20 for the natural-scene experiments and a batch size of 16 for the MNIST/dSprites experiments. The learning rate is fixed at 4e-5 for natural-image experiments and 5e-4 for dSprites/MNIST experiments.
We use RMSprop for optimization during training. The standard deviation of the image distribution is chosen to be 0.1 for the natural-scene experiments and 0.2 for the toy experiments. The prior for all Gaussian posteriors is set to the standard normal. For the pedestrian tracking dataset, we constrain the range of $\mathbf{z}^{\mathrm{scale}}$ so that the inferred width can vary from 5.2 to 11.7 pixels and the height from 12.0 to 28.8 pixels, both with the middle value as the prior in discovery. Similarly, we constrain $\mathbf{z}^{\mathrm{scale}}$ on the synthetic datasets so that it can vary from half to 1.5 times the actual object size. The $\mathbf{z}^{\mathrm{pos}}$ variable in the propagation phase is modeled as the deviation of the position from the previous time-step instead of the global coordinate. The prior for $z^{\mathrm{pres}}$ in discovery is set to 0.1 at the beginning of training and quickly annealed to 1e-3 for natural-image experiments and 1e-4 for dSprites/MNIST experiments. The temperature used for modelling $z^{\mathrm{pres}}$ is set to 1.0 at the beginning and annealed to 0.3 after 20k iterations.
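The annealing schedules above can be sketched as a simple exponential interpolation (the exact schedule shape is an assumption; the text only fixes the start and end values and, for the temperature, the 20k-iteration horizon):

```python
def exp_anneal(step, start, end, decay_steps):
    """Exponentially interpolate a hyperparameter from `start` to `end` over
    `decay_steps` training iterations, then hold at `end`."""
    if step >= decay_steps:
        return end
    return start * (end / start) ** (step / decay_steps)

# E.g. the Gumbel-Softmax temperature for z_pres: 1.0 -> 0.3 over 20k iterations;
# the discovery z_pres prior anneals 0.1 -> 1e-4 (dSprites/MNIST) the same way.
temperature = [exp_anneal(s, 1.0, 0.3, 20_000) for s in (0, 10_000, 20_000)]
```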
Full details of the architecture will be released along with our code.
# SEED RL: SCALABLE AND EFFICIENT DEEP-RL WITH ACCELERATED CENTRAL INFERENCE

Lasse Espeholt, Raphaël Marinier, Piotr Stanczyk, Ke Wang & Marcin Michalski

Brain Team

Google Research

{lespeholt, raphaelm, stanczyk, kewa, michalski}@google.com

# ABSTRACT

We present a modern scalable reinforcement learning agent called SEED (Scalable, Efficient Deep-RL). By effectively utilizing modern accelerators, we show that it is not only possible to train on millions of frames per second but also to lower the cost of experiments compared to current methods.
We achieve this with a simple architecture that features centralized inference and an optimized communication layer. SEED adopts two state-of-the-art distributed algorithms, IMPALA/V-trace (policy gradients) and R2D2 (Q-learning), and is evaluated on Atari-57, DeepMind Lab and Google Research Football. We improve the state of the art on Football and are able to reach state of the art on Atari-57 three times faster in wall-time. For the scenarios we consider, a $40\%$ to $80\%$ cost reduction for running experiments is achieved. The implementation along with experiments is open-sourced so results can be reproduced and novel ideas tried out.

GitHub: http://github.com/google-research/seed_rl.

# 1 INTRODUCTION

The field of reinforcement learning (RL) has recently seen impressive results across a variety of tasks. This has in part been fueled by the introduction of deep learning in RL and the introduction of accelerators such as GPUs. In very recent history, a focus on massive scale has been key to solving a number of complicated games such as AlphaGo (Silver et al., 2016), Dota (OpenAI, 2018) and StarCraft 2 (Vinyals et al., 2017).

The sheer amount of environment data needed to solve tasks trivial to humans makes distributed machine learning unavoidable for fast experiment turnaround time. RL inherently comprises heterogeneous tasks: running environments, model inference, model training, replay buffers, etc., and current state-of-the-art distributed algorithms do not use compute resources efficiently for these tasks. The amount of data and the inefficient use of resources make experiments unreasonably expensive. The two main challenges addressed in this paper are scaling reinforcement learning and optimizing the use of modern accelerators, CPUs and other resources.

We introduce SEED (Scalable, Efficient, Deep-RL), a modern RL agent that scales well, is flexible and efficiently utilizes available resources.
It is a distributed agent where model inference is done centrally, combined with fast streaming RPCs to reduce the overhead of inference calls. We show that with simple methods, one can achieve state-of-the-art results faster on a number of tasks. For optimal performance, we use TPUs (cloud.google.com/tpu/) and TensorFlow 2 (Abadi et al., 2015) to simplify the implementation. The cost of running SEED is analyzed against IMPALA (Espeholt et al., 2018), which is a commonly used state-of-the-art distributed RL algorithm (Veeriah et al. (2019); Li et al. (2019); Deverett et al. (2019); Omidshafiei et al. (2019); Vezhnevets et al. (2019); Hansen et al. (2019); Schaarschmidt et al.; Tirumala et al. (2019), ...). We show cost reductions of up to $80\%$ while being significantly faster. When scaling SEED to many accelerators, it can train on millions of frames per second. Finally, the implementation is open-sourced together with examples of running it at scale on Google Cloud (see Appendix A.4 for details), making it easy to reproduce results and try novel ideas.

# 2 RELATED WORK

For value-based methods, an early attempt at scaling DQN was Nair et al. (2015), which used asynchronous SGD (Dean et al., 2012) together with a distributed setup consisting of actors, replay buffers, parameter servers and learners. Since then, it has been shown that asynchronous SGD leads to poor sample complexity while not being significantly faster (Chen et al., 2016; Espeholt et al., 2018). Along with advances for Q-learning such as prioritized replay (Schaul et al., 2015), dueling networks (Wang et al., 2016), and double-Q learning (van Hasselt, 2010; Van Hasselt et al., 2016), the state of the art in distributed Q-learning was improved with Ape-X (Horgan et al., 2018).
Recently, R2D2 (Kapturowski et al., 2018) achieved impressive results across all the Arcade Learning Environment (ALE) (Bellemare et al., 2013) games by incorporating value-function rescaling (Pohlen et al., 2018) and LSTMs (Hochreiter & Schmidhuber, 1997) on top of the advancements of Ape-X.

There have also been many approaches to scaling policy gradients methods. A3C (Mnih et al., 2016) introduced asynchronous single-machine training using asynchronous SGD and relied exclusively on CPUs. GPUs were later introduced in GA3C (Mahmood, 2017) with improved speed but poor convergence results due to an inherently on-policy method being used in an off-policy setting. This was corrected by V-trace (Espeholt et al., 2018) in the IMPALA agent, both for single-machine training and, using a simple actor-learner architecture, scaled to more than a thousand machines. PPO (Schulman et al., 2017) serves a similar purpose to V-trace and was used in OpenAI Rapid (Petrov et al., 2018) with the actor-learner architecture extended with Redis (redis.io), an in-memory data store, and was scaled to 128,000 CPUs. For inexpensive environments like ALE, a single machine with multiple accelerators can achieve results quickly (Stooke & Abbeel, 2018). This approach was taken a step further by converting ALE to run on a GPU (Dalton et al., 2019).

A third class of algorithms is evolutionary algorithms. With simplicity and massive scale, they have achieved impressive results on a number of tasks (Salimans et al., 2017; Such et al., 2017).

Besides algorithms, there exist a number of useful libraries and frameworks for reinforcement learning. ELF (Tian et al., 2017) is a framework for efficiently interacting with environments, avoiding Python global-interpreter-lock contention. Dopamine (Castro et al., 2018) is a flexible research-focused RL framework with a strong emphasis on reproducibility.
It has state-of-the-art agent implementations such as Rainbow (Hessel et al., 2017) but is single-threaded. TF-Agents (Guadarrama et al., 2018) and rlpyt (Stooke & Abbeel, 2019) both have a broader focus with implementations for several classes of algorithms but, as of writing, do not have distributed capability for large-scale RL. RLlib (Liang et al., 2017) provides a number of composable distributed components and a communication abstraction, with a number of algorithm implementations such as IMPALA and Ape-X. Concurrently with this work, TorchBeast (Küttler et al., 2019) was released, which is an implementation of single-machine IMPALA with remote environments.

SEED is most closely related to IMPALA, but has a number of key differences that combine the benefits of single-machine training with a scalable architecture. Inference is moved to the learner but environments run remotely. This is combined with a fast communication layer to mitigate latency issues from the increased number of remote calls. The result is significantly faster training at costs reduced by as much as $80\%$ for the scenarios we consider. Along with a policy gradients (V-trace) implementation, we also provide an implementation of state-of-the-art Q-learning (R2D2). In this work we use TPUs, but in principle any modern accelerator could be used in their place. TPUs are particularly well-suited given their high throughput for machine-learning applications and their scalability. Up to 2048 cores are connected with a fast interconnect providing $100+$ petaflops of compute.

# 3 ARCHITECTURE

Before introducing the architecture of SEED, we first analyze the generic actor-learner architecture used by IMPALA, which is also used in various forms in Ape-X, OpenAI Rapid and others. An overview of the architecture is shown in Figure 1a.

A large number of actors repeatedly read model parameters from the learner (or parameter servers).
Each actor then uses the local model to sample actions and generate a full trajectory of observations, actions, and policy logits/Q-values. Finally, this trajectory along with the recurrent state is transferred

![](images/917c1bda197b4b4aefe5952227e67aeaa6254fefec51d30304f18b6c1069ad7c.jpg)
(a) IMPALA architecture (distributed version)

![](images/13b4167d301e70b51fddb3d106021f57dd48f1b9a4b0e31387742aedb8079f62.jpg)
(b) SEED architecture, see detailed replay architecture in Figure 3.
Figure 1: Overview of architectures

to a shared queue or replay buffer. Asynchronously, the learner reads batches of trajectories from the queue/replay buffer and optimizes the model.

There are a number of reasons why this architecture falls short:

1. Using CPUs for neural network inference: The actor machines are usually CPU-based (occasionally GPU-based for expensive environments). CPUs are known to be computationally inefficient for neural networks (Raina et al., 2009). When the computational needs of a model increase, the time spent on inference starts to outweigh the environment step computation. The solution is to increase the number of actors, which increases the cost and affects convergence (Espeholt et al., 2018).
2. Inefficient resource utilization: Actors alternate between two tasks: environment steps and inference steps. The compute requirements for the two tasks are often dissimilar, which leads to poor utilization or slow actors. E.g. some environments are inherently single-threaded while neural networks are easily parallelizable.
3. Bandwidth requirements: Model parameters, recurrent state and observations are transferred between actors and learners. Relative to the model parameters, the observation trajectory often accounts for only a few percent of the transferred data. Furthermore, memory-based models send large recurrent states, increasing bandwidth requirements.
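A back-of-envelope calculation makes point 3 concrete. The sizes below are illustrative (a 30M-parameter fp32 model and a 20-step unroll of 84x84x3 uint8 frames), not figures from the paper:

```python
# Per unroll, an IMPALA-style actor downloads the full model and uploads the
# trajectory of observations. Illustrative sizes, not the paper's exact model.
params_bytes = 30_000_000 * 4      # 30M parameters, 4 bytes each (fp32)
obs_bytes = 20 * 84 * 84 * 3       # 20 steps of 84x84 RGB uint8 observations
obs_share = obs_bytes / (params_bytes + obs_bytes)
# obs_share is well under 1% here: the parameter transfer dominates bandwidth,
# which is why keeping the model on the learner removes most of the traffic.
```

Under these assumptions the observations account for roughly 0.35% of the traffic; larger models only widen the gap.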
While single-machine approaches such as GA3C (Mahmood, 2017) and single-machine IMPALA avoid using CPUs for inference (1) and have no network bandwidth requirements (3), they are restricted by resource usage (2) and the scale required for many types of environments.

The architecture used in SEED (Figure 1b) solves the problems mentioned above. Inference and trajectory accumulation are moved to the learner, which makes it conceptually a single-machine setup with remote environments (besides handling failures). Moving the logic effectively makes the actors a small loop around the environments. For every single environment step, the observations are sent to the learner, which runs inference and sends actions back to the actors. This introduces a new problem: 4. Latency.

To minimize latency, we created a simple framework that uses gRPC (grpc.io), a high-performance RPC library. Specifically, we employ streaming RPCs where the connection from actor to learner is kept open and metadata is sent only once. Furthermore, the framework includes a batching module that efficiently batches multiple actor inference calls together. In cases where actors can fit on the same machine as the learner, gRPC uses unix domain sockets and thus reduces latency, CPU and syscall overhead. Overall, the end-to-end latency, including network and inference, is lower for a number of the models we consider (see Appendix A.7).

![](images/aeae9a128a2d0380835b6640d65acb8e2cbe05ed25db67ecd5e921a65cadc98f.jpg)
(a) Off-policy in IMPALA. For the entire trajectory the policy stays the same. By the time the trajectory is sent to the queue for optimization, the policy has changed twice.

![](images/6ecb44f1102b80775f0ea96e3936f58b3d426b9cac64309eb06f7d44523618a3.jpg)
(b) Off-policy in SEED. Optimizing a model has an immediate effect on the policy. Thus, the trajectory consists of actions sampled from many different policies $(\pi_{\theta_t},\pi_{\theta_{t + 1}},\ldots)$ .
![](images/989cd79aca29c044b1e41b4a55659afade0ab179a602fd7d8ee84c4451af2f3e.jpg)
Figure 2: Variants of "near on-policy" when evaluating a policy $\pi$ while asynchronously optimizing model parameters $\theta$ .
Figure 3: Detailed learner architecture in SEED (with an optional replay buffer).

The IMPALA and SEED architectures differ in that for SEED, at any point in time, only one copy of the model exists, whereas for distributed IMPALA each actor has its own copy. This changes the way the trajectories are off-policy. In IMPALA (Figure 2a), an actor uses the same policy $\pi_{\theta_t}$ for an entire trajectory. For SEED (Figure 2b), the policy during an unroll of a trajectory may change multiple times, with later steps using more recent policies closer to the one used at optimization time.

A detailed view of the learner in the SEED architecture is shown in Figure 3. Three types of threads are running: 1. inference, 2. data prefetching and 3. training. Inference threads receive a batch of observations, rewards and episode termination flags. They load the recurrent states and send the data to the inference TPU core. The sampled actions and new recurrent states are received, and the actions are sent back to the actors while the latest recurrent states are stored. When a trajectory is fully unrolled, it is added to a FIFO queue or replay buffer and later sampled by data prefetching threads. Finally, the trajectories are pushed to a device buffer for each of the TPU cores taking part in training. The training thread (the main Python thread) takes the prefetched trajectories, computes gradients using the training TPU cores and applies the gradients to the models of all TPU cores (inference and training) synchronously. The ratio of inference and training cores can be adjusted for maximum throughput and utilization.
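The inference path can be sketched as a queue-based batching loop. This is a minimal sketch with our own names, not the released SEED API: actor RPC handlers enqueue requests, one inference thread batches them, runs the model once per batch, and routes each action back to the waiting call.

```python
import queue
import threading

# Shared inbox: each actor RPC handler enqueues (env_id, observation,
# reply_queue) and blocks on its reply_queue until an action arrives.
inbox = queue.Queue()

def inference_thread(model, max_batch):
    """Batch pending requests, run one forward pass, route actions back."""
    while True:
        batch = [inbox.get()]                         # block for the first request
        while len(batch) < max_batch and not inbox.empty():
            batch.append(inbox.get())                 # opportunistically fill batch
        env_ids, observations, replies = zip(*batch)  # env_ids would index recurrent state
        actions = model(list(observations))           # one forward pass for the batch
        for reply, action in zip(replies, actions):
            reply.put(action)                         # unblocks that actor's RPC
```

A real implementation would use the `env_ids` to load and store recurrent states and would run the model on a TPU/GPU core; the in-process queue stands in for the streaming gRPC layer.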
The architecture scales to a TPU pod (2048 cores) by round-robin assigning actors to TPU host machines, and having separate inference threads for each TPU host. When actors wait for a response from the learner, they are idle, so in order to fully utilize the machines, we run multiple environments on a single actor.

To summarize, we solve the issues listed previously by:

1. Moving inference to the learner and thus eliminating any neural network related computations from the actors. Increasing the model size in this architecture will not increase the need for more actors (in fact the opposite is true).
2. Batching inference on the learner and having multiple environments on the actor. This fully utilizes both the accelerators on the learner and the CPUs on the actors. The number of TPU cores for inference and training is finely tuned to match the inference and training workloads. All factors help reduce the cost of experiments.
3. Everything involving the model stays on the learner and only observations and actions are sent between the actors and the learner. This reduces bandwidth requirements by as much as $99\%$ .
4. Using streaming gRPC that has minimal latency and minimal overhead and integrating batching into the server module.

We provide the following two algorithms implemented in the SEED framework: V-trace and Q-learning.

# 3.1 V-TRACE

One of the algorithms we adapt into the framework is V-trace (Espeholt et al., 2018). We do not include any of the additions that have been proposed on top of IMPALA such as van den Oord et al. (2018); Gregor et al. (2019). The additions can also be applied to SEED, and since they are more computationally expensive, they would benefit from the SEED architecture.

# 3.2 Q-LEARNING

We show the versatility of SEED's architecture by fully implementing R2D2 (Kapturowski et al., 2018), a state of the art distributed value-based agent.
R2D2 itself builds on a long list of improvements over DQN (Mnih et al., 2015): double Q-learning (van Hasselt, 2010; Van Hasselt et al., 2016), multi-step bootstrap targets (Sutton, 1988; Sutton & Barto, 1998; Mnih et al., 2016), dueling network architecture (Wang et al., 2016), prioritized distributed replay buffer (Schaul et al., 2015; Horgan et al., 2018), value-function rescaling (Pohlen et al., 2018), LSTMs (Hochreiter & Schmidhuber, 1997) and burn-in (Kapturowski et al., 2018).

Instead of a distributed replay buffer, we show that it is possible to keep the replay buffer on the learner with a straightforward, flexible implementation. This reduces complexity by removing one type of job in the setup. It has the drawback of being limited by the memory of the learner, but this was not a problem in our experiments by a large margin: a replay buffer of $10^{5}$ trajectories of length 120 of $84 \times 84$ uncompressed grayscale observations (following R2D2's hyperparameters) takes 85 GB of RAM, while Google Cloud machines can offer hundreds of GBs. However, nothing prevents the use of a distributed replay buffer together with SEED's central inference, in cases where a much larger replay buffer is needed.

# 4 EXPERIMENTS

We evaluate SEED on a number of environments: DeepMind Lab (Beattie et al., 2016), Google Research Football (Kurach et al., 2019) and Arcade Learning Environment (Bellemare et al., 2013).

# 4.1 DEEPMIND LAB AND V-TRACE

DeepMind Lab is a 3D environment based on the Quake 3 engine. It features mazes, laser tag and memory tasks. We evaluate on four commonly used tasks. The action set used is from Espeholt et al. (2018), although for some tasks, higher return can be achieved with bigger action sets such as the one introduced in Hessel et al. (2018). For all experiments, we used an action repeat of 4 and the number of frames in plots is listed as environment frames (equivalent to 4 times the number of steps).
The same set of 24 hyperparameter combinations and the same model (the ResNet from IMPALA) were used for both agents. More details can be found in Appendix A.1.2.

# 4.1.1 STABILITY

The first experiment evaluates the effect of the change in off-policy behavior described in Figure 2. Exactly the same hyperparameters are used for both IMPALA and SEED, including the number of environments used. As shown in Figure 4, the stability across hyperparameters of SEED is slightly better than IMPALA's, while achieving slightly higher final returns.

![](images/5ff794288bfb92cf5c0cbb93831735a1ce1c6b4db9092578994b4925bdaf9378.jpg)
Figure 4: Comparison of IMPALA and SEED under the exact same conditions (175 actors, same hyperparameters, etc.). The plots show hyperparameter combinations sorted by final performance.

![](images/0d03e7aad13143277abd902914e8a1388ad7c3a7512ee5c9ae4ba8f967c3bc6f.jpg)
Figure 5: Training on 4 DeepMind Lab tasks. Each curve is the best of the 24 runs based on final return following the evaluation procedure in Espeholt et al. (2018). Sample complexity is maintained up to 8 TPU v3 cores, which leads to 11x faster training than the IMPALA baseline. Top row: X-axis is environment frames (number of frames = 4x number of steps). Bottom row: X-axis is hours.

# 4.1.2 SPEED

For evaluating performance, we compare IMPALA using an Nvidia P100 with SEED using multiple accelerator setups. They are evaluated on the same set of hyperparameters. We find that SEED is $2.5\mathrm{x}$ faster than IMPALA using 2 TPU v3 cores (see Table 1), while using only $77\%$ more environments and $41\%$ less CPU (see section 4.4.1). Scaling from 2 to 8 cores results in an additional $4.4\mathrm{x}$ speedup with sample complexity maintained (Figure 5). The speed-up is greater than $4\mathrm{x}$ due to using 6 cores for training and 2 for inference instead of 1 core for each, resulting in better utilization.
A $5.3\mathrm{x}$ speed-up instead of $4.4\mathrm{x}$ can be obtained by increasing the batch size linearly with the number of training cores, but contrary to related research (You et al., 2017b; Goyal et al., 2017) we found that an increased batch size hurts sample complexity even with methods like warm-up and actor de-correlation (Stooke & Abbeel, 2018). We hypothesize that this is due to the limited actor and environment diversity in DeepMind Lab tasks. In McCandlish et al. (2018), they found that Pong scales poorly with batch size but that Dota can be trained effectively with a batch size five orders of magnitude larger. Note that for most models, the effective batch size is batch size $\cdot$ trajectory length. In Figure 5, we include a run from a limited sweep on "explore_goal_locations_small" using 64 cores with an almost linear speed-up. Wall-time performance is improved but sample complexity is heavily penalized.

When using an Nvidia P100, SEED is $1.58\mathrm{x}$ slower than IMPALA. A slowdown is expected because SEED performs inference on the accelerator. SEED does however use significantly fewer CPUs and has lower cost (see Appendix A.6). The TPU version of SEED has been optimized, but it is likely that improvements can be found for SEED with P100.
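The effective-batch-size rule above can be made explicit. The trajectory length of 20 below is illustrative, not a value stated in this section:

```python
# Effective batch size as described above: each optimizer update consumes
# whole unrolls, so it touches batch_size * trajectory_length environment
# steps. trajectory_length=20 is an assumed, illustrative value.
def effective_batch_size(batch_size, trajectory_length):
    return batch_size * trajectory_length

base = effective_batch_size(32, 20)     # 640 steps per update (small setup)
scaled = effective_batch_size(384, 20)  # 7,680 steps per update (64-core batch)
```

This is why growing the batch size linearly with training cores multiplies the number of environment steps consumed per update, and with limited environment diversity those extra steps are highly correlated.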
| Architecture | Accelerators | Environments | Actor CPUs | Batch Size | FPS | Ratio |
| --- | --- | --- | --- | --- | --- | --- |
| **DeepMind Lab** | | | | | | |
| IMPALA | Nvidia P100 | 176 | 176 | 32 | 30K | |
| SEED | Nvidia P100 | 176 | 44 | 32 | 19K | 0.63x |
| SEED | TPU v3, 2 cores | 312 | 104 | 32 | 74K | 2.5x |
| SEED | TPU v3, 8 cores | 1,560 | 520 | 48$^{1}$ | 330K | 11.0x |
| SEED | TPU v3, 64 cores | 12,480 | 4,160 | 384$^{1}$ | 2.4M | 80.0x |
| **Google Research Football** | | | | | | |
| IMPALA, Default | 2 x Nvidia P100 | 400 | 400 | 128 | 11K | |
| SEED, Default | TPU v3, 2 cores | 624 | 416 | 128 | 18K | 1.6x |
| SEED, Default | TPU v3, 8 cores | 2,496 | 1,664 | 160$^{3}$ | 71K | 6.5x |
| SEED, Medium | TPU v3, 8 cores | 1,550 | 1,032 | 160$^{3}$ | 44K | |
| SEED, Large | TPU v3, 8 cores | 1,260 | 840 | 160$^{3}$ | 29K | |
| SEED, Large | TPU v3, 32 cores | 5,040 | 3,360 | 640$^{3}$ | 114K | 3.9x |
| **Arcade Learning Environment** | | | | | | |
| R2D2 | Nvidia V100 | 256 | N/A | 64 | 85K$^{2}$ | |
| SEED | Nvidia V100 | 256 | 55 | 64 | 67K | 0.79x |
| SEED | TPU v3, 8 cores | 610 | 213 | 64 | 260K | 3.1x |
| SEED | TPU v3, 8 cores | 1,200 | 419 | 256 | 440K$^{4}$ | 5.2x |

$^{1}$ 6/8 cores used for training. $^{2}$ Each of the 256 R2D2 actors runs at 335 FPS (information from the R2D2 authors). $^{3}$ 5/8 cores used for training. $^{4}$ No frame stacking.

Table 1: Performance of SEED, IMPALA and R2D2.

# 4.2 GOOGLE RESEARCH FOOTBALL AND V-TRACE

Google Research Football is an environment similar to FIFA video games (ea.com). We evaluate SEED on the "Hard" task introduced in Kurach et al. (2019), where the baseline IMPALA agent achieved a positive average score after 500M frames using the engineered "checkpoint" reward function but a negative average score using only the score as a reward signal. In all experiments we use the model from Kurach et al. (2019) and sweep over 40 hyperparameter sets with 3 seeds each. See all hyperparameters in Appendix A.2.1.

# 4.2.1 SPEED

Compared to the baseline IMPALA using 2 Nvidia P100s (and CPUs for inference) in Kurach et al. (2019), we find that using 2 TPU v3 cores in SEED improves the speed by $1.6\mathrm{x}$ (see Table 1). Additionally, using 8 cores adds another $4.1\mathrm{x}$ speed-up. A speed-up of $4.5\mathrm{x}$ is achievable if the batch size is increased linearly with the number of training cores (5). However, we found that increasing the batch size, as with DeepMind Lab, hurts sample complexity.

# 4.2.2 INCREASED MAP SIZE

More compute power allows us to increase the size of the Super Mini Map (SMM) input state. By default its size is $96 \times 72$ (x 4) and it represents players, opponents, ball and the active player as 2d bit maps. We evaluated three sizes: (1) Default $96 \times 72$ , (2) Medium $120 \times 90$ and (3) Large $144 \times 108$ . As shown in Table 1, switching from Default to Large SMM results in a $60\%$ speed reduction. However, increasing the map size improves the final score (Table 2). This may suggest that the bit map representation is not granular enough for the task. For both reward functions, we improve upon the results of Kurach et al.
(2019). Finally, training on 4B frames improves the results by a significant margin (an average score of 0.46 vs. 4.76 in the case of the scoring reward function).

# 4.3 ARCADE LEARNING ENVIRONMENT AND Q-LEARNING

![](images/8d2e8b7becc9f03d2bd419a294d6c7296631a8487f09d7138077315a493dc99f.jpg)
Figure 6: Median human-normalized score on Atari-57 for SEED and related agents. SEED was run with 1 seed for each game. All agents use up to 30 random no-ops for evaluation. Left: X-axis is hours. Right: X-axis is environment frames (a frame is 1/4th of an environment step due to action repeat). SEED reaches state of the art performance 3.1x faster (wall-time) than R2D2.

![](images/62168af7d5243faf2afa2c8b525d8577163599dd2ab55a4651bbb2e49b9132eb.jpg)

We evaluate our implementation of R2D2 in the SEED architecture on 57 Atari 2600 games from the ALE benchmark. This benchmark has been the testbed for most recent deep reinforcement learning agents because of the diversity of visuals and game mechanics.

We follow the same evaluation procedure as R2D2. In particular, we use the full action set, no loss-of-life-as-episode-end heuristic, and start episodes with up to 30 random no-ops. We use 8 TPU v3 cores and 610 actors to maximize TPU utilization. This achieves 260K environment FPS and performs 9.5 network updates per second. Other hyperparameters are taken from R2D2 and are fully reproduced in Appendix A.3.1.

Figure 6 shows the median human-normalized scores for SEED, R2D2, Ape-X and Rainbow. As expected, SEED has similar sample efficiency to R2D2, but it is $3.1\mathbf{x}$ faster (see Table 1). This allows us to reach a median human-normalized score of $1880\%$ in just 1.8 days of training instead of 5, establishing a new wall-time state of the art on Atari-57.

With the number of actors increased to 1,200, the batch size increased to 256 and without frame stacking, we can achieve $440\mathrm{K}$ environment FPS and learn using 16 batches per second.
However, this significantly degrades sample efficiency and limits the final median human-normalized score to approximately $1000\%$ . + +
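For reference, the human-normalized score behind Figure 6 and the 1880% median rescales each game's score so random play maps to 0 and human play to 1; the baseline values below are illustrative, not the published per-game constants:

```python
# Standard human-normalized score used for Atari-57 medians: 0 = random
# play, 1 = human play. Baselines here are illustrative placeholders.
def human_normalized(score, random_score, human_score):
    return (score - random_score) / (human_score - random_score)

# E.g. with an assumed random baseline of 200 and human baseline of 7000:
at_human_level = human_normalized(7000.0, 200.0, 7000.0)  # 1.0, i.e. 100%
```

The reported median of 1880% means that on the median game, the agent's normalized score is 18.8 times the human-minus-random gap.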
| Architecture | Accelerators | SMM | Median | Max |
| --- | --- | --- | --- | --- |
| **Scoring reward** | | | | |
| IMPALA | 2 x Nvidia P100 | Default | -0.74 | 0.06 |
| SEED | TPU v3, 2 cores | Default | -0.72 | -0.12 |
| SEED | TPU v3, 8 cores | Default | -0.83 | -0.02 |
| SEED | TPU v3, 8 cores | Medium | -0.74 | 0.12 |
| SEED | TPU v3, 8 cores | Large | -0.69 | 0.46 |
| SEED | TPU v3, 32 cores | Large | n/a | 4.76$^{1}$ |
| **Checkpoint reward** | | | | |
| IMPALA | 2 x Nvidia P100 | Default | -0.27 | 1.63 |
| SEED | TPU v3, 2 cores | Default | -0.44 | 1.64 |
| SEED | TPU v3, 8 cores | Default | -0.68 | 1.55 |
| SEED | TPU v3, 8 cores | Medium | -0.52 | 1.76 |
| SEED | TPU v3, 8 cores | Large | -0.38 | 1.86 |
| SEED | TPU v3, 32 cores | Large | n/a | 7.66$^{1}$ |

$^{1}$ 32-core experiments trained on 4B frames with a limited sweep.

Table 2: Google Research Football "Hard" using two kinds of reward functions. For each reward function, 40 hyperparameter sets ran with 3 seeds each, which were averaged after 500M frames of training. The table shows the median and maximum of the 40 averaged values. This is a similar setup to Kurach et al. (2019), although we ran 40 hyperparameter sets vs. 100 and did not rerun our best models using 5 seeds.

# 4.4 COST COMPARISONS

With the growing complexity of environments as well as the size of neural networks used in reinforcement learning, the need to run big experiments increases, making cost reductions important. In this section we analyze how increasing the complexity of the network impacts training cost for SEED and IMPALA. In our experiments we use the pricing model of Google AI Platform, ML Engine.$^{2}$

Our main focus is on obtaining the lowest possible cost per step while maintaining training speed. Distributed experiments from Espeholt et al. (2018) (IMPALA) used between 150 and 500 CPUs, which translates into $7.125 to $23.75 per hour of actor cost. The cost of the single-GPU learner is $1.46 per hour. Due to the relatively high expense of the actors, our main focus is to reduce the number of actors and to obtain high CPU utilization.
| Resource | Cost per hour |
| --- | --- |
| CPU core | $0.0475 |
| Nvidia Tesla P100 | $1.46 |
| TPU v3 core | $1.00 |

Table 3: Cost of cloud resources as of Sep. 2019.

# 4.4.1 DEEPMIND LAB

Our DeepMind Lab experiment is based on the ResNet model from IMPALA. We evaluate increasing the number of filters in the convolutional layers: (1) Default 1x, (2) Medium 2x and (3) Large 4x. Experiments are performed on the "explore_goal_locations_small" task. IMPALA uses a single Nvidia Tesla P100 GPU for training while inference is done on CPU by the actors. SEED uses one TPU v3 core for training and one for inference.

For IMPALA, actor CPU utilization is close to $100\%$ , but in the case of SEED, only the environment runs on an actor, leaving the CPU idle while waiting for the inference step. To improve utilization, a single SEED actor runs multiple environments (between 12 and 16) on a 4-CPU machine.
| Model | Actors | CPUs | Envs. | Speed | Cost/1B | Cost ratio |
| --- | --- | --- | --- | --- | --- | --- |
| **IMPALA** | | | | | | |
| Default | 176 | 176 | 176 | 30k | $90 | |
| Medium | 130 | 130 | 130 | 16.5k | $128 | |
| Large | 100 | 100 | 100 | 7.3k$^{1}$ | $236 | |
| **SEED** | | | | | | |
| Default | 26 | 104 | 312 | 74k | $25 | 3.60 |
| Medium | 12 | 48 | 156 | 34k | $35 | 3.66 |
| Large | 6 | 24 | 84 | 16k | $54 | 4.37 |

$^{1}$ The batch size was lowered from 32 to 16 due to limited memory on the Nvidia P100.

Table 4: Training cost on DeepMind Lab for 1 billion frames.

As Table 4 shows, SEED turns out to be not only faster but also cheaper to run. The cost ratio between IMPALA and SEED is around 4. Due to the high cost of inference on a CPU, IMPALA's cost grows with the complexity of the network. In the case of SEED, increased network size has a smaller impact on overall costs, as inference accounts for about $30\%$ of the costs (see Appendix A.5).

# 4.4.2 GOOGLE RESEARCH FOOTBALL

We evaluate the cost of running experiments with Google Research Football with different sizes of the Super Mini Map representation (the size has virtually no impact on the environment's speed). For IMPALA, two Nvidia P100 GPUs were used for training, and SEED used one TPU v3 core for training and one for inference.

For IMPALA, we use one core per actor while SEED's actors run multiple environments per actor for better CPU utilization (8 cores, 12 environments).

For the default size of the SMM, per-experiment training cost differs by only $68\%$ . As the Google Research Football environment is more expensive than DeepMind Lab, training and inference costs
| Model | Actors | CPUs | Envs. | Speed | Cost/1B | Cost ratio |
| --- | --- | --- | --- | --- | --- | --- |
| **IMPALA** | | | | | | |
| Default | 400 | 400 | 400 | 11k | $553 | |
| Medium | 300 | 300 | 300 | 7k | $681 | |
| Large | 300 | 300 | 300 | 5.3k | $899 | |
| **SEED** | | | | | | |
| Default | 52 | 416 | 624 | 17.5k | $345 | 1.68 |
| Medium | 31 | 248 | 310 | 10.5k | $365 | 1.87 |
| Large | 21 | 168 | 252 | 7.5k | $369 | 2.70 |

Table 5: Training cost on Google Research Football for 1 billion frames.

have a relatively smaller impact on the overall experiment cost. The difference increases as the size of the SMM increases, with SEED needing relatively fewer actors.

# 4.4.3 ARCADE LEARNING ENVIRONMENT

Due to the lack of a baseline implementation for R2D2, we do not include cost comparisons for this environment. However, a comparison of relative costs between ALE, DeepMind Lab and Football suggests that savings should be even more significant. ALE is the fastest among the three environments, making inference proportionally the most expensive. Appendix A.5 presents the training cost split between actors and the learner for different setups.

# 5 CONCLUSION

We introduced and analyzed a new reinforcement learning agent architecture that is faster and less costly per environment frame than previous distributed architectures by better utilizing modern accelerators. It achieved an 11x wall-time speedup on DeepMind Lab compared to a strong IMPALA baseline while keeping the same sample efficiency, improved on state of the art scores on Google Research Football, and achieved state of the art scores on Atari-57 3.1x faster (wall-time) than previous research.

The agent is open-sourced and packaged to easily run on Google Cloud. We hope that this will accelerate reinforcement learning research by allowing the community to replicate state-of-the-art results and build on top of them.

As a demonstration of the potential of this new agent architecture, we were able to scale it to millions of frames per second in some realistic scenarios (an 80x speedup compared to previous research). However, this requires increasing the number of environments and using larger batch sizes, which hurts sample efficiency in the environments tested.
Preserving sample efficiency with larger batch-sizes has been studied to some extent in RL (Stooke & Abbeel, 2018; McCandlish et al., 2018) and in the context of supervised learning (You et al., 2017b;a; 2019; Goyal et al., 2017). We believe it is still an open and increasingly important area of research in order to scale up reinforcement learning. + +# ACKNOWLEDGMENTS + +We would like to thank Steven Kapturowski, Georg Ostrovski, Tim Salimans, Aidan Clark, Manuel Kroiss, Matthieu Geist, Leonard Hussenot, Alexandre Passos, Marvin Ritter, Neil Zeghidour, Marc G. Bellemare and Sylvain Gelly for comments and insightful discussions and Marcin Andrychowicz, Dan Abolafia, Damien Vincent, Dehao Chen, Eugene Brevdo and Ruoxin Sang for their code contributions. + +# REFERENCES + +Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew + +Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaojiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software available from tensorflow.org. +Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Kuttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, Julian Schrittwieser, Keith Anderson, Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King, Demis Hassabis, Shane Legg, and Stig Petersen. Deepmind lab. CoRR, abs/1612.03801, 2016. URL http://arxiv.org/abs/1612.03801. 
+Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. J. Artif. Intell. Res. (JAIR), 47:253-279, 2013. +Pablo Samuel Castro, Subhodeep Moitra, Carles Gelada, Saurabh Kumar, and Marc G. Bellemare. *Dopamine: A Research Framework for Deep Reinforcement Learning*. 2018. URL http://arxiv.org/abs/1812.06110. +Jianmin Chen, Rajat Monga, Samy Bengio, and Rafal Józefowicz. Revisiting distributed synchronous SGD. CoRR, abs/1604.00981, 2016. URL http://arxiv.org/abs/1604.00981. +Steven Dalton, Iuri Frosio, and Michael Garland. Gpu-accelerated atari emulation for reinforcement learning. CoRR, abs/1907.08467, 2019. URL http://arxiv.org/abs/1907.08467. +Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Marc'aurlio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, Quoc V. Le, and Andrew Y. Ng. Large scale distributed deep networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 25, pp. 1223-1231. Curran Associates, Inc., 2012. URL http://papers.nips.cc/paper/4687-large-scale-distributed-deep-networks.pdf. +Ben Deverett, Ryan Faulkner, Meire Fortunato, Greg Wayne, and Joel Z Leibo. Interval timing in deep reinforcement learning agents. arXiv preprint arXiv:1905.13469, 2019. +Lasse Espeholt, Hubert Soyer, Rémi Munos, Karen Simonyan, Volodymyr Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, and Koray Kavukcuoglu. IMPALA: scalable distributed deep-rl with importance weighted actor-learner architectures. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, July 10-15, 2018, pp. 1406-1415, 2018. URL http://proceedings.mlr.press/v80/espeholt18a.html. +Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. 
In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249-256, 2010. +Priya Goyal, Piotr Dollár, Ross B. Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: training imageNet in 1 hour. CoRR, abs/1706.02677, 2017. URL http://arxiv.org/abs/1706.02677. +Karol Gregor, Danilo Jimenez Rezende, Frederic Besse, Yan Wu, Hamza Merzic, and Aaron van den Oord. Shaping belief states with generative environment models for RL. CoRR, abs/1906.09237, 2019. URL http://arxiv.org/abs/1906.09237. +Sergio Guadarrama, Anoop Korattikara, Oscar Ramirez, Pablo Castro, Ethan Holly, Sam Fishman, Ke Wang, Ekaterina Gonina, Neal Wu, Chris Harris, Vincent Vanhoucke, and Eugene Brevdo. TF-Agents: A library for reinforcement learning in tensorflow. https://github.com/tensorflow/agents, 2018. URL https://github.com/tensorflow/agents. [Online; accessed 25-June-2019]. + +Steven Hansen, Will Dabney, Andre Barreto, Tom Van de Wiele, David Warde-Farley, and Volodymyr Mnih. Fast task inference with variational intrinsic successor features. arXiv preprint arXiv:1906.05030, 2019. +Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Daniel Horgan, Bilal Piot, Mohammad Gheshlaghi Azar, and David Silver. Rainbow: Combining improvements in deep reinforcement learning. CoRR, abs/1710.02298, 2017. URL http:// arxiv.org/abs/1710.02298. +Matteo Hessel, Hubert Soyer, Lasse Espeholt, Wojciech Czarnecki, Simon Schmitt, and Hado van Hasselt. Multi-task deep reinforcement learning with popart. CoRR, abs/1809.04474, 2018. URL http://arxiv.org/abs/1809.04474. +Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735-1780, 1997. +Dan Horgan, John Quan, David Budden, Gabriel Barth-Maron, Matteo Hessel, Hado van Hasselt, and David Silver. Distributed prioritized experience replay. In ICLR, 2018. 
+Steven Kapturowski, Georg Ostrovski, John Quan, Remi Munos, and Will Dabney. Recurrent experience replay in distributed reinforcement learning. 2018. +Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), 2014. +Karol Kurach, Anton Raichuk, Piotr Stanczyk, Michal Zajac, Olivier Bachem, Lasse Espeholt, Carlos Riquelme, Damien Vincent, Marcin Michalski, Olivier Bousquet, et al. Google research football: A novel reinforcement learning environment. arXiv preprint arXiv:1907.11180, 2019. +Heinrich Kuttler, Nantas Nardelli, Thibaut Lavril, Marco Selvatici, Viswanath Sivakumar, Tim Rocktäschel, and Edward Grefenstette. TorchBeast: A PyTorch Platform for Distributed RL. arXiv preprint arXiv:1910.03552, 2019. URL https://github.com/facebookresearch/torchbeast. +Ang Li, Huiyi Hu, Piotr Mirowski, and Mehrdad Farajtabar. Cross-view policy learning for street navigation. arXiv preprint arXiv:1906.05930, 2019. +Eric Liang, Richard Liaw, Philipp Moritz, Robert Nishihara, Roy Fox, Ken Goldberg, Joseph E Gonzalez, Michael I Jordan, and Ion Stoica. Rllib: Abstractions for distributed reinforcement learning. arXiv preprint arXiv:1712.09381, 2017. +Ashique Mahmood. Incremental Off-policy Reinforcement Learning Algorithms. PhD thesis, University of Alberta, 2017. +Sam McCandlish, Jared Kaplan, Dario Amodei, and OpenAI Dota Team. An empirical model of large-batch training. CoRR, abs/1812.06162, 2018. URL http://arxiv.org/abs/1812.06162. +Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015. +Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. 
In International conference on machine learning, pp. 1928-1937, 2016. +Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro De Maria, Vedavyas Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, Shane Legg, Volodymyr Mnih, Koray Kavukcuoglu, and David Silver. Massively parallel methods for deep reinforcement learning. arXiv preprint arXiv:1507.04296, 2015. +Shayegan Omidshafiei, Daniel Hennes, Dustin Morrill, Rémi Munos, Julien Pérolat, Marc Lanctot, Audrunas Gruslys, Jean-Baptiste Lespiau, and Karl Tuyls. Neural replicator dynamics. CoRR, abs/1906.00190, 2019. URL http://arxiv.org/abs/1906.00190. + +OpenAI. Openai five. https://blog.openai.com/openai-five/, 2018. +Michael Petrov, Szymon Sidor, Susan Zhang, Jakub Pachocki, Przemysław Debiak, Filip Wolski, Christy Dennison, Henrique Ponde, Greg Brockman, Jie Tang, David Farhi, Brooke Chan, and Jonathan Raiman. Openai rapid. https://openai.com/blog/openai-five/, 2018. Accessed: 2019-09-14. +Tobias Pohlen, Bilal Piot, Todd Hester, Mohammad Gheshlaghi Azar, Dan Horgan, David Budden, Gabriel Barth-Maron, Hado Van Hasselt, John Quan, Mel Večerík, et al. Observe and look further: Achieving consistent performance on atari. arXiv preprint arXiv:1805.11593, 2018. +Rajat Raina, Anand Madhavan, and Andrew Y Ng. Large-scale deep unsupervised learning using graphics processors. In Proceedings of the 26th annual international conference on machine learning, pp. 873-880. ACM, 2009. +Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, and Ilya Sutskever. Evolution strategies as a scalable alternative to reinforcement learning. arXiv preprint arXiv:1703.03864, 2017. +Michael Schaarschmidt, Sven Mika, Kai Fricke, and Eiko Yoneki. Rlgraph: Modular computation graphs for deep reinforcement learning. +Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. In Proc. of ICLR, 2015. 
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017. URL http://arxiv.org/abs/1707.06347.
David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484, 2016.
Adam Stooke and Pieter Abbeel. Accelerated methods for deep reinforcement learning. CoRR, abs/1803.02811, 2018. URL http://arxiv.org/abs/1803.02811.
Adam Stooke and Pieter Abbeel. rlpyt: A research code base for deep reinforcement learning in PyTorch. arXiv preprint arXiv:1909.01500, 2019.
Felipe Petroski Such, Vashisht Madhavan, Edoardo Conti, Joel Lehman, Kenneth O. Stanley, and Jeff Clune. Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning. CoRR, abs/1712.06567, 2017. URL http://arxiv.org/abs/1712.06567.
Richard S Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9-44, 1988.
Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, and C. Lawrence Zitnick. ELF: An extensive, lightweight and flexible research platform for real-time strategy games. Advances in Neural Information Processing Systems (NIPS), 2017.
Dhruva Tirumala, Hyeonwoo Noh, Alexandre Galashov, Leonard Hasenclever, Arun Ahuja, Greg Wayne, Razvan Pascanu, Yee Whye Teh, and Nicolas Heess. Exploiting hierarchy for learning and transfer in KL-regularized RL. CoRR, abs/1903.07438, 2019. URL http://arxiv.org/abs/1903.07438.
Aäron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. CoRR, abs/1807.03748, 2018. URL http://arxiv.org/abs/1807.03748.
Hado van Hasselt.
Double Q-learning. In Advances in Neural Information Processing Systems 23, pp. 2613-2621, 2010.

Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double Q-learning. In Thirtieth AAAI Conference on Artificial Intelligence, 2016.
Vivek Veeriah, Matteo Hessel, Zhongwen Xu, Richard Lewis, Janarthanan Rajendran, Junhyuk Oh, Hado van Hasselt, David Silver, and Satinder Singh. Discovery of useful questions as auxiliary tasks. arXiv preprint arXiv:1909.04607, 2019.
Alexander Sasha Vezhnevets, Yuhuai Wu, Remi Leblond, and Joel Leibo. Options as responses: Grounding behavioural hierarchies in multi-agent RL. arXiv preprint arXiv:1906.01470, 2019.
Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Kuttler, John Agapiou, Julian Schrittwieser, et al. StarCraft II: A new challenge for reinforcement learning. arXiv preprint arXiv:1708.04782, 2017.
Ziyu Wang, Tom Schaul, Matteo Hessel, Hado van Hasselt, Marc Lanctot, and Nando de Freitas. Dueling network architectures for deep reinforcement learning. In Proceedings of The 33rd International Conference on Machine Learning, pp. 1995-2003, 2016.
Yang You, Igor Gitman, and Boris Ginsburg. Large batch training of convolutional networks. arXiv preprint arXiv:1708.03888, 2017a.
Yang You, Igor Gitman, and Boris Ginsburg. Scaling SGD batch size to 32K for ImageNet training. arXiv preprint arXiv:1708.03888, 6, 2017b.
Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, and Cho-Jui Hsieh. Large batch optimization for deep learning: Training BERT in 76 minutes. arXiv preprint arXiv:1904.00962, 2019.

# A APPENDIX

# A.1 DEEPMIND LAB

# A.1.1 LEVEL CACHE

We enable DeepMind Lab's option for using a level cache for both SEED and IMPALA, which greatly reduces CPU usage and results in stable actor CPU usage at close to $100\%$ of a single core.
+ +# A.1.2 HYPERPARAMETERS + +
| Parameter | Range |
| --- | --- |
| Action Repetitions | 4 |
| Discount Factor (γ) | {.99, .993, .997, .999} |
| Entropy Coefficient | Log-uniform (1e-5, 1e-3) |
| Learning Rate | Log-uniform (1e-4, 1e-3) |
| Optimizer | Adam |
| Adam Epsilon | {1e-1, 1e-3, 1e-5, 1e-7} |
| Unroll Length/n-step | 32 |
| Value Function Coefficient | .5 |
| V-trace λ | {.9, .95, .99, 1.} |
Table 6: Hyperparameter ranges used in the stability experiments.

# A.2 GOOGLE RESEARCH FOOTBALL

# A.2.1 HYPERPARAMETERS
| Parameter | Range |
| --- | --- |
| Action Repetitions | 1 |
| Discount Factor (γ) | {.99, .993, .997, .999} |
| Entropy Coefficient | Log-uniform (1e-7, 1e-3) |
| Learning Rate | Log-uniform (1e-5, 1e-3) |
| Optimizer | Adam |
| Unroll Length/n-step | 32 |
| Value Function Coefficient | .5 |
| V-trace λ | {.9, .95, .99, 1} |
Table 7: Hyperparameter ranges used for experiments with scoring and checkpoint rewards.

# A.3 ALE

# A.3.1 HYPERPARAMETERS

We use the same hyperparameters as R2D2 (Kapturowski et al., 2018), except that we use more actors in order to best utilize 8 TPU v3 cores. For completeness, agent hyperparameters are listed in Table 8 and environment processing parameters in Table 9. We use the same neural network architecture as R2D2, namely 3 convolutional layers with filter sizes [32, 64, 64], kernel sizes $[8 \times 8, 4 \times 4, 3 \times 3]$, strides $[4, 2, 1]$, ReLU activations, and "valid" padding. They feed into a linear layer with 512 units, which feeds into an LSTM layer with 512 hidden units (that also takes the one-hot encoded previous action and the previous environment reward as input), which in turn feeds into dueling heads with 512 hidden units. We use Glorot uniform (Glorot & Bengio, 2010) initialization.
| Parameter | Value |
| --- | --- |
| Number of actors | 610 |
| Replay ratio | 0.75 |
| Sequence length | 120, incl. prefix of 40 burn-in transitions |
| Replay buffer size | $10^5$ part-overlapping sequences |
| Minimum replay buffer size | 5000 part-overlapping sequences |
| Priority exponent | 0.9 |
| Importance sampling exponent | 0.6 |
| Discount γ | 0.997 |
| Training batch size | 64 |
| Inference batch size | 64 |
| Optimizer | Adam ($\mathrm{lr} = 10^{-4}$, $\epsilon = 10^{-3}$) (Kingma & Ba, 2014) |
| Target network update interval | 2500 updates |
| Value function rescaling | $x \rightarrow \mathrm{sign}(x)(\sqrt{\vert x\vert + 1} - 1) + \epsilon x$, $\epsilon = 10^{-3}$ |
| Gradient norm clipping | 80 |
| n-steps | 5 |
| Epsilon-greedy | Training: $i$-th actor of $N$, $i \in [0, N)$, uses $\epsilon_i = 0.4^{1 + 7i/(N - 1)}$; Evaluation: $\epsilon = 10^{-3}$ |
| Sequence priority | $p = \eta \max_i \delta_i + (1 - \eta)\bar{\delta}$, where $\eta = 0.9$ and $\delta_i$ are per-step absolute TD errors |
+ +Table 8: SEED agent hyperparameters for Atari-57. + +
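The value-function rescaling, the per-actor exploration schedule, and the sequence priority in Table 8 can be written out directly. The following is a sketch of the formulas exactly as listed, not SEED's actual implementation:

```python
import numpy as np

EPS = 1e-3  # the ε = 10^-3 used by the rescaling, per Table 8

def rescale(x):
    """Value-function rescaling h(x) = sign(x)(sqrt(|x| + 1) - 1) + ε·x."""
    return np.sign(x) * (np.sqrt(np.abs(x) + 1.0) - 1.0) + EPS * x

def actor_epsilon(i, num_actors):
    """Per-actor exploration rate ε_i = 0.4^(1 + 7i/(N - 1)) for actor i of N."""
    return 0.4 ** (1.0 + 7.0 * i / (num_actors - 1))

def sequence_priority(td_errors, eta=0.9):
    """Mixed max/mean priority p = η·max_i δ_i + (1 - η)·mean δ_i over absolute TD errors."""
    abs_td = np.abs(np.asarray(td_errors))
    return eta * abs_td.max() + (1.0 - eta) * abs_td.mean()
```

With 610 actors, actor 0 explores with ε = 0.4 while the last actor uses ε = 0.4⁸ ≈ 6.6e-4, close to the evaluation ε of 10⁻³.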
| Parameter | Value |
| --- | --- |
| Observation size | 84 × 84 |
| Resizing method | Bilinear |
| Random no-ops | Uniform in [1, 30], applied before action repetition |
| Frame stacking | 4 |
| Action repetition | 4 |
| Frame pooling | 2 |
| Color mode | Grayscale |
| Terminal on loss of life | False |
| Max frames per episode | 108K (30 minutes) |
| Reward clipping | No |
| Action set | Full (18 actions) |
| Sticky actions | No |
Table 9: Atari-57 environment processing parameters.

# A.3.2 FULL RESULTS ON ATARI-57
| Game | Human | R2D2 | SEED 8 TPU v3 cores |
| --- | --- | --- | --- |
| Alien | 7127.7 | 229496.9 | 262197.4 |
| Amidar | 1719.5 | 29321.4 | 28976.4 |
| Assault | 742.0 | 108197.0 | 102954.7 |
| Asterix | 8503.3 | 999153.3 | 983821.0 |
| Asteroids | 47388.7 | 357867.7 | 296783.0 |
| Atlantis | 29028.1 | 1620764.0 | 1612438.0 |
| BankHeist | 753.1 | 24235.9 | 47080.6 |
| BattleZone | 37187.5 | 751880.0 | 777200.0 |
| BeamRider | 16926.5 | 188257.4 | 173505.3 |
| Berzerk | 2630.4 | 53318.7 | 57530.4 |
| Bowling | 160.7 | 219.5 | 204.2 |
| Boxing | 12.1 | 98.5 | 100.0 |
| Breakout | 30.5 | 837.7 | 854.1 |
| Centipede | 12017.0 | 599140.3 | 574373.1 |
| ChopperCommand | 7387.8 | 986652.0 | 994991.0 |
| CrazyClimber | 35829.4 | 366690.7 | 337756.0 |
| Defender | 18688.9 | 665792.0 | 555427.2 |
| DemonAttack | 1971.0 | 140002.3 | 143748.6 |
| DoubleDunk | -16.4 | 23.7 | 24.0 |
| Enduro | 860.5 | 2372.7 | 2369.3 |
| FishingDerby | -38.7 | 85.8 | 75.0 |
| Freeway | 29.6 | 32.5 | 33.0 |
| Frostbite | 4334.7 | 315456.4 | 101726.8 |
| Gopher | 2412.5 | 124776.3 | 117650.4 |
| Gravitar | 3351.4 | 15680.7 | 7813.8 |
| Hero | 30826.4 | 39537.1 | 37223.1 |
| IceHockey | 0.9 | 79.3 | 79.2 |
| Jamesbond | 302.8 | 25354.0 | 25987.0 |
| Kangaroo | 3035.0 | 14130.7 | 13862.0 |
| Krull | 2665.5 | 218448.1 | 113224.8 |
| KungFuMaster | 22736.3 | 233413.3 | 239713.0 |
| MontezumaRevenge | 4753.3 | 2061.3 | 900.0 |
| MsPacman | 6951.6 | 42281.7 | 43115.4 |
| NameThisGame | 8049.0 | 58182.7 | 68836.2 |
| Phoenix | 7242.6 | 864020.0 | 915929.6 |
| Pitfall | 6463.7 | 0.0 | -0.1 |
| Pong | 14.6 | 21.0 | 21.0 |
| PrivateEye | 69571.3 | 5322.7 | 198.0 |
| Qbert | 13455.0 | 408850.0 | 546857.5 |
| Riverraid | 17118.0 | 45632.1 | 36906.4 |
| RoadRunner | 7845.0 | 599246.7 | 601220.0 |
| Robotank | 11.9 | 100.4 | 104.8 |
| Seaquest | 42054.7 | 999996.7 | 999990.2 |
| Skiing | -4336.9 | -30021.7 | -29973.6 |
| Solaris | 12326.7 | 3787.2 | 861.6 |
| SpaceInvaders | 1668.7 | 43223.4 | 62957.8 |
| StarGunner | 10250.0 | 717344.0 | 448480.0 |
| Surround | 6.5 | 9.9 | 9.8 |
| Tennis | -8.3 | -0.1 | 23.9 |
| TimePilot | 5229.2 | 445377.3 | 444527.0 |
| Tutankham | 167.6 | 395.3 | 376.5 |
| UpNDown | 11693.2 | 589226.9 | 549355.4 |
| Venture | 1187.5 | 1970.7 | 2005.5 |
| VideoPinball | 17667.9 | 999383.2 | 979432.1 |
| WizardOfWor | 4756.5 | 144362.7 | 136352.5 |
| YarsRevenge | 54576.9 | 995048.4 | 973319.0 |
| Zaxxon | 9173.3 | 224910.7 | 168816.5 |
Table 10: Final performance of SEED 8 TPU v3 cores, 610 actors (1 seed) compared to R2D2 (averaged over 3 seeds) and Human, using up to 30 random no-op steps at the beginning of each episode. SEED was evaluated by averaging returns over 200 episodes for each game after training on 40e9 environment frames.

![](images/0e930877d113d127127c422a58a4e515aa60d82b23a40f4a0433ae8537ba7f33.jpg)
Figure 7: Learning curves on 57 Atari 2600 games for SEED (8 TPU v3 cores, 610 actors, evaluated with 1 seed). Each point of each curve averages returns over 200 episodes. No curve smoothing was performed. Curves end at approximately 43 hours of training, corresponding to 40e9 environment frames.

# A.4 SEED LOCALLY AND ON CLOUD

SEED is open-sourced together with an example of running it both on a local machine and at scale using AI Platform, part of Google Cloud. We provide a public Docker image with low-level components implemented in C++ already pre-compiled to minimize the time needed to start SEED experiments.

The main prerequisite to running on Cloud is setting up a Cloud Project. The provided startup script uploads the image and runs training for you. For more details, please see github.com/google-research/seed_rl.

# A.5 EXPERIMENTS COST SPLIT
| Model | Algorithm | Actors cost | Learner cost | Total cost |
| --- | --- | --- | --- | --- |
| **Arcade Learning Environment** | | | | |
| Default | SEED | $10.8 | $8.5 | $19.3 |
| **DeepMind Lab** | | | | |
| Default | IMPALA | $77.0 | $13.4 | $90 |
| Medium | IMPALA | $103.6 | $24.4 | $128 |
| Large | IMPALA | $180.5 | $55.5 | $236 |
| Default | SEED | $20.1 | $8.2 | $28 |
| Medium | SEED | $18.6 | $16.4 | $35 |
| Large | SEED | $19.6 | $35 | $54 |
| **Google Research Football** | | | | |
| Default | IMPALA | $479 | $74 | $553 |
| Medium | IMPALA | $565.2 | $115.8 | $681 |
| Large | IMPALA | $746.1 | $153 | $899 |
| Default | SEED | $313 | $32 | $345 |
| Medium | SEED | $312 | $53 | $365 |
| Large | SEED | $295 | $74 | $369 |
Table 11: Cost of performing 1 billion frames for both IMPALA and SEED, split by component.

# A.6 COST COMPARISON ON DEEPMIND LAB USING NVIDIA P100 GPUs

In Section 4.4.1, we compared the cost of running SEED using two TPU v3 cores against IMPALA on a single Nvidia P100 GPU. In Table 12, we also compare the cost when both agents run on a single Nvidia P100 GPU on DeepMind Lab. Even though this is a sub-optimal setting for SEED, because it performs inference on the accelerator and therefore benefits disproportionately from more efficient accelerators such as TPUs, SEED compares favorably, being $1.76\times$ cheaper than IMPALA per frame.
| Architecture | Actors | CPUs | Envs. | Speed | Cost/1B | Cost ratio |
| --- | --- | --- | --- | --- | --- | --- |
| IMPALA | 176 | 176 | 176 | 30k | $90 | |
| SEED | 15 | 44 | 176 | 19k | $51 | 1.76 |
Table 12: Cost of performing 1 billion frames for both IMPALA and SEED when running on a single Nvidia P100 GPU on DeepMind Lab.

# A.7 INFERENCE LATENCY
| Model | IMPALA | SEED |
| --- | --- | --- |
| **DeepMind Lab** | | |
| Default | 17.97ms | 10.98ms |
| Medium | 25.86ms | 12.70ms |
| Large | 48.79ms | 14.99ms |
| **Google Research Football** | | |
| Default | 12.59ms | 6.50ms |
| Medium | 19.24ms | 5.90ms |
| Large | 34.20ms | 11.19ms |
| **Arcade Learning Environment** | | |
| Default | N/A | 7.2ms |
+ +Table 13: End-to-end inference latency of IMPALA and SEED for different environments and models. \ No newline at end of file diff --git a/seedrlscalableandefficientdeeprlwithacceleratedcentralinference/images.zip b/seedrlscalableandefficientdeeprlwithacceleratedcentralinference/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..2d57b79cbe8856e06772653af785e2cd80b882e2 --- /dev/null +++ b/seedrlscalableandefficientdeeprlwithacceleratedcentralinference/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a16813a3ab4a664d6ea14d6833a68453e1c376c0ffa9d9014349727f5bdbe156 +size 1247406 diff --git a/seedrlscalableandefficientdeeprlwithacceleratedcentralinference/layout.json b/seedrlscalableandefficientdeeprlwithacceleratedcentralinference/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..53ff0ba108a90251e88d5c1166c5123ad562c9d5 --- /dev/null +++ b/seedrlscalableandefficientdeeprlwithacceleratedcentralinference/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:299d5818a0e7461d486466f495a63f62e171e8ae7ddaa47edf425b18d66777ba +size 434678 diff --git a/selectionviaproxyefficientdataselectionfordeeplearning/eccd4622-048e-4075-a61e-4226eb6db810_content_list.json b/selectionviaproxyefficientdataselectionfordeeplearning/eccd4622-048e-4075-a61e-4226eb6db810_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..83412d7e611131183e63784b26b6c48c086fce34 --- /dev/null +++ b/selectionviaproxyefficientdataselectionfordeeplearning/eccd4622-048e-4075-a61e-4226eb6db810_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0e76ac06f71c0c22321314abacad843226e9b996ee6eecf4d887469080702418 +size 135165 diff --git a/selectionviaproxyefficientdataselectionfordeeplearning/eccd4622-048e-4075-a61e-4226eb6db810_model.json 
b/selectionviaproxyefficientdataselectionfordeeplearning/eccd4622-048e-4075-a61e-4226eb6db810_model.json new file mode 100644 index 0000000000000000000000000000000000000000..40406b1b9949fa240796a0ec3b6d2a2e7b7528df --- /dev/null +++ b/selectionviaproxyefficientdataselectionfordeeplearning/eccd4622-048e-4075-a61e-4226eb6db810_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9cd1c7a998ae4f0aaf6774907c3249e56a5e523ed0f2ffd1d86f36f471170969 +size 165057 diff --git a/selectionviaproxyefficientdataselectionfordeeplearning/eccd4622-048e-4075-a61e-4226eb6db810_origin.pdf b/selectionviaproxyefficientdataselectionfordeeplearning/eccd4622-048e-4075-a61e-4226eb6db810_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..034070cf9fb7604ab8d5f322122eec1526e9565a --- /dev/null +++ b/selectionviaproxyefficientdataselectionfordeeplearning/eccd4622-048e-4075-a61e-4226eb6db810_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1417170ff95595628724d75e0b5c55877bdad427bdb61883975fb9647782c33b +size 17258364 diff --git a/selectionviaproxyefficientdataselectionfordeeplearning/full.md b/selectionviaproxyefficientdataselectionfordeeplearning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..2d3a5e6dd5a48d15c55cb053e7e1a8f9548f18cc --- /dev/null +++ b/selectionviaproxyefficientdataselectionfordeeplearning/full.md @@ -0,0 +1,420 @@ +# SELECTION VIA PROXY: EFFICIENT DATA SELECTION FOR DEEP LEARNING + +Cody Coleman,\* Christopher Yeh, Stephen Mussmann, Baharan Mirzasoleiman, Peter Bailis, Percy Liang, Jure Leskovec, Matei Zaharia +Stanford University + +# ABSTRACT + +Data selection methods, such as active learning and core-set selection, are useful tools for machine learning on large datasets. However, they can be prohibitively expensive to apply in deep learning because they depend on feature representations that need to be learned. 
In this work, we show that we can greatly improve the computational efficiency by using a small proxy model to perform data selection (e.g., selecting data points to label for active learning). By removing hidden layers from the target model, using smaller architectures, and training for fewer epochs, we create proxies that are an order of magnitude faster to train. Although these small proxy models have higher error rates, we find that they empirically provide useful signals for data selection. We evaluate this "selection via proxy" (SVP) approach on several data selection tasks across five datasets: CIFAR10, CIFAR100, ImageNet, Amazon Review Polarity, and Amazon Review Full. For active learning, applying SVP can give an order of magnitude improvement in data selection runtime (i.e., the time it takes to repeatedly train and select points) without significantly increasing the final error (often within $0.1\%$ ). For core-set selection on CIFAR10, proxies that are over $10\times$ faster to train than their larger, more accurate targets can remove up to $50\%$ of the data without harming the final accuracy of the target, leading to a $1.6\times$ end-to-end training time improvement. + +# 1 INTRODUCTION + +Data selection methods, such as active learning and core-set selection, improve the data efficiency of machine learning by identifying the most informative training examples. To quantify informativeness, these methods depend on semantically meaningful features or a trained model to calculate uncertainty. Concretely, active learning selects points to label from a large pool of unlabeled data by repeatedly training a model on a small pool of labeled data and selecting additional examples to label based on the model's uncertainty (e.g., the entropy of predicted class probabilities) or other heuristics (Lewis & Gale, 1994; Rosenberg et al., 2005; Settles, 2011; 2012). 
Conversely, core-set selection techniques start with a large labeled or unlabeled dataset and aim to find a small subset that accurately approximates the full dataset by selecting representative examples (Har-Peled & Kushal, 2007; Tsang et al., 2005; Huggins et al., 2016; Campbell & Broderick, 2017; 2018; Sener & Savarese, 2018). + +Unfortunately, classical data selection methods are often prohibitively expensive to apply in deep learning (Shen et al., 2017; Sener & Savarese, 2018; Kirsch et al., 2019). Deep learning models learn complex internal semantic representations (hidden layers) from raw inputs (e.g., pixels or characters) that enable them to achieve state-of-the-art performance but result in substantial training times. Many core-set selection and active learning techniques require some feature representation before they can accurately identify informative points either to take diversity into account or as part of a trained model to quantify uncertainty. As a result, new deep active learning methods request labels in large batches to avoid retraining the model too many times (Shen et al., 2017; Sener & Savarese, 2018; Kirsch et al., 2019). However, batch active learning still requires training a full deep model for every batch, which is costly for large models (He et al., 2016b; Jozefowicz et al., 2016; Vaswani et al., 2017). Similarly, core-set selection applications mitigate the training time of deep learning models by using + +bespoke combinations of hand-engineered features and simple models (e.g., hidden Markov models) pretrained on auxiliary tasks (Wei et al., 2013; 2014; Tschiatschek et al., 2014; Ni et al., 2015). + +In this paper, we propose selection via proxy (SVP) as a way to make existing data selection methods more computationally efficient for deep learning. 
SVP uses the feature representation from a separate, less computationally intensive proxy model in place of the representation from the much larger and more accurate target model we aim to train. SVP builds on the idea of heterogeneous uncertainty sampling from Lewis & Catlett (1994), which showed that an inexpensive classifier (e.g., naive Bayes) can select points to label for a much more computationally expensive classifier (e.g., decision tree). In our work, we show that small deep learning models can similarly serve as an inexpensive proxy for data selection in deep learning, significantly accelerating both active learning and core-set selection across a range of datasets and selection methods. To create these cheap proxy models, we can scale down deep learning models by removing layers, using smaller model architectures, and training them for fewer epochs. While these scaled-down models achieve significantly lower accuracy than larger models, we surprisingly find that they still provide useful representations to rank and select points. Specifically, we observe high Spearman's and Pearson's correlations between the rankings from small proxy models and the larger, more accurate target models on metrics including uncertainty (Settles, 2012), forgetting events (Toneva et al., 2019), and submodular algorithms such as greedy k-centers (Wolf, 2011). Because these proxy models are quick to train (often $10 \times$ faster), we can identify which points to select nearly as well as the larger target model but significantly faster. + +We empirically evaluated SVP for active learning and core-set selection on five datasets: CIFAR10, CIFAR100 (Krizhevsky & Hinton, 2009), ImageNet (Russakovsky et al., 2015), Amazon Review Polarity, and Amazon Review Full (Zhang et al., 2015). 
For active learning, we considered both least confidence uncertainty sampling (Settles, 2012; Shen et al., 2017; Gal et al., 2017) and the greedy k-centers approach from Sener & Savarese (2018) with a variety of proxies. Across all datasets, we found that SVP matches the accuracy of the traditional approach of using the same large model for both selecting points and the final prediction task. Depending on the proxy, SVP yielded up to a $7 \times$ speed-up on CIFAR10 and CIFAR100, $41.9 \times$ speed-up on Amazon Review Polarity and Full, and $2.9 \times$ speed-up on ImageNet in data selection runtime (i.e., the time it takes to repeatedly train and select points). For core-set selection, we tried three methods to identify a subset of points: max entropy uncertainty sampling (Settles, 2012), greedy k-centers as a submodular approach (Wolf, 2011), and the recent approach of forgetting events (Toneva et al., 2019). For each method, we found that smaller proxy models have high Spearman's rank-order correlations with models that are $10 \times$ larger and performed as well as these large models at identifying subsets of points to train on that yield high test accuracy. On CIFAR10, SVP applied to forgetting events removed $50\%$ of the data without impacting the accuracy of ResNet164 with pre-activation (He et al., 2016b), using a $10 \times$ faster model than ResNet164 to make the selection. This substitution yielded an end-to-end training time improvement of about $1.6 \times$ for ResNet164 (including the time to train and use the proxy). Taken together, these results demonstrate that SVP is a promising, yet simple approach to make data selection methods computationally feasible for deep learning. While we focus on active learning and core-set selection, SVP is widely applicable to methods that depend on learned representations. + +# 2 METHODS + +In this section, we describe SVP and show how it can be incorporated into active learning and core-set selection. 
Figure 1 shows an overview of SVP: in active learning, we retrain a proxy model $A_{k}^{P}$ in place of the target model $A_{k}^{T}$ after each batch is selected, and in core-set selection, we train the proxy $A_{[n]}^{P}$ rather than the target $A_{[n]}^{T}$ over all the data to learn a feature representation and select points.

# 2.1 ACTIVE LEARNING

Pool-based active learning starts with a large pool of unlabeled data $U = \{\mathbf{x}_i\}_{i\in [n]}$ where $[n] = \{1,\ldots ,n\}$. Each example is from the space $\mathcal{X}$ with an unknown label from the label space $\mathcal{Y}$ and is sampled i.i.d. over the space $\mathcal{Z} = \mathcal{X}\times \mathcal{Y}$ as $(\mathbf{x}_i,y_i)\sim p_{\mathcal{Z}}$. Initially, methods label a small pool of points $s^0 = \{s_j^0\in [n]\}_{j\in [m]}$ chosen uniformly at random. Given $U$, a loss function $\ell$, and the labels $\{y_{s_j^0}\}_{j\in [m]}$ for the initial random subset, the goal of active learning is to select up to a budget of $b$ points $s = s^0\cup \{s_j\in [n]\setminus s^0\}_{j\in [b - m]}$ to label that produces a model $A_{s}$ with low error.
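This pool-based loop can be sketched generically; `train`, `score`, and `label` below are hypothetical stand-ins for training the model on the labeled set, computing an informativeness score, and querying an annotator:

```python
import numpy as np

def batch_active_learning(pool_size, initial_idx, budget, rounds, train, score, label):
    """Generic pool-based batch active learning: repeatedly retrain from
    scratch, score the unlabeled pool, and label the top-scoring points."""
    labeled = {i: label(i) for i in initial_idx}
    per_round = (budget - len(labeled)) // rounds
    for _ in range(rounds):
        model = train(labeled)  # retrain from scratch to avoid correlated selections
        unlabeled = [i for i in range(pool_size) if i not in labeled]
        scores = np.array([score(model, i) for i in unlabeled])
        for j in np.argsort(-scores)[:per_round]:  # most informative first
            idx = unlabeled[int(j)]
            labeled[idx] = label(idx)
    return labeled
```

The `score` function is where methods differ: uncertainty sampling scores by the model's confidence, while representativeness methods like greedy k-centers select jointly over the whole batch instead of pointwise.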
In both cases, we found the proxy and target model have high rank-order correlation, leading to similar selections and downstream results. + +Baseline. In this paper, we apply SVP to least confidence uncertainty sampling (Settles, 2012; Shen et al., 2017; Gal et al., 2017) and the recent greedy k-centers approach from Sener & Savarese (2018). Like recent work for deep active learning (Shen et al., 2017; Sener & Savarese, 2018; Kirsch et al., 2019), we consider a batch setting with $K$ rounds where we select $\frac{b}{K}$ points in every round. Following Gal et al. (2017); Sener & Savarese (2018); Kirsch et al. (2019), we reinitialize the target model and retrain on all of the labeled data from the previous $k$ rounds to avoid any correlation between selections (Frankle & Carbin, 2018; Kirsch et al., 2019). We denote this trained model as $A_{s^0\cup \dots \cup s^k}^T$ or just $A_{k}^{T}$ for simplicity. Then using $A_{k}^{T}$ , we either calculate the model's confidence as: + +$$ +f _ {\text {c o n f i d e n c e}} (\mathbf {x}; A _ {k} ^ {T}) = 1 - \max _ {\hat {y}} P (\hat {y} | \mathbf {x}; A _ {k} ^ {T}) +$$ + +and select the examples with the lowest confidence or extract a feature representation from the model's final hidden layer and compute the distance between examples (i.e., $\Delta (\mathbf{x}_i,\mathbf{x}_j;A_k^T)$ ) to select points according to the greedy k-centers method from Wolf (2011); Sener & Savarese (2018) (Algorithm 1). The same model is trained on the final $b$ labeled points to yield the final model, $A_K^T$ , which is then tested on a held-out set to evaluate error and quantify the quality of the selected data. + +Although other selection approaches exist, least confidence uncertainty sampling and greedy k-centers cover the spectrum of uncertainty-based and representativeness-based approaches for deep active learning. 
Other uncertainty metrics such as entropy or margin were highly correlated with confidence when using the same trained model (i.e., above a 0.96 Spearman's correlation in our experiments on CIFAR). Query-by-committee (Seung et al., 1992) can be prohibitively expensive in deep learning, where training a single model is already costly. BALD (Houlsby et al., 2011) has seen success in deep learning (Gal et al., 2017; Shen et al., 2017) but is restricted to Bayesian neural networks or networks with dropout (Srivastava et al., 2014) as an approximation (Gal & Ghahramani, 2016).

Algorithm 1 GREEDY K-CENTERS (Wolf, 2011; Sener & Savarese, 2018)

Input: data $\mathbf{x}_i$, existing pool $s^0$, trained model $A_0^T$, and a budget $b$

1: Initialize $s = s^0$
2: repeat
3: $u = \arg \max_{i\in [n]\backslash s}\min_{j\in s}\Delta \left(\mathbf{x}_i,\mathbf{x}_j;A_0^T\right)$
4: $s = s\cup \{u\}$
5: until $|s| = b + |s^0|$
6: return $s \setminus s^0$

```txt
Algorithm 2 FORGETTING EVENTS (Toneva et al., 2019)
1: Initialize prev_acc_i = 0, i ∈ [n]
2: Initialize forgetting_events_i = 0, i ∈ [n]
3: while training is not done do
4:   Sample mini-batch B from L
5:   for example i ∈ B do
6:     Compute acc_i
7:     if prev_acc_i > acc_i then
8:       forgetting_events_i += 1
9:     prev_acc_i = acc_i
10:  Gradient update classifier on B
11: return forgetting_events
```

# 2.2 CORE-SET SELECTION

Core-set selection can be broadly defined as techniques that find a subset of data points that maintains a similar level of quality (e.g., generalization error of a trained model or minimum enclosing ball) as the full dataset. Specifically, we start with a labeled dataset $L = \{\mathbf{x}_i, y_i\}_{i \in [n]}$ sampled i.i.d.
from $\mathcal{Z}$ with $p_{\mathcal{Z}}$ and want to find a subset of $m \leq n$ points $s = \{s_j \in [n]\}_{j \in [m]}$ that achieves comparable quality to the full dataset: $\min_{s:|s| = m} E_{\mathbf{x},y \sim p_{\mathcal{Z}}}[\ell(\mathbf{x},y;A_s)] - E_{\mathbf{x},y \sim p_{\mathcal{Z}}}[\ell(\mathbf{x},y;A_{[n]})]$ + +Baseline. To find $s$ for a given budget $m$ , we implement three core-set selection techniques: greedy k-centers (Wolf, 2011; Sener & Savarese, 2018), forgetting events (Toneva et al., 2019), and max entropy uncertainty sampling (Lewis & Gale, 1994; Settles, 2012). Greedy k-centers is described above and in Algorithm 1. Forgetting events are defined as the number of times an example is incorrectly classified after having been correctly classified earlier during training a model, as described in Algorithm 2. To select points, we follow the same procedure as Toneva et al. (2019): we keep the points with the $m$ highest number of forgetting events. Points that are never correctly classified are treated as having an infinite number of forgetting events. Similarly, we rank examples based on the entropy from a trained target $A_{[n]}^T$ as: + +$$ +f _ {\text {e n t r o p y}} (\mathbf {x}; A _ {[ n ]} ^ {T}) = - \sum_ {\hat {y}} P (\hat {y} | \mathbf {x}; A _ {[ n ]} ^ {T}) \log P (\hat {y} | \mathbf {x}; A _ {[ n ]} ^ {T}) +$$ + +and keep the $m$ examples with the highest entropy. To evaluate core-set quality, we compare the performance of training the large target model on the selected subset $A_{s}^{T}$ to training the target model on the entire dataset $A_{[n]}^{T}$ by measuring error on a held-out test set. + +# 2.3 APPLYING SELECTION VIA PROXY + +In general, SVP can be applied by replacing the models used to compute data selection metrics such as uncertainty with proxy models. 
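For instance, the greedy k-centers step (Algorithm 1) only needs pairwise distances between feature vectors, which the proxy's final hidden layer can supply in place of the target's. A minimal NumPy sketch, assuming Euclidean distance as $\Delta$:

```python
import numpy as np

def greedy_k_centers(features, initial_pool, budget):
    """Greedily add the point farthest from the current centers (Algorithm 1 sketch)."""
    centers = list(initial_pool)
    # minimum distance from every point to the current set of centers
    dists = np.min(
        np.linalg.norm(features[:, None, :] - features[centers][None, :, :], axis=-1),
        axis=1,
    )
    chosen = []
    for _ in range(budget):
        u = int(np.argmax(dists))  # farthest point from all centers so far
        chosen.append(u)
        dists = np.minimum(dists, np.linalg.norm(features - features[u], axis=1))
    return chosen
```

Updating `dists` incrementally keeps each step linear in the pool size, though the overall loop is still quadratic in the budget times the pool, which is why this method was too slow to run on ImageNet.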
In this paper, we applied SVP to the active learning and core-set selection methods described in Sections 2.1 and 2.2 as follows: + +- For active learning, we replaced the model trained at each batch $(A_k^T)$ with a proxy $(A_k^P)$ , but then trained the same final model $A_K^T$ once the budget $b$ was reached. +- For core-set selection, we used a proxy $\tilde{A}_{[n]}^{P}$ instead of $A_{[n]}^{T}$ to compute metrics and select $s$ . + +We explored two main methods to create our proxy models: + +Scaling down. For deep models with many layers, reducing the dimension or the number of hidden layers is an easy way to trade-off accuracy to reduce training time. For example, deep ResNet models come in a variety of depths (He et al., 2016b;a) and widths (Zagoruyko & Komodakis, 2016) that represent many points on the accuracy and training time curve. As shown in Figure 4a in the Appendix, a ResNet20 model achieves a top-1 error of $7.6\%$ on CIFAR10 in 26 minutes, while a larger ResNet164 model takes 4 hours and reduces error by $2.5\%$ . Similar results have been shown for many other tasks, including neural machine translation (Vaswani et al., 2017), text classification (Conneau et al., 2016), and recommendation (He et al., 2017). Looking across architectures gives even more options to reduce computational complexity. We exploit the limitless model architectures in deep learning to trade-off between accuracy and complexity to scale down to a proxy that can be trained quickly but still provides a good approximation of the target's decision boundary. + +Training for fewer epochs. Similarly, a significant amount of training is spent on a relatively small reduction in error. While training ResNet20, almost half of the training time (i.e., 12 minutes out of 26 minutes) is spent on a $1.4\%$ improvement in test error, as shown in Figure 4a in the Appendix. 
Based on this observation, we also explored training proxy models for a smaller number of epochs to get good approximations of the decision boundary of the target model even faster. + +# 3 RESULTS + +We applied SVP to data selection methods from active learning and core-set selection on five datasets. After a brief description of the datasets and models in Section 3.1, Section 3.2 evaluates SVP's impact on active learning and shows that across labeling budgets SVP achieved similar or higher accuracy + +and up to a $41.9\times$ improvement in data selection runtime (i.e., the time it takes to repeatedly train and select points). Next, we applied SVP to the core-set selection problem (Section 3.3). For all selection methods, the target model performed nearly as well as or better with SVP than the oracle that trained the target model on all of the data before selecting examples. On CIFAR10, a small proxy model trained for 50 epochs instead of 181 epochs took only 7 minutes compared to the 4 hours for training the target model for all 181 epochs, making SVP feasible for end-to-end training time speed-ups. Finally, Section 3.4 illustrates why proxy models performed so well by evaluating how varying models and methods rank examples. + +# 3.1 EXPERIMENTAL SETUP + +Datasets. We focused on classification as a well-studied task in the active learning literature (see Section A.1 for more detail). Our experiments included three image classification datasets: CIFAR10, CIFAR100 (Krizhevsky & Hinton, 2009), and ImageNet (Russakovsky et al., 2015); and two text classification datasets: Amazon Review Polarity and Full (Zhang & LeCun, 2015; Zhang et al., 2015). CIFAR10 is a coarse-grained classification task over 10 classes, and CIFAR100 is a fine-grained task with 100 classes. Both datasets contain 50,000 images for training and 10,000 images for testing. ImageNet has 1.28 million training images and 50,000 validation images that belong to 1 of 1,000 classes. 
Amazon Review Polarity has 3.6 million reviews split evenly between positive and negative ratings with an additional 400,000 reviews for testing. Amazon Review Full has 3 million reviews split evenly between the 5 stars with an additional 650,000 reviews for testing. + +Models. For CIFAR10 and CIFAR100, we used ResNet164 with pre-activation from He et al. (2016b) as our large target model. The smaller proxy models are also ResNet architectures with pre-activation, but they use pairs of $3 \times 3$ convolutional layers as their residual unit rather than bottlenecks. For ImageNet, we used the original ResNet architecture from He et al. (2016a) implemented in PyTorch$^{1}$ (Paszke et al., 2017) with ResNet50 as the target and ResNet18 as the proxy. For Amazon Review Polarity and Amazon Review Full, we used VDCNN (Conneau et al., 2017) and fastText (Joulin et al., 2016) with VDCNN29 as the target and fastText and VDCNN9 as proxies. In general, we followed the same training procedure as the original papers (more details in Section A.2). + +# 3.2 ACTIVE LEARNING + +We explored the impact of SVP on two active learning techniques: least confidence uncertainty sampling and the greedy k-centers approach from Sener & Savarese (2018). Starting with an initial random subset of $2\%$ of the data, we selected $8\%$ of the remaining unlabeled data for the first round and $10\%$ for subsequent rounds until the labeled data reached the budget $b$, retraining the models from scratch between rounds as described in Section 2.1. Across datasets, SVP sped up data selection without significantly impacting the final predictive performance of the target. + +CIFAR10 and CIFAR100.
For least confidence uncertainty sampling and greedy k-centers, SVP sped up data selection by up to $7 \times$ and $3.8 \times$ respectively without impacting data efficiency (see Tables 1 and 3) despite the proxy achieving substantially higher top-1 error than the target ResNet164 model (see Figure 6 in the Appendix). The speed-ups for least confidence were a direct reflection of the difference in training time between the proxy and the target models. As shown in Figures 4 and 5 in the Appendix, ResNet20 was about $8 \times$ faster to train than ResNet164, taking 30 minutes to train rather than 4 hours. Larger budgets required more rounds of selection and, in turn, more training, which led to larger speed-ups as training became a more significant fraction of the total time. Training for fewer epochs provided a significant error reduction compared to random sampling but was not as good as training for the full schedule (see Table 4 in the Appendix). For greedy k-centers, the speed-ups increased more slowly because executing the selection algorithm added more overhead. + +ImageNet. For least confidence uncertainty sampling, SVP sped up data selection by up to $1.6 \times$ (Table 1) despite ResNet18's higher error compared to ResNet50 (Figure 6g in the Appendix). Training for fewer epochs increased the speed-up to $2.9 \times$ without increasing the error rate of ResNet50 (Table 4). Greedy k-centers was too slow on ImageNet due to the quadratic complexity of Algorithm 1. + +Table 1: SVP performance on active learning. Average (± 1 std.) data selection speed-ups from 3 runs of active learning using least confidence uncertainty sampling with varying proxies and labeling budgets on four datasets. Bold speed-ups indicate settings that either achieve lower error or are within 1 std. of the mean top-1 error for the baseline approach of using the same model for selection and the final predictions.
Across datasets, SVP sped up selection without significantly increasing the error of the final target. Additional results and details are in Table 3. + +
Data selection speed-up at each labeling budget $(b/n)$:

| Dataset | Selection Model | 10% | 20% | 30% | 40% | 50% |
| --- | --- | --- | --- | --- | --- | --- |
| CIFAR10 | ResNet164 (Baseline) | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× |
| CIFAR10 | ResNet110 | 1.8× | 1.9× | 1.9× | 1.8× | 1.8× |
| CIFAR10 | ResNet56 | 2.6× | 2.9× | 3.0× | 3.1× | 3.1× |
| CIFAR10 | ResNet20 | 3.8× | 5.8× | 6.7× | 7.0× | 7.2× |
| CIFAR100 | ResNet164 (Baseline) | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× |
| CIFAR100 | ResNet110 | 1.5× | 1.6× | 1.6× | 1.6× | 1.6× |
| CIFAR100 | ResNet56 | 2.4× | 2.7× | 3.0× | 2.9× | 3.1× |
| CIFAR100 | ResNet20 | 4.0× | 5.8× | 6.6× | 7.0× | 7.2× |
| ImageNet | ResNet50 (Baseline) | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× |
| ImageNet | ResNet18 | 1.2× | 1.3× | 1.4× | 1.5× | 1.6× |
| Amazon Review Polarity | VDCNN29 (Baseline) | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× |
| Amazon Review Polarity | VDCNN9 | 1.9× | 1.8× | 1.8× | 1.8× | 1.8× |
| Amazon Review Polarity | fastText | 10.6× | 20.6× | 32.2× | 41.9× | 51.3× |
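As a concrete illustration of the selection criterion behind Table 1, least confidence uncertainty sampling ranks unlabeled examples by the probability of their most likely class and labels the least confident ones first. The sketch below is illustrative, not the paper's implementation; `probs` is a stand-in for the proxy's softmax outputs on the unlabeled pool:

```python
def least_confidence_select(probs, k):
    """Pick the k examples whose top predicted class has the lowest
    probability, i.e. where the (proxy) model is least confident."""
    # confidence of each example = probability of its most likely class
    confidence = [max(p) for p in probs]
    # indices sorted from least to most confident
    order = sorted(range(len(probs)), key=lambda i: confidence[i])
    return order[:k]

# toy softmax outputs for 4 unlabeled examples
probs = [
    [0.98, 0.01, 0.01],  # very confident
    [0.40, 0.35, 0.25],  # least confident
    [0.70, 0.20, 0.10],
    [0.55, 0.30, 0.15],
]
print(least_confidence_select(probs, 2))  # -> [1, 3]
```

Entropy-based sampling differs only in the scoring step: rank by the entropy of each output distribution in descending order instead of by the maximum probability.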
+ +Amazon Review Polarity and Amazon Review Full. On Amazon Review Polarity, SVP with a fastText proxy for VDCNN29 led to a relative error reduction of up to $14\%$ over random sampling for large budgets (Table 3), while being up to $41.9 \times$ faster at data selection than the baseline approach (Table 1). Despite fastText's architectural simplicity and higher error compared to VDCNN29 (Figure 6e), the calculated confidence signaled which examples would be the most informative. For all budgets, VDCNN9 was within $0.1\%$ top-1 error of VDCNN29, giving a consistent $1.8 \times$ speed-up. On Amazon Review Full, neither the baseline least confidence uncertainty sampling approach nor the application of SVP outperformed random sampling (see Table 3 in the Appendix), so the data selection speed-ups were uninteresting even though they were similar to Amazon Review Polarity. For both datasets, greedy k-centers was too slow as mentioned above in the ImageNet experiments. + +# 3.3 CORE-SET SELECTION + +CIFAR10 and CIFAR100. For all methods on both CIFAR10 and CIFAR100, SVP proxy models performed as well as or better than an oracle where ResNet164 itself is used as the core-set selection model, as shown in Figure 2 (and Figure 7 in the Appendix). Using forgetting events on CIFAR10, SVP with ResNet20 as the proxy removed $50\%$ of the data without a significant increase in error from ResNet164. The entire process of training ResNet20 on all the data, selecting which examples to keep, and training ResNet164 on the subset only took 2 hours and 20 minutes (see Table 6 in the Appendix), which was a $1.6 \times$ speed-up compared to training ResNet164 over all of the data. If we stopped training ResNet56 early and removed $50\%$ of the data based on forgetting events from the first 50 epochs, SVP achieved an end-to-end training time speed-up of $1.8 \times$ with only a slightly higher top-1 error from ResNet164 ($5.4\%$ vs. $5.1\%$) as shown in Table 7 in the Appendix.
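The forgetting-events criterion used here (Toneva et al., 2019) counts, for each example, the number of transitions from correctly to incorrectly classified over the course of training; "unforgettable" examples with few or no such events are removed first when pruning. A minimal sketch, assuming per-epoch correctness has been recorded during proxy training (the hypothetical `correct_by_epoch` stands in for that record):

```python
def count_forgetting_events(correct_by_epoch):
    """correct_by_epoch[t][i] is True if example i was classified
    correctly at epoch t. A forgetting event for example i is a
    transition from correct at epoch t-1 to incorrect at epoch t."""
    n = len(correct_by_epoch[0])
    events = [0] * n
    for prev, curr in zip(correct_by_epoch, correct_by_epoch[1:]):
        for i in range(n):
            if prev[i] and not curr[i]:
                events[i] += 1
    return events

# toy correctness history for 3 examples over 4 epochs
history = [
    [True,  False, True],
    [False, False, True],   # example 0 forgotten
    [True,  True,  True],
    [False, True,  True],   # example 0 forgotten again
]
print(count_forgetting_events(history))  # -> [2, 0, 0]
```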
In general, training the proxy for fewer epochs also maintained the accuracy of the target model on CIFAR10 because the rankings quickly converged (Figures 11a and 12a in the Appendix). On CIFAR100, partial training did not work as well for proxies at large subset sizes because the rankings took longer to stabilize and were less correlated (Figures 11b and 12b in the Appendix). On small $30\%$ subsets with forgetting events, partial training improved accuracy on CIFAR100. + +ImageNet. Neither the baseline approach nor SVP was able to remove a significant percentage of the data without increasing the final error of ResNet50, as shown in Table 5 in the Appendix. However, the selected subsets from both ResNet18 and ResNet50 outperformed random sampling with up to a $1\%$ drop in top-1 error using forgetting events. Note that due to the quadratic computational complexity of Algorithm 1, we were unable to run greedy k-centers in a reasonable amount of time. + +![](images/7b79db81c562b71af39ceecdfecfb41d37b5eba255520f3a1ad7f2659febc75c.jpg) +(a) CIFAR10 forgetting events + +![](images/6a26c8b6cce545b350cb2f9b2d14be799e47626b0c080c3070b94356455b1812.jpg) +(b) CIFAR10 entropy + +![](images/caf00af45c35e917c3ce4cbed141b68152090ed36ee41c4c550524ff2b7f607d.jpg) +(c) CIFAR10 greedy k-centers + +Figure 2: SVP performance on core-set selection. Average $(\pm 1$ std.) top-1 error of ResNet164 over 5 runs of core-set selection with different selection methods, proxies, and subset sizes on CIFAR10. We found subsets using forgetting events (left), entropy (middle), and greedy k-centers (right) from a proxy model trained over the entire dataset. Across datasets and selection methods, SVP performed as well as an oracle baseline but significantly faster (speed-ups in parentheses). + +![](images/e056dfc2526694b0346dfcf7af31a958b90783f34a947cd6c48f595a672e02ee.jpg) +(a) CIFAR10 forgetting events +Figure 3: Comparing selection across model sizes and methods on CIFAR10.
Average Spearman's correlation between different runs of ResNet (R) models at varying depths. We computed rankings based on forgetting events (left), entropy (middle), and greedy k-centers (right). We saw correlations across model architectures (off-diagonal) as high as those between different runs of the same architecture (on-diagonal), suggesting that small models are good proxies for data selection. + +![](images/ce5942d18f8ecc22f4ec2c3e11a847a669778c73de2ebc3bc905ade3dc44ea8d.jpg) +(b) CIFAR10 entropy + +![](images/9e4a72ba2468c6dd74467d3b78f6cae927324107759cd0261af70fee1495a411.jpg) +(c) CIFAR10 greedy k-centers + +Amazon Review Polarity and Amazon Review Full. On Amazon Review Polarity, we were able to remove $20\%$ of the dataset with only a $0.1\%$ increase in VDCNN29's top-1 error using fastText as the proxy (see Table 5). In comparison to VDCNN29, which took 16 hours and 40 minutes to train over the entire dataset on a Titan V GPU, fastText was two orders of magnitude faster, taking less than 10 minutes on a CPU to train over the same data and compute output probabilities. This difference allowed us to train VDCNN29 to nearly the same error in 13 and a half hours. However, on Amazon Review Full, both the baseline approach and SVP failed to outperform random sampling. Similar to ImageNet, we were unable to run greedy k-centers in a reasonable amount of time, and additionally, Facebook's fastText implementation$^{2}$ did not allow us to compute forgetting events. + +# 3.4 RANKING CORRELATION BETWEEN MODELS + +Models with fewer layers. Figure 3 (and Figure 9 in the Appendix) shows the Spearman's rank-order correlation between ResNets of varying depth for three selection methods on CIFAR10 (and CIFAR100). For greedy k-centers, we started with 1,000 randomly selected points and ranked the remaining points based on the order they were added to set $s$ in Algorithm 1.
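The Algorithm 1 referenced throughout is the standard greedy 2-approximation for the k-centers problem: repeatedly add the point farthest from the current selection. The sketch below uses toy 2-D points for clarity; in SVP the distances are computed over the proxy's learned feature embeddings, and the repeated distance updates over the whole pool are what make the method quadratic and infeasible on ImageNet and Amazon:

```python
def greedy_k_centers(features, initial, k):
    """Greedy 2-approximation for k-centers: starting from the indices
    in `initial`, repeatedly add the point whose distance to the
    nearest already-selected point is largest."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    selected = list(initial)
    # distance from every point to its nearest selected point
    min_dist = [min(dist(f, features[j]) for j in selected) for f in features]
    while len(selected) < k:
        far = max(range(len(features)), key=lambda i: min_dist[i])
        selected.append(far)
        # newly added center may be closer to some points than before
        for i, f in enumerate(features):
            min_dist[i] = min(min_dist[i], dist(f, features[far]))
    return selected

# toy 2-D embeddings; start from point 0 and grow to 3 centers
feats = [(0.0, 0.0), (10.0, 0.0), (0.1, 0.1), (5.0, 5.0)]
print(greedy_k_centers(feats, [0], 3))  # -> [0, 1, 3]
```

Ranking points by the order in which they are appended to `selected` gives the ordering used for the correlation analysis above.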
Across models, there was a positive correlation similar to the correlation between runs of the same model. For forgetting events and entropy, we ranked points in descending order based on the number of forgetting events and the entropy of the output predictions from the trained model, respectively. Both metrics had comparable positive correlations between different models and different runs of the same model. We also looked at the Pearson correlation coefficient for the number of forgetting events and entropy in Figure 15 in the Appendix and found a similar positive correlation. The consistent positive correlation between varying depths illustrates why small models are good proxies for larger models in data selection. + +Models with different architectures. We further investigated different model architectures by calculating the Spearman's correlation between pretrained ImageNet models and found that correlations were high across a wide range of models (Figure 8 in the Appendix). For example, MobileNet V2's (Sandler et al., 2018) entropy-based rankings were highly correlated to ResNet50 (on par with ResNet18), even though the model had far fewer parameters (3.5M vs. 25.6M). In concert with our fastText and VDCNN results, the high correlations between different model architectures suggest that SVP might be widely applicable. While there are likely limits to how different architectures can be, there is a wide range of trade-offs between accuracy and computational complexity, even within a narrow spectrum of models. + +# 4 RELATED WORK + +Active learning. There are examples in the active learning literature that address the computational efficiency of active learning methods by using one model to select points for a different, more expensive model. For instance, Lewis & Catlett (1994) proposed heterogeneous uncertainty sampling and used a Naïve Bayes classifier to select points to label for a more expensive decision tree target model. Tomanek et al. 
(2007) used a committee-based active learning algorithm for an NLP task and noted that the set of selected points is "reusable" across different models (maximum entropy, conditional random field, naive Bayes). In our work, we showed that this can be generalized to deep learning by either using smaller models or fewer training epochs, where it can significantly reduce the running time of uncertainty-based (Settles, 2012; Shen et al., 2017; Gal et al., 2017) and recent representativeness-based (Sener & Savarese, 2018) methods. + +Core-set selection. Core-set selection attempts to find a representative subset of points to speed up learning or clustering, such as for $k$-means and $k$-medians (Har-Peled & Kushal, 2007), SVM (Tsang et al., 2005), Bayesian logistic regression (Huggins et al., 2016), and Bayesian inference (Campbell & Broderick, 2017; 2018). However, these examples generally require ready-to-use features as input, and do not directly apply to deep neural networks unless a feature representation is first learned, which usually requires training the full target model itself. There is also a body of work on data summarization based on submodular maximization (Wei et al., 2013; 2014; Tschiatschek et al., 2014; Ni et al., 2015), but these techniques depend on a combination of hand-engineered features and simple models (e.g., hidden Markov models and Gaussian mixture models) pretrained on auxiliary tasks. In comparison, our work demonstrated that we can use the feature representations of smaller, faster-to-train proxy models as an effective way to select core-sets for deep learning tasks. + +Recently, Toneva et al. (2019) showed that a large number of "unforgettable" examples that are rarely incorrectly classified once learned (i.e., $30\%$ on CIFAR10) could be omitted without impacting generalization, which can be viewed as a core-set selection method.
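The cross-model comparisons in Section 3.4 reduce to computing Spearman's rank correlation between two models' per-example scores (e.g., forgetting-event counts). A minimal sketch without tie handling, shown here only to make the metric concrete; real analyses would typically use `scipy.stats.spearmanr`:

```python
def spearman(xs, ys):
    """Spearman rank correlation (no tie handling): Pearson
    correlation of the two score vectors' ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# hypothetical forgetting-event counts from a proxy vs. a target model
proxy_scores = [5, 0, 3, 1, 4]
target_scores = [6, 1, 2, 0, 5]
print(round(spearman(proxy_scores, target_scores), 2))  # -> 0.9
```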
They also provided initial evidence that forgetting events are transferable across models and throughout training by using the forgetting events from ResNet18 to select a subset for WideResNet (Zagoruyko & Komodakis, 2016) and by computing the Spearman's correlation of forgetting events during training compared to their final values. In our work, we evaluated a similar idea of using proxy models to approximate various properties of a large model, and showed that proxy models closely match the rankings of large models in the entropy, greedy k-centers, and example forgetting metrics. + +# 5 CONCLUSION + +In this work, we introduced selection via proxy (SVP) to improve the computational efficiency of active learning and core-set selection in deep learning by substituting a cheaper proxy model's representation for an expensive model's during data selection. Applied to least confidence uncertainty sampling and Sener & Savarese (2018)'s greedy k-centers approach, SVP achieved up to a $41.9 \times$ and $3.8 \times$ improvement in runtime respectively with no significant increase in error (often within $0.1\%$ ). For core-set selection, we found that SVP can remove up to $50\%$ of the data from CIFAR10 in $10 \times$ less time than it takes to train the target model, achieving a $1.6 \times$ speed-up in end-to-end training. + +# ACKNOWLEDGMENTS + +This research was supported in part by affiliate members and other supporters of the Stanford DAWN project—Ant Financial, Facebook, Google, Infosys, NEC, and VMware—as well as Toyota Research Institute, Northrop Grumman, Amazon Web Services, Cisco, and the NSF under CAREER grant CNS-1651570. Jure Leskovec is a Chan Zuckerberg Biohub investigator. We also gratefully acknowledge the support of DARPA under Nos. FA865018C7880 (ASED), N660011924033 (MCS); ARO under Nos. W911NF-16-1-0342 (MURI), W911NF-16-1-0171 (DURIP); NSF under Nos. OAC-1835598 (CINES), OAC-1934578 (HDR), DGE-1656518 (GRFP), DGE-114747 (GRFP); SNSF under Nos.
P2EZP2_172187; Stanford Data Science Initiative, Wu Tsai Neurosciences Institute, Chan Zuckerberg Biohub, JD.com, Amazon, Boeing, Docomo, Huawei, Hitachi, Observe, Siemens, UST Global. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views, policies, or endorsements, either expressed or implied, of SNSF, NSF, DARPA, NIH, ARO, or the U.S. Government. + +# REFERENCES + +Trevor Campbell and Tamara Broderick. Automated scalable bayesian inference via hilbert coresets. arXiv preprint arXiv:1710.05053, 2017. +Trevor Campbell and Tamara Broderick. Bayesian coreset construction via greedy iterative geodesic ascent. arXiv preprint arXiv:1802.01737, 2018. +Alexis Conneau, Holger Schwenk, Loic Barrault, and Yann Lecun. Very deep convolutional networks for text classification. arXiv preprint arXiv:1606.01781, 2016. +Alexis Conneau, Holger Schwenk, Loic Barrault, and Yann Lecun. Very deep convolutional networks for text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pp. 1107-1116. Association for Computational Linguistics, 2017. URL http://aclweb.org/anthology/E17-1104. +Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. arXiv preprint arXiv:1803.03635, 2018. +Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning, pp. 1050-1059, 2016. +Yarin Gal, Riashat Islam, and Zoubin Ghahramani. Deep bayesian active learning with image data. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1183-1192. JMLR.org, 2017. 
+Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677, 2017. +Sariel Har-Peled and Akash Kushal. Smaller coresets for k-median and k-means clustering. Discrete & Computational Geometry, 37(1):3-19, 2007. +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016a. +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European conference on computer vision, pp. 630-645. Springer, 2016b. +Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. Neural collaborative filtering. In Proceedings of the 26th International Conference on World Wide Web, pp. 173-182. International World Wide Web Conferences Steering Committee, 2017. + +Neil Houlsby, Ferenc Huszár, Zoubin Ghahramani, and Máté Lengyel. Bayesian active learning for classification and preference learning. arXiv preprint arXiv:1112.5745, 2011. +Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In CVPR, volume 1, pp. 3, 2017. +Jonathan Huggins, Trevor Campbell, and Tamara Broderick. Coresets for scalable bayesian logistic regression. In Advances in Neural Information Processing Systems, pp. 4080-4088, 2016. +Forrest N Iandola, Song Han, Matthew W Moskewicz, Khalid Ashraf, William J Dally, and Kurt Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and $< 0.5$ mb model size. arXiv preprint arXiv:1602.07360, 2016. +Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759, 2016. +Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu.
Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016. +Andreas Kirsch, Joost van Amersfoort, and Yarin Gal. Batchbald: Efficient and diverse batch acquisition for deep bayesian active learning. arXiv preprint arXiv:1906.08158, 2019. +Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009. +Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097-1105, 2012. +David D Lewis and Jason Catlett. Heterogeneous uncertainty sampling for supervised learning. In Machine Learning Proceedings 1994, pp. 148-156. Elsevier, 1994. +David D Lewis and William A Gale. A sequential algorithm for training text classifiers. In Proceedings of the 17th annual international ACM SIGIR conference on Research and development in information retrieval, pp. 3-12. Springer-Verlag New York, Inc., 1994. +Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, et al. Mixed precision training. arXiv preprint arXiv:1710.03740, 2017. +Stephen Mussmann and Percy Liang. On the relationship between data efficiency and error for uncertainty sampling. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 3674-3682, Stockholm, Sweden, 10-15 Jul 2018. PMLR. URL http://proceedings.mlr.press/v80/mussmann18a.html. +Chongjia Ni, Cheung-Chi Leung, Lei Wang, Nancy F Chen, and Bin Ma. Unsupervised data selection and word-morph mixed language model for tamil low-resource keyword search. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 4714-4718. IEEE, 2015. 
+Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017. +Álvaro Peris and Francisco Casacuberta. Active learning for interactive neural machine translation of data streams. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pp. 151-160, Brussels, Belgium, October 2018. Association for Computational Linguistics. doi: 10.18653/v1/K18-1015. URL https://www.aclweb.org/anthology/K18-1015. +Charles Rosenberg, Martial Hebert, and Henry Schneiderman. Semi-supervised self-training of object detection models, January 2005. + +Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211-252, 2015. +Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4510-4520, 2018. +Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-set approach. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=H1aIuk-RW. +Burr Settles. From theories to queries: Active learning in practice. In Isabelle Guyon, Gavin Cawley, Gideon Dror, Vincent Lemaire, and Alexander Statnikov (eds.), Active Learning and Experimental Design workshop In conjunction with AISTATS 2010, volume 16 of Proceedings of Machine Learning Research, pp. 1-18, Sardinia, Italy, 16 May 2011. PMLR. URL http://proceedings.mlr.press/v16/settles11a.html. +Burr Settles. Active learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 6 (1):1-114, 2012. +H Sebastian Seung, Manfred Opper, and Haim Sompolinsky. 
Query by committee. In Proceedings of the fifth annual workshop on Computational learning theory, pp. 287-294. ACM, 1992. +Yanyao Shen, Hyokun Yun, Zachary Lipton, Yakov Kronrod, and Animashree Anandkumar. Deep active learning for named entity recognition. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pp. 252-256, Vancouver, Canada, August 2017. Association for Computational Linguistics. doi: 10.18653/v1/W17-2630. URL https://www.aclweb.org/anthology/W17-2630. +Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. +Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958, 2014. +Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1-9, 2015. +Katrin Tomanek, Joachim Wermter, and Udo Hahn. An approach to text corpus construction which cuts annotation costs and maintains reusability of annotated data. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), 2007. +Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, and Geoffrey J. Gordon. An empirical study of example forgetting during deep neural network learning. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=BJ1xm30cKm. +Ivor W Tsang, James T Kwok, and Pak-Ming Cheung. Core vector machines: Fast SVM training on very large data sets. Journal of Machine Learning Research, 6(Apr):363-392, 2005.
+Sebastian Tschiatschek, Rishabh K Iyer, Haochen Wei, and Jeff A Bilmes. Learning mixtures of submodular functions for image collection summarization. In Advances in neural information processing systems, pp. 1413-1421, 2014. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998-6008, 2017. + +Kai Wei, Yuzong Liu, Katrin Kirchhoff, and Jeff Bilmes. Using document summarization techniques for speech data subset selection. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 721-726, 2013. +Kai Wei, Yuzong Liu, Katrin Kirchhoff, Chris Bartels, and Jeff Bilmes. Submodular subset selection for large-scale speech training data. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp. 3311-3315. IEEE, 2014. +Gert W Wolf. Facility location: concepts, models, algorithms and case studies. 2011. +Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pp. 5987-5995. IEEE, 2017. +Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016. +Xiang Zhang and Yann LeCun. Text understanding from scratch. arXiv preprint arXiv:1502.01710, 2015. +Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In Advances in neural information processing systems, pp. 649-657, 2015.
+ +# A APPENDIX + +# A.1 CHOICE OF DATASETS + +For our experimental evaluation in Section 3, we focused on classification because it is a widely studied task in active learning (Lewis & Gale, 1994; Lewis & Catlett, 1994; Settles, 2012; Sener & Savarese, 2018; Kirsch et al., 2019; Mussmann & Liang, 2018; Gal et al., 2017; Houlsby et al., 2011). While there are a few bespoke solutions for machine translation (Peris & Casacuberta, 2018) and named entity recognition (Shen et al., 2017), we wanted to compare against the broader body of active learning research. Many popular active learning methods like uncertainty sampling (e.g., entropy, least confidence, and max-margin) assume a single categorical probability distribution for each example, which makes it hard to adapt to other domains. Instead of tackling open challenges for active learning like machine translation, we applied SVP to many classification datasets; however, the simplicity of SVP means that the core ideas of this paper can be applied more broadly in future work. + +Specifically, we performed experiments on three image classification datasets: CIFAR10, CIFAR100 (Krizhevsky & Hinton, 2009), and ImageNet (Russakovsky et al., 2015); and two text classification datasets: Amazon Review Polarity and Full (Zhang & LeCun, 2015; Zhang et al., 2015). While multiple tasks on roughly the same data distribution may seem redundant, the data efficiency of active learning depends on error (Mussmann & Liang, 2018). We included both CIFAR10 (low-error) and CIFAR100 (high-error) to demonstrate that our approach performs as well as standard active learning at different points on the error and data efficiency curve. The same rationale is also valid for Amazon Review Polarity (low-error) and Full (high-error). However, the Amazon Review dataset adds a medium (text) and a much larger scale (3.6M and 3M examples, respectively). 
Adding ImageNet further allows us to investigate scale in the number of examples, but also the number of classes and the dimension of the input. To the best of our knowledge, this is the first active learning paper to present results on the full ImageNet classification task. + +# A.2 IMPLEMENTATION DETAILS + +CIFAR10 and CIFAR100. We used ResNet164 with pre-activation from He et al. (2016b) as our large target model for both CIFAR10 and CIFAR100. Note that as originally proposed in He et al. (2016a), the smaller proxy models are also ResNet architectures with pre-activation, but they use pairs of $3 \times 3$ convolutional layers as their residual unit rather than bottlenecks and achieve lower accuracy as shown in Figure 4. As with He et al. (2016b), the ResNets we used were much narrower when applied to CIFAR rather than ImageNet (256 filters rather than 2048 in the final layer of the last bottleneck) and have fewer sections, which means far fewer weights despite the increased depth. For example, ResNet50 on ImageNet has $\sim 25\mathrm{M}$ weights while ResNet164 on CIFAR has $\sim 1.7\mathrm{M}$ (see Table 2). More recent networks such as Wide Residual Networks (Zagoruyko & Komodakis, 2016), ResNeXt (Xie et al., 2017), and DenseNets (Huang et al., 2017) use models with more than 25M parameters on CIFAR10, making ResNet164 relatively small in comparison. Core-set selection experiments used a single Nvidia P100 GPU, while the active learning experiments used a Titan V GPU. We followed the same training procedure, initialization, and hyperparameters as He et al. (2016b) with the exception of weight decay, which was set to 0.0005 and decreased the model's validation error in all conditions. + +ImageNet. We used the original ResNet architecture from He et al. (2016a) implemented in PyTorch $^{3}$ (Paszke et al., 2017) with ResNet50 as the target and ResNet18 as the proxy.
For training, we used a custom machine with 4 Nvidia Titan V GPUs and followed Nvidia's optimized implementation $^{4}$ with a larger batch size, appropriately scaled learning rate (Goyal et al., 2017), a 5-epoch warm-up period, and mixed precision training (Micikevicius et al., 2017) with the apex $^{5}$ library. For active learning, we used the same batch size of 768 images for both ResNet18 and ResNet50 for simplicity, which was the maximum batch size that could fit into memory for ResNet50. However, ResNet18 with a batch size of 768 underutilized the GPU and yielded a lower speed-up. With separate batch sizes for ResNet18 and ResNet50, we would have seen speed-ups closer to $2.7 \times$ . + +Table 2: Number of parameters in each model. + +
| Dataset | Model | Number of Parameters (millions) |
| --- | --- | --- |
| CIFAR10 | ResNet164 | 1.70 |
| | ResNet110 | 1.73 |
| | ResNet56 | 0.86 |
| | ResNet20 | 0.27 |
| | ResNet14 | 0.18 |
| | ResNet8 | 0.08 |
| CIFAR100 | ResNet164 | 1.73 |
| | ResNet110 | 1.74 |
| | ResNet56 | 0.86 |
| | ResNet20 | 0.28 |
| | ResNet14 | 0.18 |
| | ResNet8 | 0.08 |
| ImageNet | ResNet50 | 25.56 |
| | ResNet18 | 11.69 |
| Amazon Review Polarity | VDCNN29 | 16.64 |
| | VDCNN9 | 14.17 |
| Amazon Review Full | VDCNN29 | 16.64 |
| | VDCNN9 | 14.18 |
+ +Amazon Review Polarity (2-classes) and Full (5-classes). For Amazon Review Polarity and Amazon Review Full, we used VDCNN (Conneau et al., 2017) and fastText (Joulin et al., 2016) with VDCNN29 as the target and fastText and VDCNN9 as proxies. For Amazon Review Polarity, core-set selection experiments used a single Nvidia P100 GPU, while the active learning experiments used a Nvidia Titan V GPU to train VDCNN models. For Amazon Review Full, core-set selection and active learning experiments both used a Nvidia Titan V GPU. In all settings, we used the same training procedure from Conneau et al. (2017) for VDCNN9 and VDCNN29. For fastText, we used Facebook's implementation $^{6}$ and followed the same training procedure from Joulin et al. (2016). + +# A.3 MOTIVATION FOR CREATING PROXIES + +![](images/1660d415493622c63dd49dbc58859576b8629a4dfdd657dae3debfca930dc671.jpg) +(a) Top-1 test error and training time on CIFAR10 for ResNet with pre-activation and a varying number of layers. There is a diminishing return in accuracy by increasing the number of layers. + +![](images/3383aafaffbc8b1d36c879faf96e07f884f7534fa3827e896d5fe8f4bb2733bd.jpg) +(b) Top-1 test error during training of ResNet20 with pre-activation. In the first 14 minutes, ResNet20 reaches $9.0\%$ top-1 error, while the remaining 12 minutes are spent on decreasing error to $7.6\%$ . +Figure 4: Top-1 test error on CIFAR10 for varying model sizes (left) and over the course of training a single model (right), demonstrating a large amount of time is spent on small changes in accuracy. + +![](images/6460fe048ddf41fda043ffc9b136a41d90207eaaed27d05cda0a252d9f94e599.jpg) +(a) Top-1 test error and training time on CIFAR100 for ResNet with pre-activation and a varying number of layers. There are diminishing returns in accuracy from increasing the number of layers. 
![](images/4382019ae9585178e95c41495520d125edc422a6b0a2cb81a8728a1887685c79.jpg)
(b) Top-1 test error during training of ResNet20 with pre-activation. In the first 15 minutes, ResNet20 reaches $33.9\%$ top-1 error, while the remaining 12 minutes are spent on decreasing error to $31.1\%$.
Figure 5: Top-1 test error on CIFAR100 for varying model sizes (left) and over the course of training a single model (right), demonstrating that a large amount of time is spent on small changes in accuracy.

# A.4 ADDITIONAL ACTIVE LEARNING RESULTS

![](images/f822e965e978897857aafa9410f30e2df6adcf022943209ffcd1152ed80c294a.jpg)

![](images/5a84e1a381232a0403a0acf7571a8c079d26328f53bb8927338b4e9c7f26f045.jpg)
(a) CIFAR10 greedy k-centers

![](images/3f5376503c0d71fd9e523f866bddc67b79c91d8e1604ce03e831e21b24894388.jpg)
(c) CIFAR10 least confidence

![](images/c375651a1658670f7e168d82beb48dba65de453cbd77de34629a8c712c0a69eb.jpg)
(e) Amazon Review Polarity least confidence
(g) ImageNet least confidence
Figure 6: Quality of proxies compared to target models. Average ($\pm 1$ std.) top-1 error from 3 runs of active learning with varying proxies, selection methods, and budgets on five classification datasets. Dotted lines show the top-1 error of the proxy models, while solid lines show the top-1 error of the target models. CIFAR10 and CIFAR100 experiments used varying depths of pre-activation ResNet (R) models as proxies and ResNet164 (R164) as the target model (e.g., R20-R164 is ResNet20 selecting for ResNet164). ImageNet used ResNet18 (R18) as the proxy and ResNet50 (R50) as the target. Amazon Review Polarity and Amazon Review Full used VDCNN9 (V9) and fastText (FT) as proxies and VDCNN29 (V29) as the target. Across datasets, proxies, methods, and budgets, smaller proxies had higher top-1 error than the target model but selected points that were nearly as good as those selected by the target, which did not harm the final target model's predictive performance.
+ +![](images/ab60306431888fee704b83220beeda2b270f909c069be1c2fc853f08240654a4.jpg) + +![](images/462f776e39bd797f5ad8253e56460c97820c3bf79dcbab9c95dd458565cf1cd9.jpg) +(b) CIFAR100 greedy k-centers + +![](images/04a80ea3e7db285400035e9958c3835240f414d0e73facf0cea0e7428cec16c9.jpg) +(d) CIFAR100 least confidence +(f) Amazon Review Full least confidence + +Table 3: SVP performance on active learning. Average (± 1 std.) top-1 error and data selection speed-ups from 3 runs of active learning with varying proxies, methods, and labeling budgets on five datasets. Bold speed-ups indicate settings that either achieve lower error or are within 1 std. of the mean top-1 error for the baseline approach of using the same model for selection and the final predictions. Across datasets and methods, SVP sped up selection without significantly increasing the error of the final target. + +
| Dataset | Method | Selection Model | Error 10% | Error 20% | Error 30% | Error 40% | Error 50% | Speed-up 10% | Speed-up 20% | Speed-up 30% | Speed-up 40% | Speed-up 50% | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CIFAR10 | Random | - | 20.3 ± 0.51 | 12.9 ± 0.37 | 10.1 ± 0.24 | 8.5 ± 0.22 | 7.5 ± 0.11 | - | - | - | - | - | - |
| | Least Confidence | ResNet164 (Baseline) | 18.7 ± 0.31 | 10.4 ± 0.38 | 7.4 ± 0.16 | 6.1 ± 0.32 | 5.3 ± 0.06 | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× |
| | | ResNet110 | 18.1 ± 0.41 | 10.5 ± 0.06 | 7.5 ± 0.11 | 5.9 ± 0.33 | 5.3 ± 0.05 | 1.8× | 1.9× | 1.9× | 1.8× | 1.8× | 1.8× |
| | | ResNet56 | 18.2 ± 0.73 | 10.3 ± 0.28 | 7.4 ± 0.10 | 6.1 ± 0.06 | 5.5 ± 0.08 | 2.6× | 2.9× | 3.0× | 3.1× | 3.1× | 3.1× |
| | | ResNet20 | 18.1 ± 0.28 | 10.5 ± 0.42 | 7.4 ± 0.23 | 5.9 ± 0.19 | 5.4 ± 0.41 | 3.8× | 5.8× | 6.7× | 7.0× | 7.2× | 7.2× |
| | Greedy k-Centers | ResNet164 (Baseline) | 20.1 ± 0.39 | 11.3 ± 0.40 | 8.1 ± 0.22 | 6.6 ± 0.24 | 5.6 ± 0.04 | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× |
| | | ResNet110 | 19.4 ± 0.55 | 11.6 ± 0.16 | 8.1 ± 0.16 | 6.4 ± 0.10 | 5.7 ± 0.13 | 2.1× | 1.8× | 1.7× | 1.7× | 1.7× | 1.6× |
| | | ResNet56 | 19.8 ± 0.49 | 11.6 ± 0.16 | 8.4 ± 0.21 | 6.3 ± 0.17 | 5.7 ± 0.19 | 3.0× | 2.9× | 2.8× | 2.8× | 2.8× | 2.8× |
| | | ResNet20 | 19.5 ± 0.76 | 12.1 ± 0.44 | 8.8 ± 0.31 | 7.2 ± 0.19 | 6.1 ± 0.18 | 3.8× | 4.6× | 5.0× | 5.3× | 5.5× | 5.5× |
| CIFAR100 | Random | - | 60.7 ± 0.81 | 42.5 ± 0.55 | 36.0 ± 0.42 | 31.9 ± 0.48 | 29.3 ± 0.16 | - | - | - | - | - | - |
| | Least Confidence | ResNet164 (Baseline) | 61.2 ± 1.09 | 42.2 ± 0.67 | 33.9 ± 0.33 | 29.9 ± 0.18 | 26.9 ± 0.21 | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× |
| | | ResNet110 | 60.2 ± 0.84 | 42.3 ± 0.95 | 34.1 ± 0.38 | 29.7 ± 0.41 | 27.2 ± 0.25 | 1.5× | 1.6× | 1.6× | 1.6× | 1.6× | 1.6× |
| | | ResNet56 | 61.5 ± 0.93 | 42.0 ± 0.17 | 33.7 ± 0.33 | 29.7 ± 0.08 | 26.4 ± 0.13 | 2.4× | 2.7× | 3.0× | 2.9× | 3.1× | 3.1× |
| | | ResNet20 | 62.4 ± 1.07 | 41.4 ± 0.25 | 33.8 ± 0.37 | 29.8 ± 0.10 | 26.6 ± 0.14 | 4.0× | 5.8× | 6.6× | 7.0× | 7.2× | 7.2× |
| | Greedy k-Centers | ResNet164 (Baseline) | 60.4 ± 1.30 | 42.4 ± 0.57 | 34.5 ± 0.40 | 30.2 ± 0.33 | 27.3 ± 0.24 | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× |
| | | ResNet110 | 59.6 ± 0.78 | 42.2 ± 0.76 | 34.9 ± 0.40 | 30.3 ± 0.46 | 27.4 ± 0.21 | 2.3× | 1.9× | 1.8× | 1.7× | 1.6× | 1.6× |
| | | ResNet56 | 60.9 ± 1.08 | 42.6 ± 0.47 | 35.2 ± 0.40 | 30.8 ± 0.25 | 27.8 ± 0.23 | 3.3× | 3.2× | 3.1× | 3.1× | 3.0× | 3.0× |
| | | ResNet20 | 60.2 ± 1.27 | 42.9 ± 0.52 | 35.8 ± 0.45 | 31.6 ± 0.31 | 28.5 ± 0.48 | 4.5× | 5.5× | 5.9× | 6.1× | 6.2× | 6.2× |
| ImageNet | Random | - | 48.5 ± 0.04 | 37.5 ± 0.34 | 32.5 ± 0.12 | 29.9 ± 0.42 | 27.8 ± 0.13 | - | - | - | - | - | - |
| | Least Confidence | ResNet50 (Baseline) | 48.2 ± 0.37 | 35.9 ± 0.22 | 31.0 ± 0.10 | 28.3 ± 0.32 | 26.3 ± 0.16 | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× |
| | | ResNet18 | 48.3 ± 0.31 | 36.1 ± 0.19 | 31.1 ± 0.12 | 28.2 ± 0.13 | 26.4 ± 0.02 | 1.2× | 1.3× | 1.4× | 1.5× | 1.6× | 1.6× |
| Amazon Review Polarity | Random | - | 6.5 ± 0.03 | 5.6 ± 0.07 | 5.2 ± 0.07 | 4.9 ± 0.01 | 4.7 ± 0.03 | - | - | - | - | - | - |
| | Least Confidence | VDCNN29 (Baseline) | 5.8 ± 0.08 | 4.8 ± 0.04 | 4.5 ± 0.01 | 4.3 ± 0.02 | 4.2 ± 0.02 | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× |
| | | VDCNN9 | 5.8 ± 0.11 | 4.9 ± 0.01 | 4.5 ± 0.02 | 4.3 ± 0.04 | 4.3 ± 0.03 | 1.9× | 1.8× | 1.8× | 1.8× | 1.8× | 1.8× |
| | | fastText | 6.9 ± 0.81 | 5.2 ± 0.17 | 4.6 ± 0.01 | 4.3 ± 0.01 | 4.3 ± 0.02 | 10.6× | 20.6× | 32.2× | 41.9× | 51.3× | 51.3× |
| Amazon Review Full | Random | - | 41.7 ± 0.19 | 39.9 ± 0.05 | 39.0 ± 0.09 | 38.4 ± 0.14 | 37.9 ± 0.10 | - | - | - | - | - | - |
| | Least Confidence | VDCNN29 (Baseline) | 41.9 ± 0.54 | 39.7 ± 0.22 | 38.6 ± 0.10 | 38.2 ± 0.03 | 37.6 ± 0.01 | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× |
| | | VDCNN9 | 42.0 ± 0.44 | 39.8 ± 0.23 | 38.7 ± 0.09 | 38.1 ± 0.09 | 37.7 ± 0.10 | 2.0× | 1.9× | 1.8× | 1.8× | 1.8× | 1.8× |
| | | fastText | 42.7 ± 0.77 | 39.8 ± 0.02 | 38.7 ± 0.05 | 38.1 ± 0.06 | 37.7 ± 0.05 | 8.7× | 17.7× | 26.7× | 35.1× | 43.1× | 43.1× |
+ +Table 4: Performance of training for fewer epochs on active learning. Average (± 1 std.) top-1 error and data selection speed-ups from 3 runs of active learning with varying proxies trained for a varying number of epochs on CIFAR10, CIFAR100, and ImageNet. Bold speed-ups indicate settings that either achieve lower error or are within 1 std. of the mean top-1 error for the baseline approach of using the same model for selection and the final predictions. Training for fewer epochs can provide a significant improvement over random sampling but is not quite as good as training for the full schedule. + +
| Dataset | Method | Selection Model | Epochs | Error 10% | Error 20% | Error 30% | Error 40% | Error 50% | Speed-up 10% | Speed-up 20% | Speed-up 30% | Speed-up 40% | Speed-up 50% | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CIFAR10 | Random | - | - | 20.2 ± 0.49 | 12.9 ± 0.50 | 10.1 ± 0.18 | 8.6 ± 0.12 | 7.5 ± 0.15 | - | - | - | - | - | - |
| | Least Confidence | ResNet164 (Baseline) | 181 | 18.7 ± 0.31 | 10.4 ± 0.38 | 7.4 ± 0.16 | 6.1 ± 0.32 | 5.3 ± 0.06 | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× |
| | | | 100 | 18.4 ± 0.23 | 10.3 ± 0.13 | 7.3 ± 0.08 | 6.0 ± 0.16 | 5.3 ± 0.13 | 1.8× | 1.8× | 1.8× | 1.8× | 1.8× | 1.8× |
| | | | 50 | 19.2 ± 0.35 | 11.7 ± 0.55 | 8.4 ± 0.15 | 7.2 ± 0.15 | 5.9 ± 0.14 | 3.4× | 3.6× | 3.6× | 3.6× | 3.6× | 3.6× |
| | | ResNet20 | 181 | 18.1 ± 0.28 | 10.5 ± 0.42 | 7.4 ± 0.23 | 5.9 ± 0.19 | 5.4 ± 0.41 | 3.8× | 5.8× | 6.7× | 7.0× | 7.2× | 7.2× |
| | | | 100 | 18.4 ± 0.38 | 10.3 ± 0.20 | 7.4 ± 0.13 | 5.9 ± 0.31 | 5.3 ± 0.16 | 6.8× | 10.3× | 11.6× | 12.3× | 12.7× | 12.7× |
| | | | 50 | 18.8 ± 0.81 | 11.5 ± 0.33 | 8.5 ± 0.19 | 6.8 ± 0.09 | 5.9 ± 0.31 | 11.4× | 19.4× | 23.1× | 24.7× | 25.6× | 25.6× |
| | Greedy k-Centers | ResNet164 (Baseline) | 181 | 20.1 ± 0.38 | 11.3 ± 0.26 | 8.2 ± 0.19 | 6.7 ± 0.25 | 5.6 ± 0.05 | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× |
| | | | 100 | 19.9 ± 0.41 | 11.9 ± 0.08 | 8.7 ± 0.22 | 6.8 ± 0.10 | 6.1 ± 0.10 | 1.3× | 1.4× | 1.4× | 1.5× | 1.5× | 1.5× |
| | | | 50 | 21.9 ± 0.58 | 13.3 ± 0.23 | 9.9 ± 0.29 | 8.0 ± 0.20 | 7.0 ± 0.11 | 1.6× | 1.9× | 2.2× | 2.3× | 2.5× | 2.5× |
| | | ResNet110 | 181 | 19.3 ± 0.51 | 11.5 ± 0.21 | 8.1 ± 0.19 | 6.4 ± 0.07 | 5.6 ± 0.10 | 2.1× | 1.8× | 1.6× | 1.6× | 1.5× | 1.5× |
| | | | 100 | 19.6 ± 0.74 | 11.9 ± 0.15 | 8.6 ± 0.14 | 6.9 ± 0.06 | 5.8 ± 0.07 | 3.2× | 2.8× | 2.6× | 2.6× | 2.5× | 2.5× |
| | | | 50 | 21.0 ± 0.91 | 12.9 ± 0.31 | 9.7 ± 0.31 | 8.0 ± 0.12 | 6.9 ± 0.12 | 4.7× | 4.8× | 4.7× | 4.7× | 4.7× | 4.7× |
| | | ResNet56 | 181 | 19.7 ± 0.48 | 11.6 ± 0.20 | 8.3 ± 0.18 | 6.2 ± 0.10 | 5.7 ± 0.24 | 2.9× | 2.7× | 2.6× | 2.7× | 2.6× | 2.6× |
| | | | 100 | 19.7 ± 0.83 | 11.7 ± 0.26 | 8.7 ± 0.23 | 7.0 ± 0.19 | 6.2 ± 0.17 | 4.6× | 4.7× | 4.8× | 4.8× | 4.8× | 4.8× |
| | | | 50 | 20.5 ± 0.05 | 13.0 ± 0.44 | 9.6 ± 0.26 | 8.1 ± 0.19 | 7.0 ± 0.15 | 6.3× | 7.5× | 8.1× | 8.4× | 8.8× | 8.8× |
| | | ResNet20 | 181 | 19.0 ± 0.35 | 12.0 ± 0.61 | 8.7 ± 0.32 | 7.2 ± 0.19 | 6.3 ± 0.06 | 3.7× | 4.5× | 4.8× | 5.0× | 5.0× | 5.2× |
| | | | 100 | 20.1 ± 0.31 | 12.6 ± 0.19 | 9.1 ± 0.08 | 7.4 ± 0.35 | 6.4 ± 0.11 | 5.8× | 7.6× | 8.9× | 9.7× | 10.2× | 10.2× |
| | | | 50 | 21.6 ± 0.55 | 13.7 ± 0.22 | 10.4 ± 0.04 | 8.3 ± 0.23 | 7.1 ± 0.15 | 7.7× | 11.1× | 13.7× | 15.6× | 17.2× | 17.2× |
| CIFAR100 | Random | - | - | 61.0 ± 0.31 | 42.3 ± 0.61 | 36.2 ± 0.36 | 32.0 ± 0.61 | 29.4 ± 0.20 | - | - | - | - | - | - |
| | Least Confidence | ResNet164 (Baseline) | 181 | 61.2 ± 1.09 | 42.2 ± 0.67 | 33.9 ± 0.33 | 29.9 ± 0.18 | 26.9 ± 0.21 | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× |
| | | | 100 | 61.1 ± 1.57 | 41.4 ± 0.27 | 33.8 ± 0.60 | 29.8 ± 0.21 | 26.9 ± 0.24 | 1.8× | 1.8× | 1.8× | 1.8× | 1.8× | 1.8× |
| | | | 50 | 62.5 ± 2.49 | 44.1 ± 0.24 | 35.2 ± 0.23 | 30.8 ± 0.47 | 27.5 ± 0.43 | 3.3× | 3.6× | 3.6× | 3.6× | 3.6× | 3.6× |
| | | ResNet20 | 181 | 62.4 ± 1.07 | 41.4 ± 0.25 | 33.8 ± 0.37 | 29.8 ± 0.10 | 26.6 ± 0.14 | 4.0× | 5.8× | 6.6× | 7.0× | 7.2× | 7.2× |
| | | | 100 | 61.9 ± 1.12 | 42.2 ± 0.61 | 34.3 ± 0.37 | 29.7 ± 0.06 | 27.0 ± 0.26 | 6.8× | 10.4× | 12.0× | 12.7× | 13.1× | 13.1× |
| | | | 50 | 62.7 ± 1.40 | 43.5 ± 0.58 | 35.4 ± 0.23 | 30.9 ± 0.23 | 27.9 ± 0.64 | 11.5× | 18.9× | 22.4× | 24.3× | 25.1× | 25.1× |
| | Greedy k-Centers | ResNet164 (Baseline) | 181 | 60.5 ± 0.90 | 42.1 ± 0.47 | 34.4 ± 0.45 | 30.4 ± 0.30 | 27.4 ± 0.05 | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× |
| | | | 100 | 60.4 ± 1.65 | 42.7 ± 0.50 | 34.9 ± 0.09 | 30.3 ± 0.46 | 27.8 ± 0.43 | 1.3× | 1.4× | 1.5× | 1.5× | 1.6× | 1.6× |
| | | | 50 | 60.5 ± 0.17 | 43.0 ± 0.11 | 36.3 ± 0.33 | 32.2 ± 0.22 | 29.2 ± 0.23 | 1.7× | 2.0× | 2.3× | 2.5× | 2.6× | 2.6× |
| | | ResNet110 | 181 | 59.3 ± 0.70 | 42.2 ± 1.06 | 34.7 ± 0.40 | 30.3 ± 0.65 | 27.5 ± 0.17 | 2.3× | 1.9× | 1.8× | 1.7× | 1.6× | 1.6× |
| | | | 100 | 60.1 ± 0.60 | 42.7 ± 0.47 | 35.0 ± 0.20 | 30.6 ± 0.38 | 27.7 ± 0.12 | 3.1× | 2.7× | 2.5× | 2.4× | 2.3× | 2.3× |
| | | | 50 | 62.6 ± 1.54 | 43.8 ± 0.51 | 36.4 ± 0.97 | 32.4 ± 0.25 | 29.1 ± 0.48 | 5.1× | 5.0× | 5.0× | 5.0× | 5.0× | 5.0× |
| | | ResNet56 | 181 | 60.7 ± 0.71 | 42.7 ± 0.47 | 35.3 ± 0.55 | 30.8 ± 0.33 | 27.9 ± 0.07 | 3.4× | 3.2× | 3.2× | 3.1× | 3.1× | 3.1× |
| | | | 100 | 60.9 ± 1.07 | 43.2 ± 0.38 | 35.1 ± 0.38 | 30.9 ± 0.57 | 27.8 ± 0.06 | 4.6× | 4.7× | 4.7× | 4.9× | 5.0× | 5.0× |
| | | | 50 | 60.1 ± 1.27 | 44.4 ± 0.56 | 36.5 ± 0.32 | 32.6 ± 0.62 | 29.6 ± 0.53 | 6.8× | 8.1× | 8.7× | 9.1× | 9.2× | 9.2× |
| | | ResNet20 | 181 | 60.0 ± 0.59 | 42.8 ± 0.70 | 35.8 ± 0.63 | 31.7 ± 0.40 | 28.7 ± 0.54 | 4.5× | 5.4× | 5.8× | 5.9× | 6.1× | 6.1× |
| | | | 100 | 61.9 ± 1.01 | 43.2 ± 0.33 | 35.6 ± 0.23 | 31.4 ± 0.38 | 28.8 ± 0.23 | 6.4× | 8.2× | 9.5× | 10.3× | 10.9× | 10.9× |
| | | | 50 | 61.6 ± 0.65 | 45.1 ± 0.61 | 37.3 ± 1.05 | 32.9 ± 0.49 | 30.2 ± 0.18 | 8.1× | 11.6× | 14.3× | 16.3× | 17.7× | 17.7× |
| ImageNet | Random | - | - | 48.5 ± 0.04 | 37.5 ± 0.34 | 32.5 ± 0.12 | 29.9 ± 0.42 | 27.8 ± 0.13 | - | - | - | - | - | - |
| | Least Confidence | ResNet50 (Baseline) | 90 | 48.2 ± 0.37 | 35.9 ± 0.22 | 31.0 ± 0.10 | 28.3 ± 0.32 | 26.3 ± 0.16 | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× | 1.0× |
| | | | 45 | 48.7 ± 0.21 | 36.3 ± 0.03 | 31.3 ± 0.02 | 28.3 ± 0.19 | 26.5 ± 0.17 | 1.7× | 1.8× | 1.8× | 1.8× | 1.7× | 1.7× |
| | | ResNet18 | 90 | 48.3 ± 0.31 | 36.1 ± 0.19 | 31.1 ± 0.12 | 28.2 ± 0.13 | 26.4 ± 0.02 | 1.2× | 1.3× | 1.4× | 1.5× | 1.6× | 1.6× |
| | | | 45 | 48.3 ± 0.31 | 36.3 ± 0.07 | 31.3 ± 0.02 | 28.4 ± 0.17 | 26.6 ± 0.08 | 2.1× | 2.5× | 2.7× | 2.9× | 3.1× | 3.1× |
+ +# A.5 ADDITIONAL CORE-SET SELECTION RESULTS + +![](images/f8d48c1626e8193e5539102db6a845dc8a86d68395fe11bde8cc85acc4a2c942.jpg) +Figure 7: SVP performance on core-set selection. Average (± 1 std.) top-1 error of ResNet164 over 5 runs of core-set selection with different selection methods, proxies, and subset sizes on CIFAR100. We found subsets using forgetting events (left), entropy (middle), and greedy k-centers (right) from a proxy model trained over the entire dataset. Across datasets and selection methods, SVP performed as well as an oracle baseline but significantly faster (speed-ups in parentheses). + +Table 5: Average top-1 error (± 1 std.) from 3 runs of core-set selection with varying selection methods on ImageNet, Amazon Review Polarity, and Amazon Review Full. + +
| Dataset | Method | Selection Model | Top-1 Error 40% | Top-1 Error 60% | Top-1 Error 80% | Top-1 Error 100% |
| --- | --- | --- | --- | --- | --- | --- |
| ImageNet | Random | - | 32.2 ± 0.12 | 28.0 ± 0.15 | 25.8 ± 0.06 | 23.3 ± 0.11 |
| | Entropy | ResNet50 (Baseline) | 34.9 ± 0.08 | 28.8 ± 0.03 | 25.9 ± 0.04 | - |
| | Entropy | ResNet18 | 32.2 ± 0.04 | 27.0 ± 0.01 | 25.1 ± 0.07 | - |
| | Forgetting Events | ResNet50 (Baseline) | 31.9 ± 0.07 | 26.7 ± 0.06 | 24.8 ± 0.03 | - |
| | Forgetting Events | ResNet18 | 31.6 ± 0.07 | 27.1 ± 0.10 | 25.3 ± 0.18 | - |
| Amazon Review Polarity | Random | - | 4.9 ± 0.02 | 4.5 ± 0.05 | 4.3 ± 0.01 | 4.1 ± 0.04 |
| | Entropy | VDCNN29 (Baseline) | 4.4 ± 0.03 | 4.2 ± 0.02 | 4.2 ± 0.02 | - |
| | Entropy | VDCNN9 | 4.4 ± 0.02 | 4.2 ± 0.01 | 4.2 ± 0.00 | - |
| | Entropy | fastText | 4.4 ± 0.02 | 4.2 ± 0.02 | 4.2 ± 0.02 | - |
| Amazon Review Full | Random | - | 38.4 ± 0.03 | 37.6 ± 0.03 | 37.0 ± 0.05 | 36.6 ± 0.06 |
| | Entropy | VDCNN29 (Baseline) | 42.7 ± 1.14 | 39.3 ± 0.14 | 37.6 ± 0.10 | - |
| | Entropy | VDCNN9 | 41.1 ± 0.24 | 38.8 ± 0.03 | 37.7 ± 0.09 | - |
| | Entropy | fastText | 39.0 ± 0.18 | 37.8 ± 0.06 | 37.1 ± 0.06 | - |
+ +Table 6: Average (± 1 std.) top-1 error and runtime in minutes from 5 runs of core-set selection with varying proxies, selection methods, and subset sizes on CIFAR10 and CIFAR100. + +
| Dataset | Method | Selection Model | Error 30% | Error 50% | Error 70% | Selection Runtime 30% | Selection Runtime 50% | Selection Runtime 70% | Total Runtime 30% | Total Runtime 50% | Total Runtime 70% |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CIFAR10 | Facility Location | ResNet164 (Baseline) | 8.9 ± 0.29 | 6.3 ± 0.23 | 5.4 ± 0.09 | 265 ± 48.0 | 286 ± 91.6 | 260 ± 42.6 | 342 ± 47.7 | 406 ± 94.3 | 425 ± 41.7 |
| | | ResNet20 | 9.1 ± 0.33 | 6.4 ± 0.13 | 5.5 ± 0.21 | 27 ± 1.1 | 28 ± 1.4 | 30 ± 2.2 | 104 ± 1.9 | 147 ± 1.0 | 193 ± 5.7 |
| | | ResNet56 | 8.9 ± 0.09 | 6.1 ± 0.21 | 5.3 ± 0.07 | 65 ± 3.9 | 67 ± 3.8 | 68 ± 3.4 | 142 ± 4.7 | 187 ± 4.8 | 230 ± 5.1 |
| | Forgetting Events | ResNet164 (Baseline) | 7.7 ± 0.19 | 5.2 ± 0.11 | 5.0 ± 0.12 | 218 ± 1.4 | 218 ± 1.6 | 219 ± 1.5 | 296 ± 3.2 | 340 ± 6.8 | 382 ± 4.6 |
| | | ResNet20 | 7.6 ± 0.18 | 5.2 ± 0.11 | 5.1 ± 0.07 | 24 ± 1.3 | 24 ± 1.4 | 25 ± 1.5 | 101 ± 2.6 | 142 ± 2.5 | 185 ± 5.0 |
| | | ResNet56 | 7.7 ± 0.27 | 5.2 ± 0.09 | 5.1 ± 0.09 | 63 ± 4.3 | 63 ± 4.0 | 63 ± 4.0 | 141 ± 5.4 | 184 ± 4.6 | 226 ± 2.8 |
| | Entropy | ResNet164 (Baseline) | 9.6 ± 0.16 | 6.4 ± 0.27 | 5.6 ± 0.19 | 218 ± 1.4 | 218 ± 1.7 | 218 ± 1.6 | 296 ± 1.5 | 338 ± 2.2 | 382 ± 3.1 |
| | | ResNet20 | 8.9 ± 0.18 | 5.7 ± 0.23 | 5.3 ± 0.09 | 24 ± 1.3 | 24 ± 1.5 | 25 ± 1.5 | 103 ± 2.2 | 145 ± 1.3 | 190 ± 3.7 |
| | | ResNet56 | 9.9 ± 0.29 | 6.6 ± 0.09 | 5.7 ± 0.17 | 63 ± 4.3 | 63 ± 4.0 | 63 ± 4.0 | 141 ± 4.8 | 182 ± 4.0 | 226 ± 3.8 |
| CIFAR100 | Facility Location | ResNet164 (Baseline) | 40.8 ± 0.20 | 29.5 ± 0.29 | 24.6 ± 0.42 | 263 ± 52.2 | 325 ± 158.7 | 296 ± 70.2 | 339 ± 52.7 | 446 ± 158.1 | 460 ± 69.1 |
| | | ResNet20 | 35.2 ± 0.37 | 28.2 ± 0.23 | 24.7 ± 0.30 | 27 ± 0.8 | 28 ± 1.3 | 30 ± 1.4 | 105 ± 2.6 | 151 ± 3.6 | 198 ± 4.6 |
| | | ResNet56 | 40.8 ± 0.89 | 29.6 ± 0.28 | 24.7 ± 0.40 | 64 ± 1.7 | 66 ± 1.9 | 67 ± 2.2 | 142 ± 1.7 | 185 ± 1.5 | 230 ± 3.9 |
| | | ResNet110 | 42.3 ± 0.44 | 29.5 ± 0.43 | 24.7 ± 0.38 | 129 ± 3.7 | 131 ± 3.6 | 132 ± 3.5 | 208 ± 7.3 | 253 ± 8.5 | 303 ± 11.6 |
| | Forgetting Events | ResNet164 (Baseline) | 36.8 ± 0.36 | 27.1 ± 0.40 | 23.5 ± 0.19 | 221 ± 6.1 | 221 ± 6.1 | 221 ± 6.1 | 298 ± 5.7 | 342 ± 5.5 | 384 ± 4.7 |
| | | ResNet20 | 37.2 ± 0.29 | 27.1 ± 0.14 | 23.4 ± 0.16 | 24 ± 0.7 | 25 ± 0.7 | 25 ± 0.7 | 104 ± 3.3 | 148 ± 3.6 | 193 ± 6.1 |
| | | ResNet56 | 36.7 ± 0.23 | 27.0 ± 0.33 | 23.3 ± 0.28 | 62 ± 2.4 | 62 ± 2.6 | 62 ± 1.9 | 141 ± 7.1 | 183 ± 3.8 | 228 ± 5.2 |
| | | ResNet110 | 36.6 ± 0.51 | 26.9 ± 0.27 | 23.4 ± 0.37 | 127 ± 2.7 | 127 ± 2.7 | 127 ± 2.7 | 207 ± 3.7 | 250 ± 4.9 | 293 ± 7.3 |
| | Entropy | ResNet164 (Baseline) | 39.6 ± 0.43 | 30.1 ± 0.12 | 25.4 ± 0.39 | 220 ± 6.4 | 220 ± 6.4 | 220 ± 6.4 | 297 ± 7.3 | 340 ± 7.3 | 380 ± 7.1 |
| | | ResNet20 | 46.5 ± 0.74 | 29.7 ± 0.45 | 24.2 ± 0.21 | 24 ± 0.6 | 25 ± 0.7 | 25 ± 0.7 | 105 ± 1.7 | 148 ± 2.6 | 193 ± 3.6 |
| | | ResNet56 | 42.6 ± 0.63 | 29.6 ± 0.13 | 24.8 ± 0.29 | 62 ± 1.7 | 62 ± 1.8 | 62 ± 1.9 | 142 ± 1.9 | 186 ± 3.9 | 230 ± 5.9 |
| | | ResNet110 | 40.2 ± 0.28 | 30.4 ± 0.35 | 25.5 ± 0.34 | 127 ± 3.0 | 127 ± 3.1 | 127 ± 3.1 | 204 ± 3.3 | 247 ± 3.5 | 291 ± 3.7 |
+ +Table 7: Average top-1 error (± 1 std.) and runtime in minutes from 5 runs of core-set selection with varying selection methods calculated from ResNet20 models trained for a varying number of epochs on CIFAR10 and CIFAR100. + +# A.6 ADDITIONAL CORRELATION RESULTS + +![](images/5314ac5f1449014ec96f45470c3fa8580d79d1bf37d03ad82bcd7f152bd72f3f.jpg) +Figure 8: Comparing selection across model architectures on ImageNet. Spearman's correlation between max entropy rankings from PyTorch (Paszke et al., 2017) pretrained models on ImageNet. Correlations are high across a wide range of model architectures (Xie et al., 2017; He et al., 2016a; Sandler et al., 2018; Huang et al., 2017; Szegedy et al., 2015; Simonyan & Zisserman, 2014; Iandola et al., 2016; Krizhevsky et al., 2012). For example, MobileNet V2's entropy-based rankings were highly correlated to ResNet50, even though the model had far fewer parameters (3.5M vs. 25.6M). In concert with our fastText and VDCNN results from Section 3.2, the high correlations between different model architectures suggest that SVP might be widely applicable. + +![](images/0b15a5d8e8dd74357614659ee8eb877eccbdfc652f703e6156e4471dbe699653.jpg) +(a) CIFAR100 forgetting events + +![](images/4b2bdbb212afe1dc99ddc9e976348657e42b6fe2f7a8668788e3b38d77423dd6.jpg) +(b) CIFAR100 entropy +Figure 9: Comparing selection across model sizes and methods on CIFAR100. Average Spearman's correlation between different runs of ResNet (R) models and a varying depths. We computed rankings based on forgetting events (left), entropy (middle), and greedy k-centers (right). We saw a similarly high correlation across model architectures (off-diagonal) as between runs of the same architecture (on-diagonal), suggesting that small models are good proxies for data selection. 
+ +![](images/7766acd8fef3bda47192b727b9bf5ee4768c05160c7ef1532f61ba4863f87301.jpg) +(c) CIFAR100 greedy k-centers + +![](images/06ddc61a9eef30c681e861ec6b92776df89cc0b19ed1ad12b88504cadce6051e.jpg) +(a) CIFAR10 facility location + +![](images/827e1c08249de5341b0d3d15e2a99e2432167ab1ded8f417f30ef3106ecd03fd.jpg) +(b) CIFAR100 facility location +Figure 10: Spearman's rank-order correlation between different runs of ResNet (R) with pre-activation and a varying number of layers on CIFAR10 (left) and CIFAR100 (right). For each combination, we compute the average from 20 pairs of runs. For each run, we compute rankings based on the order examples are added in facility location using the same initial subset of 1,000 randomly selected examples. The results are consistent with Figure 3c and Figure 9c, demonstrating that most of the variation is due to stochasticity in training rather than the initial subset. + +![](images/dc4b670b6c13439f12cbc7fca6710bd56be381f54b8a06c411cdd225130065bc.jpg) +(a) CIFAR10 forgetting events +Figure 11: Average $(\pm 1$ std.) Spearman's rank-order correlation with ResNet164 during 5 training runs of varying ResNet architectures on CIFAR10 (left) and CIFAR100 (right), where rankings were based on forgetting events. + +![](images/65a76ca71acf498dd9e99d339553e40dca2784ff80debf91bbb857e6766ec5ae.jpg) +(b) CIFAR100 forgetting events + +![](images/eb625b2a80149b95c7da1344cbefbf65db3045e5fbb38ff567530c10ee6ce98e.jpg) +(a) CIFAR10 entropy + +![](images/571cc5fede71645a7b25ba650131fe423d8a43d6fe57dfd0c36ee5f5f8997554.jpg) +(b) CIFAR100 entropy +Figure 12: Average (± 1 std.) Spearman's rank-order correlation with ResNet164 during 5 training runs of varying ResNet architectures on CIFAR10 (left) and CIFAR100 (right), where rankings were based on entropy. + +![](images/254dfef1c405837476e12760cf72423b5e0565d15cb6148ab7b6e8b5d1839dd3.jpg) +(a) CIFAR10 forgetting events +Figure 13: Average $(\pm 1$ std.) 
Spearman's rank-order correlation between epochs during 5 training runs of varying ResNet architectures on CIFAR10 (left) and CIFAR100 (right), where rankings were based on forgetting events. + +![](images/d07c85ce2184f9af7faa7bb6df15968be37bce82a6eab772ab956b5d5f412e1d.jpg) +(b) CIFAR100 forgetting events + +![](images/113bc8f66c8685cd1c51b8fb534409e3c7f6a8e0eaa3db51a33b4662f346fe16.jpg) +(a) CIFAR10 entropy +Figure 14: Average $(\pm 1$ std.) Spearman's rank-order correlation between epochs during 5 training runs of varying ResNet architectures on CIFAR10 (left) and CIFAR100 (right), where rankings were based on entropy. + +![](images/ac7ab413b4a71ac2e3bb91b41feac4f957ec006f018c9d94dc8f45eb7965ff42.jpg) +(b) CIFAR100 entropy + +![](images/e9fea48be4ecdb58b3381166523e5b6f9bb596f0281ad24c2d8d16559f77144d.jpg) +(a) CIFAR10 forgetting events + +![](images/77d0c7fb349aee667d479239cec32b5c5ee282b4b9bdd3ff9717e7f153df8c92.jpg) +(b) CIFAR10 entropy + +![](images/f50c3e15fdb2061d9c7ee349465fe37258dc1457635cd1490654777b19079c96.jpg) +(c) CIFAR100 forgetting events + +![](images/a7539a8658563685738309d25cb6d6e7ee50172cff407f541564a768c07215d4.jpg) +(d) CIFAR100 entropy +Figure 15: Pearson correlation coefficient between different runs of ResNet (R) with pre-activation and a varying number of layers on CIFAR10 (top) and CIFAR100 (bottom). For each combination, we compute the average from 20 pairs of runs. For each run, we compute rankings based on the number of forgetting events (left), and entropy of the final model (right). Generally, we see a similarly high correlation across model architectures (off-diagonal) as between runs of the same architecture (on-diagonal), providing further evidence that small models are good proxies for data selection. 
+ +![](images/9b20e1bdc2b193494cc6c1b6a6896604b6048372692b230bef8adfc80a8a3b15.jpg) + +![](images/ced0eaeee64ca288012f3f0521dc4365893df9688b4ed3225854de95559d8237.jpg) +(d) ResNet20 forgetting events +Figure 16: 2D t-SNE plots from the final hidden layer of a fully trained ResNet164 model on CIFAR10 and a $30\%$ subset selected (black). The top row uses another run of ResNet164 to select the subset and the bottom row uses ResNet20. Rankings are computed using forgetting events (left), entropy (middle), and greedy k-centers (right). + +![](images/30945e4aaceca5d1fdd74616543861f8a07b537f88f4d0c8d13a08fb8cc3956b.jpg) + +![](images/899baa30793e7fb61f8944a11dda7ede48041149e65227ef3a10b95a5a62e00a.jpg) + +![](images/b8a54d9eaca54d186537c565fe0e2f92131bc8a7a4153a58b1d56469e3883797.jpg) +(b) ResNet164 entropy +(e) ResNet20 entropy + +![](images/4fd72c7f082941e45232848ec40cb2686c22f22bc10e73fcdae1c877e5197bab.jpg) +(c) ResNet164 greedy k-centers +(f) ResNet20 greedy k-centers + +![](images/8ae7607594841d33936c9dc495042531e4e1929b70443315ed90ea4c00e09a6f.jpg) + +![](images/0d0d5d174f34802ff72aa50250273972043e6688d5e51e08b2645b1767481c40.jpg) + +![](images/adfb5074f25ae8e730ff873160b5542b72986792bd1dc2118f6b45a0e160f283.jpg) + +![](images/c8eacd49a110f80febbaf6886cd3751d143ec18668762ae542f4cd7d9f7486f3.jpg) + +![](images/d9c76ee9de9b4d51b7cabceba0c0744144615ddb9be84c3ab90ce05a4769cf6b.jpg) +(a) FE at 1 epoch +(e) Entropy at 1 epoch +Figure 17: 2D t-SNE plots from the final hidden layer of a fully trained ResNet164 model on CIFAR10 and a $30\%$ subset selected (black) with ResNet20 trained after a varying number of epochs. Rankings are calculated with forgetting events (top) and entropy (bottom). Notably, the ranking from forgetting events is much more stable because the model's uncertainty is effectively averaged throughout training rather than a single snapshot at the end like entropy. 
For t-SNE plots of the entire training run, please see http://bit.ly/svp-cifar10-tsne-entropy and http://bit.ly/svp-cifar10-tsne-forget for entropy and forgetting events respectively. + +![](images/836044a4c9ee2d976040e2c8625751025b666e4e5a591cd2dafa71b60b8a93ea.jpg) +(b) FE at 25 epochs +(f) Entropy at 25 epochs + +![](images/7a4dfe4b6d4320bcf5dff58194d9f7d1fbc1780b0eb30e7d5499c227e470f282.jpg) +(c) FE at 50 epochs +(g) Entropy at 50 epochs + +![](images/e81579d3c1af084f3f5bbd84c32344e565b834fb318828ee3ce1582c7927c9b8.jpg) +(d) FE at 100 epochs +(h) Entropy at 100 epochs \ No newline at end of file diff --git a/selectionviaproxyefficientdataselectionfordeeplearning/images.zip b/selectionviaproxyefficientdataselectionfordeeplearning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..6397e6d08bb3dce73e503e750b7d1846c82ea173 --- /dev/null +++ b/selectionviaproxyefficientdataselectionfordeeplearning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:75a86e308696332fc067446d826d6d63debbee5d075d9e5ef054d71d70e65002 +size 2285244 diff --git a/selectionviaproxyefficientdataselectionfordeeplearning/layout.json b/selectionviaproxyefficientdataselectionfordeeplearning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..6d1ad5cfc1ec34ef61b097eca462fd073f5c22df --- /dev/null +++ b/selectionviaproxyefficientdataselectionfordeeplearning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ab8c668d83751dfc5d46723dd19acfe4f96d26eb2261e24664d87d69a3514725 +size 654743 diff --git a/selfadversariallearningwithcomparativediscriminationfortextgeneration/fc8a1008-1a15-488f-9ac3-e5604afd7107_content_list.json b/selfadversariallearningwithcomparativediscriminationfortextgeneration/fc8a1008-1a15-488f-9ac3-e5604afd7107_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..2176e276bf701a7b00457bf96f2ef45884c692da --- /dev/null +++ 
b/selfadversariallearningwithcomparativediscriminationfortextgeneration/fc8a1008-1a15-488f-9ac3-e5604afd7107_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3026c56dc9642995b3d206b91cacfe82c8a76910cd871672d221544956471cb8 +size 98613 diff --git a/selfadversariallearningwithcomparativediscriminationfortextgeneration/fc8a1008-1a15-488f-9ac3-e5604afd7107_model.json b/selfadversariallearningwithcomparativediscriminationfortextgeneration/fc8a1008-1a15-488f-9ac3-e5604afd7107_model.json new file mode 100644 index 0000000000000000000000000000000000000000..42bfe35667c961bcf4263e72812b09422a78ff65 --- /dev/null +++ b/selfadversariallearningwithcomparativediscriminationfortextgeneration/fc8a1008-1a15-488f-9ac3-e5604afd7107_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c60c4f3a7ad9f57e0ed5af4731bd3eaefa4801e4a4b2330cf8d535edab79a02a +size 114810 diff --git a/selfadversariallearningwithcomparativediscriminationfortextgeneration/fc8a1008-1a15-488f-9ac3-e5604afd7107_origin.pdf b/selfadversariallearningwithcomparativediscriminationfortextgeneration/fc8a1008-1a15-488f-9ac3-e5604afd7107_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8f6f517af9db3604c0b521b494db9931a61f547c --- /dev/null +++ b/selfadversariallearningwithcomparativediscriminationfortextgeneration/fc8a1008-1a15-488f-9ac3-e5604afd7107_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4864e82c0f646df966d7c313bde06cf79140e7463c01de9e61ade6b20bd7dbc9 +size 448409 diff --git a/selfadversariallearningwithcomparativediscriminationfortextgeneration/full.md b/selfadversariallearningwithcomparativediscriminationfortextgeneration/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a0085adc460a74bf06d00dde0771e4cc82a18f07 --- /dev/null +++ b/selfadversariallearningwithcomparativediscriminationfortextgeneration/full.md @@ -0,0 +1,370 @@ +# SELF-ADVERSARIAL LEARNING 
WITH COMPARATIVE DISCRIMINATION FOR TEXT GENERATION + +Wangchunshu Zhou $^{1*}$ Tao Ge $^{2}$ Ke Xu $^{1}$ Furu Wei $^{2}$ Ming Zhou $^{2}$ + +1Beihang University, Beijing, China +$^{2}$ Microsoft Research Asia, Beijing, China + +zhouwangchunshu@buaa.edu.cn, kexu@nlsde.buaa.edu.cn + +{tage, fuwei, mingzhou}@microsoft.com + +# ABSTRACT + +Conventional Generative Adversarial Networks (GANs) for text generation tend to have issues of reward sparsity and mode collapse that affect the quality and diversity of generated samples. To address the issues, we propose a novel self-adversarial learning (SAL) paradigm for improving GANs' performance in text generation. In contrast to standard GANs that use a binary classifier as its discriminator to predict whether a sample is real or generated, SAL employs a comparative discriminator which is a pairwise classifier for comparing the text quality between a pair of samples. During training, SAL rewards the generator when its currently generated sentence is found to be better than its previously generated samples. This self-improvement reward mechanism allows the model to receive credits more easily and avoid collapsing towards the limited number of real samples, which not only helps alleviate the reward sparsity issue but also reduces the risk of mode collapse. Experiments on text generation benchmark datasets show that our proposed approach substantially improves both the quality and the diversity, and yields more stable performance compared to the previous GANs for text generation. + +# 1 INTRODUCTION + +Generative Adversarial Networks (Goodfellow et al., 2014) (GANs) have achieved tremendous success for image generation and received much attention in computer vision. 
For text generation, however, the performance of GANs is severely limited by reward sparsity and mode collapse: reward sparsity refers to the difficulty the generator has in receiving reward signals when its generated samples can hardly fool the discriminator, which is much easier to train; mode collapse refers to the phenomenon that the generator learns only limited patterns from the real data. As a result, both the quality and the diversity of generated text samples are limited.

To address the above issues, we propose a novel self-adversarial learning (SAL) paradigm for improving adversarial text generation. In contrast to standard GANs (Figure 1(a)) that use a binary classifier as the discriminator to predict whether a sample is real or generated, SAL employs a comparative discriminator, a pairwise classifier assessing whether the currently generated sample is better than its previously generated one, as shown in Figure 1(b). During training, SAL rewards the generator when its currently generated samples are found to be better than its previously generated samples. In the earlier training stage, when the quality of generated samples is far below that of the real data, this self-improvement reward mechanism makes it easier for the generator to receive non-sparse rewards with informative learning signals, effectively alleviating the reward sparsity issue; in the later training stage, SAL can prevent a sample from repeatedly receiving a high reward, since self-improvement on a popular mode becomes more and more difficult, and therefore helps the generator avoid collapsing toward the limited patterns of real data.

We comprehensively evaluate the proposed self-adversarial learning paradigm on both synthetic data and real data using the text generation benchmark platform (Zhu et al., 2018).
Compared to the previous approaches for adversarial text generation (Yu et al., 2017; Che et al., 2017; Lin et al., 2017), our + +![](images/6d32a57570de319c8ae6a495ff35d071d4c1d196a50d9d9f14d452e0f2ab7c65.jpg) +Figure 1: (a) Conventional adversarial learning that uses a binary real/fake classifier as its discriminator; (b): Self-adversarial learning that employs a comparative discriminator to compare the currently generated sample to its previously generated samples for obtaining rewards through self-improvement. + +approach shows a substantial improvement in terms of both the quality and the diversity of generated samples as well as better performance stability in adversarial learning. + +# 2 BACKGROUND: ADVERSARIAL TEXT GENERATION + +Adversarial text generation has drawn much attention in recent years due to its advantages (e.g., sequence-level guidance without the exposure bias issue (Bengio et al., 2015)) over maximum likelihood estimation (MLE) for natural language generation. It formulates the learning process as a minimax game between a generator $G_{\theta}$ parameterized by $\theta$ and a discriminator $D_{\phi}$ parameterized by $\phi$ : the discriminator is trained to distinguish between the samples drawn from the real data distribution $p_{data}$ and the samples generated by the generator; while the generator is trained to generate samples that can "fool" the discriminator. The adversarial learning objective of the generator and the discriminator can be formulated as: + +$$ +\min _ {\theta} \max _ {\phi} \mathbb {E} _ {\boldsymbol {x} \sim p _ {\mathrm {d a t a}}} [ \log D _ {\phi} (\boldsymbol {x}) ] + \mathbb {E} _ {\boldsymbol {z} \sim p _ {\boldsymbol {z}}} [ \log (1 - D _ {\phi} (G _ {\theta} (\boldsymbol {z}))) ] \tag {1} +$$ + +where $\pmb{x}$ is a sample from the real data, $G_{\theta}(z)$ is a sample generated by the generator with the initialization $z$ that is drawn from the noise distribution $p_z$ (e.g., standard normal distribution). 
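To make the objective concrete, here is a minimal NumPy sketch of the two loss terms in Eq. (1); the function names and the toy probability values are illustrative assumptions, not part of the paper.

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Negative of the discriminator's objective in Eq. (1):
    D maximizes E[log D(x)] + E[log(1 - D(G(z)))]."""
    return -(np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake)))

def generator_loss(d_fake):
    """The generator minimizes E[log(1 - D(G(z)))]; the more the
    generated samples fool D (d_fake -> 1), the lower this loss."""
    return np.mean(np.log(1.0 - d_fake))

# Toy check: a confident discriminator (high D(x), low D(G(z)))
# keeps its own loss small while giving the generator little signal.
d_real = np.array([0.9, 0.95])  # D(x) on real samples
d_fake = np.array([0.1, 0.05])  # D(G(z)) on generated samples
print(discriminator_loss(d_real, d_fake))  # small: D is winning
print(generator_loss(d_fake))              # near 0: weak signal for G
```

Note how, when D confidently rejects the generated samples, $\log(1 - D(G(z)))$ is nearly flat around zero, so the generator receives almost no gradient; this is precisely the reward sparsity problem described above.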

While GANs have shown some promising results (Yu et al., 2017; Guo et al., 2018), two fundamental issues impede their progress in text generation: (i) Reward sparsity, which is due to the fact that the discriminator tends to learn much better than the generator and thus easily recognizes generated samples as fake; in such cases, it is difficult for the generator to receive rewards. (ii) Mode collapse, which arises from the intrinsic nature of GANs and leads adversarial models to learn only limited patterns from the real samples. These two issues limit the ability of GANs to generate high-quality and diverse text samples and have not yet been well addressed.

# 3 SELF-ADVERSARIAL LEARNING

To address the aforementioned issues, we propose a novel self-adversarial learning (SAL) paradigm. Inspired by self-play (Silver et al., 2017; Rennie et al., 2017) in reinforcement learning, the core idea of SAL is to reward the generator if its currently generated sample is found to be better than its previously generated ones. Like AlphaGo (Silver et al., 2017), the generator in SAL strives to generate samples that are better than its previously generated ones in order to pass the "self-improvement" test by a comparative discriminator, a pairwise classifier trained to compare the quality of two samples, as Figure 1(b) shows.

Compared to conventional GANs (Figure 1(a)), SAL has the following advantages: First, in the earlier training stage when the quality of generated samples is far below the real data, the self-improvement

![](images/a60e340325d159d195165f267ef027d2db2de7ea037625b1fa3f65663da37e5d.jpg)
Figure 2: The training process of the comparative discriminator.

reward mechanism of SAL allows the generator to receive informative learning signals more easily, as it adapts the assessment of sample quality to the generator's current capability, making the generator less likely to suffer from reward sparsity. Second, in the later training stage when the quality of generated samples is high, SAL prevents a sample from repeatedly receiving a high reward, since passing the "self-improvement" test becomes more and more difficult, thus reducing the risk of the generator collapsing toward limited patterns. The self-improvement mechanism and the "≈" (tie) option in the comparative discriminator also provide a reasonable baseline, corresponding to cases where newly generated samples are found to be indistinguishable from previous ones, thus improving training stability. We provide a more detailed qualitative analysis of why the proposed self-adversarial learning paradigm alleviates these problems in the Appendix.

# 3.1 COMPARATIVE DISCRIMINATOR

As introduced above, the core component of SAL is the comparative discriminator, a pairwise classifier that compares the quality of two samples. It learns a total order of sample quality and encodes the inductive bias that one sample is better $(>)$, worse $(<)$, or indistinguishable $(\approx)$ in quality compared to the other. For a (text) sample, the comparative discriminator can offer more informative learning signals than the conventional binary (i.e., real/fake) classification-based discriminator, because the sample can receive multiple feedback signals by being compared with multiple other samples.

For training the comparative discriminator, we construct pairwise training examples from the real and generated samples, as Figure 2 shows. For a real sample $s_+$ and a generated sample $s_-$, we assign the label "better (>)" to the pair $(s_+, s_-)$ and "worse (<)" to $(s_-, s_+)$.
For two samples both drawn from the real data or both from the generated samples, we assign the label "indistinguishable (≈)" to such pairs (i.e., $(s_+^i, s_+^j)$ and $(s_-^i, s_-^j)$). For a training set with $n$ real samples and $n$ generated samples, the comparative discriminator can construct $\binom{2n}{2}$ pairwise training examples, which helps enhance its generalization ability.

Moreover, to improve the model's ability to distinguish between good and bad generated samples for self-play learning, we additionally select the samples generated during the later stage of MLE training as pseudo-real samples, and select those generated in the earlier epochs, when the generator has not fully converged, as fake sentences. We then pair the pseudo-real samples with the fake samples to construct training instances that supervise the model to compare their quality. In this way, the comparative discriminator is prevented from being taught to always recognize two generated samples as equally bad and assign zero reward to the generator. As a result, it becomes more sensitive to the quality difference within a pair of text samples and thus allows the generator to receive rewards more easily.
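The pair-construction scheme above can be sketched as follows (a minimal illustration; the function name and the use of `'~'` for the "≈" label are our own, not the authors' code):

```python
import itertools

def build_comparative_pairs(real_samples, generated_samples):
    """Construct pairwise training examples for the comparative
    discriminator: '>' means the first element is better, '<' means
    it is worse, '~' means the pair is indistinguishable."""
    pairs = []
    # real vs. generated: the real sample is "better", in both orderings
    for s_pos in real_samples:
        for s_neg in generated_samples:
            pairs.append((s_pos, s_neg, '>'))
            pairs.append((s_neg, s_pos, '<'))
    # pairs drawn from the same source are labeled indistinguishable
    for group in (real_samples, generated_samples):
        for s_i, s_j in itertools.combinations(group, 2):
            pairs.append((s_i, s_j, '~'))
    return pairs
```

With two real and two generated samples, this yields eight ordered cross-source pairs and two same-source "indistinguishable" pairs, illustrating how a small sample set produces many pairwise training examples.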

# 3.2 TRAINING

Before we formally introduce the training procedure of SAL, we first define the learning objectives of the comparative discriminator $D_{\phi}$ and the generator $G_{\theta}$ in SAL:

$$
L_{D} = -\mathbb{E}_{(\boldsymbol{x}_{1}, \boldsymbol{x}_{2}) \sim (\mathcal{M} \cup p_{\text{data}}(x))^{2}} \left[ \log D_{\phi}^{Q(\boldsymbol{x}_{1}, \boldsymbol{x}_{2})}(\boldsymbol{x}_{1}, \boldsymbol{x}_{2}) \right] \tag{2}
$$

$$
L_{G} = -\mathbb{E}_{(\boldsymbol{z}, \boldsymbol{x}_{r}) \sim (p_{z}(\boldsymbol{z}), \mathcal{M})} \left[ \sum_{q \in \{>, <, \approx\}} w_{q} \log D_{\phi}^{q}(G_{\theta}(\boldsymbol{z}), \boldsymbol{x}_{r}) \right] \tag{3}
$$

In Eq (2) and Eq (3), $\mathcal{M}$ is the set of samples previously generated by the generator, $Q(\pmb{x}_1,\pmb{x}_2)\in \{>, <, \approx\}$ is the true label for the pair $(\pmb{x}_1,\pmb{x}_2)$, and $D_{\phi}^{q}(\pmb{x}_{1},\pmb{x}_{2})$ is the probability that the comparative discriminator predicts $q$ ($q\in \{>, <, \approx\}$) for the pair $(\pmb{x}_1,\pmb{x}_2)$. $w_{q}$ is the reward weight for the case $q$, a hyperparameter of SAL. If the generator generates a sample $G_{\theta}(z)$ that is better ($>$) than its previously generated sample $\pmb{x}_r$, it receives a positive reward; if $G_{\theta}(z)$ is worse ($<$) than $\pmb{x}_r$, it receives a negative reward; and if the quality of $G_{\theta}(z)$ is classified as similar ($\approx$) to that of $\pmb{x}_r$, it receives zero credit. Therefore, we have $w_{(>)} > 0 = w_{(\approx)} > w_{(<)}$.

Since $L_{G}$ can be directly optimized only in standard continuous GAN training, we instead employ the policy gradient algorithm (Sutton et al., 2000) to train the generator, as in previous approaches to adversarial text generation.
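For a single pair, the losses in Eq (2) and Eq (3) reduce to (weighted) negative log-probabilities; a minimal sketch, where `pred` stands for the discriminator's predicted distribution over the three outcomes (names are ours, not the authors'):

```python
import math

def discriminator_loss(pred, label):
    """Eq (2) for a single pair: negative log-probability the
    comparative discriminator assigns to the true label."""
    return -math.log(pred[label])

def generator_loss(pred, weights):
    """Eq (3) for a single (generated, reference) pair: negative
    weighted sum of log-probabilities over the three outcomes."""
    return -sum(weights[q] * math.log(pred[q]) for q in ('>', '<', '~'))
```

In practice both quantities are averaged over mini-batches of pairs, matching the expectations in Eq (2) and Eq (3).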
For SAL, we define the reward for a generated sample $x_{g}$, compared with a reference sample $x_{r}$ (a sample previously generated by the generator), as the weighted reward based on the probability distribution predicted by the comparative discriminator:

$$
\gamma_{\phi}(\boldsymbol{x}_{g}, \boldsymbol{x}_{r}) = w_{(>)} D_{\phi}^{(>)}(\boldsymbol{x}_{g}, \boldsymbol{x}_{r}) + w_{(<)} D_{\phi}^{(<)}(\boldsymbol{x}_{g}, \boldsymbol{x}_{r}) + w_{(\approx)} D_{\phi}^{(\approx)}(\boldsymbol{x}_{g}, \boldsymbol{x}_{r}) \tag{4}
$$

In text generation, the generator $G_{\theta}$ obtains the reward only after a sample has been completely generated, which means no intermediate reward is gained. To alleviate this problem, following the practice in SeqGAN (Yu et al., 2017), we utilize the Monte Carlo rollout method to approximate intermediate rewards by sampling the unknown tokens after the generated prefix $Y_{1:t}$ with the generator policy $G_{\theta}$ until the sample is complete. Empirically, we found that the Monte Carlo rollout also helps reduce the variance introduced by the reference sample used for comparison. We calculate the expected reward as

$$
\mathcal{R}_{\theta, \phi}(s = Y_{1:t-1}, a = y_{t}) = \mathbb{E}_{(\boldsymbol{x}_{g}, \boldsymbol{x}_{r}) \sim (G_{\theta}(Y_{1:t-1}), \mathcal{M})} \left[ \gamma_{\phi}(\boldsymbol{x}_{g}, \boldsymbol{x}_{r}) \right] \tag{5}
$$

The objective of the generator is to generate a sequence that maximizes its expected final reward.
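The reward in Eq (4) is simply a weighted sum of the discriminator's predicted class probabilities; a minimal sketch (the default weight values are illustrative, chosen only to satisfy $w_{(>)} > 0 = w_{(\approx)} > w_{(<)}$):

```python
def comparative_reward(pred, w_better=1.0, w_worse=-1.0, w_tie=0.0):
    """Eq (4): reward for a (x_g, x_r) pair, weighting the probability
    mass the comparative discriminator puts on each outcome."""
    return w_better * pred['>'] + w_worse * pred['<'] + w_tie * pred['~']
```

A pair judged mostly "better" yields a positive reward, a mostly "worse" pair a negative one, and a confident tie yields roughly zero, which is the baseline behavior discussed above.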
With the likelihood ratio trick (Sutton et al., 2000), we can formulate the gradient of the objective function for the generator $G_{\theta}$ as:

$$
\nabla_{\theta} J(\theta) = \sum_{t=1}^{T} \mathbb{E}_{Y_{1:t-1} \sim G_{\theta}} \left[ \nabla_{\theta} \log G_{\theta}(y_{t} \mid Y_{1:t-1}) \cdot \mathcal{R}_{\theta, \phi}(s = Y_{1:t-1}, a = y_{t}) \right] \tag{6}
$$

To improve self-adversarial training, we borrow ideas from the field of deep reinforcement learning and propose two training techniques.

Scheduled rewarding Similar to the exploitation-exploration trade-off in reinforcement learning (Langford & Zhang, 2007), the positive reward assigned for generating better samples encourages exploration, while the penalty for generating worse samples makes the generator more conservative. Intuitively, in the earlier stage of self-adversarial learning, the generator should explore better policies by receiving higher rewards for relative progress; in the later stage, the generator should be more conservative, with larger penalties for worse samples to avoid performance degradation. We simply decrease $w_{(>)}$ and increase the magnitude of the penalty $w_{(<)}$ linearly with the training iteration, and refer to this technique as scheduled rewarding.
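Scheduled rewarding can be implemented as a linear interpolation of the two reward weights over training iterations; a sketch under assumed start/end values (the paper does not specify the exact schedule, so these numbers are illustrative):

```python
def scheduled_weights(step, total_steps,
                      w_better=(1.0, 0.5), w_worse=(-0.5, -1.0)):
    """Linearly anneal the reward weights with training progress:
    w_> shrinks while the penalty w_< grows in magnitude. The
    (start, end) values are illustrative, not the paper's."""
    frac = min(max(step / float(total_steps), 0.0), 1.0)

    def interp(start_end):
        start, end = start_end
        return start + frac * (end - start)

    return interp(w_better), interp(w_worse)
```

The resulting weights would be plugged into Eq (4) at each iteration, making the generator exploratory early on and conservative later.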

Algorithm 1 Self-Adversarial Learning With Comparative Discriminator
Require: Generator $G_{\theta}$; comparative discriminator $D_{\phi}$; samples of real sentences $S_{+}$; self-adversarial learning step $g$; discriminator step $k$; memory buffer $\mathcal{M}$ for previously generated samples
1: Pretrain $G_{\theta}$ using MLE on $S_{+}$
2: Generate samples with $G_{\theta}$ and store them into $\mathcal{M}$
3: repeat
4: for $k$ steps do
5: Collect a mini-batch of balanced sample pairs $(x_{1}, x_{2})$ from $\mathcal{M} \cup S_{+}$
6: Update $D_{\phi}$ via Eq (2)
7: end for
8: for $g$ steps do
9: Generate a mini-batch of samples $x_{g} \sim G_{\theta}$
10: Collect a mini-batch of reference samples $x_{r}$ from $\mathcal{M}$
11: Update $G_{\theta}$ via Eq (6)
12: end for
13: Update $\mathcal{M}$ with $G_{\theta}$
14: until Convergence

Memory replay Continuously comparing the generator against its most recent stage may suffer from correlation between the generated samples and the reference samples, which makes the training process unstable. Inspired by experience replay (Lillicrap et al., 2015), we construct a memory buffer that contains samples generated in the last $K$ training steps. Reference samples are drawn from this memory buffer rather than from the most recent stage of the generator, which empirically helps stabilize the training process.

The training process of SAL is summarized in Algorithm 1. Self-adversarial learning with the proposed comparative discriminator reaches a Nash equilibrium when the generator models the distribution of real samples perfectly. In this case, the comparative discriminator cannot successfully distinguish generated samples from real samples and tends to recognize two samples as "indistinguishable". The reward received by the generator is thus zero and training converges.
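The memory buffer $\mathcal{M}$ used for memory replay can be sketched as a bounded queue of recent batches (a minimal sketch; class and method names are ours, not the authors'):

```python
import collections
import random

class MemoryReplay:
    """Retains the batches generated in the last K training steps and
    serves reference samples drawn from them, so comparisons are not
    tied to the generator's most recent stage."""
    def __init__(self, k_steps):
        # deque with maxlen evicts the oldest batch automatically
        self._batches = collections.deque(maxlen=k_steps)

    def update(self, batch):
        # called once per training step with freshly generated samples
        self._batches.append(list(batch))

    def sample_references(self, n, rng=random):
        pool = [s for batch in self._batches for s in batch]
        return rng.sample(pool, min(n, len(pool)))
```

Calling `update` once per step and `sample_references` when computing Eq (4) decorrelates the reference samples from the newest generator outputs, as described above.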
However, how a non-Bernoulli GAN converges to such an equilibrium is still an open problem (Goodfellow et al., 2014; Goodfellow, 2014) and is beyond the scope of this work. + +# 4 EXPERIMENTS + +# 4.1 EXPERIMENTAL SETTING + +Following the experimental settings in previous work (Lin et al., 2017; Guo et al., 2018; Shi et al., 2018; Zhu et al., 2018; Nie et al., 2018), we evaluate our approach in both synthetic and real datasets based on Texygen (Zhu et al., 2018), which is a benchmark platform for evaluating adversarial text generation models. Table 1 presents the basic information of the datasets used for evaluation. + +Table 1: Description of the datasets used for evaluation + +
| | Synthetic | Image COCO | EMNLP2017 WMT NEWS |
| --- | --- | --- | --- |
| Category | simulated data | image description | news article |
| Vocabulary size | 5000 | 4682 | 5255 |
| Sequence length | 20/40 | <37 | <51 |
| Sentence number (training) | 10000 | 10000 | 270000 |
| Sentence number (test) | 10000 | 10000 | 10000 |

As in SeqGAN (Yu et al., 2017), our generator is a single-layer LSTM (Hochreiter & Schmidhuber, 1997) and our discriminator is based on TextCNN (Kim, 2014), except that it concatenates the feature representations of the two compared samples and outputs the probability of each comparative relation (i.e., $>, <, \approx$). We keep most of the hyperparameters the same as in SeqGAN except those introduced by our model (i.e., $w_{(>)}$, $w_{(<)}$, $w_{(\approx)}$), which are tuned on the synthetic experiment and kept the same for the real data experiments.

We evaluate the adversarial text generation models in terms of both quality and diversity. Following prior work, in the synthetic dataset we use the oracle LSTM to evaluate the negative log-likelihood

Table 2: Performance comparison of different models in synthetic tests where sequence length is set to 20 and 40 respectively. For all the metrics presented, the lower, the better.
| Method | $\mathrm{NLL}_{\text{oracle}}$ (20/40) | $\mathrm{NLL}_{\text{gen}}$ (20/40) | $\mathrm{NLL}_{\text{oracle}} + \mathrm{NLL}_{\text{gen}}$ (20/40) |
| --- | --- | --- | --- |
| SAL | 7.71±0.17 / 9.31±0.03 | 6.58±0.15 / 6.97±0.05 | 14.29±0.11 / 16.24±0.03 |
| SeqGAN | 8.63±0.19 / 9.63±0.04 | 6.61±0.22 / 6.98±0.08 | 15.00±0.03 / 16.35±0.02 |
| RankGAN | 8.42±0.31 / 9.52±0.11 | 7.14±0.34 / 7.05±0.12 | 15.01±0.02 / 16.37±0.02 |
| MaliGAN | 8.74±0.16 / 9.67±0.03 | 6.62±0.25 / 7.14±0.09 | 15.03±0.03 / 16.39±0.03 |
| MLE | 9.05±0.03 / 9.84±0.02 | 5.96±0.02 / 6.55±0.02 | 15.02±0.03 / 16.39±0.01 |

of our generated samples (denoted as $\mathrm{NLL}_{\text{oracle}}$) as the quality metric; as the diversity metric, we use the negative log-likelihood of the synthetic dataset (denoted as $\mathrm{NLL}_{\text{gen}}$) evaluated by the generator with the best quality (i.e., the best $\mathrm{NLL}_{\text{oracle}}$ score) during training. We also use the best $\mathrm{NLL}_{\text{oracle}} + \mathrm{NLL}_{\text{gen}}$ obtained during training to evaluate the quality-diversity trade-off.

For the real data experiments, we follow previous work and apply the commonly used BLEU scores (Papineni et al., 2002) (BLEU(F)) and the perplexity of generated samples evaluated by an open-sourced pretrained language model (Jozefowicz et al., 2016) as quality metrics, since $\mathrm{NLL}_{\text{oracle}}$ cannot be evaluated without an oracle language model. For evaluating diversity, we employ both backward BLEU (Shi et al., 2018) (BLEU(B)), which evaluates the test data using generated samples as references, and $\mathrm{NLL}_{\text{gen}}$. To evaluate the generated samples in more aspects, we calculate the Fréchet distance (Heusel et al., 2017) (FD) between generated samples and real data using sentence representations obtained by InferSent (Conneau et al., 2017), a pre-trained sentence embedding model.

We compare our approach to well-known adversarial text generation models including SeqGAN (Yu et al., 2017), RankGAN (Lin et al., 2017), and MaliGAN (Che et al., 2017). LeakGAN (Guo et al., 2018) and RelGAN (Nie et al., 2018) focus on architecture-level modifications, which are orthogonal to our work, and the proposed self-adversarial learning paradigm can be applied to them as well. We provide results for the combination of LeakGAN with SAL in the Appendix.

In the following sections, we denote our proposed self-adversarial learning approach as SAL.
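The direction swap between the forward quality metrics (generated samples scored against real references) and backward BLEU (test data scored against generated references) can be illustrated with a toy unigram-precision stand-in for BLEU (for illustration only; the experiments use proper BLEU, and all names here are ours):

```python
def unigram_precision(hyp, refs):
    """Toy stand-in for BLEU: fraction of hypothesis tokens that
    appear anywhere in the reference set."""
    vocab = {tok for ref in refs for tok in ref}
    return sum(tok in vocab for tok in hyp) / len(hyp) if hyp else 0.0

def forward_score(generated, test_data):
    # quality: score each generated sample against the real test data
    return sum(unigram_precision(g, test_data) for g in generated) / len(generated)

def backward_score(generated, test_data):
    # diversity: score the test data against the generated samples as references
    return sum(unigram_precision(t, generated) for t in test_data) / len(test_data)
```

A generator that copies one fluent sentence scores high on the forward metric but low on the backward metric, which is why the two directions together expose mode collapse.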
Since adversarial training is very sensitive to random initialization and suffers from high variance, we conduct five individual runs with different random seeds for each model and report the mean and the standard deviation of the obtained results.

# 4.2 EXPERIMENTAL RESULTS

# 4.2.1 RESULTS IN SYNTHETIC DATA

Table 2 shows the results on the synthetic dataset. SAL largely outperforms the previous GANs in all metrics for both sequence lengths 20 and 40. Although its $\mathrm{NLL}_{\text{gen}}$ is worse than MLE's, since MLE directly optimizes that metric, SAL yields a better quality-diversity trade-off than MLE training, which none of the previous GANs achieves: the $\mathrm{NLL}_{\text{oracle}} + \mathrm{NLL}_{\text{gen}}$ score of SAL is lower than that of MLE, while the other GANs only match MLE's sum score, indicating that they fail to improve the quality-diversity trade-off after pretraining. This demonstrates SAL's advantage in alleviating the mode collapse problem. We also find that the training of SAL is more stable than that of the other text GANs.

In addition, we find that SAL learns faster and better than the other GANs by comparing their $\mathrm{NLL}_{\text{oracle}}$ curves during training in Figure 3, which is consistent with our intuition that the self-improvement reward mechanism in SAL alleviates the reward sparsity issue in the earlier training stage and helps the generator learn better.

# 4.2.2 RESULTS IN REAL DATA

The results for the COCO image caption dataset are presented in Table 3, and the performance curve of the perplexity during training is shown in Figure 4. As on the synthetic data, we observe that SAL consistently yields better results in all metrics with stable performance (i.e., low variance) compared to the previous GANs.
According to Table 3 and Figure 4, SeqGAN and our SAL improve over MLE in the quality metrics (i.e., BLEU(F) and perplexity), while MaliGAN and RankGAN

![](images/549c270376ec49f5499a62eace8796a5ac9677ba4ad7fd02882136960edae49d.jpg)
Figure 3: The performance curves of $\mathrm{NLL}_{\text{oracle}}$ during training in the synthetic dataset.

![](images/89f90422dd336c55663b3d6cfae0092ab6d6ee68662b91b5b13fc14d8e757e3d.jpg)
Figure 4: The performance curves of perplexity during training in the Image COCO dataset.

Table 3: Performance comparison of different models in the COCO caption generation task. Metrics from top to bottom represent respectively the generation quality, the generation diversity, and the divergence between real data and generated sentences. For all the BLEU metrics, the higher, the better; for $\mathrm{NLL}_{gen}$ and FD, the lower, the better.
| Metrics | MLE | SeqGAN | MaliGAN | RankGAN | SAL |
| --- | --- | --- | --- | --- | --- |
| BLEU-2(F) | 0.730±0.01 | 0.748±0.03 | 0.733±0.03 | 0.727±0.04 | 0.785±0.02 |
| BLEU-3(F) | 0.494±0.01 | 0.514±0.04 | 0.497±0.04 | 0.491±0.04 | 0.581±0.03 |
| BLEU-4(F) | 0.303±0.01 | 0.307±0.02 | 0.295±0.03 | 0.291±0.03 | 0.362±0.02 |
| BLEU-5(F) | 0.187±0.01 | 0.187±0.02 | 0.178±0.02 | 0.175±0.03 | 0.227±0.02 |
| Perplexity | 338.4±7.6 | 307.2±14.9 | 343.8±21.3 | 391.2±35.1 | 231.3±10.8 |
| BLEU-2(B) | 0.759±0.02 | 0.694±0.03 | 0.676±0.04 | 0.683±0.04 | 0.724±0.02 |
| BLEU-3(B) | 0.531±0.02 | 0.472±0.03 | 0.443±0.03 | 0.449±0.04 | 0.503±0.03 |
| BLEU-4(B) | 0.332±0.02 | 0.285±0.02 | 0.279±0.02 | 0.277±0.02 | 0.313±0.02 |
| BLEU-5(B) | 0.209±0.02 | 0.186±0.02 | 0.178±0.02 | 0.182±0.02 | 0.198±0.02 |
| $\mathrm{NLL}_{\text{gen}}$ | 0.721±0.02 | 1.035±0.02 | 1.052±0.02 | 1.145±0.02 | 0.873±0.02 |
| FD | 0.043±0.009 | 0.065±0.018 | 0.076±0.021 | 0.083±0.027 | 0.051±0.014 |
+ +Table 4: Performance comparison of different models in the EMNLP2017 WMT news generation task. Metrics from top to bottom represent respectively the generation quality, the generation diversity, and the divergence between real and generated data. For all the BLEU metrics, the higher, the better. For $\mathrm{NLL}_{\mathrm{gen}}$ and FD, the lower, the better. + +
| Metrics | MLE | SeqGAN | MaliGAN | RankGAN | SAL |
| --- | --- | --- | --- | --- | --- |
| BLEU-2(F) | 0.769±0.02 | 0.761±0.03 | 0.764±0.03 | 0.736±0.02 | 0.788±0.02 |
| BLEU-3(F) | 0.475±0.01 | 0.463±0.03 | 0.468±0.03 | 0.441±0.04 | 0.523±0.02 |
| BLEU-4(F) | 0.243±0.02 | 0.228±0.03 | 0.231±0.03 | 0.204±0.02 | 0.281±0.02 |
| BLEU-5(F) | 0.124±0.02 | 0.115±0.02 | 0.113±0.03 | 0.095±0.02 | 0.149±0.02 |
| Perplexity | 702.6±18.6 | 743.8±34.2 | 825.1±44.3 | 975.4±65.1 | 612.8±22.5 |
| BLEU-2(B) | 0.741±0.02 | 0.693±0.03 | 0.684±0.03 | 0.671±0.03 | 0.726±0.02 |
| BLEU-3(B) | 0.476±0.01 | 0.413±0.03 | 0.391±0.04 | 0.373±0.05 | 0.431±0.03 |
| BLEU-4(B) | 0.245±0.01 | 0.216±0.03 | 0.197±0.03 | 0.191±0.03 | 0.232±0.02 |
| BLEU-5(B) | 0.129±0.01 | 0.112±0.02 | 0.094±0.02 | 0.096±0.03 | 0.123±0.02 |
| $\mathrm{NLL}_{\text{gen}}$ | 2.386±0.01 | 2.732±0.04 | 2.862±0.06 | 3.157±0.11 | 2.578±0.04 |
| FD | 0.079±0.012 | 0.172±0.032 | 0.194±0.043 | 0.219±0.052 | 0.137±0.023 |

perform comparably to MLE. However, on the WMT NEWS dataset, where text samples tend to be longer, Table 4 shows a different picture: all the previous GANs fail to improve over MLE. This is because long text generation keeps the discrepancy between generated samples and real samples very large even after MLE pre-training. As a result, the previous GANs fail to stably enhance sample quality due to the reward sparsity issue. In contrast, our SAL consistently performs well and largely improves the quality metrics over MLE. In addition, we observe that the diversity of samples generated by SAL is much better than that of the previous GANs and only marginally worse than MLE's, indicating that SAL is helpful in reducing the risk of mode collapse.

In addition to the automatic metrics, we also conduct human evaluation of the generated samples. Following previous work (Nie et al., 2018), we randomly sample 20 sentences from each model and pool them

Table 5: Human evaluation results of different models in both datasets. Scores are between 1-5; a higher score indicates better quality.
| Dataset | MLE | SeqGAN | MaliGAN | RankGAN | SAL |
| --- | --- | --- | --- | --- | --- |
| COCO | 2.96±0.51 | 3.26±0.56 | 3.14±0.57 | 2.91±0.62 | 3.84±0.56 (p≤0.01) |
| WMT NEWS | 2.35±0.86 | 2.19±0.88 | 2.24±0.87 | 2.05±0.91 | 2.65±0.89 (p≤0.01) |
+ +Table 6: Results of the ablation tests in the synthetic data and the COCO dataset. + +
| Dataset | SAL | CAL | w/o comparative | w/o "≈" | w/o scheduled rewarding | w/o memory replay |
| --- | --- | --- | --- | --- | --- | --- |
| Synthetic (NLL) | 14.29±0.11 | 14.65±0.16 | 15.01±0.04 | 14.85±0.16 | 14.46±0.18 | 14.41±0.17 |
| COCO (Perplexity) | 231.3±10.8 | 276.7±12.5 | 341.8±13.4 | 291.5±16.3 | 248.3±14.2 | 254.6±13.7 |

after anonymizing the models' identities. We invite 20 graduate students with good English proficiency to score each sentence on a scale of 1-5 in terms of quality. According to Table 5, our SAL is consistently well rated in the human evaluation and outperforms all the baselines on both the COCO and WMT NEWS datasets. We perform the Wilcoxon rank-sum test on the human evaluation results and find that samples generated by the baseline models can be distinguished from samples generated by SAL with $p < 0.01$. Details of the human evaluation procedure and samples generated by the compared methods on the two real-world datasets are presented in the Appendix.

# 4.3 DISCUSSION

To better understand SAL, we perform multiple ablation tests on both the synthetic and the real data. We employ the $\mathrm{NLL}_{\text{oracle}} + \mathrm{NLL}_{\text{gen}}$ score with sequence length 20 as the evaluation metric for the synthetic data, denoted as NLL. For the real data, we use the perplexity of generated samples trained on the COCO dataset as the evaluation metric. We compare SAL with the following reduced models:

- CAL: Replacing the comparison between generated samples (i.e., self-play) with the comparison between real and generated samples.
- w/o comparative: Using the binary discrimination scores of other generated samples as the baseline for the policy gradient algorithm, which can be considered a combination of self-critical training (Rennie et al., 2017) with RL-based text GANs.
- w/o "≈": Replacing the three-class comparative discriminator with a binary comparative discriminator by removing the "≈" class.
- w/o scheduled rewarding and w/o memory replay

The results of the ablation tests are shown in Table 6. The improvement of SAL over CAL confirms the importance of the self-play paradigm in SAL.
It is notable that the proposed comparative discriminator alone (i.e., CAL) already yields good performance, demonstrating the effectiveness of learning by comparison. When the comparative discriminator is replaced with the naive combination of a self-critic baseline and text GANs, performance decreases substantially, because the reward sparsity issue is intensified when subtracting two already sparse rewards; this motivates the proposed pairwise comparative discriminator, which makes self-comparison possible.

In addition, we find that the "≈" option plays a critical role in improving the results: without it, performance degrades significantly, since this option makes the comparison task less trivial and provides a baseline for the policy gradient algorithm. Moreover, the training techniques borrowed from deep reinforcement learning (i.e., scheduled rewarding and memory replay) also prove useful, though not as important as the core components (i.e., self-play and the comparative discriminator).

# 5 RELATED WORK

Many variants of GANs (including TextGAN (Zhang et al., 2017), GSGAN (Kusner & Hernández-Lobato, 2016), SeqGAN (Yu et al., 2017), MaliGAN (Che et al., 2017), RankGAN (Lin et al., 2017), FMGAN (Chen et al., 2018), LeakGAN (Guo et al., 2018), and RelGAN (Nie et al., 2018)) have been proposed for text generation as adversarial training has received increasing attention in recent years. Typically, they address the non-differentiability issue through continuous approximation or reinforcement learning. These approaches introduce several different architectures and optimization objectives for both the generator and the discriminator in adversarial text generation. Among the previous studies on adversarial text generation, the work most related to ours is RankGAN (Lin et al., 2017), which proposes a ranker to replace the conventional binary classifier as the discriminator, allowing the discrimination process to involve richer information.
Another work with an idea similar to ours is the relativistic discriminator (Jolicoeur-Martineau, 2018) (RGAN). It compares the binary scores assigned to generated and real samples by subtraction, using the difference as the learning signal to implicitly represent the inductive bias that half of the samples received by the discriminator are fake. In contrast, our comparative discriminator directly encodes this inductive bias and assesses generated sentences by comparison with a pairwise classifier, which provides more informative learning signals than subtraction in RGAN (Jolicoeur-Martineau, 2018) or normalized feature similarity in RankGAN (Lin et al., 2017). Our work is also related to the concurrent work (Zhou & Xu, 2020) that learns a comparative evaluator to evaluate open-domain natural language generation models.

# 6 CONCLUSION AND FUTURE WORK

We present a self-adversarial learning (SAL) paradigm for adversarial text generation. SAL rewards the generator when the comparative discriminator finds that the generator has become better than before. Through this self-improvement reward mechanism, the problems of reward sparsity and mode collapse are alleviated and the training of text GANs is more stable, resulting in better performance on the text generation benchmarks in terms of quality, diversity, and variance. In the future, we plan to generalize our approach to other domains and modalities to explore the potential of SAL for adversarial learning. Generated samples are presented in the Appendix together with other details, including the human evaluation procedure and a qualitative analysis of the proposed SAL.

# REFERENCES

Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 1171-1179, 2015.
Tong Che, Yanran Li, Ruixiang Zhang, R Devon Hjelm, Wenjie Li, Yangqiu Song, and Yoshua Bengio.
Maximum-likelihood augmented discrete generative adversarial networks. arXiv preprint arXiv:1702.07983, 2017. +Liqun Chen, Shuyang Dai, Chenyang Tao, Haichao Zhang, Zhe Gan, Dinghan Shen, Yizhe Zhang, Guoyin Wang, Ruiyi Zhang, and Lawrence Carin. Adversarial text generation via feature-mover's distance. In Advances in Neural Information Processing Systems, pp. 4666-4677, 2018. +Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. Supervised learning of universal sentence representations from natural language inference data. arXiv preprint arXiv:1705.02364, 2017. +Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672-2680, 2014. +Ian J Goodfellow. On distinguishability criteria for estimating generative models. arXiv preprint arXiv:1412.6515, 2014. +Jiaxian Guo, Sidi Lu, Han Cai, Weinan Zhang, Yong Yu, and Jun Wang. Long text generation via adversarial training with leaked information. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. +Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626-6637, 2017. + +Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735-1780, 1997. +Alexia Jolicoeur-Martineau. The relativistic discriminator: a key element missing from standard gan. arXiv preprint arXiv:1807.00734, 2018. +Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016. +Yoon Kim. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882, 2014. +Matt J Kusner and José Miguel Hernández-Lobato. 
Gans for sequences of discrete elements with the gumbel-softmax distribution. arXiv preprint arXiv:1611.04051, 2016. +John Langford and Tong Zhang. The epoch-greedy algorithm for contextual multi-armed bandits. In Proceedings of the 20th International Conference on Neural Information Processing Systems, pp. 817-824. Citeseer, 2007. +Jiwei Li, Will Monroe, Tianlin Shi, Sebastien Jean, Alan Ritter, and Dan Jurafsky. Adversarial learning for neural dialogue generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 2157-2169, 2017. +Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015. +Kevin Lin, Dianqi Li, Xiaodong He, Zhengyou Zhang, and Ming-Ting Sun. Adversarial ranking for language generation. In Advances in Neural Information Processing Systems, pp. 3155-3165, 2017. +Weili Nie, Nina Narodytska, and Ankit Patel. Relgan: Relational generative adversarial networks for text generation. 2018. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pp. 311-318. Association for Computational Linguistics, 2002. +Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. Self-critical sequence training for image captioning. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jul 2017. doi: 10.1109/cvpr.2017.131. URL http://dx.doi.org/10.1109/cvpr.2017.131. +Zhan Shi, Xinchi Chen, Xipeng Qiu, and Xuanjing Huang. Towards diverse text generation with inverse reinforcement learning. arXiv preprint arXiv:1804.11258, 2018. 

David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. Nature, 550(7676):354, 2017.
Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems, pp. 1057-1063, 2000.
Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. Seqgan: Sequence generative adversarial nets with policy gradient. In Thirty-First AAAI Conference on Artificial Intelligence, 2017.
Yizhe Zhang, Zhe Gan, Kai Fan, Zhi Chen, Ricardo Henao, Dinghan Shen, and Lawrence Carin. Adversarial feature matching for text generation. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 4006-4015. JMLR.org, 2017.
Wangchunshu Zhou and Ke Xu. Learning to compare for better training and evaluation of open domain text generation models. In Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020.
Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. Texygen: A benchmarking platform for text generation models. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pp. 1097-1100. ACM, 2018.

# A GENERATED SAMPLES

We present sentences generated by our proposed model and the compared models to provide a qualitative evaluation of the different adversarial text generation models. From the presented samples, we can observe that the samples generated by MLE training are less realistic than the others. SeqGAN yields slightly better sample quality, but the loss of diversity is observable even within 15 randomly sampled sentences. Adversarial training with the proposed comparative discriminator, when trained by comparing with real samples (i.e., CAL), yields better quality but still lacks diversity.
Finally, with the proposed self-adversarial learning paradigm, both quality and diversity of generated samples are improved. + +# A.1 GENERATED SAMPLES IN IMAGE COCO DATASET + +Table 7: Samples generated by SAL in Image COCO dataset + +
a picture of a person's umbrella in a cell phone .
a man stands in a green field .
a young boy riding a truck .
a man on a motorcycle is flying on a grassy field .
a girl on a motorcycle parked on a city street .
a motorcycle parked in a city street .
a group of bikers riding bikes on a city street .
a kitchen with a cat on the hood and a street .
a bathroom containing a toilet and a sink .
a young woman in a kitchen with a smiley face .
a jet plane on the side of a street .
a dish is sitting on a sidewalk next to a baby giraffe .
a dog on a large green bike parked outside of the motor bike .
a person on a Kawasaki bike on a race track .
a commercial aircraft is parked in front of a kitchen .
+ +Table 8: Samples generated by CAL in Image COCO dataset + +
a man is on a towel on a table outside of a real kitchen .
a group of lambs at a tall building .
a young boy riding a truck .
a man on a motorcycle is flying on a grassy field .
a man with a computer desk next to a white car .
a cat is on the walls of a cat .
a plane on a runway with a plane .
an elegant , dilapidated plane are standing in front of a parking bag .
the woman is riding a bike on their way .
a man wearing an old bathroom with a banana .
a plane is taking off from the ground .
a man holding a man in front of herself .
a woman is walking across the road .
a kitchen with an island in green tiles .
a clean kitchen with two small appliances .
+ +Table 9: Samples generated by SeqGAN in Image COCO dataset + +
a large image of a herd of racing train .
man and woman on horse .
a plane on a runway with a plane .
a man preparing a table with wood lid .
a view , tiled floors and a man prepares food .
a man wearing an old bathroom with a banana .
a man is is with a camera .
two people are parked on a street .
a white and white black kitten eating on a table .
a toilet is lit on the walls .
a kitchen is taking off from a window .
a man is wearing glasses wearing scarf .
a kitchen with graffiti hanging off from an open plain .
two women playing with the orange .
a kitchen with an island in a clear glass .
+ +Table 10: Samples generated by MLE in Image COCO dataset + +
a jet airplane flies flying through front from an airplane .
a furry tub and overhead pot .
a man in a kitchen filled with dark lights green side , ..
a cross baby field dressed making cardboard
a bathroom with a small tub and oven .
a man above a bathroom with an oven room .
a jet airliner flying through the sky .
a kitchen with a dishwasher , and plenty of pots , pans .
a person holding onto two red era arena sits on the street .
a bathroom with a toilet and a bath tub .
a cat perched on the phone and a baseball cap .
the view of the street filled with really parked at the gates on the road .
a large hairy dog on a high bike with a cake .
a man is riding a white back bench .
a narrow bed and white spotted dark tiled walls .
+ +# A.2 GENERATED SAMPLES IN EMNLP2017 WMT DATASET + +Table 11: Samples generated by SAL in EMNLP2017 WMT dataset + +
(1) it’s likely to be egyptian and many of the canadian refugees, but for a decade.
(2) the ministry spokesperson also said it now significant connected to the mountain.
(3) it is the time they can more competitive, where we have another $ 99. 100 per cent, and completely on the alternative, and that’s being affected.
(4) we expect $ 200 and 0. 3 percent for all you form other, and, which then well, it’s done.
(5) so we wouldn’t feel very large in the game, but you fail to fund, and and the paper that’s like its start.
(6) other countries made a playoff cut with pages by mrs. trump’s eighth consecutive season as a president.
+ +Table 12: Samples generated by CAL in EMNLP2017 WMT dataset + +
(1) i didn’t put relatively quiet, we have, ’ his work right in the particular heat rate, take steps traditionally clean.
(2) why the u . s . then the table is our cabinet to do getting an vital company for the correct review.
(3) those had trained for that, but no thin percentage of the nhs about being warned about the palestinian election before obama is not connected in israel.
(4) in course , voters - obama said : “ torture is the outcome , the most powerful trade - popularity is happening in it as a success .
(5) “ in 2012 , it is nice to remain - no trump actor established this night - scoring three films .
(6) we kind of not listen to knowing my most one , only , for a really good vote , and where things fun , you know .
+ +Table 13: Samples generated by SeqGAN in EMNLP2017 WMT dataset + +
(1) his missed 4 , 000 the first 95 really 69 - year - olds .
(2) but just things , you want to thank it as my playing side has begun meeting with “ and “ the score had to train up , so he was tied for 11 years .
(3) and when he got back doing fresh ties with his election , he will now step in january , back.
(4) when you ’ t know if i saw her task to find himself more responsibility ago .
(5) his hold over - up to a nine hike in 2015 , 13 percent of recently under suspects dead day , 24 , and to the city .
(6) “ i look up on by the city ’ s vehicle on the day in a meeting in november .
+ +Table 14: Samples generated by MLE in EMNLP2017 WMT dataset + +
(1) you know that that is great for our ability to make thinking about how you know and you?
(2) when it ’ s a real thing possible , is if you the first time in a time here and get .
(3) u . s , now government spending at the second half of four years , a country where the law will join the region to leave japan in germany .
(4) deputy president , the issue of government and geneva probe threats and not - backed trump , but well - changing violence for their islamic state militants were innocent people .
(5) he suggested in a presidential primary source and comment on its size following protests conducted by 18 , some in 2012 will be looked at tech energy hub .
(6) “it ’ s growing heavy hard , ” mr . romney said , he says matters that can ’ t again become the asian player .
+ +# B CASE STUDY: WHY IT WORKS + +In this section, we present several qualitative case-study examples to illustrate why comparative discrimination and self-adversarial learning help mitigate the problems of reward sparsity and mode collapse. We extract a typical sentence generated during the initial stage of adversarial learning: "a man holding a suitcase holding a phones." This sentence is of limited quality and is easily recognized, with high confidence, by the binary discriminator in SeqGAN; this makes the credit received by the generator very sparse and makes training difficult. Comparative adversarial learning (CAL), in which the comparative discriminator assesses the quality of this sample by comparing it with a real sample, helps because comparative discrimination has three categories and is therefore less trivial. The improvement is limited, however, because the discrepancy between generated and real samples is fairly large. With the proposed self-adversarial learning paradigm, in contrast, the comparative discriminator assesses this sentence by comparing it with a previously generated sentence that is also of poor quality. Such self-improvement is easier for the comparative discriminator to recognize, so this sample receives good rewards. + +As the comparative discriminator has to learn a total order of sample quality, which is more challenging than standard binary discrimination, the chance that the comparative discriminator becomes over-trained is reduced. This makes it easier for the model to achieve self-improvement and thus helps alleviate the reward sparsity problem. + +We also extract a sentence generated in the late stages that is fairly good and fools the binary discriminator: "a woman sitting on a bench on a park." In standard adversarial text generation models, a sentence like this would keep receiving large rewards and result in mode collapse.
In the self-adversarial learning paradigm, this sentence is not much better than the other sentences generated by the generator itself, so its reward is limited, which reduces the risk of mode collapse. + +Table 15: Case study of comparative discrimination and self-adversarial learning. + +
| Generated sentence | Reference sentence | Reward |
| --- | --- | --- |
| a man holding a suitcase holding a phones. | - (SeqGAN) | 0.018 |
| a man holding a suitcase holding a phones. | a student walks in the rain with a green umbrella. (Real) | 0.051 |
| a man holding a suitcase holding a phones. | a men ’s kitchen and a cow. (Self) | 0.561 |
| a woman sitting on a bench on a park. | - (SeqGAN) | 0.825 |
| a woman sitting on a bench on a park. | a young man rides his bicycle on top of a cement bench. (Real) | 0.438 |
| a woman sitting on a bench on a park. | a man sitting on a table watching a television. (Self) | 0.086 |
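The reward pattern in Table 15 can be sketched as a weighted sum over the comparator's three-way output. This is an illustrative sketch, not the released implementation: the weights follow the values reported in Appendix C ($w_0 : w_2 = 1 : -0.1$, $w_1$ fixed to 0), while the probability values below are invented for illustration.

```python
# Sketch of the three-way credit assignment discussed above. The comparative
# discriminator outputs probabilities that the generated sample is better than,
# indistinguishable from, or worse than a reference sample; the reward is a
# weighted sum of these probabilities. Weights follow Appendix C
# (w0 : w2 = 1 : -0.1, w1 = 0); the probabilities below are illustrative.

def comparative_reward(p_better, p_tie, p_worse, w=(1.0, 0.0, -0.1)):
    """Weighted credit for one generated sample vs. one reference sample."""
    assert abs(p_better + p_tie + p_worse - 1.0) < 1e-6
    return w[0] * p_better + w[1] * p_tie + w[2] * p_worse

# A poor early sample compared against an equally poor self-generated reference
# can still earn credit whenever the comparator sees some improvement:
r_self = comparative_reward(0.5, 0.3, 0.2)    # vs. a previous own sample
# The same sample compared against a real sentence is judged mostly "worse":
r_real = comparative_reward(0.05, 0.1, 0.85)  # vs. a real sample
print(r_self, r_real)  # self-comparison yields the larger reward
```

The asymmetry between the two calls mirrors the case study: rewards stay informative when the reference is another self-generated sample.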
+ +# B.1 ABLATED MODELS + +For the ablated model variants, SAL without self-play and SAL without the comparative discriminator are trained with the following algorithms. Specifically, the difference between SAL and CAL is that the reference sample compared with the currently generated sample is a real sample instead of a previously generated one. For the variant without the comparative discriminator, we employ a binary discriminator trained in the same way as in the vanilla GAN. As for the reward of generating $\pmb{x}_g$ , we first sample a previously generated sample $\pmb{x}_r$ as a reference and calculate the reward as $D(\pmb{x}_g) - D(\pmb{x}_r)$ . + +Algorithm 2 Self-Adversarial Learning without self-play (i.e. CAL) +Require: Generator $G_{\theta}$ ; comparative discriminator $\bar{D}_{\phi}$ ; samples of real sentences $S_{+}$ ; self-adversarial learning step $g$ ; discriminator step $k$ ; memory buffer $\mathcal{M}$ for the previously generated samples +1: Pretrain $G_{\theta}$ using MLE on $S_{+}$ +2: Generate samples with $G_{\theta}$ and store them into $\mathcal{M}$ +3: repeat +4: for $k$ steps do +5: Collect a mini-batch of balanced sample pairs $(x_{1},x_{2})$ from $\mathcal{M}\cup S_{+}$ +6: Update $D_{\phi}$ via Eq (2) +7: end for +8: for $g$ steps do +9: Generate a mini-batch of samples $x_{g}\sim G_{\theta}$ +10: Collect a mini-batch of reference samples $x_{r}$ from $S_{+}$ +11: Update $G_{\theta}$ via Eq (6) +12: end for +13: Update $\mathcal{M}$ with $G_{\theta}$ +14: until Convergence + +Algorithm 3 Self-Adversarial Learning without comparative discriminator +Require: Generator $G_{\theta}$ ; binary discriminator $D_{\phi}$ ; samples of real sentences $S_{+}$ ; self-adversarial learning step $g$ ; discriminator step $k$ ; memory buffer $\mathcal{M}$ for the previously generated samples +1: Pretrain $G_{\theta}$ using MLE on $S_{+}$ +2: Generate samples with $G_{\theta}$ and store them into $\mathcal{M}$ +3: repeat +4: for $k$ steps do +5: Collect a mini-batch of generated samples from $\mathcal{M}$ +6: Update $D_{\phi}$ with the conventional GAN discriminator loss +7: end for +8: for $g$ steps do +9: Generate a mini-batch of samples $\pmb {x}_g\sim G_\theta$ +10: Collect a mini-batch of reference samples $\pmb{x}_r$ from $\mathcal{M}$ +11: Update $G_{\theta}$ via Eq (6) +12: end for +13: Update $\mathcal{M}$ by $G_{\theta}$ , with the reward calculated as $D(x_{g}) - D(x_{r})$ +14: until Convergence + +# C TRAINING AND EVALUATION DETAILS + +# C.1 MODEL DETAILS + +We follow most of the hyperparameters used in the benchmark platform Texygen Zhu et al. (2018). Specifically, the generator is a one-layer LSTM with embedding size and hidden size 32. The discriminator is implemented as a TextCNN with filter sizes of [2,3] and filter numbers of [100,200]. The proposed self-adversarial learning paradigm introduces relative weights for credit assignment when a generated sample is found to be better than, indistinguishable from, or worse than another sample generated by the generator itself. We tuned these weights based on performance in the synthetic experiment and set $w_0: w_2 = 1: -0.1$ . + +# C.2 CHOICE & EXPLANATION OF METRICS + +Note that many previous works use self-BLEU Zhu et al. (2018) as a diversity metric. However, we find that there are problems in the official implementation of the self-BLEU metric: only in the first evaluation do the reference and hypothesis come from the same "test data" (i.e. the set of generated sentences). After that, the hypothesis keeps being updated but the reference remains unchanged (due to "is-first=False"), which means the hypothesis and reference no longer come from the same "test data", and thus the scores obtained under this implementation are not self-BLEU scores.
To this end, we modified the implementation to make sure that the hypothesis and reference are always from the same "test data" (by simply removing the variables "self.reference" and "self.is-first") and found that the self-BLEU (2-5) scores are always 1 when evaluating all the models. This problem is also discussed in the OpenReview thread of the RelGAN paper Nie et al. (2018). + +Based on this consideration, we decided to employ backward-BLEU, which is introduced in Shi et al. (2018) and can also evaluate the diversity of generated samples. The forward and backward BLEU resemble the precision and recall of generated samples with respect to the test data, measuring the generation quality and the generation diversity respectively. + +For the BLEU metric, as there is no sentence-level alignment for unconditional generation, BLEU is evaluated by treating the entire test set, which contains 10000 sentences, as a single reference. We then generate the same number of sentences as the prediction and compute the n-gram overlap between the reference and the prediction. Following previous works, we did not apply the brevity penalty, but we found that the number of tokens generated is roughly the same across the compared models. + +We briefly explain why $\mathrm{NLL}_{gen}$ is able to measure the diversity of the generator: $\mathrm{NLL}_{gen}$ measures the negative log-likelihood of the synthetic dataset evaluated by the generator. Intuitively, if the generator is diverse and captures more patterns in the synthetic dataset, the $\mathrm{NLL}_{gen}$ score will be lower. In contrast, if the generator suffers from severe mode collapse and is of low diversity, the $\mathrm{NLL}_{gen}$ will be higher, as the generator fails to cover the diverse patterns in the training data.
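A minimal sketch of the forward/backward BLEU protocol described above, under simplifying assumptions: this is not the Texygen implementation; the opposite corpus is pooled into a single reference set of n-grams, precision is averaged geometrically over 1- and 2-grams only, the brevity penalty is omitted as stated above, and the toy corpora are invented.

```python
# Toy forward/backward BLEU: forward scores generated sentences against the
# test data (quality); backward scores test sentences against the generated
# data (diversity). Pooled single reference, n up to 2, no brevity penalty.
import math

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def pooled_bleu(sentence, pool, max_n=2):
    """Geometric mean of n-gram precisions against a pooled reference corpus."""
    tokens = sentence.split()
    precisions = []
    for n in range(1, max_n + 1):
        ref = set(ng for s in pool for ng in ngrams(s.split(), n))
        hyp = ngrams(tokens, n)
        if not hyp:
            return 0.0
        precisions.append(sum(ng in ref for ng in hyp) / len(hyp))
    if min(precisions) == 0:
        return 0.0
    return math.exp(sum(math.log(p) for p in precisions) / len(precisions))

def corpus_score(hypotheses, references, max_n=2):
    return sum(pooled_bleu(h, references, max_n) for h in hypotheses) / len(hypotheses)

test_set = ["a cat sits on the mat", "a dog runs in the park"]
generated = ["a cat sits on the mat", "a cat sits on the mat"]  # good quality, no diversity
forward = corpus_score(generated, test_set)   # quality: generated vs. test data
backward = corpus_score(test_set, generated)  # diversity: test data vs. generated
print(forward, backward)  # forward is high, backward is penalized
```

The collapsed generator scores a perfect forward BLEU but a low backward BLEU, which is exactly the precision/recall asymmetry the two metrics are meant to capture.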
+ +As for the metric $\mathrm{NLL}_{\text{gen}} + \mathrm{NLL}_{\text{oracle}}$ , our motivation is that $\mathrm{NLL}_{\text{oracle}}$ measures the best quality of the generator throughout training, while $\mathrm{NLL}_{\text{gen}}$ measures the best diversity attained by the generator during training. However, the best quality and diversity are generally not achieved at the same time, as GAN training generally sacrifices diversity for better quality. Therefore, we report $\mathrm{NLL}_{\text{gen}} + \mathrm{NLL}_{\text{oracle}}$ , which, as previous work has demonstrated, measures the quality-diversity trade-off, as an additional reference. + +# C.3 HYPERPARAMETERS + +We follow most of the hyperparameters used in the benchmark platform Texygen Zhu et al. (2018). Specifically, we choose the batch size to be 64, the dropout keep probability to be 0.75, and the L2 regularization to be 0.2. We pretrain all models for 120 epochs and fine-tune them until convergence. + +The proposed self-adversarial learning paradigm introduces relative weights for credit assignment when a generated sample is found to be better than, indistinguishable from, or worse than another sample generated by the generator itself. We tuned them based on performance in the synthetic experiment, found $w_{0}:w_{2} = 1: - 0.1$ to be a good choice for the initial weights, and fixed $w_{1}$ to 0. We empirically find that the performance of SAL is not very sensitive to the choice of reward weights as long as the absolute value of $w_{1}$ is sufficiently larger than that of $w_{2}$ , which guarantees training stability. + +# C.4 HUMAN EVALUATION + +Following the human evaluation setting in RelGAN Nie et al. (2018), the text quality evaluation is based on grammatical correctness and meaningfulness (i.e. whether a sentence makes sense); text formatting problems (e.g., capitalization, punctuation, spelling errors, extra spaces between words and punctuation) are ignored.
Detailed criteria are provided as follows: + +Table 16: The human evaluation scale from 1 to 5 with corresponding criteria and example sentences. + +
| Scale | Criterion & Example |
| --- | --- |
| 5-Excellent | It is grammatically correct and makes sense. For example, “a man standing on a skateboard in the middle of the street.” |
| 4-Good | It has some small grammatical errors and mostly makes sense. For example, “two women is in a cafe look outside.” |
| 3-Fair | It has major grammatical errors but the whole still conveys some meaning. For example, “a man riding on on motor scooter and window.” |
| 2-Poor | It has severe grammatical errors and the whole doesn’t make sense, but some parts are meaningful. For example, “a blue bike on on on a dirt bike.” |
| 1-Unacceptable | It is basically a random collection of words. For example, “a motorcycle close close on it and.” |
+ +# C.5 ADDITIONAL RESULTS ON CAL AND LEAKGAN + +In this section, we present a performance comparison of SAL vs. CAL (comparative adversarial learning, which uses the comparative discriminator but does not train the model with self-play) and the application of SAL to LeakGAN. We find that SAL significantly outperforms CAL on all datasets. In addition, we find that the proposed self-adversarial learning paradigm can also be applied to other text GAN architectures, e.g. LeakGAN, and helps improve their performance. + +Table 17: Performance comparison of different models in synthetic tests where the sequence length is set to 20 and 40 respectively. For all metrics presented, lower values are better. + +
| Method | NLLoracle(20) | NLLoracle(40) |
| --- | --- | --- |
| SAL | 7.71 ±0.17 | 9.31 ±0.03 |
| CAL | 8.01 ±0.24 | 9.43 ±0.05 |
| LeakGAN | 7.04 ±0.37 | 7.19 ±0.35 |
| LeakGAN + SAL | 6.69 ±0.29 | 6.91 ±0.27 |
+ +Table 18: Performance comparison of different models in the COCO caption generation task. Metrics from top to bottom represent respectively the generation quality, the generation diversity, and the divergence between real data and generated sentences. For all BLEU metrics, higher values are better; for NLLgen and FD, lower is better. + +
| Metrics | CAL | SAL | LeakGAN | LeakGAN + SAL |
| --- | --- | --- | --- | --- |
| BLEU-2(F) | 0.767 ±0.03 | 0.785 ±0.02 | 0.749 ±0.03 | 0.798 ±0.03 |
| BLEU-3(F) | 0.541 ±0.03 | 0.581 ±0.03 | 0.532 ±0.04 | 0.592 ±0.03 |
| BLEU-4(F) | 0.337 ±0.03 | 0.362 ±0.02 | 0.353 ±0.03 | 0.369 ±0.02 |
| BLEU-5(F) | 0.211 ±0.02 | 0.227 ±0.02 | 0.233 ±0.03 | 0.241 ±0.02 |
| Perplexity | 276.7 ±12.5 | 231.3 ±10.8 | 291.4 ±12.5 | 218.2 ±10.8 |
| BLEU-2(B) | 0.705 ±0.03 | 0.724 ±0.02 | 0.733 ±0.03 | 0.741 ±0.02 |
| BLEU-3(B) | 0.479 ±0.03 | 0.503 ±0.03 | 0.512 ±0.03 | 0.529 ±0.03 |
| BLEU-4(B) | 0.296 ±0.03 | 0.313 ±0.02 | 0.321 ±0.03 | 0.334 ±0.02 |
| BLEU-5(B) | 0.191 ±0.03 | 0.198 ±0.02 | 0.206 ±0.03 | 0.211 ±0.02 |
| NLLgen | 0.936 ±0.03 | 0.873 ±0.02 | 0.683 ±0.03 | 0.655 ±0.02 |
| FD | 0.058 ±0.016 | 0.051 ±0.014 | 0.048 ±0.016 | 0.044 ±0.014 |
\ No newline at end of file diff --git a/selfadversariallearningwithcomparativediscriminationfortextgeneration/images.zip b/selfadversariallearningwithcomparativediscriminationfortextgeneration/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..ae3e1616d58a2c472c9069557aae0d04afdfbc85 --- /dev/null +++ b/selfadversariallearningwithcomparativediscriminationfortextgeneration/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:877653f2293d9ce759820c72f7b1d75d243888698de2b98a1b369305854480e8 +size 1150302 diff --git a/selfadversariallearningwithcomparativediscriminationfortextgeneration/layout.json b/selfadversariallearningwithcomparativediscriminationfortextgeneration/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..79d0be21255d87861b793fd0f08f6ed57d734d6d --- /dev/null +++ b/selfadversariallearningwithcomparativediscriminationfortextgeneration/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fab7d464f1d2ba3e360fa267702a80b3954118aad4ac5633383c4419810369d3 +size 492255 diff --git a/selflearningtofilternoisylabelswithselfensembling/d163edf8-8a40-410e-b67c-b1b66c64e6a6_content_list.json b/selflearningtofilternoisylabelswithselfensembling/d163edf8-8a40-410e-b67c-b1b66c64e6a6_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..fa731cd4198a26660600738ddb5fa69ce0855c27 --- /dev/null +++ b/selflearningtofilternoisylabelswithselfensembling/d163edf8-8a40-410e-b67c-b1b66c64e6a6_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:94f74619267bd9ab1e9a5f2b4ac5db781aac8e2e8a5be817ff08072f2f12891b +size 94636 diff --git a/selflearningtofilternoisylabelswithselfensembling/d163edf8-8a40-410e-b67c-b1b66c64e6a6_model.json b/selflearningtofilternoisylabelswithselfensembling/d163edf8-8a40-410e-b67c-b1b66c64e6a6_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..695ed33f01d71802ac712b70762455b7ce25ba50 --- /dev/null +++ b/selflearningtofilternoisylabelswithselfensembling/d163edf8-8a40-410e-b67c-b1b66c64e6a6_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fe659ee4b3bd60c97a2ed7c1700334c33ea8a5bb2a3c5654fd26e2bc841596ff +size 113998 diff --git a/selflearningtofilternoisylabelswithselfensembling/d163edf8-8a40-410e-b67c-b1b66c64e6a6_origin.pdf b/selflearningtofilternoisylabelswithselfensembling/d163edf8-8a40-410e-b67c-b1b66c64e6a6_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5333792491a10f4b067660be327639ee1159fd25 --- /dev/null +++ b/selflearningtofilternoisylabelswithselfensembling/d163edf8-8a40-410e-b67c-b1b66c64e6a6_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fe21e9e86e1cb752e0a0e87921f0c2d745f1563c2d588e4ff9b832ee4b0bfd32 +size 998135 diff --git a/selflearningtofilternoisylabelswithselfensembling/full.md b/selflearningtofilternoisylabelswithselfensembling/full.md new file mode 100644 index 0000000000000000000000000000000000000000..18a7652d78a37e37a7534ddc7983109ed9a8a29a --- /dev/null +++ b/selflearningtofilternoisylabelswithselfensembling/full.md @@ -0,0 +1,361 @@ +# SELF: LEARNING TO FILTER NOISY LABELS WITH SELF-ENSEMBLING + +Duc Tam Nguyen *, Chaithanya Kumar Mummadi *† Thi Phuong Nhung Ngo †, Thi Hoai Phuong Nguyen ‡, Laura Beggel †, Thomas Brox † + +# ABSTRACT + +Deep neural networks (DNNs) have been shown to over-fit a dataset when being trained with noisy labels for a long enough time. To overcome this problem, we present a simple and effective method self-ensemble label filtering (SELF) to progressively filter out the wrong labels during training. Our method improves the task performance by gradually allowing supervision only from the potentially non-noisy (clean) labels and stops learning on the filtered noisy labels. 
For the filtering, we form running averages of predictions over the entire training dataset using the network output at different training epochs. We show that these ensemble estimates yield more accurate identification of inconsistent predictions throughout training than the single estimates of the network at the most recent training epoch. While filtered samples are removed entirely from the supervised training loss, we dynamically leverage them via semi-supervised learning in the unsupervised loss. We demonstrate the positive effect of such an approach on various image classification tasks under both symmetric and asymmetric label noise and at different noise ratios. It substantially outperforms all previous works on noise-aware learning across different datasets and can be applied to a broad set of network architectures. + +# 1 INTRODUCTION + +The acquisition of large quantities of high-quality human annotations is a frequent bottleneck in applying DNNs. There are two cheap but imperfect alternatives for collecting annotations at large scale: crowdsourcing from non-experts and web annotations, particularly for image data, where tags and online query keywords are treated as valid labels. Both of these alternatives typically introduce noisy (wrong) labels. While Rolnick et al. (2017) empirically demonstrated that DNNs can be surprisingly robust to label noise under certain conditions, Zhang et al. (2017) showed that DNNs have the capacity to memorize the data and will do so eventually when confronted with too many noisy labels. Consequently, training DNNs with traditional learning procedures on noisy data strongly deteriorates their ability to generalize – a severe problem. Hence, limiting the influence of label noise is of great practical importance. + +A common approach to mitigate the negative influence of noisy labels is to eliminate them from the training data and train deep learning models just with the clean labels (Frenay & Verleysen, 2013).
Employing semi-supervised learning can even counteract the noisy labels (Laine & Aila, 2016; Luo et al., 2018). However, the decision as to which labels are noisy and which are not is decisive for learning robust models. Otherwise, unfiltered noisy labels still influence the (supervised) loss and affect the task performance, as in these previous works: they use the entire label set to compute the loss and lack a mechanism to identify and filter out the erroneous labels from the label set. + +In this paper, we propose a self-ensemble label filtering (SELF) framework that identifies potentially noisy labels during training and keeps the network from receiving supervision from the filtered noisy labels. This allows DNNs to gradually focus on learning from undoubtedly correct samples even with an extreme level of noise in the labels (e.g., $80\%$ noise ratio) and leads to improved performance as the supervision becomes less noisy. The key contribution of our work is progressive filtering, i.e., leveraging the knowledge provided in the network's output over different training iterations to form a consensus of predictions (self-ensemble predictions) to progressively identify and filter out the noisy labels from the labeled data. + +![](images/a72082028b1edf4145acbaefd39319e8b155abd35bd56fa1d8d1e6bbd50cdaf9.jpg) +(a) Evaluation on CIFAR-10 + +![](images/78c0136809ad519ed6fdc4b14ba4c9a939f4ab77cde287f239b40ea075edb0dd.jpg) +(b) Evaluation on CIFAR-100 +Figure 1: Comparing the performance of SELF with previous works for learning under different (symmetric) label noise ratios on the (a) CIFAR-10 & (b) CIFAR-100 datasets. SELF retains higher robust classification accuracy at all noise levels. + +When learning under label noise, the network receives noisy updates and hence fluctuates strongly. Such training would impede learning stable neural representations and further mislead the consensus of the predictions.
Therefore, it is essential to incorporate a model with stable training behavior to obtain better estimates from the consensus. Concretely, we employ a semi-supervised technique as a backbone of our framework to stabilize the learning process of the model. Specifically, we maintain a running average model, as proposed by Tarvainen & Valpola (2017), a.k.a. the Mean-Teacher model. This model ensemble learning provides a more stable supervisory signal than the noisy model snapshots and provides stable ground for progressive filtering of potentially noisy labels. Note that this is different from a mere combination of semi-supervised techniques with a noisy label filtering method. + +We call our approach self-ensemble label filtering (SELF); it establishes model ensemble learning as a backbone to form a solid consensus of the self-ensemble predictions to filter out the noisy labels progressively. Our framework allows computing the supervised loss on cleaner subsets rather than on the entire noisy labeled data, as in previous works. It further leverages the entire dataset, including the filtered-out erroneous samples, in the unsupervised loss. To the best of our knowledge, we are the first to identify and propose self-ensembling as a principled technique against learning under noisy labels. + +Our motivation stems from the observation that DNNs start to learn from easy samples in the initial phases and gradually adapt to hard ones during training. When trained on wrongly labeled data, DNNs learn from clean labels with ease and receive inconsistent error signals from the noisy labels before over-fitting to the dataset. The network's prediction is likely to be consistent on clean samples and to be inconsistent or oscillate strongly on wrongly labeled samples over different training iterations.
Based on this observation, we record the outputs of a single network at different training epochs and treat them as an ensemble of predictions obtained from different individual networks. We call these ensembles that evolve from a single network self-ensemble predictions. Subsequently, we identify the correctly labeled samples via the agreement between the provided label set and our running average of self-ensemble predictions. The samples whose ensemble predictions agree with the provided labels are likely to be consistent and are treated as clean samples. + +In summary, our SELF framework stabilizes the training process and improves the generalization ability of DNNs. We evaluate the proposed technique on image classification tasks using CIFAR10, CIFAR100 & ImageNet. We demonstrate that SELF consistently outperforms the existing approaches on asymmetric and symmetric noise at all noise levels, as shown in Fig. 1. Besides, SELF remains robust to the choice of the network architecture. Our work is transferable to other tasks without the need to modify the architecture or the primary learning objective. + +![](images/bcb50ab3052302b40fa31eefdbb33dd77b1085054a07b4ea8e2865ad7173de79.jpg) +Figure 2: Overview of the self-ensemble label filtering (SELF) framework. The model starts in iteration 0 with training from the noisy label set. During training, the model maintains a self-ensemble, a running average of itself (Tarvainen & Valpola, 2017), to provide a stable learning signal. Also, the model collects a self-ensemble prediction (moving average) for the subsequent filtering. Once the best model is found, these predictions identify and filter out noisy labels using the original label set $L_{0}$ . The model performs this progressive filtering until no better model is found. For details, see Algorithm 1. + +# 2 SELF-ENSEMBLE LABEL FILTERING + +# 2.1 OVERVIEW + +Fig. 2 shows an overview of our proposed approach.
In the beginning, we assume that the labels of the training set are noisy. The model attempts to identify correct labels progressively using self-forming ensembles of models and predictions. Since wrong labels cause strong fluctuations in the model's predictions, using ensembles is a natural way to counteract noisy labels. + +Concretely, in each iteration, the model learns from a detected set of potentially correct labels and maintains a running average of model snapshots (realized by the Mean Teacher model of Tarvainen & Valpola (2017)). This ensemble model is evaluated on the entire dataset and provides an additional learning signal for training the single models. Additionally, our framework maintains the running average of the model's predictions for the filtering process. The model is trained until we find the best model w.r.t. the performance on the validation set (e.g., by early stopping). The set of correct labels is detected based on the strategy defined in Sec. 2.2. In the next iteration, we again use all data and the new filtered label set as input for the model training. The iterative training procedure stops when no better model can be found. In the following, we give more details about the combination of this training and filtering procedure. + +# 2.2 PROGRESSIVE LABEL FILTERING + +Progressive detection of correctly labeled samples Our framework Self-Ensemble Label Filtering (Algorithm 1) focuses on the detection of certainly correct labels from the provided label set $L_{0}$ . In each iteration $i$ , the model is trained using the label set of potentially correct labels $L_{i}$ . At the end of each iteration, the model determines the next correct label set $L_{i+1}$ using the filtering strategy described in Sec. 2.2. The model stops learning when no improvement is achieved after training on the refined label set $L_{i+1}$ . + +In other words, in each iteration, the model attempts to learn from the easy and, in some sense, obviously correct labels.
However, learning from easy samples also improves the model's predictions on similar but harder samples from the same classes. Therefore, by learning from these easy samples, the network can gradually distinguish between hard and wrongly labeled samples.

Algorithm 1 SELF: Self-Ensemble Label Filtering pseudocode

Require: $\mathcal{D}_{train}$ = noisy labeled training set
Require: $\mathcal{D}_{val}$ = noisy labeled validation set
Require: $(x, y)$ = training stimuli and label
Require: $\alpha$ = ensembling momentum, $0 \leq \alpha \leq 1$

$i \gets 0$ ▷ counter to track iterations
$M_i \gets \mathrm{train}(\mathcal{D}_{train}, \mathcal{D}_{val})$ ▷ initial Mean-Teacher ensemble model training
$M_{best} \gets M_i$ ▷ set initial model as best model
initialize ensemble predictions $\bar{z}_0$ of all samples ▷ sample index omitted for simplicity
while $\mathrm{acc}(M_i, \mathcal{D}_{val}) \geq \mathrm{acc}(M_{best}, \mathcal{D}_{val})$ do ▷ iterate until no better model is found on $\mathcal{D}_{val}$
$M_{best} \gets M_i$ ▷ save the best model
$\mathcal{D}_{filter} \gets \mathcal{D}_{train}$ ▷ reset the filtered dataset to the original label set
$i \gets i + 1$
for $(x, y)$ in $\mathcal{D}_{filter}$ do
$\hat{z}_i \gets$ evaluate model output
$\bar{z}_i \gets \alpha \bar{z}_{i-1} + (1 - \alpha) \hat{z}_i$ ▷ accumulate ensemble predictions $\bar{z}_i$
if $y \neq \arg\max(\bar{z}_i)$ then ▷ verify agreement of ensemble prediction and label
remove $(x, y)$ from $\mathcal{D}_{filter}$ ▷ identified as a noisy label
end if
end for
$M_i \gets \mathrm{train}(\mathcal{D}_{filter}, \mathcal{D}_{val})$ ▷ train Mean-Teacher model on filtered label set
end while
return $M_{best}$

Our framework does not focus on repairing all noisy labels. Although the detection of wrong labels is sometimes easy, finding their correct hidden label can be extremely challenging when there are many classes. If the noise is sufficiently random, the set of correct labels will be representative enough to achieve high model performance. Further, in our framework, the label filtering is performed on the original label set $L_{0}$ from iteration 0.
Clean labels erroneously removed in an earlier iteration (e.g., labels of hard-to-classify samples) can be reconsidered for model training in later iterations.

Filtering strategy The model determines the set of potentially correct labels $L_{i}$ based on the agreement between the label $y$ and the maximum-likelihood prediction $\hat{y}(x)$ , with $L_{i} = \{(x, y) \mid \hat{y}(x) = y,\ \forall (x, y) \in L_{0}\}$ . Here, $L_{0}$ is the label set provided in the beginning, and $(x, y)$ denotes a sample and its respective noisy label. In other words, a label is only used for supervised training in iteration $i$ if the model currently predicts that label to be the correct class with the highest likelihood. In practice, our framework does not use the $\hat{y}(x)$ of single model snapshots for filtering, but a moving average of the ensemble models and predictions to improve the filtering decision.

# 2.3 SELF-ENSEMBLE LEARNING

The model's predictions for noisy samples tend to fluctuate. For example, take a cat wrongly labeled as a tiger. Other cat samples encourage the model to predict the given cat image as a cat. In contrast, the wrong label tiger regularly pulls the model back to predicting the cat as a tiger. Hence, using the model's predictions gathered in one single training epoch for filtering is sub-optimal. Therefore, in our framework SELF, the model relies on ensembles of models and predictions.

Model ensemble with Mean Teacher A natural way to form a model ensemble is to use an exponential running average of model snapshots (Fig. 3a). This idea was proposed in Tarvainen & Valpola (2017) for semi-supervised learning and is known as the Mean Teacher model. In our framework, both the mean teacher model and the normal model are evaluated on all data to preserve the consistency between both models.
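A sketch of this exponential running average over weights (the numbers and the dict-of-arrays "model" are purely illustrative; the actual Mean Teacher operates on network parameters):

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    # teacher <- alpha * teacher + (1 - alpha) * student, per parameter
    return {k: alpha * teacher[k] + (1 - alpha) * student[k] for k in teacher}

student = {"w": np.zeros(3)}
teacher = {"w": np.zeros(3)}
for step in range(1000):
    student["w"] = student["w"] + 0.01   # stand-in for an optimizer step
    teacher = ema_update(teacher, student)

# The teacher trails the student smoothly instead of jumping with every
# (possibly noisy) update -- this is the stabilizing effect exploited by SELF.
print(student["w"][0], teacher["w"][0])
```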
The consistency loss between the student and teacher output distributions can be realized with a mean-squared-error loss or the Kullback-Leibler divergence. More details on training with the model ensemble can be found in Appendix A.1.

Prediction ensemble Additionally, we propose to collect the sample predictions over multiple training epochs: $\overline{z}_j = \alpha \overline{z}_{j-1} + (1 - \alpha) \hat{z}_j$ , whereby $\overline{z}_j$ denotes the moving-average prediction of a given sample at epoch $j$ (the sample index is omitted for readability), $\alpha$ is a momentum term, and $\hat{z}_j$ is the model's prediction for that sample in epoch $j$ . This scheme is displayed in Fig. 3b. For each sample, we store the moving-average predictions accumulated over the past iterations. Besides providing a more stable basis for the filtering step, our proposed procedure incurs only negligible memory and computation overhead.

![](images/73b82fa0eba4ae34c7a780d6de01ab91246088850210dc6f9e49c6b0756a3bc5.jpg)
(a) Model ensemble (Mean teacher)

![](images/e3ab42679a7aae2bc70e0573cd88dbfcd288152f95daf2fba50ec976c1ad4bb1.jpg)
(b) Predictions ensemble
Figure 3: Maintaining the (a) model and (b) predictions ensembles is very effective against noisy model updates. These ensembles are self-forming during the training process as a moving average of (a) model snapshots or (b) class predictions from previous training steps.

Further, since training continues from the best model of the previous iteration, computation time can be significantly reduced compared to re-training the model from scratch. On the newly filtered dataset, the model only has to adapt slowly to the new noise ratio contained in the training set. Depending on the computation budget, a maximal number of filtering iterations can be set to save time.

# 3 RELATED WORKS

Reed et al. (2014); Azadi et al. (2015) performed early work on robust learning under label noise for deep neural networks. Recently, Rolnick et al.
(2017) have shown for classification that deep neural networks come with a natural robustness to label noise following a particular random distribution. No modification of the network or the training procedure is required to achieve this robustness. Following this insight, our framework SELF relies on this natural robustness to kickstart the self-ensemble filtering process and extend the robust behavior to more challenging scenarios.

Laine & Aila (2016); Luo et al. (2018) proposed to apply semi-supervised techniques to the data to counteract noise. These and other semi-supervised learning techniques learn from a static, initial set of noisy labels and have no mechanism to repair labels. Therefore, the supervised losses in their learning objective typically remain high until the model strongly overfits to the label noise. Compared to these works, our framework performs a variant of self-supervised label correction. The network learns from a dynamic, variable set of labels, which is determined by the network itself. Progressive filtering allows the network to (1) focus on a label set with a significantly lower noise ratio and (2) repair wrong decisions it made in an earlier iteration.

Other works assign weights to potentially wrong labels to reduce their learning signal (Jiang et al., 2017; Ren et al., 2018; Jenni & Favaro, 2018). These approaches tend to assign less extreme weights and rely on hyperparameters that are hard to set. Since the typical classification loss is highly non-linear, a lower weight might still lead to learning from wrong labels. Compared to these works, the samples in SELF only receive extreme weights: either zero or one. Further, SELF focuses only on self-detecting the correct samples instead of repairing the wrong labels. Typically, the set of correct samples is much easier to detect and is sufficiently representative to achieve high performance.

Han et al. (2018b); Jiang et al.
(2017) employ two collaborating and simultaneously learning networks to determine which samples to learn from and which not. However, the second network is free in its predictions and hence hard to tune. Compared to these works, we use ensemble learning as a principled approach to counteract model fluctuations. In SELF, the second network is extremely restricted and is only composed of running averages of the first network. To realize the second network, we use the mean-teacher model (Tarvainen & Valpola, 2017) as a backbone. Compared to their work, our self-ensemble label filtering gradually detects the correct labels and learns from them, so the label set is variable. Further, we use not only model ensembles but also an ensemble of predictions to detect correct labels.

Other works modify the primary loss function of the classification task. Patrini et al. (2017) estimate the noise transition matrix to correct the loss, Han et al. (2018a) use a human in the loop, and Zhang & Sabuncu (2018); Thulasidasan et al. (2019) propose other forms of cross-entropy losses. The loss modification impedes the transfer of these ideas to tasks other than classification. Compared to these works, our framework SELF does not modify the primary loss. Moreover, many tasks rely on the presence of clean labels, such as anomaly detection (Nguyen et al., 2019a) or self-supervised and unsupervised learning (Nguyen et al., 2019b). The proposed progressive filtering procedure and self-ensemble learning are also applicable in these tasks to counteract noise effectively.

Table 1: Comparison of classification accuracy when learning under uniform label noise on CIFAR-10 and CIFAR-100. Following previous works, we compare two evaluation scenarios: with a noisy validation set (top) and with 1000 clean validation samples (bottom). The best model is marked in bold. Having a small clean validation set improves the model but is not necessary.
| NOISE RATIO | CIFAR-10 40% | CIFAR-10 60% | CIFAR-10 80% | CIFAR-100 40% | CIFAR-100 60% | CIFAR-100 80% |
| --- | --- | --- | --- | --- | --- | --- |
| *Using noisy validation set* |  |  |  |  |  |  |
| REED-HARD (REED ET AL., 2014) | 69.66 | - | - | 51.34 | - | - |
| S-MODEL (GOLDBERGER & BEN-REUVEN, 2016) | 70.64 | - | - | 49.10 | - | - |
| OPEN-SET (WANG ET AL., 2018) | 78.15 | - | - | - | - | - |
| RAND. WEIGHTS (REN ET AL., 2018) | 86.06 | - | - | 58.01 | - | - |
| BI-LEVEL-MODEL (JENNI & FAVARO, 2018) | 89.00 | - | 20.00 | 61.00 | - | 13.00 |
| MENTORNET (JIANG ET AL., 2017) | 89.00 | - | 49.00 | 68.00 | - | 35.00 |
| $L_{q}$ (ZHANG & SABUNCU, 2018) | 87.13 | 82.54 | 64.07 | 61.77 | 53.16 | 29.16 |
| TRUNC $L_{q}$ (ZHANG & SABUNCU, 2018) | 87.62 | 82.70 | 67.92 | 62.64 | 54.04 | 29.60 |
| FORWARD $\hat{T}$ (PATRINI ET AL., 2017) | 83.25 | 74.96 | 54.64 | 31.05 | 19.12 | 08.90 |
| CO-TEACHING (HAN ET AL., 2018B) | 81.85 | 74.04 | 29.22 | 55.95 | 47.98 | 23.22 |
| D2L (MA ET AL., 2018) | 83.36 | 72.84 | - | 52.01 | 42.27 | - |
| SL (WANG ET AL., 2019) | 85.34 | 80.07 | 53.81 | 53.69 | 41.47 | 15.00 |
| JOINTOPT (TANAKA ET AL., 2018) | 83.27 | 74.39 | 40.09 | 52.88 | 42.64 | 18.46 |
| SELF (OURS) | **93.70** | **93.15** | **69.91** | **71.98** | **66.21** | **42.09** |
| *Using clean validation set (1000 images)* |  |  |  |  |  |  |
| DAC (THULASIDASAN ET AL., 2019) | 90.93 | 87.58 | 70.80 | 68.20 | 59.44 | 34.06 |
| MENTORNET (JIANG ET AL., 2017) | 78.00 | - | - | 59.00 | - | - |
| RAND. WEIGHTS (REN ET AL., 2018) | 86.55 | - | - | 58.34 | - | - |
| REN ET AL. (REN ET AL., 2018) | 86.92 | - | - | 61.31 | - | - |
| SELF* (OURS) | **95.10** | **93.77** | **79.93** | **74.76** | **68.35** | **46.43** |
# 4 EVALUATION

# 4.1 EXPERIMENTS DESCRIPTIONS

# 4.1.1 STRUCTURE OF THE ANALYSIS

We evaluate our approach on CIFAR-10, CIFAR-100, and ImageNet-ILSVRC under different noise scenarios. For CIFAR-10, CIFAR-100, and ImageNet, we consider the typical situation with symmetric and asymmetric label noise. In the case of symmetric noise, a label is randomly flipped to another class with probability $p$ . Following previous works, we also consider label flips between semantically similar classes on CIFAR-10, and pair-wise label flips on CIFAR-100. Finally, we perform studies on the choice of the network architecture and an ablation of the components of our framework. Tab. 6 (Appendix) shows an in-depth analysis of semi-supervised learning strategies combined with recent works. Overall, the proposed framework SELF outperforms all these combinations.

# 4.1.2 COMPARISONS TO PREVIOUS WORKS

We compare our work to previous methods: Reed-Hard (Reed et al., 2014), S-model (Goldberger & Ben-Reuven, 2016), Open-set (Wang et al., 2018), Rand. weights (Ren et al., 2018), Bi-level-model (Jenni & Favaro, 2018), D2L (Ma et al., 2018), SL (Wang et al., 2019), $L_{q}$ (Zhang & Sabuncu, 2018), Trunc $L_{q}$ (Zhang & Sabuncu, 2018), Forward $\hat{T}$ (Patrini et al., 2017), DAC (Thulasidasan et al., 2019), Random reweighting (Ren et al., 2018), and Learning to reweight (Ren et al., 2018). For Co-teaching (Han et al., 2018b), MentorNet (Jiang et al., 2017), and JointOpt (Tanaka et al., 2018), the source codes are available and hence used for evaluation.

Ren et al. (2018) and DAC (Thulasidasan et al., 2019) considered the setting of having a small clean validation set of 1000 and 5000 images, respectively. For comparison purposes, we additionally experiment with a small clean set of 1000 images. Further, we abandon oracle experiments or methods using additional information to keep the evaluation comparable.
For instance, Forward $\hat{T}$ (Patrini et al., 2017) uses the true underlying confusion matrix to correct the loss. This information is neither known in typical scenarios nor used by other methods.

Table 2: Asymmetric noise on CIFAR-10, CIFAR-100. All methods use Resnet34. CIFAR-10: flip TRUCK $\rightarrow$ AUTOMOBILE, BIRD $\rightarrow$ AIRPLANE, DEER $\rightarrow$ HORSE, CAT $\leftrightarrow$ DOG with prob. $p$ . CIFAR-100: flip class $i$ to $(i + 1) \bmod 100$ with prob. $p$ . SELF retains high performance across all noise ratios and outperforms all previous works.
| NOISE RATIO | CIFAR-10 10% | CIFAR-10 20% | CIFAR-10 30% | CIFAR-10 40% | CIFAR-100 10% | CIFAR-100 20% | CIFAR-100 30% | CIFAR-100 40% |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CCE | 90.69 | 88.59 | 86.14 | 80.11 | 66.54 | 59.20 | 51.40 | 42.74 |
| MAE | 82.61 | 52.93 | 50.36 | 45.52 | 13.38 | 11.50 | 08.91 | 08.20 |
| FORWARD $\hat{T}$ | 90.52 | 89.09 | 86.79 | 83.55 | 45.96 | 42.46 | 38.13 | 34.44 |
| $L_q$ | 90.91 | 89.33 | 85.45 | 76.74 | 68.36 | 66.59 | 61.45 | 47.22 |
| TRUNC $L_q$ | 90.43 | 89.45 | 87.10 | 82.28 | 68.86 | 66.59 | 61.87 | 47.66 |
| SL | 88.24 | 85.36 | 80.64 | - | 65.58 | 65.14 | 63.10 | - |
| JOINTOPT | 90.12 | 89.45 | 87.18 | 87.97 | 69.61 | 68.94 | 63.99 | 53.71 |
| SELF (OURS) | **93.75** | **92.76** | **92.42** | **89.07** | **72.45** | **70.53** | **65.09** | **53.83** |
Whenever possible, we adopt the reported performance from the corresponding publications. The testing scenarios are kept as similar as possible to enable a fair comparison. All tested scenarios use a noisy validation set with the same noise distribution as the training set unless stated otherwise. All model performances are reported on the clean test set.

Table 3: Effect of the choice of network architecture on classification accuracy on CIFAR-10 & -100 with uniform label noise. SELF is compatible with all tested architectures. Here, * denotes the baseline accuracy of an architecture trained in a fully supervised setting at $0\%$ label noise.
**RESNET101** (93.89* on CIFAR-10, 81.14* on CIFAR-100)

| NOISE | CIFAR-10 40% | CIFAR-10 80% | CIFAR-100 40% | CIFAR-100 80% |
| --- | --- | --- | --- | --- |
| MENTORNET | 89.00 | 49.00 | 68.00 | 35.00 |
| CO-T. | 62.58 | 21.79 | 39.58 | 16.79 |
| SELF | 92.77 | 64.52 | 69.00 | 39.73 |

**RESNET34** (93.5* on CIFAR-10, 76.76* on CIFAR-100)

| NOISE | CIFAR-10 40% | CIFAR-10 80% | CIFAR-100 40% | CIFAR-100 80% |
| --- | --- | --- | --- | --- |
| $L_q$ | 87.13 | 64.07 | 61.77 | 29.16 |
| TRUNC $L_q$ | 87.62 | 67.92 | 62.64 | 29.60 |
| FORWARD $\hat{T}$ | 83.25 | 54.64 | 31.05 | 8.90 |
| SELF | 91.13 | 63.59 | 66.71 | 35.56 |

**WRN 28-10** (96.21* on CIFAR-10, 81.02* on CIFAR-100)

| NOISE | CIFAR-10 40% | CIFAR-10 80% | CIFAR-100 40% | CIFAR-100 80% |
| --- | --- | --- | --- | --- |
| MENTORNET | 88.7 | 46.30 | 67.50 | 30.10 |
| REWEIGHT | 86.02 | - | 58.01 | - |
| SELF | 93.34 | 67.41 | 72.48 | 42.06 |

**RESNET26** (96.37* on CIFAR-10, 81.20* on CIFAR-100)

| NOISE | CIFAR-10 40% | CIFAR-10 80% | CIFAR-100 40% | CIFAR-100 80% |
| --- | --- | --- | --- | --- |
| CO-T. | 81.85 | 29.22 | 55.95 | 23.22 |
| SELF | 93.70 | 69.91 | 71.98 | 42.09 |
# 4.1.3 NETWORKS CONFIGURATION AND TRAINING

For the basic training of the self-ensemble model, we use the Mean Teacher model (Tarvainen & Valpola, 2017) available on GitHub ${}^{1}$ . The student and teacher networks are residual networks (He et al., 2016) with 26 layers and Shake-Shake regularization (Gastaldi, 2017). We use the PyTorch (Paszke et al., 2017) implementation of the network and keep the training settings close to (Tarvainen & Valpola, 2017). The network is trained with Stochastic Gradient Descent. In each filtering iteration, the model is trained for a maximum of 300 epochs, with a patience of 50 epochs. For more training details, see the appendix.

# 4.2 EXPERIMENTS RESULTS

# 4.2.1 SYMMETRIC LABEL NOISE

CIFAR-10 and 100 Results for typical uniform noise scenarios with various noise ratios on CIFAR-10 and CIFAR-100 are shown in Tab. 1. More results are visualized in Fig. 1a (CIFAR-10) and Fig. 1b (CIFAR-100). Our approach SELF performs robustly at noise ratios of up to $60\%$ and outperforms previous works. Although a strong performance loss occurs at $80\%$ label noise,

Table 4: Classification accuracy on the clean ImageNet validation dataset. The models are trained at $40\%$ label noise, and the best model is picked based on the evaluation on noisy validation data. MentorNet shows the best previously reported results. MentorNet* is based on Resnet-101. We chose the smaller Resnext50 model to reduce the run-time.
| Accuracy | ResNext18 P@1 | ResNext18 P@5 | ResNext50 P@1 | ResNext50 P@5 |
| --- | --- | --- | --- | --- |
| MentorNet* | - | - | 65.10 | 85.90 |
| ResNext | 50.6 | 75.99 | 56.25 | 80.90 |
| Mean-T. | 58.04 | 81.82 | 62.96 | 85.72 |
| SELF (Ours) | **66.92** | **86.65** | **71.31** | **89.92** |
+ +Table 5: Ablation study on CIFAR-10 and CIFAR-100. The Resnet baseline was trained on the full noisy label set. Adding progressive filtering improves over this baseline. The Mean Teacher maintains an ensemble of model snapshots, which helps counteract noise. Having progressive filtering and model ensembles (-MVA-pred.) makes the model more robust but still fails at $80\%$ noise. The full SELF framework additionally uses the prediction ensemble for detection of correct labels. + +
| NOISE RATIO | CIFAR-10 40% | CIFAR-10 80% | CIFAR-100 40% | CIFAR-100 80% |
| --- | --- | --- | --- | --- |
| RESNET26 | 83.20 | 41.37 | 53.18 | 19.92 |
| FILTERING | 87.35 | 49.58 | 61.40 | 23.42 |
| MEAN-T. | 93.70 | 52.50 | 65.85 | 26.31 |
| -MVA-PRED. | 93.77 | 57.40 | 71.69 | 38.61 |
| SELF (OURS) | 93.70 | 69.91 | 71.98 | 42.09 |
SELF still outperforms most of the previous approaches. The experiment SELF*, using 1000 clean validation images, shows that the performance loss mostly originates from the progressive filtering relying too strongly on the extremely noisy validation set.

ImageNet-ILSVRC Tab. 4 shows the precision@1 and @5 of various models, given $40\%$ label noise in the training set. Our networks are based on ResNext18 and ResNext50. Note that MentorNet (Jiang et al., 2017) uses Resnet101 (P@1: 78.25) (Goyal et al., 2017), which has higher performance compared to ResNext50 (P@1: 77.8) (Xie et al., 2017) on the standard ImageNet validation set.

Despite the weaker model, SELF (ResNext50) surpasses the best previously reported results by more than $5\%$ absolute improvement. Even the significantly weaker ResNext18 model outperforms MentorNet, which is based on the more powerful ResNet101 network.

# 4.2.2 ASYMMETRIC LABEL NOISE

Tab. 2 shows more challenging noise scenarios in which the noise is not class-symmetric and uniform. Concretely, labels are flipped among semantically similar classes such as CAT and DOG on CIFAR-10. On CIFAR-100, each label is flipped to the next class with a probability $p$ . In these scenarios, our framework SELF also retains high performance and only shows a small performance drop at $40\%$ noise. The high label-noise resistance of our framework indicates that the proposed self-ensemble filtering process helps the network identify correct samples, even under extreme noise ratios.

# 4.2.3 EFFECTS OF DIFFERENT ARCHITECTURES

Previous works utilize a variety of different architectures, which hinders a fair comparison. Tab. 3 shows the performance of our framework SELF compared to previous approaches. SELF outperforms other works in all scenarios except for CIFAR-10 with $80\%$ noise. Typical robust learning approaches lead to significant accuracy losses at $40\%$ noise, while SELF still retains high performance.
Further, note that SELF allows the network's performance to remain consistent across the different underlying architectures.

# 4.2.4 ABLATION STUDY

Tab. 5 shows the importance of each component in our framework. See Fig. 4a and Fig. 4b for experiments on more noise ratios. As expected, the Resnet baseline rapidly breaks down with increasing noise ratios. Adding self-supervised filtering increases the performance slightly at lower noise ratios. However, the model has to rely on extremely noisy snapshots. In contrast, using a model ensemble alone, as in Mean-Teacher, can counteract noise on the noisy dataset CIFAR-10. On the more challenging CIFAR-100, however, the performance decreases strongly. With self-supervised filtering and model ensembles, SELF (without MVA-pred.) is more robust and only loses performance at $80\%$ noise. The last performance boost is given by using moving-average predictions, with which the network can reliably and gradually detect correctly labeled samples.

Fig. 4 shows the ablation experiments on more noise ratios. The analyses show that each component in SELF is essential for the model to learn robustly.

![](images/afda49bb1904b8be1cf824658f6f3065865063fc7198c6f11cfe982a13437541.jpg)
(a) Ablation exps. on CIFAR-10

![](images/2f0b394d26167868321a1a1bcd8ea6ba99f1208606e6a5d88d73b28424d1b54b.jpg)
(b) Ablation exps. on CIFAR-100
Figure 4: Ablation study on the importance of the components in our framework SELF, evaluated on (a) CIFAR-10 and (b) CIFAR-100 with uniform noise. Please refer to Tab. 5 for details of the components.

Table 6: Analysis of semi-supervised learning (SSL) strategies: entropy learning and mean-teacher combined with recent works. Our progressive filtering strategy is shown to be effective and performs well regardless of the choice of the semi-supervised learning backbone. Overall, the proposed method SELF outperforms all these combinations. The best model in each SSL category is marked in bold.
Running mean-teacher + co-teaching with the same configuration is not possible due to memory constraints.
| NOISE RATIO | CIFAR-10 40% | CIFAR-10 60% | CIFAR-10 80% | CIFAR-100 40% | CIFAR-100 60% | CIFAR-100 80% |
| --- | --- | --- | --- | --- | --- | --- |
| *Baseline models* |  |  |  |  |  |  |
| RESNET26 (GASTALDI, 2017) | 83.20 | 72.35 | 41.37 | 53.18 | 44.31 | 19.92 |
| CO-TEACHING (HAN ET AL., 2018B) | 81.85 | 74.04 | 29.22 | 55.95 | 47.98 | 23.22 |
| JOINTOPT (TANAKA ET AL., 2018) | 83.27 | 74.39 | 40.09 | 52.88 | 42.64 | 18.46 |
| PROGRESSIVE FILTERING (OURS) | **87.35** | **75.47** | **49.58** | **61.40** | **50.60** | **23.42** |
| *Semi-supervised learning with entropy learning* |  |  |  |  |  |  |
| ENTROPY | 79.13 | **85.98** | 46.93 | 54.65 | 41.34 | 21.29 |
| ENTROPY + CO-TEACHING | 84.94 | 74.28 | 35.16 | 55.68 | 43.52 | 20.5 |
| ENTROPY + JOINT-OPT | 84.44 | 75.86 | 39.16 | 56.73 | 43.27 | 17.24 |
| ENTROPY + FILTERING (OURS) | **90.04** | 83.88 | **52.46** | **59.97** | **46.45** | **23.53** |
| *Semi-supervised learning with mean-teacher* |  |  |  |  |  |  |
| MEAN TEACHER | 93.70 | 90.40 | 52.5 | 65.85 | 54.48 | 26.31 |
| MEAN-TEACHER + JOINTOPT | 91.40 | 83.62 | 45.12 | 60.09 | 45.92 | 23.54 |
| MEAN-TEACHER + FILTERING = SELF (OURS) | **93.70** | **92.85** | **69.91** | **71.98** | **66.21** | **42.58** |
# 4.2.5 SEMI-SUPERVISED LEARNING FOR PROGRESSIVE FILTERING

Tab. 6 shows different semi-supervised learning strategies: entropy learning and mean-teacher, combined with recent works. Note that Co-Teaching + Mean-Teacher cannot be implemented and run in the same configuration as the other experiments due to memory constraints.

The analysis indicates that the semi-supervised losses mostly stabilize the baselines, compared to the models without semi-supervised learning. However, Co-teaching and JointOpt sometimes perform worse than the purely semi-supervised model. This result indicates that their proposed frameworks are not always compatible with semi-supervised losses.

The progressive filtering technique is seamlessly compatible with different semi-supervised losses. The filtering outperforms its counterparts when combined with entropy learning or the Mean-Teacher model. Overall, SELF outperforms all considered combinations.

# 5 CONCLUSION

We propose a simple and easy-to-implement framework to train robust deep learning models under incorrect or noisy labels. We filter out the training samples that are hard to learn (possibly noisy labeled samples) by leveraging an ensemble of the single network's predictions over different training epochs. Subsequently, we allow clean supervision from the non-hard samples and further leverage an additional unsupervised loss on the entire dataset. We show that our framework results in DNN models with superior generalization performance on CIFAR-10, CIFAR-100 & ImageNet and outperforms all previous works under symmetric (uniform) and asymmetric noise. Furthermore, our models remain robust despite increasing noise ratios and changes in network architecture.

# REFERENCES

Samaneh Azadi, Jiashi Feng, Stefanie Jegelka, and Trevor Darrell. Auxiliary image regularization for deep cnns with noisy labels. arXiv preprint arXiv:1511.07069, 2015.
Benoit Frénay and Michel Verleysen.
Classification in the presence of label noise: a survey. IEEE Transactions on Neural Networks and Learning Systems, 25(5):845-869, 2013.
Xavier Gastaldi. Shake-shake regularization. arXiv preprint arXiv:1705.07485, 2017.
Jacob Goldberger and Ehud Ben-Reuven. Training deep neural-networks using a noise adaptation layer. 2016.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, 2014.
Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
Bo Han, Jiangchao Yao, Gang Niu, Mingyuan Zhou, Ivor Tsang, Ya Zhang, and Masashi Sugiyama. Masking: A new perspective of noisy supervision. In Advances in Neural Information Processing Systems, pp. 5836-5846, 2018a.
Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, and Masashi Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In NeurIPS, pp. 8535-8545, 2018b.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
Simon Jenni and Paolo Favaro. Deep bilevel learning. In ECCV, 2018.
Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. MentorNet: Learning data-driven curriculum for very deep neural networks on corrupted labels. arXiv preprint arXiv:1712.05055, 2017.
Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242, 2016.
Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
Yucen Luo, Jun Zhu, Mengxi Li, Yong Ren, and Bo Zhang.
Smooth neighbors on teacher graphs for semi-supervised learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8896-8905, 2018.
Xingjun Ma, Yisen Wang, Michael E Houle, Shuo Zhou, Sarah M Erfani, Shu-Tao Xia, Sudanthi Wijewickrema, and James Bailey. Dimensionality-driven learning with noisy labels. arXiv preprint arXiv:1806.02612, 2018.
Duc Tam Nguyen, Zhongyu Lou, Michael Klar, and Thomas Brox. Anomaly detection with multiple-hypotheses predictions. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 4800-4809, Long Beach, California, USA, 09-15 Jun 2019a. PMLR. URL http://proceedings.mlr.press/v97/nguyen19b.html.
Tam Nguyen, Maximilian Dax, Chaithanya Kumar Mummadi, Nhung Ngo, Thi Hoai Phuong Nguyen, Zhongyu Lou, and Thomas Brox. Deepusps: Deep robust unsupervised saliency prediction via self-supervision. In Advances in Neural Information Processing Systems, pp. 204-214, 2019b.
Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017.
Giorgio Patrini, Alessandro Rozza, Aditya Krishna Menon, Richard Nock, and Lizhen Qu. Making deep neural networks robust to label noise: A loss correction approach. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1944-1952, 2017.
Scott Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, and Andrew Rabinovich. Training deep neural networks on noisy labels with bootstrapping. arXiv preprint arXiv:1412.6596, 2014.
Mengye Ren, Wenyuan Zeng, Bin Yang, and Raquel Urtasun. Learning to reweight examples for robust deep learning. arXiv preprint arXiv:1803.09050, 2018.
David Rolnick, Andreas Veit, Serge Belongie, and Nir Shavit. Deep learning is robust to massive label noise. arXiv preprint arXiv:1705.10694, 2017.
Ilya Sutskever, James Martens, George E Dahl, and Geoffrey E Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning, pp. 1139-1147, 2013.
Daiki Tanaka, Daiki Ikami, Toshihiko Yamasaki, and Kiyoharu Aizawa. Joint optimization framework for learning with noisy labels. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5552-5560, 2018.
Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in Neural Information Processing Systems, pp. 1195-1204, 2017.
Sunil Thulasidasan, Tanmoy Bhattacharya, Jeff Bilmes, Gopinath Chennupati, and Jamal Mohd-Yusof. Combating label noise in deep learning using abstention. arXiv preprint arXiv:1905.10964, 2019.
Yisen Wang, Weiyang Liu, Xingjun Ma, James Bailey, Hongyuan Zha, Le Song, and Shu-Tao Xia. Iterative learning with open-set noisy labels. arXiv preprint arXiv:1804.00092, 2018.
Yisen Wang, Xingjun Ma, Zaiyi Chen, Yuan Luo, Jinfeng Yi, and James Bailey. Symmetric cross entropy for robust learning with noisy labels. In Proceedings of the IEEE International Conference on Computer Vision, pp. 322-330, 2019.
Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1492-1500, 2017.
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=Sy8gdB9xx.
Zhilu Zhang and Mert Sabuncu. Generalized cross entropy loss for training deep neural networks with noisy labels.
In Advances in Neural Information Processing Systems, pp. 8778-8788, 2018.

# A APPENDIX

# A.1 MEAN TEACHER MODEL FOR ITERATIVE FILTERING

We apply the Mean Teacher algorithm in each iteration $i$ in the $\text{train}(\mathcal{D}_{filter},\mathcal{D}_{val})$ procedure as follows.

- Input: examples with potentially clean labels $\mathcal{D}_{filter}$ from the filtering procedure. In the beginning $(i = 0)$ , $\mathcal{D}_{filter}$ refers to the entire labeled dataset.
- Initialize a supervised neural network as the student model $M_{i}^{s}$ .
- Initialize the Mean Teacher model $M_{i}^{t}$ as a copy of the student model with all weights detached.
- Let the loss function be the sum of the normal classification loss of $M_{i}^{s}$ and the consistency loss between the outputs of $M_{i}^{s}$ and $M_{i}^{t}$ .
- Select an optimizer.
- In each training iteration:
  - Update the weights of $M_{i}^{s}$ using the selected optimizer.
  - Update the weights of $M_{i}^{t}$ as an exponential moving average of the student weights.
  - Evaluate the performance of $M_{i}^{s}$ and $M_{i}^{t}$ over $\mathcal{D}_{val}$ to verify the early-stopping criterion.
- Return the best $M_{i}^{t}$ .

# A.2 ASSUMPTIONS DISCUSSION

Our method performs best when the following assumptions hold.

Natural robustness assumption of deep networks (Rolnick et al., 2017): The networks attempt to learn the easiest way to explain most of the data. SELF uses this assumption to kickstart the learning process.

Correct samples dominate over wrongly labeled samples At $80\%$ noise on CIFAR-10, the correctly labeled cats ( $20\%$ of all cat images) still dominate over the samples wrongly labeled as cat ( $8.8\%$ for each other class).

Independence results in less overfitting SELF performs best if the noise on the validation set and training set is i.i.d. SELF uses the validation data for early stopping. Hence, a high correlation of the label noise between the training and validation sets increases the chance of model overfitting.
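The dominance argument above can be checked with a quick calculation: under uniform noise with ratio $p$ over $C$ classes, a fraction $1-p$ of each class keeps its true label, while each other class sends a fraction $p/(C-1)$ of its samples to that label.

```python
p, C = 0.8, 10                  # 80% uniform noise on CIFAR-10
correct = 1 - p                 # 20% of cat images keep the label "cat"
wrong_per_class = p / (C - 1)   # share of each other class flipped to "cat"
# Among samples labeled "cat", true cats remain the largest single group.
print(correct, round(wrong_per_class, 3), correct > wrong_per_class)
```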
Sufficient label randomness assumption The subset of all correctly labeled samples captures all sample clusters. In fact, many works from the active learning literature show that less than $100\%$ of the labeled samples are required to achieve the highest model performance. SELF performs a progressive expansion of the correct label set. At larger noise ratios, not all clusters are covered by the identified samples. Therefore, on tasks containing many classes, e.g., CIFAR-100, the model performance decreases faster than on CIFAR-10.

The model performance is reduced when these assumptions are strongly violated. Each assumption should have its own "critical" threshold for violation. An in-depth analysis challenging these assumptions is an interesting future research direction.

# A.3 TRAINING DETAILS

# A.3.1 CIFAR-10 AND CIFAR-100

Dataset Tab. 7 shows the details of the CIFAR-10 and CIFAR-100 datasets in our evaluation pipeline. The validation set is contaminated with the same noise ratio as the training data unless stated otherwise.

Network training For training our model SELF, we use the standard configuration provided by Tarvainen & Valpola (2017) $^{2}$ . Concretely, we use the SGD optimizer with Nesterov momentum (Sutskever et al., 2013), a learning rate of 0.05 with cosine learning rate annealing (Loshchilov & Hutter, 2016), a weight decay of 2e-4, a maximum of 300 epochs per filtering step, a patience of 50 epochs, and a total epoch count of 600.

Table 7: Dataset description. Classification tasks on CIFAR-10 and CIFAR-100 with uniform noise. Note that the noise on the training and validation sets is not correlated. Hence, maximizing the accuracy on the noisy validation set provides a useful (but noisy) estimate of the generalization ability on unseen test data.
|  |  | CIFAR-10 | CIFAR-100 |
|---|---|---|---|
| TASK | TYPE | CLASSIFICATION, 10-WAY | CLASSIFICATION, 100-WAY |
|  | RESOLUTION | 32x32 | 32x32 |
| DATA | TRAIN (NOISY) | 45000 | 45000 |
|  | VALID (NOISY) | 5000 | 5000 |
|  | TEST (CLEAN) | 10000 | 10000 |
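For reference, the single-cycle cosine learning-rate annealing used in the network training above (Loshchilov & Hutter, 2016) can be sketched as follows; this is our own illustration, and the function name and `min_lr` floor are not taken from the SELF code:

```python
import math

def cosine_annealed_lr(step, total_steps, base_lr=0.05, min_lr=0.0):
    """Single cosine cycle: decay base_lr to min_lr over total_steps."""
    progress = min(step / total_steps, 1.0)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * progress))
```

With `base_lr=0.05`, the schedule starts at 0.05, passes through 0.025 at the halfway point, and reaches `min_lr` at the end of training.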
For basic training of the baseline models without semi-supervised learning, we had to set the learning rate to 0.01. With higher learning rates, the loss typically explodes. Every other option is kept the same.

Semi-supervised learning For the Mean Teacher training, additional hyperparameters are required. For both CIFAR-10 and CIFAR-100, we again take the standard configuration with a mean-squared-error consistency loss, a consistency weight of 100.0, a logit distance cost of 0.01, and a consistency ramp-up of 5. The total batch size is 512, with 124 samples reserved for labeled data and 388 for unlabeled data. Each epoch is defined as a complete pass over all unlabeled data. When training without semi-supervised learning, the entire batch is used for labeled data.

Data augmentation The data are normalized to zero mean and unit variance. Further, we use real-time data augmentation with random translation and reflection, followed by a random horizontal flip. The standard PyTorch library provides these transformations.

# A.3.2 IMAGENET-ILSVRC-2015

Network Training The networks used for evaluation were ResNet He et al. (2016) and ResNeXt Xie et al. (2017). All ResNeXt variants use a cardinality of 32 and a base width of 4 (32x4d). ResNeXt models follow the same structure as their ResNet counterparts, except for the cardinality and base width.

All other configurations are kept as close as possible to Tarvainen & Valpola (2017). The initial learning rate to handle large batches Goyal et al. (2017) is set to 0.1; the base learning rate is 0.025 with a single cycle of cosine annealing.

Semi-supervised learning Due to the large images, the batch size is set to 40 in total, with 20/20 for labeled and unlabeled samples, respectively. We found that the Kullback-Leibler divergence leads to no meaningful network training. Hence, we set the consistency loss to mean-squared-error, with a weight of 1000.
We use a consistency ramp-up of 5 epochs to give the Mean Teacher more time in the beginning. Weight decay is set to 5e-5; the patience is four epochs to stop training in the current filtering iteration.

Filtering We filter noisy samples with the topk=5 strategy, instead of taking the maximum-likelihood (ML) prediction as on CIFAR-10 and CIFAR-100. That means a sample is kept for supervised training if its provided label lies within the top 5 predictions of the model. The main reason is that each image of ImageNet might contain multiple objects. Filtering with ML predictions is too strict and would lead to a low recall in the detection of correct samples.

![](images/afeda632ee7ab42ca90c407024bc6905f8750448a11f7406873287e1822c4eab.jpg)
Figure 5: Simple training losses to counter label noise. (a) shows the prediction of a sample given a model. The red bar indicates the noisy label, blue the correct one. Arrows depict the magnitude of the gradients. (b) Typical loss reweighting schemes are not wrong but suffer from the vanishing gradient problem. Non-linear losses such as the negative log-likelihood are not designed for gradient ascent.

Data Augmentation For all data, we normalize the RGB images by the mean (0.485, 0.456, 0.406) and the standard deviation (0.229, 0.224, 0.225). For the training data, we perform a random rotation of up to 10 degrees, randomly resize images to $224 \times 224$, and apply a random horizontal flip and random color jittering. This noise is needed in regular Mean Teacher training. The jittering settings are: brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1. The validation data are resized to $256 \times 256$ and randomly cropped to $224 \times 224$.

# A.3.3 SEMI-SUPERVISED LOSSES

For the learning of wrongly labeled samples, Fig. 5 shows the relationship between the typical reweighting scheme and our baseline push-away-loss.
Typically, reweighting is applied directly to the losses with a sample weight $w_i^{(k)}$ for each sample $k$, as shown in Eq. 1:

$$
\min w_i^{(k)} \, NLL\left(y_{\text{label}}^{(k)} \mid x^{(k)}, D\right) \tag{1}
$$

$D$ is the dataset, $x^{(k)}$ and $y_{\text{label}}^{(k)}$ are sample $k$ and its noisy label, and $w_{i}^{(k)}$ is the weight of sample $k$ at step $i$. Negative sample weights $w_{i}^{(k)}$ are often assigned to push the network away from the wrong labels. Let $w_{i}^{(k)} = -c_{i}^{(k)}$ with $c_{i}^{(k)} > 0$; then we have:

$$
\min -c_i^{(k)} \, NLL\left(y_{\text{label}}^{(k)} \mid x^{(k)}, D\right) \tag{2}
$$

which results in:

$$
\max c_i^{(k)} \, NLL\left(y_{\text{label}}^{(k)} \mid x^{(k)}, D\right) \tag{3}
$$

In other words, we perform gradient ascent for wrongly labeled samples. However, the negative log-likelihood is not designed for gradient ascent. Hence, the gradients of wrongly labeled samples vanish if the prediction is too close to the noisy label. This effect is similar to the training of Generative Adversarial Networks (GAN) Goodfellow et al. In the GAN framework, the generator loss is not simply set to the negated version of the discriminator's loss for the same reason.

Therefore, to provide a fair comparison with our framework, we suggest the push-away-loss $L_{\text{Push-away}}(y_{\text{label}}^{(k)}, x^{(k)}, D)$ with improved gradients as follows:

$$
\min \frac{1}{|Y| - 1} \sum_{y, y \neq y_{\text{label}}^{(k)}} c_i^{(k)} \, NLL(y \mid x^{(k)}, D) \tag{4}
$$

where $Y$ is the set of all classes in the training set. This loss has improved gradients to push the model away from the potentially wrong labels.
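The push-away-loss of Eq. 4 can be sketched in NumPy as follows; this is our own simplified illustration (assuming per-class log-probabilities as input and a scalar weight `c` standing in for $c_i^{(k)}$), not the paper's implementation:

```python
import numpy as np

def push_away_loss(log_probs, noisy_label, c=1.0):
    """Eq. 4: average the NLL over all classes except the (potentially wrong)
    label, pushing the model away from the noisy label without the vanishing
    gradients of a negated NLL."""
    num_classes = log_probs.shape[-1]
    mask = np.ones(num_classes, dtype=bool)
    mask[noisy_label] = False             # exclude y_label^{(k)}
    return c * np.mean(-log_probs[mask])  # (1/(|Y|-1)) * sum of NLL terms
```

For a uniform prediction over four classes, the loss equals $-\log(0.25)$ regardless of which label is excluded; the more probability mass the model puts on the noisy label, the larger the loss becomes.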
![](images/c108eb2726eda42bf4cd242ec0c2c0e16c5bf87b3f9b295bf11819340d152ff8.jpg)
(a)

![](images/341b15c1f41c1d7d0df9588eff8bdeebf78af710ec5dd807af228b399ff8c79c.jpg)
(b)

Figure 6: The entropy loss for semi-supervised learning. (a) Extreme predictions such as [0, 1] are encouraged by minimizing the entropy of each prediction. (b) Additionally, maximizing the entropy of the mean prediction over the entire dataset or a large batch forces the model to balance its predictions over multiple samples.

Table 8: Accuracy of the complete removal of samples during iterative filtering on CIFAR-10 and CIFAR-100. The underlying model is the Mean Teacher based on Resnet26. When samples are completely removed from the training set, they are no longer used for either supervised or unsupervised learning. This common strategy from previous works leads to rapid performance breakdown.
| NOISE RATIO | CIFAR-10 40% | CIFAR-10 80% | CIFAR-100 40% | CIFAR-100 80% |
|---|---|---|---|---|
| USING NOISY DATA ONLY |  |  |  |  |
| DATA REMOVAL | 93.4 | 59.98 | 68.99 | 35.53 |
| SELF (OURS) | 93.7 | 69.91 | 71.98 | 42.09 |
| WITH CLEAN VALIDATION SET |  |  |  |  |
| COMPL. REMOVAL | 94.39 | 70.93 | 71.86 | 36.61 |
| SELF (OURS) | 95.1 | 79.93 | 74.76 | 46.43 |
Entropy minimization The typical entropy loss for semi-supervised learning is shown in Fig. 6. It encourages the model to provide extreme predictions (such as 0 or 1) for each sample. Over a large number of samples, the model should balance its predictions over all classes.

The entropy loss can easily be applied to all samples to express the uncertainty about the provided labels. Alternatively, the loss can be combined with a strict filtering strategy, as in our work, which removes the labels of potentially wrongly labeled samples.

For a large noise ratio, the predictions of wrongly labeled samples fluctuate strongly over previous training iterations. Amplifying these network decisions could lead to even noisier models. Combined with iterative filtering, the framework will have to rely on a single noisy model snapshot. In the case of an unsuitable snapshot, the filtering step will make many wrong decisions.

# A.4 MORE EXPERIMENTAL RESULTS

# A.4.1 COMPLETE REMOVAL OF SAMPLES

Tab. 8 shows the results of deleting samples from the training set. Deletion leads to significant performance gaps compared to our strategy (SELF), which treats the removed samples as unlabeled data. In the case of a considerable label noise of $80\%$, the gap is close to $9\%$.

Continuously using the filtered samples leads to significantly better results. The unsupervised loss provides meaningful learning signals, which should be used for better model training.

![](images/4d5f0e7c6c3604ec4fd74423f53bce8465c183e77ed803276603bb1c21720619.jpg)
Figure 7: Sample training curves of our approach SELF on CIFAR-100 with (a) $60\%$ and (b) $80\%$ noise, using noisy validation data. Note that with our approach, the training loss remains close to 0. Further, note that the Mean Teacher continuously outperforms the noisy student models. This shows the positive effect of temporal ensembling to counter label noise.
![](images/be88f30a56c3e733345eb81698de4f0adb82592750ccfc5e8718a791bc95edc7.jpg)

# A.4.2 SAMPLE TRAINING PROCESS

Fig. 7 shows sample training processes of SELF under $60\%$ and $80\%$ noise on CIFAR-100. The Mean Teacher always outperforms the student models. Further, note that regular training leads to rapid over-fitting to the label noise.

In contrast, with our effective filtering strategy, both models slowly increase their performance while the training accuracy approaches $100\%$. Hence, by using progressive filtering, our model can erase the inconsistency in the provided label set.
# SELF-SUPERVISED LEARNING OF APPLIANCE USAGE

Chen-Yu Hsu, Abbas Zeitoun, Guang-He Lee, Dina Katabi & Tommi Jaakkola

Computer Science and Artificial Intelligence Lab

Massachusetts Institute of Technology

Cambridge, MA 02139, USA

{cyhsu,zeitoun,guanghe,dk}@mit.edu,tommi@csail.mit.edu

# ABSTRACT

Learning home
appliance usage is important for understanding people's activities and optimizing energy consumption. The problem is modeled as an event detection task, where the objective is to learn when a user turns an appliance on, and which appliance it is (microwave, hair dryer, etc.). Ideally, we would like to solve the problem in an unsupervised way so that the method can be applied to new homes and new appliances without any labels. To this end, we introduce a new deep learning model that takes input from two home sensors: 1) a smart electricity meter that outputs the total energy consumed by the home as a function of time, and 2) a motion sensor that outputs the locations of the residents over time. The model learns the distribution of the residents' locations conditioned on the home energy signal. We show that this cross-modal prediction task allows us to detect when a particular appliance is used, and the location of the appliance in the home, all in a self-supervised manner, without any labeled data.

# 1 INTRODUCTION

Learning home appliance usage patterns is useful for understanding user habits and optimizing electricity consumption. For example, knowing when a person uses their microwave, stove, oven, coffee machine or toaster provides information about their eating patterns. Similarly, understanding when they use their TV, air-conditioner, or washer and dryer provides knowledge of their behavior and habits. Such information can be used to encourage energy saving by optimizing appliance usage (Armel et al., 2013), to track the wellbeing of the elderly living alone (Donini et al., 2013; Debes et al., 2016), or to provide users with behavioral analytics (Zhou & Yang, 2016; Zipperer et al., 2013). This data is also useful for various businesses such as home insurance companies interested in assessing accident risks and utility companies interested in optimizing energy efficiency (Armel et al., 2013).
The problem can be modeled as event detection, i.e., given the total energy consumed by the house as a function of time, we want to detect when various appliances are turned on. Past work has looked at analyzing the energy signal from the home utility meter to detect when certain appliances are on. Most solutions, however, assume that the energy pattern for each appliance is unique and known, and use this knowledge to create labeled data for their supervised models (Kolter et al., 2010; Zhong et al., 2014; 2015; Kelly & Knottenbelt, 2015; Zhang et al., 2018; Bonfigli et al., 2018). Unfortunately, such solutions do not generalize well because the energy pattern of an appliance depends on its brand and can differ from one home to another (Kelly & Knottenbelt, 2015; Bonfigli et al., 2018). The literature also contains some unsupervised methods, but they typically have limited accuracy (Kim et al., 2011; Kolter & Jaakkola, 2012; Johnson & Willsky, 2013; Parson et al., 2014; Wytock & Kolter, 2014; Zhao et al., 2016; Lange & Berges, 2018).

Unsupervised event detection in a data stream is intrinsically challenging because we do not know what patterns to look for. In our task, not only may appliance energy patterns be unknown, but also the energy signal may include many background events unrelated to appliance activation, such as the fridge or HVAC power cycling events.

One way to address this challenge is to consider the self-supervised paradigm. If a different stream of data also observes the events of interest, we can use this second modality to provide self-supervising signals for event detection. To that end, we leverage the availability of new fine-resolution motion sensors which track the locations of people at home (Adib et al., 2015; Joshi et al., 2015; Li et al., 2016; Ghourchian et al., 2017; Hsu et al., 2017b). Such sensors operate as a consumer radar, providing decimeter-level location accuracy.
They do not require people to wear sensors on their bodies, can operate through walls, and track people's locations in different rooms. + +These location sensors indirectly observe the events of interest. Specifically, they capture the change in user locations as they reach out to an appliance to set it up or turn it on (e.g. put food in a microwave and turn it on). Hence, the output of such sensors can provide a second modality for self-supervision. + +But how should one design the model? We cannot directly use location as a label for appliance activation events. People can be next to an appliance but neither activate it nor interact with it. Moreover, we do not assume appliance locations are known a priori. We also cannot use the two modalities to learn a joint representation of the event in a shared space. This is because location and energy are unrelated most of the time and become related only when the event of interest occurs. Furthermore, there are typically multiple residents in the home, making it hard to tell which of them interacted with the appliance. + +Our model is based on cross-modal prediction. We train a neural network that, given the home energy at a particular time, predicts the location of the home residents. Our intuition is that appliance activation events have highly predictable locations, typically the location of the appliance. In contrast, background energy events (e.g. power cycling of the fridge) do not lead to predictable locations. Thus, our model uses this learned predictability along with the associated location and energy representation to cluster the events in the energy stream. In addition, we use a mixture distribution to disentangle irrelevant location information of other residents in the home. Interestingly, our model not only learns when each appliance is activated but also discovers the location of that appliance in the home, all without any labeled data. 
We summarize the contributions of this paper as follows:

- The paper introduces a new method for self-supervised event detection from weakly related data streams. The method combines neural cross-modal prediction with custom clustering based on the learned predictability and representation. We apply it to the task of detecting appliance usage events using unlabeled data from two sensors in the home: the energy meter and a location sensor.
- To evaluate our design, we have created the first dataset with concurrent streams of home energy and location data, collected from 4 homes over a period of 7 months. For each home, data was collected for 2 to 4 months. Ground truth measurements are provided via smart plugs connected directly to each appliance.
- Compared to past work on unsupervised learning of appliance usage and a new baseline that leverages the two modalities, our method achieves significant improvements of $67.3\%$ and $51.9\%$, respectively, in the average detection F1 score.

We will release our code and dataset to encourage future work on multi-modal models for understanding appliance usage patterns and the underlying user behavior.

# 2 RELATED WORK

Energy disaggregation Our work is related to past work on energy disaggregation, which refers to the problem of separating appliance-level energy from a home's total (or aggregate) energy signal. Past work in this domain can be broadly classified into two categories: supervised and unsupervised.

Supervised methods assume that the power signatures of individual appliances are available. They use data from individual appliances to obtain models for each appliance power signature, and then use those models to detect appliance events from the aggregate energy signal. Early work learns sparse codes for different appliances (Kolter et al., 2010) or uses a Factorial HMM (FHMM) (Ghahramani & Jordan, 1996) to model each appliance as an HMM (Zhong et al., 2014; 2015).
Other work uses matrix factorization approaches to estimate monthly energy breakdowns (Batra et al., 2017; 2018). More recently, neural networks have been used to model appliances (Kelly & Knottenbelt, 2015; Zhang et al., 2018; Jia et al., 2019; Bonfigli et al., 2018), where extracting appliance-level energy is formulated as a de-noising problem. However, supervised solutions typically do not generalize well to new homes (Kelly & Knottenbelt, 2015; Bonfigli et al., 2018). This is because two appliances of the same type (e.g. coffee machine) in different homes are often manufactured by different brands, and thus have different power signatures.

![](images/13c869bbfef0a3bed21df36b3eadd657d40025670c56558f0f46c44836a348b7.jpg)
(a) Aggregate energy data (over one day)

![](images/c48a2b2f75f834ab77291b2d09d3bf7ab0d1d4a6c15c74169fdef6ffdb7e5f39.jpg)
(c) Location data for two people over one minute

![](images/2d38485c167901a40a07fe0c73afc7214fa8223806a744be3e279014c707ae30.jpg)

![](images/e8a30df3290b986e616b31f0bff25d0e1a6b9126cfb254f62cbb2a178cc4d82c.jpg)
(b) Zoom in around 20:30 in (a)
(d) Location data for two people (top-down view)
Figure 1: Aggregate energy signal and people's indoor location data

Unsupervised methods do not assume prior knowledge of appliance signatures; they attempt to learn those signatures from the aggregate energy signal. Early approaches use variants of FHMM, and learn appliance HMMs with Expectation-Maximization (Kim et al., 2011), with approximate footprint extraction procedures (Kolter & Jaakkola, 2012), or by using expert knowledge to configure prior parameters (Johnson & Willsky, 2013; Parson et al., 2014). Some papers propose using contextual information (such as temperature, hour of the day, and day of the week) (Wytock & Kolter, 2014), or use event-based signal processing methods to cluster appliances (Zhao et al., 2016).
More recently, Lange & Berges (2018) proposed using a recurrent neural network as the variational distribution in learning the FHMM. In contrast, our work leverages people's location data as a self-supervising signal. We cluster appliance events through learning the relation between energy events and people's locations, and also learn appliance locations as a by-product. + +Passive location sensing Motivated by new in-home applications and continuous health monitoring, recent years have witnessed an increasing number of indoor location sensing systems (Adib et al., 2015; Joshi et al., 2015; Li et al., 2016; Ghourchian et al., 2017). They infer people's locations passively by analyzing how people change the surrounding radio signals (e.g. WiFi) and do not require people to wear any sensors. These sensors have been used for various applications including activity recognition (Wang et al., 2014; 2015), sleep monitoring (Zhao et al., 2017; Hsu et al., 2017a), mobility and behavioral sensing (Hsu et al., 2017b; 2019), and health monitoring (Kaltiokallio et al., 2012). In our work, we leverage the availability of such sensors to introduce location data as an additional data modality for learning appliance usage patterns. + +Self-supervised multi-modal learning Our work is related to a growing body of work on multimodal learning. Most approaches learn to encode the multi-modal data into a shared space (Gomez et al., 2017; Harwath et al., 2018; Owens & Efros, 2018; Zhao et al., 2018; 2019). In contrast, since our two modalities are mostly unrelated and become related only when an activation event happens, we learn to predict one modality conditioned on the other. Our work is also related to cross-modal prediction (Krishna et al., 2017; Owens et al., 2016; Zhang et al., 2017) but differs from it in an essential way. Past work on cross-modal prediction typically uses the prediction as the target outcome (e.g. output text for video captioning). 
In contrast, our objective is to discover the hidden appliance activation events. Thus, we design our method to leverage the learned predictability and cross-modal mapping for clustering activation events. Furthermore, we introduce a mixture prediction design to disentangle unrelated information in our predicted modality (location measurements unrelated to energy events). + +# 3 PROBLEM FORMULATION + +Our goal is to learn appliance activation events in an unsupervised way, using two input streams: home aggregate energy and residents' location data. Figure 1 shows the two data modalities. We describe each of them formally and define appliance "events" below. + +Aggregate energy signal A household's total energy consumption is measured by a utility meter regularly. This measures the sum of the energy consumed by all appliances at each point in time. We denote the aggregate energy signal by $\pmb{y} = (y_{1}, y_{2}, \dots, y_{T})$ , where $y_{t} \in \mathbb{R}_{+}$ . Suppose there are a total of $K$ appliances in a home, and each appliance's energy signal is denoted by $\pmb{x}_k = (x_{1,k}, x_{2,k}, \dots, x_{T,k})$ , where $x_{t,k} \in \mathbb{R}_{+}$ . Only the aggregate energy signal is observed, $y_{t} = \sum_{k=1}^{K} x_{t,k} + \epsilon_t$ , where $\epsilon_t \sim \mathcal{N}(0, \sigma^2)$ is the background noise. + +Figure 1a shows one day of an aggregate energy signal. The base power level shifts constantly throughout the day, depending on the background load (e.g. ceiling lights). Added on top of the base level are the various appliance events. Figure 1b zooms in around 20:30, and shows examples of those events. The stove was turned on around 20:28, and its power continued to cycle between a few levels. While the stove was on, the microwave was also turned on and ran for a few minutes, and the garbage disposer was turned on shortly. + +Indoor location data We use a single location sensor similar to that in Hsu et al. 
(2017b) to measure people's indoor locations passively. The sensor sends out radio signals and analyzes the reflections to localize multiple people. Similarly to a regular WiFi router, the sensor has a limited coverage area of up to 40 feet. Suppose there are $P_{t}$ people in the coverage area at time $t$ . The location data is denoted by $l_{t} = (l_{t,1}, l_{t,2}, \ldots, l_{t,P_{t}})$ , where $l_{t,p} \in \mathbb{R}^{2}$ is the x-y location of person $p$ at time $t$ . We can represent the location data over multiple time frames as $l_{1:T} = (l_{1}, l_{2}, \ldots, l_{T})$ . Figure 1c shows one minute of location data from two people, and Figure 1d shows the data from a top-down view. + +Appliance activation events When an appliance is turned on, it causes a jump in energy consumption, i.e. a leading edge in the energy signal, as shown in Figure 1b. We call such a pattern an appliance activation event. On the other hand, when an appliance changes its internal state, it can also cause a change in the energy signal as shown in the same figure. We call such a pattern a background event. We are interested in discovering activation events to learn appliance usage patterns. Thus, for each jump in the aggregate signal, we take a time window (of 25 seconds) centered around that jump, and analyze it to detect whether it is an activation event and which appliance it corresponds to. + +# 4 MODEL + +Our model operates on time windows (25 seconds) centered around jumps in aggregate energy signal, and the corresponding time windows of location data. The model aims to detect appliance activation events by finding windows with highly predictable user locations conditioned on the energy signal. + +Figure 2 shows our model. The idea underlying our model is to first learn a representation of appliance event windows that separates the information about appliance type, $z_{t,cat}$ , from the shape of the energy signal, $z_{t,cont}$ . 
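To make the event-window definition concrete, here is a minimal sketch (our own, assuming a 1 Hz meter reading, a fixed jump threshold, and 25-sample windows; not the authors' code) of extracting candidate windows around jumps in the aggregate signal:

```python
import numpy as np

def extract_event_windows(y, jump_threshold, half_width=12):
    """Find jumps in the aggregate energy signal y and return windows of
    2*half_width + 1 samples (~25 s at 1 Hz) centered on each jump."""
    jumps = np.flatnonzero(np.abs(np.diff(y)) > jump_threshold)
    windows = []
    for t in jumps:
        lo, hi = t - half_width, t + half_width + 1
        if lo >= 0 and hi <= len(y):  # keep only fully contained windows
            windows.append(y[lo:hi])
    return windows
```

Each returned window is then classified (Section 4) as an activation event of some appliance or as a background event.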
This is achieved through the appliance energy encoder $E$ . We can then use the appliance type encoding to predict the location data through the location predictor $L_{e}$ , which is conditioned on $z_{t,cat}$ . Since people's locations have information unrelated to appliance events, the total location predictor is a mixture of $L_{e}$ and a second module $L_{g}$ which captures event-independent location information. Below, we describe the design of these modules. More details about the neural network parameters and implementation are discussed in Appendix 8.4. + +Appliance Energy Encoder Given a window of aggregate energy signal $y_{t:t + w_1} = (y_t, y_{t + 1}, \dots, y_{t + w_1})$ , the encoder $E$ encodes the series into an event vector $z_t$ . We break the event vector into two parts: a categorical vector $z_{t,cat}$ and a continuous vector $z_{t,cont}$ . We aim to capture the appliance type with $z_{t,cat}$ (e.g. microwave vs. dishwasher) and use $z_{t,cont}$ to capture the variability within the signature of the same appliance. A softmax layer is applied to $z_{t,cat}$ to ensure that it is a valid distribution over appliance types. $E$ is parametrized using convolution layers, with one fully connected layer to produce $z_{t,cont}$ and another for $z_{t,cat}$ . We denote by $\theta_E$ the parameters of the encoder. + +![](images/e249ffa00b7de967951b169c939f0ae6b8e593b9a397b1c73525a2e0a9e445dd.jpg) +Figure 2: Model architecture. The model learns to encode energy signals into event vectors while learning to predict the concurrent location data. The location predictor $L_{e}$ is conditioned on the energy event $z_{t,cat}$ , and $L_{g}$ is conditioned on context features. The decoder $D$ takes event vectors and learns to reconstruct the original energy signal. 
+ +![](images/aa8563a081b8973ae38fc4273cd31ab12722b84e30c5fc55d59761fb157e01f8.jpg) + +![](images/cb29af38e9a278ff5bdad02a402c932cccfe09ff6d1d522b427b327f99e163f0.jpg) +Figure 3: Total energy signal (top) and location information (bottom) as seen by the model + +Location predictors We try to predict the location data conditioned on the appliance event, i.e. we predict a window of locations $\boldsymbol{l} = \boldsymbol{l}_{t:t + w_2} = (l_t, l_{t+1}, \dots, l_{t + w_2})$ centered around the appliance event. We handle multiple people's locations with a mixture model. Specifically, we use $L_e$ to predict locations related to energy events and $L_g$ to handle other locations. The final prediction is a mixture of predictions from $L_e$ and $L_g$ : + +$$ +p _ {\boldsymbol {\theta} _ {L}} (\boldsymbol {l} | \boldsymbol {y} _ {t: t + w _ {1}}, \boldsymbol {c}) = \alpha * p _ {\boldsymbol {\theta} _ {L _ {e}}} (\boldsymbol {l} | \boldsymbol {z} _ {t, c a t}) + (1 - \alpha) * p _ {\boldsymbol {\theta} _ {L _ {g}}} (\boldsymbol {l} | \boldsymbol {c}), +$$ + +where $p_{\theta_{L_e}}(\cdot)$ is parametrized by $L_{e}$ with parameters $\pmb{\theta}_{L_e}$ , $p_{\pmb{\theta}_{L_g}}(\cdot)$ is parametrized by $L_{g}$ with parameters $\pmb{\theta}_{L_g}$ , $\pmb{\theta}_L = \{\pmb{\theta}_{L_e},\pmb{\theta}_{L_g}\}$ , and $c$ includes context features. We use the number of people in the window (reported by the location sensor), the time of day, and the day of the week as the context features. The weight $\alpha$ depends on the number of people in the current window $\alpha = 1 / P_t$ . + +To represent location data, we blur each location measurement with a Gaussian kernel on the x-y plane to create an image, and process the window of locations $l_{t:t + w_2}$ into frames of images (Figure 3). 
We reuse the notation $l \in \mathbb{R}^{|X| \times |Y| \times |T|}$ to represent frames of location images, where $|X|, |Y|, |T|$ are the number of discretized points on the x-y and time dimensions. By presenting location data as images, we also remove the variable $P_t$ while handling a variable number of people in each frame. + +We choose $p_{\theta_{L_e}}(\boldsymbol{l}|\boldsymbol{z}_{t,cat})$ to be a multivariate Gaussian with a diagonal covariance structure: $p_{\theta_{L_e}}(\boldsymbol{l}|\boldsymbol{z}_{t,cat}) \triangleq \mathcal{N}(\boldsymbol{l}; \boldsymbol{\mu}_e, \boldsymbol{\Sigma}_e) = \prod_{x,y,t} \mathcal{N}(\boldsymbol{l}_{x,y,t}; \boldsymbol{\mu}_{x,y,t}, \boldsymbol{\sigma}_{x,y,t}^2)$ where $\boldsymbol{\mu}_e = L_e(\boldsymbol{z}_{t,cat}; \boldsymbol{\theta}_{L_e}) \in \mathbb{R}^{|X| \times |Y| \times |T|}$ and we choose $\boldsymbol{\sigma}_{x,y,t}$ to be a constant. We use 3D deconvolution networks to model $L_e$ , which takes $\boldsymbol{z}_{t,cat}$ as input and outputs the means of the location distributions. We model $p_{\theta_{L_g}}(\cdot)$ and $L_g$ in a similar way. + +During training, given a window of data $(l, y_{t:t + w_1}, c)$ , we minimize the negative log likelihood of the mixture distribution in predicting the locations: + +$$ +\mathcal {L} _ {l o c} \left(\boldsymbol {\theta} _ {E}, \boldsymbol {\theta} _ {L}\right) = - \log p _ {\boldsymbol {\theta} _ {L}} \left(\boldsymbol {l} \mid \boldsymbol {y} _ {t: t + w _ {1}}, \boldsymbol {c}\right). +$$ + +Note that since the gradient flows through $z_{t,cat}$ , the likelihood is a function of both $\theta_E$ and $\theta_L$ . Hence, the encoder $E$ also learns to encode the energy series based on the concurrent location data. + +Energy Decoder The decoder $D$ takes both $\mathbf{z}_{t,cat}$ and $\mathbf{z}_{t,cont}$ and learns to reconstruct the original input energy series by predicting $\hat{\mathbf{y}}_{t:t + w_1}$ . 
The decoder $D$ is parametrized using deconvolution layers. We minimize the reconstruction loss during training:

$$
\mathcal{L}_{rec}(\boldsymbol{\theta}_E, \boldsymbol{\theta}_D) = \left\| \hat{\boldsymbol{y}}_{t:t+w_1} - \boldsymbol{y}_{t:t+w_1} \right\|_2 .
$$

The reconstruction loss encourages the encoder $E$ to produce good initial vectors for $L_{e}$ to predict locations. At the same time, it serves as a regularizer to prevent encoder $E$ from generating meaningless vectors by overfitting the location predictions.

Training We train all components to jointly optimize the location predictions and energy reconstruction. We minimize the total loss $\mathcal{L}_{total} = \mathcal{L}_{loc} + \lambda \cdot \mathcal{L}_{rec}$ over all windows of data, where $\lambda$ is a parameter that balances the two terms. The training details are discussed in Appendix 8.4.

# 4.1 CLUSTERING APPLIANCE EVENTS WITH CROSS-MODAL PREDICTIONS

Once the model is trained, we obtain for each window of energy data its appliance event vector $\mathbf{z}_{t,\text{cat}}$ and its cross-modal location prediction $p_{\theta_{L_e}}(\cdot | \mathbf{z}_{t,\text{cat}})$. Next, we use these two vectors for clustering. We design a density-based clustering algorithm leveraging the cross-modal relation we learned. Our intuition is that activation events for the same appliance will cluster together since they have the same appliance type and the same location. We omit the $\text{cat}$ subscript below for brevity.

It is typically difficult to cluster in a space learned by a neural encoder because the transformation is highly non-linear and the distance metric is not well-defined. We circumvent this problem by associating the encoded space with a Euclidean space, in which we can easily measure distance.
Specifically, for two event vectors $z_{1}$ and $z_{2}$ , we can measure their distance in the location space using $p_{\theta_{L_e}}(\cdot |z_1)$ and $p_{\theta_{L_e}}(\cdot |z_2)$ . + +The location prediction $p_{\theta_{L_e}}(\cdot | \mathbf{z}_i)$ represents the likelihood of observing any location $l_{x,y,t}$ around the time of the appliance event. We found that for events related to human activities (e.g., turning on a kettle or microwave), $p_{\theta_{L_e}}(\cdot | \mathbf{z}_i)$ shows a peak value at the location of the appliance in the x-y space at the time when a person interacted with the appliance. For events not related to human activities (e.g. fridge cycles or random background events), $p_{\theta_{L_e}}(\cdot | \mathbf{z}_i)$ has low values and is diffused. + +We define the location predictability score (or the confidence of location prediction) as $s(z_i) = \max_{x,y,t} p_{\pmb{\theta}_{L_e}}(\pmb{l}_{x,y,t} = 1|\pmb{z}_i)$ , and the location distance $D_{\mathrm{loc}}$ between two events as: $D_{\mathrm{loc}}(z_1,z_2) = ||(x_1^* - x_2^*, y_1^* - y_2^*)||_2$ , where $(x_i^*, y_i^*, t_i^*) = \arg \max_{x,y,t} p_{\pmb{\theta}_{L_e}}(\pmb{l}_{x,y,t} = 1|\pmb{z}_i)$ . Similarly, the neighborhood distance $D_{\mathrm{nb}}$ between two events is defined as $D_{\mathrm{nb}}(z_1,z_2) = ||z_1 - z_2||_2$ . + +Our clustering algorithm starts with a $\mathbf{z}_i$ with high predictability score $s(\mathbf{z}_i)$ . It expands the cluster around $\mathbf{z}_i$ 's local neighborhood in the $\mathbf{z}$ space. It stops expanding if a neighbor's location distance $D_{\mathrm{loc}}$ is too far from the cluster center. If all neighbors of the current cluster are visited and none has a small enough $D_{\mathrm{loc}}$ , we start a new cluster from another event with high predictability score. The algorithm is described formally in Algorithm 1. We discuss the choice of parameters in Appendix 8.5. 
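The procedure just described can be sketched in plain Python. This is our own condensed rendering, not the authors' code: an iterative flood-fill loop stands in for the recursive cluster expansion, and the default thresholds are the values reported in Appendix 8.5.

```python
import numpy as np

def el_scan(z, s, loc, eta_s=0.2, eta_dloc=0.4, eta_z=0.03, n_min=10):
    """Sketch of the clustering procedure. z: (N, d) event vectors,
    s: (N,) location predictability scores, loc: (N, 2) peak x-y
    locations in meters from the location predictor."""
    unvisited = {i for i in range(len(z)) if s[i] > eta_s}  # keep predictable events
    clusters = []
    while unvisited:
        seed = max(unvisited, key=lambda i: s[i])   # most predictable remaining event
        unvisited.remove(seed)
        cluster, frontier = [seed], [seed]
        while frontier:                              # expand around the seed
            i = frontier.pop()
            center = loc[cluster].mean(axis=0)       # current cluster center (x-y)
            nb = [j for j in unvisited
                  if np.linalg.norm(z[j] - z[i]) < eta_z            # D_nb in z-space
                  and np.linalg.norm(loc[j] - center) < eta_dloc]   # D_loc in meters
            unvisited -= set(nb)
            cluster += nb
            frontier += nb
        if len(cluster) >= n_min:                    # discard tiny clusters
            clusters.append(cluster)
    return clusters
```

In practice the per-event location `loc[i]` is the $\arg\max$ of $p_{\theta_{L_e}}(\cdot|z_i)$ as defined above; events whose score never exceeds $\eta_s$ (e.g., background fridge cycles) are simply never clustered.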
Algorithm 1 Clustering energy events with the learned cross-modal relations
Input: $\{\pmb{z}_i\}$ and $s(\cdot)$ : event vectors and their location predictability scores; $\eta_{s}, \eta_{D_{\mathrm{loc}}}, \eta_{z}$ : thresholds for the predictability score, location distance, and neighborhood distance; $N_{\mathrm{min}}$ : the minimum number of samples to form a valid cluster
Output: Clusters of appliance activation events that are associated with a consistent location
1: procedure EL-SCAN( $\eta_s, \eta_{D_{\mathrm{loc}}}, \eta_z, N_{\mathrm{min}}$ )
2: $\mathcal{Z} \gets \{\pmb{z}_i \mid s(\pmb{z}_i) > \eta_s\}$ , $k \gets 0$
3: while $\mathcal{Z} \neq \emptyset$ do
4: $\pmb{z}_{\mathrm{seed}} \gets \arg\max_{\pmb{z}_i \in \mathcal{Z}} s(\pmb{z}_i)$
5: $\mathrm{cluster}_k \gets \{\pmb{z}_{\mathrm{seed}}\}$ , $k \gets k + 1$ ▷ Start a new cluster
6: ExpandCluster( $k$ , $\pmb{z}_{\mathrm{seed}}$ , $\eta_{D_{\mathrm{loc}}}$ , $\eta_z$ )
7: end while
8: return clusters with at least $N_{\mathrm{min}}$ examples
9: end procedure
10:
11: function ExpandCluster( $k$ , $\pmb{z}$ , $\eta_{D_{\mathrm{loc}}}$ , $\eta_z$ )
12: $\pmb{z}_{u_k} \gets$ compute current cluster center
13: $\mathcal{Z}_{\mathrm{nb}} \gets \{\pmb{z}_i \in \mathcal{Z} \mid D_{\mathrm{nb}}(\pmb{z}_i, \pmb{z}) < \eta_z$ and $D_{\mathrm{loc}}(\pmb{z}_i, \pmb{z}_{u_k}) < \eta_{D_{\mathrm{loc}}}\}$ ▷ Find valid neighbors
14: $\mathcal{Z} \gets \mathcal{Z} \setminus \mathcal{Z}_{\mathrm{nb}}$
15: $\mathrm{cluster}_k \gets \mathrm{cluster}_k \cup \mathcal{Z}_{\mathrm{nb}}$
16: Repeat ExpandCluster( $\cdot$ ) for all $\pmb{z}_i$ in $\mathcal{Z}_{\mathrm{nb}}$
17: end function

# 5 DATASET

We collected concurrent streams of aggregate energy signal and location data from 4 homes over 7 months. We use this dataset for our evaluation. To obtain ground truth labels of appliance events, we deployed programmable smart plugs on the power outlet associated with each appliance. Since not all appliances can be measured by a smart plug (e.g. some appliances do not have accessible power outlets), we also developed a labeling tool for manual labeling.
The tool allows labelers to label appliance events from the aggregate energy signal, with the help of smart plug data and information collected from the home residents. The choice of sensors and their sampling rates are detailed in Appendix 8.1. + +# 6 RESULTS + +We evaluate our model and clustering algorithm on unsupervised appliance activation event detection and their learned appliance locations. We name our approach SAPPLE (Self-supervised APPliance usage Learning). + +# 6.1 UNSUPERVISED APPLIANCE EVENT DETECTION + +For appliance event detection, we compare with four baselines. Our method and two baselines have access to location information. EL-Kmeans takes both energy and location data as input and directly clusters them using k-means (Arthur & Vassilvitskii, 2007). $E$ -only-Kmeans clusters only the energy signal with k-means. Methods with location information pre-filter the events and discard events without any location data, as they are unlikely to be activation events. The other two baselines only take the total energy signal as input: AFAMAP (Kolter & Jaakkola, 2012) uses factorial HMM, and VarBOLT (Lange & Berges, 2018) uses a recurrent neural network to model aggregate appliance signals. We use publicly available implementations for these methods (implementation, a;b). + +We use the same hyper-parameters for the network architecture, training, and clustering algorithm across all homes. As our clustering algorithm is non-parametric, we choose the same number of clusters that it discovers for other methods if possible. For VarBOLT, we report results using 10 clusters, since the training time grows exponentially with the number of clusters and training with more clusters is prohibitively slow. As in past unsupervised work, we report the detection F1 scores based on the best cluster assignments with the ground truth appliances. 
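Since discovered clusters carry no appliance names, scoring requires mapping each cluster to a ground-truth appliance first. A small self-contained sketch of one common matching step is shown below; the one-to-one brute-force assignment maximizing average F1 is our illustrative assumption, as the exact matching procedure is not spelled out here.

```python
from itertools import permutations

def f1(tp, n_pred, n_true):
    """F1 from true-positive count, #detected events, #ground-truth events."""
    if tp == 0:
        return 0.0
    prec, rec = tp / n_pred, tp / n_true
    return 2 * prec * rec / (prec + rec)

def best_assignment_f1(tp, n_pred, n_true):
    """tp[c][a]: detections in cluster c matching appliance a; n_pred[c]:
    total detections in cluster c; n_true[a]: ground-truth events of
    appliance a. Brute-force one-to-one matching (fine for a handful of
    appliances); requires at least as many clusters as appliances."""
    n_c, n_a = len(tp), len(n_true)
    best = 0.0
    for perm in permutations(range(n_c), n_a):  # perm[a] = cluster for appliance a
        score = sum(f1(tp[c][a], n_pred[c], n_true[a])
                    for a, c in enumerate(perm)) / n_a
        best = max(best, score)
    return best
```

For larger appliance counts, the same matching can be done in polynomial time with a Hungarian-algorithm solver instead of enumerating permutations.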
Table 1 shows that SAPPLE has an average detection F1 score of $72.8\%$, outperforming the other baselines, whose average scores range from $4.0\%$ to $20.9\%$. As reported by Bonfigli et al. (2018), AFAMAP performs better when appliance-level data is available for training the HMMs. In the unsupervised setting, however, its footprint extraction procedure does not always produce meaningful HMMs for individual appliances (Bonfigli et al., 2018; Beckel et al., 2014), causing degraded performance. VarBOLT's training objective focuses on explaining the total amount of energy in a home. Thus, it often uses multiple components (clusters) to model appliances that are on for a long period (e.g. fridge, heater, and dryer/washer). These types of appliances generate many background events, making the algorithm focus less on activation events of other appliances.

Comparing our method with baselines that also have location information (E-only-Kmeans and EL-Kmeans), our approach still outperforms them significantly. E-only-Kmeans performs better than AFAMAP and VarBOLT, showing that the presence of location data is highly related to activation events. However, naively using location data for clustering does not improve the results by much, as EL-Kmeans performs only slightly better than E-only-Kmeans. This is because not all location data is related to appliance events and vice versa. Our approach "cleans up" the data by learning the relation between the two modalities and discovers clusters with strong cross-modal predictability.

Table 2 shows a breakdown of our results for different appliances.

# 6.2 ABLATION STUDY

We perform an ablation study to show that all components of our method contribute to the results. As shown in Table 3, we compare our clustering algorithm (Method 1) with a different algorithm that concatenates the learned multi-modal embeddings ($z_{t,cat}$ and $p_{\theta_{L_e}}(\cdot|z_{t,cat})$) and directly clusters

Table 1: Unsupervised Appliance Event Detection.
Average F1 scores $(\%)$ of all appliances. SAPPLE, EL-Kmeans, and E-only-Kmeans use location information; AFAMAP and VarBOLT use only the total energy signal.

| | N appliances | SAPPLE | EL-Kmeans | E-only-Kmeans | AFAMAP | VarBOLT |
| --- | --- | --- | --- | --- | --- | --- |
| Home 1 | 8 | 82.3 | 26.9 | 10.2 | 6.0 | 4.4 |
| Home 2 | 8 | 69.1 | 19.5 | 19.8 | 5.5 | 3.7 |
| Home 3 | 6 | 76.2 | 16.5 | 15.1 | 4.1 | 3.3 |
| Home 4 | 6 | 63.6 | 20.5 | 20.3 | 6.5 | 4.5 |
| Average | - | 72.8 | 20.9 | 16.4 | 5.5 | 4.0 |
+ +Table 2: Unsupervised Appliance Event Detection. Our method's F1 score $(\%)$ for each appliance. + +
| Appliance | Home 1 | Home 2 | Home 3 | Home 4 |
| --- | --- | --- | --- | --- |
| Kettle | 91.9 | - | - | 98.6 |
| Hair dryer | 88.0 / 98.3 | - | - | 1.1 |
| Coffee machine | 96.1 | 75.6 | 90.4 | - |
| Microwave | 81.9 | 82.1 | 88.1 | 96.7 |
| Stove-activation | 90.6 | 76.7 | - | 92.7 |
| Disposer | 62.5 | 78.5 | 53.1 | - |
| Toaster | - | 49.1 | 71.1 | - |
| Blender | - | 6.2 | - | - |
| Dryer | - | 88.1 | - | - |
| Iron | - | - | 71.1 | - |
| Rice Cooker | - | - | - | 0.0 |
| Others | 49.4 | 96.1 | 83.4 | 92.3 |
| Average | 82.3 | 69.1 | 76.2 | 63.6 |
+ +Table 3: Ablation study. + +
| | Methods | Avg. F1 (%) |
| --- | --- | --- |
| 1 | SAPPLE | 72.8 |
| 2 | Learned embeddings + K-means | 58.8 |
| 3 | Remove $L_g$ | 68.5 |
| 4 | Remove $L_e$ & $L_g$ | 27.6 |
them with k-means (Method 2). Our clustering algorithm is more effective than directly clustering the multi-modal embeddings, providing an improvement of $14.0\%$ in the average F1 score. This is because our clustering algorithm treats the two modalities differently. For location predictions, we can leverage our understanding of physical distance to set cluster boundaries. For the energy embedding, since it is a non-linear mapping with no clear distance metric, our algorithm iteratively groups together events in the embedding neighborhood that have approximately the same locations.

Apart from our clustering algorithm, we evaluate the benefits of our mixture component $L_{g}$ by experimenting with removing $L_{g}$ from the model, which reduces the F1 score by $4.3\%$ (Method 1 vs 3). This shows the importance of having $L_{g}$ extract background motion to allow the location predictor $L_{e}$ to focus on modeling the person who interacts with the appliance.

We also consider removing both $L_{g}$ and $L_{e}$, and clustering the input based only on the energy embedding $z_{t,cat}$, since there are no learned location predictions in this case. The results shown under Method 4 demonstrate the importance of the location embedding generated by the combination of $L_{g}$ and $L_{e}$.

# 6.3 LEARNED APPLIANCE LOCATIONS

Our model also learns the locations where people interacted with appliances, which are typically close to the appliances' physical locations (we discuss remotely activated appliances in Appendix 8.6). For each appliance event, we take the location predicted by $L_{e}$ with the highest predictability score, and compare that with the ground truth appliance location measured by a laser meter. The average location prediction error is 0.65 meters with a standard deviation of 0.17 meters across homes. The errors are mostly due to location offsets between the person and the appliance.
Figure 4a shows the location predictions and the corresponding ground truth locations for several appliance events in Home 1. The corresponding energy signals are shown in Figure 4b - Figure 4e.

The location information also helps disambiguate appliances with similar energy signals. For example, although the hair dryer and kettle (Figure 4d and Figure 4e) have very similar energy signatures, their different locations (green and orange in Figure 4a) guide the model to encode their events differently.

![](images/faa4ca2586587adcdec07928dc05896d788156555b875504e1740edbcc4307a7.jpg)
(a) Learned locations

![](images/9d152c54aa77d20ea19f19378a8e547abcaf53bb200f37a4922a05f86f2de709.jpg)
(b) Microwave

![](images/261cb58b9dfdd594ef6834e3df8611a2af83871b5fba4562f704c2e558e655f1.jpg)
(c) Disposer

![](images/463277c6ff30ff6a4b066399d8bd8655b6ac0b2b196598ab73cac3595a90138c.jpg)
(d) Hair dryer

![](images/0b04adf65de8588bc5d26b89cae7ceb259baa514c9e7a406a0335b1115f7ec0a.jpg)
(e) Kettle
Figure 4: Energy signals of discovered activation events and their learned locations from $L_{e}$.

# 6.4 LOCATION PREDICTIONS OF $L_{e}$ VS $L_{g}$

We visualize the location predictions from the event-related predictor $L_{e}$ and the event-independent predictor $L_{g}$ to illustrate how they handle scenarios with multiple people. Figure 5 shows an example of how the mixture design handles the two types of locations. Since $L_{e}$ is conditioned on energy events, it naturally learns to predict locations related to appliance events. In this case, the location of the hair dryer is predicted by $L_{e}$ (Figure 5b). On the other hand, $L_{g}$ predicts the typical locations where people tend to stay (e.g., the couch in Figure 5c) based on the context. Having $L_{g}$ explain the other locations helps $L_{e}$ focus on learning the event-related locations.
![](images/6b3477a08c571ece1654bb015de78964a9fe0bf13ad9887cb89d631ecd35efde.jpg)
(a) Observed locations (two people) (b) $L_{e}$ 's prediction (event-related) (c) $L_{g}$ 's prediction (other locations)
Figure 5: Observed locations and predictions of $L_{e}$ and $L_{g}$ at a given time for a hair dryer event.

![](images/8d56ed0fc6cfd4487d8c7f5f171faa253d62051168ec8a4ec6913cf040793cdc.jpg)

![](images/a6c4de50c2cb79aaefd0cd1f3e781ebfd0ffbba4c3164a38358d61514436970d.jpg)

# 6.5 CONTEXTUAL LOCATION INFORMATION AND CLUSTER VISUALIZATIONS

In Appendix 8.2, we discuss contextual relations between indoor locations that emerge from learning cross-modal predictions. We also visualize the learned event vectors in Appendix 8.3 to shed light on the design rationales behind our clustering algorithm.

# 7 CONCLUSION

We introduced a self-supervised solution for learning appliance usage patterns in homes. We infer appliance usage by learning from data streams of two modalities: the total energy consumed by the home and the residents' location data. Our approach significantly improves unsupervised appliance event detection, and learns appliance locations and usage patterns without any supervision.

# ACKNOWLEDGMENTS

The authors thank the members of NETMIT at MIT and the reviewers for their feedback and helpful comments. We thank the participants in our study for facilitating the sensor deployments in their homes. We also thank the various companies sponsoring the MIT Center for Wireless Networks and Mobile Computing.

# REFERENCES

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

Fadel Adib, Zachary Kabelac, and Dina Katabi. Multi-person localization via rf body reflections.
In 12th USENIX Symposium on Networked Systems Design and Implementation (NSDI 15), pp. 279-292, 2015. +K Carrie Armel, Abhay Gupta, Gireesh Shrimali, and Adrian Albert. Is disaggregation the holy grail of energy efficiency? the case of electricity. Energy Policy, 52:213-234, 2013. +David Arthur and Sergei Vassilvitskii. k-means++: The advantages of careful seeding. In Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms, pp. 1027-1035. Society for Industrial and Applied Mathematics, 2007. +Nipun Batra, Hongning Wang, Amarjeet Singh, and Kamin Whitehouse. Matrix factorisation for scalable energy breakdown. In Thirty-First AAAI Conference on Artificial Intelligence, 2017. +Nipun Batra, Yiling Jia, Hongning Wang, and Kamin Whitehouse. Transferring decomposed tensors for scalable energy breakdown across regions. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. +Christian Beckel, Wilhelm Kleiminger, Romano Cicchetti, Thorsten Staake, and Silvia Santini. The eco data set and the performance of non-intrusive load monitoring algorithms. In Proceedings of the 1st ACM Conference on Embedded Systems for Energy-Efficient Buildings, pp. 80-89. ACM, 2014. +Roberto Bonfigli, Andrea Felicetti, Emanuele Principi, Marco Fagiani, Stefano Squartini, and Francesco Piazza. Denoising autoencoders for non-intrusive load monitoring: improvements and comparative evaluation. Energy and Buildings, 158:1461-1474, 2018. +Christian Debes, Andreas Merentitis, Sergey Sukhanov, Maria Niessen, Nikolaos Frangiadakis, and Alexander Bauer. Monitoring activities of daily living in smart homes: Understanding human behavior. IEEE Signal Processing Magazine, 33(2):81-94, 2016. +Lorenzo Maria Donini, Eleonora Poggiogalle, Maria Piredda, Alessandro Pinto, Mario Barbagallo, Domenico Cucinotta, and Giuseppe Sergi. Anorexia and eating patterns in the elderly. *PloS one*, 8 (5):e63539, 2013. +emonPi. Open energy monitor https://openenergymonitor.org/. 
+Zoubin Ghahramani and Michael I Jordan. Factorial hidden markov models. In Advances in Neural Information Processing Systems, pp. 472-478, 1996. +Negar Ghourchian, Michel Allegue-Martinez, and Doina Precup. Real-time indoor localization in smart homes using semi-supervised learning. In Twenty-Ninth IAAI Conference, 2017. +Lluis Gomez, Yash Patel, Marçal Rusinol, Dimosthenis Karatzas, and CV Jawahar. Self-supervised learning of visual features through embedding images into text topic spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4230-4239, 2017. +David Harwath, Adria Recasens, Dídac Surís, Galen Chuang, Antonio Torralba, and James Glass. Jointly discovering visual objects and spoken words from raw sensory input. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 649-665, 2018. +Chen-Yu Hsu, Aayush Ahuja, Shichao Yue, Rumen Hristov, Zachary Kabelac, and Dina Katabi. Zero-effort in-home sleep and insomnia monitoring using radio signals. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 1(3):1-18, 2017a. +Chen-Yu Hsu, Yuchen Liu, Zachary Kabelac, Rumen Hristov, Dina Katabi, and Christine Liu. Extracting gait velocity and stride length from surrounding radio signals. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 2116-2126. ACM, 2017b. +Chen-Yu Hsu, Rumen Hristov, Guang-He Lee, Mingmin Zhao, and Dina Katabi. Enabling identification and behavioral sensing in homes using radio reflections. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-13, 2019. + +AFAMAP implementation. https://github.com/beckel/nilm-eval/tree/master/Matlab/algorithms/kolter_alg.a. +Variational BOLT implementation. https://github.com/INFERLab/varbolt, b. +Yiling Jia, Nipun Batra, Hongning Wang, and Kamin Whitehouse. A tree-structured neural network model for household energy breakdown. In The World Wide Web Conference, pp. 
2872-2878, 2019. +Matthew J Johnson and Alan S Willsky. Bayesian nonparametric hidden semi-markov models. Journal of Machine Learning Research, 14(Feb):673-701, 2013. +Kiran Joshi, Dinesh Bharadia, Manikanta Kotaru, and Sachin Katti. Video: fine-grained device-free motion tracing using rf backscatter. In 12th USENIX Symposium on Networked Systems Design and Implementation (NSDI 15), pp. 189-204, 2015. +Ossi Kaltiokallio, Maurizio Bocca, and Neal Patwari. Follow@ grandma: Long-term device-free localization for residential monitoring. In Local Computer Networks Workshops (LCN Workshops), 2012 IEEE 37th Conference on, pp. 991-998. IEEE, 2012. +Jack Kelly and William Knottenbelt. Neural nilm: Deep neural networks applied to energy disaggregation. In Proceedings of the 2nd ACM International Conference on Embedded Systems for Energy-Efficient Built Environments, pp. 55-64. ACM, 2015. +Hyungsul Kim, Manish Marwah, Martin Arlitt, Geoff Lyon, and Jiawei Han. Unsupervised disaggregation of low frequency power measurements. In Proceedings of the 2011 SIAM international conference on data mining, pp. 747-758. SIAM, 2011. +Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. +J Zico Kolter and Tommi Jaakkola. Approximate inference in additive factorial hmms with application to energy disaggregation. In Artificial intelligence and statistics, pp. 1472-1482, 2012. +J Zico Kolter, Siddharth Batra, and Andrew Y Ng. Energy disaggregation via discriminative sparse coding. In Advances in Neural Information Processing Systems, pp. 1153-1161, 2010. +Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. Dense-captioning events in videos. In Proceedings of the IEEE international conference on computer vision, pp. 706-715, 2017. +Henning Lange and Mario Berges. Variational bolt: approximate learning in factorial hidden markov models with application to energy disaggregation. 
In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. +Xiang Li, Shengjie Li, Daqing Zhang, Jie Xiong, Yasha Wang, and Hong Mei. Dynamic-music: accurate device-free indoor localization. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, pp. 196-207. ACM, 2016. +Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579-2605, 2008. +Andrew Owens and Alexei A Efros. Audio-visual scene analysis with self-supervised multisensory features. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 631-648, 2018. +Andrew Owens, Phillip Isola, Josh McDermott, Antonio Torralba, Edward H Adelson, and William T Freeman. Visually indicated sounds. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2405-2413, 2016. +Oliver Parson, Siddhartha Ghosh, Mark Weal, and Alex Rogers. An unsupervised training method for non-intrusive appliance load monitoring. Artificial Intelligence, 217:1-19, 2014. +TP-link. Tp-link smart plug hs110 https://www.tp-link.com/uk/home-networking/smart-plug/hs110/. + +Wei Wang, Alex X Liu, Muhammad Shahzad, Kang Ling, and Sanglu Lu. Understanding and modeling of wifi signal based human activity recognition. In Proceedings of the 21st annual international conference on mobile computing and networking, pp. 65-76. ACM, 2015. +Yan Wang, Jian Liu, Yingying Chen, Marco Gruteser, Jie Yang, and Hongbo Liu. E-eyes: device-free location-oriented activity identification using fine-grained wifi signatures. In Proceedings of the 20th annual international conference on Mobile computing and networking, pp. 617-628. ACM, 2014. +Matt Wytock and J Zico Kolter. Contextually supervised source separation with application to energy disaggregation. In Twenty-eighth AAAI conference on artificial intelligence, 2014. +Chaoyun Zhang, Mingjun Zhong, Zongzuo Wang, Nigel Goddard, and Charles Sutton. 
Sequence-to-point learning with neural networks for non-intrusive load monitoring. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. +Richard Zhang, Phillip Isola, and Alexei A Efros. Split-brain autoencoders: Unsupervised learning by cross-channel prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1058-1067, 2017. +Bochao Zhao, Lina Stankovic, and Vladimir Stankovic. On a training-less solution for non-intrusive appliance load monitoring using graph signal processing. IEEE Access, 4:1784-1799, 2016. +Hang Zhao, Chuang Gan, Andrew Rouditchenko, Carl Vondrick, Josh McDermott, and Antonio Torralba. The sound of pixels. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 570-586, 2018. +Hang Zhao, Chuang Gan, Wei-Chiu Ma, and Antonio Torralba. The sound of motions. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1735-1744, 2019. +Mingmin Zhao, Shichao Yue, Dina Katabi, Tommi S Jaakkola, and Matt T Bianchi. Learning sleep stages from radio signals: A conditional adversarial architecture. In International Conference on Machine Learning, pp. 4100-4109, 2017. +Mingjun Zhong, Nigel Goddard, and Charles Sutton. Signal aggregate constraints in additive factorial hmms, with application to energy disaggregation. In Advances in Neural Information Processing Systems, pp. 3590-3598, 2014. +Mingjun Zhong, Nigel Goddard, and Charles Sutton. Latent bayesian melding for integrating individual and population models. In Advances in neural information processing systems, pp. 3618-3626, 2015. +Kaile Zhou and Shanlin Yang. Understanding household energy consumption behavior: The contribution of energy big data analytics. Renewable and Sustainable Energy Reviews, 56:810-819, 2016. +Adam Zippperer, Patricia A Aloise-Young, Siddharth Suryanarayanan, Robin Roche, Lieko Earle, Dane Christensen, Pablo Bauleo, and Daniel Zimmerle. 
Electric energy management in the smart home: Perspectives on enabling technologies and consumer behavior. Proceedings of the IEEE, 101(11):2397-2408, 2013. + +# 8 APPENDIX + +# 8.1 SENSORS DETAILS + +In this section, we describe details of the sensors used in our dataset collection. + +Aggregate energy signal For flexible data collection, we install a sensor (emonPi) at the main circuit breaker in each house as a proxy for the utility meter. We programmed the sensor to collect the raw aggregate energy signal at $1.2\mathrm{kHz}$ . We down-sampled the data to $10\mathrm{Hz}$ for our problem to emulate the achievable data rate from a utility meter hardware (Armel et al., 2013). + +Location data The wireless location sensor is built on a design similar to Hsu et al. (2017b). It is a single stand-alone sensor that hangs on the wall, and passively collects multiple people's locations with decimeter-level accuracy. We down-sampled the location streams to $1\mathrm{Hz}$ + +Appliance-level data (for ground truth labeling) We use TP-Link smart plugs (TP-link) with energy monitoring features for collecting appliance-level data. We wrote custom software using available APIs to collect appliance energy signals at $1\mathrm{Hz}$ . For appliances that cannot be connected to a smart plug, we asked the residents to write down appliance usage times to help with manual labeling. + +# 8.2 CONTEXTUAL LOCATION INFORMATION VIA LEARNED APPLIANCE EVENTS + +By analyzing the location predictions of $L_{e}$ conditioned on different appliance events, we also discover interesting contextual relations between different indoor locations. Figure 6 visualizes the location predictions at different frames around a kettle event. We plot the per-frame location predictability score (or prediction confidence) over time in Figure 6a. The score peaks around $t = 0\mathrm{s}$ , the time of the event. 
This is because when people turn on a kettle, they may approach it from different locations, but the location where they push the button is consistent and can be predicted confidently. As a result, the prediction at $t = 0\mathrm{s}$ correctly shows the kettle's location (Figure 6d).

Interestingly, a smaller peak in the predictability score appears at $t = -10\mathrm{s}$ in Figure 6a. If we look at the location prediction from $t = -10\mathrm{s}$ to $t = 0\mathrm{s}$ (Figure 6b - Figure 6d), we see how the prediction moves from the sink to the kettle. This is because people often fill water at the sink before starting the kettle. Through learning the cross-modal relation, contextual information among locations also emerges as different appliance events are discovered.

![](images/fb2d3798370c896c060800a743ff4a467f0bae361e265b7e91171d97a3f3937d.jpg)
(a) Prediction confidence over time

![](images/268d7522007a8ad603ab6d2d39945c05d5644f0bc951e7cfdec4772a716b35d0.jpg)
(b) $t = -10\mathrm{s}$

![](images/116c3879fc38f3a4f8e999157ff99a1c05ed9b6626f0185ca6de63dbcf4e772c.jpg)
(c) $t = -6\mathrm{s}$

![](images/614c7492377320d3d6e7275a25bf0a89aaefda8424e7c2b48077c35961a3aebe.jpg)
(d) $t = 0\mathrm{s}$
Figure 6: Visualizing location predictions at different times conditioned on a kettle event.

# 8.3 VISUALIZATION OF THE LEARNED EVENT VECTORS AND LOCATION PREDICTABILITY

To illustrate what the model learns and the design rationales behind our clustering algorithm, we visualize the space of the learned event vectors $\mathbf{z}_{t,\mathrm{cat}}$ and their location predictability scores $s(\mathbf{z})$. Figure 7 shows the t-SNE (Maaten & Hinton, 2008) visualization of the event vectors on a 2-dimensional space. We color-code the events with three metrics: location predictability scores (Figure 7a), cluster ID discovered by our algorithm (Figure 7b), and ground truth label (Figure 7c). The predictability score depends on how strongly an appliance event co-occurred with a particular location. As shown in Figure 7a, most appliances related to human activities have high predictability scores (e.g., kettle, hairdryer, microwave, coffee machine, etc.). On the other hand, appliances that cycle in the background (e.g., heater) have very low predictability. The stove has many clusters of background events. This is because when the stove is on, it cycles between a few power levels, and the cycle durations depend on the heating levels. Interestingly, we found that stove clusters with higher power levels ("stove-big-cycle") also have high predictability scores, while others with cycling states ("stove-cycle") show low scores. This is likely because people are next to the stove more often when the heating level is high.

We can also see that without using both the location predictions and the event vectors for clustering, it is hard to separate some of the cluster boundaries. Moreover, learning to relate energy events to location data enables us to measure the distances of events in a well-defined physical space.

![](images/be7757cdc543b692c44089d537f050df60b823a96372703fb2fb88bbd3943415.jpg)
(a) Location predictability scores

![](images/7e2d4f3f76050fc076a8045340f741f443fcb586abaaa11693e6a3f8706406a0.jpg)
(b) Discovered clusters

![](images/2ff2b8b023ddfa676e76e9853d7adb6bcd4c5e493d2fe1742b17e4aa6fe349e1.jpg)
(c) Ground truth appliance labels
Figure 7: t-SNE visualization of the learned event vectors colored by (a) location predictability scores, (b) discovered clusters, and (c) ground truth labels.

# 8.4 NETWORK IMPLEMENTATION AND TRAINING DETAILS

In this section, we provide implementation and training details of our neural network model. We use convolution and deconvolution layers for the energy encoder and decoder. Each module has 8 layers with a kernel size of 3 and a stride of 2.
We choose the dimensions of $z_{t,\text{cat}}$ and $z_{t,\text{cont}}$ to be 128 and 3, respectively. The location predictors have 5 layers of 3D deconvolution with a kernel size of 3 and a stride of 2 in each dimension. The frames of location images for each time window have $32 \times 32 \times 32$ pixels. We discretize the x, y, and time dimensions into 32 points, where the x-y dimensions span 10 meters and the time dimension spans 32 seconds. The neural networks are implemented in TensorFlow (Abadi et al., 2016). For training, we use the Adam (Kingma & Ba, 2014) optimizer with a learning rate of 0.001 and a batch size of 64.

# 8.5 CLUSTERING PARAMETERS AND DETAILS

In all experiments, we set $\eta_{D_{\mathrm{loc}}} = 0.4$ meters, $\eta_z = 0.03$, $\eta_s = 0.2$, and $N_{\mathrm{min}} = 10$. These values are chosen based on physical and computational constraints. The value of $\eta_{D_{\mathrm{loc}}}$ is based on the minimum physical separation between two appliances. The value of $\eta_z$ only affects the search space in each iteration, and is chosen to be small for computational efficiency. The minimum predictability score $\eta_s$ is chosen based on a validation set from one of the homes. Setting $N_{\mathrm{min}} = 10$ requires an appliance to appear in the data at least 10 times before we trust that it is a real appliance.

# 8.6 LIMITATIONS

We discuss the limitations of our approach in this section. One limitation is that some remotely activated appliances may not have predictable locations. However, from our experience collecting the dataset, the vast majority of the appliances used on a daily basis (Table 2) require human interaction. For example, a person has to put food into a microwave before turning it on, to hold a hair dryer while drying hair, and to push a button to get a coffee machine running.
Even for an appliance with a remote controller, as long as the person has a regular place from which to interact with the appliance (e.g., always turning the TV on while sitting on the couch), our model can still learn to predict the location of interaction. Another limitation is that our location sensor has a limited coverage area (around 40 feet in radius). This is enough to cover a typical one-bedroom apartment. For a larger house, one could deploy a second sensor, similarly to how a WiFi repeater extends the coverage area.
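Concretely, the thresholds $\eta_s$ and $N_{\mathrm{min}}$ from Section 8.5 act as a simple filter over candidate clusters. The following is a minimal sketch of that rule, not our actual implementation; the dictionary layout and names are illustrative assumptions:

```python
# Illustrative sketch of the cluster-filtering rule from Section 8.5;
# the data layout and function name are assumptions, not our real code.
ETA_S = 0.2   # minimum location predictability score
N_MIN = 10    # minimum number of events per cluster

def filter_clusters(clusters):
    """Keep clusters that behave like real appliances: a predictable
    interaction location and enough occurrences in the data."""
    return [c for c in clusters
            if c["predictability"] >= ETA_S and c["n_events"] >= N_MIN]

candidates = [
    {"name": "kettle", "predictability": 0.85, "n_events": 120},
    {"name": "heater", "predictability": 0.05, "n_events": 300},   # background cycling
    {"name": "spurious", "predictability": 0.90, "n_events": 3},   # too rare
]
kept = filter_clusters(candidates)  # only "kettle" passes both thresholds
```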
# SEMANTICALLY-GUIDED REPRESENTATION LEARNING FOR SELF-SUPERVISED MONOCULAR DEPTH

Vitor Guizilini $^{1}$ Rui Hou $^{1,2}$ Jie Li $^{1}$ Rares Ambrus $^{1}$
Adrien Gaidon $^{1}$

$^{1}$ Toyota Research Institute (TRI)

{first.last}@tri.global

$^{2}$ University of Michigan

rayhou@umich.edu

# ABSTRACT

Self-supervised learning is showing great promise for monocular depth estimation, using geometry as the only source of supervision. Depth networks are indeed capable of learning representations that relate visual appearance to 3D properties by implicitly leveraging category-level patterns. In this work we investigate how to more directly leverage this semantic structure to guide geometric representation learning, while remaining in the self-supervised regime. Instead of using semantic labels and proxy losses in a multi-task approach, we propose a new architecture leveraging fixed pretrained semantic segmentation networks to guide self-supervised representation learning via pixel-adaptive convolutions. Furthermore, we propose a two-stage training process to overcome a common semantic bias on dynamic objects via resampling. Our method improves upon the state of the art for self-supervised monocular depth prediction averaged over all pixels, on fine-grained details, and per semantic category. $^{\dagger}$

# 1 INTRODUCTION

Accurate depth estimation is a key problem in computer vision and robotics, as it is instrumental for perception, navigation, and planning. Although perceiving depth typically requires dedicated sensors (e.g., stereo rigs, LiDAR), learning to predict depth from monocular imagery can provide useful cues for a wide array of tasks (Michels et al., 2005; Kendall et al., 2018; Manhardt et al., 2019; Lee et al., 2019). Going beyond supervised learning from direct measurements (Eigen et al., 2014), self-supervised methods exploit geometry as supervision (Guo et al., 2018; Pillai et al., 2019; Zou et al., 2018; Yang et al., 2017), and therefore have the potential to leverage large-scale datasets of raw videos to outperform supervised methods (Guizilini et al., 2019).
Although depth from a single image is an ill-posed inverse problem, monocular depth networks are able to make accurate predictions by learning representations connecting the appearance of scenes and objects with their geometry in Euclidean 3D space. Due to perspective, there is indeed an equivariance relationship between the visual appearance of an object in 2D and its depth, when conditioned on the object's category. For instance, a car 25 meters away appears smaller (on the image plane) than a car only 5 meters away, but bigger than a truck 50 meters away. Current depth estimation methods either do not leverage this structure explicitly or rely on strong semantic supervision to jointly optimize geometric consistency and a semantic proxy task in a multi-task objective (Ochs et al., 2019; Chen et al., 2019), thus departing from the self-supervised paradigm.

In this paper, we explore how we can leverage semantic information to improve monocular depth prediction in a self-supervised way. Our main contribution is a novel architecture that uses a fixed pre-trained semantic segmentation network to guide geometric representation learning in a self-supervised monocular depth network. In contrast to standard convolutional layers, our architecture uses pixel-adaptive convolutions (Su et al., 2019) to learn semantic-dependent representations that can better capture the aforementioned equivariance property. Leveraging semantics may nonetheless introduce category-specific biases. Our second contribution is a two-stage training process where we automatically detect the presence of a common bias on dynamic objects (projections at infinity) and resample the training set to de-bias it. Our method improves upon the state of the art in self-supervised monocular depth estimation on the standard KITTI benchmark (Geiger et al., 2013), on average over pixels, across classes, and for dynamic categories in particular.
![](images/dd0ee2e5a0dca4b90fb16a0da940ea95cdd165ee8a2f5359073a73f5dace0882.jpg)
Figure 1: Example of a pointcloud generated using our proposed semantically-guided architecture, colored by RGB values from the input image and corresponding predicted semantic labels.

# 2 RELATED WORK

Since the seminal work of Eigen et al. (2014), substantial progress has been made to improve the accuracy of supervised depth estimation from monocular images, including the use of Conditional Random Fields (CRFs) (Li et al., 2015), joint optimization of surface normals (Qi et al., 2018), fusion of multiple depth maps (Lee et al., 2018), and ordinal classification (Fu et al., 2018). Consequently, as supervised techniques for depth estimation advanced rapidly, the availability of large-scale depth labels became a bottleneck, especially for outdoor applications. Garg et al. (2016) and Godard et al. (2017) provided an alternative self-supervised strategy involving stereo cameras, where Spatial Transformer Networks (Jaderberg et al., 2015) can be used to geometrically warp, in a differentiable way, the right image into a synthesized left image, using the predicted depth from the left image. The photometric consistency loss between the resulting synthesized and original left images can then be minimized in an end-to-end manner using a Structural Similarity term (Wang et al., 2004) and additional depth regularization terms. Following Godard et al. (2017) and Ummenhofer et al. (2017), Zhou et al. (2017) generalized this to the purely monocular setting, where a depth network and a pose network are learned simultaneously from unlabeled monocular videos.
Rapid progress in terms of architectures and objective functions (Yin & Shi, 2018; Mahjourian et al., 2018; Casser et al., 2019; Zou et al., 2018; Klodt & Vedaldi, 2018; Wang et al., 2018; Yang et al., 2018) has since turned monocular depth estimation into one of the most successful applications of self-supervised learning, even outperforming supervised methods (Guizilini et al., 2019).

The introduction of semantic information to improve depth estimates has been explored in prior works, which can be broadly divided into two categories. The first uses semantic (or instance) information to mask out or properly model dynamic portions of the image, which are not accounted for in the photometric loss calculation. Güney & Geiger (2015) leveraged object knowledge in a Markov Random Field (MRF) to resolve stereo ambiguities, while Bai et al. (2016) used a conjunction of instance-level segmentation and epipolar constraints to reduce uncertainty in optical flow estimation. Casser et al. (2019) used instance-level masks to estimate motion models for different objects in the environment, and accounted for their external motion in the resulting warped image. The second category attempts to learn both tasks in a single framework, and uses consistency losses to ensure that both are optimized simultaneously and regularize each other, so the information contained in one task can be transferred to improve the other. For instance, Ochs et al. (2019) estimated depth with an ordinal classification loss similar to the standard semantic classification loss, and used empirical weighting to combine them into a single loss for optimization. Similarly, Chen et al. (2019) used a unified conditional decoder that can generate either semantic or depth estimates, and both outputs are used to generate a series of losses, also combined using empirical weighting, to produce the final loss to be optimized.
Our approach focuses instead on representation learning, injecting semantic features into the self-supervised depth network by using a pretrained semantic segmentation network to guide the generation of depth features. This is done using pixel-adaptive convolutions, recently proposed by Su et al. (2019) and applied to tasks such as depth upsampling using RGB images for feature guidance. We show that different depth networks can be readily modified to leverage this semantic feature guidance, ranging from widely used ResNets (He et al., 2016) to the current state-of-the-art PackNet (Guizilini et al., 2019), with a consistent gain in performance across these architectures.

# 3 SELF-SUPERVISED STRUCTURE-FROM-MOTION

Our semantically-guided architecture is developed within a self-supervised monocular depth estimation setting, commonly known as structure-from-motion (SfM). Learning in this setting requires two networks: a monocular depth model $f_{D}: I \to D$ that outputs a depth prediction $\hat{D} = f_{D}(I(p))$ for every pixel $p$ in the target image $I$; and a monocular ego-motion estimator $f_{\mathbf{x}}: (I_t, I_S) \to \mathbf{x}_{t \to S}$ that predicts, for all $s \in S$, the 6-DoF transformations $\mathbf{x}_{t \to s} = \begin{pmatrix} \mathbf{R} & \mathbf{t} \\ \mathbf{0} & 1 \end{pmatrix} \in \mathrm{SE}(3)$ between the target image $I_t$ and a set of temporal context source images $I_s \in I_S$. In all reported experiments we use $I_{t-1}$ and $I_{t+1}$ as source images.

# 3.1 THE SELF-SUPERVISED OBJECTIVE LOSS

We train the depth and pose networks simultaneously, using the same protocols and losses as described in Guizilini et al. (2019).
Our self-supervised objective loss consists of an appearance matching term $\mathcal{L}_p$ that is imposed between the synthesized $\hat{I}_t$ and original $I_t$ target images, and a depth regularization term $\mathcal{L}_s$ that ensures edge-aware smoothing in the depth estimates $\hat{D}_t$ . The final objective loss is averaged per pixel, pyramid scale and image batch, and is defined as: + +$$ +\mathcal {L} \left(I _ {t}, \hat {I} _ {t}\right) = \mathcal {L} _ {p} \left(I _ {t}, \hat {I} _ {t}\right) + \lambda_ {1} \mathcal {L} _ {s} (\hat {D} _ {t}) \tag {1} +$$ + +where $\lambda_{1}$ is a weighting coefficient between the photometric $\mathcal{L}_p$ and depth smoothness $\mathcal{L}_s$ loss terms. Following Godard et al. (2017) and Zhou et al. (2017), the similarity between synthesized $\hat{I}_t$ and original $I_t$ target images is estimated using a Structural Similarity (SSIM) term (Wang et al., 2004) combined with an L1 loss term, inducing the following overall photometric loss: + +$$ +\mathcal {L} _ {p} \left(I _ {t}, \hat {I} _ {t}\right) = \alpha \frac {1 - \operatorname {S S I M} \left(I _ {t} , \hat {I} _ {t}\right)}{2} + (1 - \alpha) \| I _ {t} - \hat {I} _ {t} \| \tag {2} +$$ + +In order to regularize the depth in low gradient regions, we incorporate an edge-aware term similar to Godard et al. (2017). This loss is weighted for each of the pyramid levels, decaying by a factor of 2 on each downsampling, starting with a weight of 1 for the $0^{\text{th}}$ pyramid level. + +$$ +\mathcal {L} _ {s} (\hat {D} _ {t}) = \left| \delta_ {x} \hat {D} _ {t} \right| e ^ {- \left| \delta_ {x} I _ {t} \right|} + \left| \delta_ {y} \hat {D} _ {t} \right| e ^ {- \left| \delta_ {y} I _ {t} \right|} \tag {3} +$$ + +We also incorporate some of the insights introduced in Godard et al. (2018), namely auto-masking, minimum reprojection error, and inverse depth map upsampling to further improve depth estimation performance in our self-supervised monocular setting. 
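As an illustration, Eqs. (2) and (3) can be sketched in NumPy. This is a simplified version: SSIM is computed over a single global window rather than the usual local one, and $\alpha = 0.85$ is an assumed value, not necessarily the one used in our experiments:

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Simplified SSIM over one global window (implementations typically
    # use a local 3x3 window; the structure of the formula is the same).
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2))

def photometric_loss(I_t, I_hat, alpha=0.85):
    # Eq. (2): alpha-weighted blend of SSIM dissimilarity and L1 error.
    return alpha * (1.0 - ssim_global(I_t, I_hat)) / 2.0 \
        + (1.0 - alpha) * np.abs(I_t - I_hat).mean()

def smoothness_loss(D_hat, I_t):
    # Eq. (3): depth gradients are penalized, except across image edges,
    # where the exponential image-gradient term suppresses the penalty.
    dDx, dDy = np.abs(np.diff(D_hat, axis=1)), np.abs(np.diff(D_hat, axis=0))
    dIx, dIy = np.abs(np.diff(I_t, axis=1)), np.abs(np.diff(I_t, axis=0))
    return (dDx * np.exp(-dIx)).mean() + (dDy * np.exp(-dIy)).mean()
```

Note that for identical images the photometric loss vanishes, and for a constant depth map the smoothness penalty is zero.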
# 3.2 DEPTH AND POSE NETWORKS

Our baseline depth and pose networks are based on the PackNet architecture introduced by Guizilini et al. (2019), which proposes novel packing and unpacking blocks to respectively downsample and upsample feature maps during the encoding and decoding stages. This network was selected due to its state-of-the-art performance in the task of self-supervised monocular depth estimation, allowing us to analyze whether our proposed architecture is capable of further improving the current state of the art. However, there are no restrictions as to which models our proposed semantically-guided architecture can be applied to, and in Section 5.4 we study its application to different depth networks.

# 4 SEMANTICALLY-GUIDED GEOMETRIC REPRESENTATION LEARNING

In this section, we describe our method to inject semantic information into a self-supervised depth network via its augmentation with semantic-aware convolutions. Our proposed architecture is depicted in Figure 2 and is composed of two networks: a primary one, responsible for the generation of depth predictions $\hat{D} = f_{D}(I(p))$; and a secondary one, capable of producing semantic predictions. Only the first network is optimized during self-supervised learning; the semantic network is initialized from pretrained weights and is not further optimized. This is in contrast to the common practice of supervised (ImageNet) pretraining of depth encoders (Godard et al., 2018; Casser et al., 2019; Zou et al., 2018): here, instead of fine-tuning from pretrained weights, we preserve these secondary weights to guide the feature learning process of the primary depth network. Our approach also differs from learning without forgetting (Li & Hoiem, 2017) by leveraging fixed intermediate feature representations as a way to maintain consistent semantic guidance throughout training.
![](images/894a8a9e031725fa84dbc3b4c32514f9b2f9ec342ca62dd38fa3bbfd48795c46.jpg)
Figure 2: Diagram of our proposed architecture for self-supervised monocular depth estimation with semantically-guided feature learning. The semantic network is fixed and initialized from pretrained weights, while the depth network is trained end-to-end in a self-supervised way, including pixel-adaptive convolutions (Guidance) on its decoder to learn semantic-dependent geometric features.

# 4.1 SEMANTICALLY-GUIDED DEPTH FEATURES

We leverage the information from the pretrained semantic network in the depth network through the use of pixel-adaptive convolutions (Su et al., 2019). They were recently proposed to address some limitations inherent to the standard convolution operation, namely its translation invariance, which makes it content-agnostic. While this significantly reduces the number of parameters of the resulting network, it might also lead to sub-optimal solutions under certain conditions important for geometric representation learning. For example, spatially-shared filters globally average the loss gradients over the entire image, forcing the network to learn weights that cannot leverage location-specific information beyond their limited receptive fields. Content-agnostic filters are unable to distinguish between different pixels that are visually similar (e.g., dark areas due to shadows or black objects) or to generalize to similar objects that are visually different (e.g., cars with varying colors). In this work, we use pixel-adaptive convolutions to produce semantic-aware depth features, where the fixed information encoded in the semantic network is used to disambiguate geometric representations for the generation of multi-level depth features.

As shown in Figure 2, we extract multi-level feature maps from the semantic network.
For each feature map, we apply a $3 \times 3$ and a $1 \times 1$ convolutional layer followed by Group Normalization (Wu & He, 2018) and ELU non-linearities (Clevert et al., 2016). These processed semantic feature maps are then used as guidance on their respective pixel-adaptive convolutional layers, following the formulation proposed in Su et al. (2019): + +$$ +\mathbf {v} _ {i} ^ {\prime} = \sum_ {j \in \Omega (i)} K (\mathbf {f} _ {i}, \mathbf {f} _ {j}) \mathbf {W} [ \mathbf {p} _ {i} - \mathbf {p} _ {j} ] \mathbf {v} _ {j} + \mathbf {b} \tag {4} +$$ + +In the above equation, $\mathbf{f} \in \mathcal{R}^D$ are processed features from the semantic network that will serve to guide the pixel-adaptive convolutions from the depth network, $\mathbf{p} = (x,y)^T$ are pixel coordinates, with $[\mathbf{p}_i - \mathbf{p}_j]$ denoting 2D spatial offsets between pixels, $\mathbf{W}_{k\times k}$ are convolutional weights with kernel size $k$ , $\Omega_i$ defines a $k\times k$ convolutional window around $i$ , $\mathbf{v}$ is the input signal to be convolved, and $\mathbf{b} \in \mathcal{R}^1$ is a bias term. $K$ is the kernel used to calculate the correlation between guiding features, here chosen to be the standard Gaussian kernel: + +$$ +K \left(\mathbf {f} _ {i}, \mathbf {f} _ {j}\right) = \exp \left(- \frac {1}{2} \left(\mathbf {f} _ {i} - \mathbf {f} _ {j}\right) ^ {T} \Sigma_ {i j} ^ {- 1} \left(\mathbf {f} _ {i} - \mathbf {f} _ {j}\right)\right) \tag {5} +$$ + +where $\Sigma_{ij}$ is the covariance matrix between features $\mathbf{f}_i$ and $\mathbf{f}_j$ , here chosen to be a diagonal matrix $\sigma^2 \cdot I_D$ , with $\sigma$ as an extra learnable parameter for each convolutional filter. These kernel evaluations can be seen as a secondary set of weights applied to the standard convolutional weights, changing their impact on the resulting depth features depending on the content stored in the guiding semantic features. 
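A minimal dense-loop sketch of Eqs. (4) and (5), assuming zero padding at the borders and a single shared scalar $\sigma$ (our actual layers operate on multi-channel feature maps):

```python
import numpy as np

def pixel_adaptive_conv(v, f, W, b=0.0, sigma=1.0):
    """Eq. (4) on a single-channel 2D signal v (H x W), guided by per-pixel
    features f (H x W x D); K is the Gaussian kernel of Eq. (5) with a
    diagonal covariance sigma^2 * I. Zero padding at the borders."""
    H, Wd = v.shape
    k = W.shape[0] // 2
    out = np.full(v.shape, b, dtype=float)
    for i in range(H):
        for j in range(Wd):
            for di in range(-k, k + 1):
                for dj in range(-k, k + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < H and 0 <= jj < Wd:
                        d = f[i, j] - f[ii, jj]          # guidance feature offset
                        K = np.exp(-0.5 * np.dot(d, d) / sigma ** 2)
                        out[i, j] += K * W[di + k, dj + k] * v[ii, jj]
    return out
```

With constant guidance features, $K \equiv 1$ and the operation reduces to a standard convolution; pixels whose guidance features differ strongly contribute correspondingly less.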
For example, the information contained in depth features pertaining to the sky should not be used to generate depth features describing a pedestrian, and this behavior is now captured as a larger distance between their corresponding semantic features, which in turn produces smaller weights for that particular convolutional filter. Note that the standard convolution can be considered a special case of the pixel-adaptive convolution, where $\forall ij, K(\mathbf{f}_i, \mathbf{f}_j) = 1$.

![](images/d389258f84e33605abe3227b436aa9766be65ed21c04da571f746d0650a0bdd8.jpg)
Figure 3: Qualitative results of our proposed two-stage training to address the infinite depth problem. Top images were obtained evaluating the first-stage depth network, and bottom images were obtained using the second-stage depth network, trained with a filtered dataset.

# 4.2 SEMANTIC GUIDANCE NETWORK

As the secondary network used to provide semantic guidance for the generation of depth features, we use a Feature Pyramid Network (FPN) with a ResNet backbone (Lin et al., 2017). This architecture has been shown to be efficient for both semantic and instance-level predictions towards panoptic segmentation (Kirillov et al., 2019; Li et al., 2018; Xiong et al., 2019; Porzi et al., 2019). While our proposed semantically-guided architecture is not restricted to any particular network, we chose this particular implementation to facilitate the future exploration of different sources of guidance information. Architectural details follow the protocols described in Li et al. (2018), and unless mentioned otherwise the same pretrained model was used in all reported experiments. The semantic network is assumed fixed, pretrained on a held-out dataset different from the raw data used for self-supervised learning, i.e. we do not require any semantic ground truth on the target dataset.
# 4.3 TWO-STAGE TRAINING

One well-known limitation of the self-supervised photometric loss is its inability to model dynamic objects, due to a static world assumption that only accounts for camera ego-motion (Godard et al., 2018; Casser et al., 2019). A resulting common failure mode is the infinite depth problem, which is caused by the presence of objects moving at the same speed as the camera. This typically causes distinct holes in the predicted depth maps, with arbitrarily large values where these objects should be. This severely hinders the applicability of such models in real-world applications, particularly for automated driving, where the ability to detect and properly model dynamic objects is crucial. Moreover, this limitation may be further accentuated in our proposed semantically-guided architecture, as the infinite depth problem occurs mostly on dynamic categories (e.g., cars and motorcycles) and the semantic-aware features may reinforce this bias.

We propose a simple and efficient two-stage training method to detect and remove this bias from the training set. In the first stage, we learn a standard depth network on all available training data. This network, exhibiting the infinite depth problem, is then used to resample the dataset by automatically filtering out sequences with infinite depth predictions that violate a basic geometric prior. We indeed find that depth predictions for pixels corresponding to the nearby ground plane are generally robust. This enables us to get a coarse estimate of the ground plane using RANSAC and to detect the number of pixels whose predicted depth projects them significantly below the ground. If that number is above a threshold, then the corresponding image is subsequently ignored (we found a conservative threshold of 10 to work well in all our experiments, filtering out roughly $5\%$ of the KITTI training dataset).
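The filtering criterion can be sketched as follows. The per-image threshold of 10 is the one reported above; the margin below the plane and the point-cloud layout are illustrative assumptions:

```python
import numpy as np

MARGIN = 0.5          # meters below the plane counted as a violation (assumed)
MAX_VIOLATIONS = 10   # conservative per-image threshold from our experiments

def keep_image(points, plane, margin=MARGIN, max_violations=MAX_VIOLATIONS):
    """points: (N, 3) back-projected pixels; plane: (a, b, c, d) ground plane
    a*x + b*y + c*z + d = 0 (e.g., fitted with RANSAC on nearby ground pixels),
    oriented so that a positive signed distance means above the ground."""
    n = np.asarray(plane[:3], dtype=float)
    signed = (points @ n + plane[3]) / np.linalg.norm(n)
    # Reject images where too many points fall significantly below the ground,
    # the signature of the infinite-depth failure mode.
    return int((signed < -margin).sum()) <= max_violations
```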
During the second stage, we retrain the network on the subsampled dataset (from scratch, to avoid the previous local optimum). As this subsampled dataset is de-biased, the network learns better depth estimates on dynamic objects. This process can be repeated, but we find that two stages are enough to remove any traces of infinite depth in our experiments, as shown in Figure 3.

# 5 EXPERIMENTAL RESULTS

# 5.1 DATASETS

We use the standard KITTI benchmark (Geiger et al., 2013) for self-supervised training and evaluation. More specifically, we adopt the training, validation, and test splits used in Eigen et al. (2014) with the pre-processing from Zhou et al. (2017) to remove static frames, which is more suitable for monocular self-supervised learning. This results in 39810 images for training, 4424 for validation, and 697 for evaluation. Following common practice, we pretrain our depth and pose networks on the CityScapes dataset (Cordts et al., 2016), consisting of 88250 unlabeled images. Unless noted otherwise, input images are downsampled to $640 \times 192$ resolution and output inverse depth maps are upsampled to full resolution using bilinear interpolation. Our fixed semantic segmentation network is pretrained on CityScapes, achieving a mIoU of $75\%$ on the validation set.

| Method | Superv. | Abs Rel ↓ | Sq Rel ↓ | RMSE ↓ | RMSE log ↓ | δ < 1.25 ↑ | δ < 1.25² ↑ | δ < 1.25³ ↑ |
| :-- | :-- | --: | --: | --: | --: | --: | --: | --: |
| Garg et al. (2016) | M | 0.152 | 1.226 | 5.849 | 0.246 | 0.784 | 0.921 | 0.967 |
| Zou et al. (2018) | M | 0.150 | 1.124 | 5.507 | 0.223 | 0.806 | 0.933 | 0.973 |
| Godard et al. (2017) | M | 0.141 | 1.186 | 5.677 | 0.238 | 0.809 | 0.928 | 0.969 |
| Zhan et al. (2018) | M | 0.135 | 1.132 | 5.585 | 0.229 | 0.820 | 0.933 | 0.971 |
| Godard et al. (2018) (R18) | M | 0.115 | 0.903 | 4.863 | 0.193 | 0.877 | 0.959 | 0.981 |
| Godard et al. (2018) (R50) | M | 0.112 | 0.851 | 4.754 | 0.190 | 0.881 | 0.960 | 0.981 |
| Guizilini et al. (2019) (MR) | M | 0.108 | 0.727 | 4.426 | 0.184 | 0.885 | 0.963 | 0.983 |
| Guizilini et al. (2019) (HR) | M | 0.104 | 0.758 | 4.386 | 0.182 | 0.895 | 0.964 | 0.982 |
| Casser et al. (2019) | S+Inst | 0.141 | 1.025 | 5.290 | 0.215 | 0.816 | 0.945 | 0.979 |
| Chen et al. (2019) | S+Sem | 0.118 | 0.905 | 5.096 | 0.211 | 0.839 | 0.945 | 0.977 |
| Ochs et al. (2019) | D+Sem | 0.116 | 0.945 | 4.916 | 0.208 | 0.861 | 0.952 | 0.968 |
| Ours (MR) | M+Sem | 0.102 | 0.698 | 4.381 | 0.178 | 0.896 | 0.964 | 0.984 |
| Ours (HR) | M+Sem | 0.100 | 0.761 | 4.270 | 0.175 | 0.902 | 0.965 | 0.982 |

Table 1: Quantitative performance comparison of our proposed architecture on KITTI for depths up to $80\mathrm{m}$ (↓ lower is better, ↑ higher is better). $M$ refers to methods that train using monocular images, $S$ refers to methods that train using stereo pairs, $D$ refers to methods that use ground-truth depth supervision, $Sem$ refers to methods that include semantic information, and $Inst$ refers to methods that include semantic and instance information. $MR$ indicates $640 \times 192$ input images, and $HR$ indicates $1280 \times 384$ input images. Our proposed architecture is able to further improve the current state of the art in self-supervised monocular depth estimation, and outperforms other methods that exploit semantic information (including ground truth labels) by a substantial margin.

# 5.2 IMPLEMENTATION DETAILS

We implement our models with PyTorch (Paszke et al., 2017) and follow the same training protocols as Guizilini et al. (2019) when optimizing our depth and pose networks. The initial training stage is conducted on the CityScapes dataset for 50 epochs, with a batch size of 4 per GPU and initial depth and pose learning rates of $2 \cdot 10^{-4}$ and $5 \cdot 10^{-4}$, respectively, which are halved every 20 epochs. Afterwards, the depth and pose networks are fine-tuned on KITTI for 30 epochs, with the same parameters and halving the learning rates every 12 epochs.
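The stepwise decay above can be expressed compactly (a sketch of the schedule, not our training code):

```python
def lr_at_epoch(epoch, base_lr=2e-4, step=20):
    # Halve the learning rate every `step` epochs; the CityScapes stage
    # uses step=20 and the KITTI fine-tuning stage uses step=12.
    return base_lr * 0.5 ** (epoch // step)
```

For example, the depth learning rate starts at $2 \cdot 10^{-4}$ and drops to $1 \cdot 10^{-4}$ at epoch 20.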
This fine-tuning stage includes the proposed architecture, where information from the fixed semantic network, pretrained separately, is used to directly guide the generation of depth features. There is no direct supervision at any stage during depth training; all semantic information is derived from the fixed secondary network.

When pretraining the semantic segmentation network, we use a ResNet-50 backbone with ImageNet (Deng et al., 2009) pretrained weights and optimize the network for $48k$ iterations on the CityScapes dataset with a learning rate of 0.01, momentum of 0.9, weight decay of $10^{-4}$, and a batch size of 1 per GPU. Random scaling between (0.7, 1.3), random horizontal flipping, and a crop size of $1000 \times 2000$ are used for data augmentation. We decay the learning rate by a factor of 10 at iterations $36k$ and $44k$. Once training is complete, the semantic segmentation network is fixed and becomes the only source of semantic information when fine-tuning the depth and pose networks on KITTI.

# 5.3 DEPTH ESTIMATION PERFORMANCE

Our depth estimation results are summarized in Table 1, where we compare our proposed architecture with other published works. From these results we can see that the introduction of semantically-guided geometric representation learning further improves upon the current state of the art in self-supervised monocular depth estimation from Guizilini et al. (2019), which served as our baseline. Our approach also outperforms other methods that leverage semantic information by a substantial margin, even those using ground-truth KITTI semantic segmentation and depth labels during training (Ochs et al., 2019). Furthermore, in Figure 5 we also present qualitative results showing the improvements in depth estimation generated by our proposed framework, compared to our baseline. Note how our semantically-guided architecture produces sharper boundaries and better object delineation, especially in structures further away or not clearly distinguishable in the input image.

| Network | SEM | TST | Abs Rel ↓ | Sq Rel ↓ | RMSE ↓ | RMSE log ↓ | δ < 1.25 ↑ | δ < 1.25² ↑ | δ < 1.25³ ↑ | Class-Avg. Abs Rel ↓ |
| :-- | :-: | :-: | --: | --: | --: | --: | --: | --: | --: | --: |
| ResNet-18 | | | 0.120 | 0.896 | 4.869 | 0.198 | 0.868 | 0.957 | 0.981 | 0.149 |
| ResNet-18 | ✓ | | 0.117 | 0.854 | 4.714 | 0.191 | 0.873 | 0.963 | 0.981 | 0.139 |
| ResNet-50 | | | 0.117 | 0.900 | 4.826 | 0.196 | 0.873 | 0.967 | 0.980 | 0.144 |
| ResNet-50 | ✓ | | 0.113 | 0.831 | 4.663 | 0.189 | 0.878 | 0.971 | 0.983 | 0.136 |
| PackNet | | | 0.108 | 0.727 | 4.426 | 0.184 | 0.885 | 0.963 | 0.983 | 0.132 |
| PackNet | ✓ | | 0.103 | 0.710 | 4.301 | 0.179 | 0.895 | 0.964 | 0.984 | 0.121 |
| PackNet | ✓ | ✓ | 0.102 | 0.698 | 4.381 | 0.178 | 0.896 | 0.963 | 0.984 | 0.117 |

Table 2: Ablative analysis of our semantic guidance (SEM) and two-stage training (TST) contributions (↓ lower is better, ↑ higher is better). The last column indicates the class-average Abs Rel obtained by averaging all class-specific depth errors in Figure 4, while the other columns indicate pixel-average metrics.

# 5.4 ABLATIVE ANALYSIS

# 5.4.1 DIFFERENT DEPTH NETWORKS

To better evaluate our main contribution, we provide an ablative analysis showing how it generalizes to different depth networks. To this end, we consider two variations of the widely used ResNet architecture as the encoder for our depth network: ResNet-18 and ResNet-50 (the same pretrained semantic network was used in all experiments). Depth estimation results considering these variations are shown in Table 2, where we can see that our proposed semantically-guided architecture is able to consistently improve the performance of different depth networks, for all considered metrics.

# 5.4.2 CLASS-SPECIFIC DEPTH PERFORMANCE

To further showcase the benefits of our semantically-guided architecture, we also provide class-specific evaluation metrics, as shown in Figure 4. As we do not have ground-truth semantic segmentation for these images, we use the prediction of the semantic network to bin pixels per predicted category, and evaluate only on those pixels. From these results we can see that our proposed architecture consistently improves depth performance for pixels across all predicted classes, especially those containing fine-grained structures and sharp boundaries, e.g. poles and traffic signs.
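The per-class binning described above can be sketched as follows (array names and shapes are illustrative):

```python
import numpy as np

def class_abs_rel(pred, gt, seg):
    """Bin pixels by predicted semantic class `seg` and compute the absolute
    relative depth error per class; the class average is the mean of the
    per-class errors, so small classes are not swamped by large ones."""
    errs = {}
    for c in np.unique(seg):
        m = (seg == c) & (gt > 0)          # valid ground-truth pixels of class c
        if m.any():
            errs[int(c)] = float(np.mean(np.abs(pred[m] - gt[m]) / gt[m]))
    return errs, float(np.mean(list(errs.values())))
```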
+ +![](images/5a11ba3f3de280ec07634b86cb97d46c448fefa0395a025ba20b5671f427862b.jpg) +Figure 4: Class-specific depth evaluation for our proposed architecture (blue), relative to our baseline (red). The rightmost column indicates class-average depth metrics, obtained by averaging all individual classes. The introduction of semantically-guided features, in conjunction with our proposed two-stage training methodology to address the infinite depth problem, consistently improved depth results for all considered classes (lower is better). + +![](images/7a224786cf4e863faa85d970e2fb9455981e9475f55a173fe311911d77f60313.jpg) +Figure 5: Qualitative results of our proposed architecture. The left, middle, and right columns show, respectively, input images, baseline predicted depth maps (Guizilini et al., 2019), and the depth maps obtained using our proposed architecture. Our semantic-aware depth network predicts sharper boundaries and fine-grained details on distant objects. The dotted lines indicate class-average errors, obtained by averaging all the class-specific depth errors. + +![](images/4f6658c3d85288ca5e7b79a5a0139a75db6469445ed4240efb91202c4fb21922.jpg) + +We also measure the impact of our two-stage training process, which is expected to address the infinite depth problem in dynamic objects. Although we find the pixel-average difference in performance not to be significant (see Table 2), there is a significant improvement in class-average depth estimation, from 0.121 to 0.117 Abs-Rel. This is because the number of pixels affected by the infinite depth problem is vastly smaller than the total number of pixels. However, when considering class-average depth evaluation, the improvement over classes such as cars (0.200 to 0.177 Abs-Rel) and motorcycles (0.091 to 0.069) becomes statistically significant.
This further exemplifies the importance of fine-grained metrics in depth evaluation, so these underlying behaviors can be properly observed and accounted for in the development of new techniques. + +# 6 CONCLUSION + +This paper introduces a novel architecture for self-supervised monocular depth estimation that leverages semantic information from a fixed pretrained network to guide the generation of multi-level depth features via pixel-adaptive convolutions. Our monodepth network learns semantic-aware geometric representations that can disambiguate photometric ambiguities in a self-supervised learning structure-from-motion context. Furthermore, we introduce a two-stage training process that resamples training data to overcome a common bias on dynamic objects resulting in predicting them at infinite depths. Our experiments on challenging real-world data show that our proposed architecture consistently improves the performance of different monodepth architectures, thus establishing a new state of the art in self-supervised monocular depth estimation. Future directions of research include leveraging other sources of guidance (e.g. instance masks, optical flow, surface normals), as well as avenues for self-supervised fine-tuning of the semantic network. + +# REFERENCES + +Min Bai, Wenjie Luo, Kaustav Kundu, and Raquel Urtasun. Exploiting semantic information and deep matching for optical flow. In ECCV, 2016. 2 +Holger Caesar, Varun Bankiti, Alex H. Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. CoRR, 2019. 13, 14 +Vincent Casser, Soeren Pirk, Reza Mahjourian, and Anelia Angelova. Depth prediction without the sensors: Leveraging structure for unsupervised learning from monocular videos. AAAI, 2019. 2, 3, 5, 6 +Po-Yi Chen, Alexander H. Liu, Yen-Cheng Liu, and Yu-Chiang Frank Wang.
Towards scene understanding: Unsupervised monocular depth estimation with semantic-aware representation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019. 1, 2, 6 +Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (elus). In ICLR, 2016. 4 +Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3213-3223, 2016. 6, 12 +Jia Deng, Wei Dong, Richard Socher, Li jia Li, Kai Li, and Li Fei-fei. Imagenet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2009. 6, 12 +David Eigen, Christian Puhrsch, and Rob Fergus. Depth map prediction from a single image using a multi-scale deep network. In Advances in neural information processing systems, 2014. 1, 2, 5 +Huan Fu, Mingming Gong, Chaohui Wang, Kayhan Batmanghelich, and Dacheng Tao. Deep ordinal regression network for monocular depth estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2002-2011, 2018. 2 +Ravi Garg, Vijay Kumar BG, Gustavo Carneiro, and Ian Reid. Unsupervised cnn for single view depth estimation: Geometry to the rescue. In European Conference on Computer Vision, pp. 740-756. Springer, 2016. 2, 6 +Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. Vision meets robotics: The kitti dataset. The International Journal of Robotics Research, 32(11):1231-1237, 2013. 1, 5 +Clément Godard, Oisin Mac Aodha, and Gabriel J Brostow. Unsupervised monocular depth estimation with left-right consistency. In CVPR, volume 2, pp. 7, 2017. 2, 3, 6 +Clément Godard, Oisin Mac Aodha, Michael Firman, and Gabriel J. Brostow. 
Digging into self-supervised monocular depth prediction. arXiv:1806.01260, 2018. 3, 5, 6, 14 +Vitor Guizilini, Sudeep Pillai, Rares Ambrus, and Adrien Gaidon. Packnet-sfm: 3d packing for self-supervised monocular depth estimation. arXiv preprint arXiv:1905.02693, 2019. 1, 2, 3, 6, 8, 14 +Fatma Güney and Andreas Geiger. Displets: Resolving stereo ambiguities using object knowledge. In Conference on Computer Vision and Pattern Recognition (CVPR), 2015. 2 +Xiaoyang Guo, Hongsheng Li, Shuai Yi, Jimmy Ren, and Xiaogang Wang. Learning monocular depth by distilling cross-domain stereo networks. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 484-500, 2018. 1 +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016. 2 + +Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In Advances in neural information processing systems, pp. 2017-2025, 2015. 2 +Alex Kendall, Yarin Gal, and Roberto Cipolla. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7482-7491, 2018. 1 +Alexander Kirillov, Ross Girshick, Kaiming He, and Piotr Dollár. Panoptic feature pyramid networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6399-6408, 2019. 5 +Maria Klodt and Andrea Vedaldi. Supervising the new with the old: Learning sfm from sfm. In European Conference on Computer Vision, pp. 713-728. Springer, 2018. 2 +Jae-Han Lee, Minhyeok Heo, Kyung-Rae Kim, and Chang-Su Kim. Single-image depth estimation based on Fourier domain analysis. In International Conference on Computer Vision and Pattern Recognition (CVPR), pp. 330-339, 2018. 2 +Kuan-Hui Lee, German Ros, Jie Li, and Adrien Gaidon.
Spigan: Privileged adversarial learning from simulation. In ICLR, 2019. 1 +Bo Li, Chunhua Shen, Yuchao Dai, Anton Van, and Mingyi He. Depth and surface normal estimation from monocular images using regression on deep features and hierarchical crfs. In International Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1119-1127, 2015. 2 +Jie Li, Allan Raventos, Arjun Bhargava, Takaaki Tagawa, and Adrien Gaidon. Learning to fuse things and stuff. arXiv preprint arXiv:1812.01192, 2018. 5 +Zhizhong Li and Derek Hoiem. Learning without forgetting. IEEE TPAMI, 40(12):2935-2947, 2017. 3 +Tsung-Yi Lin, Piotr Dollar, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2117-2125, 2017. 5 +Reza Mahjourian, Martin Wicke, and Anelia Angelova. Unsupervised learning of depth and egomotion from monocular video using 3d geometric constraints. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5667-5675, 2018. 2 +Fabian Manhardt, Wadim Kehl, and Adrien Gaidon. Roi-10d: Monocular lifting of 2d detection to 6d pose and metric shape. International Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 1 +Jeff Michels, Ashutosh Saxena, and Andrew Y Ng. High speed obstacle avoidance using monocular vision and reinforcement learning. In Proceedings of the 22nd international conference on Machine learning, pp. 593-600. ACM, 2005. 1 +Matthias Ochs, Adrian Kretz, and Rudolf Mester. Sdnet: Semantically guided depth estimation network. In arXiv:1907.10659, 2019. 1, 2, 6, 7 +Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. In NIPS-W, 2017. 6 +Sudeep Pillai, Rares Ambrus, and Adrien Gaidon. 
Superdepth: Self-supervised, super-resolved monocular depth estimation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2019. 1 +Lorenzo Porzi, Samuel Rota Bulo, Aleksander Colovic, and Peter Kontschieder. Seamless scene segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8277-8286, 2019. 5 +Xiaojuan Qi, Renjie Liao, Zhengzhe Liu, Raquel Urtasun, and Jiaya Jia. Geonet: Geometric neural network for joint depth and surface normal estimation. In International Conference on Computer Vision and Pattern Recognition (CVPR), pp. 283-291, 2018. 2 + +Hang Su, Varun Jampani, Deqing Sun, Orazio Gallo, Erik Learned-Miller, and Jan Kautz. Pixel-adaptive convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 1, 2, 4 +Benjamin Ummenhofer, Huizhong Zhou, Jonas Uhrig, Nikolaus Mayer, Eddy Ilg, Alexey Dosovitskiy, and Thomas Brox. Demon: Depth and motion network for learning monocular stereo. In IEEE Conference on computer vision and pattern recognition (CVPR), volume 5, pp. 6, 2017. 2 +Chaoyang Wang, José Miguel Buenaposada, Rui Zhu, and Simon Lucey. Learning depth from monocular videos using direct methods. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2022-2030, 2018. 2 +Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 2004. 2, 3 +Yuxin Wu and Kaiming He. Group normalization. In Proceedings of the 15th European Conference on Computer Vision (ECCV), pp. 3-19, 2018. 4 +Yuwen Xiong, Renjie Liao, Hengshuang Zhao, Rui Hu, Min Bai, Ersin Yumer, and Raquel Urtasun. Upsnet: A unified panoptic segmentation network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8818-8826, 2019. 
5 +Nan Yang, Rui Wang, Jörg Stuckler, and Daniel Cremers. Deep virtual stereo odometry: Leveraging deep depth prediction for monocular direct sparse odometry. arXiv:1807.02570, 2018. 2 +Zhenheng Yang, Peng Wang, Wei Xu, Liang Zhao, and Ramakant Nevatia. Unsupervised learning of geometry with edge-aware depth-normal consistency. arXiv preprint arXiv:1711.03665, 2017. 1 +Zhichao Yin and Jianping Shi. Geonet: Unsupervised learning of dense depth, optical flow and camera pose. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, 2018. 2 +Huangying Zhan, Ravi Garg, Chamara Saroj Weerasekera, Kejie Li, Harsh Agarwal, and Ian Reid. Unsupervised learning of monocular depth estimation and visual odometry with deep feature reconstruction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 340-349, 2018. 6 +Tinghui Zhou, Matthew Brown, Noah Snavely, and David G Lowe. Unsupervised learning of depth and ego-motion from video. In CVPR, volume 2, pp. 7, 2017. 2, 3, 5 +Yuliang Zou, Zelun Luo, and Jia-Bin Huang. Df-net: Unsupervised joint learning of depth and flow using cross-task consistency. In European Conference on Computer Vision, 2018. 1, 2, 3, 6 + +# A PRE-TRAINING THE SEMANTIC SEGMENTATION NETWORK + +The introduction of a semantic segmentation network to the depth estimation task increases the depth estimation performance; however, it also increases model complexity (e.g. number of trainable parameters). To verify that the increased performance for the depth estimation task is indeed due to the semantic features encoded in the secondary network, we perform an in-depth analysis (summarized in Table 3) where we explore the impact of pre-training the semantic segmentation network before it is used to guide the generation of depth features.
From these results we can see that the presence of semantic information encoded in the secondary network indeed leads to an increase in performance, and that fine-tuning this secondary network for the specific task of depth estimation actually decreases performance. + +In the first two rows an untrained semantic network is utilized, with only its encoder initialized from ImageNet (Deng et al., 2009) weights. Two different scenarios are explored: in the first one (D) only the depth network is fine-tuned in a self-supervised fashion, while in D+S both networks are fine-tuned together in the same way. As expected, using untrained features as guidance leads to significantly worse results, since there is no structure encoded in the secondary network and the primary network needs to learn to filter out all this spurious information. When both networks are fine-tuned simultaneously, results improve because now the added complexity from the secondary network can be leveraged for the task of depth estimation; however, there is still no improvement over the baseline. + +Next, the semantic network was pre-trained on only half of the CityScapes (Cordts et al., 2016) dataset (samples chosen randomly), leading to a worse semantic segmentation performance (validation mIoU of around $70\%$ vs. $75\%$ for the fully trained one). This partial pre-training stage was enough to enable the transfer of useful information between networks, leading to improvements over the baseline. Interestingly, fine-tuning both networks for the task of depth estimation actually hurt performance this time, which we attribute to forgetting the information contained in the secondary network, as both networks are optimized for the depth task.
When the semantic network is pretrained with all of CityScapes (last two rows), these effects are magnified, with fine-tuning only the depth network leading to our best reported performance (Table 1) and fine-tuning both networks again leading to results similar to the baseline. + +
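The validation mIoU figures quoted above are the standard semantic-segmentation metric, the mean intersection-over-union across classes. A minimal sketch (illustrative, not the evaluation code used in the paper):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union over classes present in pred or gt.

    pred, gt: integer class-label arrays of the same shape.
    """
    ious = []
    for c in range(num_classes):
        pred_c, gt_c = pred == c, gt == c
        union = np.logical_or(pred_c, gt_c).sum()
        if union == 0:
            continue  # class absent from both prediction and ground truth
        inter = np.logical_and(pred_c, gt_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious))
```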
| Method | Pre-Train | Fine-Tune | Abs Rel ↓ | Sq Rel ↓ | RMSE ↓ | $RMSE_{log}$ ↓ | δ<1.25 ↑ | δ<1.25² ↑ | δ<1.25³ ↑ |
|---|---|---|---|---|---|---|---|---|---|
| Baseline | — | D | 0.108 | 0.727 | 4.426 | 0.184 | 0.885 | 0.963 | 0.983 |
| Proposed | I | D+S | 0.116 | 0.847 | 4.751 | 0.192 | 0.879 | 0.960 | 0.981 |
| Proposed | I | D | 0.197 | 1.323 | 6.114 | 0.265 | 0.776 | 0.918 | 0.966 |
| Proposed | CS (1/2) | D+S | 0.109 | 0.737 | 4.389 | 0.185 | 0.884 | 0.962 | 0.982 |
| Proposed | CS (1/2) | D | 0.104 | 0.716 | 4.322 | 0.180 | 0.893 | 0.964 | 0.984 |
| Proposed | CS | D+S | 0.107 | 0.741 | 4.407 | 0.183 | 0.883 | 0.963 | 0.983 |
| Proposed | CS | D | 0.102 | 0.698 | 4.381 | 0.178 | 0.896 | 0.964 | 0.984 |
+ +Table 3: Analysis of the impact of pre-training the semantic segmentation network. On the PreTrain column, $I$ indicates ImageNet (Deng et al., 2009) pretraining and $CS$ indicates CityScapes (Cordts et al., 2016) pretraining, with $1/2$ indicating the use of only half the dataset (samples chosen randomly). In the Fine-Tune column, $D$ indicates fine-tuning the depth network and $S$ indicates fine-tuning the semantic network (note that this is a self-supervised fine-tuning for the depth task, using the objective described in Section 3.1). + +# B UNCERTAINTY AND GENERALIZATION TO DIFFERENT OBJECTS + +In a self-supervised setting, increasing the number of unlabeled videos used for depth training is expected to lead to an increasing specialization away from the domain in which the semantic network was pre-trained. This might result in harmful guidance if our method is not robust to this gap. However, our approach does not use semantic predictions directly, but rather the decoded features of the semantic network themselves, which represent general appearance information that should be more robust to this domain gap. To validate our hypothesis, we further explore the impact + +of erroneous semantic information in the performance of our proposed semantically-guided depth framework. In Figure 6 we present qualitative results highlighting situations in which our pretrained semantic network failed to generate correct semantic predictions for certain objects in the scene, and yet our proposed framework was still able to properly recover depth values for that portion of the environment. These exemplify possible scenarios for erroneous semantic prediction. + +- Imprecise boundaries: in the first row, we can see that the semantic segmentation network does not correctly detect the traffic sign, yet the semantically-guided depth network predicts its shape and depth accurately. 
+- Wrong classification: in the second row, the truck was mistakenly classified as partially "road" and "building", however our semantically-guided depth network was still able to properly recover its overall shape with sharp delineation that was not available from its semantic contour. A similar scenario happens in the same image, with "fence" being partially labeled as "bicycle". +- Missing ontology: there is no "trash can" class in the CityScapes ontology, however in the third row our semantically-guided depth network was able to correctly reconstruct such an object even though it was classified as "fence", similarly to its surroundings. +- Object hallucination: in the fourth row, the contour of a "person" was erroneously predicted by the semantic network and correctly discarded by our semantically-guided framework. + +These examples are evidence that our proposed framework is able to reason over the uncertainty inherent to semantic classification, leveraging this information when accurate to achieve the results reported in this paper, but also discarding it when necessary to generate a better reconstruction according to the self-supervised photometric loss. + +![](images/7fa3c1aab1e4801ebb78e312a38e726e4a5db46ad4f26d1722078a75b9b19150.jpg) +Figure 6: Examples of erroneous semantic predictions that still led to accurate depth predictions using our proposed semantically-guided depth framework. + +# C GENERALIZATION TO DIFFERENT DATASETS + +In the previous sections, we show that our proposed framework is robust to a degraded semantic network, both by pretraining the semantic network with fewer annotated labels (Appendix A) and also by providing evidence that the depth network is able to reason over erroneous predictions to still generate accurate reconstructions (Appendix B). We now go one step further and analyze how our proposed semantically-guided framework generalizes to a dataset that was used neither during pre-training nor for fine-tuning.
To this end, we evaluate our KITTI depth model on the recently released NuScenes dataset (Caesar et al., 2019). The official NuScenes validation split is used, containing 6019 images from the front camera with ground-truth depth maps generated by LiDAR reprojection. Results presented in Table 4 provide additional evidence that our method indeed results in generalization improvements, even on significantly different data from different platforms + +and environments (Karlsruhe, Germany for KITTI vs. Boston, USA and Singapore for NuScenes), outperforming state-of-the-art methods and our baseline (Guizilini et al., 2019). + +
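The pixel-average metrics reported in these tables follow the standard protocol of Eigen et al. (2014). A minimal sketch over the valid (reprojected-LiDAR) pixels; names are illustrative:

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard monocular depth metrics over valid (gt > 0) pixels."""
    mask = gt > 0
    pred, gt = pred[mask], gt[mask]
    thresh = np.maximum(pred / gt, gt / pred)
    return {
        "abs_rel": float(np.mean(np.abs(pred - gt) / gt)),
        "sq_rel": float(np.mean((pred - gt) ** 2 / gt)),
        "rmse": float(np.sqrt(np.mean((pred - gt) ** 2))),
        "rmse_log": float(np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2))),
        "d1": float(np.mean(thresh < 1.25)),       # δ < 1.25
        "d2": float(np.mean(thresh < 1.25 ** 2)),  # δ < 1.25²
        "d3": float(np.mean(thresh < 1.25 ** 3)),  # δ < 1.25³
    }
```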
| Method | Abs Rel | Sq Rel | RMSE | $RMSE_{log}$ | δ<1.25 | δ<1.25² | δ<1.25³ |
|---|---|---|---|---|---|---|---|
| Godard et al. (2018) (R18) | 0.212 | 1.918 | 7.958 | 0.323 | 0.674 | 0.898 | 0.954 |
| Godard et al. (2018) (R50) | 0.210 | 2.017 | 8.111 | 0.328 | 0.697 | 0.903 | 0.960 |
| Guizilini et al. (2019) (MR) | 0.187 | 1.852 | 7.636 | 0.289 | 0.742 | 0.917 | 0.961 |
| Ours (MR) | 0.181 | 1.505 | 7.237 | 0.271 | 0.765 | 0.931 | 0.969 |
+ +Table 4: Generalization capability of different networks, trained on both KITTI and CityScapes datasets and evaluated on the NuScenes (Caesar et al., 2019) dataset. Our proposed semantically-guided architecture is able to further improve upon the baseline from Guizilini et al. (2019), which only used unlabeled image sequences for self-supervised depth training. \ No newline at end of file diff --git a/semanticallyguidedrepresentationlearningforselfsupervisedmonoculardepth/images.zip b/semanticallyguidedrepresentationlearningforselfsupervisedmonoculardepth/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..1ff1fcb7d222156ae26ee7a59666000be8b93fa8 --- /dev/null +++ b/semanticallyguidedrepresentationlearningforselfsupervisedmonoculardepth/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6753fc8ac58472b3713a3c6f9cd9f9422d365f9db31c9bc8ecaeca551957b209 +size 625457 diff --git a/semanticallyguidedrepresentationlearningforselfsupervisedmonoculardepth/layout.json b/semanticallyguidedrepresentationlearningforselfsupervisedmonoculardepth/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..8ac86c1d12b04c02dd15eed1d895b92dea5a0056 --- /dev/null +++ b/semanticallyguidedrepresentationlearningforselfsupervisedmonoculardepth/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b5a70a8904951c35ab5aebe9d32a5cb9f151bbcd0b02cdb9d41b3da935183308 +size 342018 diff --git a/semisupervisedgenerativemodelingforcontrollablespeechsynthesis/77f66ac0-4a60-4eb6-a3f7-c7ee12faf248_content_list.json b/semisupervisedgenerativemodelingforcontrollablespeechsynthesis/77f66ac0-4a60-4eb6-a3f7-c7ee12faf248_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..c980b28b764abc589bdb6d486da58e7ad6eecaaa --- /dev/null +++ b/semisupervisedgenerativemodelingforcontrollablespeechsynthesis/77f66ac0-4a60-4eb6-a3f7-c7ee12faf248_content_list.json @@ -0,0 +1,3 
@@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bbae1676d0b49afd960d1341aa1277c9af7ae7d34bcb80c46761e5150bf4e8a8 +size 100066 diff --git a/semisupervisedgenerativemodelingforcontrollablespeechsynthesis/77f66ac0-4a60-4eb6-a3f7-c7ee12faf248_model.json b/semisupervisedgenerativemodelingforcontrollablespeechsynthesis/77f66ac0-4a60-4eb6-a3f7-c7ee12faf248_model.json new file mode 100644 index 0000000000000000000000000000000000000000..7b5898b904a3355656b42186e5d60b830a06cdb0 --- /dev/null +++ b/semisupervisedgenerativemodelingforcontrollablespeechsynthesis/77f66ac0-4a60-4eb6-a3f7-c7ee12faf248_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4afa6b3551af1591a6e936a6866b60d053a35892fca86135d34602809bf35501 +size 125824 diff --git a/semisupervisedgenerativemodelingforcontrollablespeechsynthesis/77f66ac0-4a60-4eb6-a3f7-c7ee12faf248_origin.pdf b/semisupervisedgenerativemodelingforcontrollablespeechsynthesis/77f66ac0-4a60-4eb6-a3f7-c7ee12faf248_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..93316545e3ad6e8be3b13ee11ca42ae99764aea5 --- /dev/null +++ b/semisupervisedgenerativemodelingforcontrollablespeechsynthesis/77f66ac0-4a60-4eb6-a3f7-c7ee12faf248_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:53b4c5bcf5f4f57b9eb21cd339b1d4f49bf74f65f6597468930b926b470fe421 +size 1851998 diff --git a/semisupervisedgenerativemodelingforcontrollablespeechsynthesis/full.md b/semisupervisedgenerativemodelingforcontrollablespeechsynthesis/full.md new file mode 100644 index 0000000000000000000000000000000000000000..84356c5c367ff26baf20ab324f3bfd5fdddb313a --- /dev/null +++ b/semisupervisedgenerativemodelingforcontrollablespeechsynthesis/full.md @@ -0,0 +1,381 @@ +# SEMI-SUPERVISED GENERATIVE MODELING FOR CONTROLLABLE SPEECH SYNTHESIS + +Raza Habib $^{1*}$ Soroosh Mariooryad $^{2}$ Matt Shannon $^{2}$ Eric Battenberg $^{2}$ RJ Skerry-Ryan $^{2}$ Daisy Stanton $^{2}$ David Kao $^{2}$ Tom 
Bagby $^{2}$ + +$^{1}$University College London (UCL), $^{2}$Google Research. + +raza.habib@cs.ucl.ac.uk + +{soroosh, mattshannon, ebattenberg, rjryan, daisy, davidkao, tombagby}@google.com + +# ABSTRACT + +We present a novel generative model that combines state-of-the-art neural text-to-speech (TTS) with semi-supervised probabilistic latent variable models. By providing partial supervision to some of the latent variables, we are able to force them to take on consistent and interpretable purposes, which previously has not been possible with purely unsupervised TTS models. We demonstrate that our model is able to reliably discover and control important but rarely labelled attributes of speech, such as affect and speaking rate, with as little as 30 minutes of supervision. Even at such low supervision levels we do not observe a degradation of synthesis quality compared to a state-of-the-art baseline. Audio samples are available on the web1. + +# 1 INTRODUCTION + +The ability to reliably control high-level attributes of speech, such as emotional expression (affect) or speaking rate, is often desirable in speech synthesis applications. Achieving this control, however, is made difficult by the necessity of acquiring a large quantity of high-quality labels. In this paper we show that semi-supervised latent variable models can take us a significant step closer towards solving this problem. + +Combining state-of-the-art neural text-to-speech (TTS) systems with probabilistic latent variable models provides a natural framework for discovering aspects of speech that are rarely labelled or even difficult to describe. Both inferring the latent prosody and generating samples with sufficient variety require reasoning about uncertainty and are thus a natural fit for deep generative models. + +There has been recent progress in applying stochastic gradient variational Bayes (SGVB) (Kingma & Welling, 2013; Rezende et al., 2014) to training probabilistic neural TTS models. Battenberg et al.
(2019) and Hsu et al. (2018) have shown that it is possible to use latent variable models to discover features such as speaking style, speaking rate, arousal, gender and even the quality of the recording environment. + +However, these models are formally non-identifiable (Hyvärinen & Pajunen, 1999; Locatello et al., 2019), which implies that repeated training runs will not reliably discover the same latent attributes. Even if they did, a lengthy human post-processing stage is necessary to identify what the model has learned on any given training run. In order to be of practical use for control, it is not enough for the models to discover latent attributes; they need to do so reliably and in a way that is robust to random initialization and to changes in the model. We demonstrate that the addition of even modest amounts of supervision can be sufficient to achieve this reliability. + +By augmenting state-of-the-art neural TTS with semi-supervised deep generative models within the VAE framework (Kingma et al., 2014; Narayanaswamy et al., 2017), we show that it is possible not only to discover latent attributes of speech but to do so in a reliable and controllable manner. In particular we are able to achieve reliable control over affect, speaking rate and F0 variation (F0 is the + +![](images/44e912eb2cedc59bd58f239eb4c392cbaab194d108ae7cacf5b1f4230950f3f2.jpg) +(a) CBHG block +(b) Seq-to-seq network +Figure 1: Schematic showing how we parameterize the conditional likelihood $p(x|y, z_u, z_s)$. Left: A block of 1-D convolutions and RNNs originally introduced by Wang et al. (2017) and described in detail in the appendix. Right: Schematic of the sequence-to-sequence network that outputs the means of our auto-regressive distribution. At each decoder time step, the network outputs the means for the next two spectrogram frames. + +fundamental frequency of oscillation of the vocal folds).
Further, we provide demonstrations that it is possible to transfer controllability to speakers for whom we have no labels. Our core contributions are: + +- To combine semi-supervised latent variable models with neural TTS systems, producing a system that can reliably discover attributes of speech we wish to control. +- To demonstrate that as little as 30 minutes of supervision can be sufficient to improve prosody and allow control over speaking rate, fundamental frequency (F0) variation and affect, a problem of interest to the speech community for well over two decades (Schroder, 2001). +- To imbue TTS models with control over affect, F0 and speaking rate whilst still maintaining prosodic variation when sampling. + +# 2 GENERATIVE MODEL + +Our generative model, shown in figures 1 and 2a, consists of an autoregressive distribution over a sequence of acoustic features, $x_{1\dots t}$ , that are generated conditioned on a sequence of text, $y_{1\dots k}$ , and on two latent variables, $z_{u}$ and $z_{s}$ . The latent variables can be discrete or continuous. $z_{s}$ represents the variations in prosody that we seek to control and is semi-supervised. $z_{u}$ is fully unobserved and represents latent variations in prosody (intonation, rhythm, stress) that we wish to model but not explicitly control. Once trained, our model can be used to synthesize acoustic features from text. Similar to Tacotron 2 (Shen et al., 2018), we then generate waveforms by training a second network such as WaveNet (van den Oord et al., 2016) or WaveRNN (Kalchbrenner et al., 2018) to act as a vocoder. In our case we use WaveRNN. + +We parameterize our likelihood $p(x_{1\dots t}|y_{1\dots k},z_u,z_s,\theta)$ by a sequence-to-sequence neural network with attention (Shen et al., 2018; Graves, 2013; Bahdanau et al., 2014) that is shown schematically in figure 1. Details largely follow Tacotron (Wang et al., 2017) and are given in appendix A.
At each time step we model a mel-spectrogram frame with a fixed-variance isotropic Laplace distribution whose mean is output by the neural network. We condition on each of the latent variables by concatenating the vectors $z_{u}$ and $z_{s}$ to the representation of the text-encoder, before the application of the attention mechanism. In the case of continuous $z$ we use a standard normal prior and in the case of discrete $z$ we use a uniform categorical prior with one-hot encoding. + +![](images/35bf62786c0fa749cf35dbb2b3c6f67f4f59d0525bb4411e116b4e2b23ea04af.jpg) +(a) Generative model + +![](images/d284e46fd4ffc153f365d66c91a32f54abe76541ea71734014236db319c794ab.jpg) +(b) Unsupervised posterior +Figure 2: Left: The graphical model showing the conditional independence assumptions between each of the stochastic variables. Centre: The structure of the variational distribution used to approximate the posterior for fully unsupervised data points and Right: supervised points. + +![](images/10929cfb449a2068ddddb084ca8c461b439aee3cf28027d0393ad5b31ca723ac.jpg) +(c) Supervised posterior + +# 2.1 SEMI-SUPERVISED TRAINING + +Following Kingma et al. (2014); Narayanaswamy et al. (2017), we train our model via stochastic gradient variational Bayes (SGVB). That is we approximately maximize the log-likelihood of our training data by maximizing a variational lower bound using stochastic gradient ascent. Since we are training with semi-supervision we in fact need two lower bounds: one for the data points for which $z_{s}$ is observed; one for the case where $z_{s}$ is unobserved. In our models the fully latent variable $z_{u}$ is always continuous but the semi-supervised latent $z_{s}$ can be continuous or discrete. The conditional independence structure of our variational distributions is shown in figures 2b and 2c.
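The conditioning-by-concatenation described above can be sketched as follows, with illustrative shapes and names (not the authors' implementation): each latent vector is broadcast along the text positions and appended to every encoder state before attention.

```python
import numpy as np

def condition_text_encoding(text_enc, z_u, z_s):
    """Concatenate latent vectors to every text-encoder step, pre-attention.

    text_enc: (k, d) encoded text sequence; z_u, z_s: latent vectors
    (z_s one-hot when discrete). Each latent is tiled along the k text
    positions so the attention mechanism sees the conditioning everywhere.
    """
    k = text_enc.shape[0]
    z = np.concatenate([z_u, z_s])               # (d_u + d_s,)
    z_tiled = np.repeat(z[None, :], k, axis=0)   # (k, d_u + d_s)
    return np.concatenate([text_enc, z_tiled], axis=-1)  # (k, d + d_u + d_s)
```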
On supervised data, the per-datapoint bound is:

$$
\begin{array}{l}
\log p(x, z_s \mid y, \theta) = \log \int p(x, z_u, z_s \mid y, \theta)\,\mathrm{d}z_u \\
\quad \geq \mathbb{E}_{q(z_u \mid x, y, z_s, \phi)}\left[\log\left(\frac{p(x \mid y, z_u, z_s, \theta)\, p(z_u)\, p(z_s)}{q(z_u \mid x, y, z_s, \phi)}\right)\right] \\
\quad = \mathbb{E}_{q(z_u \mid x, y, z_s, \phi)}\left[\log p(x \mid y, z_u, z_s, \theta)\right] + \log p(z_s) - D_{\mathrm{KL}}\left(q(z_u \mid x, y, z_s, \phi)\,\|\,p(z_u)\right) \\
\quad = \mathcal{L}_s(\theta, \phi; x, y, z_s)
\end{array}
$$

where $q(z_{u}|x,y,z_{s},\phi)$ is a parametric variational distribution introduced to approximately marginalize $z_{u}$, $\theta$ are the parameters of the generative model, and $\phi$ are the parameters of the variational distributions. The intractable integrals are approximated with reparameterized samples. For the cases where $z_{s}$ is unobserved and discrete, the bound is:

$$
\begin{array}{l l}
\log p(x \mid y, \theta) = \log \int \sum_{z_s} p(x, z_u, z_s \mid y, \theta)\,\mathrm{d}z_u & (1) \\
\quad \geq \sum_{z_s} q(z_s \mid x, y, \phi)\, \mathcal{L}_s(\theta, \phi; x, y, z_s) + H\left(q(z_s \mid x, y, \phi)\right) & (2) \\
\quad = \mathcal{L}_u(\theta, \phi; x, y) & (3)
\end{array}
$$

and when $z_{s}$ is continuous we replace the sum above with an integral and again approximate with reparameterized samples. The variational distributions are parameterized by a neural network that takes as input the text, spectrograms and other conditioning variables and outputs the parameters of the distribution. The exact structure of this network is given in appendix A.
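To make the supervised bound concrete, the sketch below (not the paper's code) evaluates a single-sample estimate of $\mathcal{L}_s$ for a discrete $z_s$: a fixed-variance isotropic Laplace reconstruction term, the uniform categorical log-prior $\log p(z_s)$, and the closed-form KL divergence between a diagonal-Gaussian $q(z_u|\cdot)$ and the standard normal prior on $z_u$:

```python
import math

def laplace_loglik(x, mu, b=1.0):
    """Fixed-variance isotropic Laplace reconstruction term log p(x|y,z_u,z_s)
    for one frame, with scale b shared across dimensions."""
    return sum(-abs(xi - mi) / b - math.log(2.0 * b) for xi, mi in zip(x, mu))

def kl_diag_gauss_vs_std_normal(q_mu, q_sigma):
    """Closed-form D_KL( N(mu, diag(sigma^2)) || N(0, I) ), the z_u penalty."""
    return sum(0.5 * (m * m + s * s - 1.0) - math.log(s)
               for m, s in zip(q_mu, q_sigma))

def elbo_supervised(frame, pred_mean, n_classes, q_mu, q_sigma):
    """Single-sample estimate of L_s with a uniform categorical prior p(z_s):
    reconstruction + log p(z_s) - KL(q(z_u|.) || p(z_u))."""
    return (laplace_loglik(frame, pred_mean)
            + math.log(1.0 / n_classes)
            - kl_diag_gauss_vs_std_normal(q_mu, q_sigma))
```

In training, `frame`/`pred_mean` would range over the whole spectrogram and `q_mu`, `q_sigma` would be the posterior network's outputs; the KL vanishes exactly when the posterior matches the prior, as the sanity check `kl_diag_gauss_vs_std_normal([0.0], [1.0]) == 0.0` shows.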
We have implicitly assumed that $q(z_{u},z_{s}|x,y,\phi)$ may be factorized as $q(z_{u},z_{s}|x,y,\phi) = q(z_{u}|x,y,z_{s},\phi)q(z_{s}|x,y,\phi)$, with shared parameters between these two distributions (see appendix A). Optimizing the variational objective with respect to the parameters $\phi$ encourages the variational distributions to match the posterior of the generative model $p(z_{u},z_{s}|x,y,\theta)$. Unlike previous work (Hsu et al., 2018), we do not assume that the posterior on the latents is independent of the text, as this dependence likely exists in the model due to explaining away. That is to say that although the text and the latents are independent in our prior, observing the spectrogram correlates them in the posterior because they both explain variation in the spectrogram. This has been shown to be significant by Battenberg et al. (2019).

![](images/f842dab39824896eea11ae007ae3d23a15024fe870f57c2ea05d95500c0c7ea5.jpg)
Figure 3: The circumplex model of emotion. Each possible emotion is represented in a 2-dimensional plane consisting of an arousal dimension and a valence dimension. This figure is borrowed from Munoz-de Escalona & Canas (2017).

If we define:

$$
\tilde{q}\left(z_s \mid x, y, \phi\right) = \left\{\begin{array}{ll} q\left(z_s \mid x, y, \phi\right) & \text{if unsupervised} \\ \gamma\,\delta\left(z_s - z_{s,\text{observed}}\right) & \text{if supervised} \end{array}\right.
\tag{4}
$$

then we can write the overall objective over both the supervised and unsupervised points succinctly as:

$$
\mathcal{L}(\theta, \phi) = \mathbb{E}_{x, y}\left[\sum_{z_s} \tilde{q}\left(z_s \mid x, y, \phi\right) \mathcal{L}_s\left(\theta, \phi; x, y, z_s\right) + H\left(\tilde{q}\left(z_s \mid x, y, \phi\right)\right)\right] \tag{5}
$$

where summation would again be replaced by integration for continuous $z_{s}$, and $\gamma$ (shown in equation 4) is a weighting factor that pre-multiplies the loss for any supervised point. This weighting was also used in previous work such as Narayanaswamy et al. (2017), who showed it to be beneficial at very low levels of supervision.

Writing the objective in this form allows an intuitive interpretation of the semi-supervised training procedure. When supervision is provided, our objective function is evaluated at the observed value of $z_{s}$. When supervision is not provided, we evaluate the objective function for every possible value of $z_{s}$ and take a (potentially infinite for continuous $z_{s}$) weighted average. The weighting in the average is given by $q(z_{s}|x,y,\phi)$, which is simultaneously trained to approximate the posterior $p(z_{s}|x,y,\theta)$. In other words, on unsupervised utterances we evaluate our objective for each possible value of the latent attribute and weight by the (approximate) posterior probability that this value of the latent was responsible for generating the utterance.

As $q(z_{s}|x,y,\phi)$ is trained to approximate $p(z_{s}|x,y,\theta)$, we can expect it to become a reasonable classifier/regressor for the semi-supervised latent attribute as the model improves. For example, when $z_{s}$ represents an affect label, $p(z_{s}|x,y,\theta)$ is the model's posterior probability over affect given text and speech. By taking the most likely posterior class, this distribution can be used as an affect classifier.
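For a discrete $z_s$, equation 5 reduces per datapoint to either a posterior-weighted average of $\mathcal{L}_s$ plus an entropy term (unsupervised), or $\gamma$ times $\mathcal{L}_s$ at the observed class (supervised, where the entropy of the delta vanishes). A minimal sketch with hypothetical per-class bound values:

```python
import math

def per_point_objective(per_class_Ls, q_zs=None, observed=None, gamma=1.0):
    """Equation 5 for one datapoint with a discrete z_s.
    per_class_Ls[k] is the supervised bound L_s evaluated with z_s = k."""
    if observed is not None:
        # Supervised: tilde-q is gamma * delta(z_s - observed).
        return gamma * per_class_Ls[observed]
    # Unsupervised: weight each class by q(z_s|x,y,phi) and add H(q).
    entropy = -sum(p * math.log(p) for p in q_zs if p > 0.0)
    return sum(p * L for p, L in zip(q_zs, per_class_Ls)) + entropy

# Hypothetical bounds for one utterance over 3 affect classes:
Ls = [-120.0, -100.0, -110.0]
unsup = per_point_objective(Ls, q_zs=[0.25, 0.5, 0.25])
sup = per_point_objective(Ls, observed=1, gamma=100.0)
```

Note how $\gamma > 1$ scales up the contribution of each supervised point relative to the unsupervised ones, which is the over-weighting discussed above.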
However, this variational distribution is only trained on unsupervised training points and so does not benefit directly from the supervised data. To overcome this problem we follow Kingma et al. (2014) and add a classification loss to our objective. The overall objective becomes:

$$
\mathcal{L}_{\text{total}}(\theta, \phi) = \mathcal{L}(\theta, \phi) + \alpha\, \mathbb{E}_{x, y, z_s}\left[\log q(z_s \mid x, y, \phi)\right] \tag{6}
$$

where $\alpha$ is a hyperparameter that adjusts the contribution of this term.

# 3 DATA

We have used a proprietary, high-quality labeled data-set of 40 English speakers. The training set consists of 72,405 utterances with durations of at most 5 seconds (45 hours). The validation and test sets each contain 745 utterances, or roughly 30 minutes of data. We vary the amount of supervision in the experiments below. We also experimented with transferring controllability to a fully unlabeled data-set of audiobook recordings by Catherine Byers (the speaker from the 2013 Blizzard Challenge), which exhibits high variation in affect and prosody, and to other, less expressive speakers. We strongly encourage the reader to listen to the synthesized samples on our demo page.

In this work we chose to focus on learning to control affect with a discrete representation, as well as speaking rate and F0 variation with a continuous representation, as these are challenging aspects of prosody to control. Our method could be applied to other factors without modification.
Arousal measures the level of excitement or energy, and valence measures positivity or negativity. Figure 3 shows a chart of emotions plotted in the arousal-valence plane, where we can see that, for example, high arousal and high valence corresponds to joy or happiness, whereas high arousal and low valence might correspond to anger or frustration.

Our data-set was recorded under studio conditions with trained voice actors who were prompted to read dialogues in one of three valences: -2, -1, +2, and two arousal values: -2 (low), +2 (high). This was achieved by prompting the actors to read dialogues in either a happy, sad or angry voice at two levels of arousal. This results in 6 possible affective states, which we chose to model as discrete and use as our supervision labels.

# 3.2 SPEAKING RATE AND F0 VARIATION CONTROL

In order to demonstrate that we can control continuous attributes we also created approximate real-valued labels for speaking rate and arousal for all of our data. We generate the approximate speaking rate as the number of syllables per second in each utterance. F0, also known as the fundamental frequency, measures the frequency of vibration of the vocal folds during voiced sounds. Variation in F0 is highly correlated with arousal and roughly measures how expressive an utterance is. To create approximate arousal labels we extracted the F0 contour from each of our utterances, using the YIN algorithm (De Cheveigné & Kawahara, 2002), and measured its standard deviation. We then performed a whitening transform on these two approximate labels in order to match them to our standard normal prior.

These artificial labels would of course be cheap to obtain for the entire data-set and would not justify the use of semi-supervision in real applications; but our objective here is to evaluate and demonstrate the efficacy of semi-supervision rather than to specifically control a particular attribute.
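The continuous labels above can be reproduced in a few lines. Here per-dimension standardization stands in for the whitening transform, and all measurements are made up for illustration:

```python
import statistics

def syllable_rate(n_syllables, duration_s):
    """Approximate speaking-rate label: syllables per second."""
    return n_syllables / duration_s

def whiten(values):
    """Standardize to zero mean and unit variance so the labels match the
    standard normal prior on the continuous z_s."""
    mu, sd = statistics.mean(values), statistics.pstdev(values)
    return [(v - mu) / sd for v in values]

# Hypothetical per-utterance measurements (syllable counts, durations in
# seconds, and std of each YIN-extracted F0 contour in Hz):
rates = [syllable_rate(n, d) for n, d in [(12, 3.0), (20, 4.0), (9, 3.0)]]
f0_stds = [22.0, 35.0, 18.0]
rate_labels, f0_labels = whiten(rates), whiten(f0_stds)
```

In practice the F0 contours would come from a YIN implementation applied to the raw audio; only the summary statistics enter the labels.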
We have chosen syllable rate and F0 standard deviation because they both correspond to subjectively distinct variations of interest, and they are more easily quantifiable than affect and so provide strong evidence of controllability. For the continuous latents we are not only able to interpolate speaking rates and F0 variations but also to extrapolate outside of our training data. We provide examples on our demo page of samples with significantly greater/lower speed and F0 variation than typically observed in natural speech.

# 4 EXPERIMENTS AND RESULTS

To evaluate the efficacy of semi-supervised latent variable models for controllable TTS, we trained the model described in section 2 on the above data-sets at varying levels of supervision as well as for
| preference score | Valence: baseline vs. angry | Valence: baseline vs. sad | Valence: baseline vs. happy | Arousal: low vs. high |
|---|---|---|---|---|
| 27 min (1%) | -0.20 ± 0.10 | -0.60 ± 0.08 | -0.43 ± 0.09 | -0.50 ± 0.09 |
| 135 min (5%) | -0.74 ± 0.07 | -0.83 ± 0.06 | -0.83 ± 0.06 | -0.57 ± 0.08 |
| 270 min (10%) | -0.71 ± 0.07 | -0.86 ± -0.95 | -0.61 ± 0.08 | -0.59 ± 0.08 |
Table 1: Subjective metrics for affect control. Negative values indicate a preference for the controlled model; +1 and -1 indicate a total preference for samples A and B, respectively. For valence, raters are told that a sample is intended to convey a particular emotion, e.g. happy, then presented with a sample from the baseline without control (A) and from the controlled model (B), and asked to choose between them. For arousal, raters are told to choose the sample that is more vocally aroused, and presented with controlled samples in low (A) and high (B) arousal. To avoid bias, the orders are randomly altered during rating. We show preference scores and $95\%$ confidence intervals at multiple supervision levels.

varying settings of the hyperparameters: $\alpha$, which weights the supervision loss, and $\gamma$, which over-weights supervised training points. We found that a value of $\alpha = 1$ was optimal for the discrete experiments and $\alpha = 0$ for the continuous experiments, which corresponds to simply optimizing the ELBO. For each experiment we report results for the best $\gamma$ found and for $\gamma = 1$, which corresponds to no over-weighting of the supervised points. All models were trained using the ADAM optimizer with a learning rate of $10^{-3}$ and run for 300,000 training steps with a batch size of 256, distributed across 32 Google Cloud TPU chips. All models were implemented using TensorFlow 1 (Abadi et al., 2016).

Assessing the degree of control is challenging, as interpreting affect is subjective. We used two objective metrics of control, subjective evaluation from human raters, and a third objective metric of overall quality. For affect, the first objective metric we introduced was the test-set accuracy of a 6-class affect classifier trained on the ground-truth training data and applied to generated samples from the model (shown in figure 4a).
The classifier is a convolutional neural network whose structure mirrors the posterior network $q(z_{s}|x,y,\phi)$; its exact architecture is given in appendix A. We also provide subjective metrics of controllability, shown in table 1. For speaking-rate control, we measure the mean syllable-rate error on a held-out test set. The syllable-rate error is calculated as the absolute difference between the desired syllable rate and that measured from the synthesized sample. We calculate an analogous error rate for F0 variation.

Whilst the two metrics above measure controllability, they don't tell us whether control comes at the expense of a degradation in synthesis quality. To probe quality we use three further metrics. The first is Mel-Cepstral Distortion with Dynamic Time Warping (MCD-DTW) (Kubichek, 1993) on a held-out test set, shown in figure 4d. MCD-DTW is a measure of the difference between the ground-truth and synthesized mel spectrograms that is known to correlate well with human perception (Kubichek, 1993). The second metric of quality is crowd-sourced mean opinion scores (MOS). The third is speech recognition word error rate (WER) and character error rate (CER) on audio samples. The MOS and speech recognition results are summarized in table 2.

To demonstrate that including unlabelled data via semi-supervision is beneficial, we also provide MOS and speech recognition errors for fully supervised subsets of the data in table 3. These show that close to 5 hours of data is required to train a reasonable-quality TTS model, far more than the 30 minutes of supervision needed to control prosodic aspects of speech.

We provide further details of all of these metrics in appendix B, and sample spectrograms in appendix C.
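For reference, the MCD-DTW metric can be sketched in a few self-contained lines. This is a simplified reading of Kubichek (1993) combined with a standard dynamic-time-warping alignment, not the exact evaluation code used for the results above:

```python
import math

def mcd(c_ref, c_syn):
    """Per-frame mel-cepstral distortion in dB (Kubichek, 1993); by
    convention c0 is excluded, so pass coefficients c1..cD."""
    sq = sum((a - b) ** 2 for a, b in zip(c_ref, c_syn))
    return (10.0 / math.log(10)) * math.sqrt(2.0 * sq)

def mcd_dtw(ref, syn):
    """Dynamic time warping over frames with per-frame MCD cost,
    normalized by the alignment path length."""
    n, m = len(ref), len(syn)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    steps = [[0] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = mcd(ref[i - 1], syn[j - 1])
            # Allow the usual match / insertion / deletion moves.
            best = min((cost[i - 1][j - 1], steps[i - 1][j - 1]),
                       (cost[i - 1][j], steps[i - 1][j]),
                       (cost[i][j - 1], steps[i][j - 1]))
            cost[i][j] = d + best[0]
            steps[i][j] = best[1] + 1
    return cost[n][m] / steps[n][m]
```

Because DTW realigns frames in time before comparing them, this metric deliberately discounts timing differences, which is exactly why it can be misleading when speaking rate is the attribute being controlled.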
# 5 DISCUSSION

The classification accuracy (see figure 4a), subjective metrics (see table 1) and error-rate results (see figures 4b and 4c) provide a clear demonstration that, using semi-supervised latent variables, we are able to achieve control of both continuous and discrete attributes of speech. There is not a significant degradation in overall quality, as evidenced by the mean opinion scores, which are above the baseline, Tacotron, and by the speech recognition errors (see table 2). We also include a baseline of our Tacotron model augmented only by the unsupervised latent $z_{u}$, to aid comparison.

![](images/5ec7e86f8c718087e04be5831146bb861e3e5aab3c3d51bf2d016cd5e23d088f.jpg)

![](images/063582f53f79907e292dcc8d0140ab2d78f42caf58150b414456b3c853d66c41.jpg)

![](images/d695fc45382bc8632a59121a5f40d573737826c6860b027ec88c7368c98e9fb6.jpg)

![](images/a1d7e416c148f6cffa7580c4e2f48c384ebb7f90661b680efff4363eae6555fa.jpg)

Figure 4: Objective evaluation metrics as a function of supervision fraction: (a) affect classification accuracy, (b) speaking-rate error, (c) F0 variation error, (d) MCD-DTW. $100\%$ supervision corresponds to 45 hours of supervised training data and $0\%$ supervision corresponds to the base Tacotron. For MCD-DTW and the error rates, lower is better.
| | ground truth | baseline | baseline with $z_u$ | F0 (continuous latent, 10% sup.) | speaking-rate (continuous latent, 10% sup.) | affect (discrete latent, 10% sup.) |
|---|---|---|---|---|---|---|
| MOS | 4.52 ± 0.07 | 4.09 ± 0.09 | 4.24 ± 0.08 | 4.28 ± 0.07 | 4.16 ± 0.08 | 4.17 ± 0.09 |
| WER | 4.49 | 4.93 | 4.93 | 4.06 | 2.53 | 6.09 |
| CER | 2.11 | 2.38 | 2.22 | 1.80 | 1.14 | 3.12 |
+ +Table 2: Metrics of overall quality: Mean Opinion Scores (MOS) alongside $95\%$ confidence intervals, speech recognition word error rate (WER), and character error rate (CER). The results show no degradation in performance compared to the baseline. + +
| | 27 min (1%) | 54 min (2%) | 108 min (4%) | 135 min (5%) | 270 min (10%) | 45 hours (100%) |
|---|---|---|---|---|---|---|
| MOS | unintelligible | unintelligible | 3.20 ± 0.13 | 3.52 ± 0.11 | 4.03 ± 0.08 | 4.08 ± 0.09 |
| WER | 91.95 | 96.31 | 19.83 | 7.55 | 5.56 | 4.93 |
| CER | 74.57 | 78.9 | 12.54 | 4.46 | 2.79 | 2.22 |
Table 3: Metrics of overall quality for fully supervised data at varying data-set sizes, showing significant degradation below 270 minutes.

The MCD-DTW scores for F0 variation and affect are improved at all levels of supervision (figure 4d). Whilst the MCD-DTW is degraded for speaking rate, this is likely a misleading metric when targeting changes in timing, as the dynamic-time-warping component of MCD-DTW changes exactly the aspect we wish to control. For speaking rate, the combination of MOS and samples is a better indication of overall quality. We are able to reduce supervision to levels as low as $1\%$ (30 minutes) and still retain a significant degree of control. We show on our demo page that even at 3 minutes of supervision we can still achieve control of speaking rate, and that we are able to extrapolate outside the range of values seen during training. On the affect data our classification accuracy doesn't degrade significantly until we reach $10\%$ (270 minutes) supervision and remains significantly above chance down to levels as low as $1\%$ (30 minutes); see figure 4a and table 1. Obtaining 30 minutes of supervised data is likely within reach of most teams constructing TTS systems. Unlike previous work on generative modelling for control (Hsu et al., 2018; Wang et al., 2018), we do not require a post-processing stage to determine what our latent variables control, and we can pre-determine which aspects we wish to control through choice of data. By separating our latent variables into those that are partially supervised and those that are fully unsupervised, we retain the ability to model other latent aspects of prosody; this means that we can still draw samples of varying prosody whilst holding constant the affect, speaking rate or F0 variation.

We observe the greatest degree of affect control, as measured by classifier accuracy, when $\alpha = 1$ and $\gamma = 100$.
This means that to achieve the highest controllability we needed to 1) provide extra information to our approximate posterior $q(z_{s}|x,y,\phi)$ and 2) over-represent the supervised data at low levels of supervision. Although both of these hyperparameters have been used in the literature before (Narayanaswamy et al., 2017; Kingma et al., 2014) and shown to be either beneficial or necessary, they aren't strictly required by our probabilistic framework, so it is worth considering why they are needed. There are three potential sources of error in any generative model trained with SGVB: the model itself may be mis-specified, such that the true data-generating distribution is not in the model class; the parametric family chosen to approximate the posterior may be overly restrictive; and the optimization landscape may contain undesirable local minima. These problems have afflicted previous work with deep latent variable models trained with SGVB, resulting in models that don't use their latent variables unless trained with complex annealing schedules (Bowman et al., 2015). In our case we believe that the necessity to set $\alpha$ and $\gamma$ arises from a combination of model mis-specification and local minima. If $\alpha$ is set to 0, then at the start of training $q(z_{s}|x,y,\phi)$ is trained only to approximate $p(z_{s}|x,y,\theta_{0})$, which is randomly initialized. We found empirically that in our discrete-latent experiments this resulted in $q(z_{s}|x,y,\phi)$ collapsing early in training to a point-mass on a single class. Having ended up in this undesirable local minimum, the posterior distribution never recovered, despite this being an obviously poor approximation to the model posterior later in training. The addition of the classification loss and supervision weighting was sufficient to overcome this collapse and allow $q$ to continue to model the posterior.
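The collapse described above is easy to detect in practice: a point-mass posterior has near-zero entropy. A minimal monitoring sketch (hypothetical batch values, not the paper's training code):

```python
import math

def mean_posterior_entropy(batch_q):
    """Mean entropy of q(z_s|x,y) over a batch of categorical posteriors.
    Collapse to a point mass on a single class, the failure mode seen when
    alpha = 0, shows up as entropy near zero early in training."""
    def entropy(p):
        return -sum(pi * math.log(pi) for pi in p if pi > 0.0)
    return sum(entropy(p) for p in batch_q) / len(batch_q)

healthy = [[0.2, 0.5, 0.3], [0.4, 0.4, 0.2]]       # hypothetical posteriors
collapsed = [[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]]     # point mass on class 0
assert mean_posterior_entropy(collapsed) < 1e-9
assert mean_posterior_entropy(healthy) > 0.5
```

Tracking this statistic over training steps gives an early warning that the classification loss and supervision weighting are needed, before the degradation shows up in sample quality.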
The optimization landscape is strongly affected by the relative size of the conditional likelihood and KL terms in our objective. These are in turn strongly affected by our choice of conditional independence assumptions and output distributions. Thus, a natural direction for further work is to increase the expressivity of the conditional likelihood $p(x|y,z_s,z_u,\theta)$ to reduce model mis-specification. This could be done by learning the variance of the Laplace distribution we currently use, or by parameterizing more expressive output distributions that do not assume conditional independence across spectrogram channels. We conjecture that with more expressive output distributions it may be possible to reduce the need for the $\alpha$ and $\gamma$ terms in the objective. In this work we chose to use quite simple unconditional diagonal Gaussian priors, as our primary goal was to demonstrate the practicality of semi-supervision. Another natural extension would be to use conditional priors $p(z|y)$ and more expressive priors such as mixtures, as was done in Hsu et al. (2018).

# 5.1 RELATED WORK

There has been enormous recent progress in neural TTS, with numerous novel models proposed in recent years to synthesize speech directly from characters or phonemes (Shen et al., 2018; Arik et al., 2017; Gibiansky et al., 2017; Ping et al., 2017; Vasquez & Lewis, 2019; Taigman et al., 2017). Differentiating factors between these models include the degree of parallelism, with some models using Transformer-based architectures (Ren et al., 2019), the choice of conditional independence assumptions made (Vasquez & Lewis, 2019), and the number of separately trained components (Gibiansky et al., 2017). Our work here is largely orthogonal to the exact structure of the conditional likelihood $p(x|y,z_s,z_u)$ and could be combined with all of the above methods.

Much of the recent research focus has been on modeling latent aspects of prosody.
Early attempts include Global Style Tokens (Wang et al., 2018) which attempted to learn a trainable set of style-embeddings. Wang et al. (2018) condition the Tacotron decoder on a linear combination of embedding vectors whose weights during training are predicted from the ground-truth spectrogram. + +They were able to achieve prosodic control but there is no straightforward way to sample utterances of varying prosody. More recently, attempts have also been made to combine probabilistic latent variable models trained using SGVB (Akuzawa et al., 2018; Wan et al., 2019). These models use a fully unsupervised and non-identifiable approach, which makes it difficult to disentangle or interpret their latent variables for control. Hsu et al. (2018) attempt to overcome this problem by using a Gaussian mixture as the latent prior and so perform clustering in the latent space. Battenberg et al. (2019) introduce a hierarchical latent variable model to separate the modelling of style from prosody. However, all of these methods are fully unsupervised and this results in latents that can be hard to interpret or require complex post-processing. + +The work most similar to ours is Wu et al. (2019) which also attempts to achieve affect control using semi-supervision with a heuristic approach based on Global Style Tokens (Wang et al., 2018). Wu et al. (2019) add a cross-entropy objective to the weightings of the style-tokens that encourages them to be one-hot on points with supervision. Similar to our method, they are able to achieve control over affect but unlike our method they do not have a principled probabilistic interpretation nor the ability to simultaneously model aspects of prosody other than emotion. The result is that their method is not able to draw samples of varying prosody for the same utterance with fixed emotion. 
Furthermore, whilst our method can be applied to both continuous and discrete controllable factors, it's not clear how to extend the style-token-based approach to handle continuous latent factors.

In the wider generative modelling literature, the combination of semi-supervision and deep latent variable models was first introduced by Kingma et al. (2014), who focus on using unlabelled data to improve classification accuracy. The potential to use the same technique for controllable generation was recognized by Narayanaswamy et al. (2017), who also provided demonstrations on image synthesis tasks. Since that work, interest in learning disentangled latent variables has grown, but has generally pursued alternate directions such as re-weighting the ELBO (Higgins et al., 2017), augmenting the objective to encourage factorization (Kim & Mnih, 2018) or using adversarial training (Mathieu et al., 2016). The ability to transfer controllability to speakers for whom we do not have supervision is referred to as domain transfer, and our model bears similarities to that introduced by Ilse et al. (2019), though they use a mixture in their latent space more similar to Hsu et al. (2018).

# 5.2 ETHICAL CONSIDERATIONS

As with many advances in speech synthesis, progress in controllability raises the prospect that bad actors may misuse the technology, either for misinformation or to commit fraud. Improvements in data efficiency and realism increase these risks and, when publishing, a consideration has to be made as to whether the benefits of the developments outweigh the risks. It is the opinion of the authors in this case that, since the focus of this work is on improved prosody, with potential benefits to human-computer interfaces, the benefits likely outweigh the risks. We nonetheless urge the research community to take seriously the potential for misuse both of this work and of broader advances in TTS.
# 6 CONCLUSION

We have shown that the combination of semi-supervised latent variable models with neural TTS presents a practical and principled path towards building speech synthesizers we can control. Unlike previous fully unsupervised methods, we are able to consistently and reliably learn to control predetermined aspects of prosody. Our method can be applied to any latent attribute of speech for which a modest amount of labelling can be obtained, whether it be continuous or discrete. In our experiments we found that 30 minutes of supervision was sufficient, a volume of data that is within the reach of most research teams. We are able to learn to control subtle characteristics of speech such as affect, and for continuous attributes we have provided demonstrations of extrapolation to ranges never seen during training and to speakers with no supervision. Augmenting existing state-of-the-art TTS systems with latent variables does not degrade synthesis quality, and we evidence this with crowd-sourced mean opinion scores. Unlike similar heuristic methods, our probabilistic formulation allows us to draw samples of varying prosody whilst holding constant some attribute we wish to control.

# REFERENCES

M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al. TensorFlow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265-283, 2016.
K. Akuzawa, Y. Iwasawa, and Y. Matsuo. Expressive speech synthesis via modeling expressions with variational autoencoder. In Interspeech 2018, 2018.
S. Ö. Arik, M. Chrzanowski, A. Coates, G. Diamos, A. Gibiansky, Y. Kang, X. Li, J. Miller, A. Ng, and J. Raiman. Deep voice: Real-time neural text-to-speech. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, 2017.
D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate.
arXiv preprint arXiv:1409.0473, 2014.
E. Battenberg, S. Mariooryad, D. Stanton, R. Skerry-Ryan, M. Shannon, D. Kao, and T. Bagby. Effective use of variational embedding capacity in expressive end-to-end speech synthesis. arXiv preprint arXiv:1906.03402, 2019.
E. Battenberg, R. Skerry-Ryan, S. Mariooryad, D. Stanton, D. Kao, M. Shannon, and T. Bagby. Location-relative attention mechanisms for robust long-form speech synthesis. In 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020.
S. Bowman, L. Vilnis, O. Vinyals, A. M. Dai, R. Jozefowicz, and S. Bengio. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015.
A. De Cheveigné and H. Kawahara. YIN, a fundamental frequency estimator for speech and music. The Journal of the Acoustical Society of America, pp. 1917-1930, 2002.
A. Gibiansky, S. Arik, G. Diamos, J. Miller, K. Peng, W. Ping, J. Raiman, and Y. Zhou. Deep voice 2: Multi-speaker neural text-to-speech. In Advances in Neural Information Processing Systems, pp. 2962-2970, 2017.
A. Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner. $\beta$-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework. *ICLR*, 2017.
W. Hsu, Y. Zhang, R. J. Weiss, H. Zen, Y. Wu, Y. Wang, Y. Cao, Y. Jia, Z. Chen, J. Shen, et al. Hierarchical generative modeling for controllable speech synthesis. International Conference on Learning Representations, 2018.
A. Hyvärinen and P. Pajunen. Nonlinear independent component analysis: Existence and uniqueness results. Neural Networks, 1999.
M. Ilse, J. M. Tomczak, C. Louizos, and M. Welling. DIVA: Domain invariant variational autoencoders. arXiv preprint arXiv:1905.10427, 2019.
N. Kalchbrenner, E. Elsen, K. Simonyan, S. Noury, N. Casagrande, E. Lockhart, F. Stimberg, A. van den Oord, S. Dieleman, and K.
Kavukcuoglu. Efficient neural audio synthesis. arXiv preprint arXiv:1802.08435, 2018.
H. Kim and A. Mnih. Disentangling by factorising. arXiv preprint arXiv:1802.05983, 2018.
D. P. Kingma and M. Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pp. 3581-3589, 2014.
R. Kubichek. Mel-cepstral distance measure for objective speech quality assessment. In Proceedings of IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, 1993.
F. Locatello, S. Bauer, M. Lucic, G. Raetsch, S. Gelly, B. Scholkopf, and O. Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 4114-4124, Long Beach, California, USA, 2019. PMLR.
M. F. Mathieu, J. J. Zhao, J. Zhao, A. Ramesh, P. Sprechmann, and Y. LeCun. Disentangling factors of variation in deep representation using adversarial training. In Advances in Neural Information Processing Systems, pp. 5040-5048, 2016.
E. Munoz-de Escalona and J. J. Canas. Online measuring of available resources. In H-Workload 2017: The First International Symposium on Human Mental Workload. Dublin Institute of Technology, 2017.
S. Narayanaswamy, B. T. Paige, J. Van de Meent, A. Desmaison, N. Goodman, P. Kohli, F. Wood, and P. Torr. Learning disentangled representations with semi-supervised deep generative models. In Advances in Neural Information Processing Systems, pp. 5925-5935, 2017.
W. Ping, K. Peng, A. Gibiansky, S. O. Arik, A. Kannan, S. Narang, J. Raiman, and J. Miller. Deep voice 3: Scaling text-to-speech with convolutional sequence learning. arXiv preprint arXiv:1710.07654, 2017.
Y. Ren, Y. Ruan, X. Tan, T. Qin, S. Zhao, Z. Zhao, and T. Liu.
FastSpeech: Fast, robust and controllable text-to-speech. arXiv preprint arXiv:1905.09263, 2019.
D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of The International Conference on Machine Learning, 2014.
J. A. Russell. A circumplex model of affect. Journal of Personality and Social Psychology, 1980.
T. Salimans, A. Karpathy, X. Chen, and D. P. Kingma. PixelCNN++: Improving the PixelCNN with discretized logistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517, 2017.
M. Schröder. Emotional speech synthesis: A review. In Seventh European Conference on Speech Communication and Technology, 2001.
J. Shen, R. Pang, R. J. Weiss, M. Schuster, N. Jaitly, Z. Yang, Z. Chen, Y. Zhang, Y. Wang, and R. Skerry-Ryan. Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018.
R. Skerry-Ryan, E. Battenberg, Y. Xiao, Y. Wang, D. Stanton, J. Shor, R. Weiss, R. Clark, and R. A. Saurous. Towards end-to-end prosody transfer for expressive speech synthesis with Tacotron. In Proceedings of the 35th International Conference on Machine Learning, 2018.
Y. Taigman, L. Wolf, A. Polyak, and E. Nachmani. VoiceLoop: Voice fitting and synthesis via a phonological loop. arXiv preprint arXiv:1707.06588, 2017.
A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu. WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.
S. Vasquez and M. Lewis. MelNet: A generative model for audio in the frequency domain. arXiv preprint arXiv:1906.01083, 2019.
V. M. Velichko and N. G. Zagoruyko. Automatic recognition of 200 words. International Journal of Man-Machine Studies, 2(3):223-234, 1970.
V. Wan, C. Chan, T. Kenter, J. Vit, and R. Clark.
CHiVE: Varying prosody in speech synthesis with a linguistically driven dynamic hierarchical conditional variational network. arXiv preprint arXiv:1905.07195, 2019. + +Y. Wang, R. Skerry-Ryan, D. Stanton, Y. Wu, R. J. Weiss, N. Jaitly, Z. Yang, Y. Xiao, Z. Chen, S. Bengio, et al. Tacotron: Towards end-to-end speech synthesis. arXiv preprint arXiv:1703.10135, 2017. +Y. Wang, D. Stanton, Y. Zhang, R. J. Skerry-Ryan, E. Battenberg, J. Shor, Y. Xiao, F. Ren, Y. Jia, and R. A. Saurous. Style tokens: Unsupervised style modeling, control and transfer in end-to-end speech synthesis. arXiv preprint arXiv:1803.09017, 2018. +P. Wu, Z. Ling, L. Liu, Y. Jiang, H. Wu, and L. Dai. End-to-end emotional speech synthesis using style tokens and semi-supervised training. arXiv preprint arXiv:1906.10859, 2019. +H. Zen, V. Dang, R. Clark, Y. Zhang, R. J. Weiss, Y. Jia, Z. Chen, and Y. Wu. LibriTTS: A corpus derived from LibriSpeech for text-to-speech. In Interspeech 2019, 2019. + +A NEURAL NETWORK ARCHITECTURE + +
| Module | Hyperparameters |
| --- | --- |
| Input | Text-normalized phonemes |
| Phoneme embedding | 256-D |
| Pre-net | FC-256-ReLU-Dropout(0.5) → FC-128-ReLU-Dropout(0.5) |
| CBHG text encoder | Conv1D bank: K=16, conv-k-128-ReLU → max pooling with stride=1, width=2 → Conv1D projections: conv-3-128-ReLU → conv-3-128-Linear → highway net: 4 layers of FC-128-ReLU → bidirectional GRU: 128 cells |
| Attention type | 5-component GMM attention w/ softplus (Graves, 2013) |
| Attention RNN | LSTM-256-Zoneout(0.1) → FC-128-tanh |
| Decoder RNN | 2-layer residual-LSTM-256-Zoneout(0.1) → FC-80-Linear |
| Frames per timestep (reduction factor) | 2 |
| WaveRNN | 5 layers DilatedConv1D-512 → 2 layers TransposeConv + ReLU → GRU-768 conditioned on 5 previous samples → FC-768-ReLU → 3-component MoL, 24 kHz sample rate |
| Variational posterior | Spectrogram → 6 conv layers 32-32-64-64-128-128 → LSTM-128 → FC-128-tanh |
| Optimizer | ADAM with learning rate $10^{-3}$, batch size 256 |
| Speaker embedding | 64-D |
+ +Table 4: Summary of the hyperparameters described below. + +Sequence-to-sequence model Our sequence-to-sequence network is modelled on Tacotron (Wang et al., 2017) but uses some modifications introduced in Skerry-Ryan et al. (2018). Input to the model consists of sequences of phonemes produced by a text normalization pipeline rather than character inputs. The CBHG text encoder from Wang et al. (2017) is used to convert the input phonemes into a sequence of text embeddings. The phoneme inputs are converted to learned 256-dimensional embeddings and passed through a pre-net composed of two fully connected ReLU layers (with 256 and 128 units, respectively), with dropout of 0.5 applied to the output of each layer, before being fed to the encoder. For multi-speaker models, a learned embedding for the target speaker is broadcast-concatenated to the output of the text encoder. The attention module uses a single LSTM layer with 256 units and zoneout of 0.1 followed by an MLP with 128 tanh hidden units to compute parameters for the monotonic 5-component GMM attention window. We use the GMMv2b attention mechanism described in Battenberg et al. (2020). Instead of using the exponential function to compute the shift and scale parameters of the GMM components as in Graves (2013), GMMv2b uses the softplus function, and also adds initial biases to these parameters, which we found leads to faster alignment and more stable optimization. The attention weights predicted by the attention network are used to compute a weighted sum of the output of the text encoder, producing a context vector. The context vector is concatenated with the output of the attention LSTM layer before being passed to the first decoder LSTM layer. The autoregressive decoder module consists of 2 LSTM layers, each with 256 units, zoneout of 0.1, and residual connections between the layers. 
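The monotonic GMM attention window described above can be sketched numerically. The following is a minimal, hypothetical NumPy sketch (function and variable names are ours): it shows the generic mechanism (softmax-normalized mixture weights, means that only move forward via softplus-constrained shifts, and softplus-constrained positive scales), not the exact GMMv2b parameterization with initial biases from Battenberg et al. (2020).

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def gmm_attention_step(prev_mu, raw_params, num_positions):
    """One decoder step of monotonic GMM attention (illustrative sketch only).

    raw_params: (3, K) array of unnormalized mixture weights, shift deltas,
    and scales for K Gaussian components, as an MLP head would emit them.
    """
    w_raw, delta_raw, sigma_raw = raw_params
    w = np.exp(w_raw) / np.exp(w_raw).sum()   # softmax over mixture weights
    mu = prev_mu + softplus(delta_raw)        # positive shifts -> monotonic means
    sigma = softplus(sigma_raw)               # positive scales
    j = np.arange(num_positions)[:, None]     # encoder positions 0..N-1
    phi = np.exp(-0.5 * ((j - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    alpha = (w * phi).sum(axis=1)             # attention weight per encoder position
    return alpha, mu

# toy usage: 5 components attending over a 20-step encoder output
rng = np.random.default_rng(0)
alpha, mu = gmm_attention_step(np.zeros(5), rng.normal(size=(3, 5)), num_positions=20)
```

Because the component means can only move forward at each decoder step, the attention window is monotonic by construction, which is what stabilizes alignment for long utterances.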
The spectrogram output is produced using a linear layer on top of the 2 LSTM layers, and we use a reduction factor of 2, meaning we predict two spectrogram frames for each decoder step. The decoder is fed the last frame of its most recent prediction (or the previous ground truth frame during training) and the current context as computed by the attention module. Before being fed to the decoder, the previous prediction is passed through a pre-net with the same structure as the one used before the text encoder above, but with its own parameters. + +CBHG text encoder We reuse the CBHG text encoder introduced in Wang et al. (2017). The text encoder consists of a bank of 1-D convolutional filters, followed by highway networks and a bidirectional gated recurrent unit (GRU) recurrent neural net (RNN). The input sequence is first convolved with $K$ sets of 1-D convolutional filters, where the $k$ -th set contains $C_k$ filters of width $k$ . The convolution outputs are stacked together and further max pooled, preserving time. As in the original paper we use a stride of 1 to preserve the original time resolution. We further pass the processed sequence to a few fixed-width 1-D convolutions, whose outputs are added to the original input sequence via residual connections. Batch normalization is used for all convolutional layers. The convolution outputs are fed into a multi-layer highway network to extract high-level features. Finally, we stack a bidirectional GRU RNN on top to extract sequential features from both forward and backward context. + +Variational posteriors The variational distributions $q(z_{s}|x,y)$ and $q(z_{u}|x,y,z_{s})$ are both structured as diagonal Gaussian distributions whose mean and variance are parameterized by neural networks. For discrete supervision we replace $q(z_{s}|x,y)$ by a categorical distribution and use the same network to output just the mean. 
The input to the distribution starts from the mel spectrogram $x$ and passes it through a stack of 6 convolutional layers, each using ReLU non-linearities, 3x3 filters, 2x2 stride, and batch normalization. The 6 layers have 32, 32, 64, 64, 128, and 128 filters, respectively. The output of this convolution stack is fed into a unidirectional LSTM with 128 units. We pass the final output of this LSTM (and potentially vectors describing the text and/or speaker) through an MLP with 128 tanh hidden units to produce the parameters of the diagonal Gaussian posterior which we sample from. All but the last linear layer of these networks are shared between the two distributions $q(z_{s}|x,y)$ and $q(z_{u}|x,y,z_{s})$ . The resulting sample is broadcast-concatenated to the output of the text encoder. In our experiments $z_{u}$ is always 32-dimensional and $z_{s}$ is either a one-hot vector across 6 classes or a 1-dimensional continuous value. + +Conditional inputs When providing information about the text to the variational posterior, we pass the sequence of text embeddings produced by the text encoder to a unidirectional RNN with 128 units and use its final output as a fixed-length text summary that is passed to the posterior MLP. Speaker information is passed to the posterior MLP via a learned speaker embedding. + +WaveRNN We used a WaveRNN model similar to that described in Kalchbrenner et al. (2018) as our vocoder. Our WaveRNN uses a discretized mixture of logistics output as described in Salimans et al. (2017) instead of the dual softmax from that paper, and conditions on 5 previous samples at each step instead of only 1. We trained the network to map from synthesized mel spectrograms to waveforms, training on 900 sample windows. A conditioning stack of dilated convolutions and transpose convolutions is applied to the input spectrogram before tiling to upsample to the audio sample rate. 
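The diagonal Gaussian posteriors above are sampled with the reparameterization trick so that gradients can flow through the sampling step. A minimal NumPy sketch, assuming two hypothetical linear heads that map the 128-D summary vector to the mean and log-variance (the names, shapes, and random weights here are illustrative, not the paper's trained parameters):

```python
import numpy as np

def sample_posterior(h, heads, rng):
    """Reparameterized sample z = mu + sigma * eps from a diagonal Gaussian
    posterior (sketch; `heads` holds hypothetical linear-head weights).

    h: 128-D summary vector from the posterior's conv + LSTM stack.
    """
    W_mu, b_mu, W_logvar, b_logvar = heads
    mu = h @ W_mu + b_mu
    logvar = h @ W_logvar + b_logvar
    eps = rng.standard_normal(mu.shape)   # noise drawn independently of parameters,
    z = mu + np.exp(0.5 * logvar) * eps   # so gradients flow through mu and logvar
    return z, mu, logvar

rng = np.random.default_rng(0)
h = rng.standard_normal(128)
dim = 32  # z_u is 32-dimensional in the paper
heads = (rng.standard_normal((128, dim)) * 0.01, np.zeros(dim),
         rng.standard_normal((128, dim)) * 0.01, np.zeros(dim))
z, mu, logvar = sample_posterior(h, heads, rng)
```

At evaluation time one can simply take `mu` instead of a sample, which is how the mean of $z_u$ is used for the MOS samples in appendix B.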
+ +# B EVALUATION + +Mel spectrograms The mel spectrograms the model predicts are computed from $24\mathrm{kHz}$ audio using a frame size of $50~\mathrm{ms}$ , a hop size of $12.5\mathrm{ms}$ , an FFT size of 2048, and a Hann window. From the FFT energies, we compute 80 mel bins distributed between $80\mathrm{Hz}$ and $12\mathrm{kHz}$ . + +MCD-DTW To compute mel cepstral distortion (MCD) (Kubichek, 1993), we use the same mel spectrogram parameters described above and take the discrete-cosine-transform to compute the first 13 MFCCs (not including the 0th coefficient). The MCD between two frames is the Euclidean distance between their MFCC vectors. Then we use the dynamic time warping (DTW) algorithm (Velichko & Zagoruyko, 1970) (with a warp penalty of 1.0) to find an alignment between two spectrograms that produces the minimum MCD cost (including the total warp penalty). We report the average per-frame MCD-DTW. + +Affect classifier The affect classifier has a very similar structure to the variational posterior. The input to the classifier starts from the mel spectrogram $x$ and passes it through a stack of 6 convolutional layers, each using ReLU non-linearities, 3x3 filters, 2x2 stride, and batch normalization. The 6 layers have 32, 32, 64, 64, 128, and 128 filters, respectively. The output of this convolution stack is + +fed into a unidirectional LSTM with 128 units. The final output of the LSTM is then passed through a softmax non-linearity to get logits over the training classes. We use the same data splits described in section 3 to train and evaluate the classifier. The classifier is tuned on the validation set achieving $84.33\%$ classification accuracy, generalizing well to the test set with $83.94\%$ accuracy. + +Mean opinion scores We use a human rating service similar to Amazon's Mechanical Turk, with a large pool of English speakers to collect MOS evaluations. The MOS template is shown in figure 5. 
A human rater is presented with a single speech sample and is asked to rate perceived naturalness on a scale of 1 to 5, where 1 is "Bad" and 5 is "Excellent". We selected the utterances of one male and one female speaker in our test set, totalling 371 utterances to evaluate. For each sample, we collect 1 rating, and no rater is used for more than 6 items in a single evaluation set. In total, 270 unique raters completed the 6 evaluation sets presented in table 2. Since raters are randomly selected for each set, some raters have assessed multiple methods. Across the 6 evaluation sets, the average and median number of ratings per rater were 8.24 and 6, respectively. To analyze the data from these subjective tests, we average the scores and compute $95\%$ confidence intervals. Natural human speech is typically rated around 4.5. Samples used for MOS from our model were drawn using the mean of $z_{u}$ , whilst sampling $z_{s}$ . + +# Instruction + +# IMPORTANT: + +In this project, you will listen to audio samples. Please release this task if any of the following is true: +1) You do not have headphones +2) You think you do not have good listening ability +3) There is considerable background noise (street noise, loud fan/air-conditioner, open TV/radio, people talking, etc.). +4) For any reason, you can't hear the audio samples + +# AUDIO DEVICE (Headphones): + +1) There are many types of headphones. If you have more than one type, this is the preferred order: (a) closed-back headphones, (b) open-back headphones, (c) any other type of headphones. +If you are not sure which type you have, please see this Wikipedia article. +2) Please set the volume of your audio device to a comfortable level. + +In this task, we would like you to listen to a speech sentence and then choose a score for the audio sample you've just heard. This score should reflect your opinion of how natural or unnatural the sentence sounded. 
You should not judge the grammar or the content of the sentence, just how it sounds. + +# Please: + +1) Listen to each sample at least twice, with at least a one sec break between them. +2) Use the given 5-point scale to rate the naturalness of the speech sample. The following table provides a description of each naturalness level of the scale, as well as one or more reference speech example(s) for each level. Review the table and listen to all of the references. Important note: you do not need to listen to the references if you have listened to them before. + +In-Between Ratings: Please note that you are allowed to assign "in-between" ratings (for example, a rating between "Excellent and Good"). Feel free to use them if you think the quality of the speech sample falls between two levels. + +Naturalness Scale: + +
| Score | Naturalness | Description | Reference |
| --- | --- | --- | --- |
| 5.0 | Excellent | Completely natural speech | Listen |
| 4.0 | Good | Mostly natural speech | Listen |
| 3.0 | Fair | Equally natural and unnatural speech | Listen |
| 2.0 | Poor | Mostly unnatural speech | Listen |
| 1.0 | Bad | Completely unnatural speech | Listen |
+ +# How are you listening to the speech sample? + +Headphones, with no noise in the background. I am listening to the speech sample using headphones and there is no noise around me (people talking, music playing, air-conditioners, and fans, etc.). + +Headphones, with some low-level noise in the background. I am listening to the speech sample using headphones and there is some low-level noise around me (people talking, music playing, air-conditioners, and fans, etc.). + +Audio speakers or other. + +# Speech sample (please listen at least twice) + +![](images/9a734bd2c29fbba4622f47bbd42c0c9ee47bfbd841c8ce564d922718ce19112f.jpg) +Figure 5: Mean opinion score (MOS) evaluation template. For each utterance, the human raters assign a 1–5 score of the perceived naturalness, with 1 being "Bad" and 5 being "Excellent". + +Subjective affect control evaluation We use the same rater pool and the same set of 371 utterances used for the MOS evaluations. The A/B template is shown in figure 6. For each utterance, the human rater is presented with a pair of utterances to choose the one that better conveys the target emotion (e.g., happy in the figure). Both utterances are generated with the same text. To evaluate the control over valence, we present the baseline (i.e., no control) against utterances generated in a specific valence category (angry, happy, or sad). To evaluate the control over arousal, we present samples generated at low arousal against samples generated at high arousal, and ask the rater to choose the utterance that is more vocally aroused. We use the mean of $z_{u}$ to generate all the samples. + +# Instructions + +# IMPORTANT: + +This task requires you to listen to audio samples using headphones in a quiet environment. + +Please release this task if: + +1. You do not have headphones, or +2. 
There is background noise, or +3. You think you do not have good listening ability, or +4. For any reason, you can't hear the audio samples. + +In this task, your job is to listen to two different audio samples containing speech. The speech samples are intended to convey a particular speaking style. The text spoken will be the same for both speech samples and both samples are intended to convey the same described style. Please listen to both samples before selecting a rating. If you can't tell the difference, make a quick intuitive guess. + +# Listening conditions + +How are you listening to these speech samples? + +Headphones, with no noise in the background +Headphones, with some low-level noise in the background +Other + +IMPORTANT: If you don't have headphones, please release this task for the reason, "I do not meet the upfront requirements for this task." + +Tasks + +
| Instructions | Please listen to both samples before selecting a rating. If you can't tell the difference, make a quick intuitive guess. |
| --- | --- |
| Emotion the speech is intended to convey | Happy? |
| Speech samples | 0:00 / 0:02 · 0:00 / 0:02 |
| Which side sounds more Happy? | Better / Better |
Figure 6: A/B affect control evaluation template. The emotion label (Happy in the figure) varies depending on the task. + +# C SAMPLE SPECTROGRAMS + +Controlling affect Table 5 shows the effect of varying the valence and arousal latent variables on the spectrogram and F0 track. We can see that a low valence for sadness corresponds to the flattest F0 track, and high arousal manifests in higher F0 values and variations. + +Controlling speaking rate and pitch variations Table 6 shows the effect of varying the speaking-rate and F0-variation control variables on a sample spectrogram and F0 track. When controlling speaking rate (first column), the duration decreases as we increase the input speaking-rate control, while the F0 variation remains stable. When controlling the F0 variation (second column), the pitch dynamic range increases, while the duration remains constant, which demonstrates controllability and also some degree of disentanglement. + +# D REPRODUCING RESULTS ON LIBRITTS PUBLIC DATASET + +To verify the reproducibility of our results on a public dataset, we trained models to control speaking rate and F0 variation on the clean subset of the LibriTTS dataset (Zen et al., 2019). We only use the utterances below 5 seconds, which amounts to 62 hours of data. We did no tuning on this dataset, directly using the hyperparameters from our internal dataset. Figures 7a and 7b show the + +![](images/86e9268843a1cd93ad4073c90037f7228f7fc60b7169d78519867a7ce77e53aa.jpg) +Table 5: Sample spectrogram and F0 track plots, generated by varying affect labels, with valence on the y-axis and arousal on the x-axis. + +![](images/09e7fffe900527f17c808744f897c85d72e2691ff00fcdf93b2918d6c2774cc6.jpg) + +![](images/0cbfd38705e7519370872940753e347f9e673d773db5b5fdae2e318c6dc8c28c.jpg) +(a) Speaking-rate error as a function of supervision level. 
+(b) F0 variation error as a function of supervision level. +Figure 7: Objective controllability of speaking rate and F0 variation, evaluated at multiple supervision levels on the LibriTTS (Zen et al., 2019) dataset. $100\%$ supervision corresponds to 62 hours of supervised data. + +errors of producing the desired speaking rate and F0 standard deviation, which generally go down as a function of supervision level, with the exception of $10\%$ supervision for controlling F0 variation. Given that this is a lower-quality dataset with many more speakers and much less data per speaker, and that we did zero hyperparameter tuning on it, this result is very encouraging. + +![](images/93259abd13a0a77b7bd60ccab26a5698ad7353ef22b46d8a1fcef51aeb426ef5.jpg) +Table 6: Sample spectrogram and F0 track plots, generated by varying the speaking rate (first column) and F0 variation (second column). We use a standard normal prior for these factors; the table shows the control factor varied from $-5\sigma$ to $5\sigma$ , demonstrating controllability, interpolation and extrapolation of conditional generation, and the disentanglement of these factors. 
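The MCD-DTW metric used in the objective evaluations of appendix B combines per-frame Euclidean distances over MFCC vectors with a DTW alignment under a warp penalty of 1.0. The following is a minimal dynamic-programming sketch of that idea, assuming the penalty is charged per non-diagonal move and the total cost is averaged over the longer sequence; the exact bookkeeping in the paper's evaluation code may differ.

```python
import numpy as np

def mcd_dtw(mfcc_a, mfcc_b, warp_penalty=1.0):
    """Average per-frame MCD after DTW alignment (illustrative sketch).

    mfcc_a, mfcc_b: (T, 13) arrays of MFCCs (0th coefficient excluded).
    Diagonal moves are free; horizontal/vertical moves pay warp_penalty.
    """
    Ta, Tb = len(mfcc_a), len(mfcc_b)
    # pairwise Euclidean distances between all frame pairs
    d = np.linalg.norm(mfcc_a[:, None, :] - mfcc_b[None, :, :], axis=-1)
    acc = np.full((Ta, Tb), np.inf)
    acc[0, 0] = d[0, 0]
    for i in range(Ta):
        for j in range(Tb):
            if i == 0 and j == 0:
                continue
            best = min(
                acc[i - 1, j - 1] if i > 0 and j > 0 else np.inf,
                acc[i - 1, j] + warp_penalty if i > 0 else np.inf,
                acc[i, j - 1] + warp_penalty if j > 0 else np.inf,
            )
            acc[i, j] = d[i, j] + best
    return acc[-1, -1] / max(Ta, Tb)  # average per-frame cost, penalties included

ref = np.arange(52, dtype=float).reshape(4, 13)  # 4 frames of 13 MFCCs
same = mcd_dtw(ref, ref)         # == 0.0: identical sequences align diagonally
shorter = mcd_dtw(ref, ref[:2])  # length mismatch forces penalized warp moves
```

Unlike plain per-frame MCD, the DTW alignment makes the metric robust to small timing differences between synthesized and reference spectrograms, which matters when comparing models with different durations.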
+ +![](images/e1d233833c50356e93425628f1b112d8815cb3272fbd488735eac008bde0c7e5.jpg) \ No newline at end of file diff --git a/semisupervisedgenerativemodelingforcontrollablespeechsynthesis/images.zip b/semisupervisedgenerativemodelingforcontrollablespeechsynthesis/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..7c807293ba34342415266dcd6ef9aaf5f22ac224 --- /dev/null +++ b/semisupervisedgenerativemodelingforcontrollablespeechsynthesis/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f05582a2c4b7f1d7441b2d9d3d20305bcf73fa0bd85b1b461265f53445e0dcce +size 725665 diff --git a/semisupervisedgenerativemodelingforcontrollablespeechsynthesis/layout.json b/semisupervisedgenerativemodelingforcontrollablespeechsynthesis/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d5f5a2286302c6775a2fd3a7f24cb11af8980047 --- /dev/null +++ b/semisupervisedgenerativemodelingforcontrollablespeechsynthesis/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:06338aacd710d4ac09915d9416b42ff4713f69edc4c0e84c09cade43326e7bf1 +size 505019 diff --git a/sharingknowledgeinmultitaskdeepreinforcementlearning/b62a8357-d543-42c0-a967-ee44f023de4a_content_list.json b/sharingknowledgeinmultitaskdeepreinforcementlearning/b62a8357-d543-42c0-a967-ee44f023de4a_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..a7c76f21661b41c05b377db7fe6b52db28d1704e --- /dev/null +++ b/sharingknowledgeinmultitaskdeepreinforcementlearning/b62a8357-d543-42c0-a967-ee44f023de4a_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8ab0b6c1cbff1112574f51696cbd129da600295075a1b2bf6c1965d1a41616d2 +size 125003 diff --git a/sharingknowledgeinmultitaskdeepreinforcementlearning/b62a8357-d543-42c0-a967-ee44f023de4a_model.json b/sharingknowledgeinmultitaskdeepreinforcementlearning/b62a8357-d543-42c0-a967-ee44f023de4a_model.json new file mode 
100644 index 0000000000000000000000000000000000000000..43a5d483ba0c74ec36991e2bdb105b6dabf62451 --- /dev/null +++ b/sharingknowledgeinmultitaskdeepreinforcementlearning/b62a8357-d543-42c0-a967-ee44f023de4a_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fee984e71192182b260a603e71c3e023c9d18e208df3af9c275a22ba5865b0c0 +size 146634 diff --git a/sharingknowledgeinmultitaskdeepreinforcementlearning/b62a8357-d543-42c0-a967-ee44f023de4a_origin.pdf b/sharingknowledgeinmultitaskdeepreinforcementlearning/b62a8357-d543-42c0-a967-ee44f023de4a_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..e146a208826cef1d8f81d931d8096bfe7f695e44 --- /dev/null +++ b/sharingknowledgeinmultitaskdeepreinforcementlearning/b62a8357-d543-42c0-a967-ee44f023de4a_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:76da173caf6628a98b81952997778a672bbb698367574f7c0f77d04e77e8a70c +size 594176 diff --git a/sharingknowledgeinmultitaskdeepreinforcementlearning/full.md b/sharingknowledgeinmultitaskdeepreinforcementlearning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..db815ce7f3230ae975331ebc0aec6ce20c72b82a --- /dev/null +++ b/sharingknowledgeinmultitaskdeepreinforcementlearning/full.md @@ -0,0 +1,526 @@ +# SHARING KNOWLEDGE IN MULTI-TASK DEEP REINFORCEMENT LEARNING + +Carlo D'Eramo & Davide Tateo + +Department of Computer Science + +TU Darmstadt, IAS + +Hochschulstraße 10, 64289, Darmstadt, Germany + +{carlo.deramo, davide.tateo}@tu-darmstadt.de + +Andrea Bonarini & Marcello Restelli + +Politecnico di Milano, DEIB + +Piazza Leonardo da Vinci 32, 20133, Milano + +{andrea.bonarini,marcello.restelli}@polimi.it + +Jan Peters + +TU Darmstadt, IAS + +Hochschulstraße 10, 64289, Darmstadt, Germany + +Max Planck Institute for Intelligent Systems + +Max-Planck-Ring 4, 72076, Tübingen, Germany + +jan.peters@tu-darmstadt.de + +# ABSTRACT + +We study the benefit of sharing representations among tasks to enable the effective use of deep neural networks in Multi-Task Reinforcement Learning. We leverage the assumption that learning from different tasks sharing common properties helps to generalize the knowledge across them, resulting in more effective feature extraction compared to learning a single task. Intuitively, the resulting set of features offers performance benefits when used by Reinforcement Learning algorithms. We prove this by providing theoretical guarantees that highlight the conditions under which it is convenient to share representations among tasks, extending the well-known finite-time bounds of Approximate Value-Iteration to the multi-task setting. In addition, we complement our analysis by proposing multi-task extensions of three Reinforcement Learning algorithms that we empirically evaluate on widely used Reinforcement Learning benchmarks, showing significant improvements over the single-task counterparts in terms of sample efficiency and performance. + +# 1 INTRODUCTION + +Multi-Task Learning (MTL) ambitiously aims to learn multiple tasks jointly instead of learning them separately, leveraging the assumption that the considered tasks have common properties which can be exploited by Machine Learning (ML) models to generalize the learning of each of them. For instance, the features extracted in the hidden layers of a neural network trained on multiple tasks have the advantage of being a general representation of structures common to each other. 
This translates into an effective way of learning multiple tasks at the same time, but it can also improve the learning of each individual task compared to learning them separately (Caruana, 1997). Furthermore, the learned representation can be used to perform Transfer Learning (TL), i.e. using it as preliminary knowledge to learn a new similar task, resulting in more effective and faster learning than learning the new task from scratch (Baxter, 2000; Thrun & Pratt, 2012). + +The same benefits of extraction and exploitation of common features among tasks achieved in MTL can be obtained in Multi-Task Reinforcement Learning (MTRL) when training a single agent on multiple Reinforcement Learning (RL) problems with common structures (Taylor & Stone, 2009; Lazaric, 2012). In particular, in MTRL an agent can be trained on multiple tasks in the same domain, e.g. riding a bicycle or cycling while going towards a goal, or on different but similar domains, e.g. balancing a pendulum or balancing a double pendulum. Considering recent advances in Deep Reinforcement Learning (DRL) and the resulting increase in the complexity of experimental benchmarks, the use of Deep Learning (DL) models, e.g. deep neural networks, has become a popular and effective way to extract common features among tasks in MTRL algorithms (Rusu et al., 2015; Liu et al., 2016; Higgins et al., 2017). However, despite the high representational capacity of DL models, the extraction of good features remains challenging. For instance, the performance of the learning process can degrade when unrelated tasks are used together (Caruana, 1997; Baxter, 2000); another detrimental issue may occur when the training of a single model is not balanced properly among multiple tasks (Hessel et al., 2018). + +Recent developments in MTRL achieve significant results in feature extraction by means of algorithms specifically developed to address these issues. 
While some of these works rely on a single deep neural network to model the multi-task agent (Liu et al., 2016; Yang et al., 2017; Hessel et al., 2018; Wulfmeier et al., 2019), others use multiple deep neural networks, e.g. one for each task and another for the multi-task agent (Rusu et al., 2015; Parisotto et al., 2015; Higgins et al., 2017; Teh et al., 2017). Intuitively, achieving good results in MTRL with a single deep neural network is more desirable than using many of them, since the training time is likely much less and the whole architecture is easier to implement. In this paper we study the benefits of shared representations among tasks. We theoretically motivate the intuitive effectiveness of our method, deriving theoretical guarantees that exploit the theoretical framework provided by Maurer et al. (2016), in which the authors present upper bounds on the quality of learning in MTL when extracting features for multiple tasks in a single shared representation. The significance of this result is that the cost of learning the shared representation decreases with a factor $\mathcal{O}(1 / \sqrt{T})$ , where $T$ is the number of tasks, for many function-approximator hypothesis classes. The main contribution of this work is twofold. + +1. We derive upper confidence bounds for Approximate Value-Iteration (AVI) and Approximate Policy-Iteration (API) (Farahmand, 2011) in the MTRL setting, and we extend the approximation error bounds in Maurer et al. (2016) to the case of multiple tasks with different dimensionalities. Then, we show how to combine these results, resulting in, to the best of our knowledge, the first proposed extension of the finite-time bounds of AVI/API to MTRL. Despite being an extension of previous works, we derive these results to justify our approach, showing how the error propagation in AVI/API can theoretically benefit from learning multiple tasks jointly. +2. 
We leverage these results by proposing a neural network architecture, for which these bounds hold under minor assumptions, that allows us to learn multiple tasks with a single regressor extracting a common representation. We show empirical evidence of the consequences of our bounds by means of a variant of Fitted $Q$ -Iteration (FQI) (Ernst et al., 2005), based on our shared network and for which our bounds apply, that we call Multi Fitted $Q$ -Iteration (MFQI). Then, we perform an empirical evaluation in challenging RL problems proposing multi-task variants of the Deep $Q$ -Network (DQN) (Mnih et al., 2015) and Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015) algorithms. These algorithms are practical implementations of the more general AVI/API framework, designed to solve complex problems. In this case, the bounds apply to these algorithms only under some assumptions, e.g. a stationary sampling distribution. The outcome of the empirical analysis is consistent with the theoretical results, showing significant performance improvements compared to the single-task version of the algorithms in various RL problems, including several MuJoCo (Todorov et al., 2012) domains. + +# 2 PRELIMINARIES + +Let $B(\mathcal{X})$ be the space of bounded measurable functions w.r.t. the $\sigma$ -algebra $\sigma_{\mathcal{X}}$ , and similarly $B(\mathcal{X}, L)$ be the same bounded by $L < \infty$ . + +A Markov Decision Process (MDP) is defined as a 5-tuple $\mathcal{M} = \langle \mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R}, \gamma \rangle$ , where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $\mathcal{P}: \mathcal{S} \times \mathcal{A} \to \mathcal{S}$ is the transition distribution where $\mathcal{P}(s'|s,a)$
A deterministic policy $\pi$ maps, for each state, the action to perform: $\pi : \mathcal{S} \to \mathcal{A}$ . Given a policy $\pi$ , the value of an action $a$ in a state $s$ represents the expected discounted cumulative reward obtained by performing $a$ in $s$ and following $\pi$ thereafter: $Q^{\pi}(s,a) \triangleq \mathbb{E}[\sum_{k=0}^{\infty} \gamma^{k} r_{i+k+1} |s_{i} = s, a_{i} = a, \pi]$ , where $r_{i+1}$ is the reward obtained after the $i$ -th transition. The expected discounted cumulative reward is maximized by following the optimal policy $\pi^{*}$ which is the one that determines the optimal action values, i.e., the ones that satisfy the Bellman optimality equation (Bellman, 1954): $Q^{*}(s,a) \triangleq \int_{\mathcal{S}} \mathcal{P}(s'|s,a) [\mathcal{R}(s,a,s') + \gamma \max_{a'} Q^{*}(s',a')] ds'$ . The solution of the Bellman optimality equation is the fixed point of the optimal Bellman operator $\mathcal{T}^{*}: B(\mathcal{S} \times \mathcal{A}) \to B(\mathcal{S} \times \mathcal{A})$ defined as $(\mathcal{T}^{*}Q)(s,a) \triangleq \int_{\mathcal{S}} \mathcal{P}(s'|s,a) [\mathcal{R}(s,a,s') + \gamma \max_{a'} Q(s',a')] ds'$ . In the MTRL setting, there are multiple MDPs $\mathcal{M}^{(t)} = < S^{(t)}, A^{(t)}, P^{(t)}, R^{(t)}, \gamma^{(t)} >$ where $t \in \{1, \dots, T\}$ and $T$ is the number of MDPs. For each MDP $\mathcal{M}^{(t)}$ , a deterministic policy $\pi_{t}: S^{(t)} \to A^{(t)}$ induces an action-value function $Q_{t}^{\pi_{t}}(s^{(t)}, a^{(t)}) = \mathbb{E}[\sum_{k=0}^{\infty} \gamma^{k} r_{i+k+1}^{(t)} |s_{i} = s^{(t)}, a_{i} = a^{(t)}, \pi_{t}]$ . In this setting, the goal is to maximize the sum of the expected cumulative discounted reward of each task. + +In our theoretical analysis of the MTRL problem, the complexity of representation plays a central role. As done in Maurer et al. (2016), we consider the Gaussian complexity, a variant of the well-known Rademacher complexity, to measure the complexity of the representation. 
Given a set $\bar{\mathbf{X}}\in \mathcal{X}^{Tn}$ of $n$ input samples for each task $t\in \{1,\dots ,T\}$ , and a class $\mathcal{H}$ of vector-valued functions with components $h_k$ , $k\in \{1,\ldots ,K\}$ , the Gaussian complexity of a random set $\mathcal{H}(\bar{\mathbf{X}}) = \{(h_k(X_{ti})):h\in \mathcal{H}\} \subseteq \mathbb{R}^{KTn}$ is defined as follows:

$$
G \left(\mathcal {H} (\bar {\mathbf {X}})\right) = \mathbb {E} \left[ \sup _ {h \in \mathcal {H}} \sum_ {t k i} \gamma_ {t k i} h _ {k} \left(X _ {t i}\right) \mid X _ {t i} \right], \tag {1}
$$

where $\gamma_{tki}$ are independent standard normal variables. We also need to define the following quantity, taken from Maurer (2016): let $\gamma$ be a vector of $m$ standard normal random variables, and let $f\in \mathcal{F}: Y\to \mathbb{R}^m$ , with $Y\subseteq \mathbb{R}^n$ ; we define

$$
O (\mathcal {F}) = \sup _ {y, y ^ {\prime} \in Y, y \neq y ^ {\prime}} \mathbb {E} \left[ \sup _ {f \in \mathcal {F}} \frac {\langle \gamma , f (y) - f \left(y ^ {\prime}\right) \rangle}{\| y - y ^ {\prime} \|} \right]. \tag {2}
$$

Equation 2 can be viewed as a Gaussian average of Lipschitz quotients, and appears in the bounds provided in this work. Finally, we define $L(\mathcal{F})$ as the upper bound of the Lipschitz constants of all the functions $f$ in the function class $\mathcal{F}$ .

# 3 THEORETICAL ANALYSIS

The following theoretical study starts from the derivation of theoretical guarantees for MTRL in the AVI framework, extending the results of Farahmand (2011) to the MTRL scenario. Then, to bound the approximation error term in the AVI bound, we extend the result described in Maurer et al. (2016) to MTRL. As we discuss, the resulting bounds described in this section clearly show the benefit of sharing representation in MTRL. To the best of our knowledge, this is the first general result for MTRL; previous works have focused on finite MDPs (Brunskill & Li, 2013) or linear models (Lazaric & Restelli, 2011).
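To make Equation (1) concrete, the sketch below estimates by Monte Carlo the Gaussian complexity of a toy class of Frobenius-norm-bounded linear maps, for which the supremum over $h$ has a closed form; all sizes and the class itself are illustrative choices, not taken from the paper:

```python
import math
import random

def gaussian_complexity_linear(X, B, K, n_mc=200, seed=0):
    """Monte Carlo estimate of Eq. (1) for the toy linear class
    H = {x -> Wx : ||W||_F <= B}.

    X[t][i] is the i-th input (a list of J features) of task t. For this
    class, sup_W sum_{tki} g_{tki} (W x_{ti})_k = B * ||M||_F, where
    M[k][j] = sum_{ti} g_{tki} * x_{ti}[j], so each Monte Carlo sample of
    the supremum is computed exactly.
    """
    rng = random.Random(seed)
    T, n, J = len(X), len(X[0]), len(X[0][0])
    total = 0.0
    for _ in range(n_mc):
        # M accumulates the Gaussian-weighted sum of inputs per component k
        M = [[0.0] * J for _ in range(K)]
        for t in range(T):
            for i in range(n):
                for k in range(K):
                    g = rng.gauss(0.0, 1.0)
                    for j in range(J):
                        M[k][j] += g * X[t][i][j]
        total += B * math.sqrt(sum(v * v for row in M for v in row))
    return total / n_mc

# Toy multi-sample: T=3 tasks, n=5 samples each, J=2 features
rng = random.Random(1)
X = [[[rng.uniform(-1, 1) for _ in range(2)] for _ in range(5)] for _ in range(3)]
G = gaussian_complexity_linear(X, B=1.0, K=4)
```

By Jensen's inequality, the estimate is bounded above by $B\sqrt{K\sum_{ti}\|x_{ti}\|^2}$, which gives a quick sanity check on the returned value.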

# 3.1 MULTI-TASK REPRESENTATION LEARNING

The multi-task representation learning problem consists of simultaneously learning a set of $T$ tasks $\mu_t$ , modeled as probability measures over the space of the possible input-output pairs $(x, y)$ , with $x \in \mathcal{X}$ and $y \in \mathbb{R}$ , where $\mathcal{X}$ is the input space. Let $w \in \mathcal{W}: \mathcal{X} \to \mathbb{R}^J$ , $h \in \mathcal{H}: \mathbb{R}^J \to \mathbb{R}^K$ and $f \in \mathcal{F}: \mathbb{R}^K \to \mathbb{R}$ be functions chosen from their respective hypothesis classes, whose functions are assumed to be Lipschitz continuous. Let $\bar{\mathbf{Z}} = (\mathbf{Z}_1, \ldots, \mathbf{Z}_T)$ be the multi-sample over the set of tasks $\boldsymbol{\mu} = (\mu_1, \ldots, \mu_T)$ , where $\mathbf{Z}_t = (Z_{t1}, \ldots, Z_{tn}) \sim \mu_t^n$ and $Z_{ti} = (X_{ti}, Y_{ti}) \sim \mu_t$ . We can formalize our regression problem as the following minimization problem:

$$
\min \left\{\frac {1}{n T} \sum_ {t = 1} ^ {T} \sum_ {i = 1} ^ {n} \ell \left(f _ {t} \left(h \left(w _ {t} \left(X _ {t i}\right)\right)\right), Y _ {t i}\right): \mathbf {f} \in \mathcal {F} ^ {T}, h \in \mathcal {H}, \mathbf {w} \in \mathcal {W} ^ {T} \right\}, \tag {3}
$$

where we use $\mathbf{f} = (f_1, \dots, f_T)$ , $\mathbf{w} = (w_1, \dots, w_T)$ , and define the minimizers of Equation (3) as $\hat{\mathbf{w}}$ , $\hat{h}$ , and $\hat{\mathbf{f}}$ . We assume that the loss function $\ell: \mathbb{R} \times \mathbb{R} \to [0,1]$ is 1-Lipschitz in the first argument for every value of the second argument. While this assumption may seem restrictive, the result obtained can be easily scaled to the general case: to use the principal result of this section with a generic loss function $\ell'$ , it is possible to use $\ell(\cdot) = \ell'(\cdot) / \epsilon_{\max}$ , where $\epsilon_{\max}$ is the maximum value of $\ell'$ .
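A minimal sketch of the objective in Equation (3), with scalar toy maps $w_t$, $h$, $f_t$ and an illustrative 1-Lipschitz loss clipped to $[0,1]$ (none of the maps or data come from the paper):

```python
def empirical_risk(f, h, w, Z):
    """Empirical multi-task objective of Equation (3).

    Z[t] is a list of (x, y) pairs for task t; f[t] and w[t] are the
    task-specific maps and h the shared one. The loss is the absolute
    error clipped to [0, 1], which is 1-Lipschitz in its first argument.
    """
    loss = lambda pred, y: min(abs(pred - y), 1.0)
    T = len(Z)
    n = len(Z[0])
    total = sum(loss(f[t](h(w[t](x))), y) for t in range(T) for (x, y) in Z[t])
    return total / (n * T)

# Illustrative scalar maps and data for T=2 tasks with n=2 samples each
w = [lambda x: 2.0 * x, lambda x: x - 1.0]   # task-specific input maps w_t
h = lambda z: 0.5 * z                        # shared map h
f = [lambda z: z, lambda z: z + 0.1]         # task-specific output maps f_t
Z = [[(0.0, 0.0), (1.0, 1.0)], [(1.0, 0.0), (2.0, 0.6)]]
risk = empirical_risk(f, h, w, Z)  # average of the four per-sample losses
```

Minimizing this quantity jointly over $\mathbf{f}$, $h$, and $\mathbf{w}$ is exactly the program in Equation (3), here evaluated for one fixed candidate.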
The expected loss over the tasks, given $\mathbf{w}$ , $h$ , and $\mathbf{f}$ , is the task-averaged risk:

$$
\varepsilon_{\mathrm{avg}}(\mathbf{w}, h, \mathbf{f}) = \frac{1}{T} \sum_{t=1}^{T} \mathbb{E}\left[ \ell(f_t(h(w_t(X))), Y) \right]. \tag{4}
$$

The minimum task-averaged risk, given the set of tasks $\pmb{\mu}$ and the hypothesis classes $\mathcal{W}$ , $\mathcal{H}$ and $\mathcal{F}$ , is $\varepsilon_{\mathrm{avg}}^{*}$ , and the corresponding minimizers are $\mathbf{w}^{*}$ , $h^{*}$ and $\mathbf{f}^{*}$ .

# 3.2 MULTI-TASK APPROXIMATE VALUE ITERATION BOUND

We start by considering the bound for the AVI framework which applies to the single-task scenario.

Theorem 1. (Theorem 3.4 of Farahmand (2011)) Let $K$ be a positive integer, and $Q_{\max} \leq \frac{R_{\max}}{1 - \gamma}$ . Then for any sequence $(Q_k)_{k=0}^K \subset B(\mathcal{S} \times \mathcal{A}, Q_{\max})$ and the corresponding sequence $(\varepsilon_k)_{k=0}^{K-1}$ , where $\varepsilon_k = \| Q_{k+1} - \mathcal{T}^* Q_k\|_\nu^2$ , we have:

$$
\| Q^{*} - Q^{\pi_{K}} \|_{1,\rho} \leq \frac{2\gamma}{(1-\gamma)^{2}} \left[ \inf_{r \in [0,1]} C_{VI,\rho,\nu}^{\frac{1}{2}}(K; r) \, \mathcal{E}^{\frac{1}{2}}(\varepsilon_{0}, \dots, \varepsilon_{K-1}; r) + \frac{2}{1-\gamma} \gamma^{K} R_{\max} \right], \tag{5}
$$

where

$$
C_{VI,\rho,\nu}(K; r) = \left(\frac{1-\gamma}{2}\right)^{2} \sup_{\pi_{1}^{\prime}, \dots, \pi_{K}^{\prime}} \sum_{k=0}^{K-1} \alpha_{k}^{2(1-r)} \left[ \sum_{m \geq 0} \gamma^{m} \left( c_{VI_{1},\rho,\nu}(m, K-k; \pi_{K}^{\prime}) + c_{VI_{2},\rho,\nu}(m+1; \pi_{k+1}^{\prime}, \dots, \pi_{K}^{\prime}) \right) \right]^{2}, \tag{6}
$$

with $\mathcal{E}(\varepsilon_0,\ldots ,\varepsilon_{K - 1};r) = \sum_{k = 0}^{K - 1}\alpha_k^{2r}\varepsilon_k$ , and the two coefficients $c_{VI_1,\rho ,\nu},c_{VI_2,\rho ,\nu}$ , the distributions $\rho$ and $\nu$ , and the series $\alpha_{k}$ defined as in Farahmand (2011).

In the multi-task scenario, let the average approximation error across tasks be:

$$
\varepsilon_{\mathrm{avg},k}\left(\hat{\mathbf{w}}_{k}, \hat{h}_{k}, \hat{\mathbf{f}}_{k}\right) = \frac{1}{T} \sum_{t=1}^{T} \| Q_{t,k+1} - \mathcal{T}_{t}^{*} Q_{t,k} \|_{\nu}^{2}, \tag{7}
$$

where $Q_{t,k + 1} = \hat{f}_{t,k} \circ \hat{h}_k \circ \hat{w}_{t,k}$ , and $\mathcal{T}_{t}^{*}$ is the optimal Bellman operator of task $t$ .

In the following, we extend the AVI bound of Theorem 1 to the multi-task scenario, by computing the average loss across tasks and pushing it inside the average using Jensen's inequality.

Theorem 2. Let $K$ be a positive integer, and $Q_{\text{max}} \leq \frac{R_{\text{max}}}{1 - \gamma}$ .
Then for any sequence $(Q_k)_{k=0}^K \subset B(\mathcal{S} \times \mathcal{A}, Q_{\text{max}})$ and the corresponding sequence $(\varepsilon_{\mathrm{avg},k})_{k=0}^{K-1}$ , where $\varepsilon_{\mathrm{avg},k} = \frac{1}{T} \sum_{t=1}^{T} \| Q_{t,k+1} - \mathcal{T}_t^* Q_{t,k} \|_\nu^2$ , we have:

$$
\frac{1}{T} \sum_{t=1}^{T} \| Q_{t}^{*} - Q_{t}^{\pi_{K}} \|_{1,\rho} \leq \frac{2\gamma}{(1-\gamma)^{2}} \left[ \inf_{r \in [0,1]} C_{VI}^{\frac{1}{2}}(K; r) \, \mathcal{E}_{\mathrm{avg}}^{\frac{1}{2}}\left(\varepsilon_{\mathrm{avg},0}, \dots, \varepsilon_{\mathrm{avg},K-1}; r\right) + \frac{2 \gamma^{K} R_{\max,\mathrm{avg}}}{1-\gamma} \right], \tag{8}
$$

with $\mathcal{E}_{\mathrm{avg}} = \sum_{k=0}^{K-1} \alpha_k^{2r} \varepsilon_{\mathrm{avg},k}$ , $\gamma = \max_{t \in \{1, \dots, T\}} \gamma_t$ , $C_{VI}^{\frac{1}{2}}(K; r) = \max_{t \in \{1, \dots, T\}} C_{VI,\rho,\nu}^{\frac{1}{2}}(K; t, r)$ , $R_{\max,\mathrm{avg}} = \frac{1}{T} \sum_{t=1}^{T} R_{\max,t}$ , and

$$
\alpha_k = \begin{cases} \dfrac{(1-\gamma)\gamma^{K-k-1}}{1-\gamma^{K+1}} & 0 \leq k < K, \\[2ex] \dfrac{(1-\gamma)\gamma^{K}}{1-\gamma^{K+1}} & k = K. \end{cases}
$$

Remarks Theorem 2 retains most of the properties of Theorem 3.4 of Farahmand (2011), except that the regression error in the bound is now task-averaged. Interestingly, the second term of the sum in Equation (8) depends on the average maximum reward across tasks. In order to obtain this result, we use an overly pessimistic bound on $\gamma$ and on the concentrability coefficients; however, this approximation is not too loose if the MDPs are sufficiently similar.
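The coefficients $\alpha_k$ and the weighted error term $\mathcal{E}_{\mathrm{avg}}$ of Theorem 2 can be computed directly; the sketch below (with illustrative values of $\gamma$, $K$, and the per-iteration errors) also checks that the $\alpha_k$ sum to one, so $\mathcal{E}_{\mathrm{avg}}$ is a weighted sum that favors the errors of later AVI iterations:

```python
def alphas(gamma, K):
    """The coefficients alpha_k of Theorem 2: geometric weights that put
    more mass on errors at iterations k close to K."""
    denom = 1.0 - gamma ** (K + 1)
    a = [(1.0 - gamma) * gamma ** (K - k - 1) / denom for k in range(K)]
    a.append((1.0 - gamma) * gamma ** K / denom)  # the k = K coefficient
    return a

def error_term(eps_avg, gamma, r):
    """E_avg(eps_{avg,0}, ..., eps_{avg,K-1}; r) = sum_k alpha_k^{2r} eps_avg_k."""
    K = len(eps_avg)
    a = alphas(gamma, K)
    return sum(a[k] ** (2 * r) * eps_avg[k] for k in range(K))

a = alphas(0.9, K=5)
assert abs(sum(a) - 1.0) < 1e-12  # the alpha_k always sum to one
E = error_term([0.1] * 5, gamma=0.9, r=0.5)
```

That the $\alpha_k$ sum to one follows from the geometric series: the first $K$ terms sum to $\frac{1-\gamma^{K}}{1-\gamma^{K+1}}$ and the last adds $\frac{(1-\gamma)\gamma^{K}}{1-\gamma^{K+1}}$.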

# 3.3 MULTI-TASK APPROXIMATION ERROR BOUND

We bound the task-averaged approximation error $\varepsilon_{\mathrm{avg}}$ at each AVI iteration $k$ involved in (8) following a derivation similar to the one proposed by Maurer et al. (2016), obtaining:

Theorem 3. Let $\mu, \mathcal{W}, \mathcal{H}$ and $\mathcal{F}$ be defined as above and assume $0 \in \mathcal{H}$ and $f(0) = 0, \forall f \in \mathcal{F}$ . Then for $\delta > 0$ with probability at least $1 - \delta$ in the draw of $\bar{\mathbf{Z}} \sim \prod_{t=1}^{T} \mu_t^n$ we have that

$$
\begin{array}{l} \varepsilon_{\mathrm{avg}}(\hat{\mathbf{w}}, \hat{h}, \hat{\mathbf{f}}) \leq L(\mathcal{F}) \left( c_1 \dfrac{L(\mathcal{H}) \sup_{l \in \{1,\dots,T\}} G(\mathcal{W}(\mathbf{X}_l))}{n} + c_2 \dfrac{\sup_{\mathbf{w}} \| \mathbf{w}(\bar{\mathbf{X}}) \| \, O(\mathcal{H})}{n T} \right. \\[2ex] \left. \quad + \, c_3 \dfrac{\min_{p \in P} G(\mathcal{H}(p))}{n T} \right) + c_4 \dfrac{\sup_{h, \mathbf{w}} \| h(\mathbf{w}(\bar{\mathbf{X}})) \| \, O(\mathcal{F})}{n \sqrt{T}} + \sqrt{\dfrac{8 \ln\left(\frac{3}{\delta}\right)}{n T}} + \varepsilon_{\mathrm{avg}}^{*}. \tag{9} \end{array}
$$

Remarks The assumptions $0 \in \mathcal{H}$ and $f(0) = 0$ for all $f \in \mathcal{F}$ are not essential for the proof and are only needed to simplify the result. For reasonable function classes, the Gaussian complexity $G(\mathcal{W}(\mathbf{X}_l))$ is $\mathcal{O}(\sqrt{n})$ . If $\sup_{\mathbf{w}} \| \mathbf{w}(\bar{\mathbf{X}}) \|$ and $\sup_{h,\mathbf{w}} \| h(\mathbf{w}(\bar{\mathbf{X}})) \|$ can be uniformly bounded, then they are $\mathcal{O}(\sqrt{nT})$ . For some function classes, the Gaussian average of Lipschitz quotients $O(\cdot)$ can be bounded independently from the number of samples.
Given these assumptions, the first and the fourth term of the right-hand side of Equation (9), which represent respectively the cost of learning the meta-state space $\mathbf{w}$ and the task-specific mappings $\mathbf{f}$ , are both $\mathcal{O}(1/\sqrt{n})$ . The second term represents the cost of learning the multi-task representation $h$ and is $\mathcal{O}(1/\sqrt{nT})$ , thus vanishing in the multi-task limit $T \to \infty$ . The third term can be removed if $\forall h \in \mathcal{H}, \exists p_0 \in P : h(p_0) = 0$ ; even when this assumption does not hold, this term can be ignored for many classes of interest, e.g. neural networks, as it can be arbitrarily small.

The last term to be bounded in (9) is the minimum average approximation error $\varepsilon_{\mathrm{avg}}^*$ at each AVI iteration $k$ . Recalling that the task-averaged approximation error is defined as in (7), applying Theorem 5.3 of Farahmand (2011) we obtain:

Lemma 4. Let $Q_{t,k}^{*}, \forall t \in \{1, \dots, T\}$ be the minimizers of $\varepsilon_{avg,k}^{*}$ , $\check{t}_k = \arg \max_{t \in \{1, \dots, T\}} \| Q_{t,k+1}^* - \mathcal{T}_t^* Q_{t,k} \|_\nu^2$ , and $b_{k,i} = \| Q_{\check{t}_k,i+1} - \mathcal{T}_{\check{t}_k}^* Q_{\check{t}_k,i} \|_\nu$ , then:

$$
\varepsilon_{avg,k}^{*} \leq \left( \| Q_{\check{t}_k, k+1}^{*} - \left(\mathcal{T}_{\check{t}_k}^{*}\right)^{k+1} Q_{\check{t}_k, 0} \|_{\nu} + \sum_{i=0}^{k-1} \left( \gamma_{\check{t}_k} C_{AE}(\nu; \check{t}_k, P) \right)^{i+1} b_{k, k-1-i} \right)^{2}, \tag{10}
$$

with $C_{AE}$ defined as in Farahmand (2011).

Final remarks The bound for MTRL is derived by composing the results in Theorems 2 and 3, and Lemma 4. The results above highlight the advantage of learning a shared representation.
The bound in Theorem 2 shows that a small approximation error is critical to improve the convergence towards the optimal action-value function, and the bound in Theorem 3 shows that the cost of learning the shared representation at each AVI iteration is mitigated by using multiple tasks. This is particularly beneficial when the feature representation is complex, e.g. deep neural networks.

# 3.4 DISCUSSION

As stated in the remarks of Equation (9), the benefit of MTRL is evinced by the second component of the bound, i.e. the cost of learning $h$ , which vanishes as the number of tasks increases. Obviously, adding more tasks requires the shared representation to be large enough to include all of them, undesirably causing the term $\sup_{h,\mathbf{w}}\| h(\mathbf{w}(\bar{\mathbf{X}}))\|$ in the fourth component of the bound to increase. This introduces a tradeoff between the number of features and the number of tasks; however, for

![](images/bc614009960eea813b899ba8e78c40e2950804e67c36b5d9515152c5da0f4bb4.jpg)
(a) Shared network

![](images/514f7c98439984a76009bb72eb38da0e8b78074e116a4f5ec7010ab18ed6a8e5.jpg)
(b) FQI vs MFQI

![](images/e5f4993e8b145b1ca85b9556e7962673f5366895bca5a52f10f640527deebbe7.jpg)

![](images/033a17117fa419467af846f0d01c68b14a0c50cd743d0e4ad67dd63f943f70f2.jpg)
(c) #Task analysis
Figure 1: (a) The architecture of the neural network we propose to learn $T$ tasks simultaneously. The $w_{t}$ block maps each input $x_{t}$ from task $\mu_{t}$ to a shared set of layers $h$ which extracts a common representation of the tasks. Eventually, the shared representation is specialized in block $f_{t}$ and the output $y_{t}$ of the network is computed. Note that each block can be composed of arbitrarily many layers. (b) Results of FQI and MFQI averaged over 4 tasks in Car-On-Hill, showing $\| Q^{*} - Q^{\pi K}\|$ on the left, and the discounted cumulative reward on the right.
(c) Results of MFQI showing $\| Q^{*} - Q^{\pi K}\|$ for increasing number of tasks. Both results in (b) and (c) are averaged over 100 experiments, and show the $95\%$ confidence intervals.

a reasonable number of tasks the number of features used in the single-task case is enough to handle them, as we show in some experiments in Section 5. Notably, since the AVI/API framework of Farahmand (2011) provides an easy way to include the approximation error of a generic function approximator, it is easy to show the benefit in MTRL of the bound in Equation (9). Despite being just multi-task extensions of previous works, our results are the first to theoretically show the benefit of sharing representation in MTRL. Moreover, they serve as a significant theoretical motivation, besides the intuitive ones, for the practical algorithms that we describe in the following sections.

# 4 SHARING REPRESENTATIONS

We want to empirically evaluate the benefit of our theoretical study in the problem of jointly learning $T$ different tasks $\mu_t$ , introducing a neural network architecture for which our bounds hold. Following our theoretical framework, the network we propose extracts representations $w_t$ from inputs $x_t$ for each task $\mu_t$ , mapping them to common features in a set of shared layers $h$ , specializing the learning of each task in respective separated layers $f_t$ , and finally computing the output $y_t = (f_t \circ h \circ w_t)(x_t) = f_t(h(w_t(x_t)))$ (Figure 1(a)). The idea behind this architecture is not new in the literature. For instance, similar ideas have already been used in DQN variants to improve exploration on the same task via bootstrapping (Osband et al., 2016) and to perform MTRL (Liu et al., 2016).

The intuitive and desirable property of this architecture is the exploitation of the regularization effect introduced by the shared representation of the jointly learned tasks.
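A minimal forward-pass sketch of the composition $y_t = f_t(h(w_t(x_t)))$ of Figure 1(a); the layer shapes, tanh nonlinearity, and random initialization below are illustrative choices, not the paper's implementation:

```python
import math
import random

class MultiTaskNet:
    """Sketch of the shared architecture: per-task input blocks w_t, one
    shared block h, and per-task output heads f_t. Each block is a single
    random linear map followed by tanh, purely for illustration (no
    training is performed here)."""

    def __init__(self, input_dims, shared_dim, output_dims, seed=0):
        rng = random.Random(seed)
        def mk(n_in, n_out):  # random (n_out x n_in) weight matrix
            return [[rng.gauss(0, 1 / math.sqrt(n_in)) for _ in range(n_in)]
                    for _ in range(n_out)]
        self.w = [mk(d, shared_dim) for d in input_dims]   # task-specific w_t
        self.h = mk(shared_dim, shared_dim)                # shared block h
        self.f = [mk(shared_dim, d) for d in output_dims]  # task-specific f_t

    @staticmethod
    def _apply(W, x):
        return [math.tanh(sum(wij * xj for wij, xj in zip(row, x))) for row in W]

    def forward(self, t, x):
        # y_t = f_t(h(w_t(x_t)))
        return self._apply(self.f[t], self._apply(self.h, self._apply(self.w[t], x)))

# Two tasks with different input/output sizes but one shared representation
net = MultiTaskNet(input_dims=[4, 6], shared_dim=8, output_dims=[2, 3])
y0 = net.forward(0, [0.1, -0.2, 0.3, 0.5])
y1 = net.forward(1, [0.0] * 6)
```

Note that only the shapes of $w_t$ and $f_t$ depend on the task, so tasks with heterogeneous state and action spaces can still share the block $h$.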
Indeed, unlike learning a single task, which may end up overfitting, forcing the model to compute a shared representation of the tasks helps the regression process to extract more general features, with a consequent reduction in the variance of the learned function. This intuitive justification for our approach complements the theoretical benefit proven in Section 3. Note that our architecture can be used in any MTRL problem involving a regression process; indeed, it can be easily used in value-based methods as a $Q$ -function regressor, or in policy search as a policy regressor. In both cases, the targets are learned for each task $\mu_t$ in its respective output block $f_t$ . Remarkably, as we show in the experimental Section 5, it is straightforward to extend RL algorithms to their multi-task variants only through the use of the proposed network architecture, without major changes to the algorithms themselves.

# 5 EXPERIMENTAL RESULTS

To empirically evince the effect described by our bounds, we propose an extension of FQI (Ernst et al., 2005; Riedmiller, 2005), that we call MFQI, for which our AVI bounds apply. Then, to empirically evaluate our approach in challenging RL problems, we introduce multi-task variants of two well-known DRL algorithms: DQN (Mnih et al., 2015) and DDPG (Lillicrap et al., 2015), which we call Multi Deep $Q$ -Network (MDQN) and Multi Deep Deterministic Policy Gradient (MDDPG) respectively. Note that for these methodologies, our AVI and API bounds hold only with

![](images/fa85bb41a67af7cd367de87e1285e790dfcf4a34bd1d881c5e85ad44e2b1852e.jpg)
Figure 2: Discounted cumulative reward averaged over 100 experiments of DQN and MDQN for each task and for transfer learning in the Acrobot problem. An epoch consists of 1,000 steps, after which the greedy policy is evaluated for 2,000 steps. The $95\%$ confidence intervals are shown.

the simplifying assumption that the samples are i.i.d.; nevertheless, they are useful to show the benefit of our method also in complex scenarios, e.g. MuJoCo (Todorov et al., 2012). We remark that in these experiments we are only interested in showing the benefit of learning multiple tasks with a shared representation w.r.t. learning a single task; therefore, we only compare our methods with their single-task counterparts, ignoring other works on MTRL in the literature. Experiments have been developed using the MushroomRL library (D'Eramo et al., 2020), and run on an NVIDIA® DGX Station™ and Intel® AI DevCloud. Refer to Appendix B for all the details and our motivations about the experimental settings.

# 5.1 MULTI FITTED $Q$ -ITERATION

As a first empirical evaluation, we consider FQI, as an example of an AVI algorithm, to show the effect described by our theoretical AVI bounds in experiments. We consider the Car-On-Hill problem as described in Ernst et al. (2005), and select four different tasks from it by changing the mass of the car and the value of the actions (details in Appendix B). Then, we run separate instances of FQI with a single-task network for each task respectively, and one instance of MFQI considering all the tasks simultaneously. Figure 1(b) shows the $L_{1}$ -norm of the difference between $Q^{*}$ and $Q^{\pi_K}$ averaged over all the tasks. It is clear how MFQI is able to get much closer to the optimal $Q$ -function, thus giving empirical evidence of the AVI bounds in Theorem 2. For completeness, we also show the advantage of MFQI w.r.t. FQI in performance. Then, in Figure 1(c) we provide empirical evidence of the benefit of increasing the number of tasks in MFQI in terms of both quality and stability.

# 5.2 MULTI DEEP $Q$ -NETWORK

As in Liu et al. (2016), our MDQN uses separate replay memories for each task, and the batch used in each training step is built by picking the same number of samples from each replay memory.
Furthermore, a step of the algorithm consists of exactly one step in each task. These are the only minor changes we introduce to the vanilla DQN algorithm, while all other aspects, such as the use of the target network, are not modified. Thus, the time complexity of MDQN is considerably lower than vanilla DQN thanks to the learning of $T$ tasks with a single model, but at the cost of a higher memory complexity for the collection of samples for each task. We consider five problems with similar state spaces, sparse rewards and discrete actions: Cart-Pole, Acrobot, Mountain-Car, Car-On-Hill, and Inverted-Pendulum. The implementation of the first three problems is the one provided by the OpenAI Gym library (Brockman et al., 2016), while Car-On-Hill is described in Ernst et al. (2005) and Inverted-Pendulum in Lagoudakis & Parr (2003).

Figure 2(a) shows the performance of MDQN w.r.t. vanilla DQN, which uses a single-task network structured as the multi-task one in the case with $T = 1$ . The first three plots from the left show good performance of MDQN, which is both higher and more stable than DQN. In Car-On-Hill, MDQN is slightly slower than DQN to reach the best performance, but eventually manages to be more stable. Finally, the Inverted-Pendulum experiment is clearly too easy to solve for both approaches, but it is still useful for the shared feature extraction in MDQN. The described results provide important hints about the better quality of the features extracted by MDQN w.r.t. DQN.
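The MDQN sampling scheme described in this section (one replay memory per task, equal per-task shares in every training batch) can be sketched as follows; the class, method names, and sizes are illustrative, not the paper's implementation:

```python
import random
from collections import deque

class MultiTaskReplay:
    """Sketch of MDQN-style experience storage: one replay memory per task,
    and each training batch takes the same number of samples from every
    memory."""

    def __init__(self, n_tasks, capacity=10000):
        self.memories = [deque(maxlen=capacity) for _ in range(n_tasks)]

    def add(self, task, transition):
        self.memories[task].append(transition)

    def sample_batch(self, per_task):
        batch = []
        for t, mem in enumerate(self.memories):
            # tag each transition with its task index so the loss can be
            # routed to the correct output head f_t
            batch += [(t, tr) for tr in random.sample(list(mem), per_task)]
        return batch

replay = MultiTaskReplay(n_tasks=3)
for t in range(3):
    for i in range(50):
        replay.add(t, (f"s{i}", 0, 0.0, f"s{i + 1}"))  # (s, a, r, s') transition
batch = replay.sample_batch(per_task=8)  # 3 tasks x 8 samples = 24 transitions
```

Keeping the memories separate is what makes the per-task memory cost grow with $T$, as noted above, while a single model still serves all tasks.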
To further demonstrate this, we evaluate the performance of DQN on Acrobot, arguably the hardest of the five problems, using a single-task network with the shared parameters in $h$ initialized with the weights of a multi-task

![](images/779703cde602a4b8444c078a2b9dbd71140925a51820b26e9890c296a25c750a.jpg)
(a) Multi-task for pendulums

![](images/bfee2b3e0fd7ba1ce469176ca46c298fbbb0350fb500eea8ea406051edbf51ec.jpg)

![](images/302cab225a5e10e738a054a33ee616a792849f2cd1e50078f5f37cc2cf9f73da.jpg)
(b) Transfer for pendulums

![](images/7f35196ce4d75c065582fabadbabfe8041843fabd2fd74e6be23abc1ae238a4b.jpg)
(c) Multi-task for walkers

![](images/9c5f58f73c0f0bc983e337d86bb0946a0a66e4093cb09d3fd658ce622eab875f.jpg)
Figure 3: Discounted cumulative reward averaged over 40 experiments of DDPG and MDDPG for each task and for transfer learning in the Inverted-Double-Pendulum and Hopper problems. An epoch consists of 10,000 steps, after which the greedy policy is evaluated for 5,000 steps. The $95\%$ confidence intervals are shown.

![](images/e6f629032afb12663787b607a709c15b6a34bccc624990303204a954aa3e132a.jpg)

![](images/db02faa2fd61c7a466496011d1bdd8067d7de93e5c7709702b2d9ff830884e09.jpg)
(d) Transfer for walkers

network trained with MDQN on the other four problems. The pre-trained weights can either be adjusted during the learning of the new task, or be kept fixed so that only the remaining randomly initialized parameters in $\mathbf{w}$ and $\mathbf{f}$ are trained. From Figure 2(b), the advantages of initializing the weights are clear. In particular, we compare the performance of DQN without initialization w.r.t. DQN with initialization in three settings: in Unfreeze-0 the initialized weights are adjusted, in No-Unfreeze they are kept fixed, and in Unfreeze-10 they are kept fixed until epoch 10, after which they start to be optimized.
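The three initialization settings can be sketched as a simple trainability schedule for the pre-trained shared block $h$ (a hedged sketch of the protocol, not the actual implementation):

```python
def shared_weights_trainable(mode, epoch):
    """Whether the pre-trained shared block h is updated at a given epoch,
    for the three settings compared in Figure 2(b)."""
    if mode == "Unfreeze-0":
        return True            # adjusted from the start
    if mode == "No-Unfreeze":
        return False           # kept fixed for the whole run
    if mode == "Unfreeze-10":
        return epoch >= 10     # fixed until epoch 10, then optimized
    raise ValueError(mode)
```

In a deep learning framework, this flag would typically gate whether gradients flow into the shared layers at each epoch, while the task-specific blocks $\mathbf{w}$ and $\mathbf{f}$ are always trained.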
Interestingly, keeping the shared weights fixed shows a significant performance improvement in the earliest epochs, but soon ceases to improve. On the other hand, adjusting the weights from the earliest epochs shows improvements over the uninitialized network only in the intermediate stages of learning. The best results are achieved by starting to adjust the shared weights after epoch 10, which is approximately the point at which the improvement given by the fixed initialization starts to lessen.

# 5.3 MULTI DEEP DETERMINISTIC POLICY GRADIENT

In order to show how the flexibility of our approach easily allows performing MTRL in policy search algorithms, we propose MDDPG as a multi-task variant of DDPG. As an actor-critic method, DDPG requires an actor network and a critic network. Intuitively, to obtain MDDPG both the actor and critic networks should be built following our proposed structure. We perform separate experiments on two sets of MuJoCo (Todorov et al., 2012) problems with similar continuous state and action spaces: the first set includes Inverted-Pendulum, Inverted-Double-Pendulum, and Inverted-Pendulum-Swingup as implemented in the pybullet library, whereas the second set includes Hopper-Stand, Walker-Walk, and Half-Cheetah-Run as implemented in the DeepMind Control Suite (Tassa et al., 2018). Figure 3(a) shows a relevant improvement of MDDPG w.r.t. DDPG in the pendulum tasks. Indeed, while in Inverted-Pendulum, which is the easiest problem among the three, the performance of MDDPG is only slightly better than DDPG, the difference in the other two problems is significant. The advantage of MDDPG is confirmed in Figure 3(c), where it performs better than DDPG in Hopper and equally well in the other two tasks. Again, we perform a TL evaluation of DDPG in the problems where it suffers the most, by initializing the shared weights of a single-task network with the ones of a multi-task network trained with MDDPG on the other problems.
Figures 3(b) and 3(d) show evident advantages of pre-training the shared weights and a significant difference between keeping them fixed or not.

# 6 RELATED WORKS

Our work is inspired by both theoretical and empirical studies in the MTL and MTRL literature. In particular, the theoretical analysis we provide follows previous results about the theoretical properties of multi-task algorithms. For instance, Cavallanti et al. (2010) and Maurer (2006) prove the theoretical advantages of MTL based on linear approximation. In more detail, Maurer (2006) derives bounds on MTL when a linear approximator is used to extract a shared representation among tasks. Then, Maurer et al. (2016), which we considered in this work, describes similar results that extend to the use of non-linear approximators. Similar studies have been conducted in the context of MTRL. Among others, Lazaric & Restelli (2011) and Brunskill & Li (2013) give theoretical proofs of the advantage of learning from multiple MDPs and introduce new algorithms to empirically support their claims, as done in this work.

Generally, contributions in MTRL assume that properties of different tasks, e.g. dynamics and reward function, are generated from a common generative model. In this regard, interesting analyses consider Bayesian approaches; for instance, Wilson et al. (2007) assumes that the tasks are generated from a hierarchical Bayesian model, and likewise Lazaric & Ghavamzadeh (2010) considers the case when the value functions are generated from a common prior distribution. Similar considerations, which however do not use a Bayesian approach, are implicitly made in Taylor et al. (2007), Lazaric et al. (2008), and also in this work.

In recent years, the advantages of MTRL have been empirically evinced also in DRL, especially exploiting the powerful representational capacity of deep neural networks. For instance, Parisotto et al. (2015) and Rusu et al.
(2015) propose to derive a multi-task policy from the policies learned by DQN experts trained separately on different tasks. Rusu et al. (2015) compares to a variant of DQN introduced therein, which is very similar to our MDQN and the one in Liu et al. (2016), showing how their method overcomes it in the Atari benchmark (Bellemare et al., 2013). Further developments extend the analysis to policy search (Yang et al., 2017; Teh et al., 2017), and to multi-goal RL (Schaul et al., 2015; Andrychowicz et al., 2017). Finally, Hessel et al. (2018) addresses the problem of balancing the learning of multiple tasks with a single deep neural network, proposing a method that uniformly adapts the impact of each task on the training updates of the agent.

# 7 CONCLUSION

We have theoretically proved the advantage in RL of using a shared representation to learn multiple tasks w.r.t. learning a single task. We have derived our results extending the AVI/API bounds (Farahmand, 2011) to MTRL, leveraging the upper bounds on the approximation error in MTL provided in Maurer et al. (2016). The results of this analysis show that the error propagation during the AVI/API iterations is reduced according to the number of tasks. Then, we proposed a practical way of exploiting this theoretical benefit, which consists of an effective way of extracting shared representations of multiple tasks by means of deep neural networks. To empirically show the advantages of our method, we carried out experiments on challenging RL problems with the introduction of multi-task extensions of FQI, DQN, and DDPG based on the neural network structure we proposed. As desired, the favorable empirical results confirm the theoretical benefit we described.

# ACKNOWLEDGMENTS

This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 640554 (SKILLS4ROBOTS) and No. 713010 (GOAL-Robots).
This project has also been supported by grants from NVIDIA, the NVIDIA DGX Station, and the Intel® AI DevCloud. The authors thank Alberto Maria Metelli, Andrea Tirinzoni and Matteo Papini for their helpful insights during the development of the project.

# REFERENCES

Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, OpenAI Pieter Abbeel, and Wojciech Zaremba. Hindsight experience replay. In Advances in Neural Information Processing Systems, pp. 5048-5058, 2017.
Jonathan Baxter. A model of inductive bias learning. Journal of Artificial Intelligence Research, 12: 149-198, 2000.
Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47: 253-279, 2013.
Richard Bellman. The theory of dynamic programming. Technical report, RAND Corp Santa Monica CA, 1954.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym, 2016.
Emma Brunskill and Lihong Li. Sample complexity of multi-task reinforcement learning. In Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence, 2013.
Rich Caruana. Multitask learning. Machine learning, 28(1):41-75, 1997.
Giovanni Cavallanti, Nicolo Cesa-Bianchi, and Claudio Gentile. Linear algorithms for online multitask classification. Journal of Machine Learning Research, 11(Oct):2901-2934, 2010.
Carlo D'Eramo, Davide Tateo, Andrea Bonarini, Marcello Restelli, and Jan Peters. MushroomRL: Simplifying reinforcement learning research. arXiv:2001.01102, 2020.
Damien Ernst, Pierre Geurts, and Louis Wehenkel. Tree-based batch mode reinforcement learning. Journal of Machine Learning Research, 6(Apr):503-556, 2005.
Amir-massoud Farahmand. Regularization in reinforcement learning. PhD thesis, University of Alberta, 2011.

Matteo Hessel, Hubert Soyer, Lasse Espeholt, Wojciech Czarnecki, Simon Schmitt, and Hado van Hasselt. Multi-task deep reinforcement learning with popart. arXiv:1809.04474, 2018.
Irina Higgins, Arka Pal, Andrei Rusu, Loic Matthey, Christopher Burgess, Alexander Pritzel, Matthew Botvinick, Charles Blundell, and Alexander Lerchner. Darla: Improving zero-shot transfer in reinforcement learning. In International Conference on Machine Learning, pp. 1480-1490, 2017.
Michail G Lagoudakis and Ronald Parr. Least-squares policy iteration. Journal of machine learning research, 4(Dec):1107-1149, 2003.
Alessandro Lazaric. Transfer in reinforcement learning: a framework and a survey. In Reinforcement Learning, pp. 143-173. Springer, 2012.
Alessandro Lazaric and Mohammad Ghavamzadeh. Bayesian multi-task reinforcement learning. In ICML-27th International Conference on Machine Learning, pp. 599-606. Omnipress, 2010.
Alessandro Lazaric and Marcello Restelli. Transfer from multiple mdps. In Advances in Neural Information Processing Systems, pp. 1746-1754, 2011.
Alessandro Lazaric, Marcello Restelli, and Andrea Bonarini. Transfer of samples in batch reinforcement learning. In Proceedings of the 25th international conference on Machine learning, pp. 544-551. ACM, 2008.
Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
Lydia Liu, Urun Dogan, and Katja Hofmann. Decoding multitask dqn in the world of minecraft. In European Workshop on Reinforcement Learning, 2016.
Andreas Maurer. Bounds for linear multi-task learning. Journal of Machine Learning Research, 7 (Jan):117-139, 2006.
Andreas Maurer. A chain rule for the expected suprema of gaussian processes. Theoretical Computer Science, 650:109-122, 2016.
Andreas Maurer, Massimiliano Pontil, and Bernardino Romera-Paredes.
The benefit of multitask representation learning. The Journal of Machine Learning Research, 17(1):2853-2884, 2016. +Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015. +Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. Deep exploration via bootstrapped DQN. In Advances in Neural Information Processing Systems, pp. 4026-4034, 2016. +Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Actor-mimic: Deep multitask and transfer reinforcement learning. arXiv preprint arXiv:1511.06342, 2015. +Martin Riedmiller. Neural fitted Q iteration - first experiences with a data efficient neural reinforcement learning method. In European Conference on Machine Learning, pp. 317-328. Springer, 2005. +Andrei A Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy distillation. arXiv preprint arXiv:1511.06295, 2015. +Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal value function approximators. In International Conference on Machine Learning, pp. 1312-1320, 2015. +Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, Timothy P. Lillicrap, and Martin A. Riedmiller. DeepMind control suite. CoRR, abs/1801.00690, 2018. +Matthew E Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10(Jul):1633-1685, 2009. +Matthew E Taylor, Peter Stone, and Yaxin Liu. Transfer learning via inter-task mappings for temporal difference learning. Journal of Machine Learning Research, 8(Sep):2125-2167, 2007.
+ +Yee Whye Teh, Victor Bapst, Wojciech M Czarnecki, John Quan, James Kirkpatrick, Raia Hadsell, Nicolas Heess, and Razvan Pascanu. Distral: Robust multitask reinforcement learning. In Advances in Neural Information Processing Systems, pp. 4496-4506, 2017. +Sebastian Thrun and Lorien Pratt. Learning to learn. Springer Science & Business Media, 2012. +Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2012. +Aaron Wilson, Alan Fern, Soumya Ray, and Prasad Tadepalli. Multi-task reinforcement learning: a hierarchical Bayesian approach. In Proceedings of the 24th International Conference on Machine Learning, pp. 1015-1022. ACM, 2007. +Markus Wulfmeier, Abbas Abdolmaleki, Roland Hafner, Jost Tobias Springenberg, Michael Neunert, Tim Hertweck, Thomas Lampe, Noah Siegel, Nicolas Heess, and Martin Riedmiller. Regularized hierarchical policies for compositional transfer in robotics. arXiv preprint arXiv:1906.11228, 2019. +Zhaoyang Yang, Kathryn E Merrick, Hussein A Abbass, and Lianwen Jin. Multi-task deep reinforcement learning for continuous action control. In IJCAI, pp. 3301-3307, 2017. + +# A PROOFS + +# A.1 APPROXIMATED VALUE-ITERATION BOUNDS + +Proof of Theorem 2.
We compute the average expected loss across tasks: + +$$ +\begin{array}{l} \frac{1}{T}\sum_{t=1}^{T}\|Q_{t}^{*} - Q_{t}^{\pi_{K}}\|_{1,\rho} \\ \leq \frac{1}{T}\sum_{t=1}^{T}\frac{2\gamma_{t}}{(1-\gamma_{t})^{2}}\left[\inf_{r\in[0,1]} C_{\mathrm{VI},\rho,\nu}^{\frac{1}{2}}(K;t,r)\,\mathcal{E}^{\frac{1}{2}}(\varepsilon_{t,0},\dots,\varepsilon_{t,K-1};t,r) + \frac{2}{1-\gamma_{t}}\gamma_{t}^{K} R_{\max,t}\right] \\ \leq \frac{2\gamma}{(1-\gamma)^{2}}\frac{1}{T}\sum_{t=1}^{T}\left[\inf_{r\in[0,1]} C_{\mathrm{VI},\rho,\nu}^{\frac{1}{2}}(K;t,r)\,\mathcal{E}^{\frac{1}{2}}(\varepsilon_{t,0},\dots,\varepsilon_{t,K-1};t,r) + \frac{2}{1-\gamma_{t}}\gamma_{t}^{K} R_{\max,t}\right] \\ \leq \frac{2\gamma}{(1-\gamma)^{2}}\left[\frac{1}{T}\sum_{t=1}^{T}\left(\inf_{r\in[0,1]} C_{\mathrm{VI},\rho,\nu}^{\frac{1}{2}}(K;t,r)\,\mathcal{E}^{\frac{1}{2}}(\varepsilon_{t,0},\dots,\varepsilon_{t,K-1};t,r)\right) + \frac{2}{1-\gamma}\gamma^{K} R_{\max,\mathrm{avg}}\right] \\ \leq \frac{2\gamma}{(1-\gamma)^{2}}\left[\inf_{r\in[0,1]}\frac{1}{T}\sum_{t=1}^{T}\left(C_{\mathrm{VI},\rho,\nu}^{\frac{1}{2}}(K;t,r)\,\mathcal{E}^{\frac{1}{2}}(\varepsilon_{t,0},\dots,\varepsilon_{t,K-1};t,r)\right) + \frac{2}{1-\gamma}\gamma^{K} R_{\max,\mathrm{avg}}\right] \\ \leq \frac{2\gamma}{(1-\gamma)^{2}}\left[\inf_{r\in[0,1]} C_{\mathrm{VI}}^{\frac{1}{2}}(K;r)\frac{1}{T}\sum_{t=1}^{T}\left(\mathcal{E}^{\frac{1}{2}}(\varepsilon_{t,0},\dots,\varepsilon_{t,K-1};t,r)\right) + \frac{2}{1-\gamma}\gamma^{K} R_{\max,\mathrm{avg}}\right] \tag{11} \\ \end{array} +$$ + +with $\gamma = \max_{t\in\{1,\dots,T\}}\gamma_{t}$, $C_{\mathrm{VI}}^{\frac{1}{2}}(K;r) = \max_{t\in\{1,\dots,T\}} C_{\mathrm{VI},\rho,\nu}^{\frac{1}{2}}(K;t,r)$, and $R_{\max,\mathrm{avg}} = 1/T\sum_{t=1}^{T}R_{\max,t}$. + +Considering the term $1/T\sum_{t=1}^{T}\left[\mathcal{E}^{\frac{1}{2}}(\varepsilon_{t,0},\dots,\varepsilon_{t,K-1};t,r)\right] = 1/T\sum_{t=1}^{T}\left(\sum_{k=0}^{K-1}\alpha_{t,k}^{2r}\varepsilon_{t,k}\right)^{\frac{1}{2}}$, let + +$$ +\alpha_{k} = \left\{\begin{array}{ll} \frac{(1-\gamma)\gamma^{K-k-1}}{1-\gamma^{K+1}} & 0 \leq k < K, \\ \frac{(1-\gamma)\gamma^{K}}{1-\gamma^{K+1}} & k = K, \end{array}\right. +$$ + +then we bound + +$$ +\frac{1}{T}\sum_{t=1}^{T}\left(\sum_{k=0}^{K-1}\alpha_{t,k}^{2r}\varepsilon_{t,k}\right)^{\frac{1}{2}} \leq \frac{1}{T}\sum_{t=1}^{T}\left(\sum_{k=0}^{K-1}\alpha_{k}^{2r}\varepsilon_{t,k}\right)^{\frac{1}{2}}. +$$ + +Using Jensen's inequality: + +$$ +\frac{1}{T}\sum_{t=1}^{T}\left(\sum_{k=0}^{K-1}\alpha_{k}^{2r}\varepsilon_{t,k}\right)^{\frac{1}{2}} \leq \left(\sum_{k=0}^{K-1}\alpha_{k}^{2r}\frac{1}{T}\sum_{t=1}^{T}\varepsilon_{t,k}\right)^{\frac{1}{2}}. +$$ + +So, we can now write (11) as + +$$ +\begin{array}{l} \frac{1}{T}\sum_{t=1}^{T}\|Q_{t}^{*} - Q_{t}^{\pi_{K}}\|_{1,\rho} \leq \frac{2\gamma}{(1-\gamma)^{2}}\left[\inf_{r\in[0,1]} C_{\mathrm{VI}}^{\frac{1}{2}}(K;r)\,\mathcal{E}_{\mathrm{avg}}^{\frac{1}{2}}(\varepsilon_{\mathrm{avg},0},\dots,\varepsilon_{\mathrm{avg},K-1};r) \right. \\ \left. + \frac{2}{1-\gamma}\gamma^{K} R_{\max,\mathrm{avg}}\right], \\ \end{array} +$$ + +with $\varepsilon_{\mathrm{avg},k} = 1/T\sum_{t=1}^{T}\varepsilon_{t,k}$ and $\mathcal{E}_{\mathrm{avg}}(\varepsilon_{\mathrm{avg},0},\dots,\varepsilon_{\mathrm{avg},K-1};r) = \sum_{k=0}^{K-1}\alpha_{k}^{2r}\varepsilon_{\mathrm{avg},k}$.
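The two steps above, namely that the weights $\alpha_k$ form a convex combination over $k = 0,\dots,K$ and that Jensen's inequality moves the task average inside the square root, can be sanity-checked numerically. A minimal sketch in plain Python, with arbitrary illustrative values for $\gamma$, $K$, $T$, and the per-task errors $\varepsilon_{t,k}$ (these values are placeholders, not taken from the experiments):

```python
import math
import random

# alpha_k coefficients from the proof: a convex combination over k = 0..K
def alphas(gamma, K):
    denom = 1.0 - gamma ** (K + 1)
    a = [(1.0 - gamma) * gamma ** (K - k - 1) / denom for k in range(K)]
    a.append((1.0 - gamma) * gamma ** K / denom)  # k = K case
    return a

gamma, K, T = 0.95, 10, 4
a = alphas(gamma, K)
assert abs(sum(a) - 1.0) < 1e-12  # the alpha_k sum to 1

# Jensen's inequality: average of a concave function <= function of the average
random.seed(0)
eps = [[random.random() for _ in range(K)] for _ in range(T)]  # eps[t][k]
r = 0.5
lhs = sum(math.sqrt(sum(a[k] ** (2 * r) * eps[t][k] for k in range(K)))
          for t in range(T)) / T
rhs = math.sqrt(sum(a[k] ** (2 * r) * (sum(eps[t][k] for t in range(T)) / T)
                    for k in range(K)))
assert lhs <= rhs + 1e-12  # mean of sqrt <= sqrt of mean
```

The same weights $\alpha_k$ reappear unchanged in the API bound of Theorem 6, so the check covers both cases.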
+ +![](images/722fb6410fc560d3bae8ce0d3a45633d3e55daff32f051eaae54967cc25f0562.jpg) + +Proof of Lemma 4. Let us start from the definition of the optimal task-averaged risk: + +$$ +\varepsilon_{\mathrm{avg},k}^{*} = \frac{1}{T}\sum_{t=1}^{T}\|Q_{t,k+1}^{*} - \mathcal{T}_{t}^{*}Q_{t,k}\|_{\nu}^{2}, +$$ + +where $Q_{t,k}^{*}$, with $t \in [1,T]$, are the minimizers of $\varepsilon_{\mathrm{avg},k}$. + +Consider the task $\tilde{t}_{k}$ such that + +$$ +\tilde{t}_{k} = \arg\max_{t\in\{1,\dots,T\}} \|Q_{t,k+1}^{*} - \mathcal{T}_{t}^{*}Q_{t,k}\|_{\nu}^{2}; +$$ + +we can write the following inequality: + +$$ +\sqrt{\varepsilon_{\mathrm{avg},k}^{*}} \leq \|Q_{\tilde{t}_{k},k+1}^{*} - \mathcal{T}_{\tilde{t}_{k}}^{*}Q_{\tilde{t}_{k},k}\|_{\nu}. +$$ + +By the application of Theorem 5.3 of Farahmand (2011) to the right-hand side, and defining $b_{k,i} = \|Q_{\tilde{t}_{k},i+1} - \mathcal{T}_{\tilde{t}_{k}}^{*}Q_{\tilde{t}_{k},i}\|_{\nu}$, we obtain: + +$$ +\sqrt{\varepsilon_{\mathrm{avg},k}^{*}} \leq \|Q_{\tilde{t}_{k},k+1}^{*} - (\mathcal{T}_{\tilde{t}_{k}}^{*})^{k+1}Q_{\tilde{t}_{k},0}\|_{\nu} + \sum_{i=0}^{k-1}\left(\gamma_{\tilde{t}_{k}} C_{\mathrm{AE}}(\nu;\tilde{t}_{k},P)\right)^{i+1} b_{k,k-1-i}. +$$ + +Squaring both sides yields the result: + +$$ +\varepsilon_{\mathrm{avg},k}^{*} \leq \left(\|Q_{\tilde{t}_{k},k+1}^{*} - (\mathcal{T}_{\tilde{t}_{k}}^{*})^{k+1}Q_{\tilde{t}_{k},0}\|_{\nu} + \sum_{i=0}^{k-1}\left(\gamma_{\tilde{t}_{k}} C_{\mathrm{AE}}(\nu;\tilde{t}_{k},P)\right)^{i+1} b_{k,k-1-i}\right)^{2}. +$$ + +![](images/bca5a786bac5cde2021cfc9863cf5ed68594414124a5c4efb68ec70ce6405d20.jpg) + +# A.2 APPROXIMATED POLICY-ITERATION BOUNDS + +We start by considering the bound for the API framework: + +Theorem 5.
(Theorem 3.2 of Farahmand (2011)) Let $K$ be a positive integer, and $Q_{\max} \leq \frac{R_{\max}}{1 - \gamma}$. Then for any sequence $(Q_k)_{k=0}^{K-1} \subset B(\mathcal{S} \times \mathcal{A}, Q_{\max})$ and the corresponding sequence $(\varepsilon_k)_{k=0}^{K-1}$, where $\varepsilon_k = \|Q_k - Q^{\pi_k}\|_\nu^2$, we have: + +$$ +\left\|Q^{*} - Q^{\pi_{K}}\right\|_{1,\rho} \leq \frac{2\gamma}{(1-\gamma)^{2}}\left[\inf_{r\in[0,1]} C_{\mathrm{PI},\rho,\nu}^{\frac{1}{2}}(K;r)\,\mathcal{E}^{\frac{1}{2}}(\varepsilon_{0},\dots,\varepsilon_{K-1};r) + \gamma^{K-1} R_{\max}\right], \tag{12} +$$ + +where + +$$ +\begin{array}{l} C_{\mathrm{PI},\rho,\nu}(K;r) = \left(\frac{1-\gamma}{2}\right)^{2}\sup_{\pi_{0}^{\prime},\dots,\pi_{K}^{\prime}}\sum_{k=0}^{K-1}\alpha_{k}^{2(1-r)}\left(\sum_{m\geq 0}\gamma^{m} c_{\mathrm{PI}_{1},\rho,\nu}(K-k-1,m+1;\pi_{k+1}^{\prime}) \right. \\ \left. + \sum_{m\geq 1}\gamma^{m} c_{\mathrm{PI}_{2},\rho,\nu}(K-k-1,m;\pi_{k+1}^{\prime},\pi_{k}^{\prime}) + c_{\mathrm{PI}_{3},\rho,\nu}\right)^{2}; \tag{13} \\ \end{array} +$$ + +with $\mathcal{E}(\varepsilon_0,\dots,\varepsilon_{K-1};r) = \sum_{k=0}^{K-1}\alpha_k^{2r}\varepsilon_k$; the three coefficients $c_{\mathrm{PI}_1,\rho,\nu}$, $c_{\mathrm{PI}_2,\rho,\nu}$, $c_{\mathrm{PI}_3,\rho,\nu}$, the distributions $\rho$ and $\nu$, and the series $\alpha_{k}$ are defined as in Farahmand (2011). + +From Theorem 5, by computing the average loss across tasks and pushing the average inside the square root using Jensen's inequality, we derive the API bounds averaged on multiple tasks. + +Theorem 6. Let $K$ be a positive integer, and $Q_{\max} \leq \frac{R_{\max}}{1 - \gamma}$.
Then for any sequence $(Q_k)_{k=0}^{K-1} \subset B(\mathcal{S} \times \mathcal{A}, Q_{\max})$ and the corresponding sequence $(\varepsilon_{\mathrm{avg},k})_{k=0}^{K-1}$, where $\varepsilon_{\mathrm{avg},k} = \frac{1}{T}\sum_{t=1}^{T}\|Q_{t,k} - Q_{t}^{\pi_{k}}\|_{\nu}^{2}$, we have: + +$$ +\begin{array}{l} \frac{1}{T}\sum_{t=1}^{T}\|Q_{t}^{*} - Q_{t}^{\pi_{K}}\|_{1,\rho} \leq \frac{2\gamma}{(1-\gamma)^{2}}\left[\inf_{r\in[0,1]} C_{\mathrm{PI}}^{\frac{1}{2}}(K;r)\,\mathcal{E}_{\mathrm{avg}}^{\frac{1}{2}}(\varepsilon_{\mathrm{avg},0},\dots,\varepsilon_{\mathrm{avg},K-1};r) \right. \\ \left. + \gamma^{K-1} R_{\max,\mathrm{avg}}\right], \tag{14} \\ \end{array} +$$ + +with $\mathcal{E}_{\mathrm{avg}} = \sum_{k=0}^{K-1}\alpha_{k}^{2r}\varepsilon_{\mathrm{avg},k}$, $\gamma = \max_{t\in\{1,\dots,T\}}\gamma_{t}$, $C_{\mathrm{PI}}^{\frac{1}{2}}(K;r) = \max_{t\in\{1,\dots,T\}} C_{\mathrm{PI},\rho,\nu}^{\frac{1}{2}}(K;t,r)$, $R_{\max,\mathrm{avg}} = \frac{1}{T}\sum_{t=1}^{T}R_{\max,t}$, and + +$$ +\alpha_{k} = \left\{\begin{array}{ll} \frac{(1-\gamma)\gamma^{K-k-1}}{1-\gamma^{K+1}} & 0 \leq k < K, \\ \frac{(1-\gamma)\gamma^{K}}{1-\gamma^{K+1}} & k = K. \end{array}\right. +$$ + +Proof of Theorem 6. The proof is very similar to the one for AVI.
We compute the average expected loss across tasks: + +$$ +\begin{array}{l} \frac{1}{T}\sum_{t=1}^{T}\|Q_{t}^{*} - Q_{t}^{\pi_{K}}\|_{1,\rho} \\ \leq \frac{1}{T}\sum_{t=1}^{T}\frac{2\gamma_{t}}{(1-\gamma_{t})^{2}}\left[\inf_{r\in[0,1]} C_{\mathrm{PI},\rho,\nu}^{\frac{1}{2}}(K;t,r)\,\mathcal{E}^{\frac{1}{2}}(\varepsilon_{t,0},\dots,\varepsilon_{t,K-1};t,r) + \gamma_{t}^{K-1} R_{\max,t}\right] \\ \leq \frac{2\gamma}{(1-\gamma)^{2}}\frac{1}{T}\sum_{t=1}^{T}\left[\inf_{r\in[0,1]} C_{\mathrm{PI},\rho,\nu}^{\frac{1}{2}}(K;t,r)\,\mathcal{E}^{\frac{1}{2}}(\varepsilon_{t,0},\dots,\varepsilon_{t,K-1};t,r) + \gamma_{t}^{K-1} R_{\max,t}\right] \\ \leq \frac{2\gamma}{(1-\gamma)^{2}}\left[\frac{1}{T}\sum_{t=1}^{T}\left(\inf_{r\in[0,1]} C_{\mathrm{PI},\rho,\nu}^{\frac{1}{2}}(K;t,r)\,\mathcal{E}^{\frac{1}{2}}(\varepsilon_{t,0},\dots,\varepsilon_{t,K-1};t,r)\right) + \gamma^{K-1} R_{\max,\mathrm{avg}}\right] \\ \leq \frac{2\gamma}{(1-\gamma)^{2}}\left[\inf_{r\in[0,1]}\frac{1}{T}\sum_{t=1}^{T}\left(C_{\mathrm{PI},\rho,\nu}^{\frac{1}{2}}(K;t,r)\,\mathcal{E}^{\frac{1}{2}}(\varepsilon_{t,0},\dots,\varepsilon_{t,K-1};t,r)\right) + \gamma^{K-1} R_{\max,\mathrm{avg}}\right] \\ \leq \frac{2\gamma}{(1-\gamma)^{2}}\left[\inf_{r\in[0,1]} C_{\mathrm{PI}}^{\frac{1}{2}}(K;r)\frac{1}{T}\sum_{t=1}^{T}\left(\mathcal{E}^{\frac{1}{2}}(\varepsilon_{t,0},\dots,\varepsilon_{t,K-1};t,r)\right) + \gamma^{K-1} R_{\max,\mathrm{avg}}\right].
\tag {15} \\ \end{array} +$$ + +Using Jensen's inequality as in the AVI scenario, we can write (15) as: + +$$ +\begin{array}{l} \frac{1}{T}\sum_{t=1}^{T}\|Q_{t}^{*} - Q_{t}^{\pi_{K}}\|_{1,\rho} \leq \frac{2\gamma}{(1-\gamma)^{2}}\left[\inf_{r\in[0,1]} C_{\mathrm{PI}}^{\frac{1}{2}}(K;r)\,\mathcal{E}_{\mathrm{avg}}^{\frac{1}{2}}(\varepsilon_{\mathrm{avg},0},\dots,\varepsilon_{\mathrm{avg},K-1};r) \right. \\ \left. + \gamma^{K-1} R_{\max,\mathrm{avg}}\right], \tag{16} \\ \end{array} +$$ + +with $\varepsilon_{\mathrm{avg},k} = 1/T\sum_{t=1}^{T}\varepsilon_{t,k}$ and $\mathcal{E}_{\mathrm{avg}}(\varepsilon_{\mathrm{avg},0},\dots,\varepsilon_{\mathrm{avg},K-1};r) = \sum_{k=0}^{K-1}\alpha_{k}^{2r}\varepsilon_{\mathrm{avg},k}$. + +# A.3 APPROXIMATION BOUNDS + +Proof of Theorem 3. Let $w_{1}^{*},\dots,w_{T}^{*},h^{*}$ and $f_{1}^{*},\dots,f_{T}^{*}$ be the minimizers of $\varepsilon_{\mathrm{avg}}^{*}$; then: + +$$ +\begin{array}{l} \varepsilon_{\mathrm{avg}}(\hat{\mathbf{w}},\hat{h},\hat{\mathbf{f}}) - \varepsilon_{\mathrm{avg}}^{*} = \underbrace{\left(\varepsilon_{\mathrm{avg}}(\hat{\mathbf{w}},\hat{h},\hat{\mathbf{f}}) - \frac{1}{nT}\sum_{ti}\ell(\hat{f}_{t}(\hat{h}(\hat{w}_{t}(X_{ti}))),Y_{ti})\right)}_{A} \\ + \underbrace{\left(\frac{1}{nT}\sum_{ti}\ell(\hat{f}_{t}(\hat{h}(\hat{w}_{t}(X_{ti}))),Y_{ti}) - \frac{1}{nT}\sum_{ti}\ell(f_{t}^{*}(h^{*}(w_{t}^{*}(X_{ti}))),Y_{ti})\right)}_{B} \\ + \underbrace{\left(\frac{1}{nT}\sum_{ti}\ell(f_{t}^{*}(h^{*}(w_{t}^{*}(X_{ti}))),Y_{ti}) - \varepsilon_{\mathrm{avg}}^{*}\right)}_{C}.
\tag {17} \\ \end{array} +$$ + +We proceed to bound the three components individually: + +- $C$ can be bounded using Hoeffding's inequality, with probability $1 - \frac{\delta}{2}$ by $\sqrt{\ln(2 / \delta) / (2nT)}$ , as it contains only $nT$ random variables bounded in the interval $[0, 1]$ ; + +- $B$ can be bounded by 0, by definition of $\hat{\mathbf{w}}, \hat{h}$ and $\hat{\mathbf{f}}$ , as they are the minimizers of Equation (3); +- the bounding of $A$ is less straightforward and is described in the following. + +We define the following auxiliary function spaces: + +- $\mathcal{W}' = \{x \in \mathcal{X} \rightarrow (w_t(x_{ti})): (w_1, \ldots, w_T) \in \mathcal{W}^T\}$ , +$\bullet \mathcal{F}' = \left\{y \in \mathbb{R}^{KTn} \to (f_t(y_{ti})): (f_1, \ldots, f_T) \in \mathcal{F}^T\right\}$ , + +and the following auxiliary sets: + +- $S = \left\{(\ell(f_t(h(w_t(X_{ti})), Y_{ti})): f \in \mathcal{F}^T, h \in \mathcal{H}, w \in \mathcal{W}^T\right\} \subseteq \mathbb{R}^{Tn}$ , +- $S' = \mathcal{F}'(\mathcal{H}(\mathcal{W}'(\bar{\mathbf{X}}))) = \big\{(f_t(h(w_t(X_{ti}))) : f \in \mathcal{F}^T, h \in \mathcal{H}, w \in \mathcal{W}^T\big\} \subseteq \mathbb{R}^{Tn}$ , +- $S'' = \mathcal{H}(\mathcal{W}'(\bar{\mathbf{X}})) = \big\{(h(w_t(X_{ti}))) : h \in \mathcal{H}, w \in \mathcal{W}^T\big\} \subseteq \mathbb{R}^{KTn}$ , + +which will be useful in our proof. + +Using Theorem 9 by Maurer et al. 
(2016), we can write: + +$$ +\begin{array}{l} \varepsilon_ {\mathrm {a v g}} (\hat {\mathbf {w}}, \hat {h}, \hat {\mathbf {f}}) - \frac {1}{n T} \sum_ {t i} \ell (\hat {f} _ {t} (\hat {h} (\hat {w} _ {t} (X _ {t i}))), Y _ {t i}) \\ \leq \sup _ {\mathbf {w} \in \mathcal {W} ^ {T}, h \in \mathcal {H}, \mathbf {f} \in \mathcal {F} ^ {T}} \left(\varepsilon_ {\mathrm {a v g}} (\mathbf {w}, h, \mathbf {f}) - \frac {1}{n T} \sum_ {t i} \ell \left(f _ {t} \left(h \left(w _ {t} \left(X _ {t i}\right)\right)\right), Y _ {t i}\right)\right) \\ \leq \frac {\sqrt {2 \pi} G (S)}{n T} + \sqrt {\frac {9 \ln \left(\frac {2}{\delta}\right)}{2 n T}}, \tag {18} \\ \end{array} +$$ + +then by Lipschitz property of the loss function $\ell$ and the contraction lemma Corollary 11 Maurer et al. (2016): $G(S) \leq G(S')$ . By Theorem 12 by Maurer et al. (2016), for universal constants $c_1'$ and $c_2'$ : + +$$ +G \left(S ^ {\prime}\right) \leq c _ {1} ^ {\prime} L \left(\mathcal {F} ^ {\prime}\right) G \left(S ^ {\prime \prime}\right) + c _ {2} ^ {\prime} D \left(S ^ {\prime \prime}\right) O \left(\mathcal {F} ^ {\prime}\right) + \min _ {y \in Y} G (\mathcal {F} (y)), \tag {19} +$$ + +where $L(\mathcal{F}')$ is the largest value for the Lipschitz constants in the function space $\mathcal{F}'$ , and $D(S'')$ is the Euclidean diameter of the set $S''$ . + +Using Theorem 12 by Maurer et al. (2016) again, for universal constants $c_1''$ and $c_2''$ : + +$$ +G \left(S ^ {\prime \prime}\right) \leq c _ {1} ^ {\prime \prime} L (\mathcal {H}) G \left(\mathcal {W} ^ {\prime} (\bar {\mathbf {X}})\right) + c _ {2} ^ {\prime \prime} D \left(\mathcal {W} ^ {\prime} (\bar {\mathbf {X}})\right) O (\mathcal {H}) + \min _ {p \in P} G (\mathcal {H} (p)). 
\tag {20} +$$ + +Putting (19) and (20) together: + +$$ +\begin{array}{l} G (S ^ {\prime}) \leq c _ {1} ^ {\prime} L (\mathcal {F} ^ {\prime}) \left(c _ {1} ^ {\prime \prime} L (\mathcal {H}) G (\mathcal {W} ^ {\prime} (\bar {\mathbf {X}})) + c _ {2} ^ {\prime \prime} D (\mathcal {W} ^ {\prime} (\bar {\mathbf {X}})) O (\mathcal {H}) + \min _ {p \in P} G (\mathcal {H} (p))\right) \\ + c _ {2} ^ {\prime} D (S ^ {\prime \prime}) O (\mathcal {F} ^ {\prime}) + \min _ {y \in Y} G (\mathcal {F} (y)) \\ = c _ {1} ^ {\prime} c _ {1} ^ {\prime \prime} L (\mathcal {F} ^ {\prime}) L (\mathcal {H}) G (\mathcal {W} ^ {\prime} (\bar {\mathbf {X}})) + c _ {1} ^ {\prime} c _ {2} ^ {\prime \prime} L (\mathcal {F} ^ {\prime}) D (\mathcal {W} ^ {\prime} (\bar {\mathbf {X}})) O (\mathcal {H}) + c _ {1} ^ {\prime} L (\mathcal {F} ^ {\prime}) \min _ {p \in P} G (\mathcal {H} (p)) \\ + c _ {2} ^ {\prime} D \left(S ^ {\prime \prime}\right) O \left(\mathcal {F} ^ {\prime}\right) + \min _ {y \in Y} G (\mathcal {F} (y)). \tag {21} \\ \end{array} +$$ + +At this point, we have to bound the individual terms in the right hand side of (21), following the same procedure proposed by Maurer et al. (2016). + +Firstly, to bound $L(\mathcal{F}^{\prime})$ , let $y, y^{\prime} \in \mathbb{R}^{KTn}$ , where $y = (y_{ti})$ with $y_{ti} \in \mathbb{R}^{K}$ and $y^{\prime} = (y_{ti}^{\prime})$ with $y_{ti}^{\prime} \in \mathbb{R}^{K}$ . 
We can write the following: + +$$ +\begin{array}{l} \left\| f (y) - f \left(y ^ {\prime}\right) \right\| ^ {2} = \sum_ {t i} \left(f _ {t} \left(y _ {t i}\right) - f _ {t} \left(y _ {t i} ^ {\prime}\right)\right) ^ {2} \\ \leq L (\mathcal {F}) ^ {2} \sum_ {t i} \| y _ {t i} - y _ {t i} ^ {\prime} \| ^ {2} \\ = L \left(\mathcal {F}\right) ^ {2} \| y - y ^ {\prime} \| ^ {2}, \tag {22} \\ \end{array} +$$ + +whence $L(\mathcal{F}^{\prime})\leq L(\mathcal{F})$ + +Then, we bound: + +$$ +\begin{array}{l} G \left(\mathcal {W} ^ {\prime} (\bar {\mathbf {X}})\right) = \mathbb {E} \left[ \sup _ {\mathbf {w} \in \mathcal {W} ^ {T}} \sum_ {k t i} \gamma_ {k t i} w _ {t k} \left(X _ {t i}\right) \Bigg | X _ {t i} \right] \leq \sum_ {t} \sup _ {l \in \{1, \dots , T \}} \mathbb {E} \left[ \sup _ {w \in \mathcal {W}} \sum_ {k i} \gamma_ {k l i} w _ {k} \left(X _ {l i}\right) \Bigg | X _ {l i} \right] \\ = T \sup _ {l \in \{1, \dots , T \}} G \left(\mathcal {W} \left(\mathbf {X} _ {l}\right)\right). \tag {23} \\ \end{array} +$$ + +Then, since it is possible to bound the Euclidean diameter using the norm of the supremum value in the set, we bound $D(S'') \leq 2\sup_{h,\mathbf{w}}\| h(\mathbf{w}(\overline{\mathbf{X}}))\|$ and $D(\mathcal{W}'(\overline{\mathbf{X}})) \leq 2\sup_{\mathbf{w} \in \mathcal{W}^T}\|\mathbf{w}(\overline{\mathbf{X}})\|$ . 
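The diameter-to-norm bound used here follows from the triangle inequality: for any $x, y$ in a set $S$, $\|x - y\| \leq \|x\| + \|y\| \leq 2\sup_{z\in S}\|z\|$. A small plain-Python check on random points in $\mathbb{R}^3$ (the points are arbitrary, just for illustration):

```python
import itertools
import math
import random

random.seed(1)
points = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(50)]

def norm(v):
    return math.sqrt(sum(c * c for c in v))

# Euclidean diameter of the set: largest pairwise distance
diameter = max(norm([a - b for a, b in zip(x, y)])
               for x, y in itertools.combinations(points, 2))

# Triangle inequality gives D(S) <= 2 * sup of the norms
bound = 2 * max(norm(p) for p in points)
assert diameter <= bound + 1e-12
```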
+ +Also, we bound $O(\mathcal{F}^{\prime})$ : + +$$ +\begin{array}{l} \mathbb {E} \left[ \sup _ {g \in \mathcal {F} ^ {\prime}} \langle \boldsymbol {\gamma}, g (y) - g (y ^ {\prime}) \rangle \right] = \mathbb {E} \left[ \sup _ {\boldsymbol {f} \in \mathcal {F} ^ {T}} \sum_ {t i} \gamma_ {t i} \left(f _ {t} \left(y _ {t i}\right) - f _ {t} \left(y _ {t i} ^ {\prime}\right)\right) \right] \\ = \sum_ {t} \mathbb {E} \left[ \sup _ {f \in \mathcal {F}} \sum_ {i} \gamma_ {i} \left(f \left(y _ {t i}\right) - f \left(y _ {t i} ^ {\prime}\right)\right) \right] \\ \leq \sqrt {T} \left(\sum_ {t} \mathbb {E} \left[ \sup _ {f \in \mathcal {F}} \sum_ {i} \gamma_ {i} \left(f \left(y _ {t i}\right) - f \left(y _ {t i} ^ {\prime}\right)\right) \right] ^ {2}\right) ^ {\frac {1}{2}} \\ \leq \sqrt {T} \left(\sum_ {t} O (\mathcal {F}) ^ {2} \sum_ {i} \| y _ {t i} - y _ {t i} ^ {\prime} \| ^ {2}\right) ^ {\frac {1}{2}} \\ = \sqrt {T} O (\mathcal {F}) \| y - y ^ {\prime} \|, \tag {24} \\ \end{array} +$$ + +whence $O(\mathcal{F}') \leq \sqrt{T} O(\mathcal{F})$ . + +To minimize the last term, it is possible to choose $y_0 = 0$ , as $f(0) = 0, \forall f \in \mathcal{F}$ , resulting in $\min_{y \in Y} G(\mathcal{F}(y)) = G(\mathcal{F}(0)) = 0$ . + +Then, substituting in (21), and recalling that $G(S) \leq G(S')$ : + +$$ +\begin{array}{l} G(S)\leq c_{1}^{\prime}c_{1}^{\prime \prime}L(\mathcal{F})L(\mathcal{H})T\sup_{l\in \{1,\ldots ,T\}}G(\mathcal{W}(\mathbf{X}_{l})) + 2c_{1}^{\prime}c_{2}^{\prime \prime}L(\mathcal{F})\sup_{\mathbf{w}\in \mathcal{W}^{T}}\| \mathbf{w}(\bar{\mathbf{X}})\| O(\mathcal{H}) \\ + c _ {1} ^ {\prime} L (\mathcal {F}) \min _ {p \in P} G (\mathcal {H} (p)) + 2 c _ {2} ^ {\prime} \sup _ {h, \mathbf {w}} \| h (\mathbf {w} (\bar {\mathbf {X}})) \| \sqrt {T} O (\mathcal {F}). 
\tag {25} \\ \end{array} +$$ + +Now, the first term $A$ of (17) can be bounded substituting (25) in (18): + +$$ +\begin{array}{l} \varepsilon_ {\mathrm {a v g}} (\hat {\mathbf {w}}, \hat {h}, \hat {\mathbf {f}}) - \frac {1}{n T} \sum_ {t i} \ell (\hat {f} _ {t} (\hat {h} (\hat {w} _ {t} (X _ {t i}))), Y _ {t i}) \\ \leq \frac {\sqrt {2 \pi}}{n T} \left(c _ {1} ^ {\prime} c _ {1} ^ {\prime \prime} L (\mathcal {F}) L (\mathcal {H}) T \sup _ {l \in \{1, \dots , T \}} G (\mathcal {W} (\mathbf {X} _ {l})) + 2 c _ {1} ^ {\prime} c _ {2} ^ {\prime \prime} L (\mathcal {F}) \sup _ {\mathbf {w} \in \mathcal {W} ^ {T}} \| \mathbf {w} (\bar {\mathbf {X}}) \| O (\mathcal {H}) \right. \\ \left. + c _ {1} ^ {\prime} L (\mathcal {F}) \operatorname * {m i n} _ {p \in P} G (\mathcal {H} (p)) + 2 c _ {2} ^ {\prime} \sup _ {h, \mathbf {w}} \| h (\mathbf {w} (\bar {\mathbf {X}})) \| \sqrt {T} O (\mathcal {F})\right) + \sqrt {\frac {9 \ln \left(\frac {2}{\delta}\right)}{2 n T}} \\ = c _ {1} \frac {L (\mathcal {F}) L (\mathcal {H}) \sup _ {l \in \{1 , \dots , T \}} G (\mathcal {W} (\mathbf {X} _ {l}))}{n} + c _ {2} \frac {\sup _ {\mathbf {w}} \| \mathbf {w} (\bar {\mathbf {X}}) \| L (\mathcal {F}) O (\mathcal {H})}{n T} \\ + c _ {3} \frac {L (\mathcal {F}) \operatorname* {m i n} _ {p \in P} G (\mathcal {H} (p))}{n T} + c _ {4} \frac {\operatorname* {s u p} _ {h , \mathbf {w}} \| h (\mathbf {w} (\bar {\mathbf {X}})) \| O (\mathcal {F})}{n \sqrt {T}} + \sqrt {\frac {9 \ln \left(\frac {2}{\delta}\right)}{2 n T}}. 
\\ \end{array} +$$ + +A union bound between $A$, $B$, and $C$ of (17) completes the proof: + +$$ +\begin{array}{l} \varepsilon_{\mathrm{avg}}(\hat{\mathbf{w}},\hat{h},\hat{\mathbf{f}}) - \varepsilon_{\mathrm{avg}}^{*} \leq c_{1}\frac{L(\mathcal{F})L(\mathcal{H})\sup_{l\in\{1,\dots,T\}}G(\mathcal{W}(\mathbf{X}_{l}))}{n} \\ + c_{2}\frac{\sup_{\mathbf{w}}\|\mathbf{w}(\bar{\mathbf{X}})\|\, L(\mathcal{F})\,O(\mathcal{H})}{nT} \\ + c_{3}\frac{L(\mathcal{F})\min_{p\in P}G(\mathcal{H}(p))}{nT} \\ + c_{4}\frac{\sup_{h,\mathbf{w}}\|h(\mathbf{w}(\bar{\mathbf{X}}))\|\, O(\mathcal{F})}{n\sqrt{T}} \\ + \sqrt{\frac{8\ln\left(\frac{3}{\delta}\right)}{nT}}. \\ \end{array} +$$ + +![](images/6081d003cff6bb419b9991589228b01162ff4c7988678852570d60df3e6c0d28.jpg) + +# B ADDITIONAL DETAILS OF EMPIRICAL EVALUATION + +# B.1 MULTI FITTED $Q$-ITERATION + +We consider the Car-On-Hill problem with discount factor 0.95 and horizon 100. Running the Adam optimizer with learning rate 0.001 and a mean squared loss, we train a neural network composed of 2 shared layers of 30 neurons each, with sigmoidal activation function, as described in Riedmiller (2005). We select 8 tasks for the problem by changing the mass of the car $m$ and the value of the discrete actions $a$ (Table 1). Figure 1(b) is computed considering the first four tasks, while Figure 1(c) considers task 1 in the result with 1 task, tasks 1 and 2 for the result with 2 tasks, tasks 1, 2, 3, and 4 for the result with 4 tasks, and all the tasks for the result with 8 tasks. To run FQI and MFQI, for each task we collect transitions by running an extra-trees model trained following the procedure and settings in Ernst et al. (2005), using an $\epsilon$-greedy policy with $\epsilon = 0.1$, to obtain a small but representative dataset.
The optimal $Q$ -function for each task is computed by tree-search3 for 100 states uniformly picked from the state space, and the 2 discrete actions, for a total of 200 state-action tuples. + +# B.2 MULTI DEEP $Q$ -NETWORK + +The five problems we consider for this experiment are: Cart-Pole, Acrobot, Mountain-Car, Car-On-Hill, and Inverted-Pendulum4. The discount factors are respectively 0.99, 0.99, 0.99, 0.95, and 0.95. The horizons are respectively 500, 1,000, 1,000, 100, and 3,000. The network we use consists of 80 ReLu units for each $w_{t}, t \in \{1,\dots,T\}$ block, with $T = 5$ . Then, the shared block $h$ consists of one + +
TaskMassAction set
11.0{-4.0; 4.0}
20.8{-4.0; 4.0}
31.0{-4.5; 4.5}
41.2{-4.5; 4.5}
51.0{-4.125; 4.125}
61.0{-4.25; 4.25}
70.8{-4.375; 4.375}
80.85{-4.0; 4.0}
+ +Table 1: Different values of the mass of the car and available actions chosen for the Car-On-Hill tasks in the MFQI empirical evaluation. + +layer with 80 ReLu units and another one with 80 sigmoid units. Eventually, each $f_{t}$ has a number of linear units equal to the number of discrete actions $a_{i}^{(t)}, i \in \{1, \dots, \# \mathcal{A}^{(t)}\}$ of task $\mu_{t}$ which outputs the action-value $Q_{t}(s, a_{i}^{(t)}) = y_{t}(s, a_{i}^{(t)}) = f_{t}(h(w_{t}(s)), a_{i}^{(t)}), \forall s \in S^{(t)}$ . The use of sigmoid units in the second layer of $h$ is due to our choice to extract meaningful shared features bounded between 0 and 1 to be used as input of the last linear layer, as in most RL approaches. In practice, we have also found that sigmoid units help to reduce task interference in multi-task networks, where instead the linear response of ReLu units cause a problematic increase in the feature values. Furthermore, the use of a bounded feature space reduces the $\sup_{h,\mathbf{w}} \| h(\mathbf{w}(\bar{\mathbf{X}})) \|$ term in the upper bound of Theorem 3, corresponding to the upper bound of the diameter of the feature space, as shown in Appendix A. The initial replay memory size for each task is 100 and the maximum size is 5,000. We use Huber loss with Adam optimizer using learning rate $10^{-3}$ and batch size of 100 samples for each task. The target network is updated every 100 steps. The exploration is $\varepsilon$ -greedy with $\varepsilon$ linearly decaying from 1 to 0.01 in the first 5,000 steps. + +# B.3 MULTI DEEP DETERMINISTIC POLICY GRADIENT + +The two set of problems we consider for this experiment are: one including Inverted-Pendulum, Inverted-Double-Pendulum, and Inverted-Pendulum-Swingup, and another one including Hopper-Stand, Walker-Walk, and Half-Cheetah-Run5. The discount factors are 0.99 and the horizons are 1,000 for all problems. 
The actor network is composed of 600 ReLU units for each $w_{t}, t \in \{1,\dots,T\}$ block, with $T = 3$. The shared block $h$ has 500 units with ReLU activation function, as for MDQN. Finally, each $f_{t}$ has a number of tanh units equal to the number of dimensions of the continuous actions $a^{(t)} \in \mathcal{A}^{(t)}$ of task $\mu_t$, and outputs the policy $\pi_t(s) = y_t(s) = f_t(h(w_t(s))), \forall s \in S^{(t)}$. On the other hand, the critic network consists of the same $w_{t}$ units as the actor, except for the use of sigmoidal units in the $h$ layer, as in MDQN. In addition, the actions $a^{(t)}$ are given as input to $h$. Finally, each $f_{t}$ has a single linear unit $Q_{t}(s,a^{(t)}) = y_{t}(s,a^{(t)}) = f_{t}(h(w_{t}(s),a^{(t)}))$, $\forall s \in S^{(t)}$. The initial replay memory size for each task is 64 and the maximum size is 50,000. We use the Huber loss to update the critic network and the policy gradient to update the actor network. In both cases, the optimization is performed with the Adam optimizer and a batch size of 64 samples for each task. The learning rate of the actor is $10^{-4}$ and the learning rate of the critic is $10^{-3}$. Moreover, we apply an $\ell_2$-penalty to the critic network using a regularization coefficient of 0.01. The target networks are updated with soft updates using $\tau = 10^{-3}$. The exploration is performed by adding to the action computed by the actor network a noise generated with an Ornstein-Uhlenbeck process with $\theta = 0.15$ and $\sigma = 0.2$. Note that most of these values are taken from the original DDPG paper (Lillicrap et al., 2015), which optimizes them for the single-task scenario.
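To give a concrete sense of how the shared block amortizes parameters across tasks, the sketch below counts the weights of a multi-task network shaped like the MDQN one described in B.2 (per-task input blocks $w_t$ of 80 units, a shared block $h$ of two 80-unit layers, and per-task linear heads $f_t$). The state and action dimensions are illustrative placeholders, not the actual sizes of the five benchmark environments:

```python
# Dense layer parameter count: weight matrix plus bias vector
def dense(n_in, n_out):
    return n_in * n_out + n_out

# Illustrative per-task state/action sizes (placeholders, not the real envs)
state_dims = [4, 6, 2, 2, 3]
n_actions = [2, 3, 3, 2, 3]

W = 80  # units per block, as in the MDQN architecture of Section B.2
per_task_in = [dense(d, W) for d in state_dims]   # w_t input blocks
shared = dense(W, W) + dense(W, W)                # h: one ReLU + one sigmoid layer
per_task_out = [dense(W, a) for a in n_actions]   # f_t linear heads

total = sum(per_task_in) + shared + sum(per_task_out)
print(f"shared parameters: {shared}, total: {total}")
```

With these placeholder sizes the shared block holds the bulk of the parameters, which is exactly what the task-averaged bounds exploit: the cost of learning $h$ is paid once and divided across the $T$ tasks.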
\ No newline at end of file diff --git a/sharingknowledgeinmultitaskdeepreinforcementlearning/images.zip b/sharingknowledgeinmultitaskdeepreinforcementlearning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..ee2f63e9b2ccead252eadc49909696294820b998 --- /dev/null +++ b/sharingknowledgeinmultitaskdeepreinforcementlearning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fdc976fd0ffeb32003cbcef8c69b6e871d68b9754b53c0da0a2f37ded62b7da1 +size 867855 diff --git a/sharingknowledgeinmultitaskdeepreinforcementlearning/layout.json b/sharingknowledgeinmultitaskdeepreinforcementlearning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..435cc90d38d8cf0c5da86e3e6b08c06e6cafef9b --- /dev/null +++ b/sharingknowledgeinmultitaskdeepreinforcementlearning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:56fe49f02e0414df4b581cca865dac730495000e557937a23a5177a7b048c21d +size 700613 diff --git a/shiftedandsqueezed8bitfloatingpointformatforlowprecisiontrainingofdeepneuralnetworks/6405cd38-d9ed-40ff-ae6c-ba53128e905c_content_list.json b/shiftedandsqueezed8bitfloatingpointformatforlowprecisiontrainingofdeepneuralnetworks/6405cd38-d9ed-40ff-ae6c-ba53128e905c_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..72fd67a7a5b9d8aae874753258fab68de82d0bc6 --- /dev/null +++ b/shiftedandsqueezed8bitfloatingpointformatforlowprecisiontrainingofdeepneuralnetworks/6405cd38-d9ed-40ff-ae6c-ba53128e905c_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fe76b707c96b39d5eec04ff743ae83877e911d7a9f88dbc377efc9fb8023577c +size 64545 diff --git a/shiftedandsqueezed8bitfloatingpointformatforlowprecisiontrainingofdeepneuralnetworks/6405cd38-d9ed-40ff-ae6c-ba53128e905c_model.json b/shiftedandsqueezed8bitfloatingpointformatforlowprecisiontrainingofdeepneuralnetworks/6405cd38-d9ed-40ff-ae6c-ba53128e905c_model.json 
new file mode 100644 index 0000000000000000000000000000000000000000..38631e4ad682d705591ceab545c86fb1ba51e837 --- /dev/null +++ b/shiftedandsqueezed8bitfloatingpointformatforlowprecisiontrainingofdeepneuralnetworks/6405cd38-d9ed-40ff-ae6c-ba53128e905c_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1a91a3a5752f83a6da255dd6b89bde2f20b558e5958e975307286f5eb8cdee36 +size 81492 diff --git a/shiftedandsqueezed8bitfloatingpointformatforlowprecisiontrainingofdeepneuralnetworks/6405cd38-d9ed-40ff-ae6c-ba53128e905c_origin.pdf b/shiftedandsqueezed8bitfloatingpointformatforlowprecisiontrainingofdeepneuralnetworks/6405cd38-d9ed-40ff-ae6c-ba53128e905c_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..dde79f9bb5e9e4c02749bea1fd73d6a57ad26155 --- /dev/null +++ b/shiftedandsqueezed8bitfloatingpointformatforlowprecisiontrainingofdeepneuralnetworks/6405cd38-d9ed-40ff-ae6c-ba53128e905c_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:94b26ad8bcd92dab78b481250640c886aba3d6df2f1daa79dd14aad7271b5de6 +size 496852 diff --git a/shiftedandsqueezed8bitfloatingpointformatforlowprecisiontrainingofdeepneuralnetworks/full.md b/shiftedandsqueezed8bitfloatingpointformatforlowprecisiontrainingofdeepneuralnetworks/full.md new file mode 100644 index 0000000000000000000000000000000000000000..bfadda1859c1589132ed7f354d2b00f9d35a2695 --- /dev/null +++ b/shiftedandsqueezed8bitfloatingpointformatforlowprecisiontrainingofdeepneuralnetworks/full.md @@ -0,0 +1,301 @@ +# SHIFTED AND SQUEEZED 8-BIT FLOATING POINT FORMAT FOR LOW-PRECISION TRAINING OF DEEP NEURAL NETWORKS + +Léopold Cambier $^{1*†}$ , Anahita Bhiwandiwalla $^{2†}$ , Ting Gong $^{2}$ , Mehran Nekuii $^{2}$ , Oguz H Elibol $^{2}$ and Hanlin Tang $^{2}$ + +$^{1}$ ICME, Stanford University + +Intel AI Lab + +lcambier@stanford.edu + +{anhita.bhiwandiwalla,ting.gong}@intel.com + +{mehran.nekuii,oguz.h.elibol,hanlin.tang}@intel.com + +# ABSTRACT + 
Training with a larger number of parameters while keeping fast iterations is an increasingly adopted strategy for developing better-performing Deep Neural Network (DNN) models. This increases the memory footprint and computational requirements of training. Here we introduce a novel methodology for training deep neural networks using 8-bit floating point (FP8) numbers. Reduced bit precision allows for a larger effective memory and increased computational speed. We name this method Shifted and Squeezed FP8 (S2FP8). We show that, unlike previous 8-bit precision training methods, the proposed method works out-of-the-box for representative models: ResNet-50, Transformer and NCF. The method can maintain model accuracy without requiring fine-tuning of loss scaling parameters or keeping certain layers in single precision. We introduce two learnable statistics of the DNN tensors - shift and squeeze factors - that are used to optimally adjust the range of the tensors in 8 bits, thus minimizing the loss of information due to quantization.

# 1 INTRODUCTION

Deep neural networks have achieved state-of-the-art performance on a wide variety of computer vision, audio, and natural language processing (NLP) tasks. This has resulted in an explosion of interest around techniques to reduce the memory footprint and energy consumption of neural network training and inference (Guo, 2018). Although there are a number of methods to address some of these issues for inference, the most effective method for training is using reduced-precision numerical formats.

While 32-bit floating point (FP32) is the most common data format for neural network training, recent hardware has leveraged techniques that allow for training with 16-bit data formats (Köster et al., 2017; Micikevicius et al., 2018). However, 8-bit precision training remains an open challenge (Johnson, 2018; Kalamkar et al., 2019).
Current FP8 training methodologies (Wang et al., 2018; Mellempudi et al., 2019) require either specialized chunk-based accumulation, stochastic rounding techniques, loss scaling, or maintaining some layers of the network in higher precision. Tuning these knobs is non-intuitive and requires significant experimentation for each individual network.

Accelerating the adoption of 8-bit data in training DNNs requires a hardware-friendly and out-of-the-box implementation of FP8. Due to the reduced number of mantissa bits, 8-bit multipliers are smaller and consume less power compared to higher-bit representations. In this work we describe a novel 8-bit floating point (FP8) format - shifted and squeezed FP8 (S2FP8) - which has the following advantages compared to previously proposed 8-bit training methodologies:

- S2FP8 eliminates the need for loss scaling, which requires significant tuning of the loss scale values and schedule for individual topologies
- Leveraged by the forward and backward passes of model training, S2FP8 is effective in adjusting the range of gradients as well as of activations and weights
- S2FP8 does not require keeping the first and last layers in FP32 precision, which is needed for other approaches (Mellempudi et al., 2019), though it still maintains the master weights and the accumulations inside the matrix multipliers in FP32

We demonstrate across image classification, translation, and recommendation models that S2FP8 outperforms previous 8-bit approaches, and reaches the accuracy of FP32 models without any additional hyperparameter tuning.

# 2 RELATED WORK

The success of the 32-bit floating point data type in training deep neural networks has increased interest in the feasibility of even lower precision training. The exponential demand for compute involved in training these deep neural networks has led to multiple advancements in lower precision data types.
Several studies have developed techniques such as loss scaling, stochastic rounding, and others to train effectively in 16 bits (Micikevicius et al., 2018; Das et al., 2018; Azim), along with associated hardware support (Markidis et al., 2018). Using 16-bit fixed point, Gupta et al. (2015) showed that stochastic rounding techniques were crucial for model convergence even for simple convolutional neural networks. As noted in (Kalamkar et al., 2019), Google's bfloat16 format has the same number of exponent bits as FP32, leading to the success of that format without requiring hardware-intensive techniques such as stochastic rounding or framework-level techniques such as loss scaling.

Although 8-bit formats have significant performance and memory advantages, convergence is especially challenging due to the loss of accuracy in the backpropagated gradient values. Wang et al. (2018) demonstrated training models with matrix multiplications and convolutions in FP8, but they use FP16 with chunk-based accumulations and stochastic rounding hardware. Mellempudi et al. (2019) also demonstrated success with FP8, accumulating in FP32 and using loss scaling techniques on ResNets, Transformer and GNMT networks. However, they too require the first and last layers of the model to be in FP32, and, similar to (Banner et al., 2018), leverage stochastic rounding techniques to maintain model accuracy. Unlike S2FP8 proposed in this work, both of these FP8 training techniques emphasize the need for efficient loss scaling, rounding hardware, and restricting some layers to higher precision.

Zhou et al. (2016) quantized the weights, activations and gradients of AlexNet (Krizhevsky et al., 2012) to 1, 2 and 6 bits respectively. However, they also need to maintain the first and last convolution layers in full precision and stochastically quantize the gradients. Wu et al.
(2018) demonstrate using integers for training LeNet-5 (LeCun et al., 1998) and AlexNet, with 8 bits for activations, errors and gradients and 2 bits for weights. However, these approaches also required custom tuning, such as novel initialization techniques and layer-wise scaling instead of Batch Normalization and Softmax. These approaches lack generalizability to other models, requiring significant fine-tuning.

To the best of our knowledge, there does not exist an out-of-the-box solution for using FP8 in training deep learning topologies without the need for tuned loss scaling techniques, keeping certain layers in full precision, or efficient hardware rounding schemes like stochastic rounding.

# 3 SHIFTED AND SQUEEZED 8-BIT FLOATING POINT FORMAT

# 3.1 CHALLENGES OF 8-BIT FLOATING POINT FORMAT

The FP8 format, with 2 bits of mantissa and 5 bits of exponent (Mellempudi et al., 2019), is both narrow (i.e., its dynamic range is very limited, from $2^{-16}$ to $2^{16}$ ) and has lower accuracy (the machine epsilon is only $2^{-3}$ ). Figure A1 illustrates the range and accuracy of FP8. In contrast, FP32 ranges from $2^{-149}$ to $2^{128}$ with a machine epsilon of $2^{-24}$ (Table A1).

![](images/6bb5ac8106dcbb068ec2363b50836b09de7a45015081002d419641cdfb3a3c85.jpg)
Figure 1: The distribution of tensor elements over the course of training for three tensors from the Transformer tiny model on the English-Vietnamese translation dataset. The blue bar indicates the representable range of FP8. Left: Many of the tensor elements fall outside of FP8's representable range. Center: Few tensor elements fall outside of FP8's representable range. Right: Initially, most elements are within FP8's representable range, but after training, many fall outside of the representable range.

On the other hand, tensors involved in neural networks (weights, activations and gradients) are spread across varying scales.
As illustrated in Figure 1, the tensor distributions change over the course of training, spanning different orders of magnitude.

As a result, 8-bit training usually requires a combination of multiple techniques to capture the full dynamic range of values for model training. Some of these techniques include:

- Loss scaling (Micikevicius et al., 2018) scales the loss $\mathcal{L}(w)$ by a constant $\lambda$ before backpropagation. This makes the gradients artificially larger, allowing them to fit within the FP8 range. Gradients are then scaled down before being accumulated into the trainable weights, as shown in Equation 6.
- Stochastic rounding (Maxfield, 2006) alleviates quantization errors by capturing some of the information discarded when truncating to lower precision at the output of a GEMM operation.

Between these two techniques, loss scaling is the more critical; once the magnitude of the gradients can no longer be represented in the FP8 range, training convergence will not be possible. However, loss scaling only modifies the gradients. Weights and activations can also (albeit admittedly less frequently) exceed FP8's representable range of $[2^{-16}, 2^{16}]$ . In those scenarios, convergence can also be affected.

The issue with loss scaling is that it requires user interaction. Models have to be modified, and, more importantly, tedious empirical tuning is required to find the correct loss scaling schedule. While some networks can be trained with constant loss scaling, others, notably Transformers (Mellempudi et al., 2019), require dynamic "back-off" and improved loss scaling. This requires significant trial and error to tune the scaling schedule, slowing down wide adoption of low-precision numerical formats.

# 3.2 SHIFTED AND SQUEEZED FP8

To alleviate these issues and make neural network training possible with no model modifications or hyperparameter tuning, we propose a new 8-bit floating point format.
Consider a tensor $X$ of size $N$ , i.e., $X = \{X_{i}\}_{i=1}^{N}$ . Instead of directly encoding each $X_{i}$ in FP8, we store $X$ using $N$ FP8 numbers $\{Y_{i}\}_{i=1}^{N}$ accompanied by two (squeeze and shift) factors $\alpha$ and $\beta$ (the "statistics" — see Figure 2).

![](images/d0a04cd14d64084fd272d74372141b0e00da7f1910b264a1723159fc344834f1.jpg)
N FP8 numbers
2 FP32 numbers, the "statistics"
Figure 2: The S2FP8 format. A tensor $X$ of $N$ numbers is represented by $\alpha$ , $\beta$ and $N$ FP8 numbers $Y$ , related to $X$ through Equation 1.

![](images/4d26522a01eb40cf6704a2ac0d5d4b968f83a0d7ebcef1fae63462f820bb1c1a.jpg)

![](images/2304c4777e5e70315fb281ca9f0a4bb35ac9b4e74236198878b6cf8ae73c9e66.jpg)

![](images/357bf0e9913a682e0249e5219bdbe516134f3842312fc53f34f312a48783c83c.jpg)
(a) $Y$ , the usual FP8 distribution.
(b) $X$ , for $\alpha = 1$ and $\beta < 0$
(c) $X$ , for $\alpha < 1$ and $\beta = 0$
Figure 3: Impact of the shifted and squeezed transformation $\log_2|Y| = \alpha \log_2|X| + \beta$ . $\alpha$ lets the distribution be as wide as necessary (though with an associated loss of precision), and $\beta$ lets us shift the distribution around any value.

For $X_{i} \neq 0$ , $X$ and $Y$ are then related through

$$
\log_2(|Y_i|) = \alpha \log_2(|X_i|) + \beta \Leftrightarrow Y_i = \pm 2^{\beta} |X_i|^{\alpha} \tag{1}
$$

where the $\pm$ is chosen so that $X_{i}$ and $Y_{i}$ have the same sign. This representation allows $\alpha$ and $\beta$ to be chosen so that, together with the tensor $Y$ , they capture most of the dynamic range of the tensor $X$ . As we will see in Section 4, this is all that is necessary to train networks using 8-bit floating point numbers.

In order for $Y$ to be a tensor suitable to be represented by FP8 numbers, we enforce that its log-magnitudes have zero mean and a maximum within the dynamic range of FP8 (e.g.
15):

$$
\frac{1}{N'} \sum_{i=1}^{N'} \log_2(|Y_i|) = 0 \quad \text{and} \quad \max_{i=1,\dots,N'} \log_2(|Y_i|) = 15 \ (= \log_2(2^{15})) \tag{2}
$$

where the $'$ notation indicates that the mean and the max, respectively, ignore any $i$ such that $Y_{i} = 0$ . Those equations ensure that the $\log_2(|Y|)$ values are distributed with zero mean and that each is at most 15, which is ideal for the FP8 format.

By inserting Equation 2 into Equation 1, and by denoting

$$
\mu = \frac{1}{N'} \sum_{i=1}^{N'} \log_2(|X_i|) \quad \text{and} \quad m = \max_{i} \log_2(|X_i|) \tag{3}
$$

we find

$$
\alpha = \frac{15}{m - \mu}, \quad \beta = -\alpha \mu \tag{4}
$$

This new tensor format results in the training procedure (forward pass, backward pass, weight update) described in Figure 4. Forward and backward MatMuls use this new S2FP8 format. Master weights are kept in FP32 and updated using S2FP8 gradients. Accumulations inside the GEMM kernel are kept in full FP32 precision. Figure 3 illustrates the impact of $\alpha$ and $\beta$ . With those two extra degrees of freedom for each tensor, the majority of the dynamic range of each tensor can now be captured, whether it is very small ( $\beta > 0$ ), very large ( $\beta < 0$ ), very narrow ( $\alpha > 1$ ) or very wide ( $\alpha < 1$ ).

# 3.3 LEARNING THE TENSOR DISTRIBUTION

One way to interpret $\alpha$ and $\beta$ is to consider them as parameters of a distribution generating the tensor values $\log_2(|X_i|)$ . We can then say that, by continuously computing $\alpha$ and $\beta$ , we are effectively learning the distribution of $\log_2(|X_i|)$ . Figure 5c shows the evolution of $\mu$ , $m$ , $\alpha$ and $\beta$ for a particular tensor of ResNet-20. We see that $\alpha$ and $\beta$ converge to, approximately, 5 and 21, respectively.
From Equation 1, we conclude that:

![](images/35e1eee4c36e0947481b07fa23843fbd60f6a1f9be26928565b8176f9bafd240.jpg)
Figure 4: Low-precision training with S2FP8. $T$ represents the truncation described in Equation 5, from FP32 to S2FP8. When using S2FP8 for training, forward and backward GEMMs only use S2FP8. The master weights are kept in FP32 and updated during the update step.

- since $\alpha > 1$ , $X$ is expanded into $Y$ , i.e., $X$ is narrower than what FP8 allows
- since $\beta > 0$ , $X$ is right-shifted into $Y$ , i.e., $X$ is smaller than what FP8 allows

At convergence, those $\alpha$ and $\beta$ values represent the distribution of each converged tensor. Notice that all statistics stabilize in the last third of the training, where the learning rate is decreased, indicating that the network is converging to its final state.

# 4 EXPERIMENTAL RESULTS

In this section, we compare S2FP8 training with baseline FP32 and FP8 training, with and without loss scaling, for: Residual Networks (He et al., 2016) of varying depths on the CIFAR-10 and ImageNet (Deng et al., 2009) datasets, Transformer (Vaswani et al., 2017) on the IWSLT'15 English-Vietnamese dataset (Luong & Manning, 2015), and Neural Collaborative Filtering (NCF) (He et al., 2017) on the MovieLens 1 Million dataset (Harper & Konstan, 2016).

For our experiments, we use the open source Tensorflow Models1 repository for ResNet and NCF, and Tensor2Tensor (Vaswani et al., 2018) for Transformer, with added S2FP8 data type simulation support using the methodology described in subsection 4.1. For a given model, we keep the hyperparameters consistent across the FP32, FP8 and S2FP8 evaluations.

# 4.1 SIMULATION METHODOLOGY

We simulated S2FP8 by inserting appropriate truncation functions throughout the network, before and after every convolution and matrix-matrix product operation, during both the forward and backward passes.
The rest of the network is kept in FP32, and those truncations simulate the low-precision training described in subsection 3.2.

The truncation function takes as input a tensor $X$ , computes its magnitude mean and maximum, computes the appropriate $\alpha$ and $\beta$ , and finally truncates $X$ by computing

$$
X_{\mathrm{truncated}} = \left[ 2^{-\beta} \, \mathrm{truncate}_{\mathrm{FP8}}\left( 2^{\beta} |X|^{\alpha} \right) \right]^{1/\alpha} \tag{5}
$$

where $\mathrm{truncate}_{\mathrm{FP8}}$ is a usual FP8 truncation function with RNE (round-to-nearest, with ties broken by rounding to even) rounding, which is easier to implement and most widely supported in hardware.

![](images/911e939279af036cb7b2a6dd164497e077cb3311eca32d1cf7deb3c5512263c8.jpg)
(a) Distribution of the magnitude $\log_2(|X|)$ of the original tensor $X$ before scaling using $\alpha$ and $\beta$

![](images/8efc969f84faccd7086c2ac0c07d2a001f9c168a7b29c5a60afdbc4cc095a5da.jpg)
(b) Distribution of the magnitude $\log_2(|Y|)$ of the shifted and squeezed tensor $Y$ with $|Y_i| = 2^\beta |X_i|^{\alpha}$

![](images/576e58b83da68011edbaa941469928acfe43f41b44115a54a43dc08487861a44.jpg)
(c) The computed statistics during training: the squeeze $(\alpha)$ and shift $(\beta)$ factors, as well as the mean of the log values $(\mu)$ and the maximum log value $(m)$ .

![](images/c8719e11599851b478b28d16643b452a4976c3e2fd20aa951739d22423dd0f92.jpg)

![](images/c5de4e6b785dc325104d62016419f86e8a6b05f365f11ba00e21321d5a790ca2.jpg)
Figure 5: Evolution of the average and maximum magnitude, as well as $\alpha$ and $\beta$ , for CIFAR-10 with ResNet-20. This illustrates how the network implicitly learns the tensors' distributions, by repeatedly computing $\alpha$ and $\beta$ through $\mu$ and $m$ .
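The procedure in Equations 3-5 can be sketched numerically. The following is a minimal NumPy simulation (function names are illustrative, not from the paper's code); the FP8 rounder is simplified, flushing values below the smallest denormal $2^{-16}$ to zero and clamping above the largest normal, and `s2fp8_truncate` assumes the non-zero magnitudes of $X$ are not all identical:

```python
import numpy as np

def truncate_fp8(a):
    # Round non-negative values to the nearest FP8 (1 sign / 5 exp / 2 mantissa)
    # value with round-to-nearest-even. Simplified: flush below 2**-16 to zero,
    # clamp above the largest normal (2 - 2**-2) * 2**15.
    m, e = np.frexp(a)                       # a = m * 2**e with m in [0.5, 1)
    e = np.clip(e, -13, None)                # keeps denormal spacing at 2**-16
    step = 2.0 ** (e - 3)                    # 2 mantissa bits: 4 steps per binade
    q = np.round(a / step) * step            # np.round rounds halves to even (RNE)
    return np.where(a < 2.0 ** -16, 0.0,
                    np.minimum(q, (2 - 2.0 ** -2) * 2.0 ** 15))

def s2fp8_truncate(X):
    # Eq. 3-4: statistics over the non-zero magnitudes.
    X = np.asarray(X, dtype=np.float64)
    logs = np.log2(np.abs(X[X != 0]))
    mu, m = logs.mean(), logs.max()
    alpha = 15.0 / (m - mu)                  # squeeze factor
    beta = -alpha * mu                       # shift factor
    # Eq. 5: squeeze/shift into Y, truncate Y to FP8, then map back.
    Y = truncate_fp8(2.0 ** beta * np.abs(X) ** alpha)
    return np.sign(X) * (2.0 ** -beta * Y) ** (1.0 / alpha), alpha, beta
```

After the round trip, the error in $\log_2$ space is the FP8 rounding error divided by $\alpha$, which is how S2FP8 trades absolute precision for dynamic range.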
![](images/af147118fbf24534c0989a265cd14448b18d3317cb9370dba9384bc764b76690.jpg)

# 4.2 RESIDUAL NETWORKS

We first present results with Residual Networks of varying depths on the CIFAR-10 image recognition dataset. We trained the models on 1 GPU using standard parameters: 250 epochs, a batch size of 128, SGD with momentum of 0.9, and an initial learning rate of 0.1 decreased by a factor of 10 after epochs 100, 150 and 200.

Table 1 and Figure A2 present the results. We observe that S2FP8 reaches almost exactly the FP32 baseline, sometimes even improving over it. Out-of-the-box FP8 does not converge and has very poor accuracy. Finally, FP8 with a constant loss scaling of 100 (FP8+LS(100)) can reach the baseline. S2FP8 and FP8+LS(100) perform similarly, but S2FP8 does so without any extra hyperparameters or tuning from the user's perspective.
| CIFAR-10 | FP32 | S2FP8 | Δ | FP8 | FP8+LS(100) |
| --- | --- | --- | --- | --- | --- |
| ResNet-20 | 91.5 | 91.1 | 0.4 | 17.9 | 91.1 |
| ResNet-34 | 92.5 | 92.0 | 0.5 | 13.5 | 92.0 |
| ResNet-50 | 93.0 | 93.2 | -0.2 | 11.5 | 92.9 |
Table 1: Validation accuracy (in %) for image recognition on CIFAR-10 with ResNet-20/34/50.

We also evaluate S2FP8 on the 1000-class ImageNet dataset. Here, we trained the network on 4 GPUs using standard parameters: 90 epochs, a batch size of 256, SGD with momentum of 0.9, and an initial learning rate of 0.1 decreased by a factor of 10 after epochs 30, 60, 80 and 90. Table 2 and Figure 6 present the results.

Again, we observe that S2FP8 gets very close to the FP32 baseline. Out-of-the-box FP8 quickly diverges and does not converge at all. For FP8 with loss scaling to converge, one must not truncate the first and last layers, consistent with (Mellempudi et al., 2019), which we denote as Ex in Table 2 below. A loss scaling of 10,000 can then be used to reach the baseline $(\mathrm{FP8+LS(10k)+Ex})$ . Finally, stochastic rounding can be added, and it slightly improves the precision $(\mathrm{FP8+LS(100k)+Ex+SR})$ . However, both of those cases are not out-of-the-box, as they require loss scaling tuning and some layers to be kept in full precision. S2FP8 does not suffer from that, thanks to its improved quantization: all layers can be truncated and no loss scaling is required.
| Imagenet1k | FP32 | S2FP8 | Δ | FP8 | FP8+LS(10k)+Ex | FP8+LS(100k)+Ex+SR |
| --- | --- | --- | --- | --- | --- | --- |
| ResNet-18 | 70.3 | 69.6 | -0.7 | NaN | 68.7 | 68.9 |
| ResNet-50 | 76.2 | 75.2 | -1.0 | NaN | 75.3 | 75.5 |
![](images/94c1e980dbd8b85046f4d958e35cf2881c43b072b4daa2e1063d6da4627a884e.jpg)
Top-1 accuracy $(\%)$

![](images/95d0ee8a302ac7c47d151626ce459d77e6055fed53f206d7c39d79cde543e78d.jpg)
Loss
Figure 6: Comparing Top-1 accuracy and loss of S2FP8 with FP32 for ResNet-50 on Imagenet1k

![](images/5ef6230e5712a773d7b657bcca5c1d0d28ce23d8cc80051f4e0bb67bafa048e0.jpg)
$L_{2}$ Loss

# 4.3 TRANSFORMER

We also tested S2FP8 on a small Transformer (Transformer Tiny) on the English-Vietnamese dataset. The model has 2 hidden layers of size 128 and a filter of size 512, and is trained using the Adam optimizer (Kingma & Ba, 2014).

Table 3 and Figure 7 show the results, where we compare FP32, S2FP8 and FP8 with exponential loss scaling. We tried many loss scaling schedules (constant and exponential, with various initializations) and report the best result. As one can see, S2FP8 reaches the baseline with no hyperparameter tuning. FP8, on the other hand, does not, even after extensive loss scaling tuning. This shows the value of an out-of-the-box method for the user.

Table 2: Validation accuracy (in %) for image recognition on Imagenet1k with ResNet-18/50
| En-Vi | FP32 | S2FP8 | Δ | FP8 | FP8+LS(exp) |
| --- | --- | --- | --- | --- | --- |
| Transformer tiny | 25.3 | 25.3 | 0.0 | NaN | 21.3 |
Table 3: BLEU score (Papineni et al., 2002) (from 0 to 100) for the translation task on the English-Vietnamese dataset with Transformer tiny.

# 4.4 NEURAL COLLABORATIVE FILTERING

The Neural Collaborative Filtering (NCF) network comprises embeddings for the users and items of the MovieLens dataset, which are then passed to a Multi-Layer Perceptron (MLP) network to learn the user-item interaction. Matrix-multiplication operations are the building blocks of such models. We compare S2FP8 with FP32 and FP8 without loss scaling. We simulate the matrix multiplications and look-ups from the embeddings in S2FP8 and compare them to FP8 with RNE. We trained the model on the MovieLens 1 Million dataset with the following standard parameters: 20 iterations, a batch size of 1024 on 4 GPUs, 8 predictive factors, and a learning rate of 0.0005 using the Adam optimizer. Figure 8 and Table 4 show the results, where we compare FP32, S2FP8 and FP8 without loss scaling.

This again shows that S2FP8 easily reaches the baseline out-of-the-box, without tuning of any sort. FP8 gets relatively close, but cannot reach the baseline.

![](images/e5e238200c00dba0962526df858eb0f7677b8cbad154c53960cb5f3bec725054.jpg)
BLEU Score

![](images/4d703d16b2f1a3ccc4b0d6ccc76483de12c7618e30b72adef3a6b88cb802488c.jpg)
Loss
Figure 7: Comparing BLEU score and loss of S2FP8 and FP32 for Transformer tiny on the En-Vi dataset

![](images/1093f2f3485207491f144e394d458d246df4c9ff64f766154350c2dc533938a2.jpg)
Hit Ratio
Figure 8: Comparing Hit Ratio, NDCG and loss of S2FP8 and FP32 for NCF on MovieLens-1M

![](images/e677fced925423ebe5459ef236dc80553fc0ba4a43f2a36a001df7e2d92ed746.jpg)
NDCG

![](images/eabf7652f49bea22c9b9c595a96f775a8f62e22539744e55ea847d9a3d304ca1.jpg)
Loss

# 5 HARDWARE ASPECTS

S2FP8 is a new data type and requires its own circuitry to be implemented in a tensor processing engine. However, the added overhead is very minimal and affects neither data throughput nor compute speed.
In order to convert FP32 tensors into S2FP8, two hardware (HW) components are needed. One calculates each tensor's statistics (Equation 3), which brings minimal HW complexity. To make compute operations even easier, these statistics could be stored in lower precision, such as FP8/INT8. The other component adjusts the exponent and mantissa of all the tensor elements by applying the squeeze $(\alpha)$ and shift $(\beta)$ factors from Equation 4 before truncating them into their 8-bit placeholders. The shift could be done using simple element-wise add/subtract operations on the exponents, and the element-wise squeeze could be applied to the mantissa portions. Another consideration is within the tensor processing engine (e.g., the GEMM engine), which requires the $\alpha$ and $\beta$ factors while doing the calculations. The FP32 result is converted back to S2FP8 when needed (e.g., to store back in memory), as shown in Figure 4.

# 6 CONCLUSION

We introduce a novel 8-bit floating point data type (S2FP8) that gives competitive performance in comparison to state-of-the-art FP32 baselines over a range of representative networks. S2FP8 makes use of shift and squeeze factors to shift and rescale the range of tensors prior to truncation. S2FP8 allows training of neural networks with an 8-bit format while eliminating the need for loss scaling tuning and hardware-complex rounding techniques. In addition, compared to existing FP8 implementations, we also eliminate the restriction of maintaining the first and last layers in FP32. Decreasing
| Movielens 1 million | FP32 | S2FP8 | Δ | FP8 |
| --- | --- | --- | --- | --- |
| NCF | 0.666 | 0.663 | 0.003 | 0.633 |
Table 4: Hit Ratio (HR) score for NCF on the MovieLens 1 Million dataset.

the number of bits enables larger models to fit on a single device and results in faster training. As part of future work, we plan to extend the use of S2FP8 to train additional DNN topologies, and also to simplify the squeeze and shift statistics from a hardware implementation point of view. We also plan to explore the use of reduced precision to store the statistics, and the extensibility of this approach to efficiently represent a broader suite of low-precision formats like 8-bit POSIT (Gustafson & Yonemoto, 2017) and 4-bit floating point and integer data types.

# ACKNOWLEDGMENTS

We would like to thank Naveen Mellempudi, Pratap Prasad, Prasanna Singamsetty and Cory Stephenson for insightful discussions.

# REFERENCES

Anwarul Azim. Low precision arithmetic operations in deep neural networks: An overview.
Ron Banner, Itay Hubara, Elad Hoffer, and Daniel Soudry. Scalable methods for 8-bit training of neural networks. In Advances in Neural Information Processing Systems, pp. 5145-5153, 2018.
Dipankar Das, Naveen Mellempudi, Dheevatsa Mudigere, Dhiraj Kalamkar, Sasikanth Avancha, Kunal Banerjee, Srinivas Sridharan, Karthik Vaidyanathan, Bharat Kaul, Evangelos Georganas, et al. Mixed precision training of convolutional neural networks using integer operations. arXiv preprint arXiv:1802.00930, 2018.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. IEEE, 2009.
Yunhui Guo. A survey on methods and theories of quantized neural networks. CoRR, abs/1808.04752, 2018. URL http://arxiv.org/abs/1808.04752.
Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In International Conference on Machine Learning, pp. 1737-1746, 2015.
John L Gustafson and Isaac T Yonemoto.
Beating floating point at its own game: Posit arithmetic. Supercomputing Frontiers and Innovations, 4(2):71-86, 2017. +F Maxwell Harper and Joseph A Konstan. The movielens datasets: History and context. Acm transactions on interactive intelligent systems (tiis), 5(4):19, 2016. +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European conference on computer vision, pp. 630-645. Springer, 2016. +Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. Neural collaborative filtering. In Proceedings of the 26th international conference on world wide web, pp. 173-182. International World Wide Web Conferences Steering Committee, 2017. +Jeff Johnson. Rethinking floating point for deep learning. CoRR, abs/1811.01721, 2018. URL http://arxiv.org/abs/1811.01721. +Dhiraj Kalamkar, Dheevatsa Mudigere, Naveen Mellempudi, Dipankar Das, Kunal Banerjee, Sasikanth Avancha, Dharma Teja Vooturi, Nataraj Jammalamadaka, Jianyu Huang, Hector Yuen, et al. A study of bfloat16 for deep learning training. arXiv preprint arXiv:1905.12322, 2019. +Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. +Urs Köster, Tristan Webb, Xin Wang, Marcel Nassar, Arjun K Bansal, William Constable, Oguz Elibol, Scott Gray, Stewart Hall, Luke Hornof, et al. Flexpoint: An adaptive numerical format for efficient training of deep neural networks. In Advances in neural information processing systems, pp. 1742-1752, 2017. + +Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097-1105, 2012. +Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. +Minh-Thang Luong and Christopher D. Manning. 
Stanford neural machine translation systems for spoken language domain. In International Workshop on Spoken Language Translation, Da Nang, Vietnam, 2015. +Stefano Markidis, Steven Wei Der Chien, Erwin Laure, Ivy Bo Peng, and Jeffrey S Vetter. Nvidia tensor core programmability, performance & precision. In 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), pp. 522-531. IEEE, 2018. +Clive Maxfield. An introduction to different rounding algorithms. Programmable Logic Design Line, pp. 1-15, 2006. +Naveen Mellempudi, Sudarshan Srinivasan, Dipankar Das, and Bharat Kaul. Mixed precision training with 8-bit floating point. arXiv preprint arXiv:1905.12334, 2019. +Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao Wu. Mixed precision training. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=r1gs9JgRZ. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, pp. 311-318, Stroudsburg, PA, USA, 2002. Association for Computational Linguistics. doi: 10.3115/1073083.1073135. URL https://doi.org/10.3115/1073083.1073135. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998-6008, 2017. +Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan N. Gomez, Stephan Gouws, Llion Jones, Lukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, and Jakob Uszkoreit. Tensor2tensor for neural machine translation. CoRR, abs/1803.07416, 2018. URL http://arxiv.org/abs/1803.07416. 
+Naigang Wang, Jungwook Choi, Daniel Brand, Chia-Yu Chen, and Kailash Gopalakrishnan. Training deep neural networks with 8-bit floating point numbers. In Advances in neural information processing systems, pp. 7675-7684, 2018. +Shuang Wu, Guoqi Li, Feng Chen, and Luping Shi. Training and inference with integers in deep neural networks. arXiv preprint arXiv:1802.04680, 2018. +Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016. + +# A APPENDIX + +A.1 SUPPLEMENTARY TABLES AND FIGURES + +
| Format | Bits | s/e/m | Min subnormal | Min normal | (Approx.) Max normal | Machine epsilon | Range |
| --- | --- | --- | --- | --- | --- | --- | --- |
| IEEE-FP32 | 32 | 1/8/23 | $2^{-149}$ | $2^{-126}$ | $2^{128}$ | $2^{-24}$ | 277 |
| IEEE-FP16 | 16 | 1/5/10 | $2^{-24}$ | $2^{-14}$ | $2^{16}$ | $2^{-11}$ | 40 |
| BF16 | 16 | 1/8/7 | $2^{-133}$ | $2^{-126}$ | $2^{128}$ | $2^{-8}$ | 261 |
| FP8 | 8 | 1/5/2 | $2^{-16}$ | $2^{-14}$ | $2^{16}$ | $2^{-3}$ | 32 |
Table A1: Comparing several floating point formats. s/e/m indicates the number of sign (s), exponent (e), and mantissa (m) bits.
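Every column of Table A1 follows from the s/e/m split. As a quick sanity check, a sketch (ours, not from the paper) that assumes IEEE-style semantics: exponent bias $2^{e-1}-1$, round-to-nearest machine epsilon $2^{-(m+1)}$, and "Range" read as the log2 span from the smallest subnormal to the largest (approximate) normal:

```python
def fp_props(e_bits, m_bits):
    """Derive Table A1's columns from exponent/mantissa widths (IEEE-style)."""
    bias = 2 ** (e_bits - 1) - 1
    min_normal = 2.0 ** (1 - bias)              # smallest normal value
    min_subnormal = 2.0 ** (1 - bias - m_bits)  # smallest subnormal value
    max_normal_log2 = bias + 1                  # max normal is ~2**(bias+1)
    machine_eps = 2.0 ** -(m_bits + 1)          # round-to-nearest epsilon
    range_log2 = max_normal_log2 - (1 - bias - m_bits)
    return min_subnormal, min_normal, machine_eps, range_log2

# IEEE-FP32 (1/8/23): min subnormal 2**-149, eps 2**-24, range 277
# FP8 (1/5/2): min subnormal 2**-16, eps 2**-3 = 1/8, range 32
```

Plugging in each row of Table A1 reproduces its last four columns, e.g. BF16 (1/8/7) gives min subnormal $2^{-133}$ and range 261.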
| Models | Datasets | FP32 | BF16 | FP8 | FP8 + other recipes | S2FP8 |
| --- | --- | --- | --- | --- | --- | --- |
| ResNet-20 | CIFAR-10 | 91.5 | 91.7 | 17.9 | 91.1 (Loss Scale=100) | 91.1 |
| ResNet-50 | CIFAR-10 | 93.0 | 93.2 | 11.5 | 92.9 (Loss Scale=100) | 93.2 |
| ResNet-50 | ImageNet | 76.2 | 76.5 | NaN | 75.3 (Loss Scale=10K, FP32 for first and last layers) | 75.2 |
| NCF | MovieLens-1M | 0.666 | 0.653 | 0.633 | - | 0.663 |
| Transformer-tiny | En-Vi | 25.3 | 25.6 | NaN | 21.3 (Loss Scale=Exp) | 25.3 |
Table A2: Comparing FP32, BF16, vanilla FP8, FP8 with tuning, and S2FP8 on ResNet (Top-1 accuracy), NCF (Hit Ratio), and Transformer-tiny (BLEU score).

![](images/9a19b5713328f7e84b7db84ef29aba54a9451e761fa2f494dd468a04bb9355a1.jpg)
Figure A1: The range and precision of FP8. Bars indicate the number density between each power of 2. Since FP8 has 2 mantissa bits, the density is 4 (except in the denormals), and the associated machine epsilon is $2^{-3} = 1/8$. The normal representable range goes from $2^{-14}$ to $(1 - 2^{-3})2^{16}$, with denormals from $2^{-16}$ to $2^{-14}$.

# A.2 SUPPLEMENTARY EQUATIONS

$$
\frac {\partial (\lambda \mathcal {L})}{\partial w} (w) = \lambda \frac {\partial \mathcal {L}}{\partial w} (w) \Rightarrow w ^ {(k + 1)} = w ^ {(k)} - \alpha \frac {1}{\lambda} \frac {\partial (\lambda \mathcal {L})}{\partial w} (w ^ {(k)}). \tag {6}
$$

![](images/f00ac1f8f04b535f701f5e2250bb4e29de0997fddc52f5d549367dcc37f15633.jpg)
Top-1 accuracy $(\%)$

![](images/1a4a0319bf4965c48ea4acbba4d1447e4f6d59473bafaa4d35997a68dc21a872.jpg)
Loss

![](images/e761a690e37ac227703ee7a196f8deb2f747ecbb3746088fcc84627a86b43e06.jpg)
$L_{2}$ Loss
Figure A2: Convergence of ResNet-50 with the CIFAR-10 dataset \ No newline at end of file diff --git a/shiftedandsqueezed8bitfloatingpointformatforlowprecisiontrainingofdeepneuralnetworks/images.zip b/shiftedandsqueezed8bitfloatingpointformatforlowprecisiontrainingofdeepneuralnetworks/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..98f596b2780d4d3382330110042e3f667f6af383 --- /dev/null +++ b/shiftedandsqueezed8bitfloatingpointformatforlowprecisiontrainingofdeepneuralnetworks/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:99e0aca7414fce74e9bc85325a9174a964865237fad32a02fe76db9a602e0ca0 +size 458000 diff --git a/shiftedandsqueezed8bitfloatingpointformatforlowprecisiontrainingofdeepneuralnetworks/layout.json
b/shiftedandsqueezed8bitfloatingpointformatforlowprecisiontrainingofdeepneuralnetworks/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..85fb6a2c3598b75a73923018fd3839c2201dd264 --- /dev/null +++ b/shiftedandsqueezed8bitfloatingpointformatforlowprecisiontrainingofdeepneuralnetworks/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd15bd366930fdc92680d3c59aa8a5d35bd2f486f801d73dc978dd6589ce5301 +size 414922 diff --git a/shortandsparsedeconvolutionageometricapproach/f0b49af3-1517-4ed5-85b8-af1f07c9a9e0_content_list.json b/shortandsparsedeconvolutionageometricapproach/f0b49af3-1517-4ed5-85b8-af1f07c9a9e0_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..ca1fc688643935b02e337d42fc868d2931fcddbd --- /dev/null +++ b/shortandsparsedeconvolutionageometricapproach/f0b49af3-1517-4ed5-85b8-af1f07c9a9e0_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f26f74857cd0267293204c69f08f61548d52a7900a5ab5803502cf3154f77dab +size 123120 diff --git a/shortandsparsedeconvolutionageometricapproach/f0b49af3-1517-4ed5-85b8-af1f07c9a9e0_model.json b/shortandsparsedeconvolutionageometricapproach/f0b49af3-1517-4ed5-85b8-af1f07c9a9e0_model.json new file mode 100644 index 0000000000000000000000000000000000000000..c99df1b4842602992ddd58a04677811bdc74984e --- /dev/null +++ b/shortandsparsedeconvolutionageometricapproach/f0b49af3-1517-4ed5-85b8-af1f07c9a9e0_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9982eda817affedb6b051660a439e91f0da0dd191e811ce5fcd02085639cb94d +size 150209 diff --git a/shortandsparsedeconvolutionageometricapproach/f0b49af3-1517-4ed5-85b8-af1f07c9a9e0_origin.pdf b/shortandsparsedeconvolutionageometricapproach/f0b49af3-1517-4ed5-85b8-af1f07c9a9e0_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..934e1324a16abf379a13a973714f5aa73daea564 --- /dev/null +++ 
b/shortandsparsedeconvolutionageometricapproach/f0b49af3-1517-4ed5-85b8-af1f07c9a9e0_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:643ebc8453795dc83cb40036f5fcdb3ce3dc455d349bf9693f7f03b9cf4d6a21 +size 3268903 diff --git a/shortandsparsedeconvolutionageometricapproach/full.md b/shortandsparsedeconvolutionageometricapproach/full.md new file mode 100644 index 0000000000000000000000000000000000000000..682cad28f9df378c8e4fa639733115157a26fa3e --- /dev/null +++ b/shortandsparsedeconvolutionageometricapproach/full.md @@ -0,0 +1,605 @@ +# SHORT AND SPARSE DECONVOLUTION — A GEOMETRIC APPROACH + +Yenson Lau* + +Electrical Engineering + +Columbia University + +y.lau@columbia.edu + +Qing Qu* + +Center for Data Science + +New York University + +qq213@nyu.edu + +Han-wen Kuo + +Electrical Engineering + +Columbia University + +hk2673@columbia.edu + +Pengcheng Zhou + +Department of Statistics + +Columbia University + +zhoupc2018@gmail.com + +Yuqian Zhang + +Electrical & Computer Engineering + +Rutgers University + +yqz.zhang@rutgers.edu + +John Wright + +Electrical Engineering + +Columbia University + +jw2966@columbia.edu + +# ABSTRACT + +Short-and-sparse deconvolution (SaSD) is the problem of extracting localized, recurring motifs in signals with spatial or temporal structure. Variants of this problem arise in applications such as image deblurring, microscopy, neural spike sorting, and more. The problem is challenging in both theory and practice, as natural optimization formulations are nonconvex. Moreover, practical deconvolution problems involve smooth motifs (kernels) whose spectra decay rapidly, resulting in poor conditioning and numerical challenges. This paper is motivated by recent theoretical advances (Zhang et al., 2017; Kuo et al., 2019), which characterize the optimization landscape of a particular nonconvex formulation of SaSD and give a provable algorithm which exactly solves certain non-practical instances of the SaSD problem. 
We leverage the key ideas from this theory (sphere constraints, data-driven initialization) to develop a practical algorithm, which performs well on data arising from a range of application areas. We highlight key additional challenges posed by the ill-conditioning of real SaSD problems, and suggest heuristics (acceleration, continuation, reweighting) to mitigate them. Experiments demonstrate the performance and generality of the proposed method. + +# 1 INTRODUCTION + +Many signals arising in science and engineering can be modeled as superpositions of basic, recurring motifs, which encode critical information about a physical process of interest. Signals of this type can be modeled as the convolution of a zero-padded short kernel $\pmb{a}_0 \in \mathbb{R}^{p_0}$ (the motif) with a longer sparse signal $\pmb{x}_0 \in \mathbb{R}^m$ ( $m \gg p_0$ ) which encodes the locations of the motifs in the sample1: + +$$ +\boldsymbol {y} = \iota \boldsymbol {a} _ {0} * \boldsymbol {x} _ {0}. \tag {1} +$$ + +We term this a short-and-sparse (SaS) model. Since often only $\pmb{y}$ is observed, short-and-sparse deconvolution (SaSD) is the problem of recovering both $\pmb{a}_0$ and $\pmb{x}_0$ from $\pmb{y}$ . Variants of SaSD arise in areas such as microscopy (Cheung et al., 2018), astronomy (Briers et al., 2013), and neuroscience (Song et al., 2018). SaSD is a challenging inverse problem in both theory and practice. Natural formulations are nonconvex, and very little algorithmic theory was available. Moreover, practical instances are often ill-conditioned, due to the spectral decay of the kernel $\pmb{a}_0$ (Cheung et al., 2018). + +This paper is motivated by recent theoretical advances in nonconvex optimization and, in particular, on the geometry of SaSD. Zhang et al. (2017) and Kuo et al. (2019) study particular optimization + +formulations for SaSD and show that the landscape is largely driven by the problem symmetries of SaSD. 
They derive provable methods for idealized problem instances, which exactly recover $(\pmb{a}_0,\pmb{x}_0)$ up to trivial ambiguities. While inspiring, these methods are not practical and perform poorly on real problem instances. Where the emphasis of Zhang et al. (2017) and Kuo et al. (2019) is on theoretical guarantees, here we focus on practical computation. We show how to combine ideas from this theory with heuristics that better address the properties of practical deconvolution problems, to build a novel method that performs well on data arising in a range of application areas. A critical issue in moving from theory to practice is the poor conditioning of naturally-occurring deconvolution problems: we show how to address this with a combination of ideas from sparse optimization, such as momentum, continuation, and reweighting. The end result is a general purpose method, which we demonstrate on data from neural spike sorting, calcium imaging and fluorescence microscopy. + +Notation. The zero-padding operator is denoted by $\iota: \mathbb{R}^p \to \mathbb{R}^m$ . Projection of a vector $\boldsymbol{v} \in \mathbb{R}^p$ onto the sphere is denoted by $\mathcal{P}_{\mathbb{S}^{p-1}}(\boldsymbol{v}) \doteq \boldsymbol{v} / \| \boldsymbol{v} \|_2$ , and $\mathcal{P}_z(\boldsymbol{v}) \doteq \boldsymbol{v} - \langle \boldsymbol{v}, \boldsymbol{z} \rangle \boldsymbol{z}$ denotes projection onto the tangent space of $z \in \mathbb{S}^{p-1}$ . The Riemannian gradient of a function $f: \mathbb{S}^{p-1} \to \mathbb{R}$ at point $z$ on the sphere is given by $\operatorname{grad} f(z) \doteq \mathcal{P}_z(\nabla f(z))$ . + +Reproducible research. The code for implementations of our algorithms can be found online: + +https://github.com/qingqu06/sparse_deconvolution. + +For more details of our work on SaSD, we refer interested readers to our project website + +https://deconvlab.github.io/. 
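To make the model (1) concrete, here is a minimal numerical sketch (ours, not part of the released code; assumes numpy) that forms the observation by zero-padding the short kernel and applying the cyclic convolution via the FFT:

```python
import numpy as np

def sas_observation(a0, x0):
    """y = (iota a0) * x0 from Eq. (1): zero-pad the short kernel a0 to the
    length of the sparse map x0, then circularly convolve the two."""
    m = len(x0)
    a = np.zeros(m)
    a[: len(a0)] = a0  # iota: the zero-padding operator
    # cyclic convolution computed in the Fourier domain
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(x0)).real
```

For example, if `x0` is a single spike at index 3, `y` is just a copy of `a0` placed at index 3, matching the intuition that $\pmb{x}_0$ encodes motif locations.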
+ +# 2 SYMMETRY AND GEOMETRY IN SASD + +In this section, we begin by describing two intrinsic properties for SaSD. Later, we show how these play an important role in the geometry of optimization and the design of efficient methods. + +An important observation of the SaSD problem is that it admits multiple equivalent solutions. This is purely due to the cyclic convolution between $\mathbf{a}_0$ and $\mathbf{x}_0$ , which exhibits the trivial ambiguity2 + +$$ +\boldsymbol {y} = \iota \boldsymbol {a} _ {0} * \boldsymbol {x} _ {0} = (\alpha s _ {\ell} [ \iota \boldsymbol {a} _ {0} ]) * \left(\frac {1}{\alpha} s _ {- \ell} [ \boldsymbol {x} _ {0} ]\right), +$$ + +for any nonzero scalar $\alpha$ and cyclic shift $s_{\ell}[\cdot]$ . These scale and shift symmetries create several acceptable candidates for $a_0$ and $x_0$ , and in the absence of further information we only expect to recover $a_0$ and $x_0$ up to symmetry. Furthermore, they largely drive the behavior of certain nonconvex optimization problems formulated for SaSD. Since the success of SaSD requires distinguishing between overlapping copies of $a_0$ , its difficulty also depends highly on the "similarity" of the $a_0$ to its shifts. Here we capture this notion using the shift-coherence of $a_0$ , + +$$ +\mu \left(\boldsymbol {a} _ {0}\right) \doteq \max _ {\ell \neq 0} \left| \langle \iota \boldsymbol {a} _ {0}, \mathrm {s} _ {\ell} [ \iota \boldsymbol {a} _ {0} ] \rangle \right| \in [ 0, 1 ]. \tag {2} +$$ + +Intuitively, the shifts of $\pmb{a}_0$ become closer together as $\mu(\pmb{a}_0)$ increases (Figure 10), making objective landscapes for optimization less favorable for recovering any specific shift of $\pmb{a}_0$ . + +# 2.1 LANDSCAPE GEOMETRY UNDER SHIFT-INCOHERENCE + +A natural approach to solving SaSD is to formulate it as a suitable optimization problem. 
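Numerically, the shift-coherence (2) is the largest off-center entry of the circular autocorrelation of the zero-padded, normalized kernel, so it can be computed in a few lines (a sketch, assuming numpy; `m` is the ambient signal length):

```python
import numpy as np

def shift_coherence(a0, m):
    """mu(a0) from Eq. (2): max over l != 0 of |<iota a0, s_l[iota a0]>|."""
    a = np.zeros(m)
    a[: len(a0)] = a0 / np.linalg.norm(a0)  # zero-pad and normalize
    fa = np.fft.fft(a)
    # inner products with all cyclic shifts = circular autocorrelation
    autocorr = np.fft.ifft(fa * np.conj(fa)).real
    return np.abs(autocorr[1:]).max()       # exclude the l = 0 term
```

A one-hot kernel gives $\mu = 0$, while smooth, slowly decaying kernels push $\mu$ toward 1.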
In this paper we will focus on the Bilinear Lasso (BL) problem, which minimizes the squared error between the observation $\pmb{y}$ and its reconstruction $\pmb{a} \circledast \pmb{x}$, plus an $\ell_1$-norm sparsity penalty on $\pmb{x}$,

$$
\min _ {\boldsymbol {a} \in \mathbb {S} ^ {p - 1}, \boldsymbol {x} \in \mathbb {R} ^ {m}} \left[ \Psi_ {\mathrm {B L}} (\boldsymbol {a}, \boldsymbol {x}) \doteq \frac {1}{2} \| \boldsymbol {y} - \iota \boldsymbol {a} * \boldsymbol {x} \| _ {2} ^ {2} + \lambda \| \boldsymbol {x} \| _ {1} \right]. \tag {3}
$$

Later in this section, we will see that the kernel length $p$ should be set slightly larger than $p_0$.

The Bilinear Lasso is a nonconvex optimization problem, as the shift symmetries of SaSD create discrete local minimizers in the objective landscape. The regularization created by problem symmetries

![](images/37cd2c84daecbcb4f6541b3ed9de6c3abe75bb6264dab92db70a99890d2a4963.jpg)
Figure 1: Geometry of $\varphi_{\mathrm{ABL}}$ near superpositions of shifts of $a_0$ (Kuo et al., 2019). (a) Regions near single shifts are strongly convex. (b) Regions between two shifts contain a saddle-point, with negative curvature towards each shift and positive curvature orthogonally. (c) The span of three shifts. For each figure, the top shows the function value in height, and the bottom shows function value over the sphere. (d,e) When $\mu_s(a_0) \approx 0$, the Bilinear Lasso $\varphi_{\mathrm{BL}}(a) \doteq \min_x \Psi_{\mathrm{BL}}(a, x)$ and ABL $\varphi_{\mathrm{ABL}}(a)$ are empirically similar in the span of three shifts.
![](images/1c16cfb96c3fb8a111609ea42060627cbd3342fcf790bbcc4844ee443608aec9.jpg)

![](images/01611719a68b9e89250b33fbabfa0913129bae89699d7745dbde7a5b21f1f746.jpg)

![](images/a1be974aee512b6cd7dcdb90b936fdf3aed5aff43902ed0c51381f2dda989117.jpg)

![](images/64322c90d91446733afc886f0381e39944e74dc93c3c384fba4cf79c1a857820.jpg)

in nonconvex inverse problems is a fairly general phenomenon (Sun et al., 2015) and, as Kuo et al. (2019) shows, its influence in SaSD extends beyond the neighborhoods of these local minimizers. Kuo et al. analyzed an Approximate Bilinear Lasso (ABL) objective3 $\Psi_{\mathrm{ABL}}$, which satisfies

$$
\Psi_ {\mathrm {A B L}} (\boldsymbol {a}, \boldsymbol {x}) \simeq \Psi_ {\mathrm {B L}} (\boldsymbol {a}, \boldsymbol {x}), \quad \text {when } \mu (\boldsymbol {a}) \simeq 0.
$$

This non-practical objective serves as a valid simplification of the Bilinear Lasso for analysis when the true kernel is itself incoherent, i.e. $\mu (\pmb{a}_0)\simeq 0$ (Figures 1d and 1e). Under its marginalization

$$
\varphi_ {\mathrm {A B L}} (\boldsymbol {a}) \doteq \min _ {\boldsymbol {x} \in \mathbb {R} ^ {m}} \Psi_ {\mathrm {A B L}} (\boldsymbol {a}, \boldsymbol {x}), \tag {4}
$$

certain crucial properties regarding its curvature can be characterized for generic choices of $\pmb{x}$. We choose to partially minimize over $\pmb{x}$ rather than $\pmb{a}$ because (i) problem (4) is convex w.r.t. $\pmb{x}$, and (ii) the dimension of the subspace of $\pmb{a}$ is significantly smaller than that of $\pmb{x}$ (i.e., $p \ll m$), which is where the measure concentrates.

**Curvature in the span of a few shifts.** Suppose we set $p > p_0$, which ensures that we can find an $\pmb{a} \simeq \alpha_1 s_{\ell_1}[\pmb{a}_0] + \alpha_2 s_{\ell_2}[\pmb{a}_0] \in \mathbb{S}^{p-1}$ that lies near the span of two shifts of $\pmb{a}_0$.
If $\alpha_1 \simeq \pm 1$ (or $\alpha_2 \simeq 0$ ) then, under suitable conditions on $\pmb{a}_0$ and $\pmb{x}_0$ , Kuo et al. (2019) asserts that $\pmb{a}$ lies in a strongly convex region of $\varphi_{\mathrm{ABL}}$ , containing a single minimizer near $s_{\ell_1}[\pmb{a}_0]$ (Figure 1a); the converse is also true. A saddle-point exists nearby when $\alpha_1 \simeq \alpha_2$ is balanced, characterized by large negative curvature along the two shifts and positive curvature in orthogonal directions (Figure 1b). Interpolating between these two cases, large negative gradients point towards individual shifts. + +The behavior of $\varphi_{\mathrm{ABL}}$ between two shifts of $a_0$ — strong convexity near single shifts, and saddlepoints near balanced points — extends to regions of the sphere spanned by several shifts (Figure 1c); we elaborate on this further in Appendix A.1. This regional landscape guarantees that $a_0$ can be efficiently recovered up to a signed shift using methods for first and second-order descent, as soon as $a$ can be brought sufficiently close to the span of a few shifts. + +Optimization over the sphere. For both the Bilinear Lasso and ABL, a unit-norm constraint on $\pmb{a}$ is enforced to break the scaling symmetry between $\pmb{a}_0$ and $\pmb{x}_0$ . Choosing the $\ell_2$ -norm, however, has surprisingly strong implications for optimization. The ABL objective, for example, is piecewise concave whenever $\pmb{a}$ is sufficiently far away from any shift of $\pmb{a}_0$ , but the sphere induces positive curvature near individual shifts to create strong convexity. These two properties combine to ensure recoverability of $\pmb{a}_0$ . In contrast, enforcing $\ell_1$ -norm constraints often leads to spurious minimizers for deconvolution problems (Levin et al., 2011; Benichoux et al., 2013; Zhang et al., 2017). + +Initializing near a few shifts. 
The landscape of $\varphi_{\mathrm{ABL}}$ makes single shifts of $a_0$ easy to locate if $\pmb{a}$ is initialized near a span of a few shifts. Fortunately, this is a relatively simple matter in SaSD, as $\pmb{y}$ is itself a sparse superposition of shifts. Therefore, one initialization strategy is to randomly choose a length-$p_0$ window $\widetilde{\boldsymbol{y}}_i \doteq [y_i, y_{i+1}, \dots, y_{i+p_0-1}]^T$ from the observation and set

$$
\boldsymbol {a} ^ {(0)} \doteq \mathcal {P} _ {\mathbb {S} ^ {p - 1}} \left(\left[ \boldsymbol {0} _ {p _ {0} - 1}; \, \widetilde {\boldsymbol {y}} _ {i}; \, \boldsymbol {0} _ {p _ {0} - 1} \right]\right). \tag {5}
$$

This brings $\pmb{a}^{(0)}$ suitably close to the sum of a few shifts of $\pmb{a}_0$ (Appendix A.2); any truncation effects are absorbed by padding the ends of $\widetilde{\pmb{y}}_i$, which also sets the length of $\pmb{a}$ to $p = 3p_0 - 2$.

Implications for practical computation. The (regionally) benign optimization landscape of $\varphi_{\mathrm{ABL}}$ guarantees that efficient recovery is possible for SaSD when $a_0$ is incoherent. Applications of sparse deconvolution, however, are often motivated by sharpening or resolution tasks (Huang et al., 2009; Candès & Fernandez-Granda, 2014; Campisi & Egiazarian, 2016) where the motif $a_0$ is smooth and coherent (i.e. $\mu(a_0)$ is large). The ABL objective is a poor approximation of the Bilinear Lasso in such cases and fails to yield practical algorithms, so we should optimize the Bilinear Lasso directly.

From Figures 1d and 1e, we can see that $\varphi_{\mathrm{BL}}$ and $\varphi_{\mathrm{ABL}}$ are empirically similar on low-dimensional subspheres spanned by shifts of $a_0$ when $a_0$ is incoherent. Although this breaks down in the coherent case, as we illustrate in Appendix A.3, the symmetry-breaking properties of $\varphi_{\mathrm{BL}}$ remain present.
This allows us to apply the geometric intuition discussed here to create an optimization method that, with the help of a number of computational heuristics, performs well for SaSD even in general problem instances.

# Algorithm 1 Inertial Alternating Descent Method (iADM)

Input: Initializations $\pmb{a}^{(0)}\in \mathbb{S}^{p - 1}$, $\pmb{x}^{(0)}\in \mathbb{R}^m$; observation $\pmb {y}\in \mathbb{R}^m$; penalty $\lambda \geqslant 0$; momentum $\alpha \in [0,1)$

Output: $(\pmb{a}^{(k)},\pmb{x}^{(k)})$, a local minimizer of $\Psi_{\mathrm{BL}}$

Initialize $\pmb{a}^{(1)} = \pmb{a}^{(0)},\pmb{x}^{(1)} = \pmb{x}^{(0)}$

for $k = 1,2,\ldots$ until converged do

Update $\pmb{x}$ with an accelerated proximal gradient step:

$$
\boldsymbol {w} ^ {(k)} \leftarrow \boldsymbol {x} ^ {(k)} + \alpha \cdot \left(\boldsymbol {x} ^ {(k)} - \boldsymbol {x} ^ {(k - 1)}\right)
$$

$$
\boldsymbol {x} ^ {(k + 1)} \leftarrow \operatorname {soft} _ {\lambda t _ {k}} \left[ \boldsymbol {w} ^ {(k)} - t _ {k} \cdot \nabla_ {\boldsymbol {x}} \psi_ {\lambda} \left(\boldsymbol {a} ^ {(k)}, \boldsymbol {w} ^ {(k)}\right) \right],
$$

where $\mathrm{soft}_{\lambda}(\pmb {v})\doteq \mathrm{sign}(\pmb {v})\odot \max (|\pmb {v}| - \lambda,\mathbf{0})$ denotes the soft-thresholding operator.

Update $\pmb{a}$ with an accelerated Riemannian gradient step:

$$
\boldsymbol {z} ^ {(k)} \leftarrow \mathcal {P} _ {\mathbb {S} ^ {p - 1}} \left(\boldsymbol {a} ^ {(k)} + \frac {\alpha}{\langle \boldsymbol {a} ^ {(k)} , \boldsymbol {a} ^ {(k - 1)} \rangle} \cdot \mathcal {P} _ {\boldsymbol {a} ^ {(k - 1)}} \left(\boldsymbol {a} ^ {(k)}\right)\right)
$$

$$
\boldsymbol {a} ^ {(k + 1)} \leftarrow \mathcal {P} _ {\mathbb {S} ^ {p - 1}} \big (\boldsymbol {z} ^ {(k)} - \tau_ {k} \cdot \operatorname {grad} _ {\boldsymbol {a}} \psi_ {\lambda} \big (\boldsymbol {z} ^ {(k)}, \boldsymbol {x} ^ {(k + 1)} \big) \big).
+$$ + +end for + +![](images/36390a7b647582c8f946b3f11039469c457e18c9f3ec104ec323c12852b8a824.jpg) +(a) Gradient descent + +![](images/d86fdbc3934ceddd7e4bdbdeac33c0c57ff2ad958ed8a31e696aa85d8b3b1026.jpg) +(b) GD with momentum +Figure 2: Momentum acceleration. a) Iterates of gradient descent oscillate on ill-conditioned functions; each marker denotes one iteration. b) Momentum dampens oscillation and speeds up convergence. + +# 3 DESIGNING A PRACTICAL SASD ALGORITHM + +Several algorithms for SaSD-type problems have been developed for specific applications, such as image deblurring (Levin et al., 2011; Briers et al., 2013; Campisi & Egiazarian, 2016), neuroscience (Rey et al., 2015; Friedrich et al., 2017; Song et al., 2018), and image super-resolution (Baker & Kanade, 2002; Shtengel et al., 2009; Yang et al., 2010), or are augmented with additional structure (Wipf & Zhang, 2014; Ling & Strohmer, 2017; Walk et al., 2017). + +Here, we instead leverage the theory from Section 2 to build an algorithm for general practical settings. In addition to applying an appropriate initialization scheme (5) and optimizing on the sphere, we minimize the Bilinear Lasso (3) instead of the ABL (4) to more accurately account for interactions between shifts of $a_0$ in highly shift-coherent settings. Furthermore, we also address the negative effects of large coherence using a number of heuristics, leading to an efficient algorithm for SaSD. + +Momentum acceleration. In shift-coherent settings, the Hessian of $\Psi_{\mathrm{BL}}$ becomes ill-conditioned near shifts of $a_0$ , a situation known to cause slow convergence for first-order methods (Nesterov, 2013). 
A remedy is to add momentum (Polyak, 1964; Beck & Teboulle, 2009) to first-order iterations; for instance, gradient descent on a smooth function $f(z)$ with stepsize $\tau$ is augmented with an extrapolation term $\boldsymbol{w}^{(k)}$,

$$
\boldsymbol {w} ^ {(k)} \leftarrow \boldsymbol {z} ^ {(k)} + \alpha \cdot \left(\boldsymbol {z} ^ {(k)} - \boldsymbol {z} ^ {(k - 1)}\right) \tag {6}
$$

$$
\boldsymbol {z} ^ {(k + 1)} \leftarrow \boldsymbol {w} ^ {(k)} - \tau \cdot \nabla f (\boldsymbol {w} ^ {(k)}). \tag {7}
$$

Here, $\alpha$ controls the amount of momentum added6. As illustrated in Figure 2, this additional term improves convergence by reducing oscillations of the iterates for ill-conditioned problems. Momentum has been shown to improve convergence for nonconvex and nonsmooth problems (Pock & Sabach, 2016; Jin et al., 2018). Here we provide an inertial alternating descent method (iADM) for finding local minimizers of $\Psi_{\mathrm{BL}}$ (Algorithm 1), which modifies iPALM (Pock & Sabach, 2016) to perform updates on $a$ via retraction on the sphere (Absil et al., 2009)7.

# Algorithm 2 SaS-BD with homotopy continuation

Input: Observation $\pmb{y} \in \mathbb{R}^{m}$; motif size $p_{0}$; momentum $\alpha \in [0,1)$; initial $\lambda^{(1)}$ and final $\lambda^{\star}$; penalty decrease $\eta \in (0,1)$; precision factor $\delta \in (0,1)$.

Output: Solution path $\{(\hat{\pmb{a}}^{(n)},\hat{\pmb{x}}^{(n)};\lambda^{(n)})\}$ for SaSD.
Set number of iterations $N\gets \left\lceil \log (\lambda^{\star} / \lambda^{(1)}) / \log \eta \right\rceil$

Initialize $\hat{\pmb{a}}^{(0)}\in \mathbb{R}^{3p_0 - 2}$ using (5), $\hat{\pmb{x}}^{(0)} = \pmb {0}\in \mathbb{R}^m$

for $n = 1,\dots ,N$ do

Minimize $\Psi_{\lambda^{(n)}}$ to precision $\delta \lambda^{(n)}$ with Algorithm 1:

$$
\left(\hat {\boldsymbol {a}} ^ {(n)}, \hat {\boldsymbol {x}} ^ {(n)}\right) \leftarrow \mathrm {iADM} \left(\hat {\boldsymbol {a}} ^ {(n - 1)}, \hat {\boldsymbol {x}} ^ {(n - 1)}; \boldsymbol {y}, \lambda^ {(n)}, \alpha\right).
$$

Update $\lambda^{(n + 1)}\gets \eta \lambda^{(n)}$

end for

![](images/3e7e4691b24a9d24b0dcba603359382d02cbff8b01bb411feaa72fb85d1e888e.jpg)
Figure 3: Bilinear-lasso objective $\varphi_{\lambda}$ on the sphere $\mathbb{S}^{p - 1}$, for $p = 3$ and varying $\lambda$; brighter colors indicate higher values. The function landscape of $\varphi_{\lambda}$ flattens as the sparsity penalty $\lambda$ decreases from left to right.

![](images/70e7dd1770714b49a4444f38b93b68b28fb07900d658b231ddca93bf9b5782c4.jpg)

![](images/3c52d5ea2e36e7a6b04ede6945c7ff5478ca43c9aab6fd4fafaefc4ee96c10fc.jpg)

Homotopy continuation. It is also possible to improve optimization by modifying the objective $\Psi_{\mathrm{BL}}$ directly through the sparsity penalty $\lambda$. Variations of this idea appear in both Zhang et al. (2017) and Kuo et al. (2019), and can also help to mitigate the effects of large shift-coherence.

When solving (3) in the noise-free case, it is clear that larger choices of $\lambda$ encourage sparser solutions for $\pmb{x}$. Conversely, smaller choices of $\lambda$ place local minimizers of the marginal objective $\varphi_{\mathrm{BL}}(\pmb{a}) \doteq \min_{\pmb{x}} \Psi_{\mathrm{BL}}(\pmb{a}, \pmb{x})$ closer to signed-shifts of $\pmb{a}_0$ by emphasizing reconstruction quality.
When $\mu(\pmb{a}_0)$ is large, however, $\varphi_{\mathrm{BL}}$ becomes ill-conditioned as $\lambda \to 0$ due to the poor spectral conditioning of $\pmb{a}_0$, leading to severe flatness near local minimizers and the creation of spurious local minimizers when noise is present (Figure 3). Conversely, larger values of $\lambda$ limit $\pmb{x}$ to a small set of support patterns and simplify the landscape of $\varphi_{\mathrm{BL}}$, at the expense of precision.

It is therefore important both for fast convergence and accurate recovery that $\lambda$ be chosen appropriately. When problem parameters — such as noise level, $p_0$, or $\theta$ — are not known a priori, a homotopy continuation method (Hale et al., 2008; Wright et al., 2009; Xiao & Zhang, 2013) can be used to obtain a range of solutions for SaSD. Using initialization (5), a rough estimate $(\hat{\boldsymbol{a}}^{(1)},\hat{\boldsymbol{x}}^{(1)})$ is obtained by solving (3) with iADM using a large choice for $\lambda^{(1)}$. This estimate is refined via a solution path $\{(\hat{\pmb{a}}^{(n)},\hat{\pmb{x}}^{(n)};\lambda^{(n)})\}$ by gradually decreasing $\lambda^{(n)}$. By ensuring that $\pmb{x}$ remains sparse along the solution path, the objective $\Psi_{\mathrm{BL}}$ enjoys restricted strong convexity w.r.t. both $\pmb{a}$ and $\pmb{x}$ throughout optimization (Agarwal et al., 2010). As a result, homotopy achieves linear convergence for SaSD where sublinear convergence is expected otherwise (Figures 4c and 4d). We provide a complete algorithm for SaSD combining the Bilinear Lasso and homotopy continuation in Algorithm 2.

# 4 EXPERIMENTS

# 4.1 SYNTHETIC EXPERIMENTS

Here we perform SaSD in simulations on both coherent and incoherent settings. Coherent kernels are discretized from the Gaussian window function $\pmb{a}_0 = \pmb{g}_{p_0,0.5}$, where $\pmb{g}_{p,\sigma} \doteq \mathcal{P}_{\mathbb{S}^{p-1}}\left(\left[\exp\left(-\frac{(2i-p-1)^2}{\sigma^2(p-1)^2}\right)\right]_{i=1}^p\right)$.
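This coherent kernel construction can be sketched directly (assuming numpy; `gaussian_window_kernel` is our illustrative name, not from the released code):

```python
import numpy as np

def gaussian_window_kernel(p, sigma):
    """g_{p,sigma}: discretized Gaussian window, projected onto the unit sphere."""
    i = np.arange(1, p + 1)
    g = np.exp(-((2 * i - p - 1) ** 2) / (sigma ** 2 * (p - 1) ** 2))
    return g / np.linalg.norm(g)  # P_{S^{p-1}}: normalize to unit l2-norm
```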
Incoherent kernels $\pmb{a}_0 \sim \mathrm{Unif}(\mathbb{S}^{p_0-1})$ are sampled uniformly on the sphere. + +![](images/312bf5518d33cf552afb4c4c921fcd02c8a0f4210487ab864de0a5098a08d2ef.jpg) +Figure 4: Synthetic experiments for Bilinear Lasso. Success probability (a, b): $\pmb{x}_0 \sim_{\mathrm{i.i.d.}} \mathcal{BR}(\theta)$ , the success probability of SaS-BD by solving (3), shown by increasing brightness, is large when the sparsity rate $\theta$ is sufficiently small compared to the length of $\pmb{a}_0$ , and vice versa. Success with a fixed sparsity rate is more likely when $\pmb{a}_0$ is incoherent. Algorithmic convergence (c, d): iterate convergence for iADM with $\alpha_k = (k - 1) / (k + 1)$ vs. $\alpha_k = 0$ (ADM); with and without homotopy. Homotopy significantly improves convergence rate, and momentum improves convergence when $\pmb{a}_0$ is coherent. + +![](images/3fb32e1aac06f454a872ee46de50a7060044424fe39fc328776acd4568883b21.jpg) + +![](images/5ee533edb2ca9baac730865f32e5d35f9d782356478425fbc38b04c9fcd878e9.jpg) + +![](images/d36cfd76f0917bcd2c044dbc840dce30b00860dcca4ac23fa1725361272cbc1d.jpg) + +![](images/d61f36acea6d682cc68967c3d5bf61c6334b5108f9563a91248648c675f53249.jpg) + +![](images/01ec992d71ab1233b80b3918a813e923d914f4e930c72540a7aa24399e66d36c.jpg) +Figure 5: Deconvolution for calcium imaging using Algorithm 2 with iADM and with reweighting (Appendix B). Simulated data: (a) recovered AR2 kernel; (b) estimate of spike train. Real data: (c) reconstructed calcium signal (d) estimate of spike train. Reweighting improves estimation quality in each case. + +![](images/6ed56cf00f47407f4676ada2b61974bff5197704ebfa404da0c914e23c94813b.jpg) + +Recovery performance. We test recovery probability for varying kernel lengths $p_0$ and sparsity rates $\theta$ . To ensure the problem size is sufficiently large, we set $m = 100p_0$ . 
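The sparse maps used in these trials can be drawn as follows (a sketch; we read $\mathcal{BR}(\theta)$ as an i.i.d. Bernoulli($\theta$) support with independent $\pm 1$ signs, the usual Bernoulli–Rademacher convention):

```python
import numpy as np

def bernoulli_rademacher(m, theta, seed=0):
    """x0 ~_iid BR(theta): each entry is +/-1 with probability theta/2 each,
    and zero otherwise."""
    rng = np.random.default_rng(seed)
    support = rng.random(m) < theta          # Bernoulli(theta) support
    signs = rng.choice([-1.0, 1.0], size=m)  # Rademacher signs
    return support * signs
```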
For each $p_0$ and $\theta$, we randomly generate $\pmb{x}_0 \sim_{\mathrm{i.i.d.}} \mathcal{BR}(\theta)$ for both coherent and incoherent $a_0$. We solve ten trials of (3) on clean observation data $a_0 * x_0$ using iADM with $\lambda = \frac{10^{-2}}{\sqrt{p_0\theta}}$. The probability of recovering a signed shift of $a_0$ is shown in Figure 4. Recovery is likely when sparsity is low compared to the kernel length. The coherent problem setting has a smaller success region compared to the incoherent setting.

Momentum and homotopy. Next, we test the performance of Algorithm 1 with momentum ($\alpha_{k} = \frac{k - 1}{k + 2}$; see Pock & Sabach (2016)) and without ($\alpha = 0$). This is done by minimizing $\Psi_{\mathrm{BL}}$ with initialization (5), using clean observations with $p_0 = 10^2$, $m = 10^4$, and $\theta = p_0^{-3/4}$ for coherent and incoherent $a_0$. We also apply homotopy (Algorithm 2) with $\lambda^{(1)} = \max_{\ell} |\langle s_\ell[\boldsymbol{a}^{(0)}], \boldsymbol{y} \rangle|$ (see Xiao & Zhang (2013)), $\lambda^\star = \frac{0.3}{\sqrt{p_0 \theta}}$, $\eta = 0.8$, and $\delta = 0.1$. The final solve of (3) uses precision $\varepsilon^\star = 10^{-6}$, regardless of method. Figures 4c and 4d show the comparison results on coherent problem settings.

Comparison to existing methods. Finally, we compare iADM and iADM with homotopy against a number of existing methods for minimizing $\varphi_{\mathrm{BL}}$. The first is alternating minimization (Kuo et al., 2019), which at each iteration $k$ updates $\pmb{a}^{(k)}$ with $\pmb{x}^{(k)}$ fixed using accelerated (Riemannian) gradient descent with backtracking, and vice versa. The next method is the popular alternating direction method of multipliers (ADMM) (Boyd et al., 2011). Finally, we compare against iPALM (Pock & Sabach, 2016) with backtracking, using the unit ball constraint on $\pmb{a}_0$ instead of the unit sphere.
For each method, we deconvolve signals with $p_0 = 50$, $m = 100p_0$, and $\theta = p_0^{-3/4}$ for both coherent and incoherent $\pmb{a}_0$. For iADM, iADM with homotopy, and iPALM, we set $\alpha = 0.3$. For homotopy, we set $\lambda^{(1)} = \max_{\ell} |\langle s_\ell[\pmb{a}^{(0)}], \pmb{y} \rangle|$, $\lambda^* = \frac{0.3}{\sqrt{p_0 \theta}}$, and $\delta = 0.5$, with $\eta = 0.5$ or $\eta = 0.8$; for ADMM, we set the slack parameter to $\rho = 0.7$ or $\rho = 0.5$, for incoherent and coherent $\pmb{a}_0$ respectively. From Figure 6, we can see that ADMM performs better than iADM in the incoherent case, but becomes less reliable in the coherent case. In both cases, iADM with homotopy is the best performer. Finally, we observe roughly equal performance between iPALM and iADM.

![](images/2cada3a817f072572f9040b6eaad256cef2458928e50faeac0d44d1fda6ae071.jpg)
(a) Incoherent $a_0$

![](images/e3134bbe5863c54a0de4b6e593b6590dd01a83d88e714ef7deadc4666bf27166.jpg)
(b) Coherent $a_0$
Figure 6: Algorithmic comparison. (a) Convergence of various methods minimizing $\Psi_{\mathrm{BL}}$ with incoherent $a_0$, measured in FFT operations used (for computing convolutions). The y-axis denotes the log of the angle between $\pmb{a}^{(k)}$ and the nearest shift of $a_0$, and each marker denotes five iterations. (b) Convergence for coherent $a_0$, and (c) with an AR2 kernel for modeling calcium signals.

![](images/cb8ac8747b331485ba206e456f9996a9b9dcf7234686f096d62d945976dc2765.jpg)
(c) Calcium AR2

# 4.2 IMAGING APPLICATIONS

Here we demonstrate the performance and generality of the proposed method. We begin with calcium fluorescence imaging, a popular modality for studying spiking activity in large neuronal populations (Grienberger & Konnerth, 2012), followed by stochastic optical reconstruction microscopy (STORM) (Rust et al., 2006; Huang et al., 2008; 2010), a superresolution technique for in vivo microscopy$^{9}$.
Sparse deconvolution of calcium signals. Neural spike trains are created by action potentials, each of which induces a transient response in the calcium concentration of the surrounding environment. The aggregate signal can be modeled as a convolution between the transient $\pmb{a}_0$ and the spike train $\pmb{x}_0$. Whilst $\pmb{a}_0$ and $\pmb{x}_0$ both encode valuable information, neither is perfectly known ahead of time.

Here, we first test our method on synthetic data generated using an AR2 model for $\pmb{a}_0$, a shift-coherent kernel that is challenging for deconvolution (see, e.g., Friedrich et al., 2017). We set $\pmb{x}_0 \sim_{\mathrm{i.i.d.}}$ Bernoulli $(p_0^{-4/5}) \in \mathbb{R}^{10^4}$ with additive noise $\pmb{n} \sim_{\mathrm{i.i.d.}} \mathcal{N}(0, 5 \cdot 10^{-2})$. Figures 5a and 5b demonstrate accurate recovery of $\pmb{a}_0$ and $\pmb{x}_0$ in this synthetic setting. Next, we test our method on real data$^{10}$; Figures 5c and 5d demonstrate recovery of spike locations. Although iADM provides decent performance, in the presence of large noise the estimation quality can be improved by stronger sparsification methods, such as the reweighting technique of Candes et al. (2008), which we elaborate on in Appendix B. Additionally, Figure 6c shows that the proposed method converges to higher precision than state-of-the-art methods.

![](images/56448d19ecac821dace3f30d8a639e94a2b750aeb0a04508162b26868a78f4cf.jpg)
(a) Frame 100, time = 4s

![](images/ae695b4db8f23b3ebe0e00e3de95935c03e3055fdf17b7a1ddcf28fc1f6dc044.jpg)
Figure 7: SaSD for STORM imaging. (a, b) Individual frames (left) and predicted point process map using SaSD (right). (c, d) The original microscopy image and the super-resolved image obtained by our method.
![](images/58d2b5329e8d4e94823101db6aabcab20966a1898ee3889db27f489b218391fc.jpg)
(b) Frame 200, time = 8s

![](images/fe762053a151a5ff97719c364606314ed89041ae3b8f30b0edd12fcd1bad3724.jpg)

![](images/916880415ffac220a85cbed57edcfaab26bc0c5fca7b5ec4311d27b650afc377.jpg)
(c) Original

![](images/3cb12e2d9896993bd7aae3fb59ce094d0ba33a9bb3723dba419f221a6266a381.jpg)
(d) Resolved

![](images/51e91928388b6466b53c0140ab2654cc90c9b98153f3885192a466bba88247f5.jpg)
(a) Calcium image $\mathbf{Y}$

![](images/ecda072915c2f4e83decd49ed71d29db071f3313a3d35165ee15d03d8400e025.jpg)
(b) Estimated kernels $A_{k}$
Figure 8: Classification of calcium images. (a) Original calcium image; (b) respective kernel estimates; (c) reconstructed images with the (left) neuron and (right) dendrite kernels; (d) respective occurrence map estimates.

![](images/fb2cd1c4162119f3314ca3ec3ebf36f89a6dd27bdfc0324163425b2ee7e2940c.jpg)

![](images/c595ec7912ad514d9e5ce0b77fc086ae325156b57d75bf129f9b7488e2e35f9a.jpg)
(c) Reconstruction $\mathbf{A}_k \boxplus \mathbf{X}_k$ ( $k = 1,2$ )

![](images/d235b5532e1752b2396a7fd7c0b46ee0d115fc3f1601a734ba6e67b5d3c31651.jpg)

![](images/ed930b76a32542e31cd5c056d425940205f863571929f24fc12c6df92c829509.jpg)
(d) Predicted activation maps $X_{k}$

![](images/e8cf7293dc16c3d1cd93e76142a4bfc8138d53cf07a9d48ac2ea6208d9ba16af.jpg)

Super-resolution for fluorescence microscopy. Fluorescence microscopy is often spatially limited by the diffraction of light; its wavelength (several hundred nanometers) is often larger than typical molecular length-scales in cells, preventing a detailed characterization of subcellular structures. The STORM technique overcomes this resolution limit by using photoswitchable fluorescent probes to multiplex the image into multiple frames, each containing a subset of the molecules present (Figure 7).
If the location of these molecules can be precisely determined for each frame, synthesizing all deconvolved frames will produce a super-resolution microscopy image with nanoscale resolution. For each image frame, the localization task can be formulated via the SaS model

$$
\underbrace{\boldsymbol{Y}_t}_{\text{STORM frame}} = \underbrace{\iota \boldsymbol{A}_0}_{\text{point spread function}} * \underbrace{\boldsymbol{X}_{0,t}}_{\text{sparse point sources}} + \underbrace{\boldsymbol{N}_t}_{\text{noise}},
$$

where $*$ denotes 2D convolution and $\boldsymbol{N}_t$ denotes noise. Here we will solve this task on the single-molecule localization microscopy (SMLM) benchmarking dataset$^{11}$ via SaSD, recovering both the PSF $A_0$ and the point source maps $X_{0,t}$ simultaneously. We apply iADM with reweighting (Appendix B) on frames of size $128 \times 128$ from the video sequence "Tubulin"; each pixel is of $100\mathrm{nm}^2$ resolution$^{12}$, the fluorescence wavelength is $690\mathrm{nm}$, and the framerate is $f = 25\mathrm{Hz}$. Figure 7 shows examples of recovered activation maps and the super-resolution image aggregated from all 500 frames; our method accurately predicts the PSF (see Appendix D) and the activation map of each video frame, producing a higher resolution microscopy image.

Localization in calcium images. Our methods are easily extended to handle superpositions of multiple SaS signals. In calcium imaging, this can potentially be used to track the neurons in video sequences, a challenging task due to (non-)rigid motion, overlapping sources, and irregular background noise (Pnevmatikakis et al., 2016; Giovannucci et al., 2019). We consider video frames obtained from the two-photon calcium microscopy dataset of the Allen Institute for Brain Science$^{13}$, shown in Figure 8. Each frame contains the cross sections of several neurons and dendrites, which have distinct sizes.
We model this as the SaS signal $\mathbf{Y}_t = \iota \mathbf{A}_1 \boxplus \mathbf{X}_{1,t} + \iota \mathbf{A}_2 \boxplus \mathbf{X}_{2,t}$, where each summand consists exclusively of neurons or of dendrites. By extending Algorithm 2 to recover each of the kernels $\mathbf{A}_k$ and maps $\mathbf{X}_k$, we can solve this convolutional dictionary learning problem (SaS-CDL; see Appendix C), which allows us to separate the dendritic and neuronal components of the image, localize firing activity, and so on. The application of SaS-CDL as a denoising or analysis tool for calcium imaging videos is therefore a very promising direction for future research.

# 5 DISCUSSION

Many nonconvex inverse problems, such as SaSD, are strongly regulated by their problem symmetries. Understanding this regularity, and when or how it breaks down, is important for developing effective algorithms. We illustrate this by combining geometric intuition with practical heuristics, motivated by common challenges in real deconvolution, to produce an efficient and general purpose method that performs well on data arising from a range of application areas. Our approach, therefore, can serve as a general baseline for studying and developing extensions to SaSD, such as SaS-CDL (Bristow & Lucey, 2014; Chun & Fessler, 2017; Garcia-Cardona & Wohlberg, 2018), Bayesian approaches (Babacan et al., 2008; Wipf & Zhang, 2014), and hierarchical SaS models (Chen et al., 2013).

# ACKNOWLEDGMENTS

This work was funded by NSF 1343282, NSF CCF 1527809, and NSF IIS 1546411. QQ also acknowledges support from the Microsoft PhD Fellowship and the Moore-Sloan Fellowship. We would like to thank Gongguo Tang, Shuyang Ling, Carlos Fernandez-Granda, Ruoxi Sun, and Liam Paninski for fruitful discussions.

# REFERENCES

Pierre-Antoine Absil, Robert Mahony, and Rodolphe Sepulchre. Optimization Algorithms on Matrix Manifolds. Princeton University Press, 2009.
Alekh Agarwal, Sahand Negahban, and Martin J Wainwright. Fast global convergence rates of gradient methods for high-dimensional statistical recovery. In Advances in Neural Information Processing Systems, pp. 37-45, 2010.

S Derin Babacan, Rafael Molina, and Aggelos K Katsaggelos. Variational bayesian blind deconvolution using a total variation prior. IEEE Transactions on Image Processing, 18(1):12-26, 2008.

Simon Baker and Takeo Kanade. Limits on super-resolution and how to break them. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(9):1167-1183, 2002.

Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183-202, 2009.

Alexis Benichoux, Emmanuel Vincent, and Rémi Gribonval. A fundamental pitfall in blind deconvolution with sparse and shift-invariant priors. In ICASSP-38th International Conference on Acoustics, Speech, and Signal Processing-2013, 2013.

Eric Betzig, George H Patterson, Rachid Sougrat, O Wolf Lindwasser, Scott Olenych, Juan S Bonifacino, Michael W Davidson, Jennifer Lippincott-Schwartz, and Harald F Hess. Imaging intracellular fluorescent proteins at nanometer resolution. Science, 313(5793):1642-1645, 2006.

Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, Jonathan Eckstein, et al. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1-122, 2011.

David Briers, Donald D Duncan, Evan R Hirst, Sean J Kirkpatrick, Marcus Larsson, Wiendelt Steenbergen, Tomas Stromberg, and Oliver B Thompson. Laser speckle contrast imaging: theoretical and practical limitations. Journal of Biomedical Optics, 18(6):066018, 2013.

Hilton Bristow and Simon Lucey. Optimization methods for convolutional sparse coding. arXiv preprint arXiv:1406.2407, 2014.

Patrizio Campisi and Karen Egiazarian. Blind Image Deconvolution: Theory and Applications.
CRC Press, 2016.

Emmanuel J Candes and Carlos Fernandez-Granda. Towards a mathematical theory of super-resolution. Communications on Pure and Applied Mathematics, 67(6):906-956, 2014.

Emmanuel J Candes, Michael B Wakin, and Stephen P Boyd. Enhancing sparsity by reweighted $\ell_1$ minimization. Journal of Fourier Analysis and Applications, 14(5-6):877-905, 2008.

Bo Chen, Gungor Polatkan, Guillermo Sapiro, David Blei, David Dunson, and Lawrence Carin. Deep learning with hierarchical convolutional factor analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1887-1901, 2013.

Sky C Cheung, John Y Shin, Yenson Lau, Zhengyu Chen, Ju Sun, Yuqian Zhang, John N Wright, and Abhay N Pasupathy. Dictionary learning in fourier transform scanning tunneling spectroscopy. arXiv preprint arXiv:1807.10752, 2018.

Il Yong Chun and Jeffrey A Fessler. Convolutional dictionary learning: Acceleration and convergence. IEEE Transactions on Image Processing, 27(4):1697-1712, 2017.

Chaitanya Ekanadham, Daniel Tranchina, and Eero P Simoncelli. A blind sparse deconvolution method for neural spike identification. In Advances in Neural Information Processing Systems, pp. 1440-1448, 2011.

Johannes Friedrich, Pengcheng Zhou, and Liam Paninski. Fast online deconvolution of calcium imaging data. PLoS Computational Biology, 13(3):e1005423, 2017.

Cristina Garcia-Cardona and Brendt Wohlberg. Convolutional dictionary learning: A comparative review and new algorithms. IEEE Transactions on Computational Imaging, 4(3):366-381, 2018.

Andrea Giovannucci, Johannes Friedrich, Pat Gunn, Jeremie Kalfon, Brandon L Brown, Sue Ann Koay, Jiannis Taxidis, Farzaneh Najafi, Jeffrey L Gauthier, Pengcheng Zhou, et al. CaImAn: an open source tool for scalable calcium imaging data analysis. eLife, 8:e38173, 2019.

Christine Grienberger and Arthur Konnerth. Imaging calcium in neurons. Neuron, 73(5):862-885, 2012.

Elaine T Hale, Wotao Yin, and Yin Zhang.
Fixed-point continuation for $\ell_1$-minimization: Methodology and convergence. SIAM Journal on Optimization, 19(3):1107-1130, 2008.

Samuel T Hess, Thanu PK Girirajan, and Michael D Mason. Ultra-high resolution imaging by fluorescence photoactivation localization microscopy. Biophysical Journal, 91(11):4258-4272, 2006.

Bo Huang, Wenqin Wang, Mark Bates, and Xiaowei Zhuang. Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy. Science, 319(5864):810-813, 2008.

Bo Huang, Mark Bates, and Xiaowei Zhuang. Super-resolution fluorescence microscopy. Annual Review of Biochemistry, 78:993-1016, 2009.

Bo Huang, Hazen Babcock, and Xiaowei Zhuang. Breaking the diffraction barrier: super-resolution imaging of cells. Cell, 143(7):1047-1058, 2010.

Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M Kakade, and Michael I Jordan. How to escape saddle points efficiently. In Proceedings of the 34th International Conference on Machine Learning, pp. 1724-1732, 2017.

Chi Jin, Praneeth Netrapalli, and Michael I Jordan. Accelerated gradient descent escapes saddle points faster than gradient descent. In Conference On Learning Theory, pp. 1042-1085, 2018.

Han-Wen Kuo, Yuqian Zhang, Yenson Lau, and John Wright. Geometry and symmetry in short-and-sparse deconvolution. In International Conference on Machine Learning (ICML), June 2019.

Anat Levin, Yair Weiss, Fredo Durand, and William T Freeman. Understanding blind deconvolution algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(12):2354-2367, 2011.

Shuyang Ling and Thomas Strohmer. Blind deconvolution meets blind demixing: Algorithms and performance bounds. IEEE Transactions on Information Theory, 63(7):4497-4520, 2017.

Yurii Nesterov. Introductory Lectures on Convex Optimization: A Basic Course, volume 87. Springer Science & Business Media, 2013.

Jorge Nocedal and Stephen Wright. Numerical Optimization. Springer Science & Business Media, 2006.
Eftychios A Pnevmatikakis, Daniel Soudry, Yuanjun Gao, Timothy A Machado, Josh Merel, David Pfau, Thomas Reardon, Yu Mu, Clay Lacefield, Weijian Yang, et al. Simultaneous denoising, deconvolution, and demixing of calcium imaging data. Neuron, 89(2):285-299, 2016.

Thomas Pock and Shoham Sabach. Inertial proximal alternating linearized minimization (iPALM) for nonconvex and nonsmooth problems. SIAM Journal on Imaging Sciences, 9(4):1756-1787, 2016.

Boris T Polyak. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 4(5):1-17, 1964.

Hernan Gonzalo Rey, Carlos Pedreira, and Rodrigo Quian Quiroga. Past, present and future of spike sorting techniques. Brain Research Bulletin, 119:106-117, 2015.

Michael J Rust, Mark Bates, and Xiaowei Zhuang. Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nature Methods, 3(10):793, 2006.

Gleb Shtengel, James A Galbraith, Catherine G Galbraith, Jennifer Lippincott-Schwartz, Jennifer M Gillette, Suliana Manley, Rachid Sougrat, Clare M Waterman, Pakorn Kanchanawong, Michael W Davidson, et al. Interferometric fluorescent super-resolution microscopy resolves 3d cellular ultrastructure. Proceedings of the National Academy of Sciences, 106(9):3125-3130, 2009.

Andrew H Song, Francisco Flores, and Demba Ba. Spike sorting by convolutional dictionary learning. arXiv preprint arXiv:1806.01979, 2018.

Ju Sun, Qing Qu, and John Wright. When are nonconvex problems not scary? arXiv preprint arXiv:1510.06096, 2015.

Philipp Walk, Peter Jung, Götz E Pfander, and Babak Hassibi. Blind deconvolution with additional autocorrelations via convex programs. arXiv preprint arXiv:1701.04890, 2017.

David Wipf and Haichao Zhang. Revisiting bayesian blind deconvolution. The Journal of Machine Learning Research, 15(1):3595-3634, 2014.

Stephen J Wright, Robert D Nowak, and Mário AT Figueiredo. Sparse reconstruction by separable approximation.
IEEE Transactions on Signal Processing, 57(7):2479-2493, 2009.

Lin Xiao and Tong Zhang. A proximal-gradient homotopy method for the sparse least-squares problem. SIAM Journal on Optimization, 23(2):1062-1091, 2013.

Jianchao Yang, John Wright, Thomas S Huang, and Yi Ma. Image super-resolution via sparse representation. IEEE Transactions on Image Processing, 19(11):2861-2873, 2010.

Florence Yellin, Benjamin D Haeffele, and René Vidal. Blood cell detection and counting in holographic lens-free imaging by convolutional sparse dictionary learning and coding. In IEEE 14th International Symposium on Biomedical Imaging, pp. 650-653. IEEE, 2017.

Yuqian Zhang, Yenson Lau, Han-Wen Kuo, Sky Cheung, Abhay Pasupathy, and John Wright. On the global geometry of sphere-constrained sparse blind deconvolution. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pp. 4381-4389. IEEE, 2017.

Yin Zhou, Hang Chang, Kenneth Barner, Paul Spellman, and Bahram Parvin. Classification of histology sections via multispectral convolutional sparse coding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3081-3088, 2014.

# A APPROXIMATE BILINEAR LASSO OBJECTIVE

Recall from Section 2.2 of the main text that SaSD can be formulated as the Bilinear Lasso problem

$$
\min_{\boldsymbol{a} \in \mathbb{S}^{p-1},\, \boldsymbol{x} \in \mathbb{R}^{m}} \left[ \Psi_{\mathrm{BL}}(\boldsymbol{a}, \boldsymbol{x}) \doteq \frac{1}{2} \| \boldsymbol{y} - \iota \boldsymbol{a} * \boldsymbol{x} \|_{2}^{2} + \lambda \| \boldsymbol{x} \|_{1} \right]. \tag{9}
$$

Unfortunately, this objective is challenging for analysis.
A major culprit is that its marginalization

$$
\varphi_{\mathrm{BL}}(\boldsymbol{a}) \doteq \min_{\boldsymbol{x}} \left\{ \frac{1}{2} \| \boldsymbol{y} - \iota \boldsymbol{a} * \boldsymbol{x} \|_{2}^{2} + \lambda \| \boldsymbol{x} \|_{1} \right\}, \tag{10}
$$

generally does not admit a closed form solution, due to the convolution with $\pmb{a}$ in the squared error term. This motivates Kuo et al. (2019) to study the nonconvex formulation

$$
\min_{\boldsymbol{a} \in \mathbb{S}^{p-1},\, \boldsymbol{x} \in \mathbb{R}^{m}} \left[ \Psi_{\mathrm{ABL}}(\boldsymbol{a}, \boldsymbol{x}) \doteq \frac{1}{2} \| \boldsymbol{x} \|_{2}^{2} - \langle \iota \boldsymbol{a} * \boldsymbol{x}, \boldsymbol{y} \rangle + \frac{1}{2} \| \boldsymbol{y} \|_{2}^{2} + \lambda \| \boldsymbol{x} \|_{1} \right]. \tag{11}
$$

We refer to (11) as the Approximate Bilinear Lasso formulation; it is easy to see that $\Psi_{\mathrm{ABL}}(\pmb{a},\pmb{x})\approx \Psi_{\mathrm{BL}}(\pmb{a},\pmb{x})$ when $\| \iota\pmb{a} * \pmb{x}\|_2^2\approx \| \pmb{x}\|_2^2$, i.e. if $\pmb{a}$ is shift-incoherent, so that $\mu(\pmb{a})\approx 0$. The marginalized objective function $\varphi_{\mathrm{ABL}}(\pmb{a})\doteq \min_{\pmb{x}}\Psi_{\mathrm{ABL}}(\pmb{a},\pmb{x})$ now has the closed form expression

$$
\varphi_{\mathrm{ABL}}(\boldsymbol{a}) \doteq -\frac{1}{2} \| \operatorname{soft}_{\lambda}[\check{\boldsymbol{a}} \circledast \boldsymbol{y}] \|_{2}^{2}. \tag{12}
$$

Here $\operatorname{soft}$ denotes the elementwise soft-thresholding operator $\mathrm{soft}_t(x_i) = \mathrm{sign}(x_i)\cdot \max(|x_i| - t,0)$, and $\check{\pmb{a}}$ denotes the adjoint kernel of $\pmb{a}$, i.e. the kernel such that $\langle \iota\pmb{a} * \pmb{u},\pmb{v}\rangle = \langle \pmb{u},\check{\pmb{a}}\circledast \pmb{v}\rangle$ for all $\pmb{u},\pmb{v}$.
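The closed form (12) is cheap to evaluate numerically. A minimal sketch, assuming the zero-padded (linear) convolution model $\pmb{y} = \iota\pmb{a} * \pmb{x}$, under which $\check{\pmb{a}} \circledast \pmb{y}$ reduces to a "valid" cross-correlation; the function names are ours:

```python
import numpy as np

def soft(v, t):
    """Elementwise soft-thresholding: sign(v) * max(|v| - t, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def phi_abl(a, y, lam):
    """Marginal objective (12): -0.5 * || soft_lam(a_check ⊛ y) ||_2^2.

    For the zero-padded model (len(y) = m + p - 1 with len(a) = p),
    the adjoint a_check ⊛ y is the 'valid' cross-correlation of y
    with a, which np.correlate computes directly (length m).
    """
    corr = np.correlate(y, a, mode="valid")
    return -0.5 * np.linalg.norm(soft(corr, lam)) ** 2
```

Note that for fixed $\pmb{a}$ the minimizing $\pmb{x}$ in (11) is exactly $\operatorname{soft}_\lambda[\check{\pmb{a}} \circledast \pmb{y}]$, which is what makes this marginalization tractable while the Bilinear Lasso's is not.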
# A.1 LANDSCAPE GEOMETRY

The rest of Section 2.2 discusses the regional characterization of $\varphi_{\mathrm{ABL}}$ in the span of a small number of shifts of $a_0$. This language is made precise in the form of the subsphere

$$
\mathcal{S}_{\mathcal{I}} \doteq \left\{ \sum_{\ell \in \mathcal{I}} \alpha_{\ell} s_{\ell}[\iota \boldsymbol{a}_{0}] : \alpha_{\ell} \in \mathbb{R} \right\} \cap \mathbb{S}^{p-1}, \tag{13}
$$

spanned by a small set of cyclic shifts of $\iota a_0$. Although we will not discuss the explicit distance function here, the characterization by Kuo et al. (2019) holds whenever $\pmb{a}$ is close enough to such a subsphere with $|\mathcal{I}| \leqslant 4\theta p_0$, where $\theta$ is the probability that any individual entry of $\pmb{x}_0$ is nonzero. Suppose we have $\pmb{a} \approx \sum_{\ell \in \mathcal{I}} \alpha_\ell s_\ell [\iota a_0]$ for some appropriate index set $\mathcal{I}$. Note that if $\mu_s(a_0) \approx 0$, then $\mu_s(a) \approx 0$ for all $a \in \mathcal{S}_{\mathcal{I}}$. Now let $\alpha_{(1)}$ and $\alpha_{(2)}$ be the first and second largest coordinates of the shifts participating in $\pmb{a}$, and let $s_{(1)}[a_0]$ and $s_{(2)}[a_0]$ be the corresponding shifts. Then:

- If $\left|\frac{\alpha_{(2)}}{\alpha_{(1)}}\right| \approx 0$, then $\pmb{a}$ is in a strongly convex region of $\varphi_{\mathrm{ABL}}$, containing a single local minimizer corresponding to $s_{(1)}[a_0]$.
- If $\left|\frac{\alpha_{(2)}}{\alpha_{(1)}}\right| \approx 1$, then $\pmb{a}$ is near a saddle point, with negative curvature pointing towards $s_{(1)}[a_0]$ and $s_{(2)}[a_0]$. If $\left|\frac{\alpha_{(3)}}{\alpha_{(2)}}\right| \approx 0$, i.e. $s_{(1)}[a_0]$ and $s_{(2)}[a_0]$ are the only two participating shifts, then $\varphi_{\mathrm{ABL}}$ is also characterized by positive curvature in all orthogonal directions.
- Otherwise, $\langle -\mathrm{grad}\,\varphi_{\mathrm{ABL}}(\pmb{a}), \pmb{u} - \pmb{a}\rangle$ takes on a large positive value for either $\pmb{u} = s_{(1)}[a_0]$ or $\pmb{u} = s_{(2)}[a_0]$, i.e. the negative Riemannian gradient is large and points towards one of the participating shifts.

![](images/19caf4fb172a8ffb6ee36f2023e3b8755a2de356719445ab66f4a14c05c269a1.jpg)

![](images/26af8ad362730894a80e30379bd48b74711343b6f495920bb674fc2c88d71c1e.jpg)

![](images/5098785a3731c2c1f4faead022c3e37326cfe115abdb3e0d95731d6aaae78cc6.jpg)

![](images/8eda44201988bbc931910d3e99616335268bc96569da8339118f228f6242c078.jpg)
Figure 9: Data-driven initialization for $\pmb{a}$: using a piece of the observed data $\pmb{y}$ to generate a good initial point $\pmb{a}^{(0)}$. Top: data $\pmb{y} = \pmb{a}_0 \circledast \pmb{x}_0$ is a superposition of shifts of the true kernel $\pmb{a}_0$. Bottom: a length- $p_0$ window contains pieces of just a few shifts. Bottom-center: one step of the generalized power method approximately fills in the missing pieces, yielding an initialization that is close to a linear combination of shifts of $\pmb{a}_0$ (right).

![](images/ae74ad1179e60e4e97466864747d1a86635c7a4e2c7aeba706d08b03747f59da.jpg)

![](images/3dbbe848a50e783373d3b078357f2fc5963486f6cd0ab4b1c0d5aed5a1fee7ed.jpg)

This is an example of a ridable saddle property (Jin et al., 2017), which allows many first- and second-order methods to locate local minimizers. Since all local minimizers of $\varphi_{\mathrm{ABL}}$ near $\mathcal{S}_{\mathcal{I}}$ must correspond to signed shifts of $a_0$, this guarantees that the Approximate Bilinear Lasso formulation can be efficiently solved to recover $a_0$ (and subsequently $x_0$) for incoherent $a_0$, as long as $a$ is initialized near some appropriate subsphere and the sparsity-coherence tradeoff $p_0\theta \lesssim (\mu_s(a_0))^{-1/2}$ is satisfied.
We note that this is a poor tradeoff rate, reflecting the fact that the Approximate Bilinear Lasso formulation is impractical: it cannot handle SaSD problems involving kernels with high shift-coherence.

# A.2 DATA-DRIVEN INITIALIZATION

For the SaS-BD problem, we usually initialize $\pmb{x}$ by $\pmb{x}^{(0)} = \mathbf{0}$, so that our initialization is sparse. For the optimization variable $\pmb{a}$, recall from Section 2.2 in the main text that it is desirable to obtain an initialization $\pmb{a}^{(0)}$ which is close to the intersection of $\mathbb{S}^{p-1}$ and a subsphere $\mathcal{S}_{\mathcal{I}}$ spanned by a few shifts of $\pmb{a}_0$. When $\pmb{x}_0$ is sparse, our measurement $\pmb{y}$ is a linear combination of a few shifts of $\pmb{a}_0$. Therefore, an arbitrary consecutive $p_0$-length window $\widetilde{\pmb{y}}_i \doteq [y_i\ y_{i+1}\ \dots\ y_{i+p_0-1}]^T$ of the data $\pmb{y}$ should not be far away from such a subsphere $\mathcal{S}_{\mathcal{I}}$. As illustrated in Figure 9, one step of the generalized power method (Kuo et al., 2019)

$$
\widetilde{\boldsymbol{a}}^{(0)} \doteq \mathcal{P}_{\mathbb{S}^{p-1}} \left( \left[ \boldsymbol{0}_{p_0-1};\ \widetilde{\boldsymbol{y}}_i;\ \boldsymbol{0}_{p_0-1} \right] \right) \tag{14}
$$

$$
\boldsymbol{a}^{(0)} = \mathcal{P}_{\mathbb{S}^{p-1}} \left( -\nabla \varphi_{\mathrm{ABL}} \left( \widetilde{\boldsymbol{a}}^{(0)} \right) \right) \tag{15}
$$

produces a refined initialization that is very close to a subsphere $\mathcal{S}_{\mathcal{I}}$ spanned by a few shifts of $\pmb{a}_0$ with $|\mathcal{I}| \approx \theta p_0$. However, (15) is relatively complicated for such a simple idea. In practice, we find that the simple initialization $\pmb{a}^{(0)} = \widetilde{\pmb{a}}^{(0)}$ from (14) works suitably well for solving SaSD with (9).
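The windowed initialization (14) amounts to only a few lines of code. In the sketch below, the zero-padding length $p_0 - 1$ on each side (so the initialization has length $3p_0 - 2$) is our reading of (14), and the function name is ours:

```python
import numpy as np

def data_driven_init(y, p0, i=0):
    """Windowed initialization (14): take a consecutive length-p0 window
    of the data y, zero-pad it on both sides, and project onto the sphere.

    The pad length p0 - 1 per side (total length 3*p0 - 2) is our reading
    of (14); it leaves room for the partial shifts of a0 that the window
    truncates.  Assumes the chosen window is not identically zero.
    """
    w = y[i:i + p0]
    padded = np.concatenate([np.zeros(p0 - 1), w, np.zeros(p0 - 1)])
    return padded / np.linalg.norm(padded)   # projection P_{S^{p-1}}
```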
# A.3 COMPARISON TO THE BILINEAR LASSO

Although it is easy to see that $\varphi_{\mathrm{ABL}}(\pmb{a})$ and $\varphi_{\mathrm{BL}}(\pmb{a})$ are similar as long as $\mu(\pmb{a}) \approx 0$, it is also clear that these two quantities can be very different when $\mu(\pmb{a})$ is large. This is especially significant when $\mu(\pmb{a}_0)$ is itself large, as the desired solutions for $\pmb{a}$ are then also coherent.

From Figure 10, we can see that these changes are reflected in the low-dimensional subspheres (13) spanned by adjacent shifts of $a_0$. Compared to the incoherent case, $\varphi_{\mathrm{BL}}$ also takes on small values in regions between adjacent shifts, creating a "global valley" on the subsphere. Theoretically, this makes it difficult to ensure exact recovery up to symmetry when $a_0$ is coherent, and the objective function becomes much more complicated. This is not a significant issue in terms of practical computation, however, since adjacent shifts of $a_0$ become indistinguishable as $\mu(a_0) \to 1$, meaning that one only needs to ensure that $a$ lands in the "global valley" to achieve good estimates of $a_0$ up to symmetry.

![](images/3c9d7bf40759d93444ee4e9c38143ede3adbefdc1c20b0b078385cd7b791b3ce.jpg)
(a) $\varphi_{\mathrm{ABL}}$ , incoherent
Figure 10: Low-dimensional subspheres spanned by shifts of $\pmb{a}_0$. Subfigures (a, b) present the optimization landscapes of $\varphi_{\mathrm{ABL}}(\pmb{a})$ and $\varphi_{\mathrm{BL}}(\pmb{a})$, for $\pmb{a} \in \mathbb{S}^{p-1} \cap \operatorname{span}\{\pmb{a}_0, s_1[\pmb{a}_0], s_2[\pmb{a}_0]\}$, with higher values being brighter. The red dots denote the shifts of $\pmb{a}_0$. Subfigure (c) shows the landscape of $\varphi_{\mathrm{BL}}$ when $\pmb{a}_0$ is coherent, which departs significantly from the landscapes of (a, b), but still retains symmetry-breaking curvature.
+ +![](images/b88f4d691ede03033dec5e15bd9b21af9f8e9fe739b16912ea3055383bce2623.jpg) +(b) $\varphi_{\mathrm{BL}}$ , incoherent + +![](images/020b3ade8b095235f14ed724982ac7f008e874f9209431983514ad8a03503832.jpg) +(c) $\varphi_{\mathrm{BL}}$ ,coherent + +# B REWEIGHTED SPARSE PENALIZATION + +When $a_0$ is shift-coherent, minimization of the objective $\Psi_{\mathrm{BL}}$ with respect to $x$ becomes sensitive to perturbations, creating "smudging" effects on the recovered map $x$ . These resolution issues can be remedied with stronger concave regularizers. A simple way of facilitating this with the Bilinear Lasso is to use a reweighting technique (Candes et al., 2008). The basic idea is to adaptively adjust the penalty by considering a weighted variant of the original Bilinear Lasso problem from (9), + +$$ +\min _ {\boldsymbol {a} \in \mathbb {S} ^ {p - 1}, \boldsymbol {x} \in \mathbb {R} ^ {m}} \Psi_ {\mathrm {B L}} ^ {\boldsymbol {w}} (\boldsymbol {a}, \boldsymbol {x}) \doteq \frac {1}{2} \| \boldsymbol {y} - \boldsymbol {a} * \boldsymbol {x} \| _ {2} ^ {2} + \lambda \| \boldsymbol {w} \odot \boldsymbol {x} \| _ {1} \tag {16} +$$ + +where $\boldsymbol{w} \in \mathbb{R}_{+}^{m}$ and $\odot$ denotes the Hadamard product. Here we will set the weights $\boldsymbol{w}$ to be roughly inverse to the magnitude of the true signal $x_{0}$ , i.e., + +$$ +w _ {i} = \frac {1}{\left| x _ {0 , i} \right| + \varepsilon}. \tag {17} +$$ + +# Algorithm 3 Reweighted Bilinear Lasso + +Input: Initializations $\hat{\pmb{a}}^{(0)},\hat{\pmb{x}}^{(0)}$ , penalty $\lambda >0$ + +Output: Local minimizers $\hat{\pmb{a}}^{(j)},\hat{\pmb{x}}^{(j)}$ of $\Psi_{\mathrm{BL}}^{\pmb{w}^{(j)}}$ + +Initialize $\pmb{w}^{(1)} = \mathbf{1}_m, j \gets 1$ . + +while not converged do + +Using the initialization $\left(\hat{\pmb{a}}^{(j-1)}, \hat{\pmb{x}}^{(j-1)}\right)$ and weight $\pmb{w}^{(j)}$ , solve (16) — e.g. 
with iADM — to obtain the solution $\left(\hat{\pmb{a}}^{(j)}, \hat{\pmb{x}}^{(j)}\right)$;

Set $\varepsilon$ with (19) and update the weights as

$$
\boldsymbol{w}^{(j+1)} = \frac{1}{|\hat{\boldsymbol{x}}^{(j)}| + \varepsilon}. \tag{18}
$$

Update $j \gets j + 1$

end while

In addition to choosing $\lambda > 0$, here $\varepsilon > 0$ trades off between sparsification strength (small $\varepsilon$) and algorithmic stability (large $\varepsilon$). Let $|x|_{(i)}$ denote the $i$-th largest entry of $|\pmb{x}|$. For experiments in the main text, we set

$$
\varepsilon = \max \left\{ \left| x \right|_{(\lceil n / \log(m/n) \rceil)}, 10^{-3} \right\}. \tag{19}
$$

Starting with the initial weights $\pmb{w}^{(1)} = \mathbf{1}_m$, Algorithm 3 successively solves (16), updating the weights using (18) at each outer loop iteration $j$. As $j \to \infty$, this method becomes equivalent to replacing the $\ell_1$-norm in (9) with the nonconvex penalty $\sum_{i} \log(|x_i| + \varepsilon)$ (Candes et al., 2008).

We can easily adapt our iADM algorithm to solve this subproblem, by taking the proximal gradient step on $\pmb{x}$ with a different penalty $\lambda_{i}$ for each entry $x_{i}$. Figure 11, as well as the calcium imaging experiments in Section 4.2, Figure 5 of the main text, demonstrate improved estimation as a result of this method.
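One outer-loop weight update of Algorithm 3, combining (18) with the choice of $\varepsilon$ in (19), can be sketched as follows; since the text leaves the parameter $n$ in (19) unspecified, it is kept as a free argument:

```python
import numpy as np

def update_weights(x_hat, n, m, floor=1e-3):
    """One weight update of Algorithm 3: (18) with epsilon set by (19).

    `n` is the rank parameter appearing in (19); the text leaves it
    unspecified, so it is a free parameter here.  `m` is the length
    of the sparse map.
    """
    mags = np.sort(np.abs(x_hat))[::-1]      # |x|_(1) >= |x|_(2) >= ...
    k = int(np.ceil(n / np.log(m / n)))      # order-statistic index from (19)
    eps = max(mags[k - 1], floor)            # (19), 1-indexed order statistic
    return 1.0 / (np.abs(x_hat) + eps)       # (18), elementwise
```

Entries where $|\hat{x}_i|$ is small receive large weights, and are therefore penalized more heavily in the next weighted solve of (16); confidently large entries are penalized less, which is the sparsification effect the reweighting is after.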
+ +![](images/65d39103368da43a6e1313c844c032d1c8135537f245f68bd1f00b643a4b87ae.jpg) +(a) True map $x_0$ + +![](images/3b5d9b7af09ce780652459a75ed17440abc9802dd51a8cbed3cded50505ca729.jpg) +(c) Noisy $\pmb{y}$ , $\ell_1$ only + +![](images/e72f8e1ba51989cf5a1655b68b9d914328270405cabee2436c641282e337bd42.jpg) +(e) Noisy $\pmb{y}$ , reweighted + +![](images/a1e8bace540159c0814a0ac940d1ac0f4266f3a9b5708d553d0823789e5a2487.jpg) +(b) True motif $a_0$ + +![](images/6348065b8035601c22c740f9d474072646e6119f6c3be6f9c70da525588fe4d5.jpg) +(d) Noisy $\pmb{a}$ , $\ell_{1}$ only + +![](images/39c2ccb54ce34ea13dc3811007249e4c1d3c6a8bfb48a2c99e0e4207a7767e75.jpg) +(f) Noisy $a$ , reweighted +Figure 11: Recovery of $x_0$ with $\ell_1$ -reweighting. (a, b) Truth signals. (c) Solving $\min_x \Psi_{\mathrm{BL}}(a, x)$ with noisy data and coherent $a_0$ leads to low-quality estimates of $x$ ; (d) performance suffers further when $a$ is a noisy estimate of $a_0$ . (e, f) Reweighted $\ell_1$ minimization alleviates this issue significantly. + +# C EXTENSION FOR CONVOLUTIONAL DICTIONARY LEARNING + +![](images/daacc2fa4faadb9d4615edc26f45c04c2518fe026240fc20f97336aac207825e.jpg) +Figure 12: Convolutional dictionary learning. Simultaneous recovery for multiple unknown kernels $\{\pmb{a}_{0,k}\}_{k=1}^{N}$ and sparse activation maps $\{\pmb{x}_{0,k}\}_{k=1}^{N}$ from $\pmb{y} = \sum_{k=1}^{N} \pmb{a}_{0,k} * \pmb{x}_{0,k}$ . + +![](images/36c58a6391c0745a0700d21d5037097000eb2a4a7135ccbc777c8a77fe7ddab7.jpg) + +![](images/27103a2c58803c84ee0b15729dd5d4a27dfd6c25de328e106f95a3ae351d6b83.jpg) + +![](images/a5df61e562e9b8c4b05cf75dee4e892fc38ea6eb1295bffab79750ddc2433a95.jpg) +(a) PSF in 2D +Figure 13: Estimated PSF for STORM imaging. The left hand side shows the estimated $8 \times 8$ PSF in 2D, the right hand side visualizes the PSF in 3D. 
![](images/57fbd79c215b4d25a8c02ee959aa013cd55510190c745347fd9971148c4b13a7.jpg)
(b) PSF in 3D

The optimization methods we introduced for SaSD can be naturally extended to sparse blind deconvolution problems with multiple kernels/motifs (a.k.a. convolutional dictionary learning; see Garcia-Cardona & Wohlberg (2018)), which have broad applications in microscopy data analysis (Yellin et al., 2017; Zhou et al., 2014; Cheung et al., 2018) and neural spike sorting (Ekanadham et al., 2011; Rey et al., 2015; Song et al., 2018). As illustrated in Figure 12, the new observation $\pmb{y}$ is the sum of $N$ convolutions between short kernels $\{\pmb{a}_{0,k}\}_{k=1}^{N}$ and sparse maps $\{\pmb{x}_{0,k}\}_{k=1}^{N}$,

$$
\boldsymbol {y} = \sum_ {k = 1} ^ {N} \boldsymbol {a} _ {0, k} * \boldsymbol {x} _ {0, k}, \quad \boldsymbol {a} _ {0, k} \in \mathbb {R} ^ {p _ {0}}, \quad \boldsymbol {x} _ {0, k} \in \mathbb {R} ^ {m}, \quad (1 \leqslant k \leqslant N). \tag {20}
$$

The natural extension of SaSD, then, is to recover $\{\pmb{a}_{0,k}\}_{k=1}^{N}$ and $\{\pmb{x}_{0,k}\}_{k=1}^{N}$ up to signed, shift, and permutation ambiguities, leading to the SaS convolutional dictionary learning (SaS-CDL) problem. The SaSD problem can be seen as a special case of SaS-CDL with $N = 1$. Based on the Bilinear Lasso formulation in (9) for solving SaSD, we constrain all kernels $\pmb{a}_{k}$ to the sphere and consider the following nonconvex objective:

$$
\min _ {\left\{\boldsymbol {a} _ {k} \right\} _ {k = 1} ^ {N}, \left\{\boldsymbol {x} _ {k} \right\} _ {k = 1} ^ {N}} \frac {1}{2} \left\| \boldsymbol {y} - \sum_ {k = 1} ^ {N} \boldsymbol {a} _ {k} * \boldsymbol {x} _ {k} \right\| _ {2} ^ {2} + \lambda \sum_ {k = 1} ^ {N} \| \boldsymbol {x} _ {k} \| _ {1}, \quad \text {s.t.} \quad \boldsymbol {a} _ {k} \in \mathbb {S} ^ {p - 1} \quad (1 \leqslant k \leqslant N).
\tag {21}
$$

Similar to the idea of solving the Bilinear Lasso in (9), we optimize (21) via iADM, taking alternating descent steps on $\{\pmb{a}_k\}_{k=1}^N$ and $\{\pmb{x}_k\}_{k=1}^N$ with the other block of variables fixed.

# D SUPER-RESOLUTION WITH STORM IMAGING

For point source localization in STORM frames, recall that we use the SaS model from Section 4.2.2,

$$
\underbrace {\boldsymbol {Y} _ {t}} _ {\text {STORM frame}} = \underbrace {\boldsymbol {A} _ {0}} _ {\text {point spread function}} * \underbrace {\boldsymbol {X} _ {0, t}} _ {\text {sparse point sources}} + \underbrace {\boldsymbol {N} _ {t}} _ {\text {noise}}. \tag {22}
$$

We then apply our SaSD method to recover both $A_0$ and $X_{0,t}$ from $Y_{t}$. We show our recovery of $X_{0,t}$ as well as the super-resolved image using all available frames in Figure 6 of the main text. Since the main objective of STORM imaging is to recover the point sources, we have deferred the recovered PSF $A_0$ to Figure 13 here.
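To make the multi-kernel objective in (21) concrete, here is a short numpy sketch that evaluates it. We assume circular convolution with kernels zero-padded to the signal length (the paper's convolution convention may differ), and the function names are illustrative.

```python
import numpy as np

def cconv(a, x, m):
    """Circular convolution of a zero-padded kernel a with x, output length m."""
    return np.real(np.fft.ifft(np.fft.fft(a, n=m) * np.fft.fft(x, n=m)))

def sas_cdl_objective(y, kernels, maps, lam):
    """Eq. (21): 0.5 * ||y - sum_k a_k * x_k||_2^2 + lam * sum_k ||x_k||_1."""
    m = len(y)
    residual = y - sum(cconv(a, x, m) for a, x in zip(kernels, maps))
    return 0.5 * np.sum(residual ** 2) + lam * sum(np.abs(x).sum() for x in maps)
```

An iADM-style solver would alternate descent steps on this objective over the maps $\{x_k\}$ and the sphere-constrained kernels $\{a_k\}$, as described above.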
\ No newline at end of file diff --git a/shortandsparsedeconvolutionageometricapproach/images.zip b/shortandsparsedeconvolutionageometricapproach/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..f48b1fd392b3e1a660f3fac0acb0b69f0eb52e84 --- /dev/null +++ b/shortandsparsedeconvolutionageometricapproach/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aed6518bdd3b79093534371da96f7790391d54427a53e249d48f8792fec859d2 +size 647378 diff --git a/shortandsparsedeconvolutionageometricapproach/layout.json b/shortandsparsedeconvolutionageometricapproach/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..0a4c7df471cc46a080f07c0ddddc88a5fa7d7082 --- /dev/null +++ b/shortandsparsedeconvolutionageometricapproach/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1ae8b279ca375c0a84411aab3b5332e83f62373f77e0ba72a9ab57d67e29f52c +size 926278 diff --git a/signbitsareallyouneedforblackboxattacks/0b6f4bf1-0e62-4a8a-b1b0-3dbd2915ce5e_content_list.json b/signbitsareallyouneedforblackboxattacks/0b6f4bf1-0e62-4a8a-b1b0-3dbd2915ce5e_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..8c3aa9bfb7590b35d62d8f3356abcba49af10632 --- /dev/null +++ b/signbitsareallyouneedforblackboxattacks/0b6f4bf1-0e62-4a8a-b1b0-3dbd2915ce5e_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d4d2ff7ea8639a745d92b61c424e4f8d9f317700da8bad483db7da92e4bcc695 +size 146884 diff --git a/signbitsareallyouneedforblackboxattacks/0b6f4bf1-0e62-4a8a-b1b0-3dbd2915ce5e_model.json b/signbitsareallyouneedforblackboxattacks/0b6f4bf1-0e62-4a8a-b1b0-3dbd2915ce5e_model.json new file mode 100644 index 0000000000000000000000000000000000000000..9758efcadc50eed86988c374832009bca10833f9 --- /dev/null +++ b/signbitsareallyouneedforblackboxattacks/0b6f4bf1-0e62-4a8a-b1b0-3dbd2915ce5e_model.json @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:b5384f8c73dc2e1a89f6c7270f85a8ccb542db0e80dbc00f7d676a00193a4b50 +size 168411 diff --git a/signbitsareallyouneedforblackboxattacks/0b6f4bf1-0e62-4a8a-b1b0-3dbd2915ce5e_origin.pdf b/signbitsareallyouneedforblackboxattacks/0b6f4bf1-0e62-4a8a-b1b0-3dbd2915ce5e_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..c067087ca394334e20dda8dac7059b2421453aee --- /dev/null +++ b/signbitsareallyouneedforblackboxattacks/0b6f4bf1-0e62-4a8a-b1b0-3dbd2915ce5e_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7fbe4993bccc1b9abf0c250ee8cea9008ad40fecfa578e86acdab44a50767146 +size 2967844 diff --git a/signbitsareallyouneedforblackboxattacks/full.md b/signbitsareallyouneedforblackboxattacks/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a99153b5a7efa8c396ab6cf79e13df9892639bf7 --- /dev/null +++ b/signbitsareallyouneedforblackboxattacks/full.md @@ -0,0 +1,593 @@ +# SIGN BITS ARE ALL YOU NEED FOR BLACK-BOX ATTACKS + +Abdullah Al-Dujaili + +CSAIL, MIT + +Cambridge, MA 02139 + +aldujail@mit.edu + +Una-May O'Reilly + +CSAIL, MIT + +Cambridge, MA 02139 + +unamay@csail.mit.edu + +# ABSTRACT + +We present a novel black-box adversarial attack algorithm with state-of-the-art model evasion rates for query efficiency under $\ell_{\infty}$ and $\ell_2$ metrics. It exploits a sign-based, rather than magnitude-based, gradient estimation approach that shifts the gradient estimation from continuous to binary black-box optimization. It adaptively constructs queries to estimate the gradient, one query relying upon the previous, rather than re-estimating the gradient each step with random query construction. Its reliance on sign bits yields a smaller memory footprint and it requires neither hyperparameter tuning or dimensionality reduction. 
Further, its theoretical performance is guaranteed and it can characterize adversarial subspaces better than white-box gradient-aligned subspaces. On two public black-box attack challenges and a model robustly trained against transfer attacks, the algorithm's evasion rates surpass all submitted attacks. For a suite of published models, the algorithm is $3.8\times$ less failure-prone while spending $2.5\times$ fewer queries versus the best combination of state-of-the-art algorithms. For example, it evades a standard MNIST model using just 12 queries on average. Similar performance is observed on a standard IMAGENET model with an average of 579 queries.

# 1 INTRODUCTION

Problem. Deep Neural Networks (DNNs) are vulnerable to adversarial examples, which are malicious inputs designed to fool the model's prediction—see (Biggio and Roli, 2018) for a comprehensive, recent overview of adversarial examples. Research on generating these malicious inputs started in the white-box setting, where access to the gradients of the models is assumed. Since the gradient points in the direction of steepest ascent, an input can be perturbed along the gradient's direction to maximize the network's loss, thereby potentially causing misclassification under class prediction, e.g. with images, or evasion under detection, e.g. with malware. The assumption of access to the underlying gradient does not, however, reflect real-world scenarios. Attack algorithms under a more realistic, restrictive black-box threat model, which assumes access to predictions in lieu of gradients, are therefore studied. Central to their approaches is estimating the gradient. To estimate the magnitudes and signs of the gradient, the community at large has formulated a continuous optimization problem of $O(n)$ complexity, where $n$ is the input dimensionality. Most recent work has sought to reduce this complexity by means of data-/time-dependent priors (Ilyas et al., 2019).
In this paper, we take a different tack and reduce the central problem to just estimating the signs of the gradients. Our intuition arises from observing that estimating the sign of the top $30\%$ gradient coordinates by magnitude is enough to achieve a rough misclassification rate of $70\%$. Figure 1, reproducing Ilyas et al. (2019), illustrates this observation for the MNIST dataset—see Appendix A for other datasets. Our goal, therefore, is to recover the sign of the gradient with high query efficiency so we can use it to generate adversarial examples as effective as those generated by full gradient estimation approaches.

Related Work. We organize the related work into two themes, namely Adversarial Example Generation and Sign-Based Optimization. The literature of the first theme primarily divides into white-box and black-box settings. The white-box setting, while not the focus of this work, follows from the works of Biggio et al. (2013) and Goodfellow et al. (2015), who introduced the Fast Gradient Sign Method (FGSM), and includes several methods to produce adversarial examples for various learning tasks and threat perturbation constraints (Carlini and Wagner, 2017; Moosavi-Dezfooli et al., 2016; Hayes and Danezis, 2017; Al-Dujaili et al., 2018; Kurakin et al., 2017; Shamir et al., 2019). Turning to the black-box setting and iterative optimization schemes, Narodytska and Kasiviswanathan (2017), without using any gradient information, use a naive policy of perturbing random segments of an image to generate adversarial examples. Bhagoji et al. (2017) reduce the dimensions of the feature space using Principal Component Analysis (PCA) and random feature grouping before estimating gradients. Chen et al. (2017) introduce a principled approach by using gradient-based optimization. They employ finite differences, a zeroth-order optimization means, to estimate the gradient and then use it to design a gradient-based attack.
While this approach successfully generates adversarial examples, it is expensive in the number of model queries. Ilyas et al. (2018) substitute traditional finite differences methods with Natural Evolutionary Strategies (NES) to obtain an estimate of the gradient. Tu et al. (2018) provide an adaptive random gradient estimation algorithm that balances query counts and distortion, and introduce a trained auto-encoder to achieve attack acceleration. Ilyas et al. (2019) extend this line of work by proposing the idea of gradient priors and bandits: Bandits $_{TD}$. Our work contrasts with the general approach of these works in two ways: a) We focus on estimating the sign of the gradient and investigate whether this estimation suffices to efficiently generate adversarial examples. b) The above methods employ random sampling in constructing queries to the model, while our construction is adaptive. $^1$ Another approach involves learning adversarial examples on one model (with access to its gradient information) to transfer them against another (Liu et al., 2016; Papernot et al., 2017). Alternately, Xiao et al. (2018) use a Generative Adversarial Network (GAN) to generate adversarial examples based on small norm-bounded perturbations. These methods involve learning on a different model, which is expensive, and are not amenable to comparison with setups—including ours—that directly query the model of interest.

Sign-Based Optimization. In the context of general-purpose continuous optimization methods, sign-based stochastic gradient descent was studied in both zeroth- and first-order setups. In the latter, Bernstein et al. (2018) analyzed signSGD, a sign-based Stochastic Gradient Descent, and showed that it enjoys faster empirical convergence than SGD, in addition to the cost reduction of communicating gradients across multiple workers. Liu et al. (2019) extended signSGD to the zeroth-order setup with the ZO-SignSGD algorithm.
ZO-SignSGD (Liu et al., 2019) was shown to outperform NES against a black-box model on MNIST. These approaches use the sign of the gradient (or its zeroth-order estimate) to achieve better convergence, whereas our approach both estimates and uses the sign of the gradient.

Contributions. We present the following contributions at the intersection of adversarial machine learning and black-box (zeroth-order) optimization: 1) We exploit the separability property of the directional derivative of the loss function of the model under attack in the direction of $\{\pm 1\}^n$ vectors to propose a divide-and-conquer, adaptive, memory-efficient algorithm, which we name SignHunter, to estimate the gradient sign bits. 2) We provide a worst-case theoretical guarantee on the number of queries required by SignHunter to perform at least as well as FGSM (Goodfellow et al., 2015), which has access to the model's gradient. To our knowledge, no black-box attack from the literature offers a similar performance guarantee. 3) We evaluate our approach with a rigorous set of experiments on both standard and adversarially hardened models. All other previous works on this topic have published their results on a subset of the datasets and threat models we experimentally validate in this work. Through these experiments, we demonstrate that SignHunter's adaptive search for the gradient sign allows it to craft adversarial examples within a mere fraction of the theoretical number of queries, thus outperforming FGSM and state-of-the-art black-box attacks. 4) We release a software framework to systematically benchmark adversarial black-box attacks, including SignHunter's, on MNIST, CIFAR10, and IMAGENET models in terms of success rate, query count, and other metrics. 5) We demonstrate how SignHunter can be used to characterize adversarial cones in a black-box setup and, in doing so, highlight the gradient masking effect.

![](images/c474b48e05ab035454bbcba661f8c8f58799ffe6f43613454eff55f1f97fb716.jpg)
Figure 1: Misclassification rate of an MNIST model on the noisy FGSM's adversarial examples as a function of correctly estimated coordinates of $\mathrm{sign}(\nabla_{\pmb{x}}f(\pmb{x},y))$ on 1000 random MNIST images. Estimating the sign of the top $30\%$ gradient coordinates (in terms of their magnitudes) is enough to achieve a rough misclassification rate of $70\%$. More details can be found in Appendix A.

Notation. Let $n$ denote the dimension of datapoint $\pmb{x}$. Denote a hidden $n$-dimensional binary code by $\pmb{q}^*$. That is, $\pmb{q}^* \in \mathcal{H} \equiv \{-1, +1\}^n$. Further, denote the directional derivative of some function $f$ at a point $\pmb{x}$ in the direction of a vector $\pmb{v}$ by $D_{\pmb{v}}f(\pmb{x}) \equiv \pmb{v}^{T}\nabla_{\pmb{x}}f(\pmb{x})$, which can often be approximated by the finite difference method. That is, for $\delta > 0$, we have

$$
D _ {\boldsymbol {v}} f (\boldsymbol {x}) = \boldsymbol {v} ^ {T} \nabla_ {\boldsymbol {x}} f (\boldsymbol {x}) \approx \frac {f (\boldsymbol {x} + \delta \boldsymbol {v}) - f (\boldsymbol {x})}{\delta}. \tag {1}
$$

Let $\Pi_S(\cdot)$ be the projection operator onto the set $S$, and $B_p(\pmb{x}, \epsilon)$ the $\ell_p$ ball of radius $\epsilon$ around $\pmb{x}$.

# 2 GRADIENT ESTIMATION

At the heart of black-box adversarial attacks is generating a perturbation vector to slightly modify the original input $\pmb{x}$ so as to fool the network's prediction of its true label $y$. Put differently, an adversarial example $\pmb{x}'$ maximizes the network's loss $L(\pmb{x}', y)$ but still remains $\epsilon$-close to the original input $\pmb{x}$. Although the loss function $L$ can be non-concave, gradient-based techniques are often very successful in crafting adversarial examples (Madry et al., 2017).
That is, the perturbation vector is set as a step in the direction of $\nabla_{\pmb{x}} L(\pmb{x}, y)$. Consequently, the bulk of black-box attack methods try to estimate the gradient by querying an oracle that returns, for a given input/label pair $(\pmb{x}, y)$, the value of the network's loss $L(\pmb{x}, y)$, rather than consulting prediction or classification accuracy. Using only such value queries, the basic approach relies on the finite difference method to approximate the directional derivative (Eq. 1) of the function $L$ at the input/label pair $(\pmb{x}, y)$ in the direction of a vector $\pmb{v}$, which corresponds to $\pmb{v}^T \nabla_{\pmb{x}} L(\pmb{x}, y)$. With $n$ linearly independent vectors $\{\pmb{v}_i\}_{1 \leq i \leq n}$ and the corresponding directional derivatives $\{d_i = \pmb{v}_i^T \nabla_{\pmb{x}} L(\pmb{x}, y)\}_{1 \leq i \leq n}$, one can construct a linear system of equations to recover the full gradient. Clearly, this approach's query complexity is $O(n)$, which can be prohibitively expensive for large $n$ (e.g., $n = 268{,}203$ for the IMAGENET dataset). Recent works try to mitigate this issue by exploiting data- and/or time-dependent priors (Tu et al., 2018; Ilyas et al., 2018; 2019). However, the queries are not adaptive: they are constructed from i.i.d. random vectors $\{\pmb{v}_i\}$ and fail to make use of past queries' responses to construct new queries and recover the full gradient more efficiently. As stated in the introduction, we solve the smaller problem of gradient sign estimation with adaptive queries, based on the observation that simply leveraging (noisy) sign bits of the gradient yields successful attacks—see Figure 1.

Definition 1. (Gradient Sign Estimation Problem) For an input/label pair $(\pmb{x},y)$ and a loss function $L$, let $\pmb{g}^{*} = \nabla_{\pmb{x}}L(\pmb{x},y)$ be the gradient of $L$ at $(\pmb{x},y)$ and $\pmb{q}^{*} = \mathrm{sign}(\pmb{g}^{*}) \in \mathcal{H}$ be the sign bit vector of $\pmb{g}^{*}$.
Then the goal of the gradient sign estimation problem is to find a binary vector $\pmb{q} \in \mathcal{H}$ maximizing the directional derivative

$$
\max _ {\boldsymbol {q} \in \mathcal {H}} D _ {\boldsymbol {q}} L (\boldsymbol {x}, y), \tag {2}
$$

from a limited number of (possibly adaptive) function value queries $L(\pmb{x}', y)$.

# 3 A METHOD FOR ESTIMATING SIGN OF THE GRADIENT FROM ADAPTIVE QUERIES

Our goal is to estimate the gradient sign bits of the loss function $L$ of the model under attack at an input/label pair $(x, y)$ from a limited number of loss value adaptive queries $L(x', y)$. To this end, we examine the basic concept of directional derivatives that has been employed in recent black-box adversarial attacks. Based on the definition of the directional derivative (Eq. 1), the following can be stated.

Property 1 (Separability of $D_qL(\pmb{x},y)$). The directional derivative $D_qL(\pmb{x},y)$ of the loss function $L$ at an input/label pair $(\pmb{x},y)$ in the direction of a binary code $\pmb{q}$ is separable. That is,

$$
\max _ {\boldsymbol {q} \in \mathcal {H}} D _ {\boldsymbol {q}} L (\boldsymbol {x}, y) = \max _ {\boldsymbol {q} \in \mathcal {H}} \boldsymbol {q} ^ {T} \boldsymbol {g} ^ {*} = \sum_ {i = 1} ^ {n} \max _ {q _ {i} \in \{- 1, + 1 \}} q _ {i} g _ {i} ^ {*}. \tag {3}
$$

This reformulates the gradient sign estimation problem from a single $n$-dimensional binary black-box optimization problem into $n$ one-dimensional ones, reducing the search space of sign bits from $2^{n}$ to $2n$. Subsequently, one could recover the gradient sign bits with $n + 2$ queries as follows: i. Start with an arbitrary sign vector $\mathbf{q}$ and compute the directional derivative $D_{\mathbf{q}}L(\mathbf{x},y)$. Using Eq. 1, this requires two queries: $L(\mathbf{x} + \delta \mathbf{q},y)$ and $L(\mathbf{x},y)$. ii.
For the remaining $n$ queries, flip $\mathbf{q}$'s bits (coordinates) one by one and compute the corresponding directional derivative—one query each: $L(\mathbf{x} + \delta \mathbf{q},y)$. iii. Retain bit flips that increase the directional derivative $D_{\mathbf{q}}L(\mathbf{x},y)$ and revert those that do not. This, however, still suffers from the $O(n)$ complexity of full gradient estimation methods. Further, each query recovers at most one sign bit, and the natural question to ask is: can we recover more sign bits per query?

Consider the case where all the gradient coordinates have the same magnitude, i.e., $|g_i^*| = 1$ for all $1 \leq i \leq n$, and let the initial guess $\pmb{q}_1$ have $r$ correct bits and $n - r$ wrong ones. Instead of flipping its bits sequentially, we can flip them all at once to get $\pmb{q}_2 = -\pmb{q}_1$. If $D_{q_2}L(\pmb{x},y)\geq D_{q_1}L(\pmb{x},y)$, then we retain $\pmb{q}_2$ as our best guess with $n - r$ correct bits; otherwise $\pmb{q}_1$ remains. In either case, with three queries, we recover $\max (r,n - r)$ sign bits. One can think of this flip/revert procedure as one of majority voting by the guess's coordinates on whether they agree with their gradient sign counterparts. To see this, let $|g_i^*| = 1$ for all $i$; then the condition $D_{\mathbf{q}_2}L(\mathbf{x},y)\geq D_{\mathbf{q}_1}L(\mathbf{x},y)$ can be written as $n - r - r\geq r - n + r\Rightarrow n\geq 2r$. If the agree votes $r$ are less than half of the total votes $n$, then $\mathbf{q}_2$ is retained. Besides flipping all the coordinates, one can apply the same procedure iteratively to a subset (chunk) of the coordinates $[q_j,\dots ,q_{j + n_i}]$ of the guess vector $\pmb{q}$, recovering $\max (r_i,n_i - r_i)$ sign bits, where $n_i$ and $r_i$ are the length of the $i$th chunk and the number of its correct signs, respectively.

Algorithm 1 SignHunter

$g:\mathcal{H}\to \mathbb{R}$: the black-box function to be maximized over the binary hypercube $\mathcal{H}\equiv \{-1, + 1\} ^n$

    def init(g):
        i <- 0, h <- 0
        g <- g
        s ~ U(H)                      # e.g., [+1, ..., +1]
        done <- false
        g_best <- -inf

    def is_done():
        return done

    def step():
        c_len <- ceil(n / 2^h)
        s[i*c_len : (i+1)*c_len] *= -1        # flip the current chunk
        if g(s) >= g_best:
            g_best <- g(s)                    # keep the flip
        else:
            s[i*c_len : (i+1)*c_len] *= -1    # revert the flip
        increment i
        if i == 2^h:
            i <- 0, increment h
            if h == ceil(log2(n)) + 1:
                done <- true

    def get_current_sign_estimate():
        return s

While the magnitudes of the gradient coordinates may not all be equal as assumed in the example above, our empirical evaluation (see Appendix F) found them to be concentrated. Consequently, and with high probability, their votes on retaining or reverting chunks of sign flips are weighted (by their corresponding gradient magnitudes) similarly.
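A minimal Python sketch of the flip/revert search of Algorithm 1: here `g` is the black-box objective over $\{-1,+1\}^n$ (e.g., the finite-difference directional derivative of Eq. 1), and the all-ones initialization is one concrete choice for the initial draw from $\mathcal{U}(\mathcal{H})$.

```python
import numpy as np

class SignHunter:
    """Divide-and-conquer sign search (a sketch of Algorithm 1)."""

    def __init__(self, g, n):
        self.g = g                  # black-box objective over {-1, +1}^n
        self.n = n
        self.i = 0                  # chunk index within the current level
        self.h = 0                  # level: chunks of length ceil(n / 2^h)
        self.s = np.ones(n)         # initial sign guess (all +1)
        self.done = False
        self.g_best = -np.inf

    def step(self):
        c_len = int(np.ceil(self.n / 2 ** self.h))
        lo, hi = self.i * c_len, (self.i + 1) * c_len
        self.s[lo:hi] *= -1         # flip the current chunk
        val = self.g(self.s)
        if val >= self.g_best:
            self.g_best = val       # keep the flip
        else:
            self.s[lo:hi] *= -1     # revert the flip
        self.i += 1
        if self.i == 2 ** self.h:   # finished this level; halve the chunks
            self.i = 0
            self.h += 1
            if self.h == int(np.ceil(np.log2(self.n))) + 1:
                self.done = True

    def is_done(self):
        return self.done

    def get_current_sign_estimate(self):
        return self.s.copy()
```

For any objective of the form $g(\pmb{s}) = \pmb{s}^T\pmb{g}^*$ with nonzero coordinates, the final pass at chunk length 1 leaves every coordinate agreeing with $\mathrm{sign}(\pmb{g}^*)$; in the attack, the current estimate `s` is usable after every single step.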
That said, if we are at a chunk where the distribution of the gradient coordinate magnitudes is uniform, then the flip/revert procedure could favor recovering a few sign coordinates with large magnitude counterparts over many sign coordinates with small magnitude counterparts. From our experiments on the noisy FGSM, this still suffices to generate adversarial examples: an attack with $30\%$ correct sign bits (those corresponding to the top gradient coordinate magnitudes) is more effective than an attack with $50\%$ correct arbitrary sign bits, as shown in Figure 1. Put differently, we would like to recover as many sign bits as possible with as few queries as possible; however, if we can recover only a few, they should be those that correspond to coordinates with large gradient magnitudes. This notion is in line with the flip/revert procedure.

We employ the above observation in a divide-and-conquer search which we refer to as SignHunter. As outlined in Algorithm 1, the technique starts with an initial guess of the sign vector $\pmb{q}_1$ ($s$ in Algorithm 1). It then proceeds to flip the sign of all the coordinates to get a new sign vector $\pmb{q}_2$,
In this case, SignHunter needs just four queries to recover the entire sign vector independent of $n$ , whereas the sequential bit flipping still require $n + 2$ queries. In the next theorem, we show that SignHunter is guaranteed to perform at least as well as FGSM with $O(n)$ oracle queries. Up to our knowledge, no such guarantees exist for any black-box attack from the literature. + +Theorem 1. (Optimality of SignHunter) Given $2^{\lceil \log(n) + 1 \rceil}$ queries and that the directional derivative is well approximated by the finite-difference (Eq. 1), SignHunter is at least as effective as FGSM (Goodfellow et al., 2015) in crafting adversarial examples. + +The proof can be found in Appendix B. Theorem 1 provides an upper bound on the number of queries required for SignHunter to recover the gradient sign bits, and perform as well as FGSM. In practice (as will be shown in our experiments), SignHunter crafts adversarial examples with a small fraction of this upper bound. The rationale here is that we do not need to recover the sign bits exactly; we rather need a fast convergence to an adversarially helpful sign vector $s$ . In our setup, we use the best sign estimation obtained $s$ so far in a similar fashion to FGSM, whereas full-gradient estimation approaches often employ an iterative scheme of $T$ steps within the perturbation ball $B_{p}(\boldsymbol{x},\epsilon)$ , calling the gradient estimation routine in every step leading to a search complexity of $nT$ . Instead, our gradient sign estimation routine runs at the top level of our adversarial example generation procedure. Further, SignHunter is amenable to parallel hardware architecture and has a smaller memory footprint (just sign bits) and thus can carry out attacks in batches more efficiently. Crafting black-box adversarial attacks with SignHunter is outlined in Algorithm 2. + +# 4 EXPERIMENTS + +We evaluate SignHunter and compare it with established algorithms from the literature: ZO-SignSGD Liu et al. 
(2019), NES Ilyas et al. (2018), and Bandits $_{TD}$ Ilyas et al. (2019) in terms of effectiveness in crafting (without loss of generality) untargeted black-box adversarial examples. To highlight SignHunter's adaptive query construction, we introduce a variant of Algorithm 2, named Rand. At every iteration, Rand's sign vector is sampled uniformly from $\mathcal{H}$. Both $\ell_{\infty}$ and $\ell_{2}$ threat models are considered on the MNIST, CIFAR10, and IMAGENET datasets. Code and data for the experiments can be found at https://bit.ly/3acIHoQ.

Experiments Setup. Our experiment setup is similar to (Ilyas et al., 2019). Each attacker is given a budget of 10,000 oracle queries per attack attempt and is evaluated on 1000 images from the test sets of MNIST and CIFAR10, and the validation set of IMAGENET. We did not find a standard practice for setting the perturbation bound $\epsilon$. For the $\ell_{\infty}$ threat model, we use (Madry et al., 2017)'s bound for MNIST and (Ilyas et al., 2019)'s bounds for both CIFAR10 and IMAGENET. For the $\ell_{2}$ threat model, (Ilyas et al., 2019)'s bound is used for IMAGENET.
MNIST's bound is set based on the sufficient distortions observed in (Liu et al., 2019), which are smaller than the one used in (Madry et al., 2017). We use the observed bound in (Cohen et al., 2019) for CIFAR10. We show results based on standard models, i.e., models that are not adversarially hardened. For MNIST and CIFAR10, the naturally trained models from (Madry et al., 2017)'s MNIST and CIFAR10 challenges are used. For IMAGENET, TensorFlow's Inception (v3) model is used. The loss oracle returns the cross-entropy loss of the respective model. See Appendix C for other general experimental setup details.

Algorithm 2 Black-Box Adversarial Example Generation with SignHunter

$\pmb{x}_{init}$: input to be perturbed, $y_{init}$: $\pmb{x}_{init}$'s true label, $B_p(\cdot, \epsilon)$: $\ell_p$ perturbation ball of radius $\epsilon$, $L$: loss function of the model under attack

     1: delta <- eps              // set finite-difference probe to perturbation bound
     2: x_o <- x_init
     3: define g(q) = ( L(Proj_{B_p(x_init, eps)}(x_o + delta*q), y_init)
                        - L(x_o, y_init) ) / delta
     4: SignHunter.init(g)
     5: // C(.) returns the top class
     6: while C(x) = y_init do
     7:     SignHunter.step()
     8:     s <- SignHunter.get_current_sign_estimate()
     9:     x <- Proj_{B_p(x_init, eps)}(x_o + delta*s)
    10:     if SignHunter.is_done() then
    11:         x_o <- x
    12:         define g as in Line 3 (with x_o updated)
    13:         SignHunter.init(g)
    14: return x

![](images/89d8fb82b223a07e3ede165f701357cf8e5cab54206405157841b1df130ea9a7.jpg)

![](images/0e1eb76115bacd66184035890aa4665248d945c0038ad4a094e50f2a60869b46.jpg)

![](images/d8d11050ef97b5d63cef81f128bf854eb9054fd74e1e5ad84771908106d21501.jpg)

![](images/aad9e3f15cb64b9c6516a88cef831315dbaa685b6bd78b51c5183a9d22df78c8.jpg)

![](images/91c948b3d05a81048c6ccebc381e374708370354994ecd16ea09d1eb7ad90664.jpg)

![](images/0a1b828598c1d2c12285363966a83c5fc5be96b8b69d29ef571b3cf27e17a559.jpg)

Figure 2: Performance of black-box attacks under the $\ell_{\infty}$ and $\ell_{2}$ perturbation constraints. Panels: (a) MNIST $\ell_{\infty}$, (b) CIFAR10 $\ell_{\infty}$, (c) IMAGENET $\ell_{\infty}$, (d) MNIST $\ell_2$, (e) CIFAR10 $\ell_2$, (f) IMAGENET $\ell_2$. The plots show the average number of queries used per successful image for each attack when reaching a specified success rate.

Hyperparameters Setup. While SignHunter does not have any hyperparameters, to fairly compare it with the other algorithms, we tuned their hyperparameters starting with the default values reported by the corresponding authors. The finite difference probe $\delta$ for SignHunter is set to the perturbation bound $\epsilon$, as it is used both for computing the finite difference and for crafting the adversarial examples—see Line 1 in Algorithm 2. This tuning-free aspect of SignHunter offers a robustness advantage over algorithms that require expert hypertuning. Details on the hyperparameter setup are available in Appendix C.

Table 1: Summary of attack effectiveness on CIFAR10 under $\ell_{\infty}$ and $\ell_{2}$ perturbation constraints, with a query limit of 10,000 queries. The Failure Rate $\in [0,1]$ columns list the fraction of failed attacks over 1000 images. The Avg. # Queries columns report the average number of queries made to the loss oracle over successful attacks only.
| Attack | Failure Rate $\ell_{\infty}$ | Failure Rate $\ell_{2}$ | Avg. # Queries $\ell_{\infty}$ | Avg. # Queries $\ell_{2}$ |
| --- | --- | --- | --- | --- |
| Bandits$_{TD}$ | 0.95 | 0.39 | 432.24 | 1201.85 |
| NES | 0.37 | 0.67 | 312.57 | 496.99 |
| Rand | 0.20 | 0.89 | 422.16 | 1018.17 |
| SignHunter | 0.07 | 0.21 | 121.00 | 692.39 |
| ZOSignSGD | 0.37 | 0.80 | 161.28 | 528.35 |
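For intuition, the divide-and-conquer sign search at SignHunter's core can be sketched in a few lines. This is an illustrative re-implementation (the function and variable names are ours, not the authors' code), with the loss oracle abstracted as a black-box function `g` in the spirit of Line 3 of Algorithm 2:

```python
import numpy as np

def sign_hunter(g, n, max_queries):
    """Divide-and-conquer search for a sign vector maximizing g (sketch).

    g: black-box objective mapping a vector in {-1, +1}^n to the
       finite-difference estimate of the directional derivative.
    """
    s = np.ones(n)              # start from the all-plus-ones sign vector
    best = g(s)
    queries = 1
    chunk = n                   # flip the whole vector, then halves, quarters, ...
    while chunk >= 1 and queries < max_queries:
        for start in range(0, n, chunk):
            if queries >= max_queries:
                break
            s[start:start + chunk] *= -1    # flip this block of signs
            val = g(s)
            queries += 1
            if val > best:                  # keep the flip only if it helps
                best = val
            else:
                s[start:start + chunk] *= -1  # otherwise revert the flip
        chunk //= 2                         # refine: halve the block size
    return s
```

At block size 1, the sketch flips every coordinate on its own, which is the regime the paper's $2^{\lceil \log(n) + 1 \rceil}$-query guarantee relies on: for a separable objective such as $g(\pmb{q}) = \pmb{q}^{\top}\pmb{w}$ with nonzero entries, a budget of roughly $2n$ queries suffices to return exactly $\mathrm{sign}(\pmb{w})$.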
Results. Figure 2 shows the trade-off between the success (evasion) rate and the mean number of queries (of the successful attacks, per convention) needed to generate an adversarial example for the MNIST, CIFAR10, and IMAGENET classifiers under the $\ell_{\infty}$ and $\ell_{2}$ perturbation constraints. These plots indicate the average number of queries required for a desired success rate. Table 1 is a tabulated summary of plots (b) and (e) of Figure 2. We observe the following: for any given success rate, SignHunter dominates the previous state-of-the-art approaches in all settings except the IMAGENET $\ell_{2}$ setup, where Bandits$_{TD}$ shows better query efficiency when the desired success rate is roughly greater than 0.35. This is all the more remarkable because Bandits$_{TD}$ exploits tiles, a data-dependent prior, searching over $50 \times 50 \times 3$ dimensions for IMAGENET, while SignHunter searches over the explicit data's $299 \times 299 \times 3$ dimensions: $36 \times$ more dimensions.

$\ell_{\infty}$ vs. $\ell_{2}$ Perturbation Threat. In view of Bandits$_{TD}$'s advantage, SignHunter is remarkably efficient in the $\ell_{\infty}$ setup, achieving a $100\%$ evasion rate using—on average—just 12 queries per image against the MNIST classifier! In the $\ell_{2}$ setup, SignHunter's performance degrades, yet it still outperforms the other algorithms. This is expected, since SignHunter perturbs all the coordinates with the same magnitude, and the $\ell_{2}$ perturbation bound $\epsilon_{2}$ for all the datasets in our experiments is set such that $\epsilon_{2} / \sqrt{n}$ is significantly less than the $\ell_{\infty}$ perturbation bound $\epsilon_{\infty}$. Take the case of MNIST ($n = 28 \times 28$), where $\epsilon_{\infty} = 0.3$ and $\epsilon_{2} = 3$. For SignHunter, the $\ell_{2}$ setup is equivalent to an $\ell_{\infty}$ perturbation bound of $3/28 \approx 0.1$.
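The same per-coordinate argument can be checked numerically for all three datasets, using the dimensions and bounds listed in Table 3 of Appendix C:

```python
import math

# For a full sign perturbation delta * s with s in {-1, +1}^n, the l2 norm is
# delta * sqrt(n); meeting an l2 budget eps_2 therefore forces a per-coordinate
# magnitude of eps_2 / sqrt(n), to be compared with the l_inf budget eps_inf.
datasets = {
    # name: (n, eps_inf, eps_2) -- values from the experimental setup (Table 3)
    "MNIST": (28 * 28, 0.3, 3.0),
    "CIFAR10": (3 * 32 * 32, 12.0, 127.0),
    "IMAGENET": (3 * 299 * 299, 0.05, 5.0),
}
for name, (n, eps_inf, eps_2) in datasets.items():
    per_coord = eps_2 / math.sqrt(n)
    assert per_coord < eps_inf      # the l2 setup is the stricter one for SignHunter
    print(f"{name}: eps_2/sqrt(n) = {per_coord:.3f} vs eps_inf = {eps_inf}")
```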
The employed $\ell_{2}$ perturbation bounds give the state-of-the-art—continuous optimization based—approaches more perturbation options. For instance, it is possible for NES to perturb just one pixel in an MNIST image by a magnitude of 3; two pixels by a magnitude of $3/\sqrt{2} \approx 2.1$ each; ten pixels by a magnitude of $3/\sqrt{10} \approx 0.9$ each, etc. On the other hand, the binary optimization view of SignHunter limits it to always perturb all $28 \times 28$ pixels by a magnitude of $3/28 \approx 0.1$. Despite its fewer degrees of freedom, SignHunter maintains its effectiveness in the $\ell_{2}$ setup. The plots can also be read as a sensitivity assessment of SignHunter as $\epsilon$ gets smaller going from the $\ell_{\infty}$ to the $\ell_{2}$ perturbation threat.

SignHunter vs. FGSM. The performance of SignHunter is in line with Theorem 1 when compared with the performance of FGSM (the noisy FGSM at $k = 100\%$ in Figures 4 and 5 of Appendix A) in both the $\ell_{\infty}$ and $\ell_{2}$ setups across all datasets. For instance, FGSM has a failure rate of 0.32 for CIFAR10 $\ell_{2}$ (Appendix A, Figure 5 (b)), while SignHunter achieves a failure rate of 0.21 with $692.39 < 2n = 2 \times 3 \times 32 \times 32 = 6144$ queries (Appendix D, Table 9). Note that for IMAGENET, SignHunter outperforms FGSM with a query budget of 10,000 queries, a fraction of the theoretical number of queries $2n = 536{,}406$ required to perform at least as well. Incorporating SignHunter in an iterative framework that keeps perturbing the data point $\pmb{x}$ until the query budget is exhausted (Lines 10 to 14 in Algorithm 2) supports the observation in white-box settings that iterative FGSM—or Projected Gradient Descent (PGD)—is stronger than FGSM (Madry et al., 2017; Al-Dujaili et al., 2018). This is evidenced by the upticks in SignHunter's performance in the MNIST $\ell_{2}$ case (Appendix D, Figure 7), which occur after every iteration (after every other $2 \times 28 \times 28$ queries).
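For reference, the noisy FGSM baseline used in this comparison (defined in Appendix A) amounts to an FGSM step whose sign vector is only partially correct. A minimal sketch, with the helper name and defaults being our own assumptions rather than the authors' implementation:

```python
import numpy as np

def noisy_fgsm(x, grad, eps, k, x_min=0.0, x_max=1.0, rng=None):
    """FGSM step with only the top-k fraction of signs set correctly (sketch).

    The top-k fraction of coordinates (ranked by |grad|) receive the true
    gradient sign; the remaining signs are random, as in the top-k variant
    of Appendix A. At k = 1.0 this reduces to plain FGSM.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = x.size
    s = rng.choice([-1.0, 1.0], size=n)       # random signs everywhere
    n_top = int(round(k * n))
    top = np.argsort(-np.abs(grad))[:n_top]   # indices of largest |grad|
    s[top] = np.sign(grad[top])               # correct signs on the top-k coords
    return np.clip(x + eps * s, x_min, x_max)
```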
Gradient Estimation. Plots of the Hamming similarity capture the number of recovered sign bits, while plots of the average cosine similarity capture the value of Eq. 2. Both SignHunter and Bandits$_{TD}$ consistently optimize both metrics. In general, SignHunter converges faster on the Hamming metric and Bandits$_{TD}$ on the cosine metric, as the former estimates only the signs while the latter estimates the full gradient (signs and magnitudes). This is most obvious in the IMAGENET $\ell_2$ setup (Appendix D, Figure 9). Note that once an attack is successful, the estimated gradient sign at that point is used for the rest of the plot. This explains why, in the $\ell_{\infty}$ settings, SignHunter's plot does not improve compared to its $\ell_2$ counterpart: most of the attacks succeed within the very first few queries made to the loss oracle, so no further refined estimation is required. Another possible reason is that the gradient direction can be very local and may not capture the global loss landscape compared to SignHunter's estimation. More on this is discussed in Section 6.

SignHunter vs. Rand. Given these results, one could argue that SignHunter is effective because it maximally perturbs datapoints to the vertices of their perturbation balls.6 However, Rand's poor performance does not support this argument and highlights the effectiveness of SignHunter's adaptive query construction. Except for the MNIST and CIFAR10 $\ell_{\infty}$ settings, Rand performs worse than the full-gradient estimation approaches, although it perturbs datapoints similarly to SignHunter. Overall, SignHunter is $3.8 \times$ less failure-prone than the state-of-the-art approaches combined, and makes $2.5 \times$ fewer queries over all the images (successful and unsuccessful attacks).7

# 5 SIGNHUNTER VS. DEFENSES

To complement Section 4, we evaluate SignHunter against adversarial training, a way to improve the robustness of DNNs (Madry et al., 2017). Specifically, we attacked the secret models used in public challenges for MNIST and CIFAR10. For IMAGENET, we used ensemble adversarial training, a method that argues security against black-box attacks based on transferability (Tramèr et al., 2017a). Appendix E reports the same metrics used in Section 4 as well as a tabulated summary of the results discussed below.

Public MNIST Black-Box Attack Challenge. In line with the challenge setup, 10,000 test images were used with an $\ell_{\infty}$ perturbation bound of $\epsilon = 0.3$. Although the secret model is released, we treated it as a black box, similar to our experiments in Section 4. No maximum query budget was specified, so we set it to 5,000 queries. This is equal to the number of iterations given to a PGD attack in the white-box setup of the challenge: 100 steps with 50 random restarts. SignHunter's attacks resulted in the lowest model accuracy of $91.47\%$, outperforming all the submitted attacks to the challenge, with an average number of queries of 233 per successful attack. Note that the attacks submitted to the challenge are based on transferability and do not query the model of interest. On the other hand, the most powerful white-box attack by Wang et al. (2018)—as of May 15, 2019—resulted in a model accuracy of $88.42\%$. Further, a PGD attack with 5,000 back-propagations achieves $89.62\%$, in contrast to SignHunter's $91.47\%$ with just 5,000 forward-propagations.

Public CIFAR10 Black-Box Attack Challenge. This challenge's setup is similar to the above, but with an $\ell_{\infty}$ perturbation bound of $\epsilon = 8$. SignHunter's attacks resulted in the lowest model accuracy of $47.16\%$, outperforming all the submitted attacks to the challenge, with an average number of queries of 569 per successful attack.
Similar to the MNIST challenge, all the submitted attacks are based on transferability. On the other hand, the most powerful white-box attack by Zheng et al. (2018)—as of May 15, 2019—resulted in a model accuracy of $44.71\%$. Further, a PGD attack with 200 back-propagations achieves $45.21\%$, in contrast to SignHunter's $47.16\%$ with 5,000 forward-propagations.

Ensemble Adversarial Training on IMAGENET. In line with Tramèr et al. (2017a), we set $\epsilon = 0.0625$ and report the v3adv-ens4 model's misclassification rate over 10,000 random images from IMAGENET's validation set. After 20 queries, SignHunter achieves a top-1 error of $40.61\%$, greater than the $33.4\%$ rate of a series of black-box attacks (including PGD with 20 iterations) transferred from a substitute model. With 1,000 queries, SignHunter breaks the model's robustness with a top-1 error of $90.75\%$.

# 6 CHARACTERIZING ADVERSARIAL CONES WITH SIGNHUNTER

Estimating the size of adversarial cones, the space of adversarial examples in the vicinity of a point, for a model has been a topic of interest for the machine learning community (Tramèr et al., 2017a; Ma et al., 2018; Lu et al., 2018). The Gradient-Aligned Adversarial Subspace (GAAS) method Tramèr et al. (2017b) provides an approximation of the adversarial cone's dimensionality by finding a set of orthogonal perturbations of norm $\epsilon$ that are all adversarial with respect to the model. By linearizing the model's loss function, this reduces to finding orthogonal vectors that are maximally aligned with its gradient $g^{*}$, or with its gradient sign $q^{*}$ in the $\ell_{\infty}$ setup Tramèr et al. (2017a). In Figure 3, we reproduce (Tramèr et al., 2017a, Fig. 2) and show that aligning the orthogonal vectors with SignHunter's estimation (we refer to this approach as SAAS) instead of aligning them with the gradient (GAAS) results in a better approximation of the adversarial cone for the two IMAGENET models considered earlier, even when the number of queries given to SignHunter is just a fraction of the dimensionality $n$. Through its query-efficient finite-difference sign estimation, SignHunter is able to quickly capture the larger-scale variation of the loss landscape in the point's neighborhood, rather than the infinitesimal point-wise variation that the gradient provides, which can be very local. This is important in adversarial settings, where the loss landscape is analyzed in the vicinity of the point Moosavi-Dezfooli et al. (2018); Tramèr et al. (2017a). One interesting observation at $k = 1$ (note that here, $r_1 = q^*$) across all $\epsilon$ is that GAAS finds adversarial directions for fewer points against the v3adv-ens4 model than against the naturally trained model v3, whereas SAAS reports a similar probability of adversarial directions for both. This contrast suggests that ensemble adversarial training Tramèr et al. (2017a) still exhibits the gradient masking effect, where the gradient poorly approximates the global loss landscape.

![](images/d1b0ec943a44cc4d7e57c5ac1e2b60bcd0ec99e9d466a1c57b1ea5c9b598f884.jpg)
(a) $\epsilon = 4 / 255$

![](images/7cb167baed2af6e3ccfda3cf24fccceae4d0427de45a67429ebd305065b5c521.jpg)
(b) $\epsilon = 10 / 255$

![](images/2190aefcd6c5c2acc0db65ef2b01668af9954d224c6e050610984416d07ff5d5.jpg)
(c) $\epsilon = 16 / 255$

Figure 3: Two estimations of the $\ell_{\infty}$ adversarial cones for two IMAGENET models: v3 and v3adv-ens4. The first estimation (GAAS: Gradient-Aligned Adversarial Subspace) finds $k$ orthogonal vectors maximally aligned with the gradient sign $q^{*}$ (Tramèr et al., 2017a). The second (SAAS: SignHunter-Aligned Adversarial Subspace) finds $k$ orthogonal vectors maximally aligned with SignHunter's $s$ (Algorithm 2, Line 8) after 1,000 queries. Similar to (Tramèr et al., 2017a, Figure 2), for 500 correctly classified points $x$ and $\epsilon \in \{4,10,16\}$, we plot the probability that we find at least $k$ orthogonal vectors $r_i$—computed based on (Tramèr et al., 2017a, Lemma 7)—such that $||r_i||_{\infty} = \epsilon$ and $x + r_i$ is misclassified. For both models and for the same points $x$, SAAS finds more orthogonal adversarial vectors $r_i$ than GAAS, thereby providing a better characterization of the space of adversarial examples in the vicinity of a point, albeit without white-box access to the models.

# 7 CONCLUSION

Assuming a black-box threat model, we studied the problem of generating adversarial examples for neural nets and proposed the gradient sign estimation problem as the core challenge in crafting these examples. We formulate the problem as a binary black-box optimization one: maximizing the directional derivative in the direction of $\{\pm 1\}^n$ vectors, approximated by the finite difference of the queries' loss values. The separability property of the directional derivative helped us devise SignHunter, a query-efficient, tuning-free, divide-and-conquer algorithm with a small memory footprint that is guaranteed to perform at least as well as FGSM after $O(n)$ queries. No similar guarantee is found in the literature. In practice, SignHunter needs a mere fraction of this number of queries to craft adversarial examples. The algorithm is one of its kind in constructing adaptive queries instead of queries based on i.i.d. random vectors. Robust to gradient masking, SignHunter can also be used to estimate the dimensionality of adversarial cones.
Moreover, SignHunter achieves the highest evasion rate on two public black-box attack challenges and breaks a model that argues robustness against substitute-model attacks.

# ACKNOWLEDGMENTS

This work was supported by the MIT-IBM Watson AI Lab. We would like to thank Shashank Srikant for his timely help. We are grateful for feedback from Nicholas Carlini and Zico Kolter.

# REFERENCES

Abdullah Al-Dujaili, Alex Huang, Erik Hemberg, and Una-May O'Reilly. Adversarial deep learning for robust detection of binary encoded malware. In 2018 IEEE Security and Privacy Workshops (SPW), pages 76-82. IEEE, 2018.
Jeremy Bernstein, Yu-Xiang Wang, Kamyar Azizzadenesheli, and Animashree Anandkumar. signSGD: Compressed optimisation for non-convex problems. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 560-569, Stockholmsmässan, Stockholm, Sweden, 10-15 Jul 2018. PMLR. URL http://proceedings.mlr.press/v80/bernstein18a.html.
Arjun Nitin Bhagoji, Warren He, Bo Li, and Dawn Song. Exploring the space of black-box attacks on deep neural networks. arXiv preprint arXiv:1712.09491, 2017.
Battista Biggio and Fabio Roli. Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition, 84:317-331, 2018.
Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 387-402. Springer, 2013.
Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39-57. IEEE, 2017.

Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh.
ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pages 15-26. ACM, 2017.
Jeremy Cohen, Elan Rosenfeld, and J. Zico Kolter. Certified adversarial robustness via randomized smoothing. arXiv:1902.02918v1, 2019.
Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2015. URL http://arxiv.org/abs/1412.6572.
Chuan Guo, Jacob R Gardner, Yurong You, Andrew Gordon Wilson, and Kilian Q Weinberger. Simple black-box adversarial attacks. arXiv preprint arXiv:1905.07121, 2019.
Jamie Hayes and George Danezis. Machine learning as an adversarial service: Learning black-box adversarial examples. CoRR, abs/1708.05207, 2017.
Elad Hazan, Adam Klivans, and Yang Yuan. Hyperparameter optimization: A spectral approach. arXiv preprint arXiv:1706.00764, 2017.
Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. Black-box adversarial attacks with limited queries and information. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2137-2146, Stockholmsmässan, Stockholm, Sweden, 10-15 Jul 2018. PMLR. URL http://proceedings.mlr.press/v80/ilyas18a.html.
Andrew Ilyas, Logan Engstrom, and Aleksander Madry. Prior convictions: Black-box adversarial attacks with bandits and priors. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=BkMiWhR5K7.
Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. Adversarial machine learning at scale. 2017. URL https://arxiv.org/abs/1611.01236.
Sijia Liu, Pin-Yu Chen, Xiangyi Chen, and Mingyi Hong. signSGD via zeroth-order oracle. In International Conference on Learning Representations, 2019.
URL https://openreview.net/forum?id=BJe-DsC5Fm. +Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770, 2016. +Pei-Hsuan Lu, Pin-Yu Chen, and Chia-Mu Yu. On the limitation of local intrinsic dimensionality for characterizing the subspaces of adversarial examples. arXiv preprint arXiv:1803.09638, 2018. +Xingjun Ma, Bo Li, Yisen Wang, Sarah M Erfani, Sudanthi Wijewickrema, Grant Schoenebeck, Dawn Song, Michael E Houle, and James Bailey. Characterizing adversarial subspaces using local intrinsic dimensionality. arXiv preprint arXiv:1801.02613, 2018. +Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017. +Seungyong Moon, Gaon An, and Hyun Oh Song. Parsimonious black-box adversarial attacks via efficient combinatorial optimization. arXiv preprint arXiv:1905.06635, 2019. +Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2574-2582, 2016. +Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Jonathan Uesato, and Pascal Frossard. Robustness via curvature regularization, and vice versa. arXiv preprint arXiv:1811.09716, 2018. +Nina Narodytska and Shiva Prasad Kasiviswanathan. Simple black-box adversarial attacks on deep neural networks. In CVPR Workshops, volume 2, 2017. +Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pages 506-519. ACM, 2017. +Adi Shamir, Itay Safran, Eyal Ronen, and Orr Dunkelman. 
A simple explanation for the existence of adversarial examples with small Hamming distance. arXiv preprint arXiv:1901.10861, 2019.

Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204, 2017a.
Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. The space of transferable adversarial examples. arXiv preprint arXiv:1704.03453, 2017b.
Chun-Chen Tu, Paishun Ting, Pin-Yu Chen, Sijia Liu, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, and Shin-Ming Cheng. AutoZOOM: Autoencoder-based zeroth order optimization method for attacking black-box neural networks. arXiv preprint arXiv:1805.11770, 2018.
Shiqi Wang, Yizheng Chen, Ahmed Abdou, and Suman Jana. MixTrain: Scalable training of formally robust neural networks. arXiv preprint arXiv:1811.02625, 2018.
Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, and Dawn Song. Generating adversarial examples with adversarial networks, 2018. URL https://openreview.net/forum?id=HknbyQbC-.
Yun-Bin Zhao. Sparse optimization theory and methods. CRC Press, an imprint of Taylor and Francis, Boca Raton, FL, 2018. ISBN 978-1138080942.
Tianhang Zheng, Changyou Chen, and Kui Ren. Distributionally adversarial attack. arXiv preprint arXiv:1808.05537, 2018.

# APPENDIX A. NOISY FGSM

This section shows the performance of the noisy FGSM on standard models (described in Section 1 of the main paper) on the MNIST, CIFAR10, and IMAGENET datasets. In Figure 4, we consider the $\ell_{\infty}$ perturbation constraint. Figure 5 reports the performance for the $\ell_2$ setup. Similar to Ilyas et al. (2019), for each $k$ in the experiment, the top $k$ percent of the signs of the coordinates—chosen either randomly (random-$k$) or by the corresponding magnitude $|\partial L(\pmb{x},y) / \partial x_i|$ (top-$k$)—are set correctly, and the rest are set to $-1$ or $+1$ at random.
The misclassification rate shown considers only images that were correctly classified (with no adversarial perturbation). In accordance with the models' accuracy, there were 987, 962, and 792 such images for MNIST, CIFAR10, and IMAGENET out of the sampled 1000 images, respectively. These figures also serve as a validation for Theorem 1 of the main paper when compared to SignHunter's performance shown in Appendix D. + +![](images/8b3977d118a64bd1d8882e5517a23ce932803fa95c203ff51d038c9011c42139.jpg) +(a) + +![](images/342af4730756d0dd407ed3afddaf0792f114506a6aeeef29faaef093bef6088b.jpg) +(b) +Figure 4: Misclassification rate of three neural nets (for (a) MNIST, (b) CIFAR10, and (c) IMAGENET, respectively) on the noisy FGSM's adversarial examples as a function of correctly estimated coordinates of $\mathrm{sign}(\nabla_{\pmb{x}}f(\pmb{x},y))$ on random 1000 images from the corresponding evaluation dataset, with the maximum allowed $\ell_{\infty}$ perturbation $\epsilon$ being set to 0.3, 12, and 0.05, respectively. Across all the models, estimating the sign of the top $30\%$ gradient coordinates (in terms of their magnitudes) is enough to achieve a misclassification rate of $\sim 70\%$ . Note that Plot (c) is similar to Ilyas et al. (2019)'s Figure 1, but it is produced with TensorFlow rather than PyTorch. 
+ +![](images/42675b2c7fb82f32ef1fe0fcffe5a4c2617b8d0a6c7a9ffffdb8ceeb4f42db02.jpg) +(c) + +![](images/57459b207c45ba55d0148701362e69ae8f736792851c5af7da79cc19df5b1e10.jpg) +(a) + +![](images/784529cda987dae5ac918791a9807cebb2b2ac9c8f942ccea6c4704ff5cb21b0.jpg) +(b) +Figure 5: Misclassification rate of three neural nets (for (a) MNIST, (b) CIFAR10, and (c) IMAGENET, respectively) on the noisy FGSM's adversarial examples as a function of correctly estimated coordinates of $\mathrm{sign}(\nabla_{\pmb{x}}f(\pmb{x},y))$ on random 1000 images from the corresponding evaluation dataset, with the maximum allowed $\ell_2$ perturbation $\epsilon$ being set to 3, 127, and 5, respectively. Compared to Figure 4, the performance on MNIST and CIFAR10 drops significantly. + +![](images/e9fa112225576351d9a48879ec99a4fe7d947687a550bc82cfcaced86bf5514a.jpg) +(c) + +# APPENDIX B. PROOFS FOR THEOREMS IN THE MAIN PAPER + +In this section, we present a proof of Theorem 1 of Section 3. Note that the theorem makes the assumption that the finite difference is a good approximation of the directional derivative. This assumption has been the core concept behind most of the black-box adversarial attack algorithms and we state it here for completeness. + +Theorem 1. (Optimality of SignHunter) Given $2^{\lceil \log (n) + 1\rceil}$ queries and that the directional derivative is well approximated by the finite-difference (Eq. 1 in the main paper), SignHunter is at least as effective as FGSM (Goodfellow et al., 2015) in crafting adversarial examples. + +Proof. Based on the separability property of the directional derivative, the $i$ th coordinate of the gradient sign vector can be recovered as follows: construct two binary codes $\mathbf{u}$ and $\mathbf{v}$ such that only their $i$ th bit is different. 
Therefore, we have

$$
q_i^* = \operatorname{sign}\left(g_i^*\right) =
\begin{cases}
u_i & \text{if } D_{\pmb{u}} L(\pmb{x}, y) > D_{\pmb{v}} L(\pmb{x}, y), \\
v_i & \text{otherwise.}
\end{cases} \tag{4}
$$

From the definition of SignHunter, this is carried out for all $n$ coordinates after $2^{\lceil \log (n) + 1 \rceil}$ queries. Put differently, after $2^{\lceil \log (n) + 1 \rceil}$ queries, SignHunter has flipped every coordinate on its own, recovering its sign exactly as shown in Eq. 4 above. Therefore, the gradient sign vector is fully recovered, and one can employ the FGSM attack to craft an adversarial example. Note that this holds under the assumption that our finite-difference approximation of the directional derivative (Eq. 1 in the main paper) is good enough (or at least rank-preserving).

# APPENDIX C. EXPERIMENTS SETUP

This section outlines the experiments setup. To ensure a fair comparison among the considered algorithms, we did our best in tuning their hyperparameters. Initially, the hyperparameters were set to the values reported by the corresponding authors, for which we observed suboptimal performance. We made use of a synthetic concave loss function to efficiently tune the algorithms for each dataset $\times$ perturbation constraint combination. The performance curves on the synthetic loss function using the tuned hyperparameter values were consistent with results reported in the literature. For instance, we noted that ZO-SignSGD converges faster than NES, and that Bandits$_{TD}$ outperformed the rest of the algorithms towards the end of the query budget.
Further, in our adversarial examples generation experiments, we observed failure rate and query efficiency in line with the algorithms' corresponding papers—e.g., compare the performance of Bandits $_{TD}$ and NES in Table 9 of Appendix D with (Ilyas et al., 2019, Table 1). That said, we invite the community to provide their best tuned attacks. + +Note that SignHunter does not have any hyperparameters to tune. The finite difference probe $\delta$ for SignHunter is set to the perturbation bound $\epsilon$ as it is used for both computing the finite difference and crafting the adversarial examples—see Line 1 in Algorithm 2 of the main paper. This tuning-free setup of SignHunter offers a robust edge over the state-of-the-art black-box attacks, which often require expert knowledge to carefully tune their parameters. + +Table 3 describes the general setup for the experiments. Table 2 lists the sources of the models we attacked in this work, while Tables 4, 5, 6, and 7 outline the algorithms' hyperparameters. Figure 6 shows the performance of the considered algorithms on a synthetic concave loss function after tuning their hyperparameters. All experiments were run on a CUDA-enabled NVIDIA Tesla V100 16GB. + +A possible explanation of SignHunter's superb performance is that the synthetic loss function is well-behaved in terms of its gradient given an image. That is, most of gradient coordinates share the same sign, since pixels tend to have the same values and the optimal value for all the pixels is the same $\frac{\pmb{x}_{min} + \pmb{x}_{max}}{2}$ . Thus, SignHunter will recover the true gradient sign with as few queries as possible (recall the example in Section 3 of the main paper). Moreover, given the structure of the synthetic loss function, the optimal loss value is always at the boundary of the perturbation region; the boundary is where SignHunter samples its perturbations. + +Table 2: Source of attacked models. + +
| Model | Source |
| --- | --- |
| MNIST models | https://github.com/MadryLab/mnist_challenge |
| CIFAR10 models | https://github.com/MadryLab/cifar10_challenge |
| IMAGENET-v3 model | https://bit.ly/2VYDc4X |
| IMAGENET-v3adv-ens4 model | https://bit.ly/2XWTdkx |
Table 3: General setup for all the attacks.
| Parameter | MNIST $\ell_{\infty}$ | MNIST $\ell_{2}$ | CIFAR10 $\ell_{\infty}$ | CIFAR10 $\ell_{2}$ | IMAGENET $\ell_{\infty}$ | IMAGENET $\ell_{2}$ |
| --- | --- | --- | --- | --- | --- | --- |
| $\epsilon$ (allowed perturbation) | 0.3 | 3 | 12 | 127 | 0.05 | 5 |
| Max allowed queries | 10000 | 10000 | 10000 | 10000 | 10000 | 10000 |
| Evaluation/Test set size | 1000 | 1000 | 1000 | 1000 | 1000 | 1000 |
| Data (pixel value) range | [0,1] | [0,1] | [0,255] | [0,255] | [0,1] | [0,1] |
Table 4: Hyperparameters setup for NES.
| Hyperparameter | MNIST $\ell_{\infty}$ | MNIST $\ell_{2}$ | CIFAR10 $\ell_{\infty}$ | CIFAR10 $\ell_{2}$ | IMAGENET $\ell_{\infty}$ | IMAGENET $\ell_{2}$ |
| --- | --- | --- | --- | --- | --- | --- |
| $\delta$ (finite difference probe) | 0.1 | 0.1 | 2.55 | 2.55 | 0.1 | 0.1 |
| $\eta$ (image $\ell_p$ learning rate) | 0.1 | 1 | 2 | 127 | 0.02 | 2 |
| $q$ (number of finite difference estimations per step) | 10 | 20 | 20 | 4 | 100 | 50 |
Table 5: Hyperparameters setup for ZO-SignSGD.
| Hyperparameter | MNIST $\ell_{\infty}$ | MNIST $\ell_{2}$ | CIFAR10 $\ell_{\infty}$ | CIFAR10 $\ell_{2}$ | IMAGENET $\ell_{\infty}$ | IMAGENET $\ell_{2}$ |
| --- | --- | --- | --- | --- | --- | --- |
| $\delta$ (finite difference probe) | 0.1 | 0.1 | 2.55 | 2.55 | 0.1 | 0.1 |
| $\eta$ (image $\ell_p$ learning rate) | 0.1 | 0.1 | 2 | 2 | 0.02 | 0.004 |
| $q$ (number of finite difference estimations per step) | 10 | 20 | 20 | 4 | 100 | 50 |
Table 6: Hyperparameters setup for Bandits$_{TD}$.
| Hyperparameter | MNIST $\ell_{\infty}$ | MNIST $\ell_{2}$ | CIFAR10 $\ell_{\infty}$ | CIFAR10 $\ell_{2}$ | IMAGENET $\ell_{\infty}$ | IMAGENET $\ell_{2}$ |
| --- | --- | --- | --- | --- | --- | --- |
| $\eta$ (image $\ell_p$ learning rate) | 0.03 | 0.015 | 1 | 2 | 0.01 | 0.1 |
| $\delta$ (finite difference probe) | 0.1 | 0.1 | 2.55 | 2.55 | 0.1 | 0.1 |
| $\tau$ (online convex optimization learning rate) | 0.001 | 0.0001 | 0.0001 | 1e-05 | 0.0001 | 0.1 |
| Tile size (data-dependent prior) | 8 | 10 | 20 | 20 | 50 | 50 |
| $\zeta$ (bandit exploration) | 0.01 | 0.1 | 0.1 | 0.1 | 0.01 | 0.1 |
Table 7: Hyperparameters setup for SignHunter.
| Hyperparameter | MNIST $\ell_{\infty}$ | MNIST $\ell_{2}$ | CIFAR10 $\ell_{\infty}$ | CIFAR10 $\ell_{2}$ | IMAGENET $\ell_{\infty}$ | IMAGENET $\ell_{2}$ |
| --- | --- | --- | --- | --- | --- | --- |
| $\delta$ (finite difference probe) | 0.3 | 3 | 12 | 127 | 0.05 | 5 |
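The synthetic concave loss used throughout this tuning testbed (its closed form is given in Figure 6's caption) is simple enough to state directly; the function name is ours:

```python
import numpy as np

def synthetic_loss(x, x_min=0.0, x_max=1.0):
    """Concave tuning loss L(x) = -(x - x*)^T (x - x*), maximized at the
    midpoint x* = (x_min + x_max) / 2 of the pixel range."""
    x_star = (x_min + x_max) / 2.0
    d = np.asarray(x, dtype=float) - x_star
    return -float(d @ d)
```

Because every coordinate shares the same optimum, most gradient sign coordinates agree for typical images, which is the regime where SignHunter's block flips pay off quickly (as discussed above).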
![](images/10b2b41cd24c1bd5447d596f898617e7c167981e1e4bf05831488b430cb49eb2.jpg)
(a) MNIST $\ell_{\infty}$

![](images/5edf4badcd9c28d609fc50353621d6c7d8a4e60c45a2dbf06416fcdc63390835.jpg)
(b) MNIST $\ell_2$

![](images/b6ae3852b56b31ae768b51b70572dbfa9a50147a3ad1d8e0a06a10fb53d8815f.jpg)
(c) CIFAR10 $\ell_{\infty}$

![](images/d88f3f17ece24d8648ac7a0ebacd35647296a10d09341f62b75c14fa2ef4dbe4.jpg)
(d) CIFAR10 $\ell_{2}$

![](images/a1e4ffd7d57d4ef408cd5c7d3f8e9c570b69a56202fbc51b08302a4beca7e48f.jpg)
(e) IMAGENET $\ell_{\infty}$

![](images/02f7290391b7d7dbc95e821a577cb674769d73dd904b80dca1ef8aa40a673013.jpg)
(f) IMAGENET $\ell_2$

Figure 6: Tuning testbed for the attacks. A synthetic loss function was used to tune the performance of the attacks over a random sample of 25 images for each dataset and $\ell_p$ perturbation constraint. The plots above show the average performance of the tuned attacks on the synthetic loss function $L(\pmb{x},y) = -(\pmb{x} - \pmb{x}^{*})^{T}(\pmb{x} - \pmb{x}^{*})$, where $\pmb{x}^{*} = \frac{\pmb{x}_{min} + \pmb{x}_{max}}{2}$, using a query limit of 1,000 queries for each image. Note that overall, Bandits$_{TD}$ outperforms both NES and ZO-SignSGD. Also, we observe the same behavior reported by Liu et al. (2019) on the fast convergence of ZO-SignSGD compared to NES. We did not tune SignHunter; it does not have any tunable parameters.

# APPENDIX D. RESULTS OF ADVERSARIAL BLACK-BOX EXAMPLES GENERATION

This section shows the results of our experiments in crafting adversarial black-box examples on standard models in the form of tables and performance traces, namely Figures 7, 8, and 9; and Tables 8, 9, and 10.

Table 8: Summary of attack effectiveness on MNIST under $\ell_{\infty}$ and $\ell_{2}$ perturbation constraints, and with a query limit of 10,000 queries. The Failure Rate $\in [0,1]$ column lists the fraction of failed attacks over 1000 images. The Avg. # Queries column reports the average number of queries made to the loss oracle only over successful attacks.
| Attack | Failure Rate $\ell_{\infty}$ | Failure Rate $\ell_{2}$ | Avg. # Queries $\ell_{\infty}$ | Avg. # Queries $\ell_{2}$ |
| --- | --- | --- | --- | --- |
| Bandits$_{TD}$ | 0.68 | 0.59 | 328.00 | 673.16 |
| NES | 0.63 | 0.63 | 235.07 | 361.42 |
| Rand | 0.33 | 0.96 | 847.77 | 1144.74 |
| SignHunter | 0.00 | 0.04 | 11.06 | 1064.22 |
| ZOSignSGD | 0.63 | 0.75 | 157.00 | 881.08 |
+ +Table 9: Summary of attacks effectiveness on CIFAR10 under $\ell_{\infty}$ and $\ell_{2}$ perturbation constraints, and with a query limit of 10,000 queries. The Failure Rate $\in [0,1]$ column lists the fraction of failed attacks over 1000 images. The Avg. # Queries column reports the average number of queries made to the loss oracle only over successful attacks. + +
| Attack | Failure Rate ($\ell_{\infty}$) | Failure Rate ($\ell_2$) | Avg. # Queries ($\ell_{\infty}$) | Avg. # Queries ($\ell_2$) |
| --- | --- | --- | --- | --- |
| Bandits$_{TD}$ | 0.95 | 0.39 | 432.24 | 1201.85 |
| NES | 0.37 | 0.67 | 312.57 | 496.99 |
| Rand | 0.20 | 0.89 | 422.16 | 1018.17 |
| SignHunter | 0.07 | 0.21 | 121.00 | 692.39 |
| ZOSignSGD | 0.37 | 0.80 | 161.28 | 528.35 |
+ +Table 10: Summary of attacks effectiveness on IMAGENET under $\ell_{\infty}$ and $\ell_{2}$ perturbation constraints, and with a query limit of 10,000 queries. The Failure Rate $\in [0,1]$ column lists the fraction of failed attacks over 1000 images. The Avg. # Queries column reports the average number of queries made to the loss oracle only over successful attacks. + +
| Attack | Failure Rate ($\ell_{\infty}$) | Failure Rate ($\ell_2$) | Avg. # Queries ($\ell_{\infty}$) | Avg. # Queries ($\ell_2$) |
| --- | --- | --- | --- | --- |
| Bandits$_{TD}$ | 0.07 | 0.11 | 1010.05 | 1635.55 |
| NES | 0.26 | 0.42 | 1536.19 | 1393.86 |
| Rand | 0.72 | 0.93 | 688.77 | 418.02 |
| SignHunter | 0.02 | 0.23 | 578.56 | 1985.55 |
| ZOSignSGD | 0.23 | 0.52 | 1054.98 | 931.15 |
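The Failure Rate and Avg. # Queries statistics in Tables 8, 9, and 10 can be reproduced from per-image query logs as in the following sketch (the log format and the function name are ours, not part of the paper):

```python
import numpy as np

def summarize_attack(query_counts, query_limit=10_000):
    """Summarize one attack run over an evaluation set.

    query_counts: queries used per image; np.inf (or any value above
    query_limit) marks an image the attack failed to flip in budget.
    Returns (failure_rate, avg_queries), where the average query count
    is taken over successful attacks only, as in the tables.
    """
    q = np.asarray(query_counts, dtype=float)
    success = q <= query_limit
    failure_rate = 1.0 - success.mean()
    avg_queries = q[success].mean() if success.any() else float("nan")
    return failure_rate, avg_queries

# Toy example: 4 images, one exceeds the query budget.
fr, aq = summarize_attack([120, 80, 400, np.inf])  # fr = 0.25, aq = 200.0
```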
+ +![](images/95dbe728c6078f277b7121e5c25ce916d2236da02218d057c339df0f015c1fee.jpg) + +![](images/4e18762ab83f4219c58ff89d1ed9fd9f7732f41d776c0abd2898b2849aab25e7.jpg) + +![](images/d39a40570d34e1c4a84047b08f0b008b46a4b2693e7483741b0a9e64bab07280.jpg) + +![](images/013aad3defa3cf80d49d3d127d2808bcca042c2f575ec61dbd4aa4a513d22ff6.jpg) + +![](images/e850287a3d7df73b2b852345bf9ec3e2c389c7f8ad92b0096cfd939f5c0dfe0c.jpg) + +![](images/5fd2fdd375a38be8e0252351947b422ad14a74958c6e617b1a9b7bf2fe67d451.jpg) + +![](images/4a70a69e991fbea1c7f1b496e0e6bf6c178168e60ebd4a71cc33b6ffc174fb54.jpg) + +![](images/235cc6b1e294269bc0f451865182321f6b828d80755e461a7cd8ecdff0812690.jpg) + +![](images/5c4967cda2bb7a81a9e0a04c30487749c86fabc818ec639bba25b4ccb2f9abd3.jpg) +Figure 7: Performance curves of attacks on MNIST for $\ell_{\infty}$ (first column) and $\ell_{2}$ (second column) perturbation constraints. Plots of Avg. Loss row reports the loss as a function of the number of queries averaged over all images. The Avg. Hamming Similarity row shows the Hamming similarity of the sign of the attack's estimated gradient $\hat{g}$ with true gradient's sign $q^{*}$ , computed as $1 - ||\mathrm{sign}(\hat{g}) - q^{*}||_{H} / n$ and averaged over all images. Likewise, plots of the Avg. Cosine Similarity row show the normalized dot product of $\hat{g}$ and $g^{*}$ averaged over all images. The Success Rate row reports the attacks' cumulative distribution functions for the number of queries required to carry out a successful attack up to the query limit of 10,000 queries. The Avg. # Queries row reports the average number of queries used per successful image for each attack when reaching a specified success rate: the more effective the attack, the closer its curve is to the bottom right of the plot. 
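The Hamming and cosine similarity metrics used in the caption above can be computed as in this minimal numpy sketch (toy gradient values are ours):

```python
import numpy as np

def hamming_similarity(g_hat, g_star):
    """Sign agreement 1 - ||sign(g_hat) - sign(g_star)||_H / n, where
    ||.||_H counts the coordinates on which the two sign vectors differ."""
    return 1.0 - np.mean(np.sign(g_hat) != np.sign(g_star))

def cosine_similarity(g_hat, g_star):
    """Normalized dot product of the estimated and true gradients."""
    return float(np.dot(g_hat, g_star) /
                 (np.linalg.norm(g_hat) * np.linalg.norm(g_star)))

g_star = np.array([0.2, -0.1, 0.4, -0.3])  # "true" gradient (toy values)
g_hat = np.array([0.1, -0.2, 0.5, 0.3])    # estimate with one wrong sign
# hamming_similarity(g_hat, g_star) -> 0.75 (3 of 4 signs agree)
```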
+ +![](images/f773c3a0d570b58ea39ed7adc9b29cbe91e0d89354e37f3058cdf05b79b8e4d4.jpg) + +![](images/a986ebaea8f63e8013ccc66d0f9ec45c582bb2c4e56d709587d7ab11df0e71c5.jpg) + +![](images/c81f5cadf914cf1b77809999c8c7e1ab87b9a0d1a46970738e80d9037598227b.jpg) + +![](images/55e732dba663d8e456bf726b547b671c474653cef4c4882ca6c9140107856986.jpg) + +![](images/4d5d9f4330c0c5159fc050b876644ecd9fa62a2cf33de1a825c187cc67292b0b.jpg) + +![](images/16fa500512807355bbfe8afa9e7ee1af6aa76a5fb6ed52d6b8e0360044faa592.jpg) + +![](images/a0f3fbfb32a9668ab3421c9c5ec810af3bd27ab0c25ed4a670cd0bae14c4df4e.jpg) + +![](images/0ddf2f37b4387b9a6e03739a53f9d5d502f562029830ef5b60ef3c714417e81c.jpg) + +![](images/ac64c7e5a354e24f0089995a9238170fd7b4402ce0dbf43d76571da2ded4820f.jpg) + +![](images/4ad980595f495f3a54dce97f37a5f0711003d9354b31c1f8169998b844df083f.jpg) +Figure 8: Performance curves of attacks on CIFAR10 for $\ell_{\infty}$ (first column) and $\ell_{2}$ (second column) perturbation constraints. Plots of Avg. Loss row reports the loss as a function of the number of queries averaged over all images. The Avg. Hamming Similarity row shows the Hamming similarity of the sign of the attack's estimated gradient $\hat{g}$ with true gradient's sign $q^{*}$ , computed as $1 - ||\mathrm{sign}(\hat{g}) - q^{*}||_{H} / n$ and averaged over all images. Likewise, plots of the Avg. Cosine Similarity row show the normalized dot product of $\hat{g}$ and $g^{*}$ averaged over all images. The Success Rate row reports the attacks' cumulative distribution functions for the number of queries required to carry out a successful attack up to the query limit of 10,000 queries. The Avg. # Queries row reports the average number of queries used per successful image for each attack when reaching a specified success rate: the more effective the attack, the closer its curve is to the bottom right of the plot. 
+ +![](images/88b9f55ae5c8e3f1df21eebbc1efe756c62f675b09ac0d5333fb279342bb9c89.jpg) + +![](images/7b3405799cd31826853fe76fea717497a5a96422d6cf6f79bb3e2bf52a1e2fcb.jpg) + +![](images/dbd4c06ff1feba363077ff6dfb065b228fd5a66ee6924c7fc9079df974d9ad0d.jpg) + +![](images/0bca35916911b5e70dd6ba219180f5bfa47b1cc57b360adcad45c11f1b519ec6.jpg) + +![](images/dc3d12426f446097650abf891c4145a18fafbb6771d5516e969a7cb7407f0d59.jpg) + +![](images/855ccd0358e7cbf0fa5b6bb058b002efbdfb3f303a951d71bb5dcbb2427028c2.jpg) + +![](images/e94fa7a1128cb076673caaf826ec6d16ea93b627d49e9d1bafc2b55b7cd0e07d.jpg) + +![](images/e4ea87006bd00205367daedb74d6f627e4d4415fca45ec531611c6b5be8a6844.jpg) + +![](images/a814b9980970b959f4d9f15172042d203a77ecef277af8ed0cf0f73f629b941d.jpg) + +![](images/7613b854addfda1b5ea21195ad365d46c6b984f9be752b7ad25c4eb2094dffd0.jpg) +Figure 9: Performance curves of attacks on IMAGENET for $\ell_{\infty}$ (first column) and $\ell_{2}$ (second column) perturbation constraints. Plots of Avg. Loss row reports the loss as a function of the number of queries averaged over all images. The Avg. Hamming Similarity row shows the Hamming similarity of the sign of the attack's estimated gradient $\hat{g}$ with true gradient's sign $q^{*}$ , computed as $1 - ||\mathrm{sign}(\hat{g}) - q^{*}||_{H} / n$ and averaged over all images. Likewise, plots of the Avg. Cosine Similarity row show the normalized dot product of $\hat{g}$ and $g^{*}$ averaged over all images. The Success Rate row reports the attacks' cumulative distribution functions for the number of queries required to carry out a successful attack up to the query limit of 10,000 queries. The Avg. # Queries row reports the average number of queries used per successful image for each attack when reaching a specified success rate: the more effective the attack, the closer its curve is to the bottom right of the plot. + +![](images/265c73c83cb60bb564d5e885ca86883fc2285684d96b6cc18d71fc0d5f5e6a24.jpg) + +# APPENDIX E. 
PUBLIC BLACK-BOX CHALLENGE RESULTS + +This section shows results of our experiments in crafting adversarial black-box examples on adversarially trained models in the form of tables and performance traces, namely Tables 11, 12, 13, and Figure 10. + +Table 11: Leaderboard of the MNIST black-box challenge. Adapted from the challenge's website—as of May 15, 2019. + +
| Black-Box Attack | Model Accuracy |
| --- | --- |
| SignHunter (Algorithm 2 in the main paper) | 91.47% |
| Xiao et al. (2018) | 92.76% |
| PGD against 3 independently & adversarially trained copies of the network | 93.54% |
| FGSM on the CW loss for model B from (Tramèr et al., 2017a) | 94.36% |
| FGSM on the CW loss for the naturally trained public network | 96.08% |
| PGD on the cross-entropy loss for the naturally trained public network | 96.81% |
| Attack using Gaussian Filter for selected pixels on the adversarially trained public network | 97.33% |
| FGSM on the cross-entropy loss for the adversarially trained public network | 97.66% |
| PGD on the cross-entropy loss for the adversarially trained public network | 97.79% |
+ +Table 12: Leaderboard of the CIFAR10 black-box challenge. Adapted from the challenge's website—as of May 15, 2019. + +
| Black-Box Attack | Model Accuracy |
| --- | --- |
| SignHunter (Algorithm 2 in the main paper) | 47.16% |
| PGD on the cross-entropy loss for the adversarially trained public network | 63.39% |
| PGD on the CW loss for the adversarially trained public network | 64.38% |
| FGSM on the CW loss for the adversarially trained public network | 67.25% |
| FGSM on the CW loss for the naturally trained public network | 85.23% |
Table 13: Top 1 Error Percentage. The numbers between brackets are computed on 10,000 images from the validation set. The rest are from (Tramèr et al., 2017a, Table 4).
| Model | clean | Max. Black-box | SignHunter after 20 queries | SignHunter after 1000 queries |
| --- | --- | --- | --- | --- |
| v3adv-ens4 | 24.2 (26.73) | 33.4 | (40.61) | (90.75) |
+ +![](images/65b4c851a42c61d2a4803e137dcb62b40f8708ac1c16e25f85c5ce80acd39c3b.jpg) +Madry's Lab (MNIST) + +![](images/026077a39826879cc96c3b567d318e5847eba2a1b6dd580f2c212acb9b9a1cb4.jpg) +Madry's Lab (CIFAR10) + +![](images/24d750931ceb87d4a758e8c6f03df1f5325a0e7ee3b9e3bdd8dfd4b5b923b367.jpg) +Ensemble Adv. Training (IMAGENET) + +![](images/068359d888b0a2cc47f06032b9ff737c143927b363df8c01b439bb8363a2c418.jpg) + +![](images/fc0150e82f48165f8c88aed932ffe701fa79b614e7ddc86c9d85f1a8916a774a.jpg) + +![](images/72083580b195e7816f1849e2204029ec548d9d47261c2a9115d3aed5e6636153.jpg) + +![](images/c84cd531847c7fa102dc3b93b59e1b8e535b85718c0172df3643ca5d54a60385.jpg) + +![](images/36f0403508601e0db38e634a5e68ce0a535849bcb09a4188e85425b1acc8c3ec.jpg) + +![](images/524e4cb1b5713af90e2929bf4d9664dd0696162a3575aab3844f2df73cb435a0.jpg) + +![](images/2d110fb3a0751e4e7c8ee925ded83669ce9a874d97104f0fe9782b50e72bf04d.jpg) + +![](images/a81712105e41606e6113067b7079383bc64c37b6da0887686d04e5e14c9e3418.jpg) + +![](images/55bd70b75fb806459e02e870b26a5abdd74bb7cc85d7f6b171522a4e129e3f2f.jpg) + +![](images/7deed10b1c663a50b1071d2cb9aaaed0f784613ef6eb4b5ee601b202ceabe7ae.jpg) +Figure 10: Performance curves of attacks on the public black-box challenges for MNIST (first column), CIFAR10 (second column) and IMAGENET (third column). Plots of Avg. Loss row reports the loss as a function of the number of queries averaged over all images. The Avg. Hamming Similarity row shows the Hamming similarity of the sign of the attack's estimated gradient $\hat{g}$ with true gradient's sign $q^{*}$ , computed as $1 - ||\mathrm{sign}(\hat{g}) - q^{*}||_{H}/n$ and averaged over all images. Likewise, plots of the Avg. Cosine Similarity row show the normalized dot product of $\hat{g}$ and $g^{*}$ averaged over all images. 
The Success Rate row reports the attacks' cumulative distribution functions for the number of queries required to carry out a successful attack up to the query limit of 5,000 queries for MNIST and CIFAR10 (1,000 queries for IMAGENET). The Avg. # Queries row reports the average number of queries used per successful image for each attack when reaching a specified success rate: the more effective the attack, the closer its curve is to the bottom right of the plot. + +![](images/624dbde9778c686bf1ddbd96429b61a1e141467b536c6b23bf160a0b56773938.jpg) + +![](images/b79232b15d46271149c75dfe3fcdb246d435563b0786980edc73d3278f65b373.jpg) + +# APPENDIX F. HISTOGRAM OF GRADIENT COORDINATES' MAGNITUDES + +This section illustrates our experiment on the distribution of the magnitudes of gradient coordinates as summarized in Figure 11. How to read the plots: Consider the first histogram in Plot (a) from below; it corresponds to the $1000^{th}$ image from the sampled MNIST evaluation set, plotting the histogram of the values $\{|\partial L(\pmb{x},y) / \partial x_i|\}_{1\leq i\leq n}$ , where the MNIST dataset has dimensionality $n = 784$ . These values are in the range [0, 0.002]. Overall, the values are fairly concentrated—with exceptions, in Plot (e) for instance, the magnitudes of the $\sim 400^{th}$ image's gradient coordinates are spread from 0 to $\sim 0.055$ . + +![](images/b6e32aa210521977ce3f1d7fba68723d872f35a1571d796262e1a41b9bd9ef68.jpg) +Figure 11: Magnitudes of gradient coordinates are concentrated: Plots (a), (b), and (c) show histograms of the magnitudes of gradient coordinates of the loss function $L(\pmb{x}, y)$ with respect to the input point (image) $\pmb{x}$ for MNIST, CIFAR10, and IMAGENET neural net models over 1000 images from the corresponding evaluation set, respectively. 
Plots (d), (e), (f) show the same but at input points (images) sampled randomly within $B_{\infty}(\pmb{x}, \epsilon)$ : the $\ell_{\infty}$ -ball of radius $\epsilon = 0.3, 12,$ and 0.05 around the images in Plots (a), (b), and (c), respectively.

# APPENDIX G. ON SCHEMES FOR SIGN FLIPS

In this section, we show the performance of different sign-flip schemes in comparison to SignHunter. Results are summarized in Figure 12. SignHunter's adaptive flipping shows a clear advantage over the other schemes despite having a worse upper bound on the query complexity—e.g., Naive can retrieve the signs in $n + 2$ queries, as discussed in Section 3.

![](images/fa210a7b54e924387fbe949a73e10bc9f72b70eaecbf66584db7533d8e32c57f.jpg)
MNIST $\ell_{\infty}$

![](images/b0de033ba63eb4a3caa60d3dfa826ca3156cd877c31829eb0fd736eb61a79210.jpg)
CIFAR10 $\ell_{\infty}$

![](images/5b3a6f2ae2aa805bb69b74ec21fc571589e21b45acbe76b7e743db7acc108639.jpg)
IMAGENET $\ell_{\infty}$

![](images/cec16a844c41b9d20a636549a19d136fd3284d35cc1743ac280e66e99ca49afe.jpg)
MNIST $\ell_2$

![](images/3992e3b0cf69be73df2f790b841d22c9bdeea0ce86b449091cd829afc7fabf84.jpg)
CIFAR10 $\ell_2$

![](images/4a742b735f178a9761c9528cf6a82d11b50b5a977550f203bc3f91227449a646.jpg)
IMAGENET $\ell_{2}$
Figure 12: Performance of different sign-flip patterns for Algorithm 2, Line 8, under the $\ell_{\infty}$ and $\ell_{2}$ perturbation constraints: our proposition (SignHunter), random sign flips (Rand), sequential single sign flips (Naive), and stochastic hill climbing (SHC), which is similar to Rand but retains a flip only if it improves the observed model loss. In higher dimensions, SHC is comparable to SignHunter but does not enjoy a deterministic upper bound on the query complexity.

# APPENDIX H. SIGNHUNTER AND RECENT RELATED WORK

In this section, we discuss recent work related to our proposition.

Parsimonious Black-Box Adversarial Attacks (Moon et al., 2019).
Our experiment on the public CIFAR10 black-box attack challenge corresponds to (Moon et al., 2019, Table 1). The authors report a $48\%$ success rate ( $52\%$ model accuracy) with 1261 queries on average. On the other hand, our proposed algorithm achieves a $52.84\%$ success rate (47.16% model accuracy) with 569 queries on average. Further, (Moon et al., 2019, Table 2) corresponds to our results in Appendix D, Table 9; the paper reports a $98.5\%$ success rate with 722 queries on average, while our proposed algorithm achieves a $98\%$ success rate with 578.56 queries on average. Based on these numbers, SignHunter demonstrates better performance than the attack of Moon et al. (2019).

Simple Black-Box Attack (SIMBA) (Guo et al., 2019). The main distinction is that SIMBA performs a ternary flip over $\{-\delta, 0, +\delta\}$ for a single random coordinate per iteration, with $\delta \leq \epsilon$ . On the other hand, SignHunter performs a binary flip over $\{-\epsilon, \epsilon\}$ for a group of coordinates per iteration. Most of Guo et al. (2019)'s experiments were performed for the $\ell_2$ perturbation constraint and against models different from those considered in this paper—except for the IMAGENET v3 model, which the authors find much more difficult to attack. The v3 curves at 10,000 queries in (Guo et al., 2019, Figure 4) for SIMBA (and its variant SIMBA-DCT) look comparable to SignHunter's in Figure 9. For completeness, we implemented SIMBA and evaluated it against the CIFAR10 model in Section 4. The results are shown in Figure 13. In line with Guo et al. (2019), SIMBA is a strong baseline in the $\ell_2$ setup. However, its performance drops significantly in the $\ell_{\infty}$ setup.
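The contrast between the two update rules can be sketched as follows (the toy loss oracle and function names are ours; neither function is the authors' implementation):

```python
import numpy as np

def simba_step(x, delta, loss, rng):
    """SIMBA-style update: perturb ONE random coordinate by +/- delta,
    keeping the first direction that increases the (adversarial) loss."""
    i = rng.integers(x.size)
    for step in (delta, -delta):
        cand = x.copy()
        cand[i] += step
        if loss(cand) > loss(x):
            return cand
    return x

def group_flip_step(x0, s, group, eps, loss):
    """SignHunter-style update: flip the sign of a whole GROUP of
    coordinates of the sign vector s at once, keeping the flip only
    if the perturbation x0 + eps * s improves the loss."""
    s_new = s.copy()
    s_new[group] *= -1
    return s_new if loss(x0 + eps * s_new) > loss(x0 + eps * s) else s

# Toy loss: sum of pixels (stands in for the model-loss oracle).
loss = lambda z: float(z.sum())
rng = np.random.default_rng(0)
x = simba_step(np.zeros(4), 0.5, loss, rng)          # one coordinate moves
s = group_flip_step(np.zeros(4), np.array([-1., -1., 1., 1.]),
                    [0, 1], 0.1, loss)               # group flip is kept
```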
![](images/541a32f318353cd5292b6deb2433f0dc82d40730bf231c900ad667a08e750678.jpg)
(a) CIFAR10 $\ell_{\infty}$

![](images/9b6e9c9a21e59ed1bebca812ca36771a4fcf0c5c3a219c7b4f7dbe8f743ac1.jpg)
(b) CIFAR10 $\ell_2$

Figure 13: Performance of SIMBA and SignHunter in the $\ell_{\infty}$ and $\ell_{2}$ perturbation settings of Section 4 on CIFAR10. The plots show the average number of queries used per successful image for each attack when reaching a specified success rate. In line with Guo et al. (2019), we used a step size of $\delta = 50$ for $\ell_{2}$ (the authors used $\delta = 0.2$ for [0, 1]-valued pixels; our setup takes images in [0, 255], so $\delta = 0.2 \times 255 \approx 50$ ). For $\ell_{\infty}$ , we used $\delta = 2$ , following NES's setup in Table 4.

Harmonica (Hazan et al., 2017). Both SignHunter and Harmonica seek to optimize a black-box function over the binary hypercube $\{\pm 1\}^n$ , albeit with different assumptions on the objective function. Harmonica assumes that the objective function can be approximated by a sparse and low-degree polynomial in the Fourier basis. Our assumption with SignHunter is that the objective function is separable (Property 1, Section 3), which lets us optimize the black-box function with $O(n)$ queries given an initial guess instead of searching over the $2^n$ vertices. If this assumption is not met, we can restart SignHunter with another guess, with a search complexity of $O(mn)$ where $m$ is the number of restarts. Despite this difference in assumptions, we conducted an empirical comparison using the two sample problems provided with the Harmonica authors' implementation. As shown in Table 14, the results show that SignHunter optimizes the two problems with roughly $8\times$ fewer queries than Harmonica, in addition to a significant computational advantage.

Table 14: Performance comparison of SignHunter and Harmonica on two sample problems from https://github.com/callowbird/Harmonica.
The lower the solution quality, the better.
| | Algorithm | Solution Quality | # Queries | Time per Query |
| --- | --- | --- | --- | --- |
| Problem 1 | Harmonica | -50.04 | 2236 | 7.47 ms |
| | SignHunter | -50.0 | 203 | 6.30 μs |
| Problem 2 | Harmonica | -916.22 | 4223 | 60.22 ms |
| | SignHunter | -916.21 | 500 | 584.28 μs |
# APPENDIX I. ON THE $\ell_2 - \ell_{\infty}$ PERFORMANCE GAP

As discussed in Section 4, in an $\epsilon$ - $\ell_{2}$ threat setup, black-box attacks that are based on continuous optimization (e.g., NES and Bandits$_{TD}$ ) can vary each pixel $x$ within $[x - \epsilon ,x + \epsilon ]$ . On the other hand, SignHunter is restricted to $[x - \epsilon /\sqrt{n},x + \epsilon /\sqrt{n} ]$ . In other words, SignHunter behaves exactly the same in an $\epsilon$ - $\ell_{2}$ perturbation setup as in an $\epsilon /\sqrt{n}$ - $\ell_{\infty}$ perturbation setup. This is illustrated in Figure 14.

To highlight the additional perturbation space that other algorithms have over SignHunter in the $\ell_2$ setup, we ran NES and Bandits$_{TD}$ , as representative examples of standard and dimensionality-reduction-based algorithms, against the CIFAR10 model used in Section 4 with an $\ell_{\infty}$ perturbation setup of $\epsilon = 127 / \sqrt{n}$ . In this and the $\ell_2$ setup used in Section 4, SignHunter behaves the same, while the performance of NES and Bandits$_{TD}$ drops significantly from their $\ell_2$ performance due to the reduction in the perturbation space.

A possible fix to allow SignHunter to access the additional search space introduced in the $\ell_2$ setup is to extend the notion of binary sign flips over $\{+1, -1\}$ to ternary sign flips over $\{+1, 0, -1\}$ ; we intend to explore this thoroughly in future work.

![](images/d8ce7f20f5b2f08cab19c6536ba19f3e9179c091c1345aeed4f6422f2022dfb2.jpg)
Figure 14: Illustration of adversarial examples crafted by SignHunter in comparison to attacks based on continuous optimization (e.g., NES and Bandits$_{TD}$ ) in both (a) $\ell_{\infty}$ and (b) $\ell_{2}$ settings.
For both $\epsilon - \ell_{2}$ and $\epsilon / \sqrt{n} - \ell_{\infty}$ perturbation balls, SignHunter behaves the same, while continuous attacks such as NES have access to more possible perturbations in the $\ell_{2}$ setup compared to their perturbations in the $\ell_{\infty}$ setup. This is demonstrated on CIFAR10 in Figure 15. + +![](images/200a6cdea2009f02cb3ba4956c7b4fa42383cd51b9309f4f6c2c181bf2679d81.jpg) +(a) CIFAR10 $\ell_{\infty}:\epsilon = \frac{127}{32\sqrt{3}}$ + +![](images/e45df39b04384bbce6e319cd57ffe6dae356e5c4624c6639d4fe74f0458e5fde.jpg) +(b) CIFAR10 $\ell_2$ : $\epsilon = 127$ +Figure 15: Performance of black-box attacks in the $\ell_{\infty}$ and $\ell_{2}$ perturbation constraints. The plots show the average number of queries used per successful image for each attack when reaching a specified success rate. Note that (b) is similar to the $\ell_{2}$ setup examined in Section 4. \ No newline at end of file diff --git a/signbitsareallyouneedforblackboxattacks/images.zip b/signbitsareallyouneedforblackboxattacks/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..2bc959bcaa14abe51018b1d07bb5796f894f0057 --- /dev/null +++ b/signbitsareallyouneedforblackboxattacks/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4fba641720e9035202690ab6de12481a46f998d4427d32473db9b49da6cd0764 +size 1202185 diff --git a/signbitsareallyouneedforblackboxattacks/layout.json b/signbitsareallyouneedforblackboxattacks/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d69d629c61caf9c4ab191c694b49719ebfd87314 --- /dev/null +++ b/signbitsareallyouneedforblackboxattacks/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8dd41b40b08b962d997343eabeea30cd1eeea08e7399e80b2dd83baf043a557a +size 976930 diff --git a/signoptaqueryefficienthardlabeladversarialattack/9d0396c1-c5b3-4cdc-8d9a-83989e2ad645_content_list.json 
b/signoptaqueryefficienthardlabeladversarialattack/9d0396c1-c5b3-4cdc-8d9a-83989e2ad645_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..842cbfe1e7b58139c78f483ab39f31d26ddc691c --- /dev/null +++ b/signoptaqueryefficienthardlabeladversarialattack/9d0396c1-c5b3-4cdc-8d9a-83989e2ad645_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b700cc7021f3b8259779db31f80561a27e62da270556e2c7c07490547b9301ef +size 103180 diff --git a/signoptaqueryefficienthardlabeladversarialattack/9d0396c1-c5b3-4cdc-8d9a-83989e2ad645_model.json b/signoptaqueryefficienthardlabeladversarialattack/9d0396c1-c5b3-4cdc-8d9a-83989e2ad645_model.json new file mode 100644 index 0000000000000000000000000000000000000000..d93616453a9b9729e1198542aa133f33ddc67b5c --- /dev/null +++ b/signoptaqueryefficienthardlabeladversarialattack/9d0396c1-c5b3-4cdc-8d9a-83989e2ad645_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:21cf1d075f40acac3d474c7c2604474418807aee0cd3c3bbe8f902a7f87608c6 +size 120300 diff --git a/signoptaqueryefficienthardlabeladversarialattack/9d0396c1-c5b3-4cdc-8d9a-83989e2ad645_origin.pdf b/signoptaqueryefficienthardlabeladversarialattack/9d0396c1-c5b3-4cdc-8d9a-83989e2ad645_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ce050f72978605ccc110d39faaa3ef1a7189527d --- /dev/null +++ b/signoptaqueryefficienthardlabeladversarialattack/9d0396c1-c5b3-4cdc-8d9a-83989e2ad645_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e0c91be4d457a1ec5a7ac8875d2a8cf0fc52671b479b96f47c7dce1098197357 +size 772552 diff --git a/signoptaqueryefficienthardlabeladversarialattack/full.md b/signoptaqueryefficienthardlabeladversarialattack/full.md new file mode 100644 index 0000000000000000000000000000000000000000..bfc992a54e2950409b6674d94e7c407699cda454 --- /dev/null +++ b/signoptaqueryefficienthardlabeladversarialattack/full.md @@ -0,0 +1,435 @@ +# 
SIGN-OPT: A QUERY-EFFICIENT HARD-LABEL ADVERSARIAL ATTACK

Minhao Cheng $^{1*}$ , Simranjit Singh $^{1*}$ , Patrick Chen $^{1}$ , Pin-Yu Chen $^{2}$ , Sijia Liu $^{2}$ , Cho-Jui Hsieh $^{1}$
$^{1}$ Department of Computer Science, UCLA, $^{2}$ IBM Research
{mhcheng, simranjit, patrickchen, chohsieh}@cs.ucla.edu
{pin-yu.chen, sijia.liu}@ibm.com

# ABSTRACT

We study the most practical problem setup for evaluating adversarial robustness of a machine learning system with limited access: the hard-label black-box attack setting for generating adversarial examples, where a limited number of model queries are allowed and only the hard-label decision is returned for a queried data input. Several algorithms have been proposed for this problem, but they typically require a huge number ( $>20,000$ ) of queries for attacking one example. Among them, one of the state-of-the-art approaches (Cheng et al., 2019) showed that hard-label attack can be modeled as an optimization problem where the objective function can be evaluated by binary search with additional model queries, so that a zeroth-order optimization algorithm can be applied. In this paper, we adopt the same optimization formulation but propose to directly estimate the sign of the gradient in any direction instead of the gradient itself, which enjoys the benefit of requiring only a single query. Using this single-query oracle for retrieving the sign of the directional derivative, we develop a novel query-efficient Sign-OPT approach for hard-label black-box attack. We provide a convergence analysis of the new algorithm and conduct experiments on several models on MNIST, CIFAR-10 and ImageNet. We find that the Sign-OPT attack consistently requires $5 \times$ to $10 \times$ fewer queries when compared to the current state-of-the-art approaches, and usually converges to an adversarial example with smaller perturbation.
# 1 INTRODUCTION

It has been shown that neural networks are vulnerable to adversarial examples (Szegedy et al., 2016; Goodfellow et al., 2015; Carlini & Wagner, 2017; Athalye et al., 2018). Given a victim neural network model and a correctly classified example, an adversarial attack aims to compute a small perturbation such that, with this perturbation added, the original example will be misclassified. Many adversarial attacks have been proposed in the literature. Most of them consider the white-box setting, where the attacker has full knowledge about the victim model, and thus gradient-based optimization can be used for the attack. Popular examples include the C&W (Carlini & Wagner, 2017) and PGD (Madry et al., 2017) attacks. On the other hand, some more recent attacks have considered the probability (soft-label) black-box setting, where the attacker does not know the victim model's structure and weights but can iteratively query the model and get the corresponding probability output. In this setting, although the gradient (of the output probability with respect to the input layer) is not computable, it can still be estimated using finite differences, and many attacks are based on this idea (Chen et al., 2017; Ilyas et al., 2018a; Tu et al., 2019; Jun et al., 2018).

In this paper, we consider the most challenging and practical attack setting – the hard-label black-box setting – where the model is hidden from the attacker and the attacker can only make queries and get the corresponding hard-label decisions (e.g., predicted labels) of the model. A commonly used algorithm in this setting, the Boundary attack (Brendel et al., 2017), is based on random walks on the decision surface, but it does not have any convergence guarantee. More recently, Cheng et al. (2019) showed that finding the minimum adversarial perturbation in the hard-label setting can be reformulated as another optimization problem (we call this Cheng's formulation in this paper).
This new formulation enjoys the benefit of having a smooth boundary in most tasks, and the function value is computable using hard-label queries. Therefore, the authors of (Cheng et al., 2019) are able to use standard zeroth-order optimization to solve the new formulation. Although their algorithm converges quickly, it still requires a large number of queries (e.g., 20,000) for attacking a single image, since every function evaluation of Cheng's formulation has to be computed using a binary search requiring tens of queries.

In this paper, we follow the same optimization formulation of (Cheng et al., 2019), which has the advantage of smoothness, but instead of using finite differences to estimate the magnitude of the directional derivative, we propose to evaluate its sign using only a single query. With this single-query sign oracle, we design novel algorithms for solving Cheng's formulation, and we theoretically prove and empirically demonstrate the significant reduction in the number of queries required for hard-label black-box attack.

Our contributions are summarized below:

- Novelty in terms of adversarial attack. We elucidate an efficient approach to compute the sign of the directional derivative of Cheng's formulation using a single query, and based on this technique we develop a novel optimization algorithm called Sign-OPT for hard-label black-box attack.
- Novelty in terms of optimization. Our method can be viewed as a new zeroth-order optimization algorithm that features the fast convergence of signSGD. Instead of directly taking the sign of the gradient estimate, our algorithm utilizes the scale of the random direction. This makes existing analyses inapplicable to our case, and we provide a new recipe to prove the convergence of this new optimizer.
- We conduct comprehensive experiments on several datasets and models.
We show that the proposed algorithm consistently reduces the query count by 5–10 times across different models and datasets, suggesting a practical and query-efficient robustness evaluation tool. Furthermore, on most datasets our algorithm can find an adversarial example with smaller distortion compared with previous approaches.

# 2 RELATED WORK

White-box attack Since neural networks were first found to be easily fooled by adversarial examples (Goodfellow et al., 2015), a large body of work has been proposed in the white-box attack setting, where the classifier $f$ is completely exposed to the attacker. For neural networks, under this assumption, back-propagation can be conducted on the target model because both the network structure and weights are known by the attacker. Algorithms including (Goodfellow et al., 2015; Kurakin et al., 2016; Carlini & Wagner, 2017; Chen et al., 2018; Madry et al., 2017) have been proposed based on gradient computation. Recently, the BPDA attack introduced by Athalye et al. (2018) bypasses some models with obfuscated gradients and is shown to successfully circumvent many defenses. In addition to typical attacks based on small $\ell_p$ -norm perturbations, non- $\ell_p$ -norm perturbations such as scaling or shifting have also been considered (Zhang et al., 2019).

Black-box attack Recently, the black-box setting has been drawing rapidly increasing attention. In this setting, the attacker can query the model but has no (direct) access to any internal information inside the model. Although there are some works based on transfer attacks (Papernot et al., 2017), we consider query-based attacks in this paper. Depending on the model's feedback for a given query, an attack can be classified as a soft-label or hard-label attack. In the soft-label setting, the model outputs a probability score for each decision. Chen et al.
(2017) uses coordinate-wise finite differences to approximately estimate changes in the output probabilities and performs coordinate descent to conduct the attack. Ilyas et al. (2018a) uses natural evolution strategies (NES) to approximately estimate the gradient directly. Later, some variants (Ilyas et al., 2018b; Tu et al., 2019) were proposed to utilize side information to further speed up the attack procedure. Alzantot et al. (2019) uses an evolutionary algorithm as a black-box optimizer for the soft-label setting. Recently, Al-Dujaili & O'Reilly (2019) proposed the SignHunter algorithm, based on signSGD (Bernstein et al., 2018), which achieves a more query-efficient sign estimate and faster convergence when crafting black-box adversarial examples through soft-label information.

In the hard-label case, only the final decision, i.e., the top-1 predicted class, is observed. As a result, the attacker can only make queries to acquire the corresponding hard-label decision instead of the probability outputs. Brendel et al. (2017) first studied this problem and proposed an algorithm based on random walks near the decision boundary. By selecting a random direction and projecting it onto a boundary sphere in each iteration, it aims to generate a high-quality adversarial example. The Query-Limited attack (Ilyas et al., 2018a) tries to estimate the output probability scores with model queries and turns the hard-label problem into a soft-label one. Cheng et al. (2019) instead reformulates the hard-label attack into an optimization problem that finds a direction which produces the shortest distance to the decision boundary.
The major differences from our algorithm are that we propose a new zeroth-order gradient descent algorithm, provide its algorithmic convergence guarantees, and aim to improve the query complexity of the attack formulation proposed in (Cheng et al., 2019). For completeness, we also compare with this method in Section A.1. Moreover, (Chen et al., 2019) uses a one-point gradient estimate, which is unbiased but may suffer larger variance than the gradient estimate in our paper. Thus, we observe in Section A.1 that although it is slightly faster in the initial stage, Sign-OPT catches up and eventually reaches a slightly better solution.

# 3 PROPOSED METHOD

We follow the formulation in (Cheng et al., 2019) and cast the hard-label attack as the problem of finding the direction with the shortest distance to the decision boundary. Specifically, for a given example $\pmb{x}_0$, true label $y_0$ and the hard-label black-box function $f: \mathbb{R}^d \to \{1, \dots, K\}$, the objective function $g: \mathbb{R}^d \to \mathbb{R}$ (for the untargeted attack) can be written as:

$$
\min_{\boldsymbol{\theta}} g(\boldsymbol{\theta}) \quad \text{where} \quad g(\boldsymbol{\theta}) = \arg\min_{\lambda > 0} \left(f\left(x_0 + \lambda \frac{\boldsymbol{\theta}}{\|\boldsymbol{\theta}\|}\right) \neq y_0\right). \tag{1}
$$

It has been shown that this objective function is usually smooth, and $g$ can be evaluated locally by a binary search procedure. At each binary search step, we query the function $f(\pmb{x}_0 + \lambda \frac{\boldsymbol{\theta}}{\|\boldsymbol{\theta}\|})$ and determine whether the distance to the decision boundary in the direction $\pmb{\theta}$ is greater or smaller than $\lambda$ based on the hard-label prediction.
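Concretely, this binary-search evaluation of $g$ can be sketched as follows (a minimal sketch, not the paper's code; it assumes `f` is any hard-label classifier, $x_0$ is correctly classified, and the initial upper bound `hi` already overshoots the boundary):

```python
import numpy as np

def evaluate_g(f, x0, y0, theta, hi=100.0, tol=1e-3):
    """Binary-search the boundary distance g(theta) along direction theta,
    using only hard-label queries f(x) -> predicted class."""
    d = theta / np.linalg.norm(theta)
    lo = 0.0
    # Invariant: f(x0 + lo*d) == y0 (inside) and f(x0 + hi*d) != y0 (outside).
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(x0 + mid * d) == y0:
            lo = mid  # still the original class: the boundary lies farther out
        else:
            hi = mid
    return hi
```

Each halving of the interval costs one hard-label query, so evaluating $g$ to tolerance `tol` takes roughly $\log_2(\texttt{hi}/\texttt{tol})$ queries; this per-evaluation cost is what motivates the single-query oracle introduced below.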
As the objective function is computable, the directional derivative of $g$ can be estimated by finite differences:

$$
\hat{\nabla} g(\boldsymbol{\theta}; \mathbf{u}) := \frac{g(\boldsymbol{\theta} + \epsilon \mathbf{u}) - g(\boldsymbol{\theta})}{\epsilon} \mathbf{u} \tag{2}
$$

where $\pmb{u}$ is a random Gaussian vector and $\epsilon > 0$ is a small smoothing parameter. This is a standard zeroth-order oracle for estimating directional derivatives, and based on it we can apply many different zeroth-order optimization algorithms to minimize $g$.

For example, Cheng et al. (2019) used the Randomized Gradient-Free method of Nesterov & Spokoiny (2017) to solve problem (1). However, each computation of (2) requires many hard-label queries due to the binary search, so Cheng et al. (2019) still requires a huge number of queries despite having fast convergence.

In this work, we introduce an algorithm that substantially improves the query complexity over Cheng et al. (2019). Our algorithm is based on the following key ideas: (i) one does not need very accurate values of the directional derivative for the algorithm to converge, and (ii) there exists an imperfect but informative estimate of the directional derivative of $g$ that can be computed with a single query.

![](images/a92a79dcf728c7cc01f4ce234d7e4f86a7cb7cd97232746326d3575ae450a1ec.jpg)
Figure 1: Illustration

Algorithm 1: Sign-OPT attack
Input: Hard-label model $f$, original image $x_0$, initial direction $\theta_0$.
for $t = 1,2,\ldots,T$ do
  Randomly sample $u_1,\dots,u_Q$ from a Gaussian or uniform distribution;
  Compute $\hat{\pmb{g}} \gets \frac{1}{Q}\sum_{q=1}^{Q}\mathrm{sign}(g(\pmb{\theta}_t + \epsilon \mathbf{u}_q) - g(\pmb{\theta}_t))\cdot \mathbf{u}_q$;
  Update $\pmb{\theta}_{t+1} \leftarrow \pmb{\theta}_t - \eta\hat{\pmb{g}}$;
  Evaluate $g(\pmb{\theta}_t)$ using the same search algorithm in Cheng et al.
(2019);
end

# 3.1 A SINGLE QUERY ORACLE

As mentioned before, the previous approach requires computing $g(\pmb{\theta} + \epsilon \pmb{u}) - g(\pmb{\theta})$, which consumes many queries. However, based on the definition of $g(\cdot)$, the sign of this value, $\mathrm{sign}(g(\pmb{\theta} + \epsilon \pmb{u}) - g(\pmb{\theta}))$, can be computed with a single query. In the untargeted attack case, the sign can be computed by

$$
\operatorname{sign}(g(\boldsymbol{\theta} + \epsilon \mathbf{u}) - g(\boldsymbol{\theta})) = \left\{\begin{array}{ll} +1, & f\left(x_0 + g(\boldsymbol{\theta})\frac{(\boldsymbol{\theta} + \epsilon \mathbf{u})}{\|\boldsymbol{\theta} + \epsilon \mathbf{u}\|}\right) = y_0, \\ -1, & \text{otherwise.} \end{array}\right. \tag{3}
$$

This is illustrated in Figure 1. Essentially, for a new direction $\pmb{\theta} + \epsilon \mathbf{u}$, we test whether a point at the original distance $g(\pmb{\theta})$ from $x_0$ in this direction lies inside or outside the decision boundary, i.e., whether the produced perturbation results in a wrong prediction by the classifier. If the point lies outside the boundary, i.e. $f(x_0 + g(\pmb{\theta})\frac{(\pmb{\theta} + \epsilon\mathbf{u})}{\|\pmb{\theta} + \epsilon\mathbf{u}\|}) \neq y_0$, the new direction has a smaller distance to the decision boundary and thus gives a smaller value of $g$, indicating that $\mathbf{u}$ is a descent direction for minimizing $g$.

# 3.2 SIGN-OPT ATTACK

By sampling $Q$ random Gaussian vectors, we can form the imperfect gradient estimate

$$
\hat{\nabla} g(\boldsymbol{\theta}) \approx \hat{\boldsymbol{g}} := \frac{1}{Q}\sum_{q=1}^{Q} \operatorname{sign}(g(\boldsymbol{\theta} + \epsilon \mathbf{u}_q) - g(\boldsymbol{\theta}))\, \mathbf{u}_q, \tag{4}
$$

which only requires $Q$ queries.
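In code, the oracle (3) and the estimator (4) can be sketched as follows (a hedged sketch, not the authors' implementation; `f` is a hard-label classifier and `g_theta` is the current value $g(\pmb{\theta}_t)$ returned by the search procedure):

```python
import numpy as np

def sign_oracle(f, x0, y0, theta, g_theta, u, eps=1e-3):
    """Single-query sign of g(theta + eps*u) - g(theta), Eq. (3), untargeted case."""
    new_dir = theta + eps * u
    probe = x0 + g_theta * new_dir / np.linalg.norm(new_dir)
    # Still predicted y0 => the boundary along the new direction is farther away.
    return 1.0 if f(probe) == y0 else -1.0

def sign_opt_gradient(f, x0, y0, theta, g_theta, Q=200, eps=1e-3, rng=None):
    """Imperfect gradient estimate of Eq. (4) from Q single-query sign probes."""
    rng = np.random.default_rng() if rng is None else rng
    g_hat = np.zeros_like(theta)
    for _ in range(Q):
        u = rng.standard_normal(theta.shape)
        g_hat += sign_oracle(f, x0, y0, theta, g_theta, u, eps) * u
    return g_hat / Q
```

Estimating one descent direction therefore costs exactly $Q$ queries, versus $Q$ binary searches (tens of queries each) for the finite-difference estimate (2).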
We then use this imperfect gradient estimate to update the search direction $\pmb{\theta}$ via $\pmb{\theta} \gets \pmb{\theta} - \eta\hat{\mathbf{g}}$ with a step size $\eta$, and use the same search procedure to compute $g(\pmb{\theta})$ up to a certain accuracy. The detailed procedure is shown in Algorithm 1.

We note that Liu et al. (2019) designed a zeroth-order signSGD (ZO-SignSGD) algorithm for the soft-label black-box attack (not the hard-label setting). They use $\hat{\nabla} g(\pmb{\theta}) \approx \hat{\pmb{g}} \coloneqq \mathrm{sign}\big(\sum_{q=1}^{Q}(g(\pmb{\theta} + \epsilon \mathbf{u}_q) - g(\pmb{\theta}))\,\mathbf{u}_q\big)$, with the sign taken elementwise, and show that it can achieve a comparable or even better convergence rate than zeroth-order stochastic gradient descent by using only the sign information of the gradient estimate. Although it is possible to combine ZO-SignSGD with our proposed single query oracle for solving the hard-label attack, their estimator takes the sign of the whole vector and thus ignores the magnitude of $\mathbf{u}_q$, which leads to slower convergence in practice (please refer to Section 4.4 and Figure 5(b) for more details).

To the best of our knowledge, no previous analysis can be used to prove the convergence of Algorithm 1. In the following, we show that Algorithm 1 does in fact converge, with a convergence rate similar to that of (Liu et al., 2019) despite using a different gradient estimator.

Assumption 1. The function $g(\theta)$ is $L$-smooth with a finite value of $L$.

Assumption 2. At any iteration step $t$, the gradient of the function $g$ is upper bounded by $\|\nabla g(\pmb{\theta}_t)\|_2 \leq \sigma$.

Theorem 3.1. Suppose that the above assumptions hold, and the distribution of the gradient noise is unimodal and symmetric.
Then, Sign-OPT attack with learning rate $\eta_t = O\left(\frac{1}{Q\sqrt{dT}}\right)$ and $\epsilon = O\left(\frac{1}{dT}\right)$ gives the following bound on $\mathbb{E}[\|\nabla g(\pmb{\theta})\|_2]$:

$$
\mathbb{E}[\|\nabla g(\boldsymbol{\theta})\|_2] = O\left(\frac{\sqrt{d}}{\sqrt{T}} + \frac{d}{\sqrt{Q}}\right).
$$

The proof can be found in subsection A.2. The main difference from the original analysis of Liu et al. (2019) is that they deal only with the sign of each element, while our analysis also takes the magnitude of each element of $\pmb{u}_q$ into account.

# 3.3 OTHER GRADIENT ESTIMATIONS

Note that the value $\mathrm{sign}(g(\pmb{\theta} + \epsilon \pmb{u}) - g(\pmb{\theta}))$ computed by our single query oracle is actually the sign of the directional derivative:

$$
\operatorname{sign}(\langle \nabla g(\boldsymbol{\theta}), \boldsymbol{u} \rangle) = \operatorname{sign}\left(\lim_{\epsilon \to 0} \frac{g(\boldsymbol{\theta} + \epsilon \boldsymbol{u}) - g(\boldsymbol{\theta})}{\epsilon}\right) = \operatorname{sign}(g(\boldsymbol{\theta} + \epsilon \boldsymbol{u}) - g(\boldsymbol{\theta})) \quad \text{for a small } \epsilon.
$$

Therefore, we can use this information to estimate the original gradient. The Sign-OPT approach in the previous section uses $\sum_{q}\mathrm{sign}(\langle \nabla g(\pmb{\theta}), \pmb{u}_q\rangle)\pmb{u}_q$ as an estimate of the gradient. Letting $y_q \coloneqq \mathrm{sign}(\langle \nabla g(\pmb{\theta}), \pmb{u}_q\rangle)$, a more accurate gradient estimate can be cast as the following constrained problem:

Find a vector $\mathbf{z}$ such that $\operatorname{sign}(\langle \mathbf{z}, \mathbf{u}_q \rangle) = y_q, \ \forall q = 1, \dots, Q.$

This is equivalent to a hard-constraint SVM problem where each $\mathbf{u}_q$ is a training sample and $y_q$ is the corresponding label.
The gradient can then be recovered by solving the following quadratic programming problem:

$$
\min_{\boldsymbol{z}} \boldsymbol{z}^T\boldsymbol{z} \quad \text{s.t.} \quad y_q\,\boldsymbol{z}^T\boldsymbol{u}_q \geq 1, \ \forall q = 1, \dots, Q. \tag{5}
$$

Solving this problem yields a good estimate of the gradient. As explained earlier, each $y_q$ can be determined with a single query. We therefore propose a variant of Sign-OPT, called the SVM-OPT attack. The detailed procedure is shown in Algorithm 2, and we present an empirical comparison of our two algorithms in subsection 4.1.

Algorithm 2: SVM-OPT attack
Input: Hard-label model $f$, original image $\pmb{x}_0$, initial direction $\theta_0$.
for $t = 1,2,\dots,T$ do
  Sample $\pmb{u}_1,\dots,\pmb{u}_Q$ from a Gaussian distribution or an orthogonal basis;
  Solve for $\pmb{z}$ as defined by (5);
  Update $\pmb{\theta}_{t+1} \gets \pmb{\theta}_t - \eta\pmb{z}$;
  Evaluate $g(\pmb{\theta}_t)$ using the search algorithm in (Cheng et al., 2019);
end

# 4 EXPERIMENTAL RESULTS

We evaluate the Sign-OPT algorithm for attacking black-box models in the hard-label setting on three standard datasets - MNIST (LeCun et al., 1998), CIFAR-10 (Krizhevsky et al.) and ImageNet-1000 (Deng et al., 2009) - and compare it with existing methods. For a fair and easy comparison, we use the CNN networks provided by (Carlini & Wagner, 2017), which have also been used by previous hard-label attacks. Specifically, for both MNIST and CIFAR-10, the model consists of nine layers in total - four convolutional layers, two max-pooling layers and two fully-connected layers. Further details about implementation, training and parameters are available in (Carlini & Wagner, 2017). As reported in (Carlini & Wagner, 2017) and (Cheng et al., 2019), we achieve an accuracy of $99.5\%$ on MNIST and $82.5\%$ on CIFAR-10.
We use the pretrained ResNet-50 (He et al., 2016) network provided by torchvision (Marcel & Rodriguez, 2010) for ImageNet-1000, which achieves a Top-1 accuracy of $76.15\%$.

In our experiments, we found that Sign-OPT and SVM-OPT perform quite similarly in terms of query efficiency. Hence, we compare only the Sign-OPT attack with previous approaches and provide a comparison between Sign-OPT and SVM-OPT in subsection 4.1. We compare the following attacks:

- Sign-OPT attack (black box): The approach presented in this paper.
- OPT-based attack (black box): The method proposed in Cheng et al. (2019), which uses the Randomized Gradient-Free method to optimize the same objective function. We use the implementation provided at https://github.com/LeMinhThong/blackbox-attack.

![](images/89f5e1f5240f633a2b64b3259b47a676d40359f59a9f8f2b881f4605a20d99b3.jpg)
Figure 2: Example of Sign-OPT targeted attack. $L_{2}$ distortions and queries used are shown above and below the images. First two rows: Example comparison of Sign-OPT attack and OPT attack. Third and fourth rows: Examples of Sign-OPT attack on CIFAR-10 and ImageNet

- Boundary attack (black box): The method proposed in Brendel et al. (2017). This is compared only in the $L_{2}$ setting, as it is designed for that norm. We use the implementation provided in Foolbox (https://github.com/bethgelab/foolbox).
- Guessing Smart attack (black box): The method proposed in (Brunner et al., 2018). This attack enhances the Boundary attack by biasing sampling towards three priors. Note that one of the priors assumes access to a model similar to the target model; for a fair comparison, we do not incorporate this bias in our experiments. We use the implementation provided at https://github.com/ttbrunner/biased_boundary_attack.
- C&W attack (white box): One of the most popular methods in the white-box setting, proposed in Carlini & Wagner (2017). We use the C&W $L_{2}$ norm attack as a baseline for white-box attack performance.
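The two metrics used in the comparisons below, median distortion and success rate at a query budget, reduce to simple array operations; a minimal sketch (the array name `best_dist` is illustrative, not from the paper's code), where `best_dist[i]` holds the smallest distortion an attack has reached on example `i` within the budget:

```python
import numpy as np

def median_distortion(best_dist):
    """Median over examples of the best distortion reached within the query budget."""
    return float(np.median(best_dist))

def success_rate(best_dist, eps):
    """Fraction of examples whose best distortion is below the threshold eps."""
    return float(np.mean(np.asarray(best_dist) < eps))
```

Because hard-label attacks maintain a valid adversarial example at every step, both quantities are well defined at any intermediate query count.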
For each attack, we randomly sample 100 examples from the validation set and generate adversarial perturbations for them. For untargeted attacks, we only consider examples that are correctly predicted by the model; for targeted attacks, we consider examples that are not already predicted as the target label. To compare different methods, we mainly use the median distortion as the metric. The median distortion at $x$ queries is the median adversarial perturbation over all examples achieved by a method using fewer than $x$ queries. Since all hard-label attack algorithms start from an adversarial example and keep reducing the distortion, stopping them at any time still yields an adversarial example, so median distortion is the most suitable metric for comparing their performance. Besides, we also report the success rate (SR) at $x$ queries for a given threshold $\epsilon$, which is the percentage of examples that achieve an adversarial perturbation below $\epsilon$ with fewer than $x$ queries. We evaluate the success rate at different thresholds depending on the dataset. For the comparison of different algorithms in each setting, we chose the same set of examples across all attacks.

Implementation details: To optimize Algorithm 1, we estimate the step size $\eta$ using the same line search procedure implemented in Cheng et al. (2019). At the cost of a relatively small number of queries, this provides a significant speedup in the optimization. Similar to Cheng et al. (2019), $g(\theta)$ in the last step of Algorithm 1 is approximated via binary search. The initial $\theta_0$ in Algorithm 1 is calculated

![](images/3a329b6a1175508ea789a4418e8b0e3fd8c17f02c82e33da23032e6be721a130.jpg)
Figure 3: Median $L_{2}$ distortion vs Queries. First two: Comparison of Sign-OPT and SVM-OPT attack for MNIST and CIFAR-10. Third: Performance of Sign-OPT for different values of $Q$.
![](images/0b5d5ac2c08e81146aa85f6c02ff390ba7cb66fa6750df3ea2a5e97e9482759e.jpg)

![](images/f2c34ca12ee1c002c8b296893caa66586843ad5b240df99a16eb3af63cc57705.jpg)

by evaluating $g(\theta)$ on 100 random directions and taking the best one. We provide our implementation publicly.

# 4.1 COMPARISON BETWEEN SIGN-OPT AND SVM-OPT

In our experiments, we found that the query efficiency of these two attacks is remarkably similar in all settings (both $L_{2}/L_{\infty}$ and targeted/untargeted) and datasets. We present a comparison for MNIST and CIFAR-10 ($L_{2}$ norm-based) for both targeted and untargeted attacks in Figure 3. The median distortion achieved for a given number of queries is quite on par for Sign-OPT and SVM-OPT.

Number of queries per gradient estimate: In Figure 3, we also show the performance of the Sign-OPT attack for different values of $Q$. Our experiments suggest that $Q$ does not affect the convergence point reached by the algorithm. However, small values of $Q$ provide a noisy gradient estimate and hence delay convergence to an adversarial perturbation, while large values of $Q$ require a large amount of time per gradient estimate. After fine-tuning on a small set of examples, we found that $Q = 200$ provides a good balance between the two; hence, we set $Q = 200$ for all experiments in this section.

# 4.2 UNTARGETED ATTACK

In this attack, the objective is to generate an adversarial example whose predicted label differs from that of the original image. Figure 4 provides an elaborate comparison of the different attacks in the $L_{2}$ case for the three datasets. The Sign-OPT attack consistently outperforms current approaches in terms of queries. Not only is Sign-OPT more query-efficient; in most cases it also converges to a lower distortion than is possible with other hard-label attacks.
Furthermore, we observe that Sign-OPT converges to a solution comparable with the C&W white-box attack (better on CIFAR-10, worse on MNIST, comparable on ImageNet). This is significant for a hard-label attack algorithm, since it is given very limited information.

We highlight some of the comparisons of the Boundary attack, the OPT-based attack and the Sign-OPT attack ($L_{2}$ norm-based) in Table 1. In particular, for the ImageNet dataset on the ResNet-50 model, the Sign-OPT attack reaches a median distortion below 3.0 in fewer than $30k$ queries, while the other attacks need more than $200k$ queries to do the same.

# 4.3 TARGETED ATTACK

In a targeted attack, the goal is to generate an adversarial perturbation such that the prediction for the resulting image matches a specified target. For each example, we randomly specify the target label, keeping it consistent across different attacks. We calculate the initial $\theta_0$ in Algorithm 1 using 100 samples of the target class from the training dataset, and this $\theta_0$ is the same across different attacks. Figure 2 shows some adversarial examples generated by the Sign-OPT attack and the OPT-based attack. The first two rows show a comparison of the Sign-OPT and OPT attacks on an example from the MNIST dataset. The figures show adversarial examples generated at almost

![](images/adc26b6680deb1858fa4705474bdd7f9ec3be9001f833f2496bf3bf1fb0d175e.jpg)
Figure 4: Untargeted attack: Median distortion vs Queries for different datasets.

![](images/2229bad32629d4fc57e353e2e1b523277e13dc84e05ceb591787863a2c0cf991.jpg)

![](images/158c9249393f207bce07713a2744e821e00d57775f4b0ecd90651425e45b074c.jpg)

![](images/f11a684b8e64007b9153161e93b8a9b7328f90886dbb71ced5cf06fab0b17690.jpg)
(a)

![](images/ef725426a0e2667b4ac740e35154f08c6ac725ee8e03b4a78ece8ebfa1189d72.jpg)
Figure 5: (a) Targeted Attack: Median distortion vs Queries of different attacks on MNIST and CIFAR-10.
(b) Comparing Sign-OPT and ZO-SignSGD with and without the single query oracle (SQO).

![](images/16e3bd873001dd0e144286182246e0da15a8a3b6c2a5902cbf0a50dd11008617.jpg)
(b)

the same number of queries for both attacks. The Sign-OPT method generates an $L_{2}$ adversarial perturbation of $0.94$ in $\sim 6k$ queries for this particular example, while the OPT-based attack requires $\sim 35k$ queries. Figure 5 displays a comparison among different attacks in the targeted setting. In our experiments, the average distortion achieved by the C&W white-box attack on the MNIST dataset is 1.51, for which Sign-OPT requires $\sim 12k$ queries while the others need $> 120k$ queries. We present a comparison of the success rates of different attacks on the CIFAR-10 dataset in Figure 6 for both targeted and untargeted cases.

# 4.4 THE POWER OF SINGLE QUERY ORACLE

In this subsection, we conduct several experiments to demonstrate the effectiveness of the proposed single query oracle in the hard-label adversarial attack setting. The ZO-SignSGD algorithm (Liu et al., 2019) was proposed for soft-label black-box attacks, and we extend it to the hard-label setting. A straightforward approach is to apply ZO-SignSGD to the hard-label objective proposed in Cheng et al. (2019), estimating the gradient using binary search as in (Cheng et al., 2019) and taking its sign. In Figure 5(b), we clearly observe that simply combining ZO-SignSGD and Cheng et al. (2019) is not efficient. With the proposed single-query sign oracle, we can also reduce the query count of this method, as demonstrated in Figure 5(b). This verifies the effectiveness of the single query oracle, which can universally improve many different optimization methods in the hard-label attack setting. Notably, Sign-OPT still improves over ZO-SignSGD with the single query oracle because instead

![](images/e31207fe015826a37b03b02fe2673e046de144213168b06679dde3e3ffd74cac.jpg)
Figure 6: Success Rate vs Queries for CIFAR-10 ($L_{2}$ norm-based attack).
First two and last two depict untargeted and targeted attacks respectively. Success rate threshold is at the top of each plot. + +![](images/17a55aa5a9fc67efde1362c7752efe21f9f2a57404063102072ec60a2004e27c.jpg) + +![](images/cc21f0e46a6132baad7467cfcf95eb03c79f8b538ad6f9099635755dab4a8698.jpg) + +![](images/6f4049a4bcc1a73765d1ae7bc64c9d8c3a6d60467c9793530feaaf7001e21387.jpg) + +Table 1: $L_{2}$ Untargeted attack - Comparison of average $L_{2}$ distortion achieved using a given number of queries for different attacks. SR stands for success rate. + +
| Attack | MNIST #Queries | Avg $L_2$ | SR ($\epsilon=1.5$) | CIFAR-10 #Queries | Avg $L_2$ | SR ($\epsilon=0.5$) | ImageNet (ResNet-50) #Queries | Avg $L_2$ | SR ($\epsilon=3.0$) |
|---|---|---|---|---|---|---|---|---|---|
| Boundary attack | 4,000 | 4.24 | 1.0% | 4,000 | 3.12 | 2.3% | 4,000 | 209.63 | 0% |
| | 8,000 | 4.24 | 1.0% | 8,000 | 2.84 | 7.6% | 30,000 | 17.40 | 16.6% |
| | 14,000 | 2.13 | 16.3% | 12,000 | 0.78 | 29.2% | 160,000 | 4.62 | 41.6% |
| OPT attack | 4,000 | 3.65 | 3.0% | 4,000 | 0.77 | 37.0% | 4,000 | 83.85 | 2.0% |
| | 8,000 | 2.41 | 18.0% | 8,000 | 0.43 | 53.0% | 30,000 | 16.77 | 14.0% |
| | 14,000 | 1.76 | 36.0% | 12,000 | 0.33 | 61.0% | 160,000 | 4.27 | 34.0% |
| Guessing Smart | 4,000 | 1.74 | 41.0% | 4,000 | 0.29 | 75.0% | 4,000 | 16.69 | 12.0% |
| | 8,000 | 1.69 | 42.0% | 8,000 | 0.25 | 80.0% | 30,000 | 13.27 | 12.0% |
| | 14,000 | 1.68 | 43.0% | 12,000 | 0.24 | 80.0% | 160,000 | 12.88 | 12.0% |
| Sign-OPT attack | 4,000 | 1.54 | 46.0% | 4,000 | 0.26 | 73.0% | 4,000 | 23.19 | 8.0% |
| | 8,000 | 1.18 | 84.0% | 8,000 | 0.16 | 90.0% | 30,000 | 2.99 | 50.0% |
| | 14,000 | 1.09 | 94.0% | 12,000 | 0.13 | 95.0% | 160,000 | 1.21 | 90.0% |
| C&W (white-box) | - | 0.88 | 99.0% | - | 0.25 | 85.0% | - | 1.51 | 80.0% |
of directly taking the sign of the gradient estimate, our algorithm also utilizes the scale of the random directions $u_q$. In other words, signSGD's per-coordinate update magnitude is always 1, while our update takes the magnitude of $u$ into account. Therefore, our Sign-OPT optimization algorithm is fundamentally different from (Liu et al., 2019) and other signSGD variants; it can be viewed as a new zeroth-order optimization algorithm that inherits the fast convergence of signSGD.

# 5 CONCLUSION

We developed a new and highly query-efficient algorithm for adversarial attacks in the hard-label black-box setting. Using the same smooth reformulation as Cheng et al. (2019), we design a novel zeroth-order oracle that computes the sign of the directional derivative of the attack objective with a single query. Equipped with this single-query oracle, we design a new optimization algorithm that dramatically reduces the number of queries compared with Cheng et al. (2019). We prove the convergence of the proposed algorithm and show that it substantially outperforms current hard-label black-box attacks.

# ACKNOWLEDGEMENT

This work is based upon work supported by the Department of Energy National Energy Technology Laboratory under Award Number DE-OE0000911 and by NSF under IIS1719097.

# REFERENCES

Abdullah Al-Dujaili and Una-May O'Reilly. There are no bit parts for sign bits in black-box attacks. arXiv preprint arXiv:1902.06894, 2019.
Moustafa Alzantot, Yash Sharma, Supriyo Chakraborty, Huan Zhang, Cho-Jui Hsieh, and Mani B Srivastava. GenAttack: Practical black-box attacks with gradient-free optimization. In Proceedings of the Genetic and Evolutionary Computation Conference, pp. 1111-1119, 2019.
Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In ICML, 2018.
Jeremy Bernstein, Yu-Xiang Wang, Kamyar Azizzadenesheli, and Animashree Anandkumar.
signSGD: Compressed optimisation for non-convex problems. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 560-569, Stockholmsmässan, Stockholm, Sweden, 10-15 Jul 2018. PMLR. URL http://proceedings.mlr.press/v80/bernstein18a.html.
Wieland Brendel, Jonas Rauber, and Matthias Bethge. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. arXiv preprint arXiv:1712.04248, 2017.
Thomas Brunner, Frederik Diehl, Michael Truong Le, and Alois Knoll. Guessing smart: Biased sampling for efficient black-box adversarial attacks. arXiv preprint arXiv:1812.09803, 2018.
Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In Security and Privacy (SP), 2017 IEEE Symposium on, pp. 39-57. IEEE, 2017.
Jianbo Chen, Michael I. Jordan, and Martin J. Wainwright. HopSkipJumpAttack: A query-efficient decision-based attack. arXiv preprint arXiv:1904.02144, 2019.
Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 15-26. ACM, 2017.
Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, and Cho-Jui Hsieh. EAD: Elastic-net attacks to deep neural networks via adversarial examples. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
Minhao Cheng, Thong Le, Pin-Yu Chen, Huan Zhang, Jinfeng Yi, and Cho-Jui Hsieh. Query-efficient hard-label black-box attack: An optimization-based approach. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=rJlk6iRqKX.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009.
CVPR 2009. IEEE Conference on, pp. 248-255. IEEE, 2009. +John C Duchi, Peter L Bartlett, and Martin J Wainwright. Randomized smoothing for stochastic optimization. SIAM Journal on Optimization, 22(2):674-701, 2012. +Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2015. +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016. +Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. Black-box adversarial attacks with limited queries and information. In International Conference on Machine Learning, pp. 2142-2151, 2018a. + +Andrew Ilyas, Logan Engstrom, and Aleksander Madry. Prior convictions: Black-box adversarial attacks with bandits and priors. arXiv preprint arXiv:1807.07978, 2018b. +Kwang-Sung Jun, Lihong Li, Yuzhe Ma, and Jerry Zhu. Adversarial attacks on stochastic bandits. In Advances in Neural Information Processing Systems, pp. 3640-3649, 2018. +Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. Cifar-10 (canadian institute for advanced research). URL http://www.cs.toronto.edu/~kriz/cifar.html. +Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016. +Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. +Sijia Liu, Bhavya Kailkhura, Pin-Yu Chen, Paishun Ting, Shiyu Chang, and Lisa Amini. Zeroth-order stochastic variance reduction for nonconvex optimization. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 3727-3737. Curran Associates, Inc., 2018. 
URL http://papers.nips.cc/paper/7630-zeroth-order-stochastic-variance-reduction-for-nonconvex-optimization.pdf. +Sijia Liu, Pin-Yu Chen, Xiangyi Chen, and Mingyi Hong. signSGD via zeroth-order oracle. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=BJe-DsC5Fm. +Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017. +Sebastien Marcel and Yann Rodriguez. Torchvision the machine-vision package of torch. In Proceedings of the 18th ACM International Conference on Multimedia, MM '10, pp. 1485-1488, New York, NY, USA, 2010. ACM. ISBN 978-1-60558-933-6. doi: 10.1145/1873951.1874254. URL http://doi.acm.org/10.1145/1873951.1874254. +Yurii Nesterov and Vladimir Spokoiny. Random gradient-free minimization of convex functions. Foundations of Computational Mathematics, 17(2):527-566, 2017. +Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pp. 506-519. ACM, 2017. +Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818-2826, 2016. +Chun-Chen Tu, Paishun Ting, Pin-Yu Chen, Sijia Liu, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, and Shin-Ming Cheng. Autozoom: Autoencoder-based zeroth order optimization method for attacking black-box neural networks. AAAI, 2019. +Huan Zhang, Hongge Chen, Zhao Song, Duane Boning, Inderjit S Dhillon, and Cho-Jui Hsieh. The limitations of adversarial training and the blind-spot attack. arXiv preprint arXiv:1901.04684, 2019. 
# A APPENDIX

# A.1 COMPARISON WITH HOPSKIPJUMPATTACK

A recent paper (Chen et al., 2019) applied the zeroth-order sign oracle to improve the Boundary attack and also demonstrated significant improvement. The major differences from our algorithm are that we propose a new zeroth-order gradient descent algorithm, provide its algorithmic convergence guarantees, and aim to improve the query complexity of the attack formulation proposed in (Cheng et al., 2019). Note that HopSkipJumpAttack only provides bias and variance analyses (Theorems 2 and 3), without a convergence-rate analysis.

Also, HopSkipJumpAttack uses a one-point gradient estimate, compared to the two-point gradient estimate used by Sign-OPT. Therefore, although the estimate is unbiased, it has large variance: it achieves a successful attack faster but generates a worse adversarial example with larger distortion than ours. For completeness, we compare with this method as follows.

Figure 7 shows a comparison of Sign-OPT and HopSkipJumpAttack on the CIFAR-10 and MNIST datasets for the $L_{2}$ norm-based attack. We find in our experiments that the performance of both attacks is comparable in terms of queries consumed; in some cases, Sign-OPT converges to a better solution.

![](images/c32a69010599cbca751b0054191ad557a567aab9132479ff0fe706430bdcd8d4.jpg)
Figure 7: Comparison with HopSkipJumpAttack for CIFAR and MNIST: Median distortion vs Queries. (U) represents untargeted attack and (T) represents targeted attack.
# A.2 PROOF

Define the following notation:

$$
\hat{\nabla} g(\boldsymbol{\theta}_t; \boldsymbol{u}_q) := \operatorname{sign}(g(\boldsymbol{\theta}_t + \epsilon \boldsymbol{u}_q) - g(\boldsymbol{\theta}_t))\, \boldsymbol{u}_q
$$

$$
\dot{\nabla} g(\boldsymbol{\theta}_t; \boldsymbol{u}_q) := \frac{1}{\epsilon}(g(\boldsymbol{\theta}_t + \epsilon \boldsymbol{u}_q) - g(\boldsymbol{\theta}_t))\, \boldsymbol{u}_q
$$

$$
\bar{\nabla} g(\boldsymbol{\theta}_t; \boldsymbol{u}_q) := \operatorname{sign}\left(\frac{1}{\epsilon}\left(g(\boldsymbol{\theta}_t + \epsilon \boldsymbol{u}_q) - g(\boldsymbol{\theta}_t)\right)\boldsymbol{u}_q\right)
$$

Thus we can write the corresponding gradient estimates as follows:

$$
\hat{\boldsymbol{g}}_t = \frac{1}{Q}\sum_{q=1}^{Q}\operatorname{sign}(g(\boldsymbol{\theta}_t + \epsilon \mathbf{u}_q) - g(\boldsymbol{\theta}_t))\,\mathbf{u}_q = \frac{1}{Q}\sum_{q=1}^{Q}\hat{\nabla} g(\boldsymbol{\theta}_t; \mathbf{u}_q)
$$

$$
\dot{\boldsymbol{g}}_t = \frac{1}{Q}\sum_{q=1}^{Q}\frac{1}{\epsilon}(g(\boldsymbol{\theta}_t + \epsilon \boldsymbol{u}_q) - g(\boldsymbol{\theta}_t))\,\boldsymbol{u}_q = \frac{1}{Q}\sum_{q=1}^{Q}\dot{\nabla} g(\boldsymbol{\theta}_t; \boldsymbol{u}_q)
$$

$$
\bar{\boldsymbol{g}}_t = \frac{1}{Q}\sum_{q=1}^{Q}\operatorname{sign}\left(\frac{1}{\epsilon}\left(g(\boldsymbol{\theta}_t + \epsilon \boldsymbol{u}_q) - g(\boldsymbol{\theta}_t)\right)\boldsymbol{u}_q\right) = \frac{1}{Q}\sum_{q=1}^{Q}\bar{\nabla} g(\boldsymbol{\theta}_t; \boldsymbol{u}_q)
$$

Clearly, we have $\bar{\nabla} g(\pmb{\theta}_t; \pmb{u}_q) = \mathrm{sign}(\dot{\nabla}
g(\pmb{\theta}_t; \pmb{u}_q))$ and we can relate $\bar{\nabla} g(\pmb{\theta}_t; \pmb{u}_q)$ and $\hat{\nabla} g(\pmb{\theta}_t; \pmb{u}_q)$ by writing $\hat{\nabla} g(\pmb{\theta}_t; \pmb{u}_q) = G_q \odot \bar{\nabla} g(\pmb{\theta}_t; \pmb{u}_q)$, where $G_q \in \mathbb{R}^d$ is the elementwise absolute value of the vector $\pmb{u}_q$ (i.e., $G_q = (|\pmb{u}_{q,1}|, |\pmb{u}_{q,2}|, \dots, |\pmb{u}_{q,d}|)^T$).

Note that the zeroth-order gradient estimate $\dot{\nabla} g(\pmb{\theta}_t; \pmb{u}_q)$ is a biased approximation to the true gradient of $g$. Instead, it is unbiased for the gradient of the randomized smoothing function $g_{\epsilon}(\pmb{\theta}) = \mathbb{E}_{\pmb{u}}[g(\pmb{\theta} + \epsilon \pmb{u})]$ (Duchi et al., 2012).

Our analysis is based on the following two assumptions:

Assumption 1 The function $g$ is $L$-smooth with a finite value of $L$.

Assumption 2 At any iteration step $t$, the gradient of the function $g$ is upper bounded by $\| \nabla g(\pmb{\theta}_t) \|_2 \leq \sigma$.

To prove the convergence of the proposed method, we need a bound on the variance of the update $\dot{\nabla} g(\pmb{\theta}_t; \pmb{u}_q)$. Here, we introduce a lemma from previous works.

Lemma 1 The variance of the zeroth-order gradient estimate $\dot{\nabla} g(\pmb {\theta}_t;\pmb {u}_q)$ is upper bounded by

$$
\mathbb {E} \big [ \| \dot {\nabla} g (\boldsymbol {\theta} _ {t}; \boldsymbol {u} _ {q}) - \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t}) \| _ {2} ^ {2} \big ] \leq \frac {4 (Q + 1)}{Q} \sigma^ {2} + \frac {2}{Q} C (d, \epsilon),
$$

where $C(d,\epsilon)\coloneqq 2d\sigma^2 +\epsilon^2 L^2 d^2 /2$.

Proof of Lemma 1 This lemma can be proved using Proposition 2 in Liu et al. (2019) with $b = 1$ and $q = Q$. When $b = 1$ there is no difference between sampling with and without replacement, and we use the with-replacement case to obtain the above bound. 
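As a concrete illustration, the three estimators defined at the beginning of this section can be written in a few lines of NumPy. This is our own sketch; the objective `g`, the dimension, and the constants are hypothetical stand-ins for the attack objective:

```python
import numpy as np

rng = np.random.default_rng(0)
d, Q, eps = 10, 50, 1e-3
theta = rng.normal(size=d)

def g(x):
    # Hypothetical smooth objective standing in for the attack objective g(theta).
    return float(np.sum(x ** 2))

U = rng.normal(size=(Q, d))                                   # directions u_1, ..., u_Q
diffs = np.array([g(theta + eps * u) - g(theta) for u in U])  # g(theta + eps*u_q) - g(theta)

g_hat = np.mean(np.sign(diffs)[:, None] * U, axis=0)          # \hat{g}_t: sign of the scalar difference
g_dot = np.mean((diffs / eps)[:, None] * U, axis=0)           # \dot{g}_t: standard finite difference
g_bar = np.mean(np.sign((diffs / eps)[:, None] * U), axis=0)  # \bar{g}_t: elementwise sign

# Check the per-sample relation  \hat{nabla} g = G_q ⊙ \bar{nabla} g  with G_q = |u_q|:
for u, df in zip(U, diffs):
    assert np.allclose(np.sign(df) * u, np.abs(u) * np.sign((df / eps) * u))
```

The final loop verifies the identity used above: multiplying the elementwise sign estimate by $|u_q|$ recovers the scalar-sign estimate, since $|u|\,\mathrm{sign}(u) = u$.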
+

By taking $Q = 1$, we know that $\mathbb{E}\big[\| \dot{\nabla} g(\pmb {\theta}_t;\pmb {u}_q) - \nabla g_\epsilon (\pmb {\theta}_t)\| _2^2\big]$ is upper bounded. By Jensen's inequality, we also know that

$$
\mathbb {E} \left[ | (\dot {\nabla} g (\boldsymbol {\theta} _ {t}; \boldsymbol {u} _ {q}) - \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t})) _ {l} | \right] \leq \sqrt {\mathbb {E} \left[ (\dot {\nabla} g (\boldsymbol {\theta} _ {t} ; \boldsymbol {u} _ {q}) - \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t})) _ {l} ^ {2} \right]} =: \delta_ {l}, \tag {6}
$$

where $\delta_l$ denotes the upper bound on the $l$-th coordinate of $\mathbb{E}\big[|\dot{\nabla} g(\pmb {\theta}_t;\pmb {u}_q) - \nabla g_\epsilon (\pmb {\theta}_t)|\big]$, and $\delta_l$ is finite since $\mathbb{E}\big[\| \dot{\nabla} g(\pmb {\theta}_t;\pmb {u}_q) - \nabla g_\epsilon (\pmb {\theta}_t)\| _2^2\big]$ is upper bounded.

Next, we bound $\operatorname{Prob}[\operatorname{sign}((\bar{\pmb{g}}_t)_l) \neq \operatorname{sign}((\nabla g_\epsilon(\pmb{\theta}_t))_l)]$ via the following lemma.

Lemma 2 $|(\nabla g_{\epsilon}(\pmb{\theta}_{t}))_{l}| \operatorname{Prob}[\operatorname{sign}((\bar{\pmb{g}}_t)_l) \neq \operatorname{sign}((\nabla g_{\epsilon}(\pmb{\theta}_t))_l)] \leq \frac{\delta_l}{\sqrt{Q}}$.

Proof of Lemma 2 Similar to Bernstein et al. 
(2018), we first relax $\operatorname{Prob}[\operatorname{sign}((\dot{\nabla} g(\pmb{\theta}_t; \pmb{u}_q))_l) \neq \operatorname{sign}((\nabla g_\epsilon(\pmb{\theta}_t))_l)]$ by Markov's inequality:

$$
\begin{array}{l} \operatorname{Prob} \left[ \operatorname{sign} \left(\left(\dot {\nabla} g \left(\boldsymbol {\theta} _ {t}; \boldsymbol {u} _ {q}\right)\right) _ {l}\right) \neq \operatorname{sign} \left(\left(\nabla g _ {\epsilon} \left(\boldsymbol {\theta} _ {t}\right)\right) _ {l}\right) \right] \leq \operatorname{Prob} \left[ \left| (\dot {\nabla} g (\boldsymbol {\theta} _ {t}; \boldsymbol {u} _ {q}) - \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t})) _ {l}\right| \geq \left| \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t}) _ {l} \right| \right] \\ \leq \frac {\mathbb {E} \left[ | (\dot {\nabla} g (\boldsymbol {\theta} _ {t} ; \boldsymbol {u} _ {q}) - \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t})) _ {l} | \right]}{| \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t}) _ {l} |} \\ \leq \frac {\delta_ {l}}{| \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t}) _ {l} |}, \\ \end{array}
$$

where the last inequality comes from eq. (6). Recall that $(\dot{\nabla} g(\pmb{\theta}_t; \pmb{u}_q))_l$ is an unbiased estimate of $(\nabla g_{\epsilon}(\pmb{\theta}_t))_l$. Under the assumption that the noise distribution is unimodal and symmetric, from Lemma D1 of Bernstein et al. (2018) we have

$$
\operatorname{Prob} [ \operatorname{sign} ((\dot {\nabla} g (\boldsymbol {\theta} _ {t}; \boldsymbol {u} _ {q})) _ {l}) \neq \operatorname{sign} ((\nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t})) _ {l}) ] =: M \leq \left\{ \begin{array}{l l} \frac {2}{9} \frac {1}{S ^ {2}}, & S \geq \frac {2}{\sqrt {3}} \\ \frac {1}{2} - \frac {S}{2 \sqrt {3}}, & \text {otherwise}, \end{array} \right. 
+
$$

where $S\coloneqq |\nabla g_{\epsilon}(\pmb {\theta}_t)_l| / \delta_l$; in both cases the bound is less than $\frac{1}{2}$.

Note that this probability bound applies uniformly to all $q \in [Q]$, regardless of the magnitude $|(\boldsymbol{u}_q)_l|$. That is,

$$
\operatorname{Prob} \left[ \operatorname{sign} \left(\sum_ {q = 1} ^ {Q} |(\boldsymbol {u} _ {q}) _ {l}| \operatorname{sign} \left(\left(\dot {\nabla} g \left(\boldsymbol {\theta} _ {t}; \boldsymbol {u} _ {q}\right)\right) _ {l}\right)\right) \neq \operatorname{sign} \left(\left(\nabla g _ {\epsilon} \left(\boldsymbol {\theta} _ {t}\right)\right) _ {l}\right) \right] =
$$

$$
\operatorname{Prob} \left[ \operatorname{sign} \left(\sum_ {q = 1} ^ {Q} \operatorname{sign} \left(\left(\dot {\nabla} g \left(\boldsymbol {\theta} _ {t}; \boldsymbol {u} _ {q}\right)\right) _ {l}\right)\right) \neq \operatorname{sign} \left(\left(\nabla g _ {\epsilon} \left(\boldsymbol {\theta} _ {t}\right)\right) _ {l}\right) \right]. \tag {7}
$$

This holds because, when all $|(\pmb{u}_q)_l| = 1$, the probability $\operatorname{Prob}[\operatorname{sign}(\sum_{q=1}^{Q} \operatorname{sign}((\dot{\nabla}g(\pmb{\theta}_t;\pmb{u}_q))_l)) \neq \operatorname{sign}((\nabla g_{\epsilon}(\pmb{\theta}_t))_l)]$ is the probability that a majority vote over the $Q$ estimates fails to yield the correct sign, i.e., a sum of $Q$ Bernoulli trials (a binomial distribution) with error rate $M$. Since the error probability $M$ is independent of the sampling of $|(\pmb{u}_q)_l|$, the weighted probability $\operatorname{Prob}[\operatorname{sign}(\sum_{q=1}^{Q}|(\pmb{u}_q)_l| \operatorname{sign}((\dot{\nabla}g(\pmb{\theta}_t;\pmb{u}_q))_l)) \neq \operatorname{sign}((\nabla g_{\epsilon}(\pmb{\theta}_t))_l)]$ can be viewed as running the same $Q$ Bernoulli experiments and then independently drawing a weight from the unit interval for each of the $Q$ experiments. Since the weights are uniform, the expected weight on the correct counts and on the incorrect counts is the same (and equal to $1/2$). 
Therefore, the weighted probability $\operatorname{Prob}[\operatorname{sign}(\sum_{q=1}^{Q}|(\pmb{u}_q)_l| \operatorname{sign}((\dot{\nabla}g(\pmb{\theta}_t;\pmb{u}_q))_l)) \neq \operatorname{sign}((\nabla g_{\epsilon}(\pmb{\theta}_t))_l)]$ is the same as that of the original non-weighted binomial distribution. Notice that, by our notation, $\operatorname{sign}(\dot{\nabla}g(\pmb{\theta}_t;\pmb{u}_q)_l) = \bar{\nabla}g(\pmb{\theta}_t;\pmb{u}_q)_l$, and thus $\frac{1}{Q}\sum_{q=1}^{Q} \operatorname{sign}(\dot{\nabla}g(\pmb{\theta}_t;\pmb{u}_q)_l) = (\bar{\pmb{g}}_t)_l$. Let $Z$ count the number of estimates $\dot{\nabla}g(\pmb{\theta}_t;\pmb{u}_q)_l$ yielding the correct sign of $\nabla g_{\epsilon}(\pmb{\theta}_t)_l$. The probability in eq. (7) can then be written as:

$$
\operatorname{Prob} [ \operatorname{sign} ((\bar {\boldsymbol {g}} _ {t}) _ {l}) \neq \operatorname{sign} ((\nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t})) _ {l}) ] = P \left[ Z \leq \frac {Q}{2} \right].
$$

Following the derivation of Theorem 2b in Bernstein et al. (2018), we get

$$
\begin{array}{l} P \left[ Z \leq \frac {Q}{2} \right] \leq \frac {1}{\sqrt {Q} S} \\ \Rightarrow | (\nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t})) _ {l} | \operatorname{Prob} [ \operatorname{sign} ((\bar {\boldsymbol {g}} _ {t}) _ {l}) \neq \operatorname{sign} ((\nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t})) _ {l}) ] \leq \frac {\delta_ {l}}{\sqrt {Q}} \tag {8} \\ \end{array}
$$

$\square$

We also need a few more lemmas on properties of the function $g$.

Lemma 3 $g_{\epsilon}(\pmb{\theta}_1) - g_{\epsilon}(\pmb{\theta}_T)\leq g_{\epsilon}(\pmb{\theta}_1) - g^* +\epsilon^2 L$, where $g^{*} = \min_{\pmb{\theta}}g(\pmb {\theta})$.

Proof of Lemma 3 The proof can be found in Liu et al. (2018) Lemma C. 
$\square$

Lemma 4 $\mathbb{E}[\| \nabla g(\pmb {\theta})\| _2]\leq \sqrt{2}\mathbb{E}[\| \nabla g_\epsilon (\pmb {\theta})\| _2] + \frac{\epsilon Ld}{\sqrt{2}}$, where $g^{*} = \min_{\pmb{\theta}}g(\pmb {\theta})$.

Proof of Lemma 4 The proof can be found in Liu et al. (2019). $\square$

Theorem 1 Suppose that the conditions in the assumptions hold, and the distribution of the gradient noise is unimodal and symmetric. Then, the Sign-OPT attack with learning rate $\eta_t = O\left(\frac{1}{Q\sqrt{dT}}\right)$ and $\epsilon = O\left(\frac{1}{dT}\right)$ gives the following bound on $\mathbb{E}[\|\nabla g(\pmb{\theta})\|_2]$:

$$
\mathbb {E} [ \| \nabla g (\boldsymbol {\theta}) \| _ {2} ] = O \left(\frac {\sqrt {d}}{\sqrt {T}} + \frac {d}{\sqrt {Q}}\right)
$$

Proof of Theorem 1 From the $L$-smoothness assumption, we have

$$
\begin{array}{l} g _ {\epsilon} (\boldsymbol {\theta} _ {t + 1}) \leq g _ {\epsilon} (\boldsymbol {\theta} _ {t}) + \langle \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t}), \boldsymbol {\theta} _ {t + 1} - \boldsymbol {\theta} _ {t} \rangle + \frac {L}{2} \| \boldsymbol {\theta} _ {t + 1} - \boldsymbol {\theta} _ {t} \| _ {2} ^ {2} \\ = g _ {\epsilon} (\boldsymbol {\theta} _ {t}) - \eta_ {t} \langle \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t}), \hat {\boldsymbol {g}} _ {t} \rangle + \frac {L}{2} \eta_ {t} ^ {2} \| \hat {\boldsymbol {g}} _ {t} \| _ {2} ^ {2} \\ = g _ {\epsilon} (\boldsymbol {\theta} _ {t}) - \eta_ {t} \odot \bar {G} _ {t} \| \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t}) \| _ {1} + \frac {d L}{2} \eta_ {t} ^ {2} \odot \bar {G} _ {t} ^ {2} \\ + 2 \eta_ {t} \odot \bar {G} _ {t} \sum_ {l = 1} ^ {d} | (\nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t})) _ {l} | \operatorname{Prob} [ \operatorname{sign} ((\bar {\boldsymbol {g}} _ {t}) _ {l}) \neq \operatorname{sign} ((\nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t})) _ {l}) ], \\ \end{array}
$$

where $\bar{G}_t$ is defined as $(\bar{G}_t)_l = 
\sum_{q=1}^Q (G_q)_l \bar{\nabla} g(\pmb{\theta}_t; \pmb{u}_q)_l = \sum_{q=1}^Q |(\pmb{u}_q)_l| \bar{\nabla} g(\pmb{\theta}_t; \pmb{u}_q)_l$. Continuing the inequality,

$$
\begin{array}{l} g _ {\epsilon} (\boldsymbol {\theta} _ {t}) - \eta_ {t} \odot \bar {G} _ {t} \| \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t}) \| _ {1} + \frac {d L}{2} \eta_ {t} ^ {2} \odot \bar {G} _ {t} ^ {2} \\ + 2 \eta_ {t} \odot \bar {G} _ {t} \sum_ {l = 1} ^ {d} | (\nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t})) _ {l} | \operatorname{Prob} [ \operatorname{sign} ((\bar {\boldsymbol {g}} _ {t}) _ {l}) \neq \operatorname{sign} ((\nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t})) _ {l}) ] \\ \leq g _ {\epsilon} (\boldsymbol {\theta} _ {t}) - \eta_ {t} \odot \bar {G} _ {t} \| \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t}) \| _ {1} + \frac {d L}{2} \eta_ {t} ^ {2} \odot \bar {G} _ {t} ^ {2} + 2 \eta_ {t} \odot \bar {G} _ {t} \sum_ {l = 1} ^ {d} \frac {\delta_ {l}}{\sqrt {Q}} \quad \text {by eq. (8)} \\ \leq g _ {\epsilon} (\boldsymbol {\theta} _ {t}) - \eta_ {t} \odot \bar {G} _ {t} \| \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t}) \| _ {1} + \frac {d L}{2} \eta_ {t} ^ {2} \odot \bar {G} _ {t} ^ {2} + 2 \eta_ {t} \odot \bar {G} _ {t} \frac {\| \boldsymbol {\delta} \| _ {1}}{\sqrt {Q}} \\ \leq g _ {\epsilon} (\boldsymbol {\theta} _ {t}) - \eta_ {t} \odot \bar {G} _ {t} \| \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t}) \| _ {1} + \frac {d L}{2} \eta_ {t} ^ {2} \odot \bar {G} _ {t} ^ {2} + 2 \eta_ {t} \odot \bar {G} _ {t} \frac {\sqrt {d} \, \| \boldsymbol {\delta} \| _ {2}}{\sqrt {Q}} \\ = g _ {\epsilon} (\boldsymbol {\theta} _ {t}) - \eta_ {t} \odot \bar {G} _ {t} \| \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t}) \| _ {1} + \frac {d L}{2} \eta_ {t} ^ {2} \odot \bar {G} _ {t} ^ {2} + 2 \eta_ {t} \odot \bar {G} _ {t} \frac {\sqrt {d} \sqrt {\mathbb {E} \left[ \| \dot {\nabla} g (\boldsymbol {\theta} _ {t}; \boldsymbol {u} _ {q}) - \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t}) \| _ {2} ^ {2} \right]}}{\sqrt {Q}} \quad \text {by eq. (6)}, \\ \end{array}
$$

where $\boldsymbol{\delta} := (\delta_1, \dots, \delta_d)^T$.

Thus we have

$$
\begin{array}{l} g _ {\epsilon} (\boldsymbol {\theta} _ {t + 1}) - g _ {\epsilon} (\boldsymbol {\theta} _ {t}) \leq - \eta_ {t} \odot \bar {G} _ {t} \| \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t}) \| _ {1} + \frac {d L}{2} \eta_ {t} ^ {2} \odot \bar {G} _ {t} ^ {2} + 2 \eta_ {t} \odot \bar {G} _ {t} \frac {\sqrt {d} \sqrt {\mathbb {E} \left[ \| \dot {\nabla} g (\boldsymbol {\theta} _ {t}; \boldsymbol {u} _ {q}) - \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t}) \| _ {2} ^ {2} \right]}}{\sqrt {Q}} \\ \Rightarrow \eta_ {t} \odot \bar {G} _ {t} \| \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t}) \| _ {1} \leq g _ {\epsilon} (\boldsymbol {\theta} _ {t}) - g _ {\epsilon} (\boldsymbol {\theta} _ {t + 1}) + \frac {d L}{2} \eta_ {t} ^ {2} \odot \bar {G} _ {t} ^ {2} + 2 \eta_ {t} \odot \bar {G} _ {t} \frac {\sqrt {d} \sqrt {\mathbb {E} \left[ \| \dot {\nabla} g (\boldsymbol {\theta} _ {t}; \boldsymbol {u} _ {q}) - \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t}) \| _ {2} ^ {2} \right]}}{\sqrt {Q}} \\ \Rightarrow \hat {\eta} _ {t} \| \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t}) \| _ {1} \leq g _ {\epsilon} (\boldsymbol {\theta} _ {t}) - g _ {\epsilon} (\boldsymbol {\theta} _ {t + 1}) + \frac {d L}{2} \hat {\eta} _ {t} ^ {2} + 2 \hat {\eta} _ {t} \sqrt {d} \frac {\sqrt {\mathbb {E} \left[ \| \dot {\nabla} g (\boldsymbol {\theta} _ {t}; \boldsymbol {u} _ {q}) - \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t}) \| _ {2} ^ {2} \right]}}{\sqrt {Q}}, \\ \end{array}
$$

where we define $\hat{\eta}_t\coloneqq \eta_t\odot \bar{G}_t$. 
Summing these inequalities over all $t$ and taking expectations on both sides, we have

$$
\begin{array}{l} \sum_ {t = 1} ^ {T} \hat {\eta} _ {t} \mathbb {E} [ \| \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t}) \| _ {1} ] \leq \mathbb {E} [ g _ {\epsilon} (\boldsymbol {\theta} _ {1}) - g _ {\epsilon} (\boldsymbol {\theta} _ {T}) ] + \frac {d L}{2} \sum_ {t = 1} ^ {T} \hat {\eta} _ {t} ^ {2} + \sum_ {t = 1} ^ {T} 2 \hat {\eta} _ {t} \sqrt {d} \sqrt {\mathbb {E} \left[ \| \dot {\nabla} g (\boldsymbol {\theta} _ {t}; \boldsymbol {u} _ {q}) - \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t}) \| _ {2} ^ {2} \right]} \\ \leq \mathbb {E} \left[ g _ {\epsilon} \left(\boldsymbol {\theta} _ {1}\right) - g _ {\epsilon} \left(\boldsymbol {\theta} _ {T}\right) \right] + \frac {d L}{2} \sum_ {t = 1} ^ {T} \hat {\eta} _ {t} ^ {2} + \sum_ {t = 1} ^ {T} 2 \hat {\eta} _ {t} \sqrt {d} \sqrt {\frac {4 (Q + 1)}{Q} \sigma^ {2} + \frac {2}{Q} C (d , \epsilon)} \quad \text {by Lemma 1.} \\ \end{array}
$$

Substituting Lemma 3 into the above inequality, we get

$$
\sum_ {t = 1} ^ {T} \hat {\eta} _ {t} \mathbb {E} [ \| \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t}) \| _ {1} ] \leq g _ {\epsilon} (\boldsymbol {\theta} _ {1}) - g ^ {*} + \epsilon^ {2} L + \frac {d L}{2} \sum_ {t = 1} ^ {T} \hat {\eta} _ {t} ^ {2} + \sum_ {t = 1} ^ {T} 2 \hat {\eta} _ {t} \sqrt {d} \sqrt {\frac {4 (Q + 1)}{Q} \sigma^ {2} + \frac {2}{Q} C (d , \epsilon)}. 
+
$$

Since $\| \cdot \| _2\leq \| \cdot \| _1$, we can divide both sides by $\sum_{t = 1}^{T}\hat{\eta}_{t}$ to get

$$
\sum_ {t = 1} ^ {T} \frac {\hat {\eta} _ {t}}{\sum_ {t = 1} ^ {T} \hat {\eta} _ {t}} \mathbb {E} [ \| \nabla g _ {\epsilon} (\pmb {\theta} _ {t}) \| _ {2} ] \leq \frac {g _ {\epsilon} (\pmb {\theta} _ {1}) - g ^ {*} + \epsilon^ {2} L}{\sum_ {t = 1} ^ {T} \hat {\eta} _ {t}} + \frac {d L}{2} \frac {\sum_ {t = 1} ^ {T} \hat {\eta} _ {t} ^ {2}}{\sum_ {t = 1} ^ {T} \hat {\eta} _ {t}} + \frac {2 \sqrt {d}}{\sqrt {Q}} \sqrt {4 (Q + 1) \sigma^ {2} + 2 C (d , \epsilon)}.
$$

Define a new random variable $R$ with probability $P(R = t) = \frac{\hat{\eta}_t}{\sum_{t=1}^{T} \hat{\eta}_t}$; then

$$
\mathbb {E} [ \| \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {R}) \| _ {2} ] = \mathbb {E} [ \mathbb {E} _ {R} [ \| \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {R}) \| _ {2} ] ] = \mathbb {E} \left[ \sum_ {t = 1} ^ {T} P (R = t) \| \nabla g _ {\epsilon} (\boldsymbol {\theta} _ {t}) \| _ {2} \right].
$$

Substituting all the quantities into Lemma 4, we get

$$
\mathbb {E} [ \| \nabla g (\pmb {\theta}) \| _ {2} ] \leq \frac {\sqrt {2} (g _ {\epsilon} (\pmb {\theta} _ {1}) - g ^ {*} + \epsilon^ {2} L)}{\sum_ {t = 1} ^ {T} \hat {\eta} _ {t}} + \frac {d L}{\sqrt {2}} \frac {\sum_ {t = 1} ^ {T} \hat {\eta} _ {t} ^ {2}}{\sum_ {t = 1} ^ {T} \hat {\eta} _ {t}} + \frac {\epsilon L d}{\sqrt {2}} + \frac {2 \sqrt {2} \sqrt {d}}{\sqrt {Q}} \sqrt {4 (Q + 1) \sigma^ {2} + 2 C (d , \epsilon)}.
$$

By choosing $\epsilon = O\left(\frac{1}{dT}\right)$ and $\eta_t = O\left(\frac{1}{Q\sqrt{dT}}\right)$, the convergence rate shown above is $O\left(\frac{\sqrt{d}}{\sqrt{T}} + \frac{d}{\sqrt{Q}}\right)$, as stated in Theorem 1. $\square$ 
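To make the update analyzed above concrete, the following sketch runs the sign-based zeroth-order descent $\boldsymbol{\theta}_{t+1} = \boldsymbol{\theta}_t - \eta\,\hat{\boldsymbol{g}}_t$ on a hypothetical smooth objective (a quadratic stand-in for the attack objective; the dimension, $Q$, $\epsilon$, $\eta$, and $T$ are illustrative choices, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
d, Q, eps, eta, T = 10, 20, 1e-3, 0.05, 200
theta = rng.normal(size=d)

def g(x):
    # Hypothetical smooth stand-in for the attack objective g(theta).
    return 0.5 * float(np.sum(x ** 2))

g0 = g(theta)
for t in range(T):
    U = rng.normal(size=(Q, d))
    diffs = np.array([g(theta + eps * u) - g(theta) for u in U])
    g_hat = np.mean(np.sign(diffs)[:, None] * U, axis=0)  # the estimate \hat{g}_t
    theta = theta - eta * g_hat                           # descent step analyzed in Theorem 1

assert g(theta) < g0  # the sign-based update makes progress on this smooth objective
```

In the actual attack, each evaluation of $g$ costs hard-label model queries, which is why the dependence of the rate on $Q$ and $T$ translates directly into query complexity.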
\ No newline at end of file diff --git a/signoptaqueryefficienthardlabeladversarialattack/images.zip b/signoptaqueryefficienthardlabeladversarialattack/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..6fdcb8de4ff6df20a9e0d96cc6df60cf6b854695 --- /dev/null +++ b/signoptaqueryefficienthardlabeladversarialattack/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:374775a9bb1f9672d3469fe4bfba0cf0d0a9b820d55245ff966a3e602ddec604 +size 820279 diff --git a/signoptaqueryefficienthardlabeladversarialattack/layout.json b/signoptaqueryefficienthardlabeladversarialattack/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..535b74d51f83ef651a3f41c66f944fd6f3921269 --- /dev/null +++ b/signoptaqueryefficienthardlabeladversarialattack/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ee945f0d77f4d14aa1cc168d552010c4e4f82916f85ea1ea23a85fde400ab12f +size 546169 diff --git a/simpleandeffectiveregularizationmethodsfortrainingonnoisilylabeleddatawithgeneralizationguarantee/1483a0e2-83de-4be2-ad43-ee7db8585760_content_list.json b/simpleandeffectiveregularizationmethodsfortrainingonnoisilylabeleddatawithgeneralizationguarantee/1483a0e2-83de-4be2-ad43-ee7db8585760_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..0b349db97b8ed82ef397d7c510cdb62cdcd76c6f --- /dev/null +++ b/simpleandeffectiveregularizationmethodsfortrainingonnoisilylabeleddatawithgeneralizationguarantee/1483a0e2-83de-4be2-ad43-ee7db8585760_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8bd3c72b93d00acd58b174442310c5bdb8aacd93df7a18ec6a0dad34cff2bb32 +size 138928 diff --git a/simpleandeffectiveregularizationmethodsfortrainingonnoisilylabeleddatawithgeneralizationguarantee/1483a0e2-83de-4be2-ad43-ee7db8585760_model.json 
b/simpleandeffectiveregularizationmethodsfortrainingonnoisilylabeleddatawithgeneralizationguarantee/1483a0e2-83de-4be2-ad43-ee7db8585760_model.json new file mode 100644 index 0000000000000000000000000000000000000000..6b881da8eda9dec63aeb6d84aca765f9232d7237 --- /dev/null +++ b/simpleandeffectiveregularizationmethodsfortrainingonnoisilylabeleddatawithgeneralizationguarantee/1483a0e2-83de-4be2-ad43-ee7db8585760_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:18522fd393fc7babd0d2bca9835dd5dcd10afdb208f11664110d9d1f370e2ed8 +size 160986 diff --git a/simpleandeffectiveregularizationmethodsfortrainingonnoisilylabeleddatawithgeneralizationguarantee/1483a0e2-83de-4be2-ad43-ee7db8585760_origin.pdf b/simpleandeffectiveregularizationmethodsfortrainingonnoisilylabeleddatawithgeneralizationguarantee/1483a0e2-83de-4be2-ad43-ee7db8585760_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..97eef4375fb5ae20a541a256a48ca70f3c7f1203 --- /dev/null +++ b/simpleandeffectiveregularizationmethodsfortrainingonnoisilylabeleddatawithgeneralizationguarantee/1483a0e2-83de-4be2-ad43-ee7db8585760_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5cbb1a74010e21e8b852100473e1371612ba32ebdec4e7d3235da3068641cb7e +size 753043 diff --git a/simpleandeffectiveregularizationmethodsfortrainingonnoisilylabeleddatawithgeneralizationguarantee/full.md b/simpleandeffectiveregularizationmethodsfortrainingonnoisilylabeleddatawithgeneralizationguarantee/full.md new file mode 100644 index 0000000000000000000000000000000000000000..c218e60c6852d4acca998349baad9a626e3f9814 --- /dev/null +++ b/simpleandeffectiveregularizationmethodsfortrainingonnoisilylabeleddatawithgeneralizationguarantee/full.md @@ -0,0 +1,615 @@ +# SIMPLE AND EFFECTIVE REGULARIZATION METHODS FOR TRAINING ON NOISILY Labeled DATA WITH GENERALIZATION GUARANTEE + +Wei Hu Zhiyuan Li Dingli Yu + +Princeton University + +{huwei, zhiyuanli, 
dingli}@cs.princeton.edu

# ABSTRACT

Over-parameterized deep neural networks trained by simple first-order methods are known to be able to fit any labeling of data. Such over-fitting ability hinders generalization when mislabeled training examples are present. On the other hand, simple regularization methods like early stopping can often achieve highly nontrivial performance on clean test data in these scenarios, a phenomenon not theoretically understood. This paper proposes and analyzes two simple and intuitive regularization methods: (i) regularization by the distance from the network parameters to their initialization, and (ii) adding a trainable auxiliary variable to the network output for each training example. Theoretically, we prove that gradient descent training with either of these two methods leads to a generalization guarantee on the clean data distribution despite being trained using noisy labels. Our generalization analysis relies on the connection between wide neural networks and the neural tangent kernel (NTK). The generalization bound is independent of the network size, and is comparable to the bound one can get when there is no label noise. Experimental results verify the effectiveness of these methods on noisily labeled datasets.

# 1 INTRODUCTION

Modern deep neural networks are trained in a highly over-parameterized regime, with many more trainable parameters than training examples. It is well known that these networks trained with simple first-order methods can fit any labels, even completely random ones (Zhang et al., 2017). Although training on properly labeled data usually leads to good generalization performance, the ability to over-fit the entire training dataset is undesirable for generalization when noisy labels are present. Therefore preventing over-fitting is crucial for robust performance, since mislabeled data are ubiquitous in very large datasets (Krishna et al., 2016). 
+

In order to prevent over-fitting to mislabeled data, some form of regularization is necessary. A simple such example is early stopping, which has been observed to be surprisingly effective for this purpose (Rolnick et al., 2017; Guan et al., 2018; Li et al., 2019). For instance, training ResNet-34 with early stopping can achieve $84\%$ test accuracy on CIFAR-10 even when $60\%$ of the training labels are corrupted (Table 1). This is nontrivial since the test error is much smaller than the error rate in the training data. How to explain such a generalization phenomenon is an intriguing theoretical question.

As a step towards a theoretical understanding of the generalization phenomenon for overparameterized neural networks when noisy labels are present, this paper proposes and analyzes two simple regularization methods as alternatives to early stopping:

1. Regularization by distance to initialization. Denote by $\theta$ the network parameters and by $\theta(0)$ its random initialization. This method adds a regularizer $\lambda \left\| \theta - \theta(0) \right\|^2$ to the training objective.
2. Adding an auxiliary variable for each training example. Let $\boldsymbol{x}_i$ be the $i$ -th training example and $\boldsymbol{f}(\boldsymbol{\theta},\cdot)$ represent the neural net. This method adds a trainable variable $\boldsymbol{b}_i$ and tries to fit the $i$ -th label using $\boldsymbol{f}(\boldsymbol{\theta},\boldsymbol{x}_i) + \lambda \boldsymbol{b}_i$ . At test time, only the neural net $\boldsymbol{f}(\boldsymbol{\theta},\cdot)$ is used and the auxiliary variables are discarded.

These two choices of regularization are well motivated with clear intuitions. First, distance to initialization has been observed to be closely related to generalization in deep learning (Neyshabur et al., 2019; Nagarajan and Kolter, 2019), so regularizing by distance to initialization can potentially help generalization. 
Second, the effectiveness of early stopping indicates that clean labels are somewhat easier to fit than wrong labels; therefore, adding an auxiliary variable could help "absorb" the noise in the labels, thus preventing the neural net itself from over-fitting.

We provide a theoretical analysis of the above two regularization methods for a class of sufficiently wide neural networks by proving a generalization bound for the trained network on the clean data distribution when the training dataset contains noisy labels. Our generalization bound depends on the (unobserved) clean labels, and is comparable to the bound one can get when there is no label noise, therefore indicating that the proposed regularization methods are robust to noisy labels.

Our theoretical analysis is based on the recently established connection between wide neural networks and the neural tangent kernel (Jacot et al., 2018; Lee et al., 2019; Arora et al., 2019a). In this line of work, parameters in a wide neural net are shown to stay close to their initialization during gradient descent training, and as a consequence, the neural net can be effectively approximated by its first-order Taylor expansion with respect to its parameters at initialization. This leads to tractable linear dynamics under $\ell_2$ loss, and the final solution can be characterized by kernel regression using a particular kernel named the neural tangent kernel (NTK). In fact, we show that for wide neural nets, both of our regularization methods, when trained with gradient descent to convergence, correspond to kernel ridge regression using the NTK, which is often regarded as an alternative to early stopping in the kernel literature. This viewpoint makes explicit the connection between our methods and early stopping.

The effectiveness of these two regularization methods is verified empirically: on MNIST and CIFAR-10, they are able to achieve highly nontrivial test accuracy, on a par with or even better than early stopping. 
Furthermore, with our regularization, the validation accuracy is almost monotone increasing throughout the entire training process, indicating their resistance to over-fitting.

# 2 RELATED WORK

The neural tangent kernel was first explicitly studied and named by Jacot et al. (2018), with several further refinements and extensions by Lee et al. (2019); Yang (2019); Arora et al. (2019a). Using a similar idea that weights stay close to initialization and that the neural network is approximated by a linear model, a series of theoretical papers studied the optimization and generalization issues of very wide deep neural nets trained by (stochastic) gradient descent (Du et al., 2019b; 2018b; Li and Liang, 2018; Allen-Zhu et al., 2018a;b; Zou et al., 2018; Arora et al., 2019b; Cao and Gu, 2019). Empirically, variants of the NTK on convolutional neural nets and graph neural nets exhibit strong practical performance (Arora et al., 2019a; Du et al., 2019a), thus suggesting that ultra-wide (or infinitely wide) neural nets are at least not irrelevant.

Our methods are closely related to kernel ridge regression, which is one of the most common kernel methods and has been widely studied. It was shown to perform comparably to early-stopped gradient descent (Bauer et al., 2007; Gerfo et al., 2008; Raskutti et al., 2014; Wei et al., 2017). Accordingly, we indeed observe in our experiments that our regularization methods perform similarly to gradient descent with early stopping in neural net training.

In another theoretical study relevant to ours, Li et al. (2019) proved that gradient descent with early stopping is robust to label noise for an over-parameterized two-layer neural net. Under a clustering assumption on data, they showed that gradient descent fits the correct labels before starting to over-fit wrong labels. 
Their result is different from ours in several aspects: they only considered two-layer nets while we allow arbitrarily deep nets; they required a clustering assumption on data while our generalization bound is general and data-dependent; furthermore, they did not address the question of generalization, but only provided guarantees on the training data.

A large body of work proposed various methods for training with noisily labeled examples, such as estimating the noise distribution (Liu and Tao, 2015) or confusion matrix (Sukhbaatar et al., 2014), using surrogate loss functions (Ghosh et al., 2017; Zhang and Sabuncu, 2018), meta-learning (Ren et al., 2018), using a pre-trained network (Jiang et al., 2017), and training two networks simultaneously (Malach and Shalev-Shwartz, 2017; Han et al., 2018; Yu et al., 2019). While our methods are not necessarily superior to these methods in terms of performance, our methods are arguably simpler (with minimal change to the normal training procedure) and come with a formal generalization guarantee.

# 3 PRELIMINARIES

Notation. We use bold-faced letters for vectors and matrices. We use $\| \cdot \|$ to denote the Euclidean norm of a vector or the spectral norm of a matrix, and $\| \cdot \| _F$ to denote the Frobenius norm of a matrix. $\langle \cdot ,\cdot \rangle$ represents the standard inner product. Let $I$ be the identity matrix of appropriate dimension. Let $[n] = \{1,2,\dots ,n\}$ . Let $\mathbb{I}[A]$ be the indicator of event $A$ .

# 3.1 SETTING: LEARNING FROM NOISILY LABELED DATA

Now we formally describe the setting considered in this paper. We first describe the binary classification setting as a warm-up, and then describe the more general setting of multi-class classification.

Binary classification. Suppose that there is an underlying data distribution $\mathcal{D}$ over $\mathbb{R}^d\times \{\pm 1\}$ , where 1 and $-1$ are labels corresponding to two classes. 
However, we only have access to samples from a noisily labeled version of $\mathcal{D}$ . Formally, the data generation process is: draw $(x,y)\sim \mathcal{D}$ , and flip the sign of label $y$ with probability $p$ $(0\leqslant p < \frac{1}{2})$ ; let $\tilde{y}\in \{\pm 1\}$ be the resulting noisy label. + +Let $\{(\pmb{x}_i,\tilde{y}_i)\}_{i = 1}^n$ be i.i.d. samples generated from the above process. Although we only have access to these noisily labeled data, the goal is still to learn a function (in the form of a neural net) that can predict the true label well on the clean distribution $\mathcal{D}$ . For binary classification, it suffices to learn a single-output function $f:\mathbb{R}^d\to \mathbb{R}$ whose sign is used to predict the class, and thus the classification error of $f$ on $\mathcal{D}$ is defined as $\mathrm{Pr}_{(\pmb {x},\pmb {y})\sim \mathcal{D}}[\mathrm{sgn}(f(\pmb {x}))\neq y]$ . + +Multi-class classification. When there are $K$ classes $(K > 2)$ , let the underlying data distribution $\mathcal{D}$ be over $\mathbb{R}^d\times [K]$ . We describe the noise generation process as a matrix $\pmb {P}\in \mathbb{R}^{K\times K}$ , whose entry $p_{c',c}$ is the probability that the label $c$ is transformed into $c^\prime (\forall c,c'\in [K])$ . Therefore the data generation process is: draw $(x,c)\sim \mathcal{D}$ , and replace the label $c$ with $\tilde{c}$ from the distribution $\operatorname *{Pr}[\tilde{c} = c'|c] = p_{c',c}(\forall c'\in [K])$ . + +Let $\{(\pmb{x}_i,\tilde{c}_i)\}_{i = 1}^n$ be i.i.d. samples from the above process. Again we would like to learn a neural net with low classification error on the clean distribution $\mathcal{D}$ . For $K$ -way classification, it is common to use a neural net with $K$ outputs, and the index of the maximum output is used to predict the class. 
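The label-noise generation process just described can be simulated directly. In the sketch below the noise matrix `P` is a hypothetical example with uniform flip probability; column `c` of `P` gives the distribution of the observed label given true label `c`, matching the convention $p_{c',c}$ above:

```python
import numpy as np

rng = np.random.default_rng(0)
K, n = 3, 10000
# Hypothetical noise matrix: P[c_prime, c] = Prob[observed label c_prime | true label c].
# Diagonal dominance (p_{c,c} > p_{c',c}) holds, as required for identifiability.
P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
assert np.allclose(P.sum(axis=0), 1.0)  # each column is a distribution

true_labels = rng.integers(0, K, size=n)                        # stand-ins for labels from D
noisy_labels = np.array([rng.choice(K, p=P[:, c]) for c in true_labels])

flip_rate = float(np.mean(noisy_labels != true_labels))         # should concentrate near 0.2
assert abs(flip_rate - 0.2) < 0.03
```

The binary setting is the special case $K = 2$ with off-diagonal entries equal to the flip probability $p$.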
Thus for $\pmb {f}:\mathbb{R}^{d}\to \mathbb{R}^{K}$ , its (top-1) classification error on $\mathcal{D}$ is $\operatorname*{Pr}_{(\pmb {x},c)\sim \mathcal{D}}[c\notin \operatorname *{argmax}_{h\in [K]}f^{(h)}(\pmb {x})]$ , where $f^{(h)}:\mathbb{R}^d\rightarrow \mathbb{R}$ is the function computed by the $h$ -th output of $\pmb{f}$ .

As is standard practice, a class label $c \in [K]$ is also treated as its one-hot encoding $e^{(c)} = (0, 0, \dots, 0, 1, 0, \dots, 0) \in \mathbb{R}^K$ (the $c$ -th coordinate being 1), which can be paired with the $K$ outputs of the network and fed into a loss function during training.

Note that it is necessary to assume $p_{c,c} > p_{c',c}$ for all $c \neq c'$ , i.e., the probability that a class label $c$ is transformed into any particular other label must be smaller than the probability that it remains correct; otherwise it is impossible to identify class $c$ correctly from noisily labeled data.

# 3.2 RECAP OF NEURAL TANGENT KERNEL

Now we briefly and informally recap the theory of the neural tangent kernel (NTK) (Jacot et al., 2018; Lee et al., 2019; Arora et al., 2019a), which establishes an equivalence between training a wide neural net and a kernel method.

We first consider a neural net with a scalar output, defined as $f(\theta, \mathbf{x}) \in \mathbb{R}$ , where $\theta \in \mathbb{R}^N$ is all the parameters in the net and $\mathbf{x} \in \mathbb{R}^d$ is the input. Suppose that the net is trained by minimizing the $\ell_2$ loss over a training dataset $\{(x_i, y_i)\}_{i=1}^n \subset \mathbb{R}^d \times \mathbb{R}$ : $L(\theta) = \frac{1}{2} \sum_{i=1}^n (f(\theta, x_i) - y_i)^2$ . Let the random initial parameters be $\theta(0)$ , and let the parameters be updated according to gradient descent on $L(\theta)$ .
It is shown that if the network is sufficiently wide, the parameters $\theta$ will stay close to the initialization $\theta(0)$ during training, so that the following first-order approximation is accurate:

$$
f (\boldsymbol {\theta}, \boldsymbol {x}) \approx f (\boldsymbol {\theta} (0), \boldsymbol {x}) + \langle \nabla_ {\boldsymbol {\theta}} f (\boldsymbol {\theta} (0), \boldsymbol {x}), \boldsymbol {\theta} - \boldsymbol {\theta} (0) \rangle . \tag {1}
$$

This approximation is exact in the infinite width limit, but can also be shown when the width is finite but sufficiently large. When approximation (1) holds, we say that we are in the NTK regime.

Define $\phi (\pmb {x}) = \nabla_{\pmb{\theta}}f(\pmb {\theta}(0),\pmb {x})$ for any $\pmb {x}\in \mathbb{R}^d$ . The right hand side in (1) is linear in $\pmb{\theta}$ . As a consequence, training on the $\ell_2$ loss with gradient descent leads to the kernel regression solution with respect to the kernel induced by the (random) features $\phi (\pmb {x})$ , which is defined as $k(\pmb {x},\pmb{x}^{\prime}) = \langle \phi (\pmb {x}),\phi (\pmb{x}^{\prime})\rangle$ for $\pmb {x},\pmb{x}^{\prime}\in \mathbb{R}^{d}$ . This kernel was named the neural tangent kernel (NTK) by Jacot et al. (2018). Although this kernel is random, it is shown that when the network is sufficiently wide, this random kernel converges to a deterministic limit in probability (Arora et al., 2019a).
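As a toy illustration of these definitions (not the paper's architecture; the width, scaling, and fixed second layer below are illustrative assumptions), the tangent features $\phi(\pmb{x})$ and the empirical NTK of a two-layer ReLU net with a trainable first layer can be computed in closed form:

```python
import numpy as np

def tangent_features(W, a, x):
    """phi(x) = gradient of f(theta, x) w.r.t. the first-layer weights W, flattened,
    for f(theta, x) = (1/sqrt(m)) * sum_r a_r * relu(w_r . x) with a held fixed.
    The gradient w.r.t. w_r is (1/sqrt(m)) * a_r * 1[w_r . x > 0] * x."""
    m = W.shape[0]
    gate = (W @ x > 0).astype(float)          # ReLU derivative, one gate per hidden unit
    return ((a * gate)[:, None] * x[None, :]).ravel() / np.sqrt(m)

def empirical_ntk(W, a, X):
    """Gram matrix with entries k(x_i, x_j) = <phi(x_i), phi(x_j)> at initialization."""
    Phi = np.stack([tangent_features(W, a, x) for x in X])
    return Phi @ Phi.T

rng = np.random.default_rng(0)
m, d = 512, 10                                # toy width m and input dimension d
W = rng.normal(size=(m, d))
a = rng.choice([-1.0, 1.0], size=m)
X = rng.normal(size=(8, d))
K = empirical_ntk(W, a, X)                    # symmetric positive semidefinite
```

Being a Gram matrix, `K` is symmetric PSD by construction, and rerunning with a larger `m` shows its entries stabilizing, consistent with the convergence to a deterministic limit mentioned above.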
If we additionally let the neural net and its initialization be defined so that the initial output is small, i.e., $f(\pmb{\theta} (0),\pmb {x})\approx 0$ , then the network at the end of training approximately computes the following function:

$$
\boldsymbol {x} \mapsto k (\boldsymbol {x}, \boldsymbol {X}) ^ {\top} (k (\boldsymbol {X}, \boldsymbol {X})) ^ {- 1} \boldsymbol {y}, \tag {2}
$$

where $\mathbf{X} = (\pmb{x}_1, \dots, \pmb{x}_n)$ is the matrix of training inputs, $\pmb{y} = (y_1, \dots, y_n)^\top$ is the vector of training targets, $k(\pmb{x}, \pmb{X}) = (k(\pmb{x}, \pmb{x}_1), \dots, k(\pmb{x}, \pmb{x}_n))^\top \in \mathbb{R}^n$ , and $k(\pmb{X}, \pmb{X}) \in \mathbb{R}^{n \times n}$ with $(i,j)$ -th entry being $k(\pmb{x}_i, \pmb{x}_j)$ .

Multiple outputs. The NTK theory above generalizes straightforwardly to the case of multiple outputs (Jacot et al., 2018; Lee et al., 2019). Suppose we train a neural net with $K$ outputs, $\pmb{f}(\pmb{\theta},\pmb{x})$ , by minimizing the $\ell_2$ loss over a training dataset $\{(\pmb{x}_i,\pmb{y}_i)\}_{i=1}^n \subset \mathbb{R}^d \times \mathbb{R}^K$ : $L(\pmb{\theta}) = \frac{1}{2}\sum_{i=1}^{n}\|\pmb{f}(\pmb{\theta},\pmb{x}_i) - \pmb{y}_i\|^2$ . When the hidden layers are sufficiently wide so that we are in the NTK regime, at the end of gradient descent, each output of $\pmb{f}$ also attains the kernel regression solution with respect to the same NTK as before, using the corresponding dimension of the training targets $\pmb{y}_i$ . Namely, the $h$ -th output of the network computes the function

$$
f ^ {(h)} (\boldsymbol {x}) = k (\boldsymbol {x}, \boldsymbol {X}) ^ {\top} (k (\boldsymbol {X}, \boldsymbol {X})) ^ {- 1} \boldsymbol {y} ^ {(h)},
$$

where $\pmb{y}^{(h)}\in \mathbb{R}^n$ is the vector whose $i$ -th coordinate is the $h$ -th coordinate of $\pmb{y}_i$ .

# 4 REGULARIZATION METHODS

In this section we describe two simple regularization methods for training with noisy labels, and show that if the network is sufficiently wide, both methods lead to kernel ridge regression using the NTK.
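The predictors that recur throughout the paper, plain kernel regression (2) and the ridge variant introduced below, amount to a single linear solve. A sketch with a generic RBF kernel standing in for the NTK (the kernel choice and the data here are only illustrative):

```python
import numpy as np

def kernel_ridge_predict(k, X_train, y, X_test, lam=0.0):
    """Evaluate x -> k(x, X)^T (k(X, X) + lam^2 I)^(-1) y at each test point.
    lam = 0 gives plain kernel regression, eq. (2); lam > 0 gives ridge regression."""
    n = len(X_train)
    K = np.array([[k(xi, xj) for xj in X_train] for xi in X_train])
    K_test = np.array([[k(x, xj) for xj in X_train] for x in X_test])
    alpha = np.linalg.solve(K + lam**2 * np.eye(n), y)
    return K_test @ alpha

rbf = lambda x, z: np.exp(-np.sum((x - z) ** 2))  # stand-in kernel, not the NTK

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
y = rng.normal(size=6)
interp = kernel_ridge_predict(rbf, X, y, X, lam=0.0)  # interpolates the training targets
shrunk = kernel_ridge_predict(rbf, X, y, X, lam=3.0)  # regularized: pulled toward 0
```

With `lam=0` the predictor fits the training targets exactly (when the kernel matrix is invertible), while a positive `lam` shrinks the fitted values, which is precisely the effect exploited in Section 4 to avoid fitting label noise.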
+ +We first consider the case of scalar target and single-output network. The generalization to multiple outputs is straightforward and is treated at the end of this section. Given a noisily labeled training dataset $\{(\pmb{x}_i,\tilde{y}_i)\}_{i = 1}^n\subset \mathbb{R}^d\times \mathbb{R}$ , let $f(\pmb {\theta},\cdot)$ be a neural net to be trained. A direct, unregularized training method would involve minimizing an objective function like $L(\pmb {\theta}) = \frac{1}{2}\sum_{i = 1}^{n}(f(\pmb {\theta},\pmb {x}_i) - \tilde{y}_i)^2$ . To prevent over-fitting, we suggest the following simple regularization methods that slightly modify this objective: + +- Method 1: Regularization using Distance to Initialization (RDI). We let the initial parameters $\theta(0)$ be randomly generated, and minimize the following regularized objective: + +$$ +L _ {\lambda} ^ {\mathrm {R D I}} (\boldsymbol {\theta}) = \frac {1}{2} \sum_ {i = 1} ^ {n} \left(f \left(\boldsymbol {\theta}, \boldsymbol {x} _ {i}\right) - \tilde {y} _ {i}\right) ^ {2} + \frac {\lambda^ {2}}{2} \| \boldsymbol {\theta} - \boldsymbol {\theta} (0) \| ^ {2}. \tag {3} +$$ + +- Method 2: adding an AUXiliary variable for each training example (AUX). We add an auxiliary trainable parameter $b_{i} \in \mathbb{R}$ for each $i \in [n]$ , and minimize the following objective: + +$$ +L _ {\lambda} ^ {\mathrm {A U X}} (\boldsymbol {\theta}, \boldsymbol {b}) = \frac {1}{2} \sum_ {i = 1} ^ {n} \left(f \left(\boldsymbol {\theta}, \boldsymbol {x} _ {i}\right) + \lambda b _ {i} - \tilde {y} _ {i}\right) ^ {2}, \tag {4} +$$ + +where $\pmb {b} = (b_{1},\dots ,b_{n})^{\top}\in \mathbb{R}^{n}$ is initialized to be $\mathbf{0}$ . + +Equivalence to kernel ridge regression in wide neural nets. 
Now we assume that we are in the NTK regime described in Section 3.2, where the neural net architecture is sufficiently wide so that the first-order approximation (1) is accurate during gradient descent: $f(\pmb{\theta}, \pmb{x}) \approx f(\pmb{\theta}(0), \pmb{x}) + \phi(\pmb{x})^\top (\pmb{\theta} - \pmb{\theta}(0))$ . Recall that we have $\phi(\pmb{x}) = \nabla_{\pmb{\theta}} f(\pmb{\theta}(0), \pmb{x})$ which induces the NTK $k(\pmb{x}, \pmb{x}') = \langle \phi(\pmb{x}), \phi(\pmb{x}') \rangle$ . Also recall that we can assume near-zero initial output: $f(\pmb{\theta}(0), \pmb{x}) \approx 0$ (see Footnote 2). Therefore we have the approximation: + +$$ +f (\boldsymbol {\theta}, \boldsymbol {x}) \approx \phi (\boldsymbol {x}) ^ {\top} (\boldsymbol {\theta} - \boldsymbol {\theta} (0)). \tag {5} +$$ + +Under the approximation (5), it suffices to consider gradient descent on the objectives (3) and (4) using the linearized model instead: + +$$ +\tilde {L} _ {\lambda} ^ {\mathrm {R D I}} (\boldsymbol {\theta}) = \frac {1}{2} \sum_ {i = 1} ^ {n} \left(\phi (\boldsymbol {x} _ {i}) ^ {\top} (\boldsymbol {\theta} - \boldsymbol {\theta} (0)) - \tilde {y} _ {i}\right) ^ {2} + \frac {\lambda^ {2}}{2} \| \boldsymbol {\theta} - \boldsymbol {\theta} (0) \| ^ {2}, +$$ + +$$ +\tilde {L} _ {\lambda} ^ {\mathrm {A U X}} (\boldsymbol {\theta}, \boldsymbol {b}) = \frac {1}{2} \sum_ {i = 1} ^ {n} \left(\phi \left(\boldsymbol {x} _ {i}\right) ^ {\top} (\boldsymbol {\theta} - \boldsymbol {\theta} (0)) + \lambda b _ {i} - \tilde {y} _ {i}\right) ^ {2}. +$$ + +The following theorem shows that in either case, gradient descent leads to the same dynamics and converges to the kernel ridge regression solution using the NTK. + +Theorem 4.1. Fix a learning rate $\eta >0$ . 
Consider gradient descent on $\tilde{L}_{\lambda}^{\mathrm{RDI}}$ with initialization $\pmb {\theta}(0)$ :

$$
\boldsymbol {\theta} (t + 1) = \boldsymbol {\theta} (t) - \eta \nabla_ {\boldsymbol {\theta}} \tilde {L} _ {\lambda} ^ {\mathrm {R D I}} (\boldsymbol {\theta} (t)), \quad t = 0, 1, 2, \dots \tag {6}
$$

and gradient descent on $\tilde{L}_{\lambda}^{\mathrm{AUX}}(\pmb {\theta},\pmb {b})$ with initialization $\bar{\pmb{\theta}}(0) = \pmb{\theta}(0)$ and $\pmb{b}(0) = \mathbf{0}$ :

$$
\bar {\boldsymbol {\theta}} (0) = \boldsymbol {\theta} (0), \quad \bar {\boldsymbol {\theta}} (t + 1) = \bar {\boldsymbol {\theta}} (t) - \eta \nabla_ {\boldsymbol {\theta}} \tilde {L} _ {\lambda} ^ {\mathrm {A U X}} (\bar {\boldsymbol {\theta}} (t), \boldsymbol {b} (t)), \quad t = 0, 1, 2, \dots \tag {7}
$$

$$
\boldsymbol {b} (0) = \mathbf {0}, \quad \boldsymbol {b} (t + 1) = \boldsymbol {b} (t) - \eta \nabla_ {\boldsymbol {b}} \tilde {L} _ {\lambda} ^ {\mathrm {A U X}} (\bar {\boldsymbol {\theta}} (t), \boldsymbol {b} (t)), \quad t = 0, 1, 2, \dots
$$

Then we must have $\pmb{\theta}(t) = \bar{\pmb{\theta}}(t)$ for all $t$ . Furthermore, if the learning rate satisfies $\eta \leqslant \frac{1}{\|k(\pmb{X},\pmb{X})\| + \lambda^2}$ , then $\{\pmb{\theta}(t)\}$ converges linearly to a limit solution $\pmb{\theta}^*$ such that

$$
\boldsymbol {\phi} (\boldsymbol {x}) ^ {\top} (\boldsymbol {\theta} ^ {*} - \boldsymbol {\theta} (0)) = k (\boldsymbol {x}, \boldsymbol {X}) ^ {\top} \left(k (\boldsymbol {X} , \boldsymbol {X}) + \lambda^ {2} \boldsymbol {I}\right) ^ {- 1} \tilde {\boldsymbol {y}}, \quad \forall \boldsymbol {x},
$$

where $\tilde{\pmb{y}} = (\tilde{y}_1,\dots ,\tilde{y}_n)^\top \in \mathbb{R}^n$ .

Proof sketch. The proof is given in Appendix B.
A key step is to observe $\bar{\pmb{\theta}}(t) = \pmb{\theta}(0) + \sum_{i=1}^{n} \frac{1}{\lambda} b_i(t) \cdot \phi(\pmb{x}_i)$ , from which we can show that $\{\pmb{\theta}(t)\}$ and $\{\bar{\pmb{\theta}}(t)\}$ follow the same update rule. + +Theorem 4.1 indicates that gradient descent on the regularized objectives (3) and (4) both learn approximately the following function at the end of training when the neural net is sufficiently wide: + +$$ +f ^ {*} (\boldsymbol {x}) = k (\boldsymbol {x}, \boldsymbol {X}) ^ {\top} \left(k (\boldsymbol {X}, \boldsymbol {X}) + \lambda^ {2} \boldsymbol {I}\right) ^ {- 1} \tilde {\boldsymbol {y}}. \tag {8} +$$ + +If no regularization were used, the labels $\tilde{\pmb{y}}$ would be fitted perfectly and the learned function would be $k(\pmb{x},\pmb{X})^\top (k(\pmb{X},\pmb{X}))^{-1}\tilde{\pmb{y}}$ (c.f. (2)). Therefore the effect of regularization is to add $\lambda^2\pmb{I}$ to the kernel matrix, and (8) is known as the solution to kernel ridge regression in kernel literature. In Section 5, we give a generalization bound of this solution on the clean data distribution, which is comparable to the bound one can obtain even when clean labels are used in training. + +Extension to multiple outputs. Suppose that the training dataset is $\{(\pmb{x}_i,\tilde{\pmb{y}}_i)\}_{i = 1}^n\subset \mathbb{R}^d\times \mathbb{R}^K$ , and the neural net $\pmb {f}(\pmb {\theta},\pmb {x})$ has $K$ outputs. 
On top of the vanilla loss $L(\pmb {\theta}) = \frac{1}{2}\sum_{i = 1}^{n}\| \pmb {f}(\pmb {\theta},\pmb {x}_i) - \tilde{\pmb{y}}_i\| ^2$ , the two regularization methods RDI and AUX give the following objectives, analogous to (3) and (4):

$$
L _ {\lambda} ^ {\mathrm {R D I}} (\boldsymbol {\theta}) = \frac {1}{2} \sum_ {i = 1} ^ {n} \| \boldsymbol {f} (\boldsymbol {\theta}, \boldsymbol {x} _ {i}) - \tilde {\boldsymbol {y}} _ {i} \| ^ {2} + \frac {\lambda^ {2}}{2} \| \boldsymbol {\theta} - \boldsymbol {\theta} (0) \| ^ {2},
$$

$$
L _ {\lambda} ^ {\mathrm {A U X}} (\boldsymbol {\theta}, \boldsymbol {B}) = \frac {1}{2} \sum_ {i = 1} ^ {n} \| \boldsymbol {f} (\boldsymbol {\theta}, \boldsymbol {x} _ {i}) + \lambda \boldsymbol {b} _ {i} - \tilde {\boldsymbol {y}} _ {i} \| ^ {2}, \quad \boldsymbol {B} = (\boldsymbol {b} _ {1}, \dots , \boldsymbol {b} _ {n}) \in \mathbb {R} ^ {K \times n}.
$$

In the NTK regime, both methods lead to the kernel ridge regression solution at each output. Namely, letting $\tilde{\pmb{Y}} = (\tilde{\pmb{y}}_1,\dots ,\tilde{\pmb{y}}_n)\in \mathbb{R}^{K\times n}$ be the training target matrix and $\tilde{\pmb{y}}^{(h)}\in \mathbb{R}^n$ be the $h$ -th row of $\tilde{\pmb{Y}}$ , at the end of training the $h$ -th output of the network learns the following function:

$$
f ^ {(h)} (\boldsymbol {x}) = k (\boldsymbol {x}, \boldsymbol {X}) ^ {\top} \left(k (\boldsymbol {X} , \boldsymbol {X}) + \lambda^ {2} \boldsymbol {I}\right) ^ {- 1} \tilde {\boldsymbol {y}} ^ {(h)}, \quad h \in [ K ]. \tag {9}
$$

# 5 GENERALIZATION GUARANTEE ON CLEAN DATA DISTRIBUTION

We show that gradient descent training on noisily labeled data with our regularization methods RDI or AUX leads to a generalization guarantee on the clean data distribution. As in Section 4, we consider the NTK regime and let $k(\cdot ,\cdot)$ be the NTK corresponding to the neural net.
It suffices to analyze the kernel ridge regression predictor, i.e., (8) for single output and (9) for multiple outputs. + +We start with a regression setting where labels are real numbers and the noisy label is the true label plus an additive noise (Theorem 5.1). Built on this result, we then provide generalization bounds for the classification settings described in Section 3.1. Omitted proofs in this section are in Appendix C. + +Theorem 5.1 (Additive label noise). Let $\mathcal{D}$ be a distribution over $\mathbb{R}^d \times [-1,1]$ . Consider the following data generation process: (i) draw $(\pmb{x},y) \sim \mathcal{D}$ , (ii) conditioned on $(\pmb{x},y)$ , let $\varepsilon$ be drawn from a noise distribution $\mathcal{E}_{\pmb{x},y}$ over $\mathbb{R}$ that may depend on $\pmb{x}$ and $y$ , and (iii) let $\tilde{y} = y + \varepsilon$ . Suppose that $\mathcal{E}_{\pmb{x},y}$ has mean 0 and is subgaussian with parameter $\sigma > 0$ , for any $(\pmb{x},y)$ . + +Let $\{(\pmb{x}_i, y_i, \tilde{y}_i)\}_{i=1}^n$ be i.i.d. samples from the above process. Denote $\pmb{X} = (\pmb{x}_1, \dots, \pmb{x}_n)$ , $\pmb{y} = (y_1, \dots, y_n)^\top$ and $\tilde{\pmb{y}} = (\tilde{y}_1, \dots, \tilde{y}_n)^\top$ . Consider the kernel ridge regression solution in (8): $f^*(\pmb{x}) = k(\pmb{x}, \pmb{X})^\top (k(\pmb{X}, \pmb{X}) + \lambda^2 \pmb{I})^{-1} \tilde{\pmb{y}}$ . Suppose that the kernel matrix satisfies $\operatorname{tr}[k(\pmb{X}, \pmb{X})] = O(n)$ . 
Then for any loss function $\ell : \mathbb{R} \times \mathbb{R} \to [0, 1]$ that is 1-Lipschitz in the first argument and satisfies $\ell(y, y) = 0$ , with probability at least $1 - \delta$ we have

$$
\mathbb {E} _ {(\boldsymbol {x}, y) \sim \mathcal {D}} \left[ \ell \left(f ^ {*} (\boldsymbol {x}), y\right) \right] \leqslant \frac {\lambda + O (1)}{2} \sqrt {\frac {\boldsymbol {y} ^ {\top} \left(k (\boldsymbol {X} , \boldsymbol {X})\right) ^ {- 1} \boldsymbol {y}}{n}} + O \left(\frac {\sigma}{\lambda}\right) + \Delta , \tag {10}
$$

where $\Delta = O\left(\sigma \sqrt{\frac{\log(1 / \delta)}{n}} + \frac{\sigma}{\lambda}\sqrt{\frac{\log(1 / \delta)}{n}} + \sqrt{\frac{\log\frac{n}{\delta\lambda}}{n}}\right)$ .

Remark 5.1. As the number of samples $n \to \infty$ , we have $\Delta \to 0$ . In order for the second term $O\left(\frac{\sigma}{\lambda}\right)$ in (10) to go to 0, we need to choose $\lambda$ to grow with $n$ , e.g., $\lambda = n^c$ for some small constant $c > 0$ . Then, the only remaining term in (10) to worry about is $\frac{\lambda}{2} \sqrt{\frac{\boldsymbol{y}^\top(k(\boldsymbol{X},\boldsymbol{X}))^{-1}\boldsymbol{y}}{n}}$ . Notice that it depends on the (unobserved) clean labels $\boldsymbol{y}$ , instead of the noisy labels $\tilde{\boldsymbol{y}}$ . By a very similar proof, one can show that training on the clean labels $\boldsymbol{y}$ (without regularization) leads to a population loss bound $O\left(\sqrt{\frac{\boldsymbol{y}^\top(k(\boldsymbol{X},\boldsymbol{X}))^{-1}\boldsymbol{y}}{n}}\right)$ . In comparison, we can see that even when there is label noise, we only lose a factor of $O(\lambda)$ in the population loss on the clean distribution, and $\lambda$ can be chosen as any slow-growing function of $n$ . If $\boldsymbol{y}^\top(k(\boldsymbol{X},\boldsymbol{X}))^{-1}\boldsymbol{y}$ grows much slower than $n$ , by choosing an appropriate $\lambda$ , our result indicates that the underlying distribution is learnable in the presence of additive label noise. See Remark 5.2 for an example.
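Theorem 4.1 and the resulting predictor (8) are easy to check numerically on a linearized model: run gradient descent on the RDI objective with synthetic features playing the role of $\phi(\pmb{x}_i)$ (all dimensions, step counts, and constants below are arbitrary choices for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, lam, eta = 20, 50, 1.0, 0.01
Phi = rng.normal(size=(n, N)) / np.sqrt(N)    # row i stands in for phi(x_i)
y_tilde = rng.normal(size=n)                  # noisy training targets

# Gradient descent on the linearized RDI objective; theta - theta(0) is tracked as `delta`.
delta = np.zeros(N)
for _ in range(5000):
    resid = Phi @ delta - y_tilde
    delta -= eta * (Phi.T @ resid + lam**2 * delta)

# Kernel ridge regression solution of eq. (8), evaluated on the training points.
K = Phi @ Phi.T
pred_gd = Phi @ delta
pred_krr = K @ np.linalg.solve(K + lam**2 * np.eye(n), y_tilde)
gap = np.max(np.abs(pred_gd - pred_krr))      # vanishes as gradient descent converges
```

The fixed point of the update satisfies $(\Phi^\top\Phi + \lambda^2 I)\,\delta = \Phi^\top\tilde{y}$, and the push-through identity $\Phi^\top(\Phi\Phi^\top + \lambda^2 I)^{-1} = (\Phi^\top\Phi + \lambda^2 I)^{-1}\Phi^\top$ turns this into exactly (8); running an analogous loop on the AUX objective produces the same iterates, as Theorem 4.1 states.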
Remark 5.2. Arora et al. (2019b) proved that two-layer ReLU neural nets trained with gradient descent can learn a class of smooth functions on the unit sphere. Their proof is by showing $\pmb{y}^{\top}(k(\pmb{X},\pmb{X}))^{-1}\pmb{y} = O(1)$ if $y_{i} = g(\pmb{x}_{i})$ ( $\forall i \in [n]$ ) for a certain function $g$ , where $k(\cdot, \cdot)$ is the NTK corresponding to two-layer ReLU nets. Combined with their result, Theorem 5.1 implies that the same class of functions can be learned by the same network even if the labels are noisy.

Next we use Theorem 5.1 to provide generalization bounds for the classification settings described in Section 3.1. For binary classification, we treat the labels as $\pm 1$ and consider a single-output neural net; for $K$ -class classification, we treat the labels as their one-hot encodings (which are $K$ standard unit vectors in $\mathbb{R}^K$ ) and consider a $K$ -output neural net. Again, we use the $\ell_2$ loss and wide neural nets, so that it suffices to consider the kernel ridge regression solution ((8) or (9)).

Theorem 5.2 (Binary classification). Consider the binary classification setting stated in Section 3.1. Let $\{(\pmb{x}_i, y_i, \tilde{y}_i)\}_{i=1}^n \subset \mathbb{R}^d \times \{\pm 1\} \times \{\pm 1\}$ be i.i.d. samples from that process. Recall that $\operatorname*{Pr}[\tilde{y}_i \neq y_i | y_i] = p$ ( $0 \leqslant p < \frac{1}{2}$ ). Denote $\pmb{X} = (\pmb{x}_1, \dots, \pmb{x}_n)$ , $\pmb{y} = (y_1, \dots, y_n)^\top$ and $\tilde{\pmb{y}} = (\tilde{y}_1, \dots, \tilde{y}_n)^\top$ . Consider the kernel ridge regression solution in (8): $f^*(\pmb{x}) = k(\pmb{x}, \pmb{X})^\top (k(\pmb{X}, \pmb{X}) + \lambda^2\pmb{I})^{-1}\tilde{\pmb{y}}$ . Suppose that the kernel matrix satisfies $\mathrm{tr}[k(\pmb{X}, \pmb{X})] = O(n)$ .
Then with probability at least $1 - \delta$ , the classification error of $f^*$ on the clean distribution $\mathcal{D}$ satisfies + +$$ +\operatorname * {P r} _ {(\boldsymbol {x}, y) \sim \mathcal {D}} [ \operatorname {s g n} (f ^ {*} (\boldsymbol {x})) \neq y ] \leqslant \frac {\lambda + O (1)}{2} \sqrt {\frac {\boldsymbol {y} ^ {\top} (k (\boldsymbol {X} , \boldsymbol {X})) ^ {- 1} \boldsymbol {y}}{n}} + \frac {1}{1 - 2 p} O \left(\frac {\sqrt {p}}{\lambda} + \sqrt {\frac {p \log \frac {1}{\delta}}{n}} + \sqrt {\frac {\log \frac {n}{\delta \lambda}}{n}}\right). +$$ + +Theorem 5.3 (Multi-class classification). Consider the $K$ -class classification setting stated in Section 3.1. Let $\{(\pmb{x}_i, c_i, \tilde{c}_i)\}_{i=1}^n \subset \mathbb{R}^d \times [K] \times [K]$ be i.i.d. samples from that process. Recall that $\operatorname*{Pr}[\tilde{c}_i = c'|c_i = c] = p_{c',c}$ ( $\forall c, c' \in [K]$ ), where the transition probabilities form a matrix $\pmb{P} \in \mathbb{R}^{K \times K}$ . Let $\mathsf{gap} = \min_{c, c' \in [K], c \neq c'} (p_{c,c} - p_{c',c})$ . + +Let $\mathbf{X} = (\mathbf{x}_1, \ldots, \mathbf{x}_n)$ , and let $\mathbf{y}_i = \mathbf{e}^{(c_i)} \in \mathbb{R}^K$ , $\tilde{\mathbf{y}}_i = \mathbf{e}^{(\tilde{c}_i)} \in \mathbb{R}^K$ be one-hot label encodings. Denote $\mathbf{Y} = (\mathbf{y}_1, \ldots, \mathbf{y}_n) \in \mathbb{R}^{K \times n}$ , $\tilde{\mathbf{Y}} = (\tilde{\mathbf{y}}_1, \ldots, \tilde{\mathbf{y}}_n) \in \mathbb{R}^{K \times n}$ , and let $\tilde{\mathbf{y}}^{(h)} \in \mathbb{R}^n$ be the h-th row of $\tilde{\mathbf{Y}}$ . Define a matrix $\mathbf{Q} = \mathbf{P} \cdot \mathbf{Y} \in \mathbb{R}^{K \times n}$ , and let $\mathbf{q}^{(h)} \in \mathbb{R}^n$ be the h-th row of $\mathbf{Q}$ . + +Consider the kernel ridge regression solution in (9): $f^{(h)}(\pmb{x}) = k(\pmb{x},\pmb{X})^\top \left(k(\pmb{X},\pmb{X}) + \lambda^2\pmb{I}\right)^{-1}\tilde{\pmb{y}}^{(h)}$ . 
Suppose that the kernel matrix satisfies $\mathrm{tr}[k(\pmb{X},\pmb{X})] = O(n)$ . Then with probability at least $1 - \delta$ , the classification error of $\pmb{f} = (f^{(h)})_{h=1}^{K}$ on the clean data distribution $\mathcal{D}$ is bounded as

$$
\begin{array}{l} \operatorname * {P r} _ {(\boldsymbol {x}, c) \sim \mathcal {D}} \left[ c \notin \operatorname {a r g m a x} _ {h \in [ K ]} f ^ {(h)} (\boldsymbol {x}) \right] \\ \leqslant \frac {1}{\operatorname {g a p}} \left(\frac {\lambda + O (1)}{2} \sum_ {h = 1} ^ {K} \sqrt {\frac {\left(\boldsymbol {q} ^ {(h)}\right) ^ {\top} \left(k (\boldsymbol {X} , \boldsymbol {X})\right) ^ {- 1} \boldsymbol {q} ^ {(h)}}{n}} + K \cdot O \left(\frac {1}{\lambda} + \sqrt {\frac {\log \frac {1}{\delta^ {\prime}}}{n}} + \sqrt {\frac {\log \frac {n}{\delta^ {\prime} \lambda}}{n}}\right)\right). \\ \end{array}
$$

Note that the bounds in Theorems 5.2 and 5.3 only depend on the clean labels instead of the noisy labels, similar to Theorem 5.1.

Figure 1: Performance on binary classification using $\ell_2$ loss. Setting 1: MNIST; Setting 2: CIFAR.
![](images/c38c68acff0a5a74d39bea851cc2c357c9035dbc5982f1a3df94d35530ce5480.jpg)
(a) Test error vs. noise level for Setting 1. For each noise level, we do a grid search for $\lambda$ and report the best accuracy.

![](images/d30af2fdc888698cc8bc4edaf734e5d9beddcd791bce8f6a782eafed84f26ae8.jpg)
(b) Training (dashed) & test (solid) errors vs. epoch for Setting 2. Noise rate $= 20\%$ , $\lambda = 4$ . Training error of AUX is measured with auxiliary variables.

# 6 EXPERIMENTS

In this section, we empirically verify the effectiveness of our regularization methods RDI and AUX, and compare them against gradient descent or stochastic gradient descent (GD/SGD) with or without early stopping. We experiment with three settings of increasing complexity:

Setting 1: Binary classification on MNIST ("5" vs. "8") using a two-layer wide fully-connected net.
Setting 2: Binary classification on CIFAR ("airplanes" vs. "automobiles") using an 11-layer convolutional neural net (CNN).

Setting 3: CIFAR-10 classification (10 classes) using a standard ResNet-34.

For a detailed description, see Appendix D. We obtain noisy labels by randomly corrupting correct labels, where the noise rate/level is the fraction of corrupted labels (for CIFAR-10, a corrupted label is chosen uniformly from the other 9 classes).

# 6.1 PERFORMANCE OF REGULARIZATION METHODS

For Setting 1 (binary MNIST), we plot the test errors of different methods under different noise rates in Figure 1a. We observe that both $\text{GD} + \text{AUX}$ and $\text{GD} + \text{RDI}$ consistently achieve much lower test error than vanilla GD, which over-fits the noisy dataset, and that they achieve test error similar to GD with early stopping. We see that $\text{GD} + \text{AUX}$ and $\text{GD} + \text{RDI}$ have essentially the same performance, which verifies our theory of their equivalence in wide networks (Theorem 4.1).

For Setting 2 (binary CIFAR), Figure 1b shows the learning curves (training and test errors) of SGD, $\mathrm{SGD} + \mathrm{AUX}$ and $\mathrm{SGD} + \mathrm{RDI}$ for noise rate $20\%$ and $\lambda = 4$ . Additional figures for other choices of $\lambda$ are in Figure 5. We again observe that both $\mathrm{SGD} + \mathrm{AUX}$ and $\mathrm{SGD} + \mathrm{RDI}$ outperform vanilla SGD and are comparable to SGD with early stopping. We also observe a discrepancy between $\mathrm{SGD} + \mathrm{AUX}$ and $\mathrm{SGD} + \mathrm{RDI}$ , possibly due to the noise in SGD or the finite width.

Finally, for Setting 3 (CIFAR-10), Table 1 shows the test accuracies of training with and without AUX. We train with both the mean square error $(\mathrm{MSE} / \ell_2$ loss) and the categorical cross entropy (CCE) loss. For normal training without AUX, we report the test accuracy at the epoch where validation accuracy is maximal (early stopping).
For training with AUX, we report the test accuracy at the last epoch as well as at the best epoch. Figure 2 shows the training curves for noise rate 0.4. We observe that training with AUX achieves very good test accuracy, even better than the best accuracy of normal training with early stopping, and better than the recent method of Zhang and Sabuncu (2018) using the same
Table 1: CIFAR-10 test accuracies of different methods under different noise rates.

| Noise rate | 0 | 0.2 | 0.4 | 0.6 |
| --- | --- | --- | --- | --- |
| Normal CCE (early stop) | 94.05±0.07 | 89.73±0.43 | 86.35±0.47 | 79.13±0.41 |
| Normal MSE (early stop) | 93.88±0.37 | 89.96±0.13 | 85.92±0.32 | 78.68±0.56 |
| CCE+AUX (last) | 94.22±0.10 | 92.07±0.10 | 87.81±0.37 | 82.60±0.29 |
| CCE+AUX (best) | 94.30±0.09 | 92.16±0.08 | 88.61±0.14 | 82.91±0.22 |
| MSE+AUX (last) | 94.25±0.10 | 92.31±0.18 | 88.92±0.30 | 83.90±0.30 |
| MSE+AUX (best) | 94.32±0.06 | 92.40±0.18 | 88.95±0.31 | 83.95±0.30 |
| (Zhang and Sabuncu, 2018) | – | 89.83±0.20 | 87.62±0.26 | 82.70±0.23 |
architecture (ResNet-34). Furthermore, AUX does not over-fit (the last epoch performs similarly to the best epoch). In addition, we find that in this setting classification performance is insensitive to whether MSE or CCE is used as the loss function.

![](images/2c50c4f1748d46eb54ed9df1ad0e2e25d2c43e514ac312d9188fe178f60d0364.jpg)
Figure 2: Test accuracy during CIFAR-10 training (noise 0.4).

![](images/17d356c24b6900e43d5b9d5eb3c85c7e433fdbee240a7a4fd55ffb28f7a08adb.jpg)
Figure 3: Setting 2, $\| \pmb{W}^{(4)}\|_{F}$ and $\| \pmb{W}^{(4)} - \pmb{W}^{(4)}(0)\|_{F}$ during training. Noise $= 20\%$ , $\lambda = 4$ .

![](images/120a9b6de830f9bdd34aa58c20f84ddebaba3896aa6f1bb5b775523fff54378f.jpg)

# 6.2 DISTANCE OF WEIGHTS TO INITIALIZATION, VERIFICATION OF THE NTK REGIME

We also track how much the weights move during training as a way to see whether the neural net is in the NTK regime. For Settings 1 and 2, we find that the neural nets are likely in or close to the NTK regime because the weight movements are small during training. Figure 3 shows how much the 4-th layer weights move during training in Setting 2. Additional figures are provided as Figures 6 to 8.

Table 2 summarizes the relationship between the distance to initialization and other hyper-parameters that we observe from various experiments. Note that the weights tend to move more with larger noise level, and that AUX and RDI can reduce the moving distance as expected (as shown in Figure 3).

The ResNet-34 in Setting 3 is likely not operating in the NTK regime, so its effectiveness cannot yet be explained by our theory. This is an intriguing direction for future work.
| | # samples | noise level | width | regularization strength λ | learning rate |
| --- | --- | --- | --- | --- | --- |
| Distance | / | / | — | \ | — |
Table 2: Relationship between distance to initialization at convergence and other hyper-parameters. “/”: positive correlation; “\”: negative correlation; “—”: no correlation as long as the width is sufficiently large and the learning rate is sufficiently small.

# 7 CONCLUSION

Towards understanding the generalization of deep neural networks in the presence of noisy labels, this paper presents two simple regularization methods and shows that they are theoretically and empirically effective. The theoretical analysis relies on the correspondence between neural networks and NTKs. We believe that a better understanding of this correspondence could help the design of other principled methods in practice. We also observe that our methods can be effective outside the NTK regime. Explaining this theoretically is left for future work.

# ACKNOWLEDGMENTS

This work is supported by NSF, ONR, Simons Foundation, Schmidt Foundation, Mozilla Research, Amazon Research, DARPA and SRC. The authors thank Sanjeev Arora for helpful discussions and suggestions. The authors thank Amazon Web Services for cloud computing time.

# REFERENCES

Zeyuan Allen-Zhu, Yuanzhi Li, and Yingyu Liang. Learning and generalization in overparameterized neural networks, going beyond two layers. arXiv preprint arXiv:1811.04918, 2018a.
Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via overparameterization. arXiv preprint arXiv:1811.03962, 2018b.
Sanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, and Ruosong Wang. On exact computation with an infinitely wide neural net. arXiv preprint arXiv:1904.11955, 2019a.
Sanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, and Ruosong Wang. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. arXiv preprint arXiv:1901.08584, 2019b.
Peter L Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results.
Journal of Machine Learning Research, 3(Nov):463-482, 2002. +Frank Bauer, Sergei Pereverzev, and Lorenzo Rosasco. On regularization algorithms in learning theory. Journal of complexity, 23(1):52-72, 2007. +Yuan Cao and Quanquan Gu. A generalization theory of gradient descent for learning overparameterized deep relu networks. arXiv preprint arXiv:1902.01384, 2019. +Lenaic Chizat and Francis Bach. A note on lazy training in supervised differentiable programming. arXiv preprint arXiv:1812.07956, 2018. +Simon S Du, Wei Hu, and Jason D Lee. Algorithmic regularization in learning deep homogeneous models: Layers are automatically balanced. In Advances in Neural Information Processing Systems 31, pages 382-393. 2018a. +Simon S Du, Jason D Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent finds global minima of deep neural networks. arXiv preprint arXiv:1811.03804, 2018b. +Simon S Du, Kangcheng Hou, Barnabás Póczos, Ruslan Salakhutdinov, Ruosong Wang, and Keyulu Xu. Graph neural tangent kernel: Fusing graph neural networks with graph kernels. arXiv preprint arXiv:1905.13192, 2019a. +Simon S. Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. In International Conference on Learning Representations, 2019b. +L Lo Gerfo, Lorenzo Rosasco, Francesca Odone, E De Vito, and Alessandro Verri. Spectral algorithms for supervised learning. Neural Computation, 20(7):1873-1897, 2008. +Aritra Ghosh, Himanshu Kumar, and PS Sastry. Robust loss functions under label noise for deep neural networks. In Thirty-First AAAI Conference on Artificial Intelligence, 2017. +Melody Y Guan, Varun Gulshan, Andrew M Dai, and Geoffrey E Hinton. Who said what: Modeling individual labelers improves classification. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. +Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, and Masashi Sugiyama. 
Co-teaching: Robust training of deep neural networks with extremely noisy labels. In NeurIPS, pages 8527-8537, 2018.
Daniel Hsu, Sham Kakade, Tong Zhang, et al. A tail inequality for quadratic forms of subgaussian random vectors. Electronic Communications in Probability, 17, 2012.
Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. arXiv preprint arXiv:1806.07572, 2018.
Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. arXiv preprint arXiv:1712.05055, 2017.

Ranjay A Krishna, Kenji Hata, Stephanie Chen, Joshua Kravitz, David A Shamma, Li Fei-Fei, and Michael S Bernstein. Embracing error to enable rapid crowdsourcing. In Proceedings of the 2016 CHI conference on human factors in computing systems, pages 3167-3179. ACM, 2016.
Jaehoon Lee, Lechao Xiao, Samuel S Schoenholz, Yasaman Bahri, Jascha Sohl-Dickstein, and Jeffrey Pennington. Wide neural networks of any depth evolve as linear models under gradient descent. arXiv preprint arXiv:1902.06720, 2019.
Mingchen Li, Mahdi Soltanolkotabi, and Samet Oymak. Gradient descent with early stopping is provably robust to label noise for overparameterized neural networks. arXiv preprint arXiv:1903.11680, 2019.
Yuanzhi Li and Yingyu Liang. Learning overparameterized neural networks via stochastic gradient descent on structured data. arXiv preprint arXiv:1808.01204, 2018.
Tongliang Liu and Dacheng Tao. Classification with noisy labels by importance reweighting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(3):447-461, 2015.
Eran Malach and Shai Shalev-Shwartz. Decoupling "when to update" from "how to update". In Advances in Neural Information Processing Systems, pages 960-970, 2017.
Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of machine learning. MIT Press, 2012.
+Vaishnavh Nagarajan and J Zico Kolter. Generalization in deep networks: The role of distance from initialization. arXiv preprint arXiv:1901.01672, 2019. +Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, and Nathan Srebro. The role of over-parametrization in generalization of neural networks. In International Conference on Learning Representations, 2019. +Garvesh Raskutti, Martin J Wainwright, and Bin Yu. Early stopping and non-parametric regression: an optimal data-dependent stopping rule. The Journal of Machine Learning Research, 15(1): 335-366, 2014. +Mengye Ren, Wenyuan Zeng, Bin Yang, and Raquel Urtasun. Learning to reweight examples for robust deep learning. arXiv preprint arXiv:1803.09050, 2018. +David Rolnick, Andreas Veit, Serge Belongie, and Nir Shavit. Deep learning is robust to massive label noise. arXiv preprint arXiv:1705.10694, 2017. +Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus. Training convolutional networks with noisy labels. arXiv preprint arXiv:1406.2080, 2014. +Yuting Wei, Fanny Yang, and Martin J Wainwright. Early stopping for kernel boosting algorithms: A general analysis with localized complexities. In Advances in Neural Information Processing Systems, pages 6065-6075, 2017. +Greg Yang. Scaling limits of wide neural networks with weight sharing: Gaussian process behavior, gradient independence, and neural tangent kernel derivation. arXiv preprint arXiv:1902.04760, 2019. +Xingrui Yu, Bo Han, Jiangchao Yao, Gang Niu, Ivor Tsang, and Masashi Sugiyama. How does disagreement help generalization against label corruption? In International Conference on Machine Learning, pages 7164-7173, 2019. +Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In Proceedings of the International Conference on Learning Representations (ICLR), 2017, 2017. +Zhilu Zhang and Mert Sabuncu. 
Generalized cross entropy loss for training deep neural networks with noisy labels. In NeurIPS, pages 8778-8788, 2018. +Difan Zou, Yuan Cao, Dongruo Zhou, and Quanquan Gu. Stochastic gradient descent optimizes over-parameterized deep ReLU networks. arXiv preprint arXiv:1811.08888, 2018. + +# A DIFFERENCE TRICK + +We give a quick analysis of the "difference trick" described in Footnote 2, i.e., let $f(\pmb{\theta}, \pmb{x}) = \frac{\sqrt{2}}{2} g(\pmb{\theta}_1, \pmb{x}) - \frac{\sqrt{2}}{2} g(\pmb{\theta}_2, \pmb{x})$ where $\pmb{\theta} = (\pmb{\theta}_1, \pmb{\theta}_2)$ , and initialize $\pmb{\theta}_1$ and $\pmb{\theta}_2$ to be the same (and still random). The following lemma implies that the NTKs for $f$ and $g$ are the same. + +Lemma A.1. If $\theta_{1}(0) = \theta_{2}(0)$ , then + +(i) $f(\pmb {\theta}(0),\pmb {x}) = 0$ $\forall \pmb {x}$ +(ii) $\langle \nabla_{\pmb{\theta}}f(\pmb {\theta}(0),\pmb {x}),\nabla_{\pmb{\theta}}f(\pmb {\theta}(0),\pmb{x}^{\prime})\rangle = \langle \nabla_{\pmb{\theta}_{1}}g(\pmb{\theta}_{1}(0),\pmb {x}),\nabla_{\pmb{\theta}_{1}}g(\pmb{\theta}_{1}(0),\pmb{x}^{\prime})\rangle ,\forall \pmb {x},\pmb{x}^{\prime}.$ + +Proof. (i) holds by definition. 
For (ii), we can calculate that + +$$ +\begin{array}{l} \left\langle \nabla_{\boldsymbol{\theta}} f(\boldsymbol{\theta}(0), \boldsymbol{x}), \nabla_{\boldsymbol{\theta}} f(\boldsymbol{\theta}(0), \boldsymbol{x}^{\prime}) \right\rangle \\ = \left\langle \nabla_{\boldsymbol{\theta}_{1}} f(\boldsymbol{\theta}(0), \boldsymbol{x}), \nabla_{\boldsymbol{\theta}_{1}} f(\boldsymbol{\theta}(0), \boldsymbol{x}^{\prime}) \right\rangle + \left\langle \nabla_{\boldsymbol{\theta}_{2}} f(\boldsymbol{\theta}(0), \boldsymbol{x}), \nabla_{\boldsymbol{\theta}_{2}} f(\boldsymbol{\theta}(0), \boldsymbol{x}^{\prime}) \right\rangle \\ = \frac{1}{2} \left\langle \nabla_{\boldsymbol{\theta}_{1}} g(\boldsymbol{\theta}_{1}(0), \boldsymbol{x}), \nabla_{\boldsymbol{\theta}_{1}} g(\boldsymbol{\theta}_{1}(0), \boldsymbol{x}^{\prime}) \right\rangle + \frac{1}{2} \left\langle \nabla_{\boldsymbol{\theta}_{2}} g(\boldsymbol{\theta}_{2}(0), \boldsymbol{x}), \nabla_{\boldsymbol{\theta}_{2}} g(\boldsymbol{\theta}_{2}(0), \boldsymbol{x}^{\prime}) \right\rangle \\ = \left\langle \nabla_{\boldsymbol{\theta}_{1}} g(\boldsymbol{\theta}_{1}(0), \boldsymbol{x}), \nabla_{\boldsymbol{\theta}_{1}} g(\boldsymbol{\theta}_{1}(0), \boldsymbol{x}^{\prime}) \right\rangle, \\ \end{array} +$$ + +where the last step uses $\boldsymbol{\theta}_{1}(0) = \boldsymbol{\theta}_{2}(0)$. + +![](images/4c61246f832751b62050c0ed1f5f0a5419b324107c8e1b39a5d96f3cef71b4cc.jpg) + +The above lemma allows us to ensure zero output at initialization while preserving the NTK. As a comparison, Chizat and Bach (2018) proposed the following "doubling trick": neurons in the last layer are duplicated, with the new neurons having the same input weights and opposite output weights. This satisfies zero output at initialization, but destroys the NTK. To see why, note that with the "doubling trick", the network will output 0 at initialization no matter what the input to its second-to-last layer is.
Thus the gradients with respect to all parameters that are not in the last two layers are 0. + +In our experiments, we observe that the performance of the neural net improves with the "difference trick." See Figure 4. This intuitively makes sense, since the initial network output is independent of the label (it only depends on the input) and thus can be viewed as noise. When the width of the neural net is infinite, the initial network output is actually a zero-mean Gaussian process, whose covariance matrix is equal to the part of the NTK contributed by the gradients of the parameters in its last layer. Therefore, learning an infinitely wide neural network with nonzero initial output is equivalent to doing kernel regression with additive correlated Gaussian noise on the training and testing labels. + +![](images/637e161f776ffe9e719365362621a4d6402795f1a5f0d10484ae0f6afcd01e93.jpg) +Figure 4: Plot of test error for a fully connected two-layer network on MNIST (binary classification between “5” and “8”) with the difference trick and different mixing coefficients $\alpha$ , where $f(\theta_1, \theta_2, x) = \sqrt{\frac{\alpha}{2}} g(\theta_1, x) - \sqrt{\frac{1 - \alpha}{2}} g(\theta_2, x)$ . Note that this parametrization preserves the NTK. The network has 10,000 hidden neurons and we train both layers with gradient descent with a fixed learning rate for 5,000 steps. The training loss is less than 0.0001 at the time of stopping. We observe that as $\alpha$ increases, the test error drops because the scale of the initial output of the network goes down. + +# B MISSING PROOF IN SECTION 4 + +Proof of Theorem 4.1.
The gradient of $\tilde{L}_{\lambda}^{\mathsf{AUX}}(\pmb{\theta},\pmb{b})$ can be written as + +$$ +\nabla_{\boldsymbol{\theta}} \tilde{L}_{\lambda}^{\mathsf{AUX}}(\boldsymbol{\theta}, \boldsymbol{b}) = \sum_{i = 1}^{n} a_{i} \phi(\boldsymbol{x}_{i}), \quad \nabla_{b_{i}} \tilde{L}_{\lambda}^{\mathsf{AUX}}(\boldsymbol{\theta}, \boldsymbol{b}) = \lambda a_{i}, \quad i = 1, \dots, n, +$$ + +where $a_{i} = \phi(\pmb{x}_{i})^{\top}(\pmb{\theta} - \pmb{\theta}(0)) + \lambda b_{i} - \tilde{y}_{i}$ ( $i\in [n]$ ). Therefore we have $\nabla_{\pmb{\theta}}\tilde{L}_{\lambda}^{\mathsf{AUX}}(\pmb{\theta},\pmb{b}) = \sum_{i = 1}^{n}\frac{1}{\lambda}\nabla_{b_i}\tilde{L}_\lambda^{\mathsf{AUX}}(\pmb{\theta},\pmb{b})\cdot \phi(\pmb{x}_{i})$ . Then, according to the gradient descent update rule (7), we know that $\bar{\pmb{\theta}}(t)$ and $\pmb{b}(t)$ can always be related by $\bar{\pmb{\theta}}(t) = \pmb{\theta}(0) + \sum_{i = 1}^{n}\frac{1}{\lambda} b_{i}(t)\cdot \phi(\pmb{x}_{i})$ . It follows that + +$$ +\begin{array}{l} \bar{\boldsymbol{\theta}}(t + 1) = \bar{\boldsymbol{\theta}}(t) - \eta \sum_{i = 1}^{n} \left(\phi(\boldsymbol{x}_{i})^{\top} (\bar{\boldsymbol{\theta}}(t) - \boldsymbol{\theta}(0)) + \lambda b_{i}(t) - \tilde{y}_{i}\right) \phi(\boldsymbol{x}_{i}) \\ = \bar{\boldsymbol{\theta}}(t) - \eta \sum_{i = 1}^{n} \left(\phi(\boldsymbol{x}_{i})^{\top} (\bar{\boldsymbol{\theta}}(t) - \boldsymbol{\theta}(0)) - \tilde{y}_{i}\right) \phi(\boldsymbol{x}_{i}) - \eta \lambda \sum_{i = 1}^{n} b_{i}(t) \phi(\boldsymbol{x}_{i}) \\ = \bar{\boldsymbol{\theta}}(t) - \eta \sum_{i = 1}^{n} \left(\phi(\boldsymbol{x}_{i})^{\top} (\bar{\boldsymbol{\theta}}(t) - \boldsymbol{\theta}(0)) - \tilde{y}_{i}\right) \phi(\boldsymbol{x}_{i}) - \eta \lambda^{2} (\bar{\boldsymbol{\theta}}(t) - \boldsymbol{\theta}(0)).
\\ \end{array} +$$ + +On the other hand, from (6) we have + +$$ +\boldsymbol {\theta} (t + 1) = \boldsymbol {\theta} (t) - \eta \sum_ {i = 1} ^ {n} \left(\phi (\boldsymbol {x} _ {i}) ^ {\top} (\boldsymbol {\theta} (t) - \boldsymbol {\theta} (0)) - \tilde {y} _ {i}\right) \phi (\boldsymbol {x} _ {i}) - \eta \lambda^ {2} (\boldsymbol {\theta} (t) - \boldsymbol {\theta} (0)). +$$ + +Comparing the above two equations, we find that $\{\pmb {\theta}(t)\}$ and $\{\bar{\pmb{\theta}} (t)\}$ have the same update rule. Since $\pmb {\theta}(0) = \bar{\pmb{\theta}} (0)$ , this proves $\pmb {\theta}(t) = \bar{\pmb{\theta}} (t)$ for all $t$ . + +Now we prove the second part of the theorem. Notice that $\tilde{L}_{\lambda}^{\mathsf{RDI}}(\pmb{\theta})$ is a strongly convex quadratic function with Hessian $\nabla_{\pmb{\theta}}^{2}\tilde{L}_{\lambda}^{\mathsf{RDI}}(\pmb{\theta}) = \sum_{i=1}^{n}\phi(\pmb{x}_{i})\phi(\pmb{x}_{i})^{\top} + \lambda^{2}\pmb{I} = \pmb{Z}\pmb{Z}^{\top} + \lambda^{2}\pmb{I}$ , where $\pmb{Z} = (\phi(\pmb{x}_{1}),\dots,\phi(\pmb{x}_{n}))$ . From the classical convex optimization theory, as long as $\eta \leqslant \frac{1}{\|\pmb{Z}\pmb{Z}^{\top} + \lambda^{2}\pmb{I}\|} = \frac{1}{\|\pmb{Z}^{\top}\pmb{Z} + \lambda^{2}\pmb{I}\|} = \frac{1}{\|\pmb{k}(\pmb{X},\pmb{X})\| + \lambda^{2}}$ , gradient descent converges linearly to the unique optimum $\pmb{\theta}^{*}$ of $\tilde{L}_{\lambda}^{\mathsf{RDI}}(\pmb{\theta})$ , which can be easily obtained: + +$$ +\boldsymbol {\theta} ^ {*} = \boldsymbol {\theta} (0) + \boldsymbol {Z} (k (\boldsymbol {X}, \boldsymbol {X}) + \lambda^ {2} \boldsymbol {I}) ^ {- 1} \tilde {\boldsymbol {y}}. 
+$$ + +Then we have + +$$ +\boldsymbol {\phi} (\boldsymbol {x}) ^ {\top} \left(\boldsymbol {\theta} ^ {*} - \boldsymbol {\theta} (0)\right) = \boldsymbol {\phi} (\boldsymbol {x}) ^ {\top} \boldsymbol {Z} \left(k (\boldsymbol {X}, \boldsymbol {X}) + \lambda^ {2} \boldsymbol {I}\right) ^ {- 1} \tilde {\boldsymbol {y}} = k (\boldsymbol {x}, \boldsymbol {X}) ^ {\top} \left(k (\boldsymbol {X}, \boldsymbol {X}) + \lambda^ {2} \boldsymbol {I}\right) ^ {- 1} \tilde {\boldsymbol {y}}, +$$ + +finishing the proof. + +# C MISSING PROOFS IN SECTION 5 + +# C.1 PROOF OF THEOREM 5.1 + +Define $\varepsilon_{i} = \tilde{y}_{i} - y_{i}$ $(i\in [n])$ , and $\varepsilon = (\varepsilon_1,\dots ,\varepsilon_n)^\top = \tilde{\pmb{y}} -\pmb {y}$ + +We first prove two lemmas. + +Lemma C.1. With probability at least $1 - \delta$ , we have + +$$ +\sqrt {\sum_ {i = 1} ^ {n} \left(f ^ {*} \left(\boldsymbol {x} _ {i}\right) - y _ {i}\right) ^ {2}} \leqslant \frac {\lambda}{2} \sqrt {\boldsymbol {y} ^ {\top} \left(k (\boldsymbol {X} , \boldsymbol {X})\right) ^ {- 1} \boldsymbol {y}} + \frac {\sigma}{2 \lambda} \sqrt {\operatorname {t r} [ k (\boldsymbol {X} , \boldsymbol {X}) ]} + \sigma \sqrt {2 \log (1 / \delta)}. +$$ + +Proof. In this proof we are conditioned on $X$ and $y$ , and only consider the randomness in $\tilde{y}$ given $X$ and $y$ . 
+ +First of all, we can write + +$$ +\left(f ^ {*} (\boldsymbol {x} _ {1}), \dots , f ^ {*} (\boldsymbol {x} _ {n})\right) ^ {\top} = k (\boldsymbol {X}, \boldsymbol {X}) \left(k (\boldsymbol {X}, \boldsymbol {X}) + \lambda^ {2} \boldsymbol {I}\right) ^ {- 1} \tilde {\boldsymbol {y}}, +$$ + +so we can write the training loss on true labels $\pmb{y}$ as + +$$ +\begin{array}{l} \sqrt {\sum_ {i = 1} ^ {n} \left(f ^ {*} \left(\boldsymbol {x} _ {i}\right) - y _ {i}\right) ^ {2}} = \left\| k (\boldsymbol {X}, \boldsymbol {X}) \left(k (\boldsymbol {X}, \boldsymbol {X}) + \lambda^ {2} \boldsymbol {I}\right) ^ {- 1} \tilde {\boldsymbol {y}} - \boldsymbol {y} \right\| \\ = \left\| k (\boldsymbol {X}, \boldsymbol {X}) \left(k (\boldsymbol {X}, \boldsymbol {X}) + \lambda^ {2} \boldsymbol {I}\right) ^ {- 1} (\boldsymbol {y} + \boldsymbol {\varepsilon}) - \boldsymbol {y} \right\| \\ = \left\| k (\boldsymbol {X}, \boldsymbol {X}) \left(k (\boldsymbol {X}, \boldsymbol {X}) + \lambda^ {2} \boldsymbol {I}\right) ^ {- 1} \boldsymbol {\varepsilon} - \lambda^ {2} \left(k (\boldsymbol {X}, \boldsymbol {X}) + \lambda^ {2} \boldsymbol {I}\right) ^ {- 1} \boldsymbol {y} \right\| \\ \leqslant \left\| k (\boldsymbol {X}, \boldsymbol {X}) \left(k (\boldsymbol {X}, \boldsymbol {X}) + \lambda^ {2} \boldsymbol {I}\right) ^ {- 1} \varepsilon \right\| + \lambda^ {2} \left\| \left(k (\boldsymbol {X}, \boldsymbol {X}) + \lambda^ {2} \boldsymbol {I}\right) ^ {- 1} \boldsymbol {y} \right\|. 
\tag {11} \\ \end{array} +$$ + +Next, since $\varepsilon$ , conditioned on $X$ and $\mathbf{y}$ , has independent and subgaussian entries (with parameter $\sigma$ ), by (Hsu et al., 2012), for any symmetric matrix $\mathbf{A}$ , with probability at least $1 - \delta$ , + +$$ +\left\| \boldsymbol {A} \boldsymbol {\varepsilon} \right\| \leqslant \sigma \sqrt {\operatorname {t r} \left[ \boldsymbol {A} ^ {2} \right] + 2 \sqrt {\operatorname {t r} \left[ \boldsymbol {A} ^ {4} \right] \log (1 / \delta)} + 2 \left\| \boldsymbol {A} ^ {2} \right\| \log (1 / \delta)}. \tag {12} +$$ + +Let $\boldsymbol{A} = k(\boldsymbol{X},\boldsymbol{X})\left(k(\boldsymbol{X},\boldsymbol{X}) + \lambda^{2}\boldsymbol{I}\right)^{-1}$ and let $\lambda_1,\ldots ,\lambda_n > 0$ be the eigenvalues of $k(\boldsymbol{X},\boldsymbol{X})$ . We have + +$$ +\begin{array}{l} \operatorname {t r} [ \boldsymbol {A} ^ {2} ] = \sum_ {i = 1} ^ {n} \frac {\lambda_ {i} ^ {2}}{(\lambda_ {i} + \lambda^ {2}) ^ {2}} \leqslant \sum_ {i = 1} ^ {n} \frac {\lambda_ {i} ^ {2}}{4 \lambda_ {i} \cdot \lambda^ {2}} = \frac {\operatorname {t r} [ k (\boldsymbol {X} , \boldsymbol {X}) ]}{4 \lambda^ {2}}, \\ \operatorname {t r} \left[ \boldsymbol {A} ^ {4} \right] = \sum_ {i = 1} ^ {n} \frac {\lambda_ {i} ^ {4}}{\left(\lambda_ {i} + \lambda^ {2}\right) ^ {4}} \leqslant \sum_ {i = 1} ^ {n} \frac {\lambda_ {i} ^ {4}}{4 ^ {4} \lambda^ {2} \left(\frac {\lambda_ {i}}{3}\right) ^ {3}} \leqslant \frac {\operatorname {t r} \left[ k (\boldsymbol {X} , \boldsymbol {X}) \right]}{9 \lambda^ {2}}, \tag {13} \\ \| \boldsymbol {A} ^ {2} \| \leqslant 1.
\\ \end{array} +$$ + +Therefore, + +$$ +\begin{array}{l} \left\| k (\boldsymbol {X}, \boldsymbol {X}) \left(k (\boldsymbol {X}, \boldsymbol {X}) + \lambda^ {2} \boldsymbol {I}\right) ^ {- 1} \boldsymbol {\varepsilon} \right\| \leqslant \sigma \sqrt {\frac {\operatorname {t r} [ k (\boldsymbol {X} , \boldsymbol {X}) ]}{4 \lambda^ {2}} + 2 \sqrt {\frac {\operatorname {t r} [ k (\boldsymbol {X} , \boldsymbol {X}) ] \log (1 / \delta)}{9 \lambda^ {2}}} + 2 \log (1 / \delta)} \\ \leqslant \sigma \left(\sqrt {\frac {\operatorname {t r} [ k (\boldsymbol {X} , \boldsymbol {X}) ]}{4 \lambda^ {2}}} + \sqrt {2 \log (1 / \delta)}\right). \\ \end{array} +$$ + +Finally, since $\left(k(\boldsymbol{X},\boldsymbol{X}) + \lambda^2\boldsymbol{I}\right)^{-2} \leq \frac{1}{4\lambda^2}\left(k(\boldsymbol{X},\boldsymbol{X})\right)^{-1}$ (note $(\lambda_i + \lambda^2)^2 \geqslant 4\lambda_i \cdot \lambda^2$ ), we have + +$$ +\lambda^ {2} \left\| (k (\boldsymbol {X}, \boldsymbol {X}) + \lambda^ {2} \boldsymbol {I}) ^ {- 1} \boldsymbol {y} \right\| = \lambda^ {2} \sqrt {\boldsymbol {y} ^ {\top} (k (\boldsymbol {X} , \boldsymbol {X}) + \lambda^ {2} \boldsymbol {I}) ^ {- 2} \boldsymbol {y}} \leqslant \frac {\lambda}{2} \sqrt {\boldsymbol {y} ^ {\top} (k (\boldsymbol {X} , \boldsymbol {X})) ^ {- 1} \boldsymbol {y}}. \tag {14} +$$ + +The proof is finished by combining (11), (13) and (14). + +Let $\mathcal{H}$ be the reproducing kernel Hilbert space (RKHS) corresponding to the kernel $k(\cdot, \cdot)$ . Recall that the RKHS norm of a function $f(\pmb{x}) = \pmb{\alpha}^{\top} k(\pmb{x}, \pmb{X})$ is + +$$ +\left\| f \right\| _ {\mathcal {H}} = \sqrt {\boldsymbol {\alpha} ^ {\top} k (\boldsymbol {X} , \boldsymbol {X}) \boldsymbol {\alpha}}. +$$ + +Lemma C.2. 
With probability at least $1 - \delta$ , we have + +$$ +\left\| f ^ {*} \right\| _ {\mathcal {H}} \leqslant \sqrt {\boldsymbol {y} ^ {\top} (k (\boldsymbol {X} , \boldsymbol {X}) + \lambda^ {2} \boldsymbol {I}) ^ {- 1} \boldsymbol {y}} + \frac {\sigma}{\lambda} \left(\sqrt {n} + \sqrt {2 \log (1 / \delta)}\right) +$$ + +Proof. In this proof we are still conditioned on $\pmb{X}$ and $\pmb{y}$ , and only consider the randomness in $\tilde{\pmb{y}}$ given $\pmb{X}$ and $\pmb{y}$ . Note that $f^{*}(\pmb{x}) = \alpha^{\top} k(\pmb{x}, \pmb{X})$ with $\alpha = (k(\pmb{X}, \pmb{X}) + \lambda^{2}\pmb{I})^{-1}\tilde{\pmb{y}}$ . Since $(k(\pmb{X}, \pmb{X}) + \lambda^{2}\pmb{I})^{-1} \leq (k(\pmb{X}, \pmb{X}))^{-1}$ and $(k(\pmb{X}, \pmb{X}) + \lambda^{2}\pmb{I})^{-1} \leq \frac{1}{\lambda^{2}}\pmb{I}$ , we can bound + +$$ +\begin{array}{l} \left\| f ^ {*} \right\| _ {\mathcal {H}} = \sqrt {\tilde {\boldsymbol {y}} ^ {\top} \left(k (\boldsymbol {X} , \boldsymbol {X}) + \lambda^ {2} \boldsymbol {I}\right) ^ {- 1} k (\boldsymbol {X} , \boldsymbol {X}) \left(k (\boldsymbol {X} , \boldsymbol {X}) + \lambda^ {2} \boldsymbol {I}\right) ^ {- 1} \tilde {\boldsymbol {y}}} \\ \leqslant \sqrt {(\boldsymbol {y} + \varepsilon) ^ {\top} (k (\boldsymbol {X} , \boldsymbol {X}) + \lambda^ {2} \boldsymbol {I}) ^ {- 1} (\boldsymbol {y} + \varepsilon)} \\ \end{array} +$$ + +$$ +\begin{array}{l} \leqslant \sqrt {\boldsymbol {y} ^ {\top} (k (\boldsymbol {X} , \boldsymbol {X}) + \lambda^ {2} \boldsymbol {I}) ^ {- 1} \boldsymbol {y}} + \sqrt {\varepsilon^ {\top} (k (\boldsymbol {X} , \boldsymbol {X}) + \lambda^ {2} \boldsymbol {I}) ^ {- 1} \varepsilon} \\ \leqslant \sqrt {\boldsymbol {y} ^ {\top} (k (\boldsymbol {X} , \boldsymbol {X}) + \lambda^ {2} \boldsymbol {I}) ^ {- 1} \boldsymbol {y}} + \frac {\sqrt {\varepsilon^ {\top} \varepsilon}}{\lambda}. 
\\ \end{array} +$$ + +Since $\varepsilon$ has independent and subgaussian (with parameter $\sigma$ ) coordinates, using (12) with $A = I$ , with probability at least $1 - \delta$ we have + +$$ +\sqrt {\varepsilon^ {\top} \varepsilon} \leqslant \sigma \left(\sqrt {n} + \sqrt {2 \log (1 / \delta)}\right). +$$ + +![](images/f82561e7b529a1b886ad8354db31ebf0b326889f2b9dd4aaff957124c7dbce4f.jpg) + +Now we prove Theorem 5.1. + +Proof of Theorem 5.1. First, by Lemma C.1, with probability $1 - \delta /3$ + +$$ +\sqrt {\sum_ {i = 1} ^ {n} \left(f ^ {*} \left(\boldsymbol {x} _ {i}\right) - y _ {i}\right) ^ {2}} \leqslant \frac {\lambda}{2} \sqrt {\boldsymbol {y} ^ {\top} \left(k (\boldsymbol {X} , \boldsymbol {X})\right) ^ {- 1} \boldsymbol {y}} + \frac {\sigma}{2 \lambda} \sqrt {\operatorname {t r} [ k (\boldsymbol {X} , \boldsymbol {X}) ]} + \sigma \sqrt {2 \log (3 / \delta)}, +$$ + +which implies that the training error on the true labels under loss function $\ell$ is bounded as + +$$ +\begin{array}{l} \frac {1}{n} \sum_ {i = 1} ^ {n} \ell \left(f ^ {*} \left(\boldsymbol {x} _ {i}\right), y _ {i}\right) = \frac {1}{n} \sum_ {i = 1} ^ {n} \left(\ell \left(f ^ {*} \left(\boldsymbol {x} _ {i}\right), y _ {i}\right) - \ell \left(y _ {i}, y _ {i}\right)\right) \\ \leqslant \frac {1}{n} \sum_ {i = 1} ^ {n} | f ^ {*} (\boldsymbol {x} _ {i}) - y _ {i} | \\ \leqslant \frac {1}{\sqrt {n}} \sqrt {\sum_ {i = 1} ^ {n} \left| f ^ {*} \left(\boldsymbol {x} _ {i}\right) - y _ {i} \right| ^ {2}} \\ \leqslant \frac {\lambda}{2} \sqrt {\frac {\boldsymbol {y} ^ {\top} (k (\boldsymbol {X} , \boldsymbol {X})) ^ {- 1} \boldsymbol {y}}{n}} + \frac {\sigma}{2 \lambda} \sqrt {\frac {\operatorname {t r} [ k (\boldsymbol {X} , \boldsymbol {X}) ]}{n}} + \sigma \sqrt {\frac {2 \log (3 / \delta)}{n}}. 
\\ \end{array} +$$ + +Next, for function class $\mathcal{F}_B = \{f(\pmb{x}) = \pmb{\alpha}^\top k(\pmb{x},\pmb{X}): \| f\|_{\mathcal{H}} \leqslant B\}$ , Bartlett and Mendelson (2002) showed that its empirical Rademacher complexity can be bounded as + +$$ +\hat {\mathcal {R}} _ {S} (\mathcal {F} _ {B}) \triangleq \frac {1}{n} \underset {\gamma \sim \{\pm 1 \} ^ {n}} {\mathbb {E}} \left[ \sup _ {f \in \mathcal {F} _ {B}} \sum_ {i = 1} ^ {n} f (\boldsymbol {x} _ {i}) \gamma_ {i} \right] \leqslant \frac {B \sqrt {\operatorname {t r} [ k (\boldsymbol {X} , \boldsymbol {X}) ]}}{n}. +$$ + +By Lemma C.2, with probability at least $1 - \delta / 3$ we have + +$$ +\left\| f ^ {*} \right\| _ {\mathcal {H}} \leqslant \sqrt {\boldsymbol {y} ^ {\top} (k (\boldsymbol {X} , \boldsymbol {X}) + \lambda^ {2} \boldsymbol {I}) ^ {- 1} \boldsymbol {y}} + \frac {\sigma}{\lambda} \left(\sqrt {n} + \sqrt {2 \log (3 / \delta)}\right) \triangleq B ^ {\prime}. +$$ + +We also recall the standard generalization bound from Rademacher complexity (see e.g. (Mohri et al., 2012)): with probability at least $1 - \delta$ , we have + +$$ +\sup _ {f \in \mathcal {F}} \left\{\mathbb {E} _ {(\boldsymbol {x}, y) \sim \mathcal {D}} [ \ell (f (\boldsymbol {x}), y) ] - \frac {1}{n} \sum_ {i = 1} ^ {n} \ell (f (\boldsymbol {x} _ {i}), y _ {i}) \right\} \leqslant 2 \hat {\mathcal {R}} _ {S} (\mathcal {F}) + 3 \sqrt {\frac {\log (2 / \delta)}{2 n}}. \tag {15} +$$ + +Then it is tempting to apply the above bound on the function class $\mathcal{F}_{B'}$ which contains $f^*$ . However, we are not yet able to do so, because $B'$ depends on the data $\pmb{X}$ and $\pmb{y}$ . 
To deal with this, we use a standard $\epsilon$ -net argument on the interval in which $B'$ must lie (note that $\| \pmb{y} \| = O(\sqrt{n})$ ): + +$$ +B ^ {\prime} \in \left[ \frac {\sigma}{\lambda} \left(\sqrt {n} + \sqrt {2 \log (3 / \delta)}\right), \frac {\sigma}{\lambda} \left(\sqrt {n} + \sqrt {2 \log (3 / \delta)}\right) + O \left(\frac {n}{\lambda}\right) \right]. +$$ + +The above interval has length $O\left(\frac{n}{\lambda}\right)$ , so it has an $\epsilon$ -net $\mathcal{N}$ of size $O\left(\frac{n}{\epsilon\lambda}\right)$ . Using a union bound, we apply the generalization bound (15) simultaneously to $\mathcal{F}_B$ for all $B \in \mathcal{N}$ : with probability at least $1 - \delta / 3$ we have + +$$ +\sup _ {f \in \mathcal {F} _ {B}} \left\{\mathbb {E} _ {(\boldsymbol {x}, y) \sim \mathcal {D}} [ \ell (f (\boldsymbol {x}), y) ] - \frac {1}{n} \sum_ {i = 1} ^ {n} \ell (f (\boldsymbol {x} _ {i}), y _ {i}) \right\} \leqslant 2 \hat {\mathcal {R}} _ {S} (\mathcal {F} _ {B}) + O \left(\sqrt {\frac {\log \frac {n}{\epsilon \delta \lambda}}{n}}\right), \quad \forall B \in \mathcal {N}. +$$ + +By definition there exists $B \in \mathcal{N}$ such that $B' \leqslant B \leqslant B' + \epsilon$ . Then we also have $f^* \in \mathcal{F}_B$ .
Using the above bound on this particular $B$ , and putting all parts together, we know that with probability at least $1 - \delta$ , + +$$ +\begin{array}{l} \mathbb {E} _ {(\boldsymbol {x}, \boldsymbol {y}) \sim \mathcal {D}} [ \ell (f ^ {*} (\boldsymbol {x}), \boldsymbol {y}) ] \\ \leqslant \frac {1}{n} \sum_ {i = 1} ^ {n} \ell \left(f ^ {*} \left(\boldsymbol {x} _ {i}\right), y _ {i}\right) + 2 \hat {\mathcal {R}} _ {S} \left(\mathcal {F} _ {B}\right) + O \left(\sqrt {\frac {\log \frac {n}{\epsilon \delta \lambda}}{n}}\right) \\ \leqslant \frac {\lambda}{2} \sqrt {\frac {\boldsymbol {y} ^ {\top} (k (\boldsymbol {X} , \boldsymbol {X})) ^ {- 1} \boldsymbol {y}}{n}} + \frac {\sigma}{2 \lambda} \sqrt {\frac {\operatorname {t r} [ k (\boldsymbol {X} , \boldsymbol {X}) ]}{n}} + \sigma \sqrt {\frac {2 \log (3 / \delta)}{n}} \\ + \frac {2 \left(B ^ {\prime} + \epsilon\right) \sqrt {\operatorname {t r} [ k (\boldsymbol {X} , \boldsymbol {X}) ]}}{n} + O \left(\sqrt {\frac {\log \frac {n}{\epsilon \delta \lambda}}{n}}\right) \\ \leqslant \frac {\lambda}{2} \sqrt {\frac {\boldsymbol {y} ^ {\top} (k (\boldsymbol {X} , \boldsymbol {X})) ^ {- 1} \boldsymbol {y}}{n}} + \frac {\sigma}{2 \lambda} \sqrt {\frac {\operatorname {t r} [ k (\boldsymbol {X} , \boldsymbol {X}) ]}{n}} + \sigma \sqrt {\frac {2 \log (3 / \delta)}{n}} \\ + \frac {2 \sqrt {\operatorname {t r} [ k (\boldsymbol {X} , \boldsymbol {X}) ]}}{n} \left(\sqrt {\boldsymbol {y} ^ {\top} (k (\boldsymbol {X} , \boldsymbol {X})) ^ {- 1} \boldsymbol {y}} + \frac {\sigma}{\lambda} \left(\sqrt {n} + \sqrt {2 \log (3 / \delta)}\right) + \epsilon\right) + O \left(\sqrt {\frac {\log \frac {n}{\epsilon \delta \lambda}}{n}}\right) \\ = \frac {\lambda}{2} \sqrt {\frac {\boldsymbol {y} ^ {\top} (k (\boldsymbol {X} , \boldsymbol {X})) ^ {- 1} \boldsymbol {y}}{n}} + \frac {5 \sigma}{2 \lambda} \sqrt {\frac {\operatorname {t r} [ k (\boldsymbol {X} , \boldsymbol {X}) ]}{n}} + \sigma \sqrt {\frac {2 \log (3 / \delta)}{n}} \\ + \frac {2 
\sqrt {\operatorname {t r} [ k (\boldsymbol {X} , \boldsymbol {X}) ]}}{n} \left(\sqrt {\boldsymbol {y} ^ {\top} (k (\boldsymbol {X} , \boldsymbol {X})) ^ {- 1} \boldsymbol {y}} + \frac {\sigma}{\lambda} \sqrt {2 \log (3 / \delta)} + \epsilon\right) + O \left(\sqrt {\frac {\log \frac {n}{\epsilon \delta \lambda}}{n}}\right). \\ \end{array} +$$ + +Then, using $\operatorname{tr}[k(\pmb{X},\pmb{X})] = O(n)$ and choosing $\epsilon = 1$ , we obtain + +$$ +\begin{array}{l} \mathbb {E} _ {(\boldsymbol {x}, \boldsymbol {y}) \sim \mathcal {D}} [ \ell (f ^ {*} (\boldsymbol {x}), \boldsymbol {y}) ] \\ \leqslant \frac {\lambda}{2} \sqrt {\frac {\boldsymbol {y} ^ {\top} (k (\boldsymbol {X} , \boldsymbol {X})) ^ {- 1} \boldsymbol {y}}{n}} + O \left(\frac {\sigma}{\lambda}\right) + \sigma \sqrt {\frac {2 \log (3 / \delta)}{n}} \\ + O \left(\sqrt {\frac {\boldsymbol {y} ^ {\top} (k (\boldsymbol {X} , \boldsymbol {X})) ^ {- 1} \boldsymbol {y}}{n}}\right) + O \left(\frac {\sigma}{\lambda \sqrt {n}} \sqrt {2 \log (3 / \delta)}\right) + O \left(\frac {1}{\sqrt {n}}\right) + O \left(\sqrt {\frac {\log \frac {n}{\delta \lambda}}{n}}\right) \\ \leqslant \frac {\lambda + O (1)}{2} \sqrt {\frac {\boldsymbol {y} ^ {\top} (k (\boldsymbol {X} , \boldsymbol {X})) ^ {- 1} \boldsymbol {y}}{n}} + O \left(\frac {\sigma}{\lambda} + \sigma \sqrt {\frac {\log (1 / \delta)}{n}} + \frac {\sigma}{\lambda} \sqrt {\frac {\log (1 / \delta)}{n}} + \sqrt {\frac {\log \frac {n}{\delta \lambda}}{n}}\right). \\ \end{array} +$$ + +![](images/a9ba45e32a09641d78d12a4b87fe3df5cdd69c9e60af3e3bd042f3dbda1d5833.jpg) + +# C.2 PROOF OF THEOREM 5.2 + +Proof of Theorem 5.2. We will apply Theorem 5.1 in this proof. Note that we cannot directly apply it on $\{(\pmb{x}_i, y_i, \tilde{y}_i)\}$ because the mean of $\tilde{y}_i - y_i$ is non-zero (conditioned on $(\pmb{x}_i, y_i)$ ). Nevertheless, this issue can be resolved by considering $\{(\pmb{x}_i, (1 - 2p)y_i, \tilde{y}_i)\}$ instead. 
Then we can easily check that, conditioned on $y_i$ , $\tilde{y}_i - (1 - 2p)y_i$ has mean 0 and is subgaussian with parameter $\sigma = O(\sqrt{p})$ . With this change, we are ready to apply Theorem 5.1. + +Define the following ramp loss for $u\in \mathbb{R}$ and $\bar{y}\in \{\pm (1 - 2p)\}$ : + +$$ +\ell^ {\mathrm {r a m p}} (u, \bar {y}) = \left\{ \begin{array}{l l} (1 - 2 p), & u \bar {y} \leqslant 0, \\ (1 - 2 p) - \frac {1}{1 - 2 p} u \bar {y}, & 0 < u \bar {y} < (1 - 2 p) ^ {2}, \\ 0, & u \bar {y} \geqslant (1 - 2 p) ^ {2}. \end{array} \right. +$$ + +It is easy to see that $\ell^{\mathrm{ramp}}(u,\bar{y})$ is 1-Lipschitz in $u$ for $\bar{y}\in \{\pm (1 - 2p)\}$ , and satisfies $\ell^{\mathrm{ramp}}(\bar{y},\bar{y}) = 0$ for $\bar{y}\in \{\pm (1 - 2p)\}$ . Then by Theorem 5.1, with probability at least $1 - \delta$ , + +$$ +\begin{array}{l} \mathbb {E} _ {(\boldsymbol {x}, y) \sim \mathcal {D}} \left[ \ell^ {\operatorname {r a m p}} \left(f ^ {*} (\boldsymbol {x}), (1 - 2 p) y\right) \right] \\ \leqslant \frac {\lambda + O (1)}{2} \sqrt {\frac {(1 - 2 p) \boldsymbol {y} ^ {\top} (k (\boldsymbol {X} , \boldsymbol {X})) ^ {- 1} \cdot (1 - 2 p) \boldsymbol {y}}{n}} + O \left(\frac {\sqrt {p}}{\lambda} + \sqrt {\frac {p \log (1 / \delta)}{n}} + \sqrt {\frac {\log \frac {n}{\delta \lambda}}{n}}\right) \\ = \frac {(1 - 2 p) (\lambda + O (1))}{2} \sqrt {\frac {\boldsymbol {y} ^ {\top} (k (\boldsymbol {X} , \boldsymbol {X})) ^ {- 1} \boldsymbol {y}}{n}} + O \left(\frac {\sqrt {p}}{\lambda} + \sqrt {\frac {p \log (1 / \delta)}{n}} + \sqrt {\frac {\log \frac {n}{\delta \lambda}}{n}}\right).
\\ \end{array} +$$ + +Note that $\ell^{\mathrm{ramp}}$ also bounds the 0-1 loss (classification error) as + +$$ +\ell^ {\operatorname {r a m p}} (u, (1 - 2 p) y) \geqslant (1 - 2 p) \mathbb {I} [ \operatorname {s g n} (u) \neq \operatorname {s g n} (y) ] = (1 - 2 p) \mathbb {I} [ \operatorname {s g n} (u) \neq y ], \quad \forall u \in \mathbb {R}, y \in \{\pm 1 \}. +$$ + +Then the conclusion follows. + +![](images/60aa992f62f7b9501d9b075a35f77256d685f4bce56d9401cf5c9d6ae00d27cc.jpg) + +# C.3 PROOF OF THEOREM 5.3 + +Proof of Theorem 5.3. By definition, we have $\mathbb{E}[\tilde{\pmb{y}}_i|c_i] = \mathbb{E}[e^{(\tilde{c}_i)}|c_i] = \pmb{p}_{c_i}$ (the $c_{i}$ -th column of $\pmb{P}$ ). This means $\mathbb{E}[\tilde{\pmb{Y}} |\pmb {Q}] = \pmb{Q}$ . Therefore we can view $\pmb{Q}$ as an encoding of clean labels, and then the observed noisy labels $\tilde{\pmb{Y}}$ is $\pmb{Q}$ plus a zero-mean noise. (The noise is always bounded by 1 so is subgaussian with parameter 1.) This enables us to apply Theorem 5.1 to each $f^{(h)}$ , which says that with probability at least $1 - \delta^{\prime}$ , + +$$ +\mathbb {E} _ {(\boldsymbol {x}, c) \sim \mathcal {D}} \left| f ^ {(h)} (\boldsymbol {x}) - p _ {h, c} \right| \leqslant \frac {\lambda + O (1)}{2} \sqrt {\frac {(\boldsymbol {q} ^ {(h)}) ^ {\top} (k (\boldsymbol {X} , \boldsymbol {X})) ^ {- 1} \boldsymbol {q} ^ {(h)}}{n}} + O \left(\frac {1}{\lambda} + \sqrt {\frac {\log \frac {1}{\delta^ {\prime}}}{n}} + \sqrt {\frac {\log \frac {n}{\delta^ {\prime} \lambda}}{n}}\right). +$$ + +Letting $\delta' = \frac{\delta}{K}$ and taking a union bound over $h \in [K]$ , we know that the above bound holds for every $h$ simultaneously with probability at least $1 - \delta$ . + +Now we proceed to bound the classification error. Note that $c \notin \operatorname{argmax}_{h \in [K]} f^{(h)}(\pmb{x})$ implies $\sum_{h=1}^{K} |f^{(h)}(\pmb{x}) - p_{h,c}| \geqslant \mathrm{gap}$ . 
Therefore the classification error can be bounded as + +$$ +\begin{array}{l} \Pr_ {(\boldsymbol {x}, c) \sim \mathcal {D}} \left[ c \notin \operatorname {a r g m a x} _ {h \in [ K ]} f ^ {(h)} (\boldsymbol {x}) \right] \leqslant \Pr_ {(\boldsymbol {x}, c) \sim \mathcal {D}} \left[ \sum_ {h = 1} ^ {K} \left| f ^ {(h)} (\boldsymbol {x}) - p _ {h, c} \right| \geqslant \mathsf {g a p} \right] \\ \leqslant \frac {1}{\operatorname {g a p}} \mathbb {E} _ {(\boldsymbol {x}, c) \sim \mathcal {D}} \left[ \sum_ {h = 1} ^ {K} \left| f ^ {(h)} (\boldsymbol {x}) - p _ {h, c} \right| \right] = \frac {1}{\operatorname {g a p}} \sum_ {h = 1} ^ {K} \mathbb {E} _ {(\boldsymbol {x}, c) \sim \mathcal {D}} \left[ \left| f ^ {(h)} (\boldsymbol {x}) - p _ {h, c} \right| \right] \\ \leqslant \frac {1}{\operatorname {g a p}} \left(\frac {\lambda + O (1)}{2} \sum_ {h = 1} ^ {K} \sqrt {\frac {\left(\boldsymbol {q} ^ {(h)}\right) ^ {\top} \left(k (\boldsymbol {X} , \boldsymbol {X})\right) ^ {- 1} \boldsymbol {q} ^ {(h)}}{n}} + K \cdot O \left(\frac {1}{\lambda} + \sqrt {\frac {\log \frac {1}{\delta^ {\prime}}}{n}} + \sqrt {\frac {\log \frac {n}{\delta^ {\prime} \lambda}}{n}}\right)\right), \\ \end{array} +$$ + +completing the proof. + +![](images/eb146c69570c9583f8ce2ee2a973a835c5514dcc5f356ce993acb9829360ff43.jpg) + +# D EXPERIMENT DETAILS AND ADDITIONAL FIGURES + +In Setting 1, we train a two-layer neural network with 10,000 hidden neurons on MNIST ("5" vs "8"). In Setting 2, we train a CNN, which has 192 channels for each of its 11 layers, on CIFAR ("airplanes" vs "automobiles"). We do not have biases in these two networks. In Setting 3, we use the standard ResNet-34. + +In Settings 1 and 2, we use a fixed learning rate for GD or SGD, and we do not use tricks like batch normalization, data augmentation, dropout, etc., except the difference trick in Appendix A. 
We also freeze the first and the last layer of the CNN and the second layer of the fully-connected net.5 + +In Setting 3, we use SGD with 0.9 momentum, weight decay of $5 \times 10^{-4}$ , and batch size 128. The learning rate is 0.1 initially and is divided by 10 after 82 and 123 epochs (164 in total). Since we + +observe little over-fitting to noise in the first stage of training (before learning rate decay), we restrict the regularization power of AUX by applying weight decay on auxiliary variables, and dividing their weight decay factor by 10 after each learning rate decay. + +See Figures 5 to 8 for additional results for Setting 2 with different $\lambda$ 's. + +![](images/2578de0d44cbf38020923c5c73a41b7dc37de11d14d637ed19966a664d85f5fa.jpg) +(a) $\lambda = 0.25$ + +![](images/35d90a51e34e017e121cf73b07213b5590fa9b58920ff8d089a0ca859a367da7.jpg) +(b) $\lambda = 0.5$ + +![](images/b65b07fae265464dbae6877a7e2b2f4ea7c1227ea30625c4069c324bcf0302fe.jpg) +(c) $\lambda = 1$ + +![](images/c54210ce395670d35c8f39339d10206fa8f74b126667e5fbb8cd9da6ca0ebb6a.jpg) +(d) $\lambda = 2$ + +![](images/ee6bdbe1f372abacb797c4907d0226f17fb5f640b360e4e823d9a4b37c2bb880.jpg) +(e) $\lambda = 4$ + +![](images/9b916b39892f4799f81c4204a0fbd10b6a4788b8f8b2d2cc14e6c77ca41678f5.jpg) +(f) $\lambda = 8$ +Figure 5: Training (dashed) & test (solid) errors vs. epoch for Setting 2. Noise rate = 20%, $\lambda \in \{0.25, 0.5, 1, 2, 4, 8\}$ . Training error of AUX is measured with auxiliary variables. + +![](images/0ed2b02a06b56cfdde636f19a31e2bf4e59dc6c6ad4fb77aeefeee307c1dcdb9.jpg) +Figure 6: Setting 2, $\| \pmb{W}^{(7)}\|_{F}$ and $\| \pmb{W}^{(7)} - \pmb{W}^{(7)}(0)\|_{F}$ during training. 
Noise rate $= 20\%$ $\lambda = 4$ + +![](images/8ad641be7ca1a7572ea69451987929d3c2c1cbed24a3efc2f97e83a8dedad562.jpg) + +![](images/344e7cd7dc7cf3928246971b9eff3376bb6adfe8a95df0f143fab80846752c1f.jpg) +Figure 7: Setting 2, $\| \pmb{W}^{(4)}\|_{F}$ and $\| \pmb{W}^{(4)} - \pmb{W}^{(4)}(0)\|_{F}$ during training. Noise rate $= 0$ , $\lambda = 2$ . + +![](images/65e895567beed83ccf7a97f8723e50d998a5a60ad2b23906d4e121fdb215e742.jpg) + +![](images/50ebd2d7b2f92e260a17567335601b1b29de40b363f8a516cb87ccd9f9ff2345.jpg) +Figure 8: Setting 2, $\| \pmb{W}^{(7)}\|_F$ and $\| \pmb{W}^{(7)} - \pmb{W}^{(7)}(0)\|_F$ during training. Noise rate $= 0$ , $\lambda = 2$ . + +![](images/8387205e0b109e26017f68bef9fbce16d1c91ce84607f7de0369a58c8be21dbb.jpg) \ No newline at end of file diff --git a/simpleandeffectiveregularizationmethodsfortrainingonnoisilylabeleddatawithgeneralizationguarantee/images.zip b/simpleandeffectiveregularizationmethodsfortrainingonnoisilylabeleddatawithgeneralizationguarantee/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..de8a797d6e06b2aaece5911cbd9f003c0523331e --- /dev/null +++ b/simpleandeffectiveregularizationmethodsfortrainingonnoisilylabeleddatawithgeneralizationguarantee/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:40200db190ce821d71a9b606ea79bb68fcf1b8df0fa3c1bc84558489676d1e5d +size 1021152 diff --git a/simpleandeffectiveregularizationmethodsfortrainingonnoisilylabeleddatawithgeneralizationguarantee/layout.json b/simpleandeffectiveregularizationmethodsfortrainingonnoisilylabeleddatawithgeneralizationguarantee/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..afdf4b972924e0d6d1d068b9d2c4ccf09c4140f3 --- /dev/null +++ b/simpleandeffectiveregularizationmethodsfortrainingonnoisilylabeleddatawithgeneralizationguarantee/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:442c5fe12e4d4aacbdc05a7bf0046a584b67de9b0f7881790ca74dbc0b426eb4 +size 853582 diff --git a/singleepisodepolicytransferinreinforcementlearning/7080033e-b4df-418e-8c4f-86a2033772d9_content_list.json b/singleepisodepolicytransferinreinforcementlearning/7080033e-b4df-418e-8c4f-86a2033772d9_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..43c8a6e30981108d628927f6c60f3aac7bbdaebf --- /dev/null +++ b/singleepisodepolicytransferinreinforcementlearning/7080033e-b4df-418e-8c4f-86a2033772d9_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a395fbed5a548fca8019445d9a9af70a6fff3cf3c12fb48169c0c1ad858ef506 +size 108557 diff --git a/singleepisodepolicytransferinreinforcementlearning/7080033e-b4df-418e-8c4f-86a2033772d9_model.json b/singleepisodepolicytransferinreinforcementlearning/7080033e-b4df-418e-8c4f-86a2033772d9_model.json new file mode 100644 index 0000000000000000000000000000000000000000..9ecdd2319611bc06975db7ef590452d1264c7949 --- /dev/null +++ b/singleepisodepolicytransferinreinforcementlearning/7080033e-b4df-418e-8c4f-86a2033772d9_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5ea0485f0f88de8612383fe18ae8a38e75808e763e1851855fa0a5df732bf329 +size 130815 diff --git a/singleepisodepolicytransferinreinforcementlearning/7080033e-b4df-418e-8c4f-86a2033772d9_origin.pdf b/singleepisodepolicytransferinreinforcementlearning/7080033e-b4df-418e-8c4f-86a2033772d9_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a21bef24dd24cb2ffa917c7954cb5f63bbde9736 --- /dev/null +++ b/singleepisodepolicytransferinreinforcementlearning/7080033e-b4df-418e-8c4f-86a2033772d9_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0a6e8b53d46e4c9e24f33a232a208fb0484a9aecf94992bd96627876dff06277 +size 933696 diff --git a/singleepisodepolicytransferinreinforcementlearning/full.md 
b/singleepisodepolicytransferinreinforcementlearning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..361dcf35942e13aa2001d26018428e263511ca77 --- /dev/null +++ b/singleepisodepolicytransferinreinforcementlearning/full.md @@ -0,0 +1,430 @@ +# SINGLE EPISODE POLICY TRANSFER IN REINFORCEMENT LEARNING + +Jiachen Yang + +Georgia Institute of Technology, USA +jiachen.yang@gatech.edu + +Brenden Petersen + +Lawrence Livermore National Laboratory, USA +petersen33@llnl.gov + +Hongyuan Zha + +Georgia Institute of Technology, USA +zha@cc.gatech.edu + +Daniel Faissol + +Lawrence Livermore National Laboratory, USA +faissoll@llnl.gov + +# ABSTRACT + +Transfer and adaptation to new unknown environmental dynamics is a key challenge for reinforcement learning (RL). An even greater challenge is performing near-optimally in a single attempt at test time, possibly without access to dense rewards, which is not addressed by current methods that require multiple experience rollouts for adaptation. To achieve single episode transfer in a family of environments with related dynamics, we propose a general algorithm that optimizes a probe and an inference model to rapidly estimate underlying latent variables of test dynamics, which are then immediately used as input to a universal control policy. This modular approach enables integration of state-of-the-art algorithms for variational inference or RL. Moreover, our approach does not require access to rewards at test time, allowing it to perform in settings where existing adaptive approaches cannot. In diverse experimental domains with a single episode test constraint, our method significantly outperforms existing adaptive approaches and shows favorable performance against baselines for robust transfer.
+ +# 1 INTRODUCTION + +One salient feature of human intelligence is the ability to perform well in a single attempt at a new task instance, by recognizing critical characteristics of the instance and immediately executing appropriate behavior based on experience in similar instances. Artificial agents must do likewise in applications where success must be achieved in one attempt and failure is irreversible. This problem setting, single episode transfer, imposes a challenging constraint in which an agent experiences—and is evaluated on—only one episode of a test instance. + +As a motivating example, a key challenge in precision medicine is the uniqueness of each patient's response to therapeutics (Hodson, 2016; Bordbar et al., 2015; Whirl-Carrillo et al., 2012). Adaptive therapy is a promising approach that formulates a treatment strategy as a sequential decision-making problem (Zhang et al., 2017; West et al., 2018; Petersen et al., 2019). However, heterogeneity among instances may require explicitly accounting for factors that underlie individual patient dynamics. For example, in the case of adaptive therapy for sepsis (Petersen et al., 2019), predicting patient response prior to treatment is not possible. However, differences in patient responses can be observed via blood measurements very early after the onset of treatment (Cockrell and An, 2018). + +As a first step to address single episode transfer in reinforcement learning (RL), we propose a general algorithm for near-optimal test-time performance in a family of environments where differences in dynamics can be ascertained early during an episode. Our key idea is to train an inference model and a probe that together achieve rapid inference of latent variables—which account for variation in a family of similar dynamical systems—using a small fraction (e.g., $5\%$ ) of the test episode, then deploy a universal policy conditioned on the estimated parameters for near-optimal control on the new instance. 
Our approach combines the advantages of robust transfer and adaptation-based transfer, as we learn a single universal policy that requires no further training during test, but which is adapted to the new environment by conditioning on an unsupervised estimation of new latent dynamics. + +In contrast to methods that quickly adapt or train policies via gradients during test but assume access to multiple test rollouts and/or dense rewards (Finn et al., 2017; Killian et al., 2017; Rakelly et al., 2019), we explicitly optimize for performance in one test episode without accessing the reward function at test time. Hence our method applies to real-world settings in which rewards during test are highly delayed or even completely inaccessible—e.g., a reward that depends on physiological factors that are accessible only in simulation and not from real patients. We also consider computation time a crucial factor for real-time application, whereas some existing approaches require considerable computation during test (Killian et al., 2017). Our algorithm builds on variational inference and RL as submodules, which ensures practical compatibility with existing RL workflows. + +Our main contribution is a simple general algorithm for single episode transfer in families of environments with varying dynamics, via rapid inference of latent variables and immediate execution of a universal policy. Our method attains significantly higher cumulative rewards, with orders of magnitude faster computation time during test, than the state-of-the-art model-based method (Killian et al., 2017), on benchmark high-dimensional domains whose dynamics are discontinuous and continuous in latent parameters. We also show superior performance over optimization-based meta-learning and favorable performance versus baselines for robust transfer. 
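The probe-infer-control pipeline described in this introduction can be sketched end-to-end in a toy setting. Everything below is a hypothetical stand-in — a one-dimensional environment whose dynamics are scaled by a hidden latent, with trivial probe, encoder, and control functions — meant only to show the single-episode control flow, not the paper's trained components.

```python
# Toy single-episode deployment: probe briefly, infer the latent once,
# then act with a latent-conditioned policy for the rest of the episode.
class ToyEnv:
    """1-D chain whose step size is scaled by a hidden latent z."""
    def __init__(self, z):
        self.z, self.s = z, 0.0

    def reset(self):
        self.s = 0.0
        return self.s

    def step(self, a):
        self.s += self.z * a  # the latent variable shapes the dynamics
        return self.s

def probe_policy(s):
    return 1.0  # constant probing action (illustrative)

def encoder(trajectory):
    s1, _ = trajectory[0]  # first displacement equals z in this toy env
    return s1

def control_policy(s, z_hat):
    return 1.0 / z_hat  # universal policy conditioned on the estimate

def single_episode(env, T_p, T_total):
    s = env.reset()
    trajectory = []
    for _ in range(T_p):               # probe phase: T_p steps
        a = probe_policy(s)
        s = env.step(a)
        trajectory.append((s, a))
    z_hat = encoder(trajectory)        # one-shot latent inference
    for _ in range(T_total - T_p):     # control phase, no retraining
        s = env.step(control_policy(s, z_hat))
    return s, z_hat

final_state, z_hat = single_episode(ToyEnv(z=0.5), T_p=1, T_total=5)
```

Note that the reward never appears at test time: only states and actions from the short probe phase feed the encoder, matching the setting where rewards are delayed or inaccessible.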
+ +# 2 SINGLE EPISODE TRANSFER IN RL: PROBLEM SETUP + +Our goal is to train a model that performs close to optimal within a single episode of a test instance with new unknown dynamics. We formalize the problem as a family $(S, \mathcal{A}, \mathcal{T}, R, \gamma)$ , where $(S, \mathcal{A}, R, \gamma)$ are the state space, action space, reward function, and discount of an episodic Markov decision process (MDP). Each instance of the family is a stationary MDP with transition function $\mathcal{T}_z(s'|s, a) \in \mathcal{T}$ . When a set $\mathcal{Z}$ of physical parameters determines transition dynamics (Konidaris and Doshi-Velez, 2014), each $\mathcal{T}_z$ has a hidden parameter $z \in \mathcal{Z}$ that is sampled once from a distribution $P_{\mathcal{Z}}$ and held constant for that instance. For more general stochastic systems whose modes of behavior are not easily attributed to physical parameters, $\mathcal{Z}$ is induced by a generative latent variable model that indirectly associates each $\mathcal{T}_z$ to a latent variable $z$ learned from observed trajectory data. We refer to "latent variable" for both cases, with the clear ontological difference understood. Depending on application, $\mathcal{T}_z$ can be continuous or discontinuous in $z$ . We strictly enforce the challenging constraint that latent variables are never observed, in contrast to methods that use known values during training (Yu et al., 2017), to ensure the framework applies to challenging cases without prior knowledge. + +This formulation captures a diverse set of important problems. Latent space $\mathcal{Z}$ has physical meaning in systems where $\mathcal{T}_z$ is a continuous function of physical parameters (e.g., friction and stiffness) with unknown values. 
In contrast, a discrete set $\mathcal{Z}$ can induce qualitatively different dynamics, such as a 2D navigation task where $z\in \{0,1\}$ decides if the same action moves in either a cardinal direction or its opposite (Killian et al., 2017). Such drastic impact of latent variables may arise when a single drug is effective for some patients but causes serious side effects for others (Cockrell and An, 2018). + +Training phase. Our training approach is fully compatible with RL for episodic environments. We sample many instances, either via a simulator with controllable change of instances or using off-policy batch data in which demarcation of instances—but not values of latent variables—is known, and train for one or more episodes on each instance. While we focus on the case with known change of instances, the rare case of unknown demarcation can be approached either by preprocessing steps such as clustering trajectory data or using a dynamic variant of our algorithm (Appendix C). + +Single test episode. In contrast to prior work that depend on the luxury of multiple experience rollouts for adaptation during test time (Doshi-Velez and Konidaris, 2016; Killian et al., 2017; Finn et al., 2017; Rakelly et al., 2019), we introduce the strict constraint that the trained model has access to—and is evaluated on—only one episode of a new test instance. This reflects the need to perform near-optimally as soon as possible in critical applications such as precision medicine, where an episode for a new patient with new physiological dynamics is the entirety of hospitalization. + +# 3 SINGLE EPISODE POLICY TRANSFER + +We present Single Episode Policy Transfer (SEPT), a high-level algorithm for single episode transfer between MDPs with different dynamics. The following sections discuss specific design choices in SEPT, all of which are combined in synergy for near-optimal performance in a single test episode. 
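The flipped-action navigation instance mentioned above makes the qualitative impact of a discrete latent concrete. The toy transition function below is purely illustrative (not the benchmark's actual implementation): a hidden $z \in \{0,1\}$ decides whether each action moves in a cardinal direction or its exact opposite, so no $z$-ignorant policy can be optimal for both instances.

```python
# Toy 2D navigation family: z in {0, 1} flips the effect of every action.
ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def transition(state, action, z):
    dx, dy = ACTIONS[action]
    if z == 1:            # latent variable reverses every action's effect
        dx, dy = -dx, -dy
    x, y = state
    return (x + dx, y + dy)

# The same action in the two instances produces opposite displacements.
s_z0 = transition((0, 0), "up", z=0)   # moves to (0, 1)
s_z1 = transition((0, 0), "up", z=1)   # moves to (0, -1)
```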
+ +# 3.1 POLICY TRANSFER THROUGH LATENT SPACE + +Our best theories of natural and engineered systems involve physical constants and design parameters that enter into dynamical models. This physicalist viewpoint motivates a partition for transfer learning in families of MDPs: 1. learn a representation of latent variables with an inference model that rapidly encodes a vector $\hat{z}$ of discriminative features for a new instance; 2. train a universal policy $\pi(a|s, z)$ to perform near-optimally for dynamics corresponding to any latent variable in $\mathcal{Z}$ ; 3. immediately deploy both the inference model and universal policy on a given test episode. To build on the generality of model-free RL, and for scalability to systems with complex dynamics, we do not expend computational effort to learn a model of $\mathcal{T}_z(s'|s, a)$ , in contrast to model-based approaches (Killian et al., 2017; Yao et al., 2018). Instead, we leverage expressive variational inference models to represent latent variables and provide uncertainty quantification. + +In domains with ground truth hidden parameters, a latent variable encoding is the most succinct representation of differences in dynamics between instances. As the encoding $\hat{z}$ is held constant for all episodes of an instance, a universal policy $\pi(a|s,z)$ can either adapt to all instances when $\mathcal{Z}$ is finite, or interpolate between instances when $\mathcal{T}_z$ is continuous in $z$ (Schaul et al., 2015). Estimating a discriminative encoding for a new instance enables immediate deployment of $\pi(a|s,z)$ on the single test episode, bypassing the need for further fine-tuning. This is critical for applications where further training complex models on a test instance is not permitted due to safety concerns. In contrast, methods that do not explicitly estimate a latent representation of varied dynamics must use precious experiences in the test episode to tune the trained policy (Finn et al., 2017). 
+ +In the training phase, we generate an optimized $^1$ dataset $\mathcal{D} \coloneqq \{\tau^i\}_{i=1}^N$ of short trajectories, where each $\tau^i \coloneqq (s_1^i, a_1^i, \ldots, s_{T_p}^i, a_{T_p}^i)$ is a sequence of early state-action pairs at the start of episodes of instance $\mathcal{T}_i \in \mathcal{T}$ (e.g. $T_p = 5$ ). We train a variational auto-encoder, comprising an approximate posterior inference model $q_\phi(z|\tau)$ that produces a latent encoding $\hat{z}$ from $\tau$ and a parameterized generative model $p_\psi(\tau|z)$ . The dimension chosen for $\hat{z}$ may differ from the exact true dimension when it exists but is unknown; domain knowledge can aid the choice of dimensionality reduction. Because dynamics of a large variety of natural systems are determined by independent parameters (e.g., coefficient of contact friction and Reynolds number can vary independently), we consider a disentangled latent representation where latent units capture the effects of independent generative parameters. To this end, we bring $\beta$ -VAE (Higgins et al., 2017) into the context of families of dynamical systems, choosing an isotropic unit Gaussian as the prior and imposing the constraint $\bar{D}_{KL}(q_\phi(z|\tau^i)\| p(z)) < \epsilon$ . The $\beta$ -VAE is trained by maximizing the variational lower bound $\mathcal{L}(\psi, \phi; \tau^i)$ for each $\tau^i$ across $\mathcal{D}$ : + +$$ +\max _ {\psi , \phi} \log p _ {\psi} \left(\tau^ {i}\right) \geq \mathcal {L} \left(\psi , \phi ; \tau^ {i}\right) := - \beta D _ {K L} \left(q _ {\phi} \left(z \mid \tau^ {i}\right) \| p (z)\right) + \mathbb {E} _ {q _ {\phi} \left(z \mid \tau^ {i}\right)} \left[ \log p _ {\psi} \left(\tau^ {i} \mid z\right) \right] \tag {1} +$$ + +This subsumes the VAE (Kingma and Welling, 2014) as a special case $(\beta = 1)$ , and we refer to both as VAE in the following. 
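Under a diagonal Gaussian posterior and the isotropic unit Gaussian prior, the KL term in the bound above has a closed form, so the objective is cheap to evaluate. A minimal sketch follows; the encoder outputs and the reconstruction log-likelihood are placeholder numbers, not values from a trained model.

```python
import math

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ),
    the penalty term in equation (1)."""
    return 0.5 * sum(m * m + math.exp(lv) - 1.0 - lv
                     for m, lv in zip(mu, log_var))

def beta_vae_lower_bound(recon_log_prob, mu, log_var, beta):
    """Single-sample estimate of L(psi, phi; tau^i): reconstruction
    term minus beta times the KL term."""
    return -beta * kl_to_standard_normal(mu, log_var) + recon_log_prob

# A posterior equal to the prior contributes zero KL, and beta rescales
# only the KL penalty, not the reconstruction term.
kl_zero = kl_to_standard_normal([0.0, 0.0], [0.0, 0.0])
lb1 = beta_vae_lower_bound(-3.0, [0.5, -0.5], [0.0, 0.0], beta=1.0)
lb4 = beta_vae_lower_bound(-3.0, [0.5, -0.5], [0.0, 0.0], beta=4.0)
```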
Since latent variables only serve to differentiate among trajectories that arise from different transition functions, the meaning of latent variables is not affected by isometries and hence the value of $\hat{z}$ by itself need not have any simple relation to a physically meaningful $z$ even when one exists. Only the partition of latent space is important for training a universal policy. + +Earlier methods for a family of similar dynamics relied on Bayesian neural network (BNN) approximations of the entire transition function $s_{t + 1} \sim \hat{\mathcal{T}}_z^{(\mathrm{BNN})}(s_t, a_t)$ , which was either used to perform computationally expensive fictional rollouts during test time (Killian et al., 2017) or used indirectly to further optimize a posterior over $z$ (Yao et al., 2018). Our use of variational inference is more economical: the encoder $q_{\phi}(z|\tau)$ can be used immediately to infer latent variables during test, while the decoder $p_{\psi}(\tau | z)$ plays a crucial role for optimized probing in our algorithm (see Section 3.3). + +In systems with ground truth hidden parameters, we desire two additional properties. The encoder should produce low-variance encodings, which we implement by minimizing the entropy of $q_{\phi}(z|\tau)$ : + +$$ +\min _ {\phi} H \left(q _ {\phi} (z | \tau)\right) := - \int_ {z} q _ {\phi} (z | \tau) \log q _ {\phi} (z | \tau) d z = \frac {D}{2} \log (2 \pi) + \frac {1}{2} \sum_ {d = 1} ^ {D} \left(1 + \log \sigma_ {d} ^ {2}\right) \tag {2} +$$ + +under a diagonal Gaussian parameterization, where $\sigma_d^2 = \mathrm{Var}(q_\phi (z|\tau))$ and $\dim (z) = D$ . We add $-H(q_{\phi}(z|\tau))$ as a regularizer to equation 1. Second, we must capture the impact of $z$ on higher- + +order dynamics. 
While previous work neglects the order of transitions $(s_t, a_t, s_{t+1})$ in a trajectory (Rakelly et al., 2019), we note that a single transition may be compatible with multiple instances whose differences manifest only at higher orders. In general, partitioning the latent space requires taking the ordering of a temporally-extended trajectory into account. Therefore, we parameterize our encoder $q_{\phi}(z|\tau)$ using a bidirectional LSTM—as both temporal directions of $(s_t, a_t)$ pairs are informative—and we use an LSTM decoder $p_{\psi}(\tau|z)$ (architecture in Appendix E.2). In contrast to embedding trajectories from a single MDP for hierarchical learning (Co-Reyes et al., 2018), our purpose is to encode trajectories from different instances of transition dynamics for optimal control. + +# 3.2 TRANSFER OF A UNIVERSAL POLICY + +We train a single universal policy $\pi(a|s,z)$ and deploy the same policy during test (without further optimization), for two reasons: robustness against imperfection in latent variable representation and significant improvement in scalability. Earlier methods trained multiple optimal policies $\{\pi_i^*(a|s)\}_{i=1}^N$ on training instances with a set $\{z^i\}_{i=1}^N$ of hidden parameters, then employed either behavioral cloning (Yao et al., 2018) or off-policy Q-learning (Arnekvist et al., 2019) to train a final policy $\pi(a|s,z)$ using a dataset $\{(s_t,\hat{z}^i; a_t \sim \pi_i^*(a|s_t))\}$ . However, this supervised training scheme may not be robust (Yu et al., 2017): if $\pi(a|s,z)$ were trained only using instance-specific optimal state-action pairs generated by $\pi_i^*(a|s)$ and posterior samples of $\hat{z}$ from an optimal inference model, it may not generalize well when faced with states and encodings that were not present during training. 
Moreover, it is computationally infeasible to train a collection $\{\pi_i^*\}_{i=1}^N$ —which is thrown away during test—when faced with a large set of training instances from a continuous set $\mathcal{Z}$ . Instead, we interleave training of the VAE and a single policy $\pi(a|s,z)$ , benefiting from considerable computation savings at training time, and higher robustness due to larger effective sample count. + +# 3.3 OPTIMIZED PROBING FOR ACCELERATED LATENT VARIABLE INFERENCE + +To execute near-optimal control within a single test episode, we first rapidly compute $\hat{z}$ using a short trajectory of initial experience. This is loosely analogous to the use of preliminary medical treatment to define subsequent prescriptions that better match a patient's unique physiological response. Our goal of rapid inference motivates two algorithmic design choices to optimize this initial phase. First, the trajectory $\tau$ used for inference by $q_{\phi}(z|\tau)$ must be optimized, in the sense of machine teaching (Zhu et al., 2018), as certain trajectories are more suitable than others for inferring latent variables that underlie system dynamics. If specific degrees of freedom are impacted the most by latent variables, an agent should probe exactly those dimensions to produce an informative trajectory for inference. Conversely, methods that deploy a single universal policy without an initial probing phase (Yao et al., 2018) can fail in adversarial cases, such as when the initial placeholder $\hat{z}$ used in $\pi_{\theta}(a|s,\cdot)$ at the start of an instance causes failure to exercise dimensions of dynamics that are necessary for inference. Second, the VAE must be specifically trained on a dataset $\mathcal{D}$ of short trajectories consisting of initial steps of each training episode. We cannot expend a long trajectory for input to the encoder during test, to ensure enough remaining steps for control. 
Hence, single episode transfer motivates the machine teaching problem of learning to distinguish among dynamics: our algorithm must have learned both to generate and to use a short initial trajectory to estimate a representation of dynamics for control. + +Our key idea of optimized probing for accelerated latent variable inference is to train a dedicated probe policy $\pi_{\varphi}(a|s)$ to generate a dataset $\mathcal{D}$ of short trajectories at the beginning of all training episodes, such that the VAE's performance on $\mathcal{D}$ is optimized. Orthogonal to training a meta-policy for faster exploration during standard RL training (Xu et al., 2018), our probe and VAE are trained for the purpose of performing well on a new test MDP. For ease of exposition, we discuss the case with access to a simulator, but our method easily allows use of off-policy batch data. We start each training episode using $\pi_{\varphi}$ for a probe phase lasting $T_{p}$ steps, record the probe trajectory $\tau_{p}$ into $\mathcal{D}$ , train the VAE using minibatches from $\mathcal{D}$ , then use $\tau_{p}$ with the encoder to generate $\hat{z}$ for use by $\pi_{\theta}(a|s,\hat{z})$ to complete the remainder of the episode (Algorithm 1). At test time, SEPT only requires lines 5, 8, and 9 in Algorithm 1 (training step in 9 removed; see Algorithm 2). The reward function for $\pi_{\varphi}$ is defined as the VAE objective, approximated by the variational lower bound (1): $R_{p}(\tau) := \mathcal{L}(\psi ,\phi ;\tau)\leq \log p_{\psi}(\tau)$ . 
This feedback loop between the probe and VAE directly trains the + +Algorithm 1 Single Episode Policy Transfer: training phase +1: procedure SEPT-TRAIN +2: Initialize encoder $\phi$ , decoder $\psi$ , probe policy $\varphi$ , control policy $\theta$ , and trajectory buffer $\mathcal{D}$ +3: for each instance $\mathcal{T}_z$ with transition function sampled from $\mathcal{T}$ do +4: for each episode on instance $\mathcal{T}_z$ do +5: Execute $\pi_{\varphi}$ for $T_p$ steps and store trajectory $\tau_p$ into $\mathcal{D}$ +6: Use variational lower bound (1) as the reward to train $\pi_{\varphi}$ by descending gradient (3) +7: Train VAE using minibatches from $\mathcal{D}$ for gradient ascent on (1) and descent on (2) +8: Estimate $\hat{z}$ from $\tau_p$ using encoder $q_{\phi}(z|\tau)$ +9: Execute $\pi_{\theta}(a|s,z)$ with $\hat{z}$ for remaining time steps and train it with suitable RL algorithm +10: end for +11: end for +12: end procedure + +probe to help the VAE's inference of latent variables that distinguish different dynamics (Figure 1). We provide detailed justification as follows. First we state a result derived in Appendix A: + +Proposition 1. Let $p_{\varphi}(\tau)$ denote the distribution of trajectories induced by $\pi_{\varphi}$ . Then the gradient of the entropy $H(p_{\varphi}(\tau))$ is given by + +$$ +\nabla_ {\varphi} H \left(p _ {\varphi} (\tau)\right) = \mathbb {E} _ {p _ {\varphi} (\tau)} \left[ \nabla_ {\varphi} \sum_ {i = 0} ^ {T _ {p} - 1} \log \left(\pi_ {\varphi} \left(a _ {i} \mid s _ {i}\right)\right) (- \log p _ {\varphi} (\tau)) \right] \tag {3} +$$ + +Noting that dataset $\mathcal{D}$ follows distribution $p_{\varphi}$ and that the VAE is exactly trained to maximize the log probability of $\mathcal{D}$ , we use $\mathcal{L}(\psi, \phi; \tau)$ as a tractable lowerbound on $\log p_{\varphi}(\tau)$ . Crucially, to generate optimal probe trajectories for the VAE, we take a minimum-entropy viewpoint and descend the gradient (3). 
This is opposite of a maximum entropy viewpoint that encourages the policy to generate diverse trajectories (Co-Reyes et al., 2018), which would minimize $\log p_{\varphi}(\tau)$ and produce an adversarial dataset for the VAE—hence, optimal probing is not equivalent to diverse exploration. The degenerate case of $\pi_{\varphi}$ learning to "stay still" for minimum entropy is precluded by any source of environmental stochasticity: trajectories from different instances will still differ, so degenerate trajectories result in low VAE performance. Finally we observe that equation 3 is the defining equation of a simple policy gradient algorithm (Williams, 1992) for training $\pi_{\varphi}$ , with $\log p_{\varphi}(\tau)$ interpreted as the cumulative reward of a trajectory generated by $\pi_{\varphi}$ . This completes our justification for defining reward $R_{p}(\tau) \coloneqq \mathcal{L}(\psi, \phi; \tau)$ . We also show empirically in ablation experiments that this reward is more effective than choices that encourage high perturbation of state dimensions or high entropy (Section 6). + +The VAE objective function may not perfectly evaluate a probe trajectory generated by $\pi_{\varphi}$ because the objective value increases due to VAE training regardless of $\pi_{\varphi}$ . To give a more stable reward signal to $\pi_{\varphi}$ , we can use a second VAE whose parameters slowly track the main VAE according to $\psi' \gets \alpha \psi + (1 - \alpha) \psi'$ for $\alpha \in [0, 1]$ , and similarly for $\phi'$ . While analogous to target networks in DQN (Mnih et al., 2015), the difference is that our second VAE is used to compute the reward for $\pi_{\varphi}$ . + +![](images/def79b59059e1ff40845e738d856f52590d022ac15f5460c9536dd3033124eec.jpg) +Figure 1: $\pi_{\varphi}$ learns to generate an optimal dataset for the VAE, whose performance is the reward for $\pi_{\varphi}$ . Encoding $\hat{z}$ by the VAE is given to control policy $\pi_{\theta}$ . 
+ +# 4 RELATED WORK + +Transfer learning in a family of MDPs with different dynamics manifests in various formulations (Taylor and Stone, 2009). Analysis of $\epsilon$ -stationary MDPs and $\epsilon$ -MDPs provide theoretical grounding by showing that an RL algorithm that learns an optimal policy in an MDP can also learn a near-optimal policy for multiple transition functions (Kalmár et al., 1998; Szita et al., 2002). + +Imposing more structure, the hidden-parameter Markov decision process (HiP-MDP) formalism posits a space of hidden parameters that determine transition dynamics, and implements transfer by model-based policy training after inference of latent parameters (Doshi-Velez and Konidaris, 2016; Konidaris and Doshi-Velez, 2014). Our work considers HiP-MDP as a widely applicable yet special case of a general viewpoint, in which the existence of hidden parameters is not assumed but rather is induced by a latent variable inference model. The key structural difference from POMDPs (Kaelbling et al., 1998) is that given fixed latent values, each instance from the family is an MDP with no hidden states; hence, unlike in POMDPs, tracking a history of observations provides no benefit. In contrast to multi-task learning (Caruana, 1997), which uses the same tasks for training and test, and in contrast to parameterized-skill learning (Da Silva et al., 2012), where an agent learns from a collection of rewards with given task identities in one environment with fixed dynamics, our training and test MDPs have different dynamics and identities of instances are not given. + +Prior latent variable based methods for transfer in RL depend on a multitude of optimal policies during training (Arnekvist et al., 2019), or learn a surrogate transition model for model predictive control with real-time posterior updates during test (Perez et al., 2018). Our variational model-free approach does not incur either of these high computational costs. 
We encode trajectories to infer latent representation of differing dynamics, in contrast to state encodings in (Zhang et al., 2018). Rather than formulating variational inference in the space of optimal value functions (Tirinzoni et al., 2018), we implement transfer through variational inference in a latent space that underlies dynamics. Previous work for transfer across dynamics with hidden parameters employ model-based RL with Gaussian process and Bayesian neural network (BNN) models of the transition function (Doshi-Velez and Konidaris, 2016; Killian et al., 2017), which require computationally expensive fictional rollouts to train a policy from scratch during test time and poses difficulties for real-time test deployment. DPT uses a fully-trained BNN to further optimize latent variable during a single test episode, but faces scalability issues as it needs one optimal policy per training instance (Yao et al., 2018). In contrast, our method does not need a transition function and can be deployed without optimization during test. Methods for robust transfer either require access to multiple rounds from the test MDP during training (Rajeswaran et al., 2017), or require the distribution over hidden variables to be known or controllable (Paul et al., 2019). While meta-learning (Finn et al., 2017; Rusu et al., 2019; Zintgraf et al., 2019; Rakelly et al., 2019) in principle can take one gradient step during a single test episode, prior empirical evaluation were not made with this constraint enforced, and adaptation during test is impossible in settings without dense rewards. + +# 5 EXPERIMENTAL SETUP + +We conducted experiments on three benchmark domains with diverse challenges to evaluate the performance, speed of reward attainment, and computational time of SEPT versus five baselines in the single test episode3. We evaluated four ablation and variants of SEPT to investigate the necessity of all algorithmic design choices. 
For each method on each domain, we conducted 20 independent training runs. For each trained model, we evaluate on $M$ independent test instances, all starting with the same model; adaptations during the single test episode, if done by any method, are not preserved across the independent test instances. This means we evaluate on a total of $20M$ independent test instances per method per domain. Hyperparameters were adjusted using a coarse coordinate search on validation performance. We used DDQN with prioritized replay (Van Hasselt et al., 2016; Schaul et al., 2016) as the base RL component of all methods for a fair evaluation of transfer performance; other RL algorithms can be readily substituted. + +Domains. We use the same continuous state discrete action HiP-MDPs proposed by Killian et al. (2017) for benchmarking. Each isolated instance from each domain is solvable by RL, but it is highly challenging, if not impossible, for naïve RL to perform optimally for all instances because significantly different dynamics require different optimal policies. In 2D navigation, dynamics are discontinuous in $z \in \{0,1\}$ as follows: location of barrier to goal region, flipped effect of actions (i.e., depending on $z$ , the same action moves in either a cardinal direction or its opposite), and direction of a nonlinear wind. In Acrobot (Sutton et al., 1998), the agent applies $\{+1,0,-1\}$ torques to swing a two-link pendulum above a certain height. Dynamics are determined by a vector $z = (m_{1},m_{2},l_{1},l_{2})$ of masses and lengths, centered at 1.0. 
![](images/33504bd5f80a6d43a7b611bcf9f99f210173a40ccc5a4247f391f09f6d053549.jpg)
(a) 2D navigation

![](images/57cfb902969bc7e7ea1283db070fd658c1d530846ab0cfb281f9a62bd8c52263.jpg)
(b) Acrobot

![](images/401930f79ea5cf2a187b2961e2a5b98a3a4b2db8f4b0439b6a2ffb0a0be45def.jpg)
(c) 2D navigation

![](images/0efecee820943e9607e4da11926a542f49485878d9b363c41ebd2cdc518e9d04.jpg)
(d) Acrobot

![](images/87c6c4d62eb3bc905b7b1afa21317bdf3121be2c7e070edaeff98836d5e96e74.jpg)

![](images/9865c52c0addf1eb9d9086c6b57346bd23b18ec74d82f9f6c51430fc4c715538.jpg)
(f) 2D navigation

![](images/898e3de7784054c808923aa91f50afc8ccc20b3dd82534c3ee23226e9b14ecaa.jpg)
(g) Acrobot

![](images/d652ac016bdb82992617df321e48732f90fe71b2315a24b7e5fe90b120ec50e5.jpg)
(h) 2D navigation

![](images/f543b061c00e2ae5b166f856cba96522a2c5250eea5aa11ef4c3eacf9bb3bbae.jpg)
(i) Acrobot

![](images/12e7746a0f487881678d911e33bf9d8e5d488bc1389680cf843663e9d77d26fb.jpg)
(e) HIV
(j) HIV
Figure 2: (a-e): Comparison against baselines. (a-b): Number of steps to solve 2D navigation and Acrobot in a single test episode; failure to solve is assigned a count of 50 in 2D nav and 200 in Acrobot. (c-e): Cumulative reward versus test episode step. BNN requires long computation time and showed low rewards on HIV, hence we report 3 seeds in Figure 4b. (f-j): Ablation results. DynaSEPT is out of range in (g), see Figure 4a. Error bars show standard error of mean over all test instances over 20 training runs per method.

We use four unique instances in training and validation, constructed by sampling $\Delta z$ uniformly from $\{-0.3, -0.1, 0.1, 0.3\}$ and adding it to all components of $z$ . During test, we sample $\Delta z$ from $\{-0.35, -0.2, 0.2, 0.35\}$ to evaluate both interpolation and extrapolation.
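As a concrete illustration, the instance construction just described can be sketched as follows (the `make_instance` helper and list layout are hypothetical; only the offset values and the base value of 1.0 come from the text):

```python
# Sketch of the Acrobot instance construction described above: each hidden
# parameter vector z = (m1, m2, l1, l2) is centered at 1.0, and one offset
# dz is added to all components. (Illustrative helper, not the authors' code.)
TRAIN_OFFSETS = [-0.3, -0.1, 0.1, 0.3]    # training/validation instances
TEST_OFFSETS = [-0.35, -0.2, 0.2, 0.35]   # interpolation and extrapolation

def make_instance(dz, dim=4):
    """Return the hidden-parameter vector z for one instance."""
    return [1.0 + dz] * dim

train_instances = [make_instance(dz) for dz in TRAIN_OFFSETS]
test_instances = [make_instance(dz) for dz in TEST_OFFSETS]
```

The test offsets ±0.2 fall inside the training range (interpolation), while ±0.35 fall outside it (extrapolation).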
In $\mathbf{HIV}$ , a patient's state dynamics are modeled by differential equations with high sensitivity to 12 hidden variables and separate steady-state regions of health, such that different patients require unique treatment policies (Adams et al., 2004). Four actions determine binary activation of two drugs. We have $M = 10, 5, 5$ for 2D navigation, Acrobot, and HIV, respectively.

Baselines. First, we evaluated two simple baselines that establish approximate bounds on test performance of methods that train a single policy: as a lower bound, $\mathbf{Avg}$ trains a single policy $\pi(a|s)$ on all instances sampled during training and runs directly on test instances; as an upper bound in the limit of perfect function approximation for methods that use latent variables as input, Oracle $\pi(a|s,z)$ receives the true hidden parameter $z$ during both training and test. Next, we adapted existing methods, detailed in Appendix E.1, to single-episode test evaluation: 1. we allowed the model-based method BNN (Killian et al., 2017) to fine-tune a pre-trained BNN and train a policy using BNN-generated fictional episodes every 10 steps during the test episode; 2. we adapted the adversarial part of EPOpt (Rajeswaran et al., 2017), which we term EPOpt-adv, by training a policy $\pi(a|s)$ on instances with the lowest 10-percentile performance; 3. we evaluated MAML as an archetype of meta-learning methods that require dense rewards or multiple rollouts (Finn et al., 2017). We allowed MAML to use a trajectory of the same length as SEPT's probe trajectory for one gradient step during test. We used the same architecture for the RL module of all methods (Appendix E.2). To our knowledge, these model-free baselines are evaluated on single-episode transfer for the first time in this work.

Ablations.
To investigate the benefit of our optimized probing method for accelerated inference, we designed an ablation called SEPT-NP, in which trajectories generated by the control policy are used by the encoder for inference and stored into $\mathcal{D}$ to train the VAE. Second, we investigated an alternative reward function for the probe, labeled TotalVar and defined as $R_p(\tau) := \frac{1}{T_p}\sum_{t=1}^{T_p-1}\sum_{i=1}^{\dim(S)}|s_{t+1,i} - s_{t,i}|$ for probe trajectory $\tau$ . In contrast to the minimum entropy viewpoint in Section 3.3, this reward encourages generation of trajectories that maximize total variation across all state space dimensions. Third, we tested the maximum entropy viewpoint on probe trajectory generation, labeled MaxEnt, by giving the negative lower bound as the probe reward: $R_p(\tau) := -\mathcal{L}(\psi, \phi; \tau)$ . Last, we tested whether DynaSEPT, an extension that dynamically decides to probe or execute control (Appendix C), has any benefit for stationary dynamics.

# 6 RESULTS AND DISCUSSION

2D navigation and Acrobot are solved upon attaining terminal rewards of 1000 and 10, respectively. SEPT outperforms all baselines in 2D navigation and takes significantly fewer steps to solve (Figures 2a and 2c). While a single instance of 2D navigation is easy for RL, handling multiple instances is highly non-trivial. EPOpt-adv and Avg almost never solve the test instance—we set "steps to solve" to 50 for test episodes that were unsolved—because interpolating between instance-specific optimal policies in policy parameter space is not meaningful for any task instance. MAML did not perform well despite having the advantage of being provided with rewards at test time, unlike SEPT; the gradient adaptation step was likely ineffective because the rewards are sparse and delayed. BNN requires significantly more steps than SEPT, and it uses four orders of magnitude longer computation time (Table 4), due to training a policy from scratch during the test episode. Training times of all algorithms except BNN are in the same order of magnitude (Table 3).

![](images/de1130eed4928856ed028b6c77ecdb7d97a2c3644bc523b327039a69b1311927.jpg)
(a) 2D navigation

![](images/83effeee5cf6026a7b3f4a14c5698448d5ec9f67a8a92a4a59ca6a663a29af10.jpg)
(b) Acrobot

![](images/3753c85b295940d2ca7ae390c0bbf0b3d6ce97335875ba1f73ffdd4df4202c9a.jpg)
(c) HIV

![](images/94b5d0efc57c3c144cc3545943a222968068e07e0f6b55892b1aefb333250bb0.jpg)
(d) 2D navigation

![](images/54768aae0acff63b5c663f24964dd115a8041bf5f9e41749e247537cdd562461.jpg)
(e) Acrobot

![](images/9369422c2933dc14901d8d7ce67e484e971485efef22b1a5c7741cdeb3a14730.jpg)
(f) HIV
Figure 3: Cumulative reward on test episode for different $T_{p}$ (a-c) and different $\dim(z)$ (d-f). 8M independent test instances for each hyperparameter setting.

In Acrobot and HIV, where dynamics are continuous in latent variables, interpolation within policy space can produce meaningful policies, so all baselines are feasible in principle. SEPT is statistically significantly faster than BNN and Avg and is within error bars of MAML, while EPOpt-adv outperforms the rest by a small margin (Figures 2b and 2d). Figure 5 shows that SEPT is competitive in terms of percentage of solved instances. As the true values of latent variables for Acrobot test instances were interpolated and extrapolated from the training values, this shows that SEPT is robust to out-of-training dynamics. BNN requires more steps due to simultaneously learning and executing control during the test episode. On HIV, SEPT reaches significantly higher cumulative rewards than all methods. Oracle is within the margin of error of Avg.
This may be due to insufficient examples of the high-dimensional ground truth hidden parameters. Due to its long computational time, we ran three seeds for BNN on HIV, shown in Figure 4b, and found it was unable to adapt within one test episode.

Comparing directly to reported results in DPT (Yao et al., 2018), SEPT solves 2D Navigation at least $33\%$ ( $>10$ steps) faster, and the cumulative reward of SEPT (mean and standard error) is above DPT's mean cumulative reward in Acrobot (Table 2). Together, these results show that methods that explicitly distinguish different dynamics (e.g., SEPT and BNN) can significantly outperform methods that implicitly interpolate in policy parameter space (e.g., Avg and EPOpt-adv) in settings where $z$ has a large discontinuous effect on dynamics, such as 2D navigation. When dynamics are continuous in latent variables (e.g., Acrobot and HIV), interpolation-based methods fare better than BNN, which faces the difficulty of learning a model of the entire family of dynamics. SEPT worked the best in the first case and is robust to the second case because it explicitly distinguishes dynamics and does not require learning a full transition model. Moreover, SEPT does not require rewards at test time, allowing it to be useful on a broader class of problems than optimization-based meta-learning approaches like MAML. Appendix D contains training curves.

Ablation results. Comparing to SEPT-NP, Figures 2f, 2g and 2j show that the probe phase is necessary to solve 2D navigation quickly, while giving similar performance in Acrobot and significant improvement in HIV. SEPT significantly outperformed TotalVar in 2D navigation and HIV, while TotalVar gives a slight improvement in Acrobot, showing that directly using VAE performance as the reward for probing in certain environments can be more effective than a reward that deliberately encourages perturbation of state dimensions.
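For concreteness, the TotalVar ablation reward defined in Section 5 can be sketched as follows (numpy; the `(T_p, dim_S)` trajectory layout is an illustrative assumption, not the paper's implementation):

```python
import numpy as np

def totalvar_reward(states):
    """TotalVar probe reward: (1/T_p) * sum over time and state dimensions
    of |s_{t+1,i} - s_{t,i}|, as defined in Section 5.

    `states` is assumed to be a (T_p, dim_S) array of states visited by the
    probe policy (a hypothetical layout chosen for illustration)."""
    T_p = len(states)
    diffs = np.abs(np.diff(states, axis=0))  # |s_{t+1} - s_t| per dimension
    return diffs.sum() / T_p
```

A larger reward is given to probe trajectories that perturb many state dimensions by large amounts, in contrast to SEPT's use of the variational lower bound as the probe reward.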
The clear advantage of SEPT over MaxEnt in 2D navigation and HIV supports our hypothesis in Section 3.3 that the variational lower bound, rather than its negation in the maximum entropy viewpoint, should be used as the probe reward, while performance was not significantly differentiated in Acrobot. SEPT outperforms DynaSEPT on all problems where dynamics are stationary during each instance. On the other hand, DynaSEPT is the better choice in a non-stationary variant of 2D navigation where the dynamics "switch" abruptly at $t = 10$ (Figure 4c).

Robustness. Figure 3 shows that SEPT is robust to varying the probe length $T_{p}$ and $\dim(z)$ . Even with certain suboptimal probe lengths and $\dim(z)$ , it can outperform all baselines on 2D navigation in both steps-to-solve and final reward; it is within error bars of all baselines on Acrobot based on final cumulative reward; and its final cumulative reward exceeds that of baselines in HIV. Increasing $T_{p}$ means forgoing valuable steps of the control policy and increasing the difficulty of trajectory reconstruction for the VAE in high dimensional state spaces; $T_{p}$ is a hyperparameter that should be validated for each application. Appendix D.5 shows the effect of $\beta$ on latent variable encodings.

# 7 CONCLUSION AND FUTURE DIRECTIONS

We propose a general algorithm for single episode transfer among MDPs with different stationary dynamics, a challenging goal with real-world significance that deserves increased effort from the transfer learning and RL community. Our method, Single Episode Policy Transfer (SEPT), trains a probe policy and an inference model to discover a latent representation of dynamics using very few initial steps in a single test episode, such that a universal policy can execute optimal control without access to rewards at test time.
Strong performance versus baselines in domains involving both continuous and discontinuous dependence of dynamics on latent variables shows the promise of SEPT for problems where different dynamics can be distinguished via a short probing phase.

The dedicated probing phase may be improved by other objectives, in addition to performance of the inference model, to mitigate the risk and opportunity cost of probing. An open challenge is single episode transfer in domains where differences in dynamics of different instances are not detectable early during an episode, or where latent variables are fixed but dynamics are nonstationary. Further research on dynamic probing and control, as sketched in DynaSEPT, is one path toward addressing this challenge. Our work is one step along a broader avenue of research on general transfer learning in RL equipped with the realistic constraint of a single episode for adaptation and evaluation.

# ACKNOWLEDGMENTS

This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344. Lawrence Livermore National Security, LLC. LLNL-JRNL-791194.

# REFERENCES

Adams, B. M., Banks, H. T., Kwon, H.-D., and Tran, H. T. (2004). Dynamic multidrug therapies for HIV: Optimal and STI control approaches. *Mathematical biosciences and engineering*, 1(2), 223-241.
+Arnekvist, I., Kragic, D., and Stork, J. A. (2019). VPE: Variational policy embedding for transfer reinforcement learning. In 2019 International Conference on Robotics and Automation (ICRA), pages 36-42. IEEE.
+Bordbar, A., McCloskey, D., Zielinski, D. C., Sonnenschein, N., Jamshidi, N., and Palsson, B. O. (2015). Personalized whole-cell kinetic models of metabolism for discovery in genomics and pharmacodynamics. Cell systems, 1(4), 283-292.
+Caruana, R. (1997). Multitask learning. Machine learning, 28(1), 41-75.
+Co-Reyes, J. D., Liu, Y., Gupta, A., Eysenbach, B., Abbeel, P., and Levine, S.
(2018). Self-consistent trajectory autoencoder: Hierarchical reinforcement learning with trajectory embeddings. In Proceedings of the 35th International Conference on Machine Learning, pages 1009-1018.
+Cockrell, R. C. and An, G. (2018). Examining the controllability of sepsis using genetic algorithms on an agent-based model of systemic inflammation. PLOS Computational Biology, 14(2), 1-17.
+Da Silva, B. C., Konidaris, G., and Barto, A. G. (2012). Learning parameterized skills. In Proceedings of the 29th International Conference on Machine Learning, pages 1443-1450. Omnipress.
+Doshi-Velez, F. and Konidaris, G. (2016). Hidden parameter markov decision processes: A semiparametric regression approach for discovering latent task parametrizations. In *IJCAI: proceedings of the conference*, volume 2016, page 1432. NIH Public Access.

Finn, C., Abbeel, P., and Levine, S. (2017). Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1126-1135.
+Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., and Lerchner, A. (2017). beta-VAE: Learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations, volume 3.
+Hodson, R. (2016). Precision medicine. Nature, 537(7619), S49.
+Kaelbling, L. P., Littman, M. L., and Cassandra, A. R. (1998). Planning and acting in partially observable stochastic domains. Artificial intelligence, 101(1-2), 99-134.
+Kalmár, Z., Szepesvári, C., and Lórincz, A. (1998). Module-based reinforcement learning: Experiments with a real robot. Autonomous Robots, 5(3-4), 273-295.
+Kastrin, A., Ferk, P., and Leskošek, B. (2018). Predicting potential drug-drug interactions on topological and semantic similarity features using statistical learning. *PloS one*, 13(5), e0196865.
+Killian, T.
W., Daulton, S., Konidaris, G., and Doshi-Velez, F. (2017). Robust and efficient transfer learning with hidden parameter markov decision processes. In Advances in Neural Information Processing Systems, pages 6250-6261. +Kingma, D. P. and Welling, M. (2014). Auto-encoding variational bayes. In International Conference on Learning Representations. +Konidaris, G. and Doshi-Velez, F. (2014). Hidden parameter markov decision processes: an emerging paradigm for modeling families of related tasks. In the AAAI Fall Symposium on Knowledge, Skill, and Behavior Transfer in Autonomous Robots. +Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529. +Paul, S., Osborne, M. A., and Whiteson, S. (2019). Fingerprint policy optimisation for robust reinforcement learning. In International Conference on Machine Learning, pages 5082-5091. +Perez, C. F., Such, F. P., and Karaletsos, T. (2018). Efficient transfer learning and online adaptation with latent variable models for continuous control. arXiv preprint arXiv:1812.03399. +Petersen, B. K., Yang, J., Grathwohl, W. S., Cockrell, C., Santiago, C., An, G., and Faissol, D. M. (2019). Deep reinforcement learning and simulation as a path toward precision medicine. Journal of Computational Biology. +Rajeswaran, A., Ghotra, S., Ravindran, B., and Levine, S. (2017). Epopt: Learning robust neural network policies using model ensembles. In International Conference on Learning Representations. +Rakelly, K., Zhou, A., Finn, C., Levine, S., and Quillen, D. (2019). Efficient off-policy meta-reinforcement learning via probabilistic context variables. In International Conference on Machine Learning, pages 5331-5340. +Rusu, A. A., Rao, D., Sygnowski, J., Vinyals, O., Pascanu, R., Osindero, S., and Hadsell, R. (2019). Meta-learning with latent embedding optimization. 
In International Conference on Learning Representations (ICLR).
+Schaul, T., Horgan, D., Gregor, K., and Silver, D. (2015). Universal value function approximators. In International Conference on Machine Learning, pages 1312-1320.
+Schaul, T., Quan, J., Antonoglou, I., and Silver, D. (2016). Prioritized experience replay. In International Conference on Learning Representations (ICLR), volume 2016.
+Sutton, R. S., Barto, A. G., et al. (1998). Reinforcement learning: An introduction. MIT press.
+Szita, I., Takács, B., and Lörincz, A. (2002). $\varepsilon$ -MDPs: Learning in varying environments. Journal of Machine Learning Research, 3(Aug), 145-174.

Taylor, M. E. and Stone, P. (2009). Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10(Jul), 1633-1685.
+Tirinzoni, A., Sanchez, R. R., and Restelli, M. (2018). Transfer of value functions via variational methods. In Advances in Neural Information Processing Systems, pages 6179-6189.
+Van Hasselt, H., Guez, A., and Silver, D. (2016). Deep reinforcement learning with double q-learning. In Thirtieth AAAI Conference on Artificial Intelligence.
+West, J., You, L., Brown, J., Newton, P. K., and Anderson, A. R. A. (2018). Towards multi-drug adaptive therapy. bioRxiv.
+Whirl-Carrillo, M., McDonagh, E. M., Hebert, J., Gong, L., Sangkuhl, K., Thorn, C., Altman, R. B., and Klein, T. E. (2012). Pharmacogenomics knowledge for personalized medicine. *Clinical Pharmacology & Therapeutics*, 92(4), 414-417.
+Williams, R. J. (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4), 229-256.
+Xu, T., Liu, Q., Zhao, L., Xu, W., and Peng, J. (2018). Learning to explore with meta-policy gradient. In Proceedings of the 35th International Conference on Machine Learning, pages 5463-5472.
+Yao, J., Killian, T., Konidaris, G., and Doshi-Velez, F. (2018). Direct policy transfer via hidden parameter markov decision processes.
In Lifelong Learning: A Reinforcement Learning Approach (LLARLA) Workshop, FAIM, volume 2018.
+Yu, W., Tan, J., Liu, C. K., and Turk, G. (2017). Preparing for the unknown: Learning a universal policy with online system identification. In Proceedings of Robotics: Science and Systems, Cambridge, Massachusetts.
+Zhang, A., Satija, H., and Pineau, J. (2018). Decoupling dynamics and reward for transfer learning. arXiv preprint arXiv:1804.10689.
+Zhang, J., Cunningham, J. J., Brown, J. S., and Gatenby, R. A. (2017). Integrating evolutionary dynamics into treatment of metastatic castrate-resistant prostate cancer. Nature communications, 8(1), 1816.
+Zhu, X., Singla, A., Zilles, S., and Rafferty, A. N. (2018). An overview of machine teaching. arXiv preprint arXiv:1801.05927.
+Zintgraf, L. M., Shiarlis, K., Kurin, V., Hofmann, K., and Whiteson, S. (2019). Fast context adaptation via meta-learning. In International Conference on Machine Learning (ICML), volume 2019.

# A DERIVATIONS

Proposition 1. Let $p_{\varphi}(\tau)$ denote the distribution of trajectories induced by $\pi_{\varphi}$ . Then the gradient of the entropy $H(p_{\varphi}(\tau))$ is given by

$$
\nabla_{\varphi} H\left(p_{\varphi}(\tau)\right) = \mathbb{E}_{p_{\varphi}(\tau)}\left[\nabla_{\varphi}\sum_{i=0}^{T_{p}-1}\log\left(\pi_{\varphi}(a_{i}\mid s_{i})\right)\left(-\log p_{\varphi}(\tau)\right)\right] \tag{3}
$$

Proof.
Assuming regularity, the gradient of the entropy is + +$$ +\begin{array}{l} \nabla_ {\varphi} H (p _ {\varphi} (\tau)) = - \nabla_ {\varphi} \int p _ {\varphi} (\tau) \log p _ {\varphi} (\tau) d \tau \\ = - \int \nabla_ {\varphi} p _ {\varphi} (\tau) d \tau - \int \left(\nabla_ {\varphi} p _ {\varphi} (\tau)\right) \log p _ {\varphi} (\tau) d \tau \\ = - \nabla_ {\varphi} \int p _ {\varphi} (\tau) d \tau - \int p _ {\varphi} (\tau) \left(\nabla_ {\varphi} \log p _ {\varphi} (\tau)\right) \log p _ {\varphi} (\tau) d \tau \\ = \mathbb {E} _ {p _ {\varphi} (\tau)} \left[ \left(\nabla_ {\varphi} \log p _ {\varphi} (\tau)\right) \left(- \log p _ {\varphi} (\tau)\right) \right] \\ \end{array} +$$ + +For trajectory $\tau := (s_0, a_0, s_1, \ldots, s_t)$ generated by the probe policy $\pi_{\varphi}$ : + +$$ +p _ {\varphi} (\tau) = p (s _ {0}) \prod_ {i = 0} ^ {t - 1} p (s _ {i + 1} | s _ {i}, a _ {i}) \pi_ {\varphi} (a _ {i} | s _ {i}) +$$ + +Then + +$$ +\nabla_ {\varphi} \log p _ {\varphi} (\tau) = \nabla_ {\varphi} \left(\log p (s _ {0}) + \sum_ {i = 0} ^ {t - 1} \log p (s _ {i + 1} | s _ {i}, a _ {i}) + \sum_ {i = 0} ^ {t - 1} \log \pi_ {\varphi} (a _ {i} | s _ {i})\right) +$$ + +Since $p(s_0)$ and $p(s_{i + 1}|s_i,a_i)$ do not depend on $\varphi$ , we get + +$$ +\nabla_ {\varphi} \log p _ {\varphi} (\tau) = \nabla_ {\varphi} \sum_ {i = 0} ^ {t - 1} \log \pi_ {\varphi} (a _ {i} | s _ {i}) +$$ + +Substituting this into the gradient of the entropy gives equation 3. 
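As a sanity check, the identity derived above, $\nabla_{\varphi} H = \mathbb{E}[(\nabla_{\varphi}\log p_{\varphi})(-\log p_{\varphi})]$ , can be verified numerically on a one-step toy case where the "trajectory" is a single action from a softmax policy and the expectation is computed exactly (this toy setup is purely illustrative):

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def entropy(logits):
    p = softmax(logits)
    return -(p * np.log(p)).sum()

def score_function_entropy_grad(logits):
    """Exact expectation of (grad log p_a) * (-log p_a) under p, as in eq. 3."""
    p = softmax(logits)
    K = len(p)
    grad_log = np.eye(K) - p[None, :]  # d log p_a / d logit_j = 1{a=j} - p_j
    return (p[:, None] * grad_log * (-np.log(p))[:, None]).sum(axis=0)

logits = np.array([0.5, -1.0, 2.0])
eps = 1e-6
# Central finite differences of the analytic entropy with respect to the logits.
numeric_grad = np.array([
    (entropy(logits + eps * np.eye(3)[j]) - entropy(logits - eps * np.eye(3)[j])) / (2 * eps)
    for j in range(3)
])
```

The two gradients agree, mirroring the derivation: the term $-\nabla_{\varphi}\int p_{\varphi}(\tau)\,d\tau$ vanishes because probabilities sum to one.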
+ 

# B TESTING PHASE OF SEPT

# Algorithm 2 Single Episode Policy Transfer: testing phase

1: procedure SEPT-TEST
2: Restore trained decoder $\psi$ , encoder $\phi$ , probe policy $\varphi$ , and control policy $\theta$
3: Run probe policy $\pi_{\varphi}$ for $T_{p}$ time steps and record trajectory $\tau_{p}$
4: Use $\tau_{p}$ with encoder $q_{\phi}(z|\tau)$ to estimate $\hat{z}$
5: Use $\hat{z}$ with control policy $\pi_{\theta}(a|s,z)$ for the remaining duration of the test episode
6: end procedure

# C DYNASEPT

In our problem formulation, it is not necessary to compute $\hat{z}$ at every step of the test episode, as each instance is a stationary MDP and the change of instances is known. However, removing the common assumption of stationarity leads to time-dependent transition functions $\mathcal{T}_z(s'|s,a)$ , which introduces problematic cases. For example, a length- $T_{p}$ probing phase would fail if $z$ leads to a switch in dynamics at time $t > T_{p}$ , such as when poorly understood drug-drug interactions lead to abrupt changes in dynamics during co-medication therapies (Kastrin et al., 2018). Here we describe an alternative general algorithm for non-stationary dynamics, which we call DynaSEPT. We train a single policy $\pi_{\theta}(a|s,z,\eta)$ that dynamically decides whether to probe for better inference or act to maximize the MDP reward $R_{\mathrm{env}}$ , based on a scalar-valued function $\eta \colon \mathbb{R} \to [0,1]$ representing the degree of uncertainty in posterior inference, which is updated at every time step. The total reward is $R_{\mathrm{tot}}(\tau) \coloneqq \eta R_p(\tau) + (1 - \eta)R_{\mathrm{env}}(\tau_f)$ , where $\tau$ is a short sliding-window trajectory of length $T_p$ , and $\tau_f$ is the final state of $\tau$ . The history-dependent term $R_p(\tau)$ is equivalent to a delayed reward given for executing a sequence of probe actions.
Following the same reasoning for SEPT, one choice for $R_p(\tau)$ is $\mathcal{L}(\phi, \psi; \tau)$ . Assuming the encoder outputs variance $\sigma_i^2$ of each latent dimension, one choice for $\eta$ is a normalized standard deviation over all dimensions of the latent variable, i.e. $\eta \coloneqq \frac{1}{D}\sum_{i=1}^{D}\sigma_i(q_\phi)/\sigma_{i,\max}(q_\phi)$ , where $\sigma_{i,\max}$ is a running max of $\sigma_i$ .

Despite its novelty, we consider DynaSEPT only for rare nonstationary dynamics and merely as a baseline in the predominant case of stationary dynamics, where SEPT is our primary contribution. DynaSEPT has no clear advantage over SEPT when each instance is a stationary MDP. DynaSEPT requires $\eta$ to start at 1.0, representing complete lack of knowledge about latent variables, and it still requires the choice of hyperparameter $T_{p}$ . Only after $T_{p}$ steps can it use the uncertainty of $q_{\phi}(z|\tau)$ to adapt $\eta$ and continue to generate the sliding window trajectory to improve $\hat{z}$ . By this time, SEPT has already generated an optimized sequence using $\pi_{\varphi}$ for the encoder to estimate $\hat{z}$ . If a trajectory of length $T_{p}$ is sufficient for computing a good estimate of latent variables, then SEPT is expected to outperform DynaSEPT.

![](images/82bc4d6fa9fe676e1bd433d8fd1875a8b9aa259251130b15e71f84a2cb510fba.jpg)
(a) Acrobot

![](images/69a8db9fa5ce27556991b960a3e40955d12a537a52010a05889b543bbdf4888f.jpg)
(b) HIV
Figure 4: (a) Ablations on Acrobot, including DynaSEPT with 3 seeds; (b) BNN and MAML attained orders of magnitude lower rewards than other baselines (3 seeds); (c) DynaSEPT performs well on nonstationary dynamics in 2D navigation.

![](images/4dc038c8875bab9444a7e2787ab8feb92d633c114a763423e8405a27a349032e.jpg)
(c) 2D switch

# D SUPPLEMENTARY EXPERIMENTAL RESULTS

# D.1 2D AND ACROBOT

2D navigation and Acrobot have a definition of "solved".
Table 1 reports the number of steps in a test episode required to solve the MDP. Average and standard deviation were computed across all test instances and across all independently trained models. If an episode was not solved, the maximum allowed number of steps was used (50 for 2D navigation and 200 for Acrobot). Table 2 shows the mean and standard error of the cumulative reward over test episodes on Acrobot. The reported mean cumulative value for DPT in Acrobot is -27.7 (Yao et al., 2018). + +Table 1: Steps to solve 2D navigation and Acrobot + +
| Method | 2D navigation | Acrobot |
| --- | --- | --- |
| Average | 49±0.5 | 114±4.2 |
| Oracle | 12±0.3 | 88±3.6 |
| BNN | 34±0.8 | 169±4.0 |
| EPOpt-adv | 49±0.3 | 91±3.8 |
| MAML | 49±0.4 | 99±4.6 |
| SEPT | 20±0.9 | 100±4.4 |
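The aggregation behind Table 1 can be sketched as follows (a hypothetical helper; the max-step convention for unsolved episodes is stated in the text above):

```python
import numpy as np

def steps_to_solve_stats(steps, solved, max_steps):
    """Mean and standard deviation of steps-to-solve across test episodes,
    assigning the maximum allowed number of steps (50 for 2D navigation,
    200 for Acrobot) to unsolved episodes, as described in Appendix D.1.
    A sketch, not the authors' evaluation code."""
    steps = np.where(solved, steps, max_steps)
    return steps.mean(), steps.std()
```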
+ 

Table 2: Cumulative reward on Acrobot (mean ± standard error)
| Method | Acrobot |
| --- | --- |
| Average | -22.2±2.3 |
| Oracle | -17.1±2.2 |
| BNN | -50.6±4.7 |
| EPOpt-adv | -17.5±2.2 |
| MAML | -20.1±2.6 |
| SEPT | -23.1±3.1 |
+ +# D.2 TIMING COMPARISON + +Table 3: Total training times in seconds on all experiment domains + +
| Method | 2D navigation | Acrobot | HIV |
| --- | --- | --- | --- |
| Average | 1.3e3±277 | 1.0e3±85 | 1.4e3±47 |
| Oracle | 0.6e3±163 | 1.1e3±129 | 1.5e3±47 |
| BNN | 2.9e3±244 | 9.0e4±3.0e3 | 4.3e4±313 |
| EPOpt-adv | 1.1e3±44 | 1.1e3±1.0 | 1.9e3±33 |
| MAML | 0.9e3±116 | 1.1e3±96 | 1.3e3±6.0 |
| SEPT | 1.9e3±70 | 2.3e3±1e3 | 2.8e3±11 |
+ +Table 4: Test episode time in seconds on all experiment domains + +
| Method | 2D navigation | Acrobot | HIV |
| --- | --- | --- | --- |
| Average | 0.04±0.04 | 0.09±0.04 | 0.42±0.01 |
| Oracle | 0.02±0.04 | 0.09±0.04 | 0.45±0.02 |
| BNN | 2.6e3±957 | 2.8e3±968 | 1.4e3±8.8 |
| EPOpt-adv | 0.04±0.04 | 0.10±0.06 | 0.45±0.03 |
| MAML | 0.05±0.05 | 0.10±0.07 | 0.48±0.01 |
| SEPT | 0.04±0.07 | 0.12±0.10 | 0.60±0.02 |
+ 

# D.3 PERCENT OF SOLVED EPISODES

![](images/61b866f05e632e3924ca4a0ea297ac3d01ea7db12022cce7257738a21a66ad7d.jpg)
(a) 2D navigation

![](images/df8c3c88d93519a46d87126f0d0a286c35f9d50393d57e57238a6086180ad824.jpg)
(b) Acrobot
Figure 5: Percent of solved test instances as a function of time steps during the test episode. Percentage is computed among 200 test instances for (a) 2D navigation and 100 test instances for (b) Acrobot.

2D navigation and Acrobot are considered solved upon reaching a terminal reward of 1000 and 10, respectively. Figure 5 shows the percentage of all test episodes that are solved as a function of time steps in the test episode. Percentage is measured from a total of 200 test episodes for 2D navigation and 100 test episodes for Acrobot.

# D.4 TRAINING CURVES

![](images/7e9a45f31572d5c3550580b3caa5ed921b4e00bd86244ac5eccee3ec50ccec6c.jpg)
(a) 2D navigation

![](images/68ed7fec3acdd806f649ca3bcf61ca5f1692e55075015728ec71ffdc21522bce.jpg)
(b) Acrobot
Figure 6: Average episodic return over training episodes. Only SEPT and Oracle converged in 2D navigation. All methods converged in Acrobot. All methods except MAML converged in HIV. BNN is not shown as the implementation (Killian et al., 2017) does not record training progress.

![](images/a94812e84e07c77d92519fd8a70eb4bcbb84836150f39fb40b1b994e42a7626a.jpg)
(c) HIV

Figure 6 shows training curves on all domains by all methods. None of the baselines, except for Oracle, converge in 2D navigation, because it is meaningless for Avg and EPOpt-adv to interpolate between optimal policies for each instance, and MAML cannot adapt due to lack of informative rewards for almost the entire test episode. Hence these baselines cannot work for a new unknown test episode, even in principle. We allowed the same number of training episodes for HIV as in Killian et al. (2017), and all baselines except MAML show learning progress.
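The solved-percentage curves of Figure 5 can be computed from per-episode steps-to-solve as follows (a sketch; `percent_solved_curve` and the max-steps encoding of unsolved episodes are assumptions consistent with Appendix D.1, not the authors' code):

```python
import numpy as np

def percent_solved_curve(steps_to_solve, max_steps):
    """Fraction of test episodes solved by each time step, as in Figure 5.

    Episodes recorded as `max_steps` are treated as unsolved, matching the
    convention used for Table 1 (an assumed encoding)."""
    steps = np.asarray(steps_to_solve)
    solved = steps < max_steps
    return np.array([(solved & (steps <= t)).mean()
                     for t in range(1, max_steps + 1)])
```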
+ 

# D.5 LATENT REPRESENTATION OF DYNAMICS

![](images/fe94c61c2b3f3046766858748cbe4c4e45e9a33bafb471c60ba74008d2809964.jpg)

![](images/7c203af86927f4423754fe3945c661096deb5d8c7082dac69a4ced602d490f7c.jpg)

![](images/b45a6fe88b94b33b068417c7e12c0d37f74f0139a3277eb6b711547a846996d7.jpg)

![](images/810fb0e39e53d6cc117872eadbd050e2346d2a5b9bf94cdda65912b891dac345.jpg)

![](images/45b92207b8023f82fc0f08a85631eeb465df1edd59d65bde302fccc917351c15.jpg)
Figure 7: Two-dimensional encodings generated for four instances of Acrobot (represented by four ground-truth colors), for different values of $\beta$ . We chose $\beta = 1$ for Acrobot.

![](images/bd37eb8e519a0a6512ff65a46dd61320b13208f5dccbe9c2b32c1847b61468be.jpg)

![](images/57b4ddcb282b32fb59a1303b42d2d1580f69cec6179fa5fefc6013c412ab152f.jpg)

![](images/1429396bc77b00c3d11a0a3605f55cae381f926b3a45f4903a82e4e53f774c04.jpg)

There is a tradeoff between reconstruction and disentanglement as $\beta$ increases (Higgins et al., 2017). Increasing $\beta$ encourages greater similarity between the posterior and an isotropic Gaussian. Figure 7 gives evidence that this comes at the cost of lower-quality separation in latent space.

# D.6 PROBE REWARD

![](images/0b863116b61b5e79531fbb6cd9c8ddac90803234fdb060de3f9d0e645d7f17a6.jpg)
Figure 8: Probe policy reward curve in one training run in 2D navigation

# E EXPERIMENTAL DETAILS

For 2D navigation, Acrobot, and HIV, the total numbers of training episodes allowed for all methods are $10\mathrm{k}$ , $4\mathrm{k}$ , and $2.5\mathrm{k}$ , respectively. We switch instances once every 10, 8, and 5 episodes, respectively. There are 2, 8, and 5 unique training instances, and 2, 5, and 5 validation instances, respectively. For each independent training run, we tested on 10, 5, and 5 test instances, respectively.

# E.1 ALGORITHM IMPLEMENTATION DETAILS

The simple baselines Average and Oracle can be immediately deployed in a single test episode after training.
However, the other methods for transfer learning require modification to work in the single-episode test setting, as they were not designed specifically for this highly constrained setting. We detail the necessary modifications below. We also describe the ablation SEPT-NP in more detail.

BNN. In Killian et al. (2017), a pre-trained BNN model was fine-tuned using the first test episode and then used to generate fictional episodes for training a policy from scratch. More episodes on the same test instance were allowed, to help improve the model accuracy of the BNN. In the single-episode test setting, all fine-tuning and policy training must be conducted within the first test episode. We fine-tune the pre-trained BNN every 10 steps and allow the same total number of fictional episodes for policy training as reported in Killian et al. (2017). We measured the cumulative reward attained by the policy, while it was undergoing training, during the single real test episode.

EPOpt. EPOpt trains on the lowest $\epsilon$-percentile rollouts from instances sampled from a source distribution, then adapts the source distribution using observations from the target instance (Rajeswaran et al., 2017). Since we do not allow observations from the test instance, we implemented only the adversarial part of EPOpt. To run EPOpt with off-policy DDQN, we generated 100 rollouts per iteration and stored the lowest 10-percentile in the replay buffer, then executed the same number of minibatch training steps as a regular DDQN would have done during those rollouts.

MAML. While MAML uses many complete rollouts per gradient step (Finn et al., 2017), the single-episode test constraint mandates that it can use only a partial episode for adaptation at test time, and hence the same must be done during meta-training.
For both training and test, we allow MAML to take one gradient step for adaptation, using a trajectory of the same length as the probe trajectory of SEPT and starting from the initial state of the episode. We implemented a first-order approximation that computes the meta-gradient at the post-update parameters but omits second derivatives. This was reported to have nearly equal performance to the full version, due to the use of ReLU activations.

SEPT-NP. $\pi_{\theta}(a|s,z)$ begins with a zero vector for $z$ at the start of training. When it has produced a trajectory $\tau_{p}$ of length $T_{p}$, we store $\tau_{p}$ into $\mathcal{D}$ for training the VAE, and use $\tau_{p}$ with the VAE to estimate $z$ for the episode. Later training episodes begin with the rolling mean of all $z$ estimated so far. At test time, we give the final rolling mean of $z$ from the end of training as the initial input to $\pi_{\theta}(a|s,z)$.

# E.2 ARCHITECTURE

Encoder. For all experiments, the encoder $q_{\phi}(z|\tau)$ is a bidirectional LSTM with 300 hidden units and tanh activation. Outputs are mean-pooled over time, then fully connected to two linear output layers of width $\dim(z)$, interpreted as the mean and log-variance of a Gaussian over $z$.

Decoder. For all experiments, the decoder $p_{\psi}(\tau|z)$ is an LSTM with 256 hidden units and tanh activation. Given input $[s_t,a_t,\hat{z}]$ at LSTM time step $t$, the output is fully connected to two linear output layers of width $|\mathcal{S}| + |\mathcal{A}|$, interpreted as the mean and log-variance of a Gaussian decoder for the next state-action pair $(s_{t + 1},a_{t + 1})$.

Q network. For all experiments, the $Q$ function is a fully-connected neural network with two hidden layers of width 256 and 512, ReLU activation, and a linear output layer of size $|\mathcal{A}|$.
For SEPT and Oracle, the input is the concatenation $[s_t,z]$, where $z$ is estimated in the case of SEPT and is the ground truth in the case of Oracle. For all other methods, the input is only the state $s$.

Probe policy network. For all experiments, $\pi_{\varphi}(a|s)$ is a fully-connected neural network with 3 hidden layers, ReLU activation, 32 nodes in each layer, and a softmax output layer.

# E.3 HYPERPARAMETERS

The VAE learning rate was 1e-4 for all experiments. The size of the dataset $\mathcal{D}$ of probe trajectories was limited to 1000, with the earliest trajectories discarded. 10 minibatches from $\mathcal{D}$ were used for each VAE training step. We used $\beta = 1$ for the VAE. The probe policy learning rate was 1e-3 for all experiments. The DDQN minibatch size was 32, one training step was done for every 10 environment steps, $\epsilon_{\mathrm{end}} = 0.15$, the learning rate was 1e-3, the gradient clip was 2.5, $\gamma = 0.99$, and the target network update rate was 5e-3. Exploration decayed according to $\epsilon_{n + 1} = c\epsilon_n$ every episode, where $c$ satisfies $\epsilon_{\mathrm{end}} = c^{N}\epsilon_{\mathrm{start}}$ and $N$ is the total number of episodes. Prioritized replay used the same parameters as in Killian et al. (2017).

Table 5: Hyperparameters used by each method, where applicable
| | 2D navigation | Acrobot | HIV |
| --- | --- | --- | --- |
| $T_p$ | 2 | 5 | 8 |
| Instances | 1000 | 500 | 500 |
| Episodes per instance | 10 | 8 | 5 |
| VAE batch size | 10 | 64 | 64 |
| $\dim(\hat{z})$ | 2 | 2 | 6 |
| $\alpha$ | 1.0 | 0.005 | 1.0 |
| Probe minibatches | 1 | 10 | 1 |
| DDQN $\epsilon_{\mathrm{start}}$ | 1.0 | 1.0 | 0.3 |
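The exponential exploration decay described in Section E.3 can be sketched as follows; this is our own minimal illustration (the function name `epsilon_schedule` is ours, not from the paper), using the 2D-navigation settings from the text and Table 5 ($\epsilon_{\mathrm{start}} = 1.0$, $\epsilon_{\mathrm{end}} = 0.15$, $N = 10\mathrm{k}$ training episodes):

```python
def epsilon_schedule(eps_start, eps_end, num_episodes):
    # Per-episode multiplicative decay eps_{n+1} = c * eps_n,
    # with c chosen so that eps_end = c**N * eps_start.
    c = (eps_end / eps_start) ** (1.0 / num_episodes)
    schedule, eps = [], eps_start
    for _ in range(num_episodes):
        schedule.append(eps)
        eps *= c
    return schedule

# 2D-navigation settings: eps_start = 1.0, eps_end = 0.15, N = 10k episodes.
sched = epsilon_schedule(1.0, 0.15, 10_000)
```

With this choice of $c$, the schedule reaches $\epsilon_{\mathrm{end}}$ exactly after $N$ decay steps, whatever the episode budget of the task.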
\ No newline at end of file diff --git a/singleepisodepolicytransferinreinforcementlearning/images.zip b/singleepisodepolicytransferinreinforcementlearning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..48721a964217f3d56f7e505ed57974dcdb3e583d --- /dev/null +++ b/singleepisodepolicytransferinreinforcementlearning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f967686971b79475f319b5d0f40c91ff087a3bcdd62f424de3a70cf572bc0d3a +size 511080 diff --git a/singleepisodepolicytransferinreinforcementlearning/layout.json b/singleepisodepolicytransferinreinforcementlearning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..e440c3acb3c4f472dadbcec63eff1f1496cdb180 --- /dev/null +++ b/singleepisodepolicytransferinreinforcementlearning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a90ac76dd412c815bb27e2a1e9a2126adf2143cc48fef587cb652bbac4c6ac55 +size 670669 diff --git a/slowmoimprovingcommunicationefficientdistributedsgdwithslowmomentum/e1b4ee0b-7d52-421f-b213-29c05b23a405_content_list.json b/slowmoimprovingcommunicationefficientdistributedsgdwithslowmomentum/e1b4ee0b-7d52-421f-b213-29c05b23a405_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..bdb45819f911014d14eb7a0af530c86986bbb816 --- /dev/null +++ b/slowmoimprovingcommunicationefficientdistributedsgdwithslowmomentum/e1b4ee0b-7d52-421f-b213-29c05b23a405_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3a8b8ca07b6313eb6edbc4208c9ead9ed120519b97753220a4ce8bd53bffbd97 +size 173103 diff --git a/slowmoimprovingcommunicationefficientdistributedsgdwithslowmomentum/e1b4ee0b-7d52-421f-b213-29c05b23a405_model.json b/slowmoimprovingcommunicationefficientdistributedsgdwithslowmomentum/e1b4ee0b-7d52-421f-b213-29c05b23a405_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..ed647dbc4dbcd8b74b5e9a92210054e9e79ece25 --- /dev/null +++ b/slowmoimprovingcommunicationefficientdistributedsgdwithslowmomentum/e1b4ee0b-7d52-421f-b213-29c05b23a405_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d17caad25ab400b91c487cb3c5398384d6127829fa79a17ea00cc7a902843741 +size 203243 diff --git a/slowmoimprovingcommunicationefficientdistributedsgdwithslowmomentum/e1b4ee0b-7d52-421f-b213-29c05b23a405_origin.pdf b/slowmoimprovingcommunicationefficientdistributedsgdwithslowmomentum/e1b4ee0b-7d52-421f-b213-29c05b23a405_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..e4f8a8d5f729a734ee77b760fa057697dc26fce5 --- /dev/null +++ b/slowmoimprovingcommunicationefficientdistributedsgdwithslowmomentum/e1b4ee0b-7d52-421f-b213-29c05b23a405_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:caf9e818d1e17b1de03dac22740eed4f492fe2ee35c2b78683fa54ccefdae532 +size 768912 diff --git a/slowmoimprovingcommunicationefficientdistributedsgdwithslowmomentum/full.md b/slowmoimprovingcommunicationefficientdistributedsgdwithslowmomentum/full.md new file mode 100644 index 0000000000000000000000000000000000000000..0696d97c8b30a72b0778ea0a8ed227bc163d799a --- /dev/null +++ b/slowmoimprovingcommunicationefficientdistributedsgdwithslowmomentum/full.md @@ -0,0 +1,814 @@ +# SLOWMO: IMPROVING COMMUNICATION-EFFICIENT DISTRIBUTED SGD WITH SLOW MOMENTUM + +Jianyu Wang* + +Department of Electrical and Computer Engineering + +Carnegie Mellon University + +Pittsburgh, PA 15213, USA + +jianyuwl@andrew.cmu.edu + +Vinayak Tantia, Nicolas Ballas & Michael Rabbat + +Facebook AI Research + +Montreal, Canada + +{tantia, ballasn, mikerabbar}@fb.com + +# ABSTRACT + +Distributed optimization is essential for training large models on large datasets. 
Multiple approaches have been proposed to reduce the communication overhead in distributed training, such as synchronizing only after performing multiple local SGD steps, and decentralized methods (e.g., using gossip algorithms) to decouple communications among workers. Although these methods run faster than ALLREDUCE-based methods, which use blocking communication before every update, the resulting models may be less accurate after the same number of updates. Inspired by the BMUF method of Chen & Huo (2016), we propose a slow momentum (SLOWMO) framework, where workers periodically synchronize and perform a momentum update, after multiple iterations of a base optimization algorithm. Experiments on image classification and machine translation tasks demonstrate that SLOWMO consistently yields improvements in optimization and generalization performance relative to the base optimizer, even when the additional overhead is amortized over many updates so that the SLOWMO runtime is on par with that of the base optimizer. We provide theoretical convergence guarantees showing that SLOWMO converges to a stationary point of smooth non-convex losses. Since BMUF can be expressed through the SLOWMO framework, our results also correspond to the first theoretical convergence guarantees for BMUF. + +# 1 INTRODUCTION + +Distributed optimization (Chen et al., 2016; Goyal et al., 2017) is essential for training large models on large datasets (Radford et al., 2019; Liu et al., 2019; Mahajan et al., 2018b). Currently, the most widely-used approaches have workers compute small mini-batch gradients locally, in parallel, and then aggregate these using a blocking communication primitive, ALLREDUCE, before taking an optimizer step. Communication overhead is a major issue limiting the scaling of this approach, since ALLREDUCE must complete before every step and blocking communications are sensitive to stragglers (Dutta et al., 2018; Ferdinand et al., 2019). 
+ +Multiple complementary approaches have recently been investigated to reduce or hide communication overhead. Decentralized training (Jiang et al., 2017; Lian et al., 2017; 2018; Assran et al., 2019) reduces idling due to blocking and stragglers by employing approximate gradient aggregation (e.g., via gossip or distributed averaging). Approaches such as Local SGD reduce the frequency of communication by having workers perform multiple updates between each round of communication (McDonald et al., 2010; McMahan et al., 2017; Zhou & Cong, 2018; Stich, 2019; Yu et al., 2019b). It is also possible to combine decentralized algorithms with Local SGD (Wang & Joshi, + +2018; Wang et al., 2019). These approaches reduce communication overhead while injecting additional noise into the optimization process. Consequently, although they run faster than large minibatch methods, the resulting models may not achieve the same quality in terms of training loss or generalization accuracy after the same number of iterations. + +Momentum is believed to be a critical component for training deep networks, and it has been empirically demonstrated to improve both optimization and generalization (Sutskever et al., 2013). Yet, there is no consensus on how to combine momentum with communication efficient training algorithms. Momentum is typically incorporated into such approaches by having workers maintain separate buffers which are not synchronized (Lian et al., 2017; 2018; Assran et al., 2019; Koloskova et al., 2019a). However, recent work shows that synchronizing the momentum buffer, using periodic ALLREDUCE or a decentralized method, leads to improvements in accuracy at the cost of doubling the communication overhead (Yu et al., 2019a). 
In block-wise model update filtering (BMUF), nodes perform multiple local optimization steps between communication rounds (similar to local SGD), and they also maintain a momentum buffer that is only updated after each communication round (Chen & Huo, 2016). Although it is now commonly used for training speech models, there are no theoretical convergence guarantees for BMUF, and it has not been widely applied to other tasks (e.g., in computer vision or natural language processing). + +Inspired by BMUF, we propose a general framework called slow momentum (SLOWMO) to improve the accuracy of communication-efficient distributed training methods. SLOWMO runs on top of a base algorithm, which could be local SGD or a decentralized method such as stochastic gradient push (SGP) (Nedic & Olshevsky, 2016; Assran et al., 2019). Periodically, after taking some number $\tau$ of base algorithm steps, workers average their parameters using ALLREDUCE and perform a momentum update. We demonstrate empirically that SLOWMO consistently improves optimization and generalization performance across a variety of base algorithms on image classification and neural machine translation tasks—training ResNets on CIFAR-10 and ImageNet, and training a transformer on WMT'16 En-De. Ultimately, SLOWMO allows us to reap the speedup and scaling performance of communication-efficient distributed methods without sacrificing as much in accuracy. + +We also prove theoretical bounds showing that SLOWMO converges to a stationary point of smooth non-convex functions at a rate $\mathcal{O}(1 / \sqrt{mT\tau})$ after $T\tau$ total inner optimization steps and $T$ SLOWMO updates with $m$ worker nodes, for a variety of base optimizers. Thus, SLOWMO is order-wise no slower than stochastic gradient descent. 
BMUF and the recently-proposed Lookahead optimizer (Zhang et al., 2019) can be expressed through the SLOWMO framework, and so our results also translate to the first theoretical convergence guarantees for both of these methods. + +# 2 THE SLOW MOMENTUM (SLOWMO) FRAMEWORK + +SLOWMo is a framework intended for solving stochastic optimization problems of the form + +$$ +\min _ {\boldsymbol {x} \in \mathbb {R} ^ {d}} \frac {1}{m} \sum_ {i = 1} ^ {m} \mathbb {E} _ {\xi_ {i} \sim D _ {i}} F _ {i} (\boldsymbol {x}; \xi_ {i}), \tag {1} +$$ + +using $m$ worker nodes, where the loss function term $F_{i}$ and samples $\xi_{i}$ from the distribution $D_{i}$ are available at the $i$ th worker. SLOWMO builds on top of a base optimization algorithm and has a nested loop structure shown in Algorithm 1. Each worker maintains a local copy of the parameters, $\boldsymbol{x}_{t,k}^{(i)}$ at worker $i$ after the $k$ th inner step of the $t$ th outer iteration. We assume that all workers are initialized to the same point $\boldsymbol{x}_{0,0}$ , and the framework also uses a slow momentum buffer $\boldsymbol{u}_t$ which is initialized to $\boldsymbol{u}_0 = \mathbf{0}$ ; although each worker stores a copy of $\boldsymbol{u}_t$ locally, these are always synchronized across all nodes, so we omit the superscript to simplify the notation. + +Within each outer iteration, workers first take $\tau$ steps of the base optimizer. The base optimizer could be a method which involves no communication, such as SGD (with or without momentum) or a decentralized algorithm which involves some communication, such as stochastic gradient push (SGP) (Assran et al., 2019). We denote these updates by $\boldsymbol{x}_{t,k+1}^{(i)} = \boldsymbol{x}_{t,k}^{(i)} - \gamma_t \boldsymbol{d}_{t,k}^{(i)}$ where $\gamma_t$ is the base optimizer (fast) learning rate and $\boldsymbol{d}_{t,k}^{(i)}$ is the update direction used at worker $i$ . 
If the base optimizer is SGD then $\boldsymbol{d}_{t,k}^{(i)} = \nabla F_i(\boldsymbol{x}_{t,k}^{(i)}, \xi_{t,k}^{(i)})$. For other base optimizers, which may use additional local momentum or communication, $\pmb{d}_{t,k}^{(i)}$ represents the full update applied at worker $i$ on this step. Specific examples of $\pmb{d}_{t,k}^{(i)}$ for different base optimizers are presented in Table C.1 in Appendix C.

# Algorithm 1: Slow Momentum

Input: Base optimizer with learning rate $\gamma_{t}$; inner loop steps $\tau$; slow learning rate $\alpha$; slow momentum factor $\beta$; number of worker nodes $m$; initial point $\pmb{x}_{0,0}$ and initial slow momentum buffer $\pmb{u}_0 = \mathbf{0}$.

1: for $t\in \{0,1,\ldots ,T - 1\}$ at worker $i$ in parallel do
2: Reset/maintain/average base optimizer buffers
3: for $k\in \{0,1,\ldots ,\tau -1\}$ do
4: Base optimizer step: $\pmb{x}_{t,k+1}^{(i)} = \pmb{x}_{t,k}^{(i)} - \gamma_t \pmb{d}_{t,k}^{(i)}$
5: end for
6: Exact-Average: $\pmb{x}_{t,\tau} = \frac{1}{m}\sum_{i = 1}^{m}\pmb{x}_{t,\tau}^{(i)}$
7: Update slow momentum: $\pmb{u}_{t + 1} = \beta \pmb{u}_t + \frac{1}{\gamma_t}(\pmb{x}_{t,0} - \pmb{x}_{t,\tau})$
8: Update outer iterates: $\pmb{x}_{t + 1,0} = \pmb{x}_{t,0} - \alpha \gamma_t\pmb{u}_{t + 1}$
9: end for

![](images/b1f5ed5632655b7aa5626285c3b7ed586cdfb9781f9883c6fcd1949d92adc5a8.jpg)
Figure 1: Illustration of one outer iteration in the slow momentum framework for $m = 3$ workers.
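As a concrete illustration of Algorithm 1, the following is our own minimal NumPy sketch (not the authors' implementation; `slowmo_sgd` and the toy quadratic objective are ours), using plain SGD as the base optimizer and simulating the $m$ workers sequentially:

```python
import numpy as np

def slowmo_sgd(grad_fns, x0, gamma, alpha, beta, tau, outer_steps):
    """Algorithm 1 with SGD as the base optimizer.
    grad_fns: one stochastic-gradient function per worker."""
    m = len(grad_fns)
    x_outer = np.asarray(x0, dtype=float).copy()
    u = np.zeros_like(x_outer)                       # slow momentum buffer, u_0 = 0
    for _ in range(outer_steps):
        # tau base-optimizer steps per worker (workers run in parallel in practice)
        workers = [x_outer.copy() for _ in range(m)]
        for _ in range(tau):
            for i in range(m):
                workers[i] = workers[i] - gamma * grad_fns[i](workers[i])
        x_avg = np.mean(workers, axis=0)             # line 6: exact average (ALLREDUCE)
        u = beta * u + (x_outer - x_avg) / gamma     # line 7: slow momentum update
        x_outer = x_outer - alpha * gamma * u        # line 8: outer update
    return x_outer

# Toy problem: f_i(x) = 0.5 * ||x - c_i||^2, so the global minimum is mean(c_i).
rng = np.random.default_rng(0)
centers = rng.normal(size=(4, 3))
grad_fns = [lambda x, c=c: x - c for c in centers]
x_final = slowmo_sgd(grad_fns, np.zeros(3), gamma=0.1, alpha=1.0, beta=0.5, tau=5, outer_steps=200)
```

Setting $\beta = 0$ and $\alpha = 1$ in this sketch reduces the procedure to simple periodic parameter averaging between workers.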
+ +After the $\tau$ base optimizer steps, the workers calculate the average $\pmb{x}_{t,\tau} = \pmb{x}_{t,0} - \frac{\gamma_t}{m}\sum_{i = 1}^{m}\sum_{k = 0}^{\tau -1}\pmb{d}_{t,k}^{(i)}$ using ALLREDUCE (line 6), and then they perform a slow momentum update (lines 7-8), + +$$ +\boldsymbol {u} _ {t + 1} = \beta \boldsymbol {u} _ {t} + \frac {1}{\gamma_ {t}} \left(\boldsymbol {x} _ {t, 0} - \boldsymbol {x} _ {t, \tau}\right) \tag {2} +$$ + +$$ +\boldsymbol {x} _ {t + 1, 0} = \boldsymbol {x} _ {t, 0} - \alpha \gamma_ {t} \boldsymbol {u} _ {t + 1}. \tag {3} +$$ + +Although the workers perform this update locally, in parallel, we again omit superscripts because the values of $\boldsymbol{x}_{t,0}$ , $\boldsymbol{x}_{t,\tau}$ , and hence $\boldsymbol{u}_{t+1}$ and $\boldsymbol{x}_{t+1,0}$ are always identical across all workers, since they follow the ALLREDUCE in line 6. Note that the difference $\boldsymbol{x}_{t,0} - \boldsymbol{x}_{t,\tau}$ is scaled by $\frac{1}{\gamma_t}$ in (2) to make the slow momentum buffer invariant to the fast learning rate $\gamma_t$ , which may change through training, e.g., when using a learning rate schedule. The outer update in line 8 uses the product $\alpha \gamma_t$ of the slow and fast learning rates. We use the distinction between slow and fast because the base optimizer step is applied $\tau$ times for each outer update, but this is not intended to imply that one learning rate is necessarily bigger or smaller than the other. We give specific examples of learning rates and other hyperparameters used in the experiments in Section 4 below. + +A specific SLOWMo algorithm instance is obtained by specifying the base algorithm and the hyperparameters $\alpha, \beta, \gamma$ , and $\tau$ . We can recover a number of existing algorithms in this framework. When the base algorithm is SGD, $\tau = 1$ , $\alpha = 1$ , and $\beta \in [0,1)$ , we recover standard large mini-batch SGD with learning rate $\gamma$ and momentum $\beta$ . 
When the base algorithm is SGD, $\tau > 1$ , $\alpha = 1$ , and $\beta = 0$ , we recover Local SGD (McDonald et al., 2010; Stich, 2019; Yu et al., 2019b; Wang & Joshi, 2018). When the base algorithm does not involve communication among workers, $\tau > 1$ and $\beta > 0$ , we recover BMUF (Chen & Huo, 2016). + +We also obtain interesting novel distributed algorithms. In particular, the experiments in Section 4 demonstrate that using SLOWMo with a decentralized base algorithm like SGP and reasonable values of $\tau$ consistently leads to improved optimization and generalization performance over the base method alone, without a significant increase in runtime. We also observe empirically that, for a fixed number of iterations, SLOWMo combined with SGP is superior to SLOWMo combined with SGD. + +The above are all distributed algorithms. Perhaps surprisingly, SLOWMO also encompasses a recently-introduced non-distributed method: if we have $m = 1$ worker with SGD/Adam as the base algorithm, $\alpha \in (0,1]$ , $\beta = 0$ , and $\tau > 0$ , we recover the Lookahead optimizer of Zhang et al. (2019), which also has a nested loop structure. Section 5 provides theoretical convergence guarantees when using the SLOWMO framework to minimize smooth non-convex functions, and thus provides the first theoretical convergence guarantees in the literature for BMUF and Lookahead in this setting. + +# 3 RELATED WORK + +The idea of reducing communication overhead by using ALLREDUCE to synchronize parameters after every $\tau > 0$ optimizer steps has been considered at least since the work of McDonald et al. (2010), and has been more recently referred to as Local SGD in the literature. Elastic-average SGD (Zhang et al., 2015) uses a related approach, but with a parameter server rather than ALLREDUCE. Lin et al. 
(2018) apply Local SGD for distributed training of deep neural networks and propose post-local SGD, which starts by running ALLREDUCE-SGD for some epochs before switching to Local SGD, to improve generalization at the cost of additional communication. + +Decentralized methods use approximate distributed averaging over a peer-to-peer topology, rather than ALLREDUCE. This decouples communication but also injects additional noise in the optimization process since the models at different workers are no longer precisely synchronized. Lian et al. (2017) present decentralized parallel SGD (D-PSGD), where each worker sends a copy of its model to its peers at every iteration, and show it can be faster than parameter-server and ALLREDUCE methods for training deep neural networks. Lian et al. (2018) study an asynchronous extension, AD-PSGD. Assran et al. (2019) study stochastic gradient push (SGP), and propose its asynchronous counterpart overlap SGP (OSGP), which achieve a further speedup over D-PSGD and AD-PSGD by using less coupled communication. D-PSGD, AD-PSGD, and SGP all have similar theoretical convergence guarantees for smooth non-convex functions, showing a linear scaling relationship between the number of workers and the number of iterations to reach a neighborhood of a first-order stationary point. Although the theory for all three methods only covers the case of SGD updates without momentum, implementations use momentum locally at each worker, and workers only average their model parameters (not momentum buffers). Yu et al. (2019a) prove that linear scaling holds when workers average their parameters and momentum buffers, although this doubles the communication overhead. We refer to this approach as double-averaging below. + +Scaman et al. (2019) establish optimal rates of convergence for decentralized optimization methods in the deterministic, convex setting. 
Richards & Rebeschini (2019) provide guarantees on the generalization error of non-parametric least-squares regression trained using decentralized gradient descent, showing that there are regimes where one can achieve a linear speedup. Neither of these results applies directly to the setting considered in this paper, which focuses on smooth non-convex stochastic optimization, and extending this line of work to non-convex settings is an interesting direction.

Mahajan et al. (2018a) propose an approach to distributed learning of linear classifiers (i.e., convex problems) where, in parallel, workers minimize locally formed approximate loss functions, and then the resulting minimizers are averaged to determine a descent direction. Methods which fit in the SLOWMo framework, including Local SGD, BMUF (Chen & Huo, 2016), and the serial Lookahead optimizer (Zhang et al., 2019), can be seen as related to this approach, where the actual loss function at each worker is used rather than an approximate one, and where the descent direction is used in a momentum update rather than in a (deterministic) line search method.

Note that various approaches to gradient compression have been proposed to reduce the communication overhead for ALLREDUCE and decentralized learning methods (Alistarh et al., 2017; Wen et al., 2017; Bernstein et al., 2019; Karimireddy et al., 2019; Koloskova et al., 2019b; Vogels et al., 2019). However, it is presently not clear to what extent compression may be beneficial for methods like BMUF, D-PSGD, SGP, and OSGP, which perform averaging on the model parameters rather than on gradients. Combining SLOWMo with compression techniques is an interesting and important direction for future work.

Finally, although momentum methods are known to achieve accelerated rates for deterministic optimization, currently the theoretical understanding of the benefits of momentum methods (both serial and parallel) is limited (Bottou et al., 2018).
Loizou & Richtárik (2017) show that accelerated convergence rates can be achieved when the objective is a quadratic finite sum (i.e., least squares problem) that satisfies an interpolation condition. Can et al. (2019) show that accelerated rates can also be achieved in the more general setting of smooth, strongly convex objectives, in a regime where the noise level is below an explicit threshold, and where the rate of convergence is measured in terms of the 1-Wasserstein metric of the distribution of trajectories produced by the momentum method. In the setting of smooth non-convex functions, Gitman et al. (2019) establish stability and asymptotic convergence results for the quasi-hyperbolic momentum method (Ma & Yarats, 2019), + +which can be viewed as interpolating between SGD and a stochastic momentum method. Extending these results, which focus on the serial setting, to obtain decentralized momentum methods with guaranteed acceleration, is an important direction for future work. + +# 4 EXPERIMENTAL RESULTS + +We evaluate the effectiveness of SLOWMo on three datasets: image classification on CIFAR-10 and ImageNet, and neural machine translation on WMT'16-En-De. All experiments use NVIDIA DGX-1 servers as worker nodes. Each server contains 8 NVIDIA V100 GPUs and the servers are interconnected via commodity 10 Gbps Ethernet. + +On CIFAR-10 (Krizhevsky et al., 2009), we train a ResNet-18 (He et al., 2016) using 32 V100 GPUs, located on 32 different worker nodes. The total mini-batch size is 4096, and we train for 200 epochs. The learning rate $(\gamma_{t})$ linearly increases during the first 5 epochs, following the warm-up strategy in Goyal et al. (2017), and then decays by a factor of 10 at epochs 100, 150, and 175. The (fast) learning rate was tuned separately for each base optimizer. All experiments were run 5 times with different random seeds, and the mean metrics are reported. 
+ +On ImageNet (Krizhevsky et al., 2012), we train a ResNet-50 (He et al., 2016) using 32 worker nodes (i.e., 256 GPUs). The total mini-batch size is 8192, and we train for 90 epochs. The learning rate schedule is identical to (Goyal et al., 2017), i.e., linear warm-up in the first 5 epochs and decay by a factor of 10 at epochs 30, 60 and 80. + +On WMT'16-En-De, we train a transformer model (Vaswani et al., 2017) using 8 worker nodes (i.e., 64 GPUs). The model is trained with 200k token batches, and we train for 25 epochs. We follow the experimental setting of Ott et al. (2018). + +For each task, we consider several baselines: (i) Local SGD /Local Adam, where worker nodes independently run single-node SGD (with Nesterov momentum) or Adam and periodically average model parameters; (ii) stochastic gradient push (SGP), the state-of-the-art synchronous decentralized training method; and (iii) Overlap-SGP (OSGP), an asynchronous version of SGP. For each baseline, we examine its performance with and without SLOWMo. Recall that Local SGD and Local Adam with SLOWMo are equivalent to BMUF. Local SGD and Local Adam do not involve communication during the inner loop (base optimizer) updates, while SGP and OSGP involve gossiping with one peer at every step. In addition, we also evaluate the performance of AR-SGD/AR-Adam, the traditional ALLREDUCE implementation of parallel SGD/Adam. Details of all baseline methods are provided in Appendices A and C. + +In general, the hyperparameters of SLOWMo (slow learning rate $\alpha$ , slow momentum $\beta$ , and number of inner loop steps $\tau$ ) need to be tuned for each base optimizer and task. The results in Table 1 all use $\alpha = 1$ , which we found to be consistently the best. For Local SGD (with or without SLOWMo), we set $\tau = 12$ , and for all other baseline methods we use $\tau = 48$ . Using $\tau > 12$ for Local SGD resulted in significantly worse loss/accuracy on ImageNet and WMT'16 En-De. 
+ +Note also that all of our baselines (or base optimizers) leverage a local momentum scheme, following previous works (Assran et al., 2019; Koloskova et al., 2019a). When using these methods with SLOWMo, there are different ways to handle the base algorithm local momentum buffers at the beginning of each outer loop (line 2 in Algorithm 1): zeroing, averaging among workers, or maintaining the current local value. Appendix B.4 provides an empirical comparison. For the experiments reported here, when using SGD with Nesterov momentum as the base algorithm (CIFAR-10 and ImageNet) we zero the base algorithm buffer, and when using Adam as the base algorithm (WMT'16 En-De) we maintain the current value of the Adam buffers. We also tried to apply SLOWMo on top of AR-SGD base optimizer, but we did not observe any improvement in that setting. + +Optimization and Generalization Performance. Table 1 shows the best training loss and the validation accuracy/BLEU score for each baseline, with and without SLOWMo. Using SLOWMo consistently improves both the optimization and generalization performance across all training tasks and baseline algorithms. Figure 2 presents validation error/loss per epoch to give a sense of convergence speed. Observe that SGP with SLOWMo substantially improves convergence, compared to SGP alone. We observe a similar phenomenon when comparing the training curves; see Appendix B. + +Table 1: Comparisons to the original distributed optimization algorithms on various training tasks. The best training loss, validation accuracy (for image classification), and BLEU score (for machine translation) are reported. We fix slow learning rate $\alpha = 1$ . We set the number of local steps $\tau = 12$ for CIFAR10. For ImageNet and WMT, we use $\tau = 48$ for SGP and OSGP and $\tau = 12$ for Local SGD. The slow momentum $\beta$ is tuned for each case. It typically ranges from 0.4 to 0.8. + +
| Datasets | Baseline | Training Loss (Original) | Training Loss (w/ SLOWMO) | Validation Acc./BLEU (Original) | Validation Acc./BLEU (w/ SLOWMO) |
| --- | --- | --- | --- | --- | --- |
| CIFAR-10 | Local SGD | 0.122 | 0.006 | 91.73% | 93.20% |
| | OSGP | 0.011 | 0.001 | 93.17% | 93.74% |
| | SGP | 0.002 | 0.001 | 93.90% | 94.32% |
| | AR-SGD | 0.002 | - | 92.66% | - |
| ImageNet | Local SGD | 1.43 | 1.21 | 69.94% | 73.24% |
| | OSGP | 1.03 | 0.97 | 74.96% | 75.54% |
| | SGP | 1.07 | 1.00 | 75.15% | 75.73% |
| | AR-SGD | 0.96 | - | 76.00% | - |
| WMT'16 En-De | Local Adam | 2.520 | 2.480 | 26.62 | 27.14 |
| | SGP | 2.500 | 2.447 | 26.92 | 27.84 |
| | AR-Adam | 2.468 | - | 27.17 | - |
Communication Cost. Table 2 shows the average training time per iteration on ImageNet and WMT'16. For SGP/OSGP, since the additional communication cost due to averaging in line 6 of Algorithm 1 is amortized over $\tau = 48$ iterations, SLOWMo maintains nearly the same speed as the corresponding base algorithm. For methods like Local SGD or Local Adam, which already compute an exact average every $\tau$ iterations, using SLOWMo (i.e., using $\beta > 0$) does not increase the amount of communication. In other words, using SLOWMo on top of the base algorithm improves training/validation accuracy at negligible additional communication cost.

Effects of $\tau$. The most important hyperparameter in SLOWMo is the number of base optimizer steps $\tau$ before each SLOWMo update, since it influences both the accuracy and the training time. Figure 3 presents the validation accuracy and average iteration time of SGP-SLOWMo for different values of $\tau$ on ImageNet and WMT'16. The validation performance does not monotonically increase or decrease with $\tau$; instead, there is an intermediate best value. On both ImageNet and WMT'16, we find $\tau = 48$ to be a good tradeoff between speed and accuracy. Moreover, SLOWMo is fairly robust to the choice of $\tau$: even with $\tau = 96$ for ImageNet and $\tau = 192$ for WMT'16, SGP with SLOWMo achieves better validation accuracy/loss than SGP alone.

We further investigate the effect of other hyperparameters (the slow learning rate $\alpha$ and slow momentum $\beta$), as well as the different strategies for handling base algorithm buffers, in Appendix B.

![](images/2e1fbd5a84c0d0541569b07f2c600e1ab2acea298ec34d6323147244734d1b2e.jpg)
(a) CIFAR-10, batch size: 4k.

![](images/31304b8787397a3be89cddb35c49a0db3d06850c796cbffe67138ad25d4f63f9.jpg)
(b) ImageNet, batch size: 8k.

![](images/2b72abd927dabbe9983378fc9036d0e139527ec98b814eace2014201345b8e2e.jpg)
(c) WMT'16 En-De, batch size: 200k.
+Figure 2: Validation curves for various tasks using SGP as the base algorithm. We fix $\alpha = 1, \tau = 12$ for these three plots. Shaded areas in (a) and (b) show the min-max values across all worker nodes. The corresponding training curves are presented in Appendix B.2. + +Table 2: Average time per iteration with and without SLOWMo. Recall that $\tau = 48$ for the SGP and OSGP base optimizers and $\tau = 12$ for Local SGD/Local Adam. In some cases, training with SLOWMo was faster than without; we hypothesize that this is due to statistical variations in timing and background network traffic. +(a) ImageNet, batch size:8k, 32 nodes. + +
| Baseline | Original (ms) | w/ SLOWMo (ms) |
| --- | --- | --- |
| Local SGD | 294 | 282 |
| OSGP | 271 | 271 |
| SGP | 304 | 302 |
| AR-SGD | 420 | - |
+ +(b) WMT'16 En-De, batch size:200k, 8 nodes. + +
| Baseline | Original (ms) | w/ SLOWMo (ms) |
| --- | --- | --- |
| Local Adam | 503 | 505 |
| SGP | 1225 | 1279 |
| AR-Adam | 1648 | - |
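The amortization of the exact average can be illustrated with a toy per-iteration cost model; all timing constants below are hypothetical, for illustration only, not measurements from our experiments.

```python
def avg_iter_time(t_compute, t_gossip, t_allreduce, tau):
    """Average per-iteration time (ms) when one exact ALLREDUCE,
    costing t_allreduce ms, is performed every tau iterations on
    top of a base step costing t_compute + t_gossip ms."""
    return t_compute + t_gossip + t_allreduce / tau

# Hypothetical timings: a 96 ms ALLREDUCE amortized over tau = 48
# iterations adds only 2 ms per iteration to the base algorithm.
base = avg_iter_time(250.0, 50.0, 0.0, tau=48)     # plain SGP-like step
slowmo = avg_iter_time(250.0, 50.0, 96.0, tau=48)  # + amortized ALLREDUCE
assert slowmo - base == 2.0
```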
+ +![](images/8986d4c2a25cbf9bbf9b507e241f1bc9862b7ee8a9179d0920fb582456907dbe.jpg) +(a) Effect of $\tau$ on ImageNet. + +![](images/cc69a2b3591a306ba41137a19d1072ddd0c28101f9a7fa6d9ef04ede2edeaf48.jpg) +(b) Effect of $\tau$ on WMT'16. +Figure 3: The effects of $\tau$ in SLOWMo. We use SGP as the base algorithm. For ImageNet we plot validation accuracy (higher is better), and for WMT'16 En-De we plot validation NLL (lower is better). Increasing $\tau$ amortizes communication cost over more iterations, so the average time per iteration decreases. We hypothesize that moderate values of $\tau$ have a regularizing effect, improving loss and accuracy, and that when $\tau$ is too large performance degrades because workers' local models drift too far apart. + +Comparison with Double-Averaging Momentum. As mentioned in Section 3, Yu et al. (2019a) propose an alternative momentum scheme, double-averaging, to improve the convergence of Local SGD and D-PSGD. We empirically compare it with SLOWMo in terms of validation accuracy and average training time per iteration on ImageNet. When the base algorithm is SGP, double-averaging achieves $75.54\%$ validation accuracy and takes $402~\mathrm{ms}$ per iteration on average, while SLOWMo-SGP $(\tau = 48)$ reaches $75.73\%$ validation accuracy and takes only $302~\mathrm{ms}$ per iteration on average. Similarly, when the base algorithm is Local SGD with $\tau = 12$ , double-averaging reaches $72.04\%$ and takes $405~\mathrm{ms}$ per iteration, while SLOWMo reaches $73.24\%$ and takes only $282~\mathrm{ms}$ per iteration. + +# 5 THEORETICAL RESULTS + +This section provides a convergence guarantee for SLOWMo and shows that it can achieve a linear speedup in the number of workers. Let $f_{i}(\pmb{x}) = \mathbb{E}_{\xi_{i} \sim D_{i}}[F_{i}(\pmb{x}; \xi_{i})]$ denote the expected objective function at worker $i$ , and let $f(\pmb{x}) = \frac{1}{m} \sum_{i=1}^{m} f_{i}(\pmb{x})$ .
Our analysis is conducted for a constant learning rate $\gamma_{t} = \gamma$ under the following standard assumptions. + +Assumption 1 (L-smooth). Each local objective function $f_{i}(\pmb{x})$ is $L$-smooth, i.e., $\| \nabla f_{i}(\pmb{x}) - \nabla f_{i}(\pmb{y}) \| \leq L \| \pmb{x} - \pmb{y} \|$ , for all $\pmb{x}, \pmb{y} \in \mathbb{R}^{d}$ and $i \in \{1,2,\dots,m\}$ . + +Assumption 2 (Bounded variance). There exists a finite positive constant $\sigma^2$ such that $\mathbb{E}_{\xi \sim D_i}\| \nabla F_i(\pmb {x};\xi) - \nabla f_i(\pmb {x})\| ^2\leq \sigma^2$ , for all $i\in \{1,2,\dots,m\}$ . + +In order to generalize the analysis to various base algorithms, we define $\pmb{d}_{t,k} = \frac{1}{m}\sum_{i=1}^{m}\pmb{d}_{t,k}^{(i)}$ as the average descent direction across the $m$ workers and make the following assumption. + +Assumption 3. There exists a finite positive constant $V$ such that $\mathbb{E}\| \pmb{d}_{t,k} - \mathbb{E}_{t,k}[\pmb{d}_{t,k}]\|^2 \leq V$ , where $\mathbb{E}_{t,k}$ denotes expectation conditioned on all randomness from stochastic gradients up to the $k$ -th step of the $t$ -th outer iteration. + +As mentioned in Section 2, the analytic form of $\pmb{d}_{t,k}$ depends on the choice of base algorithm, and therefore so does the value of $V$ . For instance, when the base algorithm is Local SGD, then $\pmb{d}_{t,k} = \frac{1}{m}\sum_{i=1}^{m}\nabla F_i(\pmb{x}_{t,k}^{(i)};\xi_{t,k}^{(i)})$ . It follows that + +$$
\mathbb {E} \left\| \boldsymbol {d} _ {t, k} - \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \right\| ^ {2} = \frac {1}{m ^ {2}} \sum_ {i = 1} ^ {m} \mathbb {E} \left\| \nabla F _ {i} \left(\boldsymbol {x} _ {t, k} ^ {(i)}; \xi_ {t, k} ^ {(i)}\right) - \nabla f _ {i} \left(\boldsymbol {x} _ {t, k} ^ {(i)}\right) \right\| ^ {2} \leq \frac {\sigma^ {2}}{m} = V. \tag {4}
$$ + +The same value $(V = \sigma^2 /m)$ also applies to other base algorithms, such as D-PSGD, SGP, and OSGP. More details are provided in Appendix C.
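As an illustrative sanity check (our own simulation, not part of the analysis), the $V = \sigma^2/m$ scaling in Eq. (4) can be verified numerically for the Local SGD case, taking the true gradient to be zero so that each worker's stochastic gradient is pure noise:

```python
import numpy as np

rng = np.random.default_rng(0)
m, sigma, trials = 16, 2.0, 200_000

# Each worker's stochastic gradient = true gradient (here 0) + noise
# with variance sigma^2; averaging over m workers gives d_{t,k}.
noise = rng.normal(0.0, sigma, size=(trials, m))
d = noise.mean(axis=1)

V_hat = d.var()  # empirical E||d - E[d]||^2
assert abs(V_hat - sigma**2 / m) < 0.01  # matches V = sigma^2 / m
```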
+ +Our main convergence result is stated next. Proofs of all results in this section appear in Appendix D. + +Theorem 1 (General Result). Suppose all worker nodes start from the same initial point $\mathbf{x}_{0,0}$ , and the initial slow momentum is $\mathbf{u}_0 = \mathbf{0}$ . If we set $\alpha$ , $\beta$ , $\gamma_t = \gamma$ , $\tau$ and $T$ so that $\frac{\alpha\gamma}{1 - \beta} = \sqrt{\frac{m}{\tau T}}$ and the total iterations $\tau T$ satisfies $\tau T \geq mL^2\left(1 + \sqrt{3}\max \left\{\frac{3\tau(1 - \beta - \alpha)}{\alpha},\frac{4\tau\beta}{1 - \beta},1\right\}\right)$ , then under Assumptions 1 to 3, we have that: + +$$
\begin{array}{l} \frac {1}{\tau T} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \nabla f (\boldsymbol {x} _ {t, k}) \| ^ {2} \leq \frac {2 (f (\boldsymbol {x} _ {0,0}) - f_{\mathrm{inf}}) + m V L}{\sqrt {m \tau T}} + \underbrace {\frac {1}{\tau T} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \nabla f (\boldsymbol {x} _ {t, k}) - \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \| ^ {2}} _ {\text {effect of base optimizer}} \\ + \underbrace {\frac {4 m V L ^ {2} (\tau - 1)}{\tau T} \left(\frac {1 - \beta}{\alpha} - 1\right) ^ {2} + \frac {8 m V L ^ {2} \tau}{\tau T} \frac {\beta ^ {2}}{(1 - \beta ^ {2})}} _ {\text {effect of slow momentum}} \tag {5} \\ \end{array}
$$ + +where $f_{\mathrm{inf}} = \inf_{\pmb{x}} f(\pmb{x})$ . + +Consistency with AR-SGD. Recall that AR-SGD is equivalent to taking $\tau = 1$ , $\alpha = 1$ , and $\beta = 0$ and using SGD with learning rate $\gamma$ as the base optimizer. In this case, all terms on the RHS but the first one vanish, $V = \sigma^2 / m$ , and (5) is identical to the well-known rate of $\mathcal{O}(1 / \sqrt{mT\tau})$ for SGD. + +Effect of the base optimizer. The second term in (5) only depends on the base optimizer.
It measures the bias between the full-batch gradient $\nabla f(\boldsymbol{x}_{t,k})$ and the expected update averaged across all workers, $\mathbb{E}_{t,k}[\boldsymbol{d}_{t,k}]$ . For the base optimizers considered in this paper, this term relates to the discrepancies among local models, and bounds on it can be found in the prior distributed optimization literature. In particular, under the same assumptions as Theorem 1, one can show that this term vanishes at a rate of $1/(T\tau)$ for D-PSGD, SGP, OSGP, and Local SGD; see Appendix C. + +As an example, we provide the convergence analysis for the extreme case of Local SGD, where there is no communication between nodes during each inner iteration. Intuitively, using other base algorithms should only make this term smaller, since they involve more communication than Local SGD. + +Corollary 1 (Convergence of BMUF, i.e., Local SGD with SLOWMo). Under the same conditions as Theorem 1, if the inner algorithm is Local SGD and there exists a positive finite constant $\zeta$ such that $\frac{1}{m}\sum_{i=1}^{m}\|\nabla f(\mathbf{x}) - \nabla f_i(\mathbf{x})\|^2 \leq \zeta^2$ , then + +$$
\frac {1}{\tau T} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \nabla f (\boldsymbol {x} _ {t, k}) \| ^ {2} = \mathcal {O} \left(\frac {1}{\sqrt {m \tau T}}\right) + \mathcal {O} \left(\frac {m \tau}{T}\right). \tag {6}
$$ + +Linear speedup. Corollary 1 shows that when the total number of steps $T\tau$ is sufficiently large, i.e., $T \geq m^3\tau^3$ , the convergence rate is dominated by the first term $\mathcal{O}(1 / \sqrt{mT\tau})$ . That is, in order to achieve an $\epsilon$ error, the algorithm requires $m$ times fewer steps when using $m$ times more worker nodes. This also recovers the same rate as AR-SGD. + +Extension to the single-node case. As mentioned in Section 2, when there is only one node and the slow momentum factor is $\beta = 0$ , SLOWMo-SGD reduces to the Lookahead optimizer.
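As a concrete illustration of this special case, here is a minimal single-worker sketch in which the slow update with $\beta = 0$ reduces to the Lookahead interpolation $\pmb{x} \leftarrow \pmb{x} + \alpha(\pmb{x}_{\mathrm{fast}} - \pmb{x})$; the quadratic objective is our own toy example, not from the paper.

```python
import numpy as np

def lookahead_sgd(x0, grad, gamma, alpha, tau, T):
    """SLOWMo with one worker and slow momentum beta = 0:
    tau fast SGD steps, then interpolate toward the fast weights."""
    x_slow = np.asarray(x0, dtype=float)
    for _ in range(T):
        x_fast = x_slow.copy()
        for _ in range(tau):
            x_fast -= gamma * grad(x_fast)   # inner SGD steps
        x_slow += alpha * (x_fast - x_slow)  # slow (outer) update
    return x_slow

# Toy quadratic f(x) = 0.5 ||x||^2, so grad(x) = x; minimizer is 0.
x = lookahead_sgd(x0=[4.0, -2.0], grad=lambda x: x,
                  gamma=0.1, alpha=0.8, tau=5, T=50)
assert np.linalg.norm(x) < 1e-3
```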
One can directly apply Theorem 1 to this special case and obtain the following corollary. + +Corollary 2 (Convergence of Lookahead). Under the same conditions as Theorem 1, if the inner optimizer is AR-SGD and $\beta = 0$ , then one can obtain the following upper bound: + +$$
\begin{array}{l} \frac {1}{\tau T} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \nabla f (\boldsymbol {x} _ {t, k}) \| ^ {2} \leq \frac {2 (f (\boldsymbol {x} _ {0,0}) - f_{\mathrm{inf}}) + \sigma ^ {2} L}{\sqrt {\tau T}} + \frac {4 \sigma ^ {2} L ^ {2} (\tau - 1)}{\tau T} \left(\frac {1}{\alpha} - 1\right) ^ {2} \tag {7} \\ = \mathcal {O} \left(\frac {1}{\sqrt {\tau T}}\right) + \mathcal {O} \left(\frac {1}{T}\right). \tag {8} \\ \end{array}
$$ + +# 6 FASTER SLOWMO: REMOVING THE PERIODIC ALLREDUCE + +SLOWMo helps improve both the optimization and generalization of communication-efficient algorithms. When the base optimizer is SGP or OSGP, however, SLOWMo also comes at the expense of higher communication cost, since it requires performing an exact average every $\tau$ iterations. Although this communication cost can be amortized, here we go one step further and propose an SGP-SLOWMo variant, named SGP-SLOWMo-noaverage, in which we remove the exact average when performing the SLOWMo update, i.e., we skip line 6 in Algorithm 1. We empirically evaluate this variant on the ImageNet and WMT'16 datasets, using $\alpha = 1$ , $\beta = 0.6$ and $\tau = 48$ . + +Surprisingly, we observe that SGP-SLOWMo-noaverage achieves similar performance on ImageNet (75.78%, compared to 75.73% for SGP-SLOWMo) and only slightly degrades the validation NLL on WMT'16 (2.11, compared to 2.10), while preserving the iteration time of the base algorithm (298 ms per iteration on ImageNet and 1227 ms per iteration on WMT'16), since this variant does not require additional communication.
These results suggest that the slow momentum updates, and not the momentum buffer synchronization, contribute the most to the performance gain of SLOWMo. We leave further investigation of SGP-SLOWMo-noaverage for future work. + +# 7 CONCLUDING REMARKS + +In this paper, we propose a general momentum framework, SLOWMo, for communication-efficient distributed optimization algorithms. SLOWMo can be built on top of SGD, as well as decentralized methods such as SGP and (asynchronous) OSGP. On three different deep learning tasks, we empirically show that SLOWMo consistently improves the optimization and generalization performance of the corresponding baseline algorithm while maintaining a similar level of communication efficiency. Moreover, we establish a convergence guarantee for SLOWMo, showing that it converges to a stationary point of smooth, non-convex objective functions. Since BMUF (Chen & Huo, 2016) can be expressed through the SLOWMo framework (by setting the base optimizer to be Local SGD or Local Adam), to the best of our knowledge, we provide the first convergence guarantee for BMUF in the literature. + +# REFERENCES + +Dan Alistarh, Demjan Grubic, Jerry Z. Li, Ryota Tomioka, and Milan Vojnovic. QSGD: Communication-efficient SGD via gradient quantization and encoding. In Advances in Neural Information Processing Systems, pp. 1709-1720, 2017. +Mahmoud Assran, Nicolas Loizou, Nicolas Ballas, and Michael Rabbat. Stochastic gradient push for distributed deep learning. In International Conference on Machine Learning, 2019. +Jeremy Bernstein, Jiawei Zhao, Kamyar Azizzadenesheli, and Anima Anandkumar. signSGD with majority vote is communication efficient and fault tolerant. In International Conference on Learning Representations, 2019. +Léon Bottou, Frank E. Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. SIAM Review, 60(2):223-311, 2018. + +Bugra Can, Mert Gurbuzbalaban, and Lingjiong Zhu.
Accelerated linear convergence of stochastic momentum methods in Wasserstein distances. arXiv preprint arXiv:1901.07445, 2019. +Jianmin Chen, Rajat Monga, Samy Bengio, and Rafal Jozefowicz. Revisiting distributed synchronous SGD. In International Conference on Learning Representations Workshop Track, 2016. +Kai Chen and Qiang Huo. Scalable training of deep learning machines by incremental block training with intra-block parallel optimization and blockwise model-update filtering. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5880-5884, 2016. +Sanghamitra Dutta, Gauri Joshi, Soumyadip Ghosh, Parijat Dube, and Priya Nagpurkar. Slow and stale gradients can win the race: Error-runtime trade-offs in distributed SGD. In International Conference on Artificial Intelligence and Statistics, pp. 803-812, 2018. +Nuwan Ferdinand, Haider Al-Lawati, Stark Draper, and Matthew Nokleby. Anytime minibatch: Exploiting stragglers in online distributed optimization. In International Conference on Learning Representations, 2019. +Igor Gitman, Hunter Lang, Pengchuan Zhang, and Lin Xiao. Understanding the role of momentum in stochastic gradient methods. In Advances in Neural Information Processing Systems, 2019. +Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677, 2017. +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016. +Zhanhong Jiang, Aditya Balu, Chinmay Hegde, and Soumik Sarkar. Collaborative deep learning in fixed topology networks. In Advances in Neural Information Processing Systems, pp. 5904-5914, 2017. +Sai Praneeth Karimireddy, Quentin Rebjock, Sebastian Stich, and Martin Jaggi.
Error feedback fixes SignSGD and other gradient compression schemes. In International Conference on Machine Learning, 2019. +Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015. +Anastasia Koloskova, Tao Lin, Sebastian U Stich, and Martin Jaggi. Decentralized deep learning with arbitrary communication compression. arXiv preprint arXiv:1907.09356, 2019a. +Anastasia Koloskova, Sebastian Stich, and Martin Jaggi. Decentralized stochastic optimization and gossip algorithms with compressed communication. In International Conference on Machine Learning, 2019b. +Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. Learning multiple layers of features from tiny images. CIFAR-10 (Canadian Institute for Advanced Research), 2009. URL http://www.cs.toronto.edu/~kriz/cifar.html. +Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012. +Xiangru Lian, Ce Zhang, Huan Zhang, Cho-Jui Hsieh, Wei Zhang, and Ji Liu. Can decentralized algorithms outperform centralized algorithms? A case study for decentralized parallel stochastic gradient descent. In Advances in Neural Information Processing Systems, pp. 5330-5340, 2017. +Xiangru Lian, Wei Zhang, Ce Zhang, and Ji Liu. Asynchronous decentralized parallel stochastic gradient descent. In Proceedings of the 35th International Conference on Machine Learning, pp. 3049-3058, 2018. + +Tao Lin, Sebastian U Stich, Kumar Kshitij Patel, and Martin Jaggi. Don't use large mini-batches, use local SGD. arXiv preprint arXiv:1808.07217, 2018. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019. +Nicolas Loizou and Peter Richtárik.
Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods. arXiv preprint arXiv:1712.09677, 2017. +Jerry Ma and Denis Yarats. Quasi-hyperbolic momentum and Adam for deep learning. In International Conference on Learning Representations, 2019. +Dhruv Mahajan, Nikunj Agrawal, S Sathiya Keerthi, Sundararajan Sellamanickam, and Léon Bottou. An efficient distributed learning algorithm based on effective local functional approximations. The Journal of Machine Learning Research, 19(1):2942-2978, 2018a. +Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens van der Maaten. Exploring the limits of weakly supervised pretraining. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 181-196, 2018b. +Ryan McDonald, Keith Hall, and Gideon Mann. Distributed training strategies for the structured perceptron. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pp. 456-464. Association for Computational Linguistics, 2010. +H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agüera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pp. 1273-1282, 2017. +Angelia Nedic and Alex Olshevsky. Stochastic gradient-push for strongly convex functions on time-varying directed graphs. IEEE Trans. Automatic Control, 61(12):3936-3947, 2016. +Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. Scaling neural machine translation. In Conference on Machine Translation (WMT), 2018. +Adam Paszke, Soumith Chintala, Ronan Collobert, Koray Kavukcuoglu, Clement Farabet, Samy Bengio, Iain Melvin, Jason Weston, and Johnny Mariethoz. PyTorch: Tensors and dynamic neural networks in Python with strong GPU acceleration, 2017.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI technical report, Feb. 2019. +Dominic Richards and Patrick Rebeschini. Optimal statistical rates for decentralised non-parametric regression with linear speed-up. In Advances in Neural Information Processing Systems, 2019. +Kevin Scaman, Francis Bach, Sébastien Bubeck, Yin Tat Lee, and Laurent Massoulie. Optimal convergence rates for convex distributed optimization in networks. Journal of Machine Learning Research, pp. 1-31, 2019. +Sebastian U Stich. Local SGD converges fast and communicates little. In International Conference on Learning Representations, 2019. +Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In International Conference on Machine Learning, pp. 1139-1147, 2013. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998-6008, 2017. +Thijs Vogels, Sai Praneeth Karimireddy, and Martin Jaggi. PowerSGD: Practical low-rank gradient compression for distributed optimization. In Advances in Neural Information Processing Systems, 2019. + +Jianyu Wang and Gauri Joshi. Cooperative SGD: A unified framework for the design and analysis of communication-efficient SGD algorithms. arXiv preprint arXiv:1808.07576, 2018. +Jianyu Wang, Anit Kumar Sahu, Zhouyi Yang, Gauri Joshi, and Soummya Kar. MATCHA: Speeding up decentralized SGD via matching decomposition sampling. arXiv preprint arXiv:1905.09435, 2019. +Wei Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. TernGrad: Ternary gradients to reduce communication in distributed deep learning. In Advances in Neural Information Processing Systems, pp. 1509-1519, 2017. +Hao Yu, Rong Jin, and Sen Yang.
On the linear speedup analysis of communication efficient momentum SGD for distributed non-convex optimization. In International Conference on Machine Learning, 2019a. +Hao Yu, Sen Yang, and Shenghuo Zhu. Parallel restarted SGD with faster convergence and less communication: Demystifying why model averaging works for deep learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 5693-5700, 2019b. +Michael R Zhang, James Lucas, Geoffrey Hinton, and Jimmy Ba. Lookahead optimizer: k steps forward, 1 step back. arXiv preprint arXiv:1907.08610, 2019. +S. Zhang, A. Choromanska, and Y. LeCun. Deep learning with elastic averaged SGD. In Advances in Neural Information Processing Systems, pp. 685-693, 2015. +Fan Zhou and Guojing Cong. On the convergence properties of a $k$ -step averaging stochastic gradient descent algorithm for nonconvex optimization. In International Joint Conference on Artificial Intelligence, 2018. + +# A EXPERIMENT DETAILS + +# A.1 IMPLEMENTATION DETAILS + +All methods are implemented in PyTorch 1.0 (Paszke et al., 2017), and our experiments use CUDA 9.2, cuDNN 7.3, and NCCL 2.2.13. The ImageNet experiments build on the example from https://github.com/pytorch/examples/imagenet. The WMT'16 En-De experiments build on https://github.com/pytorch/fairseq. For SGP and OSGP we use the implementations available at https://github.com/facebookresearch/stochastic_gradient_push. + +# A.2 CIFAR-10 + +For the CIFAR-10 experiments, we train a ResNet-18, the implementation of which is available at https://github.com/kuangliu/pytorch-cifar/blob/master/models/resnet.py. In all base algorithms, we use a Nesterov momentum parameter of 0.9 and set the weight decay factor to $10^{-4}$ . For each base algorithm, we tune the (fast) learning rate from $\{0.01, 0.025, 0.05, 0.1, 0.15\}$ and linearly scale it with the number of workers (i.e., 32).
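This linear scaling rule can be sketched as a trivial helper (our own illustration, not code from our implementation):

```python
def scale_lr(tuned_lr, num_workers):
    """Linearly scale a per-worker tuned learning rate with the
    number of workers, in the spirit of Goyal et al. (2017)."""
    return tuned_lr * num_workers

# e.g. a tuned per-worker rate of 0.025 on 32 workers gives an
# effective rate of 0.8 for the full 4096-sample batch.
assert abs(scale_lr(0.025, 32) - 0.8) < 1e-12
```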
We found that, with a total batch size of 4096, the best learning rate for AR-SGD is 0.01, for OSGP/SGP is 0.05, and for Local SGD is 0.025. + +When applying SLOWMo to these base algorithms, we fix $\alpha = 1$ and $\tau = 12$ and tune the value of $\beta$ over $\{0.4, 0.5, 0.6, 0.7, 0.8\}$ . It turns out that for OSGP, SGP, and Local SGD, the best values of $\beta$ are all equal to 0.7. More discussion on the effects of $\alpha$ and $\beta$ can be found in Appendix B.3. + +# A.3 IMAGENET + +For the ImageNet experiments, we use the same learning rate schedule, momentum, and weight decay as those suggested in Goyal et al. (2017) for SGD. In particular, we use ResNet-50 (He et al., 2016) and train it for 90 epochs with a reference learning rate of 0.1 with respect to a 256-sample batch, and scale this linearly with the batch size. We decay the learning rate by a factor of 10 at epochs 30, 60, and 80. We use a Nesterov momentum parameter of 0.9 and a weight decay of $10^{-4}$ . + +When using SLOWMo, we set the slow learning rate to $\alpha = 1$ and explore different numbers of inner steps, $\tau \in \{12,48\}$ , and different slow momentum values, $\beta \in \{0.5,0.6,0.7\}$ when the base optimizer is SGP/OSGP and $\beta = 0.7$ when the base optimizer is Local SGD. We also explore a larger set of $\tau$ values, $\tau \in \{12,24,48,96\}$ , in the ablation experiments. + +# A.4 WMT16 EN-DE + +For the WMT16 En-De experiments, we follow the experimental protocol described in Ott et al. (2018). All experiments are based on the big transformer model (Vaswani et al., 2017) with 6 blocks in the encoder and decoder networks. For these experiments, the base optimizer used is Adam (Kingma & Ba, 2015) with $\beta_1 = 0.9$ , $\beta_2 = 0.98$ , and $\epsilon = 10^{-8}$ , trained for 25 epochs. We use the same learning rate schedule as Ott et al.
(2018), i.e., the learning rate increases linearly for 4,000 steps to $10^{-3}$ , after which it is decayed proportionally to the inverse square root of the number of steps. We use label smoothing with weight 0.1 for the uniform prior distribution. + +For SLOWMo, we explore $\{0.5,1.0\}$ for the slow learning rate $\alpha$ . We observe that $\alpha = 1$ gives better performance and therefore report results for $\alpha = 1$ unless stated otherwise. We explore different numbers of inner steps, $\tau \in \{12,48\}$ , and different slow momentum values, $\beta \in \{0.1,0.3,0.5,0.6,0.7\}$ . We also explore a larger set of $\tau$ values, i.e., $\tau \in \{12,24,48,96,192\}$ , in the ablation experiments. + +Table B.1: Validation NLL (lower is better) with and without SLOWMo on WMT'16 En-De. We observe that SLOWMo improves the validation NLL of SGP and Local Adam. + +
| Baseline | Original | w/ SLOWMo |
| --- | --- | --- |
| Local Adam | 2.179 | 2.137 |
| SGP | 2.137 | 2.106 |
| AR-Adam | 2.108 | - |
+ +# B ADDITIONAL EMPIRICAL RESULTS + +# B.1 VALIDATION NLL ON WMT'16 EN-DE + +We show the validation NLL on WMT'16 En-De in Table B.1, corresponding to the experiments in Table 1. We observe that SLOWMo improves the validation NLL (along with the BLEU score) of SGP and Local Adam. + +# B.2 ADDITIONAL TRAINING CURVES + +We present the training loss-versus-epochs curves in Figure B.1, corresponding to the validation curves in Figure 2. It can be observed that SLOWMo substantially improves the convergence speed of SGP. + +![](images/f581ff3b30f35aeb5fed19f7ebfb96ab1e4f8f036a345fe56b9ad4169bbb13a5.jpg) +(a) CIFAR-10, batch size:4k. + +![](images/507150ced1098e7050a04d82a48ffb6adf75e85714ec9c716be0cf90f931b09b.jpg) +Figure B.1: Training curves for various tasks using SGP as the base algorithm. We fix $\alpha = 1, \tau = 12$ for these three plots. Shaded areas in (a) and (b) show the min-max values across all worker nodes. The corresponding validation curves are presented in Figure 2. Note that the training losses in these three figures are evaluated right after the SLOWMo update (i.e., Eq. (3)). + +![](images/b0f7c174bdb4d9d4c74ea924741ae859df051e257f3614d8f3c06d52c79ac147.jpg) +(b) ImageNet, batch size:8k. +(c) WMT16 En-De, batch size:200k. + +# B.3 EFFECT OF SLOW LEARNING RATE $\alpha$ AND SLOW MOMENTUM FACTOR $\beta$ + +In this section, we evaluate the impact of the slow learning rate $\alpha$ and slow momentum $\beta$ hyperparameters. + +In Figure B.2a, we perform a parameter sweep over $\alpha$ and $\beta$ on the CIFAR-10 dataset, using OSGP as the base algorithm for SLOWMo. One can observe that when the value of $\beta$ is fixed, $\alpha = 1$ always gives the highest validation accuracy; when the value of $\alpha$ is fixed, the best value of $\beta$ lies between 0.4 and 0.8. + +We further validate this claim on the WMT'16 En-De dataset.
Figure B.2b shows that $\alpha = 1$ gives lower validation loss than $\alpha = 0.5$ for fixed $\beta$ when using SGP or Local Adam as the base algorithm. When running SLOWMo-Adam with $\beta > 0.5$ and $\alpha = 1.0$ , or with $\beta > 0.7$ and $\alpha = 0.5$ , the validation loss was substantially worse and so is not plotted here. Motivated by these observations, we fix $\alpha = 1$ and tune only $\beta$ for SLOWMo methods on all training tasks. + +![](images/97e11a63d80c548a4935e4cec249a5b7879b72c086e74468491879de3494092e.jpg) +(a) Parameter sweep on the CIFAR-10 training task using OSGP as the base algorithm. + +![](images/c1a9fde15fda8b771b21772c2b6af95fd4b6dbb67558cd0fc2d9c1f0115ea5ce.jpg) +(b) Effect of $\alpha$ and $\beta$ on WMT'16 for SLOWMo-Adam. +Figure B.2: Impact of the slow learning rate $\alpha$ and slow momentum $\beta$ on SLOWMo. + +# B.4 BASE OPTIMIZER MOMENTUM BUFFER STRATEGIES + +As described in Section 2, the base optimizer may have some associated buffers. SGD with momentum uses a momentum buffer, and Adam tracks estimates of the first and second moments of the gradient. The slow momentum buffer is updated every $\tau$ steps according to Eq. (2). There are several strategies that can be used to update the base optimizer buffers at the outer iteration level (line 2 in Algorithm 1). Here, we explore three strategies: 1) reset the base optimizer buffers to zero; 2) maintain the base optimizer buffers at their current values; 3) average the base optimizer buffers across workers, which requires additional communication. We evaluate the impact of these strategies on ImageNet and WMT'16 in Table B.2 and Table B.3. + +On ImageNet, we observe that the different buffer strategies achieve similar training and validation performance. However, the averaging strategy comes at the cost of higher communication overhead (an additional call to ALLREDUCE for each buffer averaged).
Based on these results, we choose the reset strategy as the default in our experiments. + +On WMT'16, we find that the reset buffer strategy underperforms both the maintain and average approaches. When using Adam as the base optimizer, resetting the second-moment estimate to zero hurts optimization performance. This is not surprising, since it is well known that warming up Adam's buffers is important. Averaging buffers achieves the best results but comes at a significantly higher communication cost. We therefore select the maintain strategy as the default when using Adam. + +Table B.2: Effect of different buffer strategies: ImageNet, batch size:8k, 32 nodes. + +
| Buffer Strategy | Training Loss | Validation Accuracy |
| --- | --- | --- |
| Avg parameters + avg buffers | 1.06 | 75.66% |
| Avg parameters + reset buffers | 1.00 | 75.73% |
| Avg parameters + maintain buffers | 0.98 | 75.78% |
+ +Table B.3: Effect of different buffer strategies: WMT'16 En-De, batch size:200k, 8 nodes. + +
| Buffer Strategy | Training Loss | Validation Loss |
| --- | --- | --- |
| Avg parameters + avg buffers | 2.438 | 2.101 |
| Avg parameters + reset buffers | 5.093 | 4.732 |
| Avg parameters + maintain buffers | 2.447 | 2.106 |
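The three buffer strategies compared above can be sketched as follows (an illustrative NumPy sketch; `apply_buffer_strategy` is a hypothetical helper of our own, and the in-memory mean stands in for an actual ALLREDUCE):

```python
import numpy as np

def apply_buffer_strategy(worker_buffers, strategy):
    """Update each worker's base-optimizer buffers at the start of an
    outer iteration. `worker_buffers` is a list (one entry per worker)
    of dicts mapping buffer name -> np.ndarray."""
    if strategy == "maintain":
        return worker_buffers                      # keep current values
    if strategy == "reset":
        return [{k: np.zeros_like(v) for k, v in bufs.items()}
                for bufs in worker_buffers]
    if strategy == "average":                      # extra ALLREDUCE per buffer
        mean = {k: np.mean([b[k] for b in worker_buffers], axis=0)
                for k in worker_buffers[0]}
        return [{k: v.copy() for k, v in mean.items()}
                for _ in worker_buffers]
    raise ValueError(strategy)

bufs = [{"momentum": np.array([1.0, 2.0])},
        {"momentum": np.array([3.0, 6.0])}]
avg = apply_buffer_strategy(bufs, "average")
assert np.allclose(avg[0]["momentum"], [2.0, 4.0])
```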
+ +# B.5 STANDARD DEVIATIONS ON CIFAR-10 + +Since the CIFAR-10 experiments were run 5 times with different random seeds, here we report the standard deviations of the validation accuracy in Table B.4, as a complement to Table 1. + +Table B.4: Validation accuracy with and without SLOWMo on CIFAR-10. Using SLOWMo consistently improves the performance of the base algorithms. + +
| Baseline | Original | w/ SLOWMo |
| --- | --- | --- |
| Local SGD | 91.73 ± .14% | 93.20 ± .23% |
| OSGP | 93.17 ± .11% | 93.74 ± .17% |
| SGP | 93.90 ± .13% | 94.32 ± .21% |
| AR-SGD | 92.66 ± .16% | - |
+ +# C BASELINE ALGORITHMS + +In this section, we give a detailed description of each baseline algorithm used throughout the paper, provide theoretical justification for how to incorporate the update rules of D-PSGD, SGP, and OSGP into the analysis of SLOWMo, and derive the analytic form of $V$ used in Assumption 3 for each method. A summary of the base optimizer update directions is given in Table C.1. + +Table C.1: Examples of update directions used by the base optimizer in SLOWMo, where $\pmb{h}^{(i)}$ and $\pmb{v}^{(i)}$ denote the first-order and second-order momentum buffers, respectively, and $\beta_{\mathrm{local}}, \beta_1, \beta_2$ are the corresponding local momentum factors. When the local momentum buffers are reset at the beginning of each inner loop, then $\pmb{h}_{t,0}^{(i)} = 0$ , $\pmb{v}_{t,0}^{(i)} = 0$ and $l = k$ ; when the local momentum buffers are maintained, then $\pmb{h}_{t,0}^{(i)} = \pmb{h}_{t-1,\tau}^{(i)}$ , $\pmb{v}_{t,0}^{(i)} = \pmb{v}_{t-1,\tau}^{(i)}$ and $l = t\tau + k$ . + +
**SGD (with local Nesterov momentum):**
$$
\pmb{h}_{t,k+1}^{(i)} = \beta_{\mathrm{local}}\pmb{h}_{t,k}^{(i)} + \nabla F_i(\pmb{x}_{t,k}^{(i)};\xi_{t,k}^{(i)}), \qquad \pmb{d}_{t,k}^{(i)} = \beta_{\mathrm{local}}\pmb{h}_{t,k+1}^{(i)} + \nabla F_i(\pmb{x}_{t,k}^{(i)};\xi_{t,k}^{(i)})
$$

**SGP:**
$$
\pmb{h}_{t,k+1}^{(i)} = \beta_{\mathrm{local}}\pmb{h}_{t,k}^{(i)} + \nabla F_i(\pmb{z}_{t,k}^{(i)};\xi_{t,k}^{(i)}), \qquad \pmb{d}_{t,k}^{(i)} = \frac{1}{\gamma_t}\pmb{x}_{t,k}^{(i)} - \frac{1}{\gamma_t}\sum_{j\in\mathcal{N}_k^{in(i)}} p_k^{(i,j)}\left[\pmb{x}_{t,k}^{(j)} - \gamma_t\left(\beta_{\mathrm{local}}\pmb{h}_{t,k+1}^{(j)} + \nabla F_j(\pmb{z}_{t,k}^{(j)};\xi_{t,k}^{(j)})\right)\right]
$$

**Adam:**
$$
\pmb{h}_{t,k+1}^{(i)} = \beta_1\pmb{h}_{t,k}^{(i)} + (1-\beta_1)\nabla F_i(\pmb{x}_{t,k}^{(i)};\xi_{t,k}^{(i)}), \qquad \pmb{v}_{t,k+1}^{(i)} = \beta_2\pmb{v}_{t,k}^{(i)} + (1-\beta_2)\nabla F_i^2(\pmb{x}_{t,k}^{(i)};\xi_{t,k}^{(i)})
$$
$$
\hat{\pmb{h}}_{t,k+1}^{(i)} = \frac{\pmb{h}_{t,k+1}^{(i)}}{1-\beta_1^{l+1}}, \qquad \hat{\pmb{v}}_{t,k+1}^{(i)} = \frac{\pmb{v}_{t,k+1}^{(i)}}{1-\beta_2^{l+1}}, \qquad \pmb{d}_{t,k}^{(i)} = \frac{\hat{\pmb{h}}_{t,k+1}^{(i)}}{\sqrt{\hat{\pmb{v}}_{t,k+1}^{(i)}} + \epsilon}
$$

**SGP (Adam):**
$$
\pmb{h}_{t,k+1}^{(i)} = \beta_1\pmb{h}_{t,k}^{(i)} + (1-\beta_1)\nabla F_i(\pmb{z}_{t,k}^{(i)};\xi_{t,k}^{(i)}), \qquad \pmb{v}_{t,k+1}^{(i)} = \beta_2\pmb{v}_{t,k}^{(i)} + (1-\beta_2)\nabla F_i^2(\pmb{z}_{t,k}^{(i)};\xi_{t,k}^{(i)})
$$
$$
\hat{\pmb{h}}_{t,k+1}^{(i)} = \frac{\pmb{h}_{t,k+1}^{(i)}}{1-\beta_1^{l+1}}, \qquad \hat{\pmb{v}}_{t,k+1}^{(i)} = \frac{\pmb{v}_{t,k+1}^{(i)}}{1-\beta_2^{l+1}}, \qquad \pmb{d}_{t,k}^{(i)} = \frac{1}{\gamma_t}\pmb{x}_{t,k}^{(i)} - \frac{1}{\gamma_t}\sum_{j\in\mathcal{N}_k^{in(i)}} p_k^{(i,j)}\left[\pmb{x}_{t,k}^{(j)} - \gamma_t\frac{\hat{\pmb{h}}_{t,k+1}^{(j)}}{\sqrt{\hat{\pmb{v}}_{t,k+1}^{(j)}} + \epsilon}\right]
$$
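Whichever base optimizer supplies the directions $\pmb{d}_{t,k}^{(i)}$, the outer loop wrapped around them is the same: run $\tau$ base steps, average across workers, and apply the slow momentum update (2)-(3). A minimal single-process sketch in Python; the quadratic objective, hyperparameter values, and the helper names `base_steps`/`slowmo_outer_step` are hypothetical, with plain SGD standing in for any base optimizer:

```python
import numpy as np

def base_steps(x, gamma, tau, grad):
    """tau steps of plain SGD as the base optimizer (one row of Table C.1)."""
    for _ in range(tau):
        x = x - gamma * grad(x)
    return x

def slowmo_outer_step(x0, u, gamma, tau, alpha, beta, grad):
    """One outer iteration: tau base steps, then the slow momentum update
    u_{t+1}     = beta * u_t + (x_{t,0} - x_{t,tau}) / gamma
    x_{t+1,0}   = x_{t,0} - alpha * gamma * u_{t+1}
    """
    x_tau = base_steps(x0, gamma, tau, grad)
    u = beta * u + (x0 - x_tau) / gamma
    x_next = x0 - alpha * gamma * u
    return x_next, u

# Toy strongly convex objective f(x) = 0.5 * ||x||^2, so grad(x) = x.
grad = lambda x: x
x, u = np.ones(3), np.zeros(3)
for _ in range(50):
    x, u = slowmo_outer_step(x, u, gamma=0.1, tau=4, alpha=1.0, beta=0.5, grad=grad)
print(np.linalg.norm(x))  # shrinks toward 0 over the outer iterations
```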
# C.1 SGP AND OSGP

Algorithms 2 and 3 present the pseudo-code of SGP and OSGP (Assran et al., 2019). To be consistent with the experimental results of the original paper, we also use local Nesterov momentum on each worker node. The communication topology among worker nodes is a time-varying directed exponential graph (Assran et al., 2019). That is, if all nodes are ordered sequentially according to their rank $(0,1,\dots,m - 1)$, then each node periodically communicates with peers that are $2^0,2^1,\ldots ,2^{\lfloor \log_2(m - 1) \rfloor}$ hops away. We let each node send and receive only a single message (i.e., communicate with 1 peer) at each iteration.

Algorithm 2: Stochastic Gradient Push with Nesterov Momentum (SGP)
Input: learning rate $\gamma$; momentum factor $\beta_0$; number of worker nodes $m$. Initial point $\pmb{x}_0^{(i)} = \pmb{z}_0^{(i)}$, $\pmb{h}_0^{(i)} = 0$ and $w_0^{(i)} = 1$ for all nodes $i \in \{1,2,\dots,m\}$.
for $k \in \{0,1,\dots,K-1\}$ at worker $i$ in parallel do
Compute mini-batch gradient: $\nabla F_i(\pmb{z}_k^{(i)};\xi_k^{(i)})$
Update local momentum: $\pmb{h}_{k+1}^{(i)} = \beta_0\pmb{h}_k^{(i)} + \nabla F_i(\pmb{z}_k^{(i)};\xi_k^{(i)})$
Local step: $\pmb{x}_{k+\frac{1}{2}}^{(i)} = \pmb{x}_k^{(i)} - \gamma[\beta_0\pmb{h}_{k+1}^{(i)} + \nabla F_i(\pmb{z}_k^{(i)};\xi_k^{(i)})]$
Send $(p_k^{(j,i)}\pmb{x}_{k+\frac{1}{2}}^{(i)}, p_k^{(j,i)}w_k^{(i)})$ to out-neighbors
Receive $(p_k^{(i,j)}\pmb{x}_{k+\frac{1}{2}}^{(j)}, p_k^{(i,j)}w_k^{(j)})$ from in-neighbors
Update model parameters: $\pmb{x}_{k+1}^{(i)} = \sum_{j \in \mathcal{N}_k^{in(i)}} p_k^{(i,j)}\pmb{x}_{k+\frac{1}{2}}^{(j)}$
Update de-biasing factor: $w_{k+1}^{(i)} = \sum_{j \in \mathcal{N}_k^{in(i)}} p_k^{(i,j)}w_k^{(j)}$
Update de-biased model parameters: $\pmb{z}_{k+1}^{(i)} = \pmb{x}_{k+1}^{(i)} / w_{k+1}^{(i)}$
end

Algorithm 3: Overlap Stochastic Gradient Push with Nesterov Momentum (OSGP)
Input: learning rate $\gamma$; momentum factor $\beta_0$; number of worker nodes $m$. Initial point $\pmb{x}_0^{(i)} = \pmb{z}_0^{(i)}$, $\pmb{h}_0^{(i)} = 0$ and $w_0^{(i)} = 1$ for all nodes $i \in \{1,2,\dots,m\}$; count_since_last = 0.
for $k \in \{0,1,\dots,K-1\}$ at worker $i$ in parallel do
Compute mini-batch gradient: $\nabla F_i(\pmb{z}_k^{(i)};\xi_k^{(i)})$
Update local momentum: $\pmb{h}_{k+1}^{(i)} = \beta_0\pmb{h}_k^{(i)} + \nabla F_i(\pmb{z}_k^{(i)};\xi_k^{(i)})$
Local step: $\pmb{x}_{k+\frac{1}{2}}^{(i)} = \pmb{x}_k^{(i)} - \gamma[\beta_0\pmb{h}_{k+1}^{(i)} + \nabla F_i(\pmb{z}_k^{(i)};\xi_k^{(i)})]$
Non-blocking send $(p_{k}^{(j,i)}\pmb{x}_{k+\frac{1}{2}}^{(i)}, p_{k}^{(j,i)}w_{k}^{(i)})$ to out-neighbors
$\pmb{x}_{k+1}^{(i)} = p^{(i,i)}\pmb{x}_{k+\frac{1}{2}}^{(i)}$, $w_{k+1}^{(i)} = p^{(i,i)}w_{k}^{(i)}$
if count_since_last = s then
Block until $(p_{k}^{(i,j)}\pmb{x}_{k+\frac{1}{2}}^{(j)}, p_{k}^{(i,j)}w_{k}^{(j)})$ is received from in-neighbors
count_since_last = 0
else
count_since_last = count_since_last + 1
end
if receive buffer is non-empty then
for $(p_{k'}^{(i,j)}\pmb{x}_{k'+\frac{1}{2}}^{(j)}, p_{k'}^{(i,j)}w_{k'}^{(j)})$ in the receive buffer do
$\pmb{x}_{k+1}^{(i)} = \pmb{x}_{k+1}^{(i)} + p_{k'}^{(i,j)}\pmb{x}_{k'+\frac{1}{2}}^{(j)}$, $w_{k+1}^{(i)} = w_{k+1}^{(i)} + p_{k'}^{(i,j)}w_{k'}^{(j)}$
end
end
Update de-biased model parameters: $\pmb{z}_{k+1}^{(i)} = \pmb{x}_{k+1}^{(i)} / w_{k+1}^{(i)}$
end

Note that although the implementation of SGP uses Nesterov momentum, the theoretical analysis in Assran et al. (2019) only considers the vanilla case with no momentum.
Accordingly, the update rule can be written in matrix form as

$$
\boldsymbol {X} _ {k + 1} = \left(\boldsymbol {X} _ {k} - \gamma \nabla \boldsymbol {F} \left(\boldsymbol {Z} _ {k}; \boldsymbol {\xi} _ {k}\right)\right) \boldsymbol {P} _ {k} ^ {\top} \tag {9}
$$

where $\pmb{X}_k = [\pmb{x}_k^{(1)},\dots,\pmb{x}_k^{(m)}]\in \mathbb{R}^{d\times m}$ stacks all model parameters at different nodes and $\pmb{Z}_{k} = [\pmb{z}_{k}^{(1)},\ldots ,\pmb{z}_{k}^{(m)}]\in \mathbb{R}^{d\times m}$ denotes the de-biased parameters. Similarly, we define the stochastic gradient matrix as $\nabla F(\pmb {Z}_k) = [\nabla F_1(\pmb {z}_k^{(1)};\xi_k^{(1)}),\dots,\nabla F_m(\pmb {z}_k^{(m)};\xi_k^{(m)})]\in \mathbb{R}^{d\times m}$. Moreover, $\pmb{P}_{k}\in \mathbb{R}^{m\times m}$ is the mixing matrix, which conforms to the underlying communication topology: if node $j$ is one of the in-neighbors of node $i$, then $p^{(i,j)} > 0$; otherwise $p^{(i,j)} = 0$. In particular, the matrix $\pmb{P}_{k}$ is column-stochastic.

Multiplying both sides of the update rule (9) by the vector $\mathbf{1} / m$, we have

$$
\boldsymbol {x} _ {k + 1} = \boldsymbol {x} _ {k} - \frac {\gamma}{m} \sum_ {i = 1} ^ {m} \nabla F _ {i} \left(\boldsymbol {z} _ {k} ^ {(i)}; \xi_ {k} ^ {(i)}\right) \tag {10}
$$

where $\pmb{x}_k = \pmb{X}_k\mathbf{1} / m$ denotes the average model across all worker nodes. Recall that in SLOWMO, we rewrite the update of the base algorithm at the $k$th step of the $t$th outer iteration as

$$
\boldsymbol {x} _ {t, k + 1} = \boldsymbol {x} _ {t, k} - \gamma \boldsymbol {d} _ {t, k}. \tag {11}
$$

Comparing (10) and (11), we conclude that $\pmb{d}_{t,k} = \frac{1}{m}\sum_{i=1}^{m}\nabla F_i(\pmb{z}_{t,k}^{(i)};\xi_{t,k}^{(i)})$. As a consequence, we have $\mathbb{E}_{t,k}[\pmb{d}_{t,k}] = \frac{1}{m}\sum_{i=1}^{m}\nabla f_i(\pmb{z}_{t,k}^{(i)})$.
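The step from (9) to (10) uses only the fact that $P_k$ is column-stochastic, so $\pmb{P}_k^{\top}\mathbf{1} = \mathbf{1}$ and the node average $\pmb{X}_k\mathbf{1}/m$ is preserved by the mixing step. A quick NumPy check with a hypothetical random column-stochastic matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 3, 8

# Random column-stochastic mixing matrix: every column sums to 1.
P = rng.random((m, m))
P /= P.sum(axis=0, keepdims=True)

X = rng.random((d, m))           # model parameters, one column per node
x_bar = X @ np.ones(m) / m       # average model across nodes

X_next = X @ P.T                 # mixing step of (9), gradient term omitted
x_bar_next = X_next @ np.ones(m) / m

assert np.allclose(x_bar, x_bar_next)  # mixing preserves the average
```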
Since mini-batch gradients are independent, it follows that

$$
\begin{array}{l} \mathbb {E} \left\| \boldsymbol {d} _ {t, k} - \mathbb {E} _ {t, k} \left[ \boldsymbol {d} _ {t, k} \right] \right\| ^ {2} = \mathbb {E} \left\| \frac {1}{m} \sum_ {i = 1} ^ {m} \left[ \nabla F _ {i} \left(\boldsymbol {z} _ {t, k} ^ {(i)}; \xi_ {t, k} ^ {(i)}\right) - \nabla f _ {i} \left(\boldsymbol {z} _ {t, k} ^ {(i)}\right) \right] \right\| ^ {2} (12) \\ = \frac {1}{m ^ {2}} \sum_ {i = 1} ^ {m} \mathbb {E} \left\| \nabla F _ {i} \left(\boldsymbol {z} _ {t, k} ^ {(i)}; \xi_ {t, k} ^ {(i)}\right) - \nabla f _ {i} \left(\boldsymbol {z} _ {t, k} ^ {(i)}\right) \right\| ^ {2} (13) \\ \leq \frac {\sigma^ {2}}{m} = V. (14) \\ \end{array}
$$

Similarly, for OSGP one can repeat the above procedure, except that the definitions of $X_{k}$, $Z_{k}$ and $\nabla F(Z_{k})$ change to account for the delayed messages. In this case, the update rule (10) still holds, but $x_{k}$ is no longer the average model across all nodes, as it also involves delayed model parameters. We refer the interested reader to Assran et al. (2019) for further details.

# C.2 D-PSGD

In the case of decentralized parallel SGD (D-PSGD), proposed in Lian et al. (2017), the update rule is quite similar to that of SGP. However, the communication topology among worker nodes is an undirected graph, and hence the mixing matrix $P_{k}$ is doubly-stochastic. Each node exchanges model parameters with its neighbors. The update rule can be written as

$$
\boldsymbol {X} _ {k + 1} = \left(\boldsymbol {X} _ {k} - \gamma \nabla \boldsymbol {F} \left(\boldsymbol {X} _ {k}; \boldsymbol {\xi} _ {k}\right)\right) \boldsymbol {P} _ {k} ^ {\top}.
\tag {15}
$$

Again, $\pmb{X}_k = [\pmb{x}_k^{(1)},\dots,\pmb{x}_k^{(m)}]\in \mathbb{R}^{d\times m}$ stacks all model parameters at different nodes and $\nabla F(\pmb{X}_k) = [\nabla F_1(\pmb{x}_k^{(1)};\xi_k^{(1)}),\dots,\nabla F_m(\pmb{x}_k^{(m)};\xi_k^{(m)})]\in \mathbb{R}^{d\times m}$ denotes the stochastic gradient matrix. Multiplying both sides of (15) by the vector $\mathbf{1} / m$, we have

$$
\begin{array}{l} \boldsymbol {x} _ {k + 1} = \boldsymbol {x} _ {k} - \frac {\gamma}{m} \sum_ {i = 1} ^ {m} \nabla F _ {i} \left(\boldsymbol {x} _ {k} ^ {(i)}; \xi_ {k} ^ {(i)}\right) (16) \\ = \boldsymbol {x} _ {k} - \gamma \boldsymbol {d} _ {k}. (17) \\ \end{array}
$$

As a result, using the same technique as (12)-(14), we have $V = \sigma^2 / m$ for D-PSGD.

# C.3 LOCAL SGD

We further present the pseudo-code of Local SGD in Algorithm 4, and the pseudo-code of the double-averaging momentum scheme in Algorithm 5.

Algorithm 4: Local SGD with Nesterov Momentum
Input: learning rate $\gamma$; momentum factor $\beta_0$; communication period $\tau$; number of worker nodes $m$. Initial point $\pmb{x}_0^{(i)}$ and $\pmb{h}_0^{(i)} = 0$ for all nodes $i \in \{1,2,\dots,m\}$.
for $k \in \{0,1,\dots,K-1\}$ at worker $i$ in parallel do
Compute mini-batch gradient: $\nabla F_i(\pmb{x}_k^{(i)};\xi_k^{(i)})$
Update local momentum: $\pmb{h}_{k+1}^{(i)} = \beta_0\pmb{h}_k^{(i)} + \nabla F_i(\pmb{x}_k^{(i)};\xi_k^{(i)})$
Local step: $\pmb{x}_{k+\frac{1}{2}}^{(i)} = \pmb{x}_k^{(i)} - \gamma[\beta_0\pmb{h}_{k+1}^{(i)} + \nabla F_i(\pmb{x}_k^{(i)};\xi_k^{(i)})]$
if $k \mod \tau = 0$ then
ALLREDUCE model parameters: $\pmb{x}_{k+1}^{(i)} = \frac{1}{m}\sum_{j=1}^{m}\pmb{x}_{k+\frac{1}{2}}^{(j)}$
else
$\pmb{x}_{k+1}^{(i)} = \pmb{x}_{k+\frac{1}{2}}^{(i)}$
end
end

Algorithm 5: Local SGD with Double-Averaging Nesterov Momentum (Yu et al., 2019a)
Input: learning rate $\gamma$; momentum factor $\beta_0$; communication period $\tau$; number of worker nodes $m$.
Initial point $\pmb{x}_0^{(i)}$ and $\pmb{h}_0^{(i)} = 0$ for all nodes $i \in \{1,2,\dots,m\}$.
for $k \in \{0,1,\dots,K-1\}$ at worker $i$ in parallel do
Compute mini-batch gradient: $\nabla F_i(\pmb{x}_k^{(i)};\xi_k^{(i)})$
Update local momentum: $\pmb{h}_{k+\frac{1}{2}}^{(i)} = \beta_0\pmb{h}_k^{(i)} + \nabla F_i(\pmb{x}_k^{(i)};\xi_k^{(i)})$
Local step: $\pmb{x}_{k+\frac{1}{2}}^{(i)} = \pmb{x}_k^{(i)} - \gamma[\beta_0\pmb{h}_{k+\frac{1}{2}}^{(i)} + \nabla F_i(\pmb{x}_k^{(i)};\xi_k^{(i)})]$
if $k \mod \tau = 0$ then
ALLREDUCE model parameters: $\pmb{x}_{k+1}^{(i)} = \frac{1}{m}\sum_{j=1}^{m}\pmb{x}_{k+\frac{1}{2}}^{(j)}$
ALLREDUCE momentum buffers: $\pmb{h}_{k+1}^{(i)} = \frac{1}{m}\sum_{j=1}^{m}\pmb{h}_{k+\frac{1}{2}}^{(j)}$
else
$\pmb{x}_{k+1}^{(i)} = \pmb{x}_{k+\frac{1}{2}}^{(i)}$, $\pmb{h}_{k+1}^{(i)} = \pmb{h}_{k+\frac{1}{2}}^{(i)}$
end
end

# D PROOFS

# D.1 EQUIVALENT UPDATES

To begin, recall that $\pmb{d}_{t,k} = \frac{1}{m}\sum_{i = 1}^{m}\pmb{d}_{t,k}^{(i)}$, and that the local updates are

$$
\boldsymbol {x} _ {t, k + 1} ^ {(i)} = \boldsymbol {x} _ {t, k} ^ {(i)} - \gamma \boldsymbol {d} _ {t, k} ^ {(i)} \tag {18}
$$

for $k = 0, \ldots, \tau - 1$, followed by an averaging step to obtain $\boldsymbol{x}_{t,\tau} = \frac{1}{m}\sum_{i=1}^{m}\boldsymbol{x}_{t,\tau}^{(i)}$. Therefore, we can write the update rule of the base optimizer as

$$
\boldsymbol {x} _ {t, \tau} = \boldsymbol {x} _ {t, 0} - \gamma \sum_ {k = 0} ^ {\tau - 1} \boldsymbol {d} _ {t, k}.
\tag {19} +$$ + +Combining this with (2) and (3), we have + +$$ +\begin{array}{l} \boldsymbol {x} _ {t + 1, 0} = \boldsymbol {x} _ {t, 0} - \alpha \gamma \sum_ {k = 0} ^ {\tau - 1} \boldsymbol {d} _ {t, k} - \alpha \gamma \beta \boldsymbol {u} _ {t} (20) \\ = \boldsymbol {x} _ {t, 0} - \alpha \gamma \sum_ {k = 0} ^ {\tau - 1} \boldsymbol {d} _ {t, k} + \beta \left(\boldsymbol {x} _ {t, 0} - \boldsymbol {x} _ {t - 1, 0}\right) (21) \\ \end{array} +$$ + +Let $\pmb{y}_{t,0} = \pmb{x}_{t,0} + \frac{\beta}{1 - \beta} (\pmb{x}_{t,0} - \pmb{x}_{t - 1,0}),\forall t.$ Then by rearranging terms we get + +$$ +\boldsymbol {y} _ {t + 1, 0} = \boldsymbol {y} _ {t, 0} - \frac {\alpha \gamma}{1 - \beta} \sum_ {k = 0} ^ {\tau - 1} \boldsymbol {d} _ {t, k}. \tag {22} +$$ + +Now, let us further extend the auxiliary sequence to all values of $k \neq 0$ as follows: + +$$ +\boldsymbol {y} _ {t, k + 1} = \boldsymbol {y} _ {t, k} - \frac {\alpha \gamma}{1 - \beta} \boldsymbol {d} _ {t, k}. \tag {23} +$$ + +It is easy to show that $\pmb{y}_{t,\tau} = \pmb{y}_{t + 1,0}$ . In the sequel, we will analyze the convergence of sequence $\{\pmb{y}_{t,k}\}$ instead of $\{\pmb{x}_{t,k}\}$ . + +# D.2 PRELIMINARIES + +In the table below, we list all notations used in this paper. + +Table D.1: List of notations. + +
| Description | Symbol |
| --- | --- |
| Global learning rate | $\alpha$ |
| Global momentum factor | $\beta$ |
| Learning rate (base optimizer) | $\gamma$ |
| Outer iteration length | $\tau$ |
| Total number of outer iterations | $T$ |
| Total number of steps | $K$ |
| Lipschitz constant | $L$ |
| Number of worker nodes | $m$ |
Throughout the theoretical analysis, we will repeatedly use the following facts:

- Fact 1: $\langle \mathbf{a},\mathbf{b}\rangle = \frac{1}{2}\| \mathbf{a}\| ^2 +\frac{1}{2}\| \mathbf{b}\| ^2 -\frac{1}{2}\| \mathbf{a} - \mathbf{b}\| ^2$;
- Fact 2: According to Young's inequality, for any scalar $a > 0$, we have

$$
\pm \langle \mathbf {a}, \mathbf {b} \rangle \leq \frac {1}{2 a} \| \mathbf {a} \| ^ {2} + \frac {a}{2} \| \mathbf {b} \| ^ {2}; \tag {24}
$$

- Fact 3: $\| \mathbf{a} + \mathbf{b}\| ^2\leq 2\| \mathbf{a}\| ^2 +2\| \mathbf{b}\| ^2$;
- Fact 4: Suppose $\{a_i\}_{i=1}^k$ is a set of non-negative scalars and $s = \sum_{i=1}^k a_i$. Then according to Jensen's inequality, we have

$$
\left\| \sum_ {i = 1} ^ {k} a _ {i} \mathbf {b} _ {i} \right\| ^ {2} = s ^ {2} \cdot \left\| \sum_ {i = 1} ^ {k} \frac {a _ {i}}{s} \mathbf {b} _ {i} \right\| ^ {2} \leq s ^ {2} \cdot \sum_ {i = 1} ^ {k} \frac {a _ {i}}{s} \| \mathbf {b} _ {i} \| ^ {2} = s \cdot \sum_ {i = 1} ^ {k} a _ {i} \| \mathbf {b} _ {i} \| ^ {2}. \tag {25}
$$

# D.3 GENERAL TREATMENT

Since each local objective $f_{i}$ is $L$-smooth, the function $f = \frac{1}{m}\sum_{i=1}^{m}f_{i}$ is also $L$-smooth. It follows that

$$
\mathbb {E} _ {t, k} \left[ f \left(\boldsymbol {y} _ {t, k + 1}\right) \right] - f \left(\boldsymbol {y} _ {t, k}\right) \leq - \frac {\alpha \gamma}{1 - \beta} \left\langle \nabla f \left(\boldsymbol {y} _ {t, k}\right), \mathbb {E} _ {t, k} \left[ \boldsymbol {d} _ {t, k} \right] \right\rangle + \frac {\alpha^ {2} \gamma^ {2} L}{2 (1 - \beta) ^ {2}} \mathbb {E} _ {t, k} \left[ \| \boldsymbol {d} _ {t, k} \| ^ {2} \right] \tag {26}
$$

where $\mathbb{E}_{t,k}$ denotes the conditional expectation over the randomness in the $(t,k)$-th iteration, conditioned on all past random variables.
For the first term on the right-hand side:

$$
\begin{array}{l} - \left\langle \nabla f (\boldsymbol {y} _ {t, k}), \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \right\rangle = - \left\langle \nabla f (\boldsymbol {y} _ {t, k}) - \nabla f (\boldsymbol {x} _ {t, k}), \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \right\rangle - \left\langle \nabla f (\boldsymbol {x} _ {t, k}), \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \right\rangle \\ \leq \frac {1}{2 a} \| \nabla f (\boldsymbol {y} _ {t, k}) - \nabla f (\boldsymbol {x} _ {t, k}) \| ^ {2} + \frac {a}{2} \| \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \| ^ {2} - \left\langle \nabla f \left(\boldsymbol {x} _ {t, k}\right), \mathbb {E} _ {t, k} \left[ \boldsymbol {d} _ {t, k} \right] \right\rangle (27) \\ = \frac {1}{2 a} \| \nabla f (\boldsymbol {y} _ {t, k}) - \nabla f (\boldsymbol {x} _ {t, k}) \| ^ {2} - \frac {1 - a}{2} \| \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \| ^ {2} - \frac {1}{2} \| \nabla f (\boldsymbol {x} _ {t, k}) \| ^ {2} + \frac {1}{2} \| \nabla f (\boldsymbol {x} _ {t, k}) - \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \| ^ {2} (28) \\ \leq \frac {L ^ {2}}{2 a} \left\| \boldsymbol {y} _ {t, k} - \boldsymbol {x} _ {t, k} \right\| ^ {2} - \frac {1 - a}{2} \left\| \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \right\| ^ {2} - \frac {1}{2} \| \nabla f (\boldsymbol {x} _ {t, k}) \| ^ {2} + \frac {1}{2} \| \nabla f (\boldsymbol {x} _ {t, k}) - \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \| ^ {2} (29) \\ \end{array}
$$

where (27) comes from Fact 2 (inequality (24)) with a constant $a > 0$; for simplicity, we directly set $a = 0.5$. Eqn. (28) uses Fact 1, $\langle \mathbf{a}, \mathbf{b} \rangle = \frac{1}{2} \| \mathbf{a} \|^2 + \frac{1}{2} \| \mathbf{b} \|^2 - \frac{1}{2} \| \mathbf{a} - \mathbf{b} \|^2$, and (29) uses the $L$-smoothness of $f$.
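The two elementary facts invoked in (27) and (28) can be spot-checked numerically; the random vectors and the constants below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
u, v = rng.standard_normal(5), rng.standard_normal(5)

# Fact 1 (polarization): <u, v> = (||u||^2 + ||v||^2 - ||u - v||^2) / 2
inner = u @ v
assert np.isclose(inner, 0.5 * (u @ u + v @ v - (u - v) @ (u - v)))

# Fact 2 (Young): +/- <u, v> <= ||u||^2 / (2a) + (a / 2) * ||v||^2 for any a > 0
for a in (0.25, 0.5, 1.0, 4.0):
    bound = (u @ u) / (2 * a) + (a / 2) * (v @ v)
    assert abs(inner) <= bound
```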
Furthermore, according to the definition of $y_{t,k}$ , it can be shown that + +$$ +\begin{array}{l} \left\| \boldsymbol {y} _ {t, k} - \boldsymbol {x} _ {t, k} \right\| ^ {2} = \left\| \left(1 - \frac {\alpha}{1 - \beta}\right) \sum_ {j = 0} ^ {k - 1} \gamma \boldsymbol {d} _ {t, j} + \boldsymbol {y} _ {t, 0} - \boldsymbol {x} _ {t, 0} \right\| ^ {2} (30) \\ \leq 2 \gamma^ {2} \left(1 - \frac {\alpha}{1 - \beta}\right) ^ {2} \left\| \sum_ {j = 0} ^ {k - 1} \boldsymbol {d} _ {t, j} \right\| ^ {2} + 2 \| \boldsymbol {y} _ {t, 0} - \boldsymbol {x} _ {t, 0} \| ^ {2} (31) \\ = 2 \gamma^ {2} \left(1 - \frac {\alpha}{1 - \beta}\right) ^ {2} \left\| \sum_ {j = 0} ^ {k - 1} \boldsymbol {d} _ {t, j} \right\| ^ {2} + \frac {2 \beta^ {2}}{(1 - \beta) ^ {2}} \| \boldsymbol {x} _ {t, 0} - \boldsymbol {x} _ {t - 1, 0} \| ^ {2}. (32) \\ \end{array} +$$ + +Substituting (32) into (29), it follows that + +$$ +\begin{array}{l} - \left\langle \nabla f (\boldsymbol {y} _ {t, k}), \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \right\rangle \leq - \frac {1}{2} \| \nabla f (\boldsymbol {x} _ {t, k}) \| ^ {2} - \frac {1}{4} \| \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \| ^ {2} + \frac {1}{2} \| \nabla f (\boldsymbol {x} _ {t, k}) - \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \| ^ {2} \\ + 2 \gamma^ {2} L ^ {2} \left(1 - \frac {\alpha}{1 - \beta}\right) ^ {2} \left\| \sum_ {j = 0} ^ {k - 1} \boldsymbol {d} _ {t, j} \right\| ^ {2} + \frac {2 L ^ {2} \beta^ {2}}{(1 - \beta) ^ {2}} \| \boldsymbol {x} _ {t, 0} - \boldsymbol {x} _ {t - 1, 0} \| ^ {2}. \tag {33} \\ \end{array} +$$ + +Moreover, for the second term in (26), we have + +$$ +\mathbb {E} _ {t, k} \left[ \| \boldsymbol {d} _ {t, k} \| ^ {2} \right] = \| \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \| ^ {2} + \mathbb {E} _ {t, k} \left[ \| \boldsymbol {d} _ {t, k} - \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \| ^ {2} \right]. 
\tag {34} +$$ + +Then, plugging (33) and (34) into (26), + +$$ +\begin{array}{l} \mathbb {E} _ {t, k} \left[ f \left(\boldsymbol {y} _ {t, k + 1}\right) \right] - f \left(\boldsymbol {y} _ {t, k}\right) \leq - \frac {\gamma_ {\text {e f f}}}{2} \| \nabla f (\boldsymbol {x} _ {t, k}) \| ^ {2} - \frac {\gamma_ {\text {e f f}}}{2} \left(\frac {1}{2} - \gamma_ {\text {e f f}} L\right) \| \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \| ^ {2} \\ + \frac {\gamma_ {\mathrm {e f f}} ^ {2} L}{2} \mathbb {E} _ {t, k} \left[ \left\| \boldsymbol {d} _ {t, k} - \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \right\| ^ {2} \right] + \frac {\gamma_ {\mathrm {e f f}}}{2} \left\| \nabla f (\boldsymbol {x} _ {t, k}) - \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \right\| ^ {2} \\ + 2 \gamma_ {\text {e f f}} \gamma^ {2} L ^ {2} \left(1 - \frac {\alpha}{1 - \beta}\right) ^ {2} \left\| \sum_ {j = 0} ^ {k - 1} \boldsymbol {d} _ {t, j} \right\| ^ {2} + \frac {2 \gamma_ {\text {e f f}} \beta^ {2} L ^ {2}}{(1 - \beta) ^ {2}} \| \boldsymbol {x} _ {t, 0} - \boldsymbol {x} _ {t - 1, 0} \| ^ {2} \tag {35} \\ \end{array} +$$ + +where $\gamma_{\mathrm{eff}} = \alpha \gamma /(1 - \beta)$ . 
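Identity (34) is the standard decomposition of a second moment into squared mean plus variance, $\mathbb{E}\|d\|^2 = \|\mathbb{E}d\|^2 + \mathbb{E}\|d - \mathbb{E}d\|^2$. It also holds exactly for empirical averages, which gives a quick numerical check; the sampled data below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
D = rng.normal(loc=3.0, scale=2.0, size=(1000, 4))  # samples of a random vector d

d_bar = D.mean(axis=0)  # empirical analogue of E[d]
lhs = (np.linalg.norm(D, axis=1) ** 2).mean()  # empirical E||d||^2
rhs = np.linalg.norm(d_bar) ** 2 + (np.linalg.norm(D - d_bar, axis=1) ** 2).mean()
assert np.isclose(lhs, rhs)  # E||d||^2 = ||E d||^2 + E||d - E d||^2
```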
Taking the total expectation, + +$$ +\begin{array}{l} \mathbb {E} \left[ f \left(\boldsymbol {y} _ {t, k + 1}\right) - f \left(\boldsymbol {y} _ {t, k}\right) \right] \leq - \frac {\gamma_ {\text {e f f}}}{2} \mathbb {E} \| \nabla f \left(\boldsymbol {x} _ {t, k}\right) \| ^ {2} - \frac {\gamma_ {\text {e f f}}}{2} \left(\frac {1}{2} - \gamma_ {\text {e f f}} L\right) \mathbb {E} \| \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \| ^ {2} \\ + \underbrace {\frac {\gamma_ {\text {e f f}} ^ {2} L}{2} \mathbb {E} \left[ \| \boldsymbol {d} _ {t , k} - \mathbb {E} _ {t , k} [ \boldsymbol {d} _ {t , k} ] \| ^ {2} \right] + \frac {\gamma_ {\text {e f f}}}{2} \mathbb {E} \| \nabla f (\boldsymbol {x} _ {t , k}) - \mathbb {E} _ {t , k} [ \boldsymbol {d} _ {t , k} ] \| ^ {2}} _ {N _ {1} (t, k)} \\ + \underbrace {2 \gamma_ {\text {e f f}} \gamma^ {2} L ^ {2} \left(1 - \frac {\alpha}{1 - \beta}\right) ^ {2} \mathbb {E} \left\| \sum_ {j = 0} ^ {k - 1} \boldsymbol {d} _ {t , j} \right\| ^ {2}} _ {N _ {2} (t, k)} + \underbrace {\frac {2 \gamma_ {\text {e f f}} \beta^ {2} L ^ {2}}{(1 - \beta) ^ {2}} \mathbb {E} \left\| \boldsymbol {x} _ {t , 0} - \boldsymbol {x} _ {t - 1 , 0} \right\| ^ {2}} _ {N _ {3} (t)}. 
\tag {36} \\ \end{array}
$$

Summing from $k = 0$ to $k = \tau -1$, we have

$$
\begin{array}{l} \mathbb {E} [ f (\boldsymbol {y} _ {t + 1, 0}) - f (\boldsymbol {y} _ {t, 0}) ] = \mathbb {E} [ f (\boldsymbol {y} _ {t, \tau}) - f (\boldsymbol {y} _ {t, 0}) ] (37) \\ \leq - \frac {\gamma_ {\mathrm {e f f}}}{2} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \nabla f (\boldsymbol {x} _ {t, k}) \| ^ {2} - \frac {\gamma_ {\mathrm {e f f}}}{2} \left(\frac {1}{2} - \gamma_ {\mathrm {e f f}} L\right) \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \| ^ {2} \\ + \frac {\gamma_ {\mathrm {e f f}} ^ {2} L \tau V}{2} + \frac {\gamma_ {\mathrm {e f f}}}{2} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \nabla f (\boldsymbol {x} _ {t, k}) - \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \| ^ {2} + \sum_ {k = 0} ^ {\tau - 1} N _ {2} (t, k) + \sum_ {k = 0} ^ {\tau - 1} N _ {3} (t), (38) \\ \end{array}
$$

where $N_{2}(t,k)$ and $N_{3}(t)$ are as defined in (36), and we used Assumption 3, $\mathbb{E}[\|\boldsymbol{d}_{t,k} - \mathbb{E}_{t,k}[\boldsymbol{d}_{t,k}]\|^2] \leq V$, to bound the first part of $N_{1}(t,k)$, which yields the term $\gamma_{\mathrm{eff}}^2 L\tau V/2$.
Summing from $t = 0$ to $t = T - 1$ and dividing both sides by the total number of iterations $K = \tau T$,

$$
\begin{array}{l} \frac {\mathbb {E} [ f (\boldsymbol {y} _ {T , 0}) - f (\boldsymbol {y} _ {0 , 0}) ]}{K} \leq - \frac {\gamma_ {\text {e f f}}}{2 K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \nabla f (\boldsymbol {x} _ {t, k}) \| ^ {2} - \frac {\gamma_ {\text {e f f}}}{2 K} \left(\frac {1}{2} - \gamma_ {\text {e f f}} L\right) \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \| ^ {2} \\ + \frac {\gamma_ {\mathrm {e f f}} ^ {2} L V}{2} + \frac {\gamma_ {\mathrm {e f f}}}{2 K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \left\| \nabla f \left(\boldsymbol {x} _ {t, k}\right) - \mathbb {E} _ {t, k} \left[ \boldsymbol {d} _ {t, k} \right] \right\| ^ {2} \\ + \frac {1}{K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} N _ {2} (t, k) + \frac {1}{K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} N _ {3} (t). \tag {39} \\ \end{array}
$$

Now, we further expand the last two terms in (39).

# D.3.1 BOUNDING $N_{2}(t,k)$

Using Fact 3, $\| a + b\|^2 \leq 2\| a\|^2 + 2\| b\|^2$, we have

$$
\begin{array}{l} \left\| \sum_ {j = 0} ^ {k - 1} \boldsymbol {d} _ {t, j} \right\| ^ {2} \leq 2 \left\| \sum_ {j = 0} ^ {k - 1} \left(\boldsymbol {d} _ {t, j} - \mathbb {E} _ {t, j} [ \boldsymbol {d} _ {t, j} ]\right) \right\| ^ {2} + 2 \left\| \sum_ {j = 0} ^ {k - 1} \mathbb {E} _ {t, j} [ \boldsymbol {d} _ {t, j} ] \right\| ^ {2} (40) \\ \leq 2 \left\| \sum_ {j = 0} ^ {k - 1} \left(\boldsymbol {d} _ {t, j} - \mathbb {E} _ {t, j} \left[ \boldsymbol {d} _ {t, j} \right]\right) \right\| ^ {2} + 2 k \sum_ {j = 0} ^ {k - 1} \left\| \mathbb {E} _ {t, j} \left[ \boldsymbol {d} _ {t, j} \right] \right\| ^ {2} (41) \\ \end{array}
$$

where the last inequality comes from Fact 4.
Then, taking the total expectation and summing over the $t$ -th outer iteration, + +$$ +\begin{array}{l} \mathbb {E} \left[ \sum_ {k = 0} ^ {\tau - 1} \left\| \sum_ {j = 0} ^ {k - 1} \boldsymbol {d} _ {t, j} \right\| ^ {2} \right] \leq 2 \mathbb {E} \left[ \sum_ {k = 0} ^ {\tau - 1} \left\| \sum_ {j = 0} ^ {k - 1} \left(\boldsymbol {d} _ {t, j} - \mathbb {E} _ {t, j} [ \boldsymbol {d} _ {t, j} ]\right) \right\| ^ {2} \right] + 2 \sum_ {k = 0} ^ {\tau - 1} k \sum_ {j = 0} ^ {k - 1} \mathbb {E} \| \mathbb {E} _ {t, j} [ \boldsymbol {d} _ {t, j} ] \| ^ {2} (42) \\ = 2 \sum_ {k = 0} ^ {\tau - 1} \sum_ {j = 0} ^ {k - 1} \mathbb {E} \left[ \left\| \boldsymbol {d} _ {t, j} - \mathbb {E} _ {t, j} \left[ \boldsymbol {d} _ {t, j} \right] \right\| ^ {2} \right] + 2 \sum_ {k = 0} ^ {\tau - 1} k \sum_ {j = 0} ^ {k - 1} \mathbb {E} \left\| \mathbb {E} _ {t, j} \left[ \boldsymbol {d} _ {t, j} \right] \right\| ^ {2} (43) \\ \leq 2 V \sum_ {k = 0} ^ {\tau - 1} k + 2 \sum_ {k = 0} ^ {\tau - 1} k \sum_ {j = 0} ^ {\tau - 1} \mathbb {E} \left\| \mathbb {E} _ {t, j} \left[ \boldsymbol {d} _ {t, j} \right] \right\| ^ {2} (44) \\ = \tau (\tau - 1) V + \tau (\tau - 1) \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \left\| \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \right\| ^ {2} (45) \\ \end{array} +$$ + +where (43) uses the following fact: + +$$ +\begin{array}{l} \mathbb {E} \left[ \left\| \sum_ {j = 0} ^ {k - 1} \left(\boldsymbol {d} _ {t, j} - \mathbb {E} _ {t, j} [ \boldsymbol {d} _ {t, j} ]\right) \right\| ^ {2} \right] = \sum_ {j = 0} ^ {k - 1} \mathbb {E} \left[ \left\| \boldsymbol {d} _ {t, j} - \mathbb {E} _ {t, j} [ \boldsymbol {d} _ {t, j} ] \right\| ^ {2} \right] \\ + 2 \sum_ {j = 0} ^ {k - 1} \sum_ {l = j + 1} ^ {k - 1} \mathbb {E} \left\langle \boldsymbol {d} _ {t, j} - \mathbb {E} _ {t, j} \left[ \boldsymbol {d} _ {t, j} \right], \boldsymbol {d} _ {t, l} - \mathbb {E} _ {t, l} \left[ \boldsymbol {d} _ {t, l} \right] \right\rangle (46) \\ = \sum_ {j = 0} ^ {k - 1} \mathbb {E} 
\left[ \left\| \boldsymbol {d} _ {t, j} - \mathbb {E} _ {t, j} [ \boldsymbol {d} _ {t, j} ] \right\| ^ {2} \right] \\ + 2 \sum_ {j = 0} ^ {k - 1} \sum_ {l = j + 1} ^ {k - 1} \mathbb {E} \left\langle \boldsymbol {d} _ {t, j} - \mathbb {E} _ {t, j} \left[ \boldsymbol {d} _ {t, j} \right], \mathbb {E} _ {t, l} \left[ \boldsymbol {d} _ {t, l} - \mathbb {E} _ {t, l} \left[ \boldsymbol {d} _ {t, l} \right] \right] \right\rangle (47) \\ = \sum_ {j = 0} ^ {k - 1} \mathbb {E} \left[ \left\| \boldsymbol {d} _ {t, j} - \mathbb {E} _ {t, j} \left[ \boldsymbol {d} _ {t, j} \right] \right\| ^ {2} \right]. (48) \\ \end{array} +$$ + +As a result, we end up with the following + +$$ +\sum_ {k = 0} ^ {\tau - 1} N _ {2} (t, k) \leq 2 \gamma_ {\text {e f f}} ^ {3} L ^ {2} \tau (\tau - 1) \frac {(1 - \beta - \alpha) ^ {2}}{\alpha^ {2}} \left[ V + \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \| ^ {2} \right], \tag {49} +$$ + +$$ +\sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} N _ {2} (t, k) \leq 2 \gamma_ {\text {e f f}} ^ {3} L ^ {2} \tau (\tau - 1) \frac {(1 - \beta - \alpha) ^ {2}}{\alpha^ {2}} \left[ T V + \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \| ^ {2} \right] \tag {50} +$$ + +$$ +\frac {1}{K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} N _ {2} (t, k) \leq 2 \gamma_ {\mathrm {e f f}} ^ {3} L ^ {2} \tau (\tau - 1) \frac {(1 - \beta - \alpha) ^ {2}}{\alpha^ {2}} \left[ \frac {V}{\tau} + \frac {1}{K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \| ^ {2} \right] \tag {51} +$$ + +where $K = \tau T$ denotes the total steps. 
+ +# D.3.2 BOUNDING $N_{3}(t)$ + +From the update rule (2), (3) and (18), we have + +$$ +\begin{array}{l} \left\| \boldsymbol {x} _ {t, 0} - \boldsymbol {x} _ {t - 1, 0} \right\| ^ {2} = \alpha^ {2} \gamma^ {2} \left\| \sum_ {k = 0} ^ {\tau - 1} \boldsymbol {d} _ {t - 1, k} + \beta \boldsymbol {u} _ {t - 1} \right\| ^ {2} (52) \\ = \alpha^ {2} \gamma^ {2} \left\| \sum_ {k = 0} ^ {\tau - 1} \boldsymbol {d} _ {t - 1, k} + \beta \sum_ {k = 0} ^ {\tau - 1} \boldsymbol {d} _ {t - 2, k} + \beta^ {2} \boldsymbol {u} _ {t - 2} \right\| ^ {2} (53) \\ = \alpha^ {2} \gamma^ {2} \left\| \sum_ {s = 0} ^ {t - 1} \beta^ {t - 1 - s} \left(\sum_ {k = 0} ^ {\tau - 1} \boldsymbol {d} _ {s, k}\right) \right\| ^ {2} (54) \\ \leq 2 \alpha^ {2} \gamma^ {2} \underbrace {\left\| \sum_ {s = 0} ^ {t - 1} \beta^ {t - 1 - s} \left(\sum_ {k = 0} ^ {\tau - 1} \left(\boldsymbol {d} _ {s , k} - \mathbb {E} _ {s , k} [ \boldsymbol {d} _ {s , k} ]\right)\right) \right\| ^ {2}} _ {T _ {1}} \\ + 2 \alpha^ {2} \gamma^ {2} \underbrace {\left\| \sum_ {s = 0} ^ {t - 1} \beta^ {t - 1 - s} \left(\sum_ {k = 0} ^ {\tau - 1} \mathbb {E} _ {s , k} [ \boldsymbol {d} _ {s , k} ]\right) \right\| ^ {2}} _ {T _ {2}} (55) \\ \end{array} +$$ + +For the first term $T_{1}$ , taking the total expectation, we get + +$$ +\begin{array}{l} \mathbb {E} \left[ T _ {1} \right] = \mathbb {E} \left[ \left\| \sum_ {s = 0} ^ {t - 1} \beta^ {t - 1 - s} \left(\sum_ {k = 0} ^ {\tau - 1} \left(\boldsymbol {d} _ {s, k} - \mathbb {E} _ {s, k} [ \boldsymbol {d} _ {s, k} ]\right)\right) \right\| ^ {2} \right] (56) \\ = \sum_ {s = 0} ^ {t - 1} \beta^ {2 (t - 1 - s)} \mathbb {E} \left[ \left\| \sum_ {k = 0} ^ {\tau - 1} \left(\boldsymbol {d} _ {s, k} - \mathbb {E} _ {s, k} [ \boldsymbol {d} _ {s, k} ]\right) \right\| ^ {2} \right] (57) \\ = \sum_ {s = 0} ^ {t - 1} \beta^ {2 (t - 1 - s)} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \left[ \left\| \boldsymbol {d} _ {s, k} - \mathbb {E} _ {s, k} [ \boldsymbol {d} _ {s, k} ]\right) \right\| ^ {2} 
\Bigg ] (58) \\ \leq \tau V \sum_ {s = 0} ^ {t - 1} \beta^ {2 (t - 1 - s)} = \tau V \left(1 + \beta^ {2} + \beta^ {4} + \dots + \beta^ {2 (t - 1)}\right) \leq \frac {\tau V}{1 - \beta^ {2}} (59) \\ \end{array} +$$ + +where (57) and (58) are derived using the same routine as (46) to (48). Similarly, for the second term $T_{2}$ in (55), according to Fact 4, + +$$ +\begin{array}{l} \mathbb {E} \left[ T _ {2} \right] \leq \left(\sum_ {s = 0} ^ {t - 1} \beta^ {t - 1 - s}\right) \sum_ {s = 0} ^ {t - 1} \beta^ {t - 1 - s} \mathbb {E} \left[ \left\| \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} _ {s, k} \boldsymbol {d} _ {s, k} \right\| ^ {2} \right] (60) \\ \leq \tau \left(\sum_ {s = 0} ^ {t - 1} \beta^ {t - 1 - s}\right) \sum_ {s = 0} ^ {t - 1} \beta^ {t - 1 - s} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \left[ \| \mathbb {E} _ {s, k} \boldsymbol {d} _ {s, k} \| ^ {2} \right] (61) \\ \leq \frac {\tau}{1 - \beta} \sum_ {s = 0} ^ {t - 1} \beta^ {t - 1 - s} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \left[ \| \mathbb {E} _ {s, k} \boldsymbol {d} _ {s, k} \| ^ {2} \right]. 
(62) \\ \end{array} +$$ + +Substituting (59) and (62) back into (55) and summing over the $t$ -th outer iteration, we have + +$$ +\sum_ {k = 0} ^ {\tau - 1} N _ {3} (t) \leq \frac {4 \gamma_ {\mathrm {eff}} ^ {3} L ^ {2} \beta^ {2} \tau^ {2} V}{(1 - \beta^ {2})} + \frac {4 \gamma_ {\mathrm {eff}} ^ {3} L ^ {2} \beta^ {2} \tau^ {2}}{(1 - \beta)} \sum_ {s = 0} ^ {t - 1} \beta^ {t - 1 - s} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \left[ \| \mathbb {E} _ {s, k} \boldsymbol {d} _ {s, k} \| ^ {2} \right], \tag {63} +$$ + +$$ +\sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} N _ {3} (t) \leq \frac {4 K \gamma_ {\mathrm {eff}} ^ {3} L ^ {2} \beta^ {2} \tau V}{(1 - \beta^ {2})} + \frac {4 \gamma_ {\mathrm {eff}} ^ {3} L ^ {2} \beta^ {2} \tau^ {2}}{(1 - \beta)} \sum_ {t = 0} ^ {T - 1} \sum_ {s = 0} ^ {t - 1} \beta^ {t - 1 - s} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \left[ \| \mathbb {E} _ {s, k} \boldsymbol {d} _ {s, k} \| ^ {2} \right] \tag {64} +$$ + +$$ += \frac {4 K \gamma_ {\mathrm {eff}} ^ {3} L ^ {2} \beta^ {2} \tau V}{(1 - \beta^ {2})} + \frac {4 \gamma_ {\mathrm {eff}} ^ {3} L ^ {2} \beta^ {2} \tau^ {2}}{(1 - \beta)} \sum_ {t = 0} ^ {T - 2} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \left[ \| \mathbb {E} _ {t, k} \boldsymbol {d} _ {t, k} \| ^ {2} \right] \sum_ {s = t + 1} ^ {T - 1} \beta^ {T - 1 - s} \tag {65} +$$ + +$$ +\leq \frac {4 K \gamma_ {\mathrm {eff}} ^ {3} L ^ {2} \beta^ {2} \tau V}{(1 - \beta^ {2})} + \frac {4 \gamma_ {\mathrm {eff}} ^ {3} L ^ {2} \beta^ {2} \tau^ {2}}{(1 - \beta) ^ {2}} \sum_ {t = 0} ^ {T - 2} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \left[ \| \mathbb {E} _ {t, k} \boldsymbol {d} _ {t, k} \| ^ {2} \right] \tag {66} +$$ + +$$ +\leq \frac {4 K \gamma_ {\mathrm {eff}} ^ {3} L ^ {2} \beta^ {2} \tau V}{(1 - \beta^ {2})} + \frac {4 \gamma_ {\mathrm {eff}} ^ {3} L ^ {2} \beta^ {2} \tau^ {2}}{(1 - \beta) ^ {2}} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \left[ \| \mathbb {E} _ {t, k} \boldsymbol {d} _ {t, k} \| ^ {2} \right], \tag {67} +$$ + +$$ +\frac {1}{K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} N _ {3} (t) \leq \frac {4 \gamma_ {\mathrm {eff}} ^ {3} L ^ {2} \beta^ {2} \tau V}{(1 - \beta^ {2})} + \frac {4 \gamma_ {\mathrm {eff}} ^ {3} L ^ {2} \beta^ {2} \tau^ {2}}{(1 - \beta) ^ {2}} \frac {1}{K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \left[ \| \mathbb {E} _ {t, k} \boldsymbol {d} _ {t, k} \| ^ {2} \right]. \tag {68} +$$ + +# D.3.3 FINAL RESULTS + +Plugging (51) and (68) back into (39), one can obtain + +$$ +\begin{array}{l} \frac {\mathbb {E} [ f (\boldsymbol {y} _ {T , 0}) - f (\boldsymbol {y} _ {0 , 0}) ]}{K} \leq - \frac {\gamma_ {\mathrm {eff}}}{2 K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \nabla f (\boldsymbol {x} _ {t, k}) \| ^ {2} - \frac {\gamma_ {\mathrm {eff}}}{2 K} C _ {1} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \| ^ {2} \\ + \frac {\gamma_ {\mathrm {eff}} ^ {2} L V}{2} \left[ 1 + 4 \gamma_ {\mathrm {eff}} L (\tau - 1) \left(\frac {1 - \beta}{\alpha} - 1\right) ^ {2} + \frac {8 \gamma_ {\mathrm {eff}} L \tau \beta^ {2}}{(1 - \beta^ {2})} \right] \\ + \frac {\gamma_ {\mathrm {eff}}}{2 K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \nabla f (\boldsymbol {x} _ {t, k}) - \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \| ^ {2} \tag {69} \\ \end{array} +$$ + +where $C_1 = 1/2 - \gamma_{\mathrm{eff}}L - 4\gamma_{\mathrm{eff}}^2 L^2\tau(\tau-1)(1-\beta-\alpha)^2/\alpha^2 - 8\gamma_{\mathrm{eff}}^2 L^2\tau^2\beta^2/(1-\beta)^2$ .
When the constants satisfy + +$$ +\frac {1}{2} - \gamma_ {\mathrm {eff}} L - 4 \gamma_ {\mathrm {eff}} ^ {2} L ^ {2} \tau (\tau - 1) \frac {(1 - \beta - \alpha) ^ {2}}{\alpha^ {2}} - \frac {8 \gamma_ {\mathrm {eff}} ^ {2} L ^ {2} \tau^ {2} \beta^ {2}}{(1 - \beta) ^ {2}} \geq 0, \tag {70} +$$ + +we have + +$$ +\begin{array}{l} \frac {\mathbb {E} [ f (\boldsymbol {y} _ {T , 0}) - f (\boldsymbol {y} _ {0 , 0}) ]}{K} \leq - \frac {\gamma_ {\mathrm {eff}}}{2 K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \nabla f (\boldsymbol {x} _ {t, k}) \| ^ {2} + \frac {\gamma_ {\mathrm {eff}}}{2 K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \nabla f (\boldsymbol {x} _ {t, k}) - \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \| ^ {2} \\ + \frac {\gamma_ {\mathrm {eff}} ^ {2} L V}{2} \left[ 1 + 4 \gamma_ {\mathrm {eff}} L (\tau - 1) \left(\frac {1 - \beta}{\alpha} - 1\right) ^ {2} + \frac {8 \gamma_ {\mathrm {eff}} L \tau \beta^ {2}}{(1 - \beta^ {2})} \right]. \tag {71} \\ \end{array} +$$ + +After rearranging terms, we get + +$$ +\begin{array}{l} \frac {1}{K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \nabla f (\boldsymbol {x} _ {t, k}) \| ^ {2} \leq \frac {2 \mathbb {E} [ f (\boldsymbol {y} _ {0 , 0}) - f (\boldsymbol {y} _ {T , 0}) ]}{\gamma_ {\mathrm {eff}} K} + \frac {1}{K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \nabla f (\boldsymbol {x} _ {t, k}) - \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \| ^ {2} \\ + \gamma_ {\mathrm {eff}} L V \left[ 1 + 4 \gamma_ {\mathrm {eff}} L (\tau - 1) \left(\frac {1 - \beta}{\alpha} - 1\right) ^ {2} + \frac {8 \gamma_ {\mathrm {eff}} L \tau \beta^ {2}}{1 - \beta^ {2}} \right].
\tag {72} \\ \end{array} +$$ + +Furthermore, since $\pmb{y}_{0,0} = \pmb{x}_{0,0} - \beta \pmb{x}_{-1,0} / (1 - \beta) = \pmb{x}_{0,0}$ and $f(\pmb{y}_{T,0}) \geq f_{\mathrm{inf}}$ , the above upper bound can be simplified as + +$$ +\begin{array}{l} \frac {1}{K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \nabla f (\boldsymbol {x} _ {t, k}) \| ^ {2} \leq \frac {2 (f (\boldsymbol {x} _ {0 , 0}) - f _ {\inf})}{\gamma_ {\mathrm {eff}} K} + \frac {1}{K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \nabla f (\boldsymbol {x} _ {t, k}) - \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \| ^ {2} \\ + \gamma_ {\mathrm {eff}} L V \left[ 1 + 4 \gamma_ {\mathrm {eff}} L (\tau - 1) \left(\frac {1 - \beta}{\alpha} - 1\right) ^ {2} + \frac {8 \gamma_ {\mathrm {eff}} L \tau \beta^ {2}}{1 - \beta^ {2}} \right]. \tag {73} \\ \end{array} +$$ + +If we set $\gamma_{\mathrm{eff}} = \sqrt{\frac{m}{K}}$ , then + +$$ +\begin{array}{l} \frac {1}{K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \nabla f (\boldsymbol {x} _ {t, k}) \| ^ {2} \leq \frac {2 (f (\boldsymbol {x} _ {0 , 0}) - f _ {\inf })}{\sqrt {m K}} + \frac {1}{K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \nabla f (\boldsymbol {x} _ {t, k}) - \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \| ^ {2} \\ + \frac {m L V}{\sqrt {m K}} + \frac {4 m L ^ {2} V (\tau - 1)}{K} \left(\frac {1 - \beta}{\alpha} - 1\right) ^ {2} + \frac {8 m L ^ {2} V \tau \beta^ {2}}{K (1 - \beta^ {2})}. \tag {74} \\ \end{array} +$$ + +Recall the learning rate constraint is + +$$ +\frac {1}{2} - \gamma_ {\mathrm {eff}} L - 4 \gamma_ {\mathrm {eff}} ^ {2} L ^ {2} \tau (\tau - 1) \frac {(1 - \beta - \alpha) ^ {2}}{\alpha^ {2}} - 8 \gamma_ {\mathrm {eff}} ^ {2} L ^ {2} \tau^ {2} \frac {\beta^ {2}}{(1 - \beta) ^ {2}} \geq 0.
\tag {75} +$$ + +When $\gamma_{\mathrm{eff}} = \frac{\sqrt{m}}{\sqrt{K}}$ , the constraint can be rewritten as + +$$ +\frac {1}{2} \geq \sqrt {\frac {m L ^ {2}}{K}} + 4 \tau (\tau - 1) \frac {(1 - \beta - \alpha) ^ {2}}{\alpha^ {2}} \frac {m L ^ {2}}{K} + 8 \tau^ {2} \frac {\beta^ {2}}{(1 - \beta) ^ {2}} \frac {m L ^ {2}}{K}. \tag {76} +$$ + +After rearranging, we have + +$$ +\frac {K}{m L ^ {2}} - 2 \sqrt {\frac {K}{m L ^ {2}}} + 1 \geq 8 \tau (\tau - 1) \frac {(1 - \beta - \alpha) ^ {2}}{\alpha^ {2}} + 1 6 \tau^ {2} \frac {\beta^ {2}}{(1 - \beta) ^ {2}} + 1, \tag {77} +$$ + +$$ +\frac {K}{m L ^ {2}} - 1 \geq \left(8 \tau (\tau - 1) \frac {\left(1 - \beta - \alpha\right) ^ {2}}{\alpha^ {2}} + 1 6 \tau^ {2} \frac {\beta^ {2}}{(1 - \beta) ^ {2}} + 1\right) ^ {\frac {1}{2}}, \tag {78} +$$ + +$$ +\frac {K}{m L ^ {2}} \geq 1 + \left(8 \tau (\tau - 1) \frac {(1 - \beta - \alpha) ^ {2}}{\alpha^ {2}} + 1 6 \tau^ {2} \frac {\beta^ {2}}{(1 - \beta) ^ {2}} + 1\right) ^ {\frac {1}{2}}. \tag {79} +$$ + +Furthermore, note that + +$$ +\left(8 \tau (\tau - 1) \frac {(1 - \beta - \alpha) ^ {2}}{\alpha^ {2}} + 1 6 \tau^ {2} \frac {\beta^ {2}}{(1 - \beta) ^ {2}} + 1\right) ^ {\frac {1}{2}} +$$ + +$$ +\leq \left(9 \tau^ {2} \frac {(1 - \beta - \alpha) ^ {2}}{\alpha^ {2}} + 1 6 \tau^ {2} \frac {\beta^ {2}}{(1 - \beta) ^ {2}} + 1\right) ^ {\frac {1}{2}} \tag {80} +$$ + +$$ +\leq \sqrt {3} \max \left\{\frac {3 \tau (1 - \beta - \alpha)}{\alpha}, \frac {4 \tau \beta}{1 - \beta}, 1 \right\}. \tag {81} +$$ + +Therefore, when $K \geq mL^2 \left(1 + \sqrt{3} \max \left\{\frac{3\tau(1 - \beta - \alpha)}{\alpha}, \frac{4\tau\beta}{1 - \beta}, 1\right\}\right)$ , the condition (79) must be satisfied. + +# D.4 SPECIAL CASE 1: BLOCKWISE MODEL UPDATE FILTERING (BMUF) + +In this case, the inner optimizer is Local-SGD. 
That is, + +$$ +\boldsymbol {d} _ {t, k} = \frac {1}{m} \sum_ {i = 1} ^ {m} \nabla F \left(\boldsymbol {x} _ {t, k} ^ {(i)}; \xi_ {t, k} ^ {(i)}\right), \quad \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] = \frac {1}{m} \sum_ {i = 1} ^ {m} \nabla f \left(\boldsymbol {x} _ {t, k} ^ {(i)}\right). \tag {82} +$$ + +Since all worker nodes are averaged after every $\tau$ iterations, we have $\boldsymbol{x}_{t,0}^{(i)} = \boldsymbol{x}_{t,0}, \forall i$ . Besides, it is easy to verify that $V = \sigma^2 / m$ . + +According to previous literature on the convergence of Local-SGD (Wang & Joshi, 2018; Yu et al., 2019a), we can directly obtain the following upper bound: + +$$ +A = \frac {1}{K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \nabla f (\boldsymbol {x} _ {t, k}) - \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \| ^ {2} \tag {83} +$$ + +$$ += \frac {1}{K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \left\| \nabla f \left(\boldsymbol {x} _ {t, k}\right) - \frac {1}{m} \sum_ {i = 1} ^ {m} \nabla f \left(\boldsymbol {x} _ {t, k} ^ {(i)}\right) \right\| ^ {2} \tag {84} +$$ + +$$ +\leq \frac {1}{m K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \sum_ {i = 1} ^ {m} \mathbb {E} \left\| \nabla f \left(\boldsymbol {x} _ {t, k}\right) - \nabla f \left(\boldsymbol {x} _ {t, k} ^ {(i)}\right) \right\| ^ {2} \tag {85} +$$ + +$$ +\leq \frac {L ^ {2}}{m K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \sum_ {i = 1} ^ {m} \mathbb {E} \left\| \boldsymbol {x} _ {t, k} - \boldsymbol {x} _ {t, k} ^ {(i)} \right\| ^ {2} \tag {86} +$$ + +$$ +\leq \frac {2 \gamma^ {2} L ^ {2} \sigma^ {2} \tau}{1 - 12 \gamma^ {2} L ^ {2} \tau^ {2}} + \frac {6 \gamma^ {2} L ^ {2} \zeta^ {2} \tau^ {2}}{1 - 12 \gamma^ {2} L ^ {2} \tau^ {2}}. \tag {87} +$$ + +When $\gamma L\tau \leq \frac{1}{6}$ , we have $1 / (1 - 12\gamma^2 L^2\tau^2) \leq 3/2$ .
It follows that + +$$ +\frac {1}{K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \nabla f (\boldsymbol {x} _ {t, k}) - \mathbb {E} _ {t, k} [ \boldsymbol {d} _ {t, k} ] \| ^ {2} \leq 3 \gamma^ {2} L ^ {2} \sigma^ {2} \tau + 9 \gamma^ {2} L ^ {2} \zeta^ {2} \tau^ {2}. \tag {88} +$$ + +Substituting (88) into (73) and setting $\frac{\alpha}{1 - \beta}\gamma L = \sqrt{m / K}$ , we have + +$$ +\begin{array}{l} \frac {1}{K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \nabla f (\boldsymbol {x} _ {t, k}) \| ^ {2} \leq \frac {2 L (f (\boldsymbol {x} _ {0 , 0}) - f _ {\inf }) + \sigma^ {2}}{\sqrt {m K}} + \frac {\alpha^ {2} m}{(1 - \beta) ^ {2}} \frac {3 \sigma^ {2} \tau + 9 \zeta^ {2} \tau^ {2}}{K} + \\ \left(\frac {1 - \beta}{\alpha} - 1\right) ^ {2} \frac {4 \sigma^ {2} (\tau - 1)}{K} + \frac {\beta^ {2}}{(1 - \beta^ {2})} \frac {8 \sigma^ {2} \tau}{K} (89) \\ = \mathcal {O} \left(\frac {1}{\sqrt {m K}}\right) + \mathcal {O} \left(\frac {m}{K}\right). (90) \\ \end{array} +$$ + +# D.5 SPECIAL CASE 2: LOOKAHEAD + +In this case, the inner optimizer is SGD and $m = 1$ . Thus, we have $\beta = 0$ , $\mathbb{E}_{t,k}[\pmb{d}_{t,k}] = \nabla f(\pmb{x}_{t,k})$ and $V = \sigma^2$ . Therefore, + +$$ +\frac {1}{K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \nabla f (\boldsymbol {x} _ {t, k}) \| ^ {2} \leq \frac {2 (f (\boldsymbol {x} _ {0 , 0}) - f _ {\inf })}{\alpha \gamma K} + \alpha \gamma L \sigma^ {2} + 4 (1 - \alpha) ^ {2} \gamma^ {2} L ^ {2} (\tau - 1) \sigma^ {2} \tag {91} +$$ + +It can be observed that when $\alpha = 1$ or $\tau = 1$ , the above upper bound reduces to the case of vanilla mini-batch SGD. 
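This reduction can be checked numerically. The sketch below is our own sanity check, not part of the original derivation: it evaluates the right-hand side of (91) and compares it against the mini-batch SGD bound $2(f(\pmb{x}_{0,0}) - f_{\inf})/(\gamma K) + \gamma L \sigma^2$; the constants are arbitrary illustrative values.

```python
# Sanity check (ours): the Lookahead bound (91) collapses to the vanilla
# mini-batch SGD bound when alpha = 1, and, with the effective step alpha*gamma,
# when tau = 1. All constants below are arbitrary illustrative values.
def lookahead_bound(alpha, gamma, L, K, tau, sigma2, delta):
    """RHS of (91); delta = f(x_{0,0}) - f_inf, sigma2 = sigma^2."""
    return (2 * delta / (alpha * gamma * K)
            + alpha * gamma * L * sigma2
            + 4 * (1 - alpha) ** 2 * gamma ** 2 * L ** 2 * (tau - 1) * sigma2)

def sgd_bound(gamma, L, K, sigma2, delta):
    """Mini-batch SGD bound with step size gamma."""
    return 2 * delta / (gamma * K) + gamma * L * sigma2

gamma, L, K, sigma2, delta = 0.01, 2.0, 10_000, 1.0, 5.0
# alpha = 1: the extra Lookahead term vanishes.
assert abs(lookahead_bound(1.0, gamma, L, K, 8, sigma2, delta)
           - sgd_bound(gamma, L, K, sigma2, delta)) < 1e-12
# tau = 1: reduces to SGD with effective step alpha * gamma.
assert abs(lookahead_bound(0.5, gamma, L, K, 1, sigma2, delta)
           - sgd_bound(0.5 * gamma, L, K, sigma2, delta)) < 1e-12
```

For any other `(alpha, tau)` the extra term is strictly positive, so the bound is strictly looser than the corresponding SGD bound.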
If we set $\alpha \gamma L = \sqrt{1 / K}$ , then we have + +$$ +\begin{array}{l} \frac {1}{K} \sum_ {t = 0} ^ {T - 1} \sum_ {k = 0} ^ {\tau - 1} \mathbb {E} \| \nabla f (\boldsymbol {x} _ {t, k}) \| ^ {2} \leq \frac {2 L (f (\boldsymbol {x} _ {0 , 0}) - f _ {\inf }) + \sigma^ {2}}{\sqrt {K}} + \frac {4 (1 - \alpha) ^ {2} (\tau - 1) \sigma^ {2}}{\alpha^ {2} K} (92) \\ = \mathcal {O} \left(\frac {1}{\sqrt {K}}\right) + \mathcal {O} \left(\frac {1}{K}\right). (93) \\ \end{array} +$$ + +If the total iterations $K$ is sufficiently large, then the first term will dominate the convergence rate. \ No newline at end of file diff --git a/slowmoimprovingcommunicationefficientdistributedsgdwithslowmomentum/images.zip b/slowmoimprovingcommunicationefficientdistributedsgdwithslowmomentum/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..b58084df8270a986b7d876eb9d48e0411466d38a --- /dev/null +++ b/slowmoimprovingcommunicationefficientdistributedsgdwithslowmomentum/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b7d3586ff7ce20fd8cd2a15d300588749182ed42b0a450bd5c64938f3f3a8498 +size 1491844 diff --git a/slowmoimprovingcommunicationefficientdistributedsgdwithslowmomentum/layout.json b/slowmoimprovingcommunicationefficientdistributedsgdwithslowmomentum/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..13154da4bc3dfc71e2654d368cfb76a659a3f5bc --- /dev/null +++ b/slowmoimprovingcommunicationefficientdistributedsgdwithslowmomentum/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aab8e86d922eac495fb87118435b156e7bc4e5fcee22fd637f5d5c7e1c462839 +size 975032 diff --git a/smoothmarketsabasicmechanismfororganizinggradientbasedlearners/61b16ea6-db36-4626-b704-2aa9fd527751_content_list.json b/smoothmarketsabasicmechanismfororganizinggradientbasedlearners/61b16ea6-db36-4626-b704-2aa9fd527751_content_list.json new file mode 100644 index 
0000000000000000000000000000000000000000..d9dee1256c59d057f0a0ba061dd58c8e810ce517 --- /dev/null +++ b/smoothmarketsabasicmechanismfororganizinggradientbasedlearners/61b16ea6-db36-4626-b704-2aa9fd527751_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:455bbe9ebb4665a95c23ff067f63e605acd70044bfb698607717bfe0b9c8f7b1 +size 133284 diff --git a/smoothmarketsabasicmechanismfororganizinggradientbasedlearners/61b16ea6-db36-4626-b704-2aa9fd527751_model.json b/smoothmarketsabasicmechanismfororganizinggradientbasedlearners/61b16ea6-db36-4626-b704-2aa9fd527751_model.json new file mode 100644 index 0000000000000000000000000000000000000000..f5462ca62d3b217aeabf06af40617e6a2c8e62d3 --- /dev/null +++ b/smoothmarketsabasicmechanismfororganizinggradientbasedlearners/61b16ea6-db36-4626-b704-2aa9fd527751_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a6a155ed30a673572912351c923b32511146fdcbd717b48d8ef5eb5d295dc2d3 +size 161987 diff --git a/smoothmarketsabasicmechanismfororganizinggradientbasedlearners/61b16ea6-db36-4626-b704-2aa9fd527751_origin.pdf b/smoothmarketsabasicmechanismfororganizinggradientbasedlearners/61b16ea6-db36-4626-b704-2aa9fd527751_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..47cd07d3a83a372a87611e4ead3fb6c032faa840 --- /dev/null +++ b/smoothmarketsabasicmechanismfororganizinggradientbasedlearners/61b16ea6-db36-4626-b704-2aa9fd527751_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ce470026a061aff9beb7f2dd9b21982499625a87442c7c9733cdc8d0e7c9287c +size 634063 diff --git a/smoothmarketsabasicmechanismfororganizinggradientbasedlearners/full.md b/smoothmarketsabasicmechanismfororganizinggradientbasedlearners/full.md new file mode 100644 index 0000000000000000000000000000000000000000..3e4cc5f6d247d2da1e1d38d39512cf641476898a --- /dev/null +++ b/smoothmarketsabasicmechanismfororganizinggradientbasedlearners/full.md @@ -0,0 +1,709 
@@ +# SMOOTH MARKETS: A BASIC MECHANISM FOR ORGANIZING GRADIENT-BASED LEARNERS + +David Balduzzi1, Wojciech M Czarnecki1, Thomas W Anthony1, Ian M Gemp1, Edward Hughes1, Joel Z Leibo1, Georgios Piliouras2, Thore Graepel1 + +$^{1}$ DeepMind ; $^{2}$ Singapore University of Technology and Design {dbalduzzi, lejlot, twa, imgemp, edwardhughes, jzl, georgios, thore}@* + +# ABSTRACT + +With the success of modern machine learning, it is becoming increasingly important to understand and control how learning algorithms interact. Unfortunately, negative results from game theory show there is little hope of understanding or controlling general $n$ -player games. We therefore introduce smooth markets (SM-games), a class of $n$ -player games with pairwise zero sum interactions. SM-games codify a common design pattern in machine learning that includes (some) GANs, adversarial training, and other recent algorithms. We show that SM-games are amenable to analysis and optimization using first-order methods. + +"I began to see legibility as a central problem in modern statecraft. The premodern state was, in many respects, partially blind [...] It lacked anything like a detailed 'map' of its terrain and its people. It lacked, for the most part, a measure, a metric that would allow it to 'translate' what it knew into a common standard necessary for a synoptic view. As a result, its interventions were often crude and self-defeating." + +from Seeing like a State by Scott (1999) + +# 1 INTRODUCTION + +As artificial agents proliferate, it is increasingly important to analyze, predict and control their collective behavior (Parkes and Wellman, 2015; Rahwan et al., 2019). Unfortunately, despite almost a century of intense research since von Neumann (1928), game theory provides little guidance outside a few special cases such as two-player zero-sum, auctions, and potential games (Monderer and Shapley, 1996; Nisan et al., 2007; Vickrey, 1961; von Neumann and Morgenstern, 1944). 
Nash equilibria provide a general solution concept, but are intractable in almost all cases for many different reasons (Babichenko, 2016; Daskalakis et al., 2009; Hart and Mas-Colell, 2003). These and other negative results (Palaiopanos et al., 2017) suggest that understanding and controlling societies of artificial agents is near hopeless. Nevertheless, human societies – of billions of agents – manage to organize themselves reasonably well and mostly progress with time, suggesting game theory is missing some fundamental organizing principles. + +In this paper, we investigate how markets structure the behavior of agents. Market mechanisms have been studied extensively (Nisan et al., 2007). However, prior work has restricted to concrete examples, such as auctions and prediction markets, and strong assumptions, such as convexity. Our approach is more abstract and more directly suited to modern machine learning where the building blocks are neural nets. Markets, for us, encompass discriminators and generators trading errors in GANs (Goodfellow et al., 2014) and agents trading wins and losses in StarCraft (Vinyals et al., 2019). + +# 1.1 OVERVIEW + +The paper introduces a class of games where optimization and aggregation make sense. The phrase requires unpacking. "Optimization" means gradient-based methods. Gradient descent (and friends) are the workhorse of modern machine learning. Even when gradients are not available, gradient estimates underpin many reinforcement learning and evolutionary algorithms. "Aggregation" means weighted sums. Sums and averages are the workhorses for analyzing ensembles and populations + +across many fields. "Makes sense" means we can draw conclusions about the gradient-based dynamics of the collective by summing over properties of its members. + +As motivation, we present some pathologies that arise in even the simplest smooth games. 
Examples in section 2 show that coupling strongly concave profit functions to form a game can lead to uncontrolled behavior, such as spiraling to infinity and excessive sensitivity to learning rates. Hence, one of our goals is to understand how to 'glue together agents' such that their collective behavior is predictable. + +Section 3 introduces a class of games where simultaneous gradient ascent behaves well and is amenable to analysis. In a smooth market (SM-game), each player's profit is composed of a personal objective and pairwise zero-sum interactions with other players. Zero-sum interactions are analogous to monetary exchange (my expenditure is your revenue), double-entry bookkeeping (credits balance debits), and conservation of energy (actions cause equal and opposite reactions). SM-games explicitly account for externalities. Remarkably, building this simple bookkeeping mechanism into games has strong implications for the dynamics of gradient-based learners. SM-games generalize adversarial games (Cai et al., 2016) and codify a common design pattern in machine learning, see section 3.1. + +Section 4 studies SM-games from two points of view. Firstly, from that of a rational, profit-maximizing agent that makes decisions based on first-order profit forecasts. Secondly, from that of the game as a whole. SM-games are not potential games, so the game does not optimize any single function. A collective of profit-maximizing agents is not rational because they do not optimize a shared objective (Drexler, 2019). We therefore introduce the notion of legibility, which quantifies how the dynamics of the collective relate to that of individual agents. + +Finally, section 5 applies legibility to prove some basic theorems on the dynamics of SM-games under gradient-ascent. 
We show that (i) Nash equilibria are stable; (ii) that if profits are strictly concave then gradient ascent converges to a Nash equilibrium for all learning rates; and (iii) the dynamics are bounded under reasonable assumptions. + +The results are important for two reasons. Firstly, we identify a class of games whose dynamics are, at least in some respects, amenable to analysis and control. The kinds of pathologies described in section 2 cannot arise in SM-games. Secondly, we identify the specific quantities, forecasts, that are useful to track at the level of individual firms and can be meaningfully aggregated to draw conclusions about their global dynamics. It follows that forecasts should be a useful lever for mechanism design. + +# 1.2 RELATED WORK + +A wide variety of machine learning markets and agent-based economies have been proposed and studied: Abernethy and Frongillo (2011); Balduzzi (2014); Barto et al. (1983); Baum (1999); Hu and Storkey (2014); Kakade et al. (2003; 2005); Kearns et al. (2001); Kwee et al. (2001); Lay and Barbu (2010); Minsky (1986); Selfridge (1958); Storkey (2011); Storkey et al. (2012); Sutton et al. (2011); Wellman and Wurman (1998). The goal of this paper is different. Rather than propose another market mechanism, we abstract an existing design pattern and elucidate some of its consequences for interacting agents. + +Our approach draws on work studying convergence in generative adversarial networks (Balduzzi et al., 2018; Gemp and Mahadevan, 2018; Gidel et al., 2019; Mescheder, 2018; Mescheder et al., 2017), related minimax problems (Abernethy et al., 2019; Bailey and Piliouras, 2018), and monotone games (Gemp and Mahadevan, 2017; Nemirovski et al., 2010; Tatarenko and Kamgarpour, 2019). + +# 1.3 CAVEAT + +We consider dynamics in continuous time $\frac{d\mathbf{w}}{dt} = \pmb{\xi}(\mathbf{w})$ in this paper. Discrete dynamics, $\mathbf{w}_{t+1} \gets \mathbf{w}_t + \pmb{\xi}(\mathbf{w})$ require a more delicate analysis, e.g. 
Bailey et al. (2019). In particular, we do not claim that optimizing GANs and SM-games is easy in discrete time. Rather, our analysis shows that it is relatively easy in continuous time, and therefore possible in discrete time, with some additional effort. The contrast is with smooth games in general, where gradient-based methods have essentially no hope of finding local Nash equilibria even in continuous time.

![](images/14dd60da91b6cfbaa8ea2f9f16f76a00c32df422b1b313d95d40e6fc6bbe7a31.jpg)
A

![](images/71e141356e389c6996c62dfbb7a34f1fd387159189422741579eceeb44730ca7.jpg)
B

![](images/cc9e04bdfc71aecbe37554e5c2bf76f495d5220e5a044c449720ec48263d7cc1.jpg)
C

![](images/24c00992692999ff68095ff055e0c92d0b92079e6aedf500c0143f4e974b32cb.jpg)
D
Figure 1: Effect of learning rates in two games. Note: $x$ -axis is log-scale. Left: "half a game", example 2. Right: minimal SM-game, example 3. Top: both players have the same learning rate. Bottom: the second player has $\frac{1}{8}$ the learning rate of the first (which is the same as in the top row). Reducing the learning rate of the second player destabilizes the dynamics in "half a game", whereas the SM-game is essentially unaffected.

# 1.4 NOTATION

Vectors are column-vectors. The notations $\mathbf{S} \succ \mathbf{0}$ and $\mathbf{v} \succ \mathbf{0}$ refer to a positive-definite matrix and a vector with all entries positive, respectively. Rather than losses, we work with profits. Proofs are in the appendix. We use economic terminology (firms, profits, forecasts, and sentiment) even though the examples of SM-games, such as GANs and adversarial training, are taken from mainstream machine learning. We hope the economic terminology provides an invigorating change of perspective. The underlying mathematics is no more than first and second-order derivatives.
| quantity | definition | |
| --- | --- | --- |
| profit of firm $i$ | $\pi_i(\mathbf{w})$ | (1) |
| gradient of profit | $\pmb{\xi}_i(\mathbf{w}) := \nabla_{\mathbf{w}_i} \pi_i(\mathbf{w})$ | |
| profit forecast | $f_{\mathbf{v}_i}(\mathbf{w}) := \mathbf{v}_i^{\top} \cdot \pmb{\xi}_i(\mathbf{w})$ | (2) |
| aggregate forecast | $\sum_i f_{\mathbf{v}_i}(\mathbf{w}_i)$ | (3) |
| sentiment of firm $i$ | $\mathbf{v}_i^{\top} \cdot \nabla_{\mathbf{w}_i} f_{\mathbf{v}_i}(\mathbf{w})$ | (4) |
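To make the notation concrete, here is a small sketch of our own (a toy two-player quadratic game, not an example taken from the paper) that computes the profit gradients, the profit forecasts, the aggregate forecast, and each firm's sentiment.

```python
# Toy two-player quadratic game (our illustration, not from the paper):
# pi_1 = w1*w2 - w1**2/2 and pi_2 = -w1*w2 - w2**2/2. We compute the
# quantities in rows (1)-(4) of the notation table.
def xi(w):
    """Per-firm profit gradients: firm i differentiates its own profit
    with respect to its own parameter only."""
    w1, w2 = w
    return (w2 - w1, -w1 - w2)

w = (1.0, 2.0)
g = xi(w)                        # profit gradients: (1.0, -3.0)
v = g                            # forecast along the gradient direction
forecasts = [vi * gi for vi, gi in zip(v, g)]
agg = sum(forecasts)             # aggregate forecast = squared gradient norm
# Sentiment: v_i * d(xi_i)/dw_i * v_i; here d(xi_1)/dw_1 = d(xi_2)/dw_2 = -1,
# so sentiment is negative -- each profit is strongly concave in own weights.
sentiment = [vi * (-1.0) * vi for vi in v]
print(agg)                       # prints 10.0
```

Forecasting along the gradient direction always yields a nonnegative aggregate forecast, which is the sense in which these quantities aggregate meaningfully.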
+ +# 2 SMOOTH GAMES + +Smooth games model interacting agents with differentiable objectives. They are the kind of games that are played by neural nets. In practice, the differentiability assumption can be relaxed by replacing gradients with gradient estimates. + +Definition 1. A smooth game (Letcher et al., 2019) consists in $n$ players $[n] = \{1, \dots, n\}$ , equipped with twice continuously differentiable profit functions $\{\pi_i : \mathbb{R}^d \to \mathbb{R}\}_{i=1}^n$ . The parameters are $\mathbf{w} = (\mathbf{w}_1, \dots, \mathbf{w}_n) \in \mathbb{R}^d$ with $\mathbf{w}_i \in \mathbb{R}^{d_i}$ where $\sum_{i=1}^n d_i = d$ . Player $i$ controls the parameters $\mathbf{w}_i$ . + +If players update their actions via simultaneous gradient ascent, then a smooth game yields a dynamical system specified by the differential equation $\frac{d\mathbf{w}}{dt} = \pmb{\xi}(\mathbf{w})$ for + +$$ +\boldsymbol {\xi} (\mathbf {w}) := \left(\boldsymbol {\xi} _ {1} (\mathbf {w}), \dots , \boldsymbol {\xi} _ {n} (\mathbf {w})\right) +$$ + +where $\xi_{i}(\mathbf{w})\coloneqq \nabla_{\mathbf{w}_{i}}\pi_{i}(\mathbf{w})$ is a $d_{i}$ -vector. The Jacobian of a game is the $(d\times d)$ -matrix of second-derivatives $\mathbf{J}(\mathbf{w})\coloneqq \left(\frac{\partial\xi_{\alpha}(\mathbf{w})}{\partial w_{\beta}}\right)_{\alpha ,\beta = 1}^{d}$ . The setup can be recast in terms of minimizing losses by substituting $\ell_{i}\coloneqq -\pi_{i}$ for all $i$ . + +Smooth games are too general to be tractable since they encompass all dynamical systems. + +Lemma 1. Every continuous dynamical system on $\mathbb{R}^d$ , for any $d$ , arises as simultaneous gradient ascent on the profit functions of a smooth game. + +The next two sections illustrate some problems that arise in simple smooth games. + +Definition 2. 
We recall some solution concepts from dynamical systems and game theory: + +- A stable fixed point $^1$ $\mathbf{w}^*$ satisfies $\boldsymbol{\xi}(\mathbf{w}^*) = 0$ and $\mathbf{v}^{\top} \cdot \mathbf{J}(\mathbf{w}^*) \cdot \mathbf{v} < 0$ for all vectors $\mathbf{v} \neq \mathbf{0}$ . +- A local Nash equilibrium $\mathbf{w}^*$ has neighborhoods $U_i$ of $\mathbf{w}_i^*$ for all $i$ , such that $\pi_i(\mathbf{w}_i, \mathbf{w}_{-i}^*) < \pi_i(\mathbf{w}_i^*, \mathbf{w}_{-i}^*)$ for all $\mathbf{w}_i \in U_i \setminus \{\mathbf{w}_i^*\}$ . +- A classical Nash equilibrium $\mathbf{w}^*$ satisfies $\pi_i(\mathbf{w}_i, \mathbf{w}_{-i}^*) \leq \pi_i(\mathbf{w}_i^*, \mathbf{w}_{-i}^*)$ for all $\mathbf{w}_i$ and all players $i$ . + +Example 1 below shows that stable fixed points and local Nash equilibria do not necessarily coincide. The notion of classical Nash equilibrium is ill-suited to nonconcave settings. + +Intuitively, a fixed point is stable if all trajectories sufficiently nearby flow into it. A joint strategy is a local Nash if each player is harmed when it makes a small unilateral deviation. Local Nash differs from the classic definition in two ways. It is weaker, because it only allows small unilateral deviations. This is necessary since players are neural networks and profits are not usually concave. It is also stronger, because unilateral deviations decrease (rather than merely not increase) profits. + +# 2.1 PROBLEMS WITH POTENTIAL GAMES + +A game is a potential game if $\pmb {\xi} = \nabla \phi$ for some function $\phi$ , see Balduzzi et al. (2018) for details. + +Example 1 (potential game). Fix a small $\epsilon >0$ . Consider the two-player game with profit functions + +$$ +\pi_ {1} (\mathbf {w}) = w _ {1} w _ {2} - \frac {\epsilon}{2} w _ {1} ^ {2} \quad \text {and} \quad \pi_ {2} (\mathbf {w}) = w _ {1} w _ {2} - \frac {\epsilon}{2} w _ {2} ^ {2}. +$$ +
The game has a unique local Nash equilibrium at $\mathbf{w} = (0,0)$ with $\pi_1(0,0) = 0 = \pi_2(0,0)$ .
The game is chosen to be as nice as possible: $\pi_1$ and $\pi_2$ are strongly concave functions of $w_1$ and $w_2$ respectively. The game is a potential game since $\pmb{\xi} = (w_{2} - \epsilon w_{1},w_{1} - \epsilon w_{2}) = \nabla \phi$ for $\phi (\mathbf{w}) = w_{1}w_{2} - \frac{\epsilon}{2} (w_{1}^{2} + w_{2}^{2})$ . Nevertheless, the game exhibits three related problems. + +Firstly, the Nash equilibrium is unstable. Players at the Nash equilibrium can increase their profits via the joint update $\mathbf{w} \gets (0, 0) + \eta \cdot (1, 1)$ , so $\pi_1(\mathbf{w}) = \eta^2 (1 - \frac{\epsilon}{2}) = \pi_2(\mathbf{w}) > 0$ . The existence of a Nash equilibrium where players can improve their payoffs by coordinated action suggests the incentives are not well-designed. + +Secondly, the dynamics can diverge to infinity. Starting at $\mathbf{w}^{(1)} = (1,1)$ and applying simultaneous gradient ascent causes the norm of the vector $\| \mathbf{w}^{(t)}\|_2$ to increase without limit as $t \to \infty$ - and at an accelerating rate - due to a positive feedback loop between the players' parameters and profits. + +Finally, players impose externalities on each other. The decisions of the first player affect the profits of the second, and vice versa. Obviously players must interact for a game to be interesting. However, positive feedback loops arise because the interactions are not properly accounted for. + +In short, simultaneous gradient ascent does not converge to the Nash – and can diverge to infinity. It is open to debate whether the fault lies with gradients, the concept of Nash, or the game structure. In this paper, we take gradients and Nash equilibria as given and seek to design better games. + +# 2.2 PROBLEMS WITH LEARNING RATES + +Gradient-based optimizers rarely follow the actual gradient. For example, RMSProp and Adam use adaptive, parameter-dependent learning rates. This is not a problem when optimizing a function.
Suppose $f(\mathbf{w})$ is optimized with reweighted gradient $(\nabla f)_{\eta} \coloneqq (\eta_1\nabla_1f,\dots ,\eta_n\nabla_nf)$ where $\pmb{\eta} > \mathbf{0}$ is a vector of learning rates. Even though $(\nabla f)_{\eta}$ is not necessarily the gradient of any function, it behaves like $\nabla f$ because they have positive inner product when $\nabla f \neq \mathbf{0}$ : + +$$ +(\nabla f) _ {\boldsymbol {\eta}} ^ {\intercal} \cdot \nabla f = \sum_ {i} \eta_ {i} \cdot (\nabla_ {i} f) ^ {2} > 0, \quad \text {since } \eta_ {i} > 0 \text { for all } i. +$$ + +Parameter-dependent learning rates thus behave well in potential games where the dynamics derive from an implicit potential function $\xi (\mathbf{w}) = \nabla \phi (\mathbf{w})$ . Severe problems can arise in general games. + +Example 2 ("half a game"). Consider the following game, where the $w_{2}$ -player is indifferent to $w_{1}$ : + +$$ +\pi_ {1} (\mathbf {w}) = w _ {1} w _ {2} - \frac {\epsilon}{2} w _ {1} ^ {2} \quad \text {and} \quad \pi_ {2} (\mathbf {w}) = - \frac {\epsilon}{2} w _ {2} ^ {2}. +$$ + +The dynamics are clear by inspection: the $w_{2}$ -player converges to $w_{2} = 0$ , and then the $w_{1}$ -player does the same. It is hard to imagine that anything could go wrong. In contrast, behavior in the next example should be worse because convergence is slowed down by cycling around the Nash: + +Example 3 (minimal SM-game). A simple SM-game, see definition 3, is + +$$ +\pi_ {1} (\mathbf {w}) = w _ {1} w _ {2} - \frac {\epsilon}{2} w _ {1} ^ {2} \quad \text {and} \quad \pi_ {2} (\mathbf {w}) = - w _ {1} w _ {2} - \frac {\epsilon}{2} w _ {2} ^ {2}. +$$ + +Figure 1 shows the dynamics of the games, in discrete time, with small learning rates and small gradient noise. In the top panel, both players have the same learning rate. Both games converge. Example 2 converges faster - as expected - without cycling around the Nash. + +In the bottom panels, the learning rate of the second player is decreased by a factor of eight.
The SM-game's dynamics do not change significantly. In contrast, the dynamics of example 2 become unstable: although player 1 is attracted to the Nash, it is extremely sensitive to noise and does not stay there for long. One goal of the paper is to explain why SM-games are more robust, in general, to differences in relative learning rates. + +# 2.3 STOP_GRADIENT AND LEARNING RATES + +Tools for automatic differentiation (AD) such as TensorFlow and PyTorch include stop-gradient operators that stop gradients from being computed. For example, let $f(\mathbf{w}) = w_1 \cdot \text{stop-gradient}(w_2) - \frac{\epsilon}{2} (w_1^2 + w_2^2)$ . The use of stop-gradient means $f$ is not strictly speaking a function and so we use $\nabla_{\mathrm{AD}}$ to refer to its gradient under automatic differentiation. Then + +$$ +\nabla_ {\mathrm {A D}} f (\mathbf {w}) = (w _ {2} - \epsilon w _ {1}, - \epsilon w _ {2}) +$$ + +which is the simultaneous gradient from example 2. Any smooth vector field is the gradient of a function augmented with stop-gradient operators, see appendix D. Stop-gradient is often used in complex neural architectures (for example when one neural network is fed into another leading to multiplicative interactions), and is thought to be mostly harmless. Section 2.2 shows that stop-gradients can interact in unexpected ways with parameter-dependent learning rates. + +# 2.4 SUMMARY + +It is natural to expect individually well-behaved agents to also behave well collectively. Unfortunately, this basic requirement fails in even the simplest examples. + +Maximizing a strongly concave function is well-behaved: there is a unique, finite global maximum. However, example 1 shows that coupling concave functions can cause simultaneous gradient ascent to diverge to infinity. The dynamics of the game differs in kind from the dynamics of the players in isolation. 
Example 2 shows that reducing the learning rate of a well-behaved (strongly concave) player in a simple game destabilizes the dynamics. How collectives behave is sensitive not only to profits, but also to relative learning rates. Off-the-shelf optimizers such as Adam (Kingma and Ba, 2015) modify learning rates under the hood, which may destabilize some games.

# 3 SMOOTH MARKETS (SM-GAMES)

Let us restrict to more structured games. Take an accountant's view of the world, where the only thing we track is the flow of money. Interactions are pairwise. Money is neither created nor destroyed, so interactions are zero-sum. If we model the interactions between players by differentiable functions $g_{ij}(\mathbf{w}_i,\mathbf{w}_j)$ that depend on their respective strategies, then we have an SM-game. All interactions are explicitly tracked. There are no externalities off the books. Positive interactions, $g_{ij} > 0$, are revenue; negative interactions are costs; and the difference is profit. The model prescribes that all firms are profit maximizers. More formally:

Definition 3 (SM-game). A smooth market is a smooth game where interactions between players are pairwise zero-sum. The profits have the form

$$
\pi_{i}(\mathbf{w}) = f_{i}\left(\mathbf{w}_{i}\right) + \sum_{j \neq i} g_{ij}\left(\mathbf{w}_{i}, \mathbf{w}_{j}\right) \tag{1}
$$

where $g_{ij}(\mathbf{w}_i,\mathbf{w}_j) + g_{ji}(\mathbf{w}_j,\mathbf{w}_i)\equiv 0$ for all $i,j$.

The functions $f_{i}$ can act as regularizers. Alternatively, they can be interpreted as natural resources or dummy players that react too slowly to model as players. Dummy players provide firms with easy (non-adversarial) sources of revenue.

Humans, unlike firms, are not profit-maximizers; humans typically buy goods because they value them more than the money they spend on them. Appendix C briefly discusses extending the model.

# 3.1 EXAMPLES OF SM-GAMES

SM-games codify a common design pattern:

1.
Optimizing a function. A near-trivial case is where there is a single player with profit $\pi_1(\mathbf{w}) = f_1(\mathbf{w})$.
2. Generative adversarial networks and related architectures like CycleGANs are zero-sum or nearly zero-sum (Goodfellow et al., 2014; Wu et al., 2019; Zhu et al., 2017).
3. Zero-sum polymatrix games are SM-games where $f_{i}(\mathbf{w}_{i}) \equiv 0$ and $g_{ij}(\mathbf{w}_i,\mathbf{w}_j) = \mathbf{w}_i^\top \mathbf{A}_{ij}\mathbf{w}_j$ for some matrices $\mathbf{A}_{ij}$. Weights are constrained to probability simplices. The games have nice properties including: Nash equilibria are computed via a linear program, and correlated equilibria marginalize onto Nash equilibria (Cai et al., 2016).
4. Intrinsic curiosity modules use games to drive exploration. One module is rewarded for predicting the environment and an adversary is rewarded for choosing actions whose outcomes are not predicted by the first module (Pathak et al., 2017). The modules share some weights, so the setup is nearly, but not exactly, an SM-game.
5. Adversarial training is concerned with the minmax problem (Kurakin et al., 2017; Madry et al., 2018)

$$
\min_{\mathbf{w} \in W} \sum_{i} \left[ \max_{\boldsymbol{\delta}_{i} \in B_{\epsilon}} \ell\left(f_{\mathbf{w}}(\mathbf{x}_{i} + \boldsymbol{\delta}_{i}), y_{i}\right) \right].
$$

Setting $g_{0i}(\mathbf{w}_0, \boldsymbol{\delta}_i) = \ell(f_{\mathbf{w}_0}(\mathbf{x}_i + \boldsymbol{\delta}_i), y_i)$ obtains a star-shaped SM-game with the neural net (player 0) at the center and $n$ adversaries, one per datapoint $(\mathbf{x}_i, y_i)$, on the arms.

6. Task-suites, where a population of agents is trained on a population of tasks, form a bipartite graph. If the tasks are parametrized and adversarially rewarded based on their difficulty for agents, then the setup is an SM-game.
7. Homogeneous games arise when all the coupling functions are equal up to sign (recall $g_{ij} = -g_{ji}$).
An example is population self-play (Silver et al., 2016; Vinyals et al., 2019), which lives on a graph where $g_{ij}(\mathbf{w}_i, \mathbf{w}_j) \coloneqq P(\mathbf{w}_i \text{ beats } \mathbf{w}_j) - \frac{1}{2}$ is the centered probability that policy $\mathbf{w}_i$ beats $\mathbf{w}_j$.

Monetary exchanges in SM-games are quite general. The error signals traded between generators and discriminators and the wins and losses traded between agents in StarCraft are two very different special cases.

# 4 FROM MICRO TO MACRO

How to analyze the behavior of the market as a whole? Adam Smith claimed that profit-maximizing leads firms to promote the interests of society, as if by an invisible hand (Smith, 1776). More formally, we can ask: is there a measure that firms collectively increase or decrease? It is easy to see that firms do not collectively maximize aggregate profit (AP) or aggregate revenue (AR):

$$
\mathrm{AP}(\mathbf{w}) := \sum_{i} \pi_{i}(\mathbf{w}) = \sum_{i} f_{i}(\mathbf{w}_{i}) \qquad \mathrm{AR}(\mathbf{w}) := \sum_{i,j} \max\left(g_{ij}(\mathbf{w}_{i}, \mathbf{w}_{j}), 0\right).
$$

![](images/0eced34356ff2969ad9c31ccc796d0a0bd76dd9de0e042cd00f5ea044c190167.jpg)
![](images/7ead8ef5ec7cbe7686ebba2c6984ba9661797cd9bd7aa3aecef4b9580a6b70f3.jpg)
![](images/222184896360df40212e0ca06af5ff7d9c6dd4f224aa6f9811435c1a374ffe39.jpg)
![](images/f47484407318f0075f8b4869f165f0702386d429a2bf0e1e932dedd9c7473cf5.jpg)
Figure 2: SM-game graph topologies. A: two-player (e.g. GANs). B: star-shaped (e.g. adversarial training). C: bipartite (e.g. task-suites). D: all-to-all.

Maximizing aggregate profit would require firms to ignore interactions with other firms. Maximizing aggregate revenue would require firms to ignore costs. In short, SM-games are not potential games; there is no function that they optimize in general.
However, it turns out the dynamics of SM-games aggregate the dynamics of individual firms, in a sense made precise in section 4.3.

# 4.1 RATIONALITY: SEEING LIKE A FIRM

Give an objective function to an agent. The agent is rational, relative to the objective, if it chooses actions because it forecasts they will lead to better outcomes as measured by the objective. In SM-games, agents are firms, the objective is profit, and forecasts are computed using gradients.

Firms aim to increase their profit. Applying the first-order Taylor approximation obtains

$$
\underbrace{\pi_{i}\left(\mathbf{w} + \mathbf{v}_{i}\right) - \pi_{i}(\mathbf{w})}_{\text{change in profit}} = \underbrace{\mathbf{v}_{i}^{\intercal} \boldsymbol{\xi}_{i}(\mathbf{w})}_{\text{profit forecast}} + \{h.o.t.\}, \tag{2}
$$

where $\{h.o.t.\}$ refers to higher-order terms. Firm $i$'s forecast of how profits will change if it modifies production by $\mathbf{v}_i$ is $\mathfrak{f}_{\mathbf{v}_i}(\mathbf{w}) \coloneqq \mathbf{v}_i^\top \pmb{\xi}_i(\mathbf{w})$. The Taylor expansion implies that $\mathfrak{f}_{\mathbf{v}_i}(\mathbf{w}) \approx \pi_i(\mathbf{w} + \mathbf{v}_i) - \pi_i(\mathbf{w})$ for small updates $\mathbf{v}_i$. Forecasts encode how individual firms expect profits to change ceteris paribus.

# 4.2 PROFIT CHANGES DO NOT ADD UP

How does profit maximizing by individual firms look from the point of view of the market as a whole? Summing over all firms obtains

$$
\underbrace{\sum_{i}\left[\pi_{i}\left(\mathbf{w} + \mathbf{v}_{i}\right) - \pi_{i}(\mathbf{w})\right]}_{\text{aggregate change in profit}} = \underbrace{\sum_{i} \mathfrak{f}_{\mathbf{v}_{i}}(\mathbf{w})}_{\text{aggregate forecast}} + \{h.o.t.\} \tag{3}
$$

where $\mathfrak{f}_{\mathbf{v}}(\mathbf{w}) = \sum_{i}\mathfrak{f}_{\mathbf{v}_{i}}(\mathbf{w})$ is the aggregate forecast. Unfortunately, the left-hand side of Eq. (3) is incoherent. It sums the changes in profit that would be experienced by firms updating their production in isolation. However, firms change their production simultaneously. Updates are not ceteris paribus and so profit is not a meaningful macroeconomic concept. The following minimal example illustrates the problem:

Example 4. Suppose $\pi_1(\mathbf{w}) = w_1w_2$ and $\pi_2(\mathbf{w}) = -w_1w_2$. Fix $\mathbf{w} = (w_1, w_2)$ and let $\mathbf{v} = (w_2, -w_1)$. The sum of the changes in profit expected by the firms, reasoning in isolation, is

$$
\left[\pi_{1}(\mathbf{w} + \mathbf{v}_{1}) - \pi_{1}(\mathbf{w})\right] + \left[\pi_{2}(\mathbf{w} + \mathbf{v}_{2}) - \pi_{2}(\mathbf{w})\right] = w_{2}^{2} + w_{1}^{2} > 0,
$$

whereas the actual change in aggregate profit is zero because $\pi_1(\mathbf{x}) + \pi_2(\mathbf{x}) = 0$ for any $\mathbf{x}$.

Tracking aggregate profits is therefore not useful. The next section shows forecasts are better behaved.

# 4.3 LEGIBILITY: SEEING LIKE AN ECONOMY

Give a target function to every agent in a collective. The collective is legible, relative to the targets, if it increases or decreases the aggregate target according to whether its members forecast, on aggregate, that they will increase or decrease their targets. We show that SM-games are legible. The targets are profit forecasts (note: not profits).

Let us consider how forecasts change. Define the sentiment as the directional derivative of the forecast $D_{\mathbf{v}_i} \mathfrak{f}_{\mathbf{v}_i}(\mathbf{w}) = \mathbf{v}_i^\top \nabla \mathfrak{f}_{\mathbf{v}_i}(\mathbf{w})$.
The first-order Taylor expansion of the forecast shows that the sentiment is a forecast about the profit forecast:

$$
\underbrace{\mathfrak{f}_{\mathbf{v}_{i}}\left(\mathbf{w} + \mathbf{v}_{i}\right) - \mathfrak{f}_{\mathbf{v}_{i}}(\mathbf{w})}_{\text{change in profit forecast}} = \underbrace{\mathbf{v}_{i}^{\intercal} \nabla \mathfrak{f}_{\mathbf{v}_{i}}(\mathbf{w})}_{\text{sentiment}} + \{h.o.t.\}. \tag{4}
$$

The perspective of firms can be summarized as:

1. Choose an update direction $\mathbf{v}_i$ that is forecast to increase profit.
2. The firm is then in one of two main regimes:

a. If sentiment is positive then forecasts increase as the firm modifies its production; forecasts become more optimistic. The firm experiences increasing returns-to-scale.
b. If sentiment is negative then forecasts decrease as the firm modifies its production; forecasts become more pessimistic. The firm experiences diminishing returns-to-scale.

Our main result is that sentiment is additive, which means that forecasts are legible:

Proposition 2 (forecasts are legible in SM-games). Sentiment is additive:

$$
D_{\mathbf{v}} \mathfrak{f}_{\mathbf{v}}(\mathbf{w}) = \sum_{i} D_{\mathbf{v}_{i}} \mathfrak{f}_{\mathbf{v}_{i}}(\mathbf{w}).
$$

Thus, the aggregate profit forecast $\mathfrak{f}_{\mathbf{v}}$ increases or decreases according to whether individual forecasts $\mathfrak{f}_{\mathbf{v}_i}$ are expected to increase or decrease in aggregate.

Section 5.1 works through an example that is not legible.

# 5 DYNAMICS OF SMOOTH MARKETS

Finally, we study the dynamics of gradient-based learners in SM-games. Suppose firms use gradient ascent. Firm $i$'s updates are, infinitesimally, in the direction $\mathbf{v}_i = \pmb{\xi}_i(\mathbf{w})$ so that $\frac{d\mathbf{w}_i}{dt} = \pmb{\xi}_i(\mathbf{w})$. Since updates are gradients, we can simplify our notation.
Define firm $i$'s forecast as $\mathfrak{f}_i(\mathbf{w}) \coloneqq \frac{1}{2}\pmb{\xi}_i^{\top}\nabla_i\pi_i = \frac{1}{2}\|\pmb{\xi}_i(\mathbf{w})\|_2^2$ and its sentiment, ceteris paribus, as $\pmb{\xi}_i^{\top}\nabla_i\mathfrak{f}_i = \frac{d\mathfrak{f}_i}{dt}(\mathbf{w})$.

We allow firms to choose their learning rates; firms with higher learning rates are more responsive. Define the $\pmb{\eta}$-weighted dynamics $\pmb{\xi}_{\pmb{\eta}}(\mathbf{w}) \coloneqq (\eta_{1} \pmb{\xi}_{1}, \dots, \eta_{n} \pmb{\xi}_{n})$ and the $\pmb{\eta}$-weighted forecast as

$$
\mathfrak{f}_{\boldsymbol{\eta}}(\mathbf{w}) := \frac{1}{2} \sum_{i} \eta_{i} \cdot \|\boldsymbol{\xi}_{i}(\mathbf{w})\|_{2}^{2} = \sum_{i} \eta_{i} \cdot \mathfrak{f}_{i}(\mathbf{w}).
$$

In this setting, proposition 2 implies that

Proposition 3 (legibility under gradient dynamics). Fix dynamics $\frac{d\mathbf{w}}{dt} \coloneqq \xi_{\eta}(\mathbf{w})$. Sentiment decomposes additively:

$$
\frac{d\mathfrak{f}_{\eta}}{dt} = \sum_{i} \eta_{i} \cdot \frac{d\mathfrak{f}_{i}}{dt}.
$$

Thus, we can read off the aggregate dynamics from the dynamics of forecasts of individual firms.

# 5.1 EXAMPLE OF A FAILURE OF LEGIBILITY

The pairwise zero-sum structure is crucial to legibility. It is instructive to take a closer look at example 1, where the forecasts are not legible.

Suppose $\pi_1(\mathbf{w}) = w_1w_2 - \frac{\epsilon}{2} w_1^2$ and $\pi_2(\mathbf{w}) = w_1w_2 - \frac{\epsilon}{2} w_2^2$. Then $\pmb{\xi}(\mathbf{w}) = (w_{2} - \epsilon w_{1}, w_{1} - \epsilon w_{2})$ and the firms' sentiments are $\frac{d\mathfrak{f}_1}{dt} = -\epsilon (w_2 - \epsilon w_1)^2$ and $\frac{d\mathfrak{f}_2}{dt} = -\epsilon (w_1 - \epsilon w_2)^2$, which are always non-positive.
However, the aggregate sentiment is

$$
\frac{d\mathfrak{f}}{dt}(\mathbf{w}) = -\epsilon (w_{2} - \epsilon w_{1})^{2} - \epsilon (w_{1} - \epsilon w_{2})^{2} + 2(1 + \epsilon^{2}) w_{1} w_{2} - 2\epsilon (w_{1}^{2} + w_{2}^{2}),
$$

which for small $\epsilon$ is dominated by $2w_{1}w_{2}$, and so can be either positive or negative.

When $\mathbf{w} = (1,1)$ we have $\frac{d\mathfrak{f}}{dt} = 2(1 - \epsilon)^3 \approx 2 > 0$ and $\frac{d\mathfrak{f}_1}{dt} + \frac{d\mathfrak{f}_2}{dt} = -2\epsilon (1 - \epsilon)^2 < 0$. Each firm expects its forecast to decrease, and yet the opposite happens due to a positive feedback loop that ultimately causes the dynamics to diverge to infinity.

# 5.2 STABILITY, CONVERGENCE AND BOUNDEDNESS

We provide three fundamental results on the dynamics of smooth markets. Firstly, we show that stability, from dynamical systems, and local Nash equilibrium, from game theory, coincide in SM-games:

Theorem 4 (stability). A fixed point in an SM-game is a local Nash equilibrium iff it is stable. Thus, every local Nash equilibrium is contained in an open set that forms its basin of attraction.

Secondly, we consider convergence. Lyapunov functions are tools for studying convergence. Given a dynamical system $\frac{d\mathbf{w}}{dt} = \xi(\mathbf{w})$ with fixed point $\mathbf{w}^*$, recall that $V(\mathbf{w})$ is a Lyapunov function if: (i) $V(\mathbf{w}^{*}) = 0$; (ii) $V(\mathbf{w}) > 0$ for all $\mathbf{w}\neq \mathbf{w}^{*}$; and (iii) $\frac{dV}{dt}(\mathbf{w}) < 0$ for all $\mathbf{w}\neq \mathbf{w}^{*}$. If a dynamical system has a Lyapunov function then the dynamics converge to the fixed point. Aggregate forecasts share properties (i) and (ii) with Lyapunov functions.
(i) Shared global minima: $\mathfrak{f}_{\eta}(\mathbf{w}) = 0$ iff $\mathfrak{f}_{\eta'}(\mathbf{w}) = 0$ for all $\eta, \eta' \succ 0$, which occurs iff $\mathbf{w}$ is a stationary point, $\xi_{i}(\mathbf{w}) = \mathbf{0}$ for all $i$.
(ii) Positivity: $\mathfrak{f}_{\eta}(\mathbf{w}) > 0$ at all points that are not fixed points, for all $\pmb{\eta}\succ \mathbf{0}$.

We can therefore use forecasts to study convergence and divergence across all learning rates:

Theorem 5. In continuous time, for all positive learning rates $\pmb{\eta} \succ \mathbf{0}$:

1. Convergence: If $\mathbf{w}^*$ is a stable fixed point ($\mathbf{S} \prec \mathbf{0}$), then there is an open neighborhood $U \ni \mathbf{w}^*$ where $\frac{d\mathfrak{f}_{\eta}}{dt}(\mathbf{w}) < 0$ for all $\mathbf{w} \in U \setminus \{\mathbf{w}^*\}$, so the dynamics converge to $\mathbf{w}^*$ from anywhere in $U$.
2. Divergence: If $\mathbf{w}^*$ is an unstable fixed point ($\mathbf{S} \succ \mathbf{0}$), there is an open neighborhood $U \ni \mathbf{w}^*$ such that $\frac{d\mathfrak{f}_{\eta}}{dt}(\mathbf{w}) > 0$ for all $\mathbf{w} \in U \setminus \{\mathbf{w}^*\}$, so the dynamics within $U$ are repelled by $\mathbf{w}^*$.

The theorem explains why SM-games are robust to relative differences in learning rates, in contrast to the sensitivity exhibited by the game in example 2. If a fixed point is stable, then for any dynamics $\frac{d\mathbf{w}}{dt} = \xi_{\eta}(\mathbf{w})$, there is a corresponding aggregate forecast $\mathfrak{f}_{\eta}(\mathbf{w})$ that can be used to show convergence. The aggregate forecasts provide a family of Lyapunov-like functions.

Finally, we consider the setting where firms experience diminishing returns-to-scale for sufficiently large production vectors. The assumption is realistic for firms in a finite economy, since revenues must eventually saturate whilst costs continue to increase with production.

Theorem 6 (boundedness).
Suppose all firms have negative sentiment for sufficiently large values of $\|\mathbf{w}_i\|$. Then the dynamics are bounded for all $\boldsymbol{\eta} \succ \mathbf{0}$.

The theorem implies that the kind of positive feedback loop that caused example 1 to diverge to infinity cannot occur in SM-games.

# 5.3 LEGIBILITY AND THE LANDSCAPE

One of our themes is that legibility allows us to read off the dynamics of games. We make the claim visually explicit in this section. Let us start with a concrete game.

Example 5. Consider the SM-game with profits

$$
\pi_{1}(\mathbf{w}) = -\frac{1}{6} |w_{1}|^{3} + \frac{1}{2} w_{1}^{2} - w_{1} w_{2} \quad \text{and} \quad \pi_{2}(\mathbf{w}) = -\frac{1}{6} |w_{2}|^{3} + \frac{1}{2} w_{2}^{2} + w_{1} w_{2}.
$$

![](images/3b9bc92d88456a20656bdb10cab547091dedc7c523d148c9e3289f4f376857d1.jpg)
![](images/0baceb4feec8d9d622e37a55723515d49b2185e89c3c674678e9bd0577c5b47c.jpg)
![](images/c1fa6bc64b286ffae5fb3b098d3e7ccfc4ef25597f20cb7ee9ca2be67072180f.jpg)
![](images/92336001245ee1249424310955fbe3d2f5d95f51367e64f9c33a4a1bc03b8d44.jpg)
Figure 3: Legible dynamics. Panels AB: Dynamics in an SM-game with both positive and negative sentiment, under different learning rates. Panels CD: Cartoon maps of the dynamics.

Figure 3AB plots the dynamics of the SM-game in example 5, under two different learning rates for player 1. There is an unstable fixed point at the origin and an ovoidal cycle. Dynamics converge to the cycle from both inside and outside the ovoid. Changing player 1's learning rate, panel B, squashes the ovoid. Panels CD provide a cartoon map of the dynamics: there are two regions, the interior and exterior of the ovoid, separated by the boundary formed by the ovoid itself.

In general, the phase space of any SM-game is carved into regions where sentiment $\frac{d\mathfrak{f}_{\eta}}{dt}(\mathbf{w})$ is positive and negative, with boundaries where sentiment is zero.
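The escape-then-settle behavior around the ovoid is easy to reproduce. The sketch below is illustrative only (numpy and an Euler step size of $0.01$ are assumed): starting near the unstable origin, the trajectory of example 5 spirals outwards but stays bounded, consistent with theorem 6.

```python
import numpy as np

def xi(w):
    # Simultaneous gradient for example 5; note d/dx(-|x|^3 / 6) = -x|x|/2.
    w1, w2 = w
    return np.array([-0.5 * w1 * abs(w1) + w1 - w2,
                     -0.5 * w2 * abs(w2) + w2 + w1])

w = np.array([0.1, 0.0])  # just off the unstable fixed point at the origin
norms = []
for _ in range(5000):
    w = w + 0.01 * xi(w)  # Euler step of the gradient dynamics
    norms.append(np.linalg.norm(w))
# The cubic costs give both firms negative sentiment far from the
# origin, so the trajectory is repelled by 0 yet remains bounded.
```

Removing the cubic terms removes the negative sentiment at large $\|\mathbf{w}\|$, and the same loop then grows without bound.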
The dynamics can be visualized as operating on a landscape where the height at each point $\mathbf{w}$ corresponds to the value of the aggregate forecast $\mathfrak{f}_{\eta}(\mathbf{w})$. The dynamics do not always ascend or always descend the landscape. Rather, sentiment determines whether the dynamics ascend, descend, or remain on a level-set. Since sentiment is additive, $\frac{d\mathfrak{f}_{\eta}}{dt}(\mathbf{w}) = \sum_{i} \eta_{i} \cdot \frac{d\mathfrak{f}_{i}}{dt}(\mathbf{w})$, the decision to ascend or descend comes down to a weighted sum of the sentiments of the firms. Changing learning rates changes the emphasis given to different firms' opinions, and thus changes the shapes of the boundaries between regions in a relatively straightforward manner.

SM-games can thus express richer dynamics than potential games (cycles cannot occur when performing gradient ascent on a fixed objective), while still admitting a relatively simple visual description in terms of a landscape and decisions about which direction to go (upwards or downwards). Computing the landscape for general SM-games, as for neural nets, is intractable.

# 6 DISCUSSION

Machine learning has got a lot of mileage out of treating differentiable modules like plug-and-play Lego blocks. This works when the modules optimize a single loss and the gradients chain together seamlessly. Unfortunately, agents with differing objectives are far from plug-and-play. Interacting agents form games, and games are intractable in general. Worse, positive feedback loops can cause individually well-behaved agents to collectively spiral out of control.

It is therefore necessary to find organizing principles, constraints on how agents interact, that ensure their collective behavior is amenable to analysis and control. The pairwise zero-sum condition that underpins SM-games is one such organizing principle, which happens to admit an economic interpretation.
Our main result is that SM-games are legible: changes in aggregate forecasts are the sum of how individual firms expect their forecasts to change. It follows that we can translate properties of the individual firms into guarantees on collective convergence, stability and boundedness in SM-games (see theorems 4-6).

Legibility is a local-to-global principle, whereby we can draw qualitative conclusions about the behavior of collectives based on the nature of their individual members. Identifying and exploiting games that embed local-to-global principles will become increasingly important as artificial agents become more common.

# REFERENCES

Abernethy, J. and Frongillo, R. (2011). A Collaborative Mechanism for Crowdsourcing Prediction Problems. In NeurIPS.
Abernethy, J., Lai, K. A., and Wibisono, A. (2019). Last-iterate convergence rates for min-max optimization. arXiv:1906.02027.
Babichenko, Y. (2016). Query Complexity of Approximate Nash Equilibria. Journal of the ACM, 63(4).
Bailey, J. P., Gidel, G., and Piliouras, G. (2019). Finite Regret and Cycles with Fixed Step-Size via Alternating Gradient Descent-Ascent. arXiv:1907.04392.
Bailey, J. P. and Piliouras, G. (2018). Multiplicative Weights Update in Zero-Sum Games. In ACM EC.
Balduzzi, D. (2014). Cortical prediction markets. In AAMAS.
Balduzzi, D., Racanière, S., Martens, J., Foerster, J., Tuyls, K., and Graepel, T. (2018). The mechanics of $n$-player differentiable games. In ICML.
Barto, A. G., Sutton, R. S., and Anderson, C. W. (1983). Neuronlike Adaptive Elements That Can Solve Difficult Learning Control Problems. IEEE Trans. Systems, Man, Cyb, 13(5):834-846.
Baum, E. B. (1999). Toward a Model of Intelligence as an Economy of Agents. Machine Learning, 35:155-185.
Berard, H., Gidel, G., Almahairi, A., Vincent, P., and Lacoste-Julien, S. (2019). A Closer Look at the Optimization Landscapes of Generative Adversarial Networks. arXiv:1906.04848.
Cai, Y., Candogan, O., Daskalakis, C., and Papadimitriou, C. (2016). Zero-sum Polymatrix Games: A Generalization of Minmax. Mathematics of Operations Research, 41(2):648-655.
Daskalakis, C., Goldberg, P. W., and Papadimitriou, C. (2009). The Complexity of Computing a Nash Equilibrium. SIAM J. Computing, 39(1):195-259.
Drexler, K. E. (2019). Reframing Superintelligence: Comprehensive AI Services as General Intelligence. Future of Humanity Institute, University of Oxford, Technical Report #2019-1.
Gemp, I. and Mahadevan, S. (2017). Online Monotone Games. arXiv:1710.07328.
Gemp, I. and Mahadevan, S. (2018). Global Convergence to the Equilibrium of GANs using Variational Inequalities. arXiv:1808.01531.
Gidel, G., Hemmat, R. A., Pezeshki, M., Lepriol, R., Huang, G., Lacoste-Julien, S., and Mitliagkas, I. (2019). Negative Momentum for Improved Game Dynamics. In AISTATS.
Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative Adversarial Nets. In NeurIPS.
Hart, S. and Mas-Colell, A. (2003). Uncoupled Dynamics Do Not Lead to Nash Equilibrium. American Economic Review, 93(5):1830-1836.
Hu, J. and Storkey, A. (2014). Multi-period Trading Prediction Markets with Connections to Machine Learning. In ICML.
Kakade, S., Kearns, M., and Ortiz, L. (2003). Graphical economics. In COLT.
Kakade, S., Kearns, M., Ortiz, L., Pemantle, R., and Suri, S. (2005). Economic properties of social networks. In NeurIPS.
Kearns, M., Littman, M., and Singh, S. (2001). Graphical models for game theory. In UAI.
Kingma, D. P. and Ba, J. L. (2015). Adam: A method for stochastic optimization. In ICLR.
Kurakin, A., Goodfellow, I., and Bengio, S. (2017). Adversarial Machine Learning at Scale. In ICLR.
Kwee, I., Hutter, M., and Schmidhuber, J. (2001). Market-based reinforcement learning in partially observable worlds. In ICANN.

Lay, N. and Barbu, A. (2010).
Supervised aggregation of classifiers using artificial prediction markets. In ICML.
Letcher, A., Balduzzi, D., Racanière, S., Martens, J., Foerster, J., Tuyls, K., and Graepel, T. (2019). Differentiable Game Mechanics. JMLR, 20:1-40.
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. (2018). Towards Deep Learning Models Resistant to Adversarial Attacks. In ICLR.
Mescheder, L. (2018). On the convergence properties of GAN training. arXiv:1801.04406.
Mescheder, L., Nowozin, S., and Geiger, A. (2017). The Numerics of GANs. In NeurIPS.
Minsky, M. (1986). The society of mind. Simon and Schuster, New York NY.
Monderer, D. and Shapley, L. S. (1996). Potential Games. Games and Economic Behavior, 14:124-143.
Nemirovski, A., Onn, S., and Rothblum, U. G. (2010). Accuracy certificates for computational problems with convex structure. Mathematics of Operations Research, 35(1).
Nisan, N., Roughgarden, T., Tardos, É., and Vazirani, V., editors (2007). Algorithmic Game Theory. Cambridge University Press, Cambridge.
Palaiopanos, G., Panageas, I., and Piliouras, G. (2017). Multiplicative Weights Update with Constant Step-Size in Congestion Games: Convergence, Limit Cycles and Chaos. In NeurIPS.
Parkes, D. C. and Wellman, M. P. (2015). Economic reasoning and artificial intelligence. Science, 349(6245):267-272.
Pathak, D., Agrawal, P., Efros, A. A., and Darrell, T. (2017). Curiosity-driven Exploration by Self-supervised Prediction. In ICML.
Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.-F., Breazeal, C., Crandall, J. W., Christakis, N. A., Couzin, I. D., Jackson, M. O., Jennings, N. R., Kamar, E., Kloumann, I. M., Larochelle, H., Lazer, D., Mcelreath, R., Mislove, A., Parkes, D. C., Pentland, A. S., Roberts, M. E., Shariff, A., Tenenbaum, J. B., and Wellman, M. (2019). Machine behaviour. Nature, 568:477-486.
Scott, J. (1999). Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. Yale University Press.
Selfridge, O. G. (1958). Pandemonium: a paradigm for learning. In Mechanisation of Thought Processes: Proc Symposium Held at the National Physics Laboratory.
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T. P., Leach, M., Kavukcuoglu, K., Graepel, T., and Hassabis, D. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489.
Smith, A. (1776). The Wealth of Nations. W. Strahan and T. Cadell, London.
Storkey, A. (2011). Machine Learning Markets. In AISTATS.
Storkey, A., Millin, J., and Geras, K. (2012). Isoelastic Agents and Wealth Updates in Machine Learning Markets. In ICML.
Sutton, R., Modayil, J., Delp, M., Degris, T., Pilarski, P. M., White, A., and Precup, D. (2011). Horde: A Scalable Real-time Architecture for Learning Knowledge from Unsupervised Motor Interaction. In AAMAS.
Tatarenko, T. and Kamgarpour, M. (2019). Learning Generalized Nash Equilibria in a Class of Convex Games. IEEE Transactions on Automatic Control, 64(4):1426-1439.
Vickrey, W. (1961). Counterspeculation, Auctions and Competitive Sealed Tenders. J Finance, 16:8-37.
Vinyals, O., Babuschkin, I., Chung, J., Mathieu, M., Jaderberg, M., Czarnecki, W. M., Dudzik, A., Huang, A., Georgiev, P., Powell, R., Ewalds, T., Horgan, D., Kroiss, M., Danihelka, I., Agapiou, J., Oh, J., Dalibard, V., Choi, D., Sifre, L., Sulsky, Y., Vezhnevets, S., Molloy, J., Cai, T., Budden, D., Paine, T., Gulcehre, C., Wang, Z., Pfaff, T., Pohlen, T., Wu, Y., Yogatama, D., Cohen, J., McKinney, K., Smith, O., Schaul, T., Lillicrap, T., Apps, C., Kavukcuoglu, K., Hassabis, D., and Silver, D. (2019). AlphaStar: Mastering the Real-Time Strategy Game StarCraft II. https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/.
von Neumann, J. (1928).
Zur Theorie der Gesellschaftsspiele. Mathematische Annalen, 100(1):295-320. +von Neumann, J. and Morgenstern, O. (1944). Theory of Games and Economic Behavior. Princeton University Press, Princeton NJ. +Wellman, M. P. and Wurman, P. R. (1998). Market-aware agents for a multiagent world. Robotics and Autonomous Systems, 24:115-125. +Wu, Y., Donahue, J., Balduzzi, D., Simonyan, K., and Lillicrap, T. (2019). LOGAN: Latent Optimisation for Generative Adversarial Networks. In arXiv:1912.00953. +Zhu, J.-Y., Park, T., Isola, P., and Efros, A. A. (2017). Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. In CVPR. + +# APPENDIX + +# A MECHANICS OF SMOOTH MARKETS + +This section provides a physics-inspired perspective on smooth markets. Consider a dynamical system with $n$ particles moving according to the differential equations: + +$$ +\frac {d \mathbf {w} _ {1}}{d t} = \boldsymbol {\xi} _ {1} (\mathbf {w} _ {1}, \dots , \mathbf {w} _ {n}) +$$ + +: + +$$ +\frac {d \mathbf {w} _ {n}}{d t} = \boldsymbol {\xi} _ {n} (\mathbf {w} _ {1}, \dots , \mathbf {w} _ {n}). +$$ + +The kinetic energy of a particle is mass times velocity squared, $mv^2$ , or in our case + +$$ +\text {e n e r g y} i ^ {\text {t h}} \text {p a r t i c l e} = \eta_ {i} ^ {2} \cdot f _ {i} (\mathbf {w}) = \eta_ {i} ^ {2} \cdot \| \boldsymbol {\xi} _ {i} \| ^ {2} +$$ + +where we interpret the learning rate squared $\eta_i^2$ of particle $i$ as its mass and $\xi_{i}$ as its velocity. The total energy of the system is the sum over the kinetic energies of the particles: + +$$ +\text {t o t a l e n e r g y} = \mathfrak {f} _ {\boldsymbol {\eta}} (\mathbf {w}) = \sum_ {i} \eta_ {i} ^ {2} \cdot \| \boldsymbol {\xi} _ {i} \| ^ {2}. 
+$$ + +For example, in a Hamiltonian game we have that energy is conserved: + +$$ +\text {t o t a l e n e r g y} = \| \boldsymbol {\xi} \| ^ {2} = \text {c o n s t a n t} \quad \text {o r} \quad \frac {d \mathfrak {f}}{d t} (\mathbf {w}) = 0 +$$ + +since $\pmb{\xi}^{\mathsf{T}}\cdot \nabla \| \pmb {\xi}\|^{2} = \pmb{\xi}^{\mathsf{T}}\cdot \mathbf{A}^{\mathsf{T}}\pmb {\xi} = 0$ , see Balduzzi et al. (2018); Letcher et al. (2019) for details. + +Energy is measured in joules $(kg\cdot m\cdot s^{-2})$ . The rate of change of energy with respect to time is power, measured in joules per second or watts $(kg\cdot m\cdot s^{-3})$ . Conservation of energy means that a (closed) Hamiltonian system, in aggregate, generates no power. The existence of an invariant function makes Hamiltonian systems easy to reason about in many ways. + +Smooth markets are more general than Hamiltonian games in that total energy is not necessarily conserved. Nevertheless, they are much more constrained than general dynamical systems. Legibility, proposition 3, says that the total power (total rate of energy generation) in smooth markets is the sum of the power (rate of energy generation) of the individual particles: + +$$ +\frac {d \mathfrak {f}}{d t} (\mathbf {w}) = \sum_ {i} \frac {d \mathfrak {f} _ {i}}{d t} (\mathbf {w}). +$$ + +Example where legibility fails. Once again, it is instructive to look at a concrete example where legibility fails. Recall the potential game in example 1 with profits + +$$ +\pi_ {1} (\mathbf {w}) = w _ {1} w _ {2} - \frac {\epsilon}{2} w _ {1} ^ {2} \quad \text {a n d} \quad \pi_ {2} (\mathbf {w}) = w _ {1} w _ {2} - \frac {\epsilon}{2} w _ {2} ^ {2}. +$$ + +and sentiments + +$$ +\frac {d \mathfrak {f} _ {1}}{d t} (\mathbf {w}) = - \epsilon (w _ {2} - \epsilon w _ {1}) ^ {2} \quad \text {a n d} \quad \frac {d \mathfrak {f} _ {2}}{d t} (\mathbf {w}) = - \epsilon (w _ {1} - \epsilon w _ {2}) ^ {2}. 
+$$
+
+Physically, the negative sentiments $\frac{d\mathfrak{f}_1}{dt} < 0$ and $\frac{d\mathfrak{f}_2}{dt} < 0$ mean that each "particle" in the system, considered in isolation, is always dissipating energy. Nevertheless, as shown in section 5.1, the system as a whole has
+
+$$
+\frac {d \mathfrak {f}}{d t} (\mathbf {w}) = - \epsilon (w _ {2} - \epsilon w _ {1}) ^ {2} - \epsilon (w _ {1} - \epsilon w _ {2}) ^ {2} + (1 + \epsilon^ {2}) w _ {1} w _ {2} - \epsilon (w _ {1} + w _ {2}) ^ {2}
+$$
+
+which is positive for some values of $\mathbf{w}$ . Thus, the system as a whole can generate energy through interaction effects between the (dissipative) particles.
+
+# B PROOFS
+
+# Proof of lemma 1.
+
+Lemma 1. Every continuous dynamical system on $\mathbb{R}^d$ , for any $d$ , arises as simultaneous gradient ascent on the profit functions of a smooth game.
+
+Proof. Specifically, we mean that every dynamical system of the form $\frac{dw}{dt} = \xi(w)$ arises as simultaneous gradient ascent on the profits of a smooth game.
+
+Given a continuous vector field $\xi$ on $\mathbb{R}^d$ , we need to construct a smooth game with dynamics given by $\xi$ . To that end, consider a $d$ -player game where player $i$ controls coordinate $w_i$ . Set the profit of player $i$ to
+
+$$
+\pi_ {i} (\mathbf {w}) := \int_ {x = 0} ^ {x = w _ {i}} \xi_ {i} (w _ {1}, \dots , w _ {i - 1}, x, w _ {i + 1}, \dots , w _ {d}) d x
+$$
+
+and observe that $\frac{\partial\pi_i}{\partial w_i} (\mathbf{w}) = \xi_i(\mathbf{w})$ by the fundamental theorem of calculus.
+
+Proof of proposition 2. Before proving proposition 2, we first prove a lemma.
+
+Lemma 7 (generalized Helmholtz decomposition). The Jacobian decomposes into $\mathbf{J}(\mathbf{w}) = \mathbf{S}(\mathbf{w}) + \mathbf{A}(\mathbf{w})$ where $\mathbf{S}(\mathbf{w})$ and $\mathbf{A}(\mathbf{w})$ are symmetric and antisymmetric, respectively, for all $\mathbf{w} \in \mathbb{R}^d$ .
+
+Proof. Follows immediately. See Letcher et al. (2019) for details and explanation.
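Both lemma 7's decomposition and the legibility failure in example 1 can be checked numerically. The following sketch is our own illustration (plain Python, not from the paper); the value $\epsilon = 0.1$ and the evaluation point $\mathbf{w} = (1, 1)$ are arbitrary choices.

```python
# Sanity check (ours) of lemma 7 and of the legibility failure in example 1.
# Two players, one parameter each.
eps = 0.1

def xi(w1, w2):
    # simultaneous gradient (d pi_1 / d w1, d pi_2 / d w2) of the potential game
    return (w2 - eps * w1, w1 - eps * w2)

# The Jacobian of xi is constant for this game.
J = [[-eps, 1.0], [1.0, -eps]]

# Generalized Helmholtz decomposition J = S + A (lemma 7).
S = [[(J[i][j] + J[j][i]) / 2.0 for j in range(2)] for i in range(2)]
A = [[(J[i][j] - J[j][i]) / 2.0 for j in range(2)] for i in range(2)]

# v^T A^T v = 0 for every v: the fact used repeatedly in the proofs.
v = (0.3, -0.7)
vAv = sum(v[i] * A[j][i] * v[j] for i in range(2) for j in range(2))

w = (1.0, 1.0)
x1, x2 = xi(*w)

# Each player's sentiment xi_i * (d^2 pi_i / d w_i^2) * xi_i is negative ...
sentiment1 = -eps * x1 ** 2
sentiment2 = -eps * x2 ** 2

# ... yet the aggregate power xi^T J^T xi is positive here: legibility fails.
total_power = sum(xi(*w)[i] * J[j][i] * xi(*w)[j]
                  for i in range(2) for j in range(2))

# Contrast: the Hamiltonian game pi_1 = w1*w2, pi_2 = -w1*w2 has xi_h = (w2, -w1)
# with a purely antisymmetric Jacobian, so it generates no power.
Jh = [[0.0, 1.0], [-1.0, 0.0]]
xh = (w[1], -w[0])
hamiltonian_power = sum(xh[i] * Jh[j][i] * xh[j]
                        for i in range(2) for j in range(2))

print(sentiment1, sentiment2, total_power, hamiltonian_power)
```

Note that for this potential game the antisymmetric part $\mathbf{A}$ happens to vanish; the interaction effects that generate energy live entirely in the off-diagonal entries of $\mathbf{S}$.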
+
+Proposition 2. Sentiment is additive: $D_{\mathbf{v}}f_{\mathbf{v}}(\mathbf{w}) = \sum_{i}D_{\mathbf{v}_{i}}f_{\mathbf{v}_{i}}(\mathbf{w})$ .
+
+Proof. For any collection of updates $(\mathbf{v}_i)_{i = 1}^n$ , we need to show that
+
+$$
+\mathbf {v} ^ {\mathsf {T}} \cdot \nabla \mathsf {f} _ {\mathbf {v}} (\mathbf {w}) = \sum_ {i} \mathbf {v} _ {i} ^ {\mathsf {T}} \cdot \nabla \mathsf {f} _ {\mathbf {v} _ {i}} (\mathbf {w}).
+$$
+
+Direct computation obtains $\mathbf{v}^{\top}\nabla \mathsf{f}_{\mathbf{v}}(\mathbf{w}) = \mathbf{v}^{\top}\mathbf{J}\mathbf{v} = \mathbf{v}^{\top}\mathbf{S}\mathbf{v} + \mathbf{v}^{\top}\mathbf{A}\mathbf{v} = \sum_{i}\mathbf{v}_{i}^{\top}\mathbf{S}_{ii}\mathbf{v}_{i} = \sum_{i}\mathbf{v}_{i}^{\top}\cdot \nabla \mathsf{f}_{\mathbf{v}_{i}}(\mathbf{w})$ because $\mathbf{A}$ is antisymmetric and $\mathbf{S}$ is block-diagonal.
+
+Proof of proposition 3. First we prove a lemma.
+
+Lemma 8. $\pmb{\xi}_{\eta}^{\top}(\mathbf{w})\cdot \nabla \mathfrak{f}_{\eta}(\mathbf{w}) = \sum_{i = 1}^{n}\eta_{i}^{2}\cdot \pmb{\xi}_{i}^{\top}\nabla \mathfrak{f}_{i}(\mathbf{w}).$
+
+Proof. Observe by direct computation that
+
+$$
+\nabla \mathfrak {f} _ {i} = \mathbf {J} ^ {\mathsf {T}} \cdot \left( \begin{array}{c} 0 \\ \vdots \\ \boldsymbol {\xi} _ {i} \\ \vdots \\ 0 \end{array} \right), \quad \text {and so} \quad \nabla (\eta_ {i} \mathfrak {f} _ {i}) = \mathbf {J} ^ {\mathsf {T}} \cdot \left( \begin{array}{c} 0 \\ \vdots \\ \eta_ {i} \cdot \boldsymbol {\xi} _ {i} \\ \vdots \\ 0 \end{array} \right).
+$$
+
+It is then easy to see that $\nabla \mathfrak{f}_{\eta} = \sum_{i}\eta_{i}\cdot \nabla \mathfrak{f}_{i} = \sum_{i}\eta_{i}\cdot \mathbf{J}^{\intercal}\pmb{\xi}_{i} = \mathbf{J}^{\intercal}\pmb{\xi}_{\eta}$ .
Thus,
+
+$$
+\boldsymbol {\xi} _ {\eta} ^ {\intercal} \cdot \nabla \mathfrak {f} _ {\eta} = \boldsymbol {\xi} _ {\eta} ^ {\intercal} \cdot \mathbf {J} ^ {\intercal} \cdot \boldsymbol {\xi} _ {\eta} = \boldsymbol {\xi} _ {\eta} ^ {\intercal} \cdot (\mathbf {S} + \mathbf {A} ^ {\intercal}) \cdot \boldsymbol {\xi} _ {\eta}
+$$
+
+where $\mathbf{S} = \mathbf{S}^{\intercal}$ since $\mathbf{S}$ is symmetric. By antisymmetry of $\mathbf{A}$ , we have that $\mathbf{v}^{\intercal}\mathbf{A}^{\intercal}\mathbf{v} = 0$ for all $\mathbf{v}$ . The expression thus simplifies to
+
+$$
+\boldsymbol {\xi} _ {\eta} ^ {\intercal} \cdot \nabla \mathfrak {f} _ {\eta} = \boldsymbol {\xi} _ {\eta} ^ {\intercal} \cdot \mathbf {S} \cdot \boldsymbol {\xi} _ {\eta} = \sum_ {i = 1} ^ {n} \eta_ {i} ^ {2} \cdot (\boldsymbol {\xi} _ {i} ^ {\intercal} \cdot \mathbf {S} _ {i i} \cdot \boldsymbol {\xi} _ {i}) = \sum_ {i = 1} ^ {n} \eta_ {i} ^ {2} \cdot \boldsymbol {\xi} _ {i} ^ {\intercal} \nabla_ {i} \mathfrak {f} _ {i}
+$$
+
+by the block-diagonal structure of $\mathbf{S}$ .
+
+![](images/71ab1322c63c5678887b5b0458c6c2dcd9c751a83fb628650c4daa5b59313449.jpg)
+
+Proposition 3 (legibility under gradient dynamics). Fix dynamics $\frac{dw}{dt} \coloneqq \xi_{\eta}(w)$ . Sentiment decomposes additively:
+
+$$
+\frac {d \mathfrak {f} _ {\eta}}{d t} = \sum_ {i} \eta_ {i} \cdot \frac {d \mathfrak {f} _ {i}}{d t} .
+$$
+
+Proof. Applying the chain rule obtains that
+
+$$
+\frac {d \mathfrak {f} _ {\eta}}{d t} = \left\langle \nabla_ {\mathbf {w}} \mathfrak {f} _ {\eta}, \frac {d \mathbf {w}}{d t} \right\rangle = \left\langle \nabla_ {\mathbf {w}} \mathfrak {f} _ {\eta}, \pmb {\xi} _ {\eta} (\mathbf {w}) \right\rangle ,
+$$
+
+where the second equality follows by construction of the dynamical system as $\frac{d\mathbf{w}}{dt} = \xi_{\eta}(\mathbf{w})$ .
Lemma 8 shows that
+
+$$
+\begin{array}{l} \left\langle \nabla_ {\mathbf {w}} \mathfrak {f} _ {\eta}, \boldsymbol {\xi} _ {\eta} (\mathbf {w}) \right\rangle = \boldsymbol {\xi} _ {\eta} ^ {\intercal} \mathbf {J} ^ {\intercal} \boldsymbol {\xi} _ {\eta} \\ = \boldsymbol {\xi} _ {\eta} ^ {\intercal} (\mathbf {S} + \mathbf {A} ^ {\intercal}) \boldsymbol {\xi} _ {\eta} \\ = \sum_ {i} \eta_ {i} ^ {2} \cdot \boldsymbol {\xi} _ {i} ^ {\intercal} \mathbf {S} _ {i i} \boldsymbol {\xi} _ {i} \\ = \sum_ {i} \eta_ {i} ^ {2} \left\langle \boldsymbol {\xi} _ {i}, \nabla \mathfrak {f} _ {i} (\mathbf {w}) \right\rangle . \\ \end{array}
+$$
+
+Finally, since $\frac{d\mathbf{w}_i}{dt} = \eta_i\cdot \pmb {\xi}_i(\mathbf{w})$ by construction, we have
+
+$$
+\begin{array}{l} \left\langle \eta_ {i} \cdot \boldsymbol {\xi} _ {i}, \eta_ {i} \cdot \nabla \mathfrak {f} _ {i} (\mathbf {w}) \right\rangle = \left\langle \frac {d \mathbf {w} _ {i}}{d t}, \nabla_ {\mathbf {w} _ {i}} \left(\eta_ {i} \mathfrak {f} _ {i} (\mathbf {w})\right) \right\rangle \\ = \frac {d \left(\eta_ {i} \mathfrak {f} _ {i}\right)}{d t} = \eta_ {i} \frac {d \mathfrak {f} _ {i}}{d t} \\ \end{array}
+$$
+
+for all $i$ as required.
+
+![](images/196308823c2a5fa2f10aa009cc4b8d834bda583f170e261d406ee60f5fd68e97.jpg)
+
+# Proof of theorem 4.
+
+Theorem 4. A fixed point in an SM-game is a local Nash equilibrium iff it is stable.
+
+Proof. Suppose that $\mathbf{w}^*$ is a fixed point of the game, that is, suppose $\xi (\mathbf{w}^{*}) = \mathbf{0}$ .
+
+Recall from lemma 7 that the Jacobian of $\xi$ decomposes uniquely into two components $\mathbf{J}(\mathbf{w}) = \mathbf{S}(\mathbf{w}) + \mathbf{A}(\mathbf{w})$ where $\mathbf{S} \equiv \mathbf{S}^{\intercal}$ is symmetric and $\mathbf{A} + \mathbf{A}^{\intercal} \equiv 0$ is antisymmetric.
It follows that $\mathbf{v}^{\intercal}\mathbf{J}\mathbf{v} = \mathbf{v}^{\intercal}\mathbf{S}\mathbf{v} + \mathbf{v}^{\intercal}\mathbf{A}\mathbf{v} = \mathbf{v}^{\intercal}\mathbf{S}\mathbf{v}$ since $\mathbf{A}$ is antisymmetric. Thus, $\mathbf{w}^{*}$ is a stable fixed point iff $\mathbf{S}(\mathbf{w}^{*}) \prec 0$ , i.e., iff it is negative definite.
+
+In an SM-game, the antisymmetric component is arbitrary and the symmetric component is block diagonal – where blocks correspond to players' parameters. That is, $\mathbf{S}_{ij} = \mathbf{0}$ for $i \neq j$ because the interactions between players $i$ and $j$ are pairwise zero-sum – and are therefore necessarily confined to the antisymmetric component of the Jacobian. Since $\mathbf{S}$ is block-diagonal, it follows that $\mathbf{S}$ is negative definite iff the submatrices $\mathbf{S}_{ii}$ along the diagonal are negative definite for all players $i$ .
+
+Finally, $\mathbf{S}_{ii}(\mathbf{w}^{*}) = \nabla_{ii}^{2}\pi_{i}(\mathbf{w}^{*})$ is negative definite iff profit $\pi_i(\mathbf{w}^*)$ is strictly concave in the parameters controlled by player $i$ at $\mathbf{w}^*$ . The result follows.
+
+# Proof of theorem 5.
+
+Theorem 5. In continuous time, for all positive learning rates $\eta >0$ :
+
+1. If $\mathbf{w}^*$ is a stable fixed point ( $\mathbf{S} \prec \mathbf{0}$ ), then there is an open neighborhood $U \ni \mathbf{w}^*$ where $\frac{d\mathfrak{f}_{\eta}}{dt}(\mathbf{w}) < 0$ for all $\mathbf{w} \in U \setminus \{\mathbf{w}^*\}$ , so the dynamics converge to $\mathbf{w}^*$ from anywhere in $U$ .
+2. If $\mathbf{w}^*$ is an unstable fixed point ( $\mathbf{S} \succ \mathbf{0}$ ), there is an open neighborhood $U \ni \mathbf{w}^*$ such that $\frac{d\mathfrak{f}_{\eta}}{dt}(\mathbf{w}) > 0$ for all $\mathbf{w} \in U \setminus \{\mathbf{w}^*\}$ , so the dynamics within $U$ are repelled by $\mathbf{w}^*$ .
+
+Proof. We prove the first part. The second follows by a symmetric argument.
First, strict concavity implies $\mathbf{S}_{ii} = \nabla_{ii}^{2}\pi_{i}$ is negative definite for all $i$ . Second, since $\mathbf{S}$ is block-diagonal, with zeros in all blocks $\mathbf{S}_{ij}$ for pairs of players $i \neq j$ , it follows that $\mathbf{S}$ is also negative definite. Observe that
+
+$$
+\boldsymbol {\xi} _ {\eta} ^ {\intercal} \cdot \nabla \mathfrak {f} _ {\eta} = \boldsymbol {\xi} _ {\eta} ^ {\intercal} \cdot \mathbf {J} ^ {\intercal} \boldsymbol {\xi} _ {\eta} = \boldsymbol {\xi} _ {\eta} ^ {\intercal} \cdot (\mathbf {S} + \mathbf {A} ^ {\intercal}) \boldsymbol {\xi} _ {\eta} = \boldsymbol {\xi} _ {\eta} ^ {\intercal} \cdot \mathbf {S} \cdot \boldsymbol {\xi} _ {\eta} < 0
+$$
+
+for all $\xi_{\eta} \neq 0$ since $\mathbf{S}$ is negative definite. Thus, simultaneous gradient ascent on the profits acts to infinitesimally reduce the function $\mathfrak{f}_{\eta}(\mathbf{w})$ .
+
+Since $\xi_{\eta}$ reduces $\mathfrak{f}_{\eta}$ , it will converge to a stationary point satisfying $\nabla \mathfrak{f}_{\eta} = 0$ . Observe that $\nabla \mathfrak{f}_{\eta} = 0$ iff $\xi_{\eta} = 0$ since $\nabla \mathfrak{f}_{\eta} = \mathbf{J}^{\intercal}\xi_{\eta}$ and the symmetric component $\mathbf{S}$ of the Jacobian is negative definite. Finally, observe that all stationary points of $\mathfrak{f}_{\eta}$ , and hence $\xi_{\eta}$ , are stable fixed points of $\xi_{\eta}$ because $\mathbf{S}$ is negative definite, which implies that the fixed point is a Nash equilibrium.
+
+# Proof of theorem 6.
+
+Theorem 6. Suppose all firms have negative sentiment, $\frac{d\mathfrak{f}_i}{dt} (\mathbf{w}) < 0$ , for sufficiently large values of $\|\mathbf{w}_i\|$ . Then the dynamics are bounded for any learning rates $\pmb{\eta} \succ \mathbf{0}$ .
+
+Proof. Fix $\pmb{\eta} \succ \mathbf{0}$ and also fix $d > 0$ such that $\frac{d\mathfrak{f}_i}{dt}(\mathbf{w}) < 0$ for all $\mathbf{w}$ satisfying $\|\mathbf{w}_i\|_2 > d$ .
Let $U(d) = \{\mathbf{w} : \| \mathbf{w}_i\|_2 > d \text{ for all } i\}$ and suppose $g > 0$ is sufficiently large such that $\mathfrak{f}_{\pmb{\eta}}^{-1}(g) = \{\mathbf{w} : \mathfrak{f}_{\pmb{\eta}}(\mathbf{w}) = g\} \subset U(d)$ . We show that
+
+$$
+\mathfrak {f} _ {\boldsymbol {\eta}} \left(\mathbf {w} ^ {(0)}\right) < g \quad \text {implies} \quad \mathfrak {f} _ {\boldsymbol {\eta}} \left(\mathbf {w} ^ {(t)}\right) < g \quad \text {for all } t > 0,
+$$
+
+for the dynamical system defined by $\frac{d\mathbf{w}}{dt} = \pmb{\xi}_{\pmb{\eta}}$ . Since we are operating in continuous time, all that is required is to show that $\mathfrak{f}_{\pmb{\eta}}(\mathbf{w}^{(t)}) = g' < g$ implies that $\mathfrak{f}_{\pmb{\eta}}(\mathbf{w}^{(t + \epsilon)}) < g'$ for all sufficiently small $\epsilon > 0$ .
+
+Recall that $\frac{d\mathfrak{f}_i}{dt}(\mathbf{w}) \coloneqq \pmb{\xi}_i^{\top} \cdot \nabla_{ii}^2\pi_i \cdot \pmb{\xi}_i = D_{\pmb{\xi}_i}\left(\frac{1}{2}\|\pmb{\xi}_i\|_2^2\right)$ . It follows immediately that $D_{\pmb{\xi}_\eta}\left(\frac{1}{2}\|\pmb{\xi}_\eta\|_2^2\right) = \sum_i\eta_i \cdot \frac{d\mathfrak{f}_i}{dt}(\mathbf{w}) < 0$ for all $\mathbf{w}$ in a sufficiently small ball centered at $\mathbf{w}^{(t)}$ . In other words, the dynamics $\frac{d\mathbf{w}}{dt} = \pmb{\xi}_\eta$ reduce $\mathfrak{f}_\eta$ and the result follows.
+
+# C NEAR SM-GAMES: EXPERIENTIAL VALUE AND THE EXCHANGE OF GOODS
+
+Definition 3 proposes a model of monetary exchange in smooth markets. It ignores some major aspects of actual markets. For example, SM-games do not model inventories, investment, borrowing or interest rates. Moreover, in practice money is typically exchanged in return for goods or services – which are ignored by the model.
+
+In this section, we sketch one way to extend SM-games to model the exchange of both money and goods - although still without accounting for inventories, which would more significantly complicate the model.
The proposed extension is extremely simplistic. It is provided to indicate how the model's expressive power can be increased, and the complications that result.
+
+Suppose
+
+$$
+\pi_ {i} (\mathbf {w}) = f _ {i} (\mathbf {w}) + \sum_ {j \neq i} \left(\alpha_ {i j} \omega_ {i j} (\mathbf {w} _ {i}, \mathbf {w} _ {j}) - g _ {i j} (\mathbf {w} _ {i}, \mathbf {w} _ {j})\right).
+$$
+
+The functions $\omega_{ij}$ measure the amount of goods (say, widgets) that are exchanged between firms $i$ and $j$ . We assume that $\omega_{ij} + \omega_{ji} \equiv 0$ since widgets are physically passed between the firms, and therefore one firm's increase must be the other's decrease. For two firms to enter into an exchange it must be that they subjectively value the widgets differently; hence we introduce the parameters $\alpha_{ij}$ . Note that if $\alpha_{ij} = 1$ for all $i,j$ then the model is equivalent to an SM-game.
+
+The transaction between firms $i$ and $j$ is net beneficial to both firms if
+
+$$
+\alpha_ {i j} \cdot \omega_ {i j} \left(\mathbf {w} _ {i}, \mathbf {w} _ {j}\right) > g _ {i j} \left(\mathbf {w} _ {i}, \mathbf {w} _ {j}\right)
+$$
+
+and, simultaneously,
+
+$$
+\alpha_ {j i} \cdot \omega_ {j i} (\mathbf {w} _ {i}, \mathbf {w} _ {j}) > g _ {j i} (\mathbf {w} _ {i}, \mathbf {w} _ {j}).
+$$
+
+We can interpret the inequalities as follows. First suppose that $\omega_{ij}$ and $g_{ij}$ always have the same sign. The assumption is reasonable so long as firms do not pay to give away widgets. Further assume without loss of generality that $\omega_{ij}$ and $g_{ij}$ are both greater than zero – in other words, firm $i$ is buying widgets from firm $j$ .
The above inequalities can then be rewritten as
+
+$$
+\underbrace {\alpha_ {i j} \cdot \omega_ {i j} (\mathbf {w} _ {i} , \mathbf {w} _ {j})} _ {\text {amount firm } i \text { values the widgets}} > \underbrace {g _ {i j} (\mathbf {w} _ {i} , \mathbf {w} _ {j})} _ {\text {amount firm } i \text { pays}}
+$$
+
+and
+
+$$
+\underbrace {\alpha_ {j i} \cdot \omega_ {i j} (\mathbf {w} _ {i} , \mathbf {w} _ {j})} _ {\text {amount firm } j \text { values the widgets}} < \underbrace {g _ {i j} (\mathbf {w} _ {i} , \mathbf {w} _ {j})} _ {\text {amount firm } j \text { is paid}} .
+$$
+
+It follows that both firms benefit from the transaction.
+
+Implications for dynamics. The off-block-diagonal terms of the symmetric and anti-symmetric components of the game Jacobian are
+
+$$
+\mathbf {S} _ {i j} = \frac {\alpha_ {i j} - \alpha_ {j i}}{2} \cdot \nabla_ {i j} ^ {2} \omega_ {i j} (\mathbf {w} _ {i}, \mathbf {w} _ {j})
+$$
+
+and
+
+$$
+\mathbf {A} _ {i j} = \frac {\alpha_ {i j} + \alpha_ {j i}}{2} \cdot \nabla_ {i j} ^ {2} \omega_ {i j} (\mathbf {w} _ {i}, \mathbf {w} _ {j})
+$$
+
+where it is easy to check that $\mathbf{S}_{ij} = \mathbf{S}_{ji}$ and $\mathbf{A}_{ij} + \mathbf{A}_{ji} = 0$ . The off-block-diagonal terms of $\mathbf{S}$ have consequences for how forecasts behave:
+
+$$
+\begin{array}{l} \boldsymbol {\xi} _ {\eta} ^ {\intercal} \cdot \nabla \mathfrak {f} _ {\eta} = \sum_ {i} \eta_ {i} ^ {2} \cdot \underbrace {\left(\boldsymbol {\xi} _ {i} ^ {\intercal} \cdot \nabla_ {i i} ^ {2} \pi_ {i} \cdot \boldsymbol {\xi} _ {i}\right)} _ {\text {sentiment}} \tag {5} \\ + \sum_ {i \neq j} \eta_ {i} \eta_ {j} \cdot \underbrace {\left(\alpha_ {i j} - \alpha_ {j i}\right) \cdot \left(\boldsymbol {\xi} _ {i} ^ {\intercal} \cdot \nabla_ {i j} ^ {2} \omega_ {i j} \cdot \boldsymbol {\xi} _ {j}\right)} _ {\text {correction terms}} \\ \end{array}
+$$
+
+When are near SM-games well-behaved?
If $\alpha_{ij} = \alpha_{ji}$ for all $i,j$ then the correction is zero; if $\alpha_{ij} \sim \alpha_{ji}$ then the corrections due to different valuations of goods will be negligible, and the game should be correspondingly well-behaved.
+
+What can go wrong? Eq (5) implies that the dynamics of near SM-games – specifically whether the dynamics are increasing or decreasing the aggregate forecast – cannot be explained in terms of the sum of the sentiments of the individual firms. The correction terms involve interactions between dynamics of different firms and the (second-order) quantities of goods exchanged. In principle, these terms could be arbitrarily large positive or negative numbers.
+
+Concretely, the correction terms involving couplings between dynamics of different firms can lead to positive feedback loops, as in example 1, where the dynamics spiral off to infinity even though both players have strongly concave profit functions.
+
+# D THE STOP_GRADIENT OPERATOR
+
+Lemma 9. Any smooth vector field can be constructed as the gradient of a function augmented with stop-gradient operators.
+
+Proof. Suppose $\pmb{\xi} = \left(\frac{\partial f_1(\mathbf{w})}{\partial w_1}, \dots, \frac{\partial f_d(\mathbf{w})}{\partial w_d}\right)$ . Define
+
+$$
+g (\mathbf {w}) = \sum_ {i = 1} ^ {d} f _ {i} \left(w _ {i}, \operatorname {stop\_gradient} \left(\mathbf {w} _ {- i}\right)\right),
+$$
+
+where $\mathbf{w}_{-i}$ denotes the coordinates of $\mathbf{w}$ other than $w_i$ . It follows that
+
+$$
+\nabla_ {\mathrm {A D}} \left[ g (\mathbf {w}) \right] = \boldsymbol {\xi} (\mathbf {w})
+$$
+
+as required.
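Lemma 9's construction can be exercised with a toy automatic-differentiation system. The sketch below is entirely our own (real frameworks such as TensorFlow or JAX provide `stop_gradient` natively): it implements forward-mode AD with dual numbers and realizes the non-conservative vector field $\xi(\mathbf{w}) = (w_2, -w_1)$, which is not the gradient of any ordinary function, as $\nabla_{\mathrm{AD}}$ of an augmented function.

```python
# Minimal forward-mode AD with a stop_gradient operator (illustrative sketch,
# not the implementation used by any real framework).
class Dual:
    """A number carrying a value and a directional derivative (val, dot)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def _lift(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._lift(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    def __sub__(self, other):
        other = self._lift(other)
        return Dual(self.val - other.val, self.dot - other.dot)

    def __mul__(self, other):
        other = self._lift(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

def stop_gradient(x):
    # pass the value through, kill the derivative: the operator of lemma 9
    return Dual(x.val, 0.0)

def grad_ad(g, w):
    """nabla_AD[g] at w: one forward-mode pass per coordinate."""
    out = []
    for i in range(len(w)):
        duals = [Dual(wj, 1.0 if j == i else 0.0) for j, wj in enumerate(w)]
        out.append(g(duals).dot)
    return out

# Target field xi(w) = (w2, -w1), built from f_1 = w1*w2 and f_2 = -w1*w2 as
# g(w) = f_1(w1, sg(w2)) + f_2(sg(w1), w2).
def g(w):
    w1, w2 = w
    return w1 * stop_gradient(w2) - stop_gradient(w1) * w2

print(grad_ad(g, [1.0, 2.0]))  # [2.0, -1.0], i.e. xi at (1.0, 2.0)
```

Replacing `stop_gradient` with the identity collapses `g` to the constant $0$, whose gradient is the zero field; the operator is what lets the non-conservative part of $\xi$ survive differentiation.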
+ +![](images/bb5c4d45b4584a7d8046ec8a02b6ba62ad41493b6d14c1512c41c2d68cb00fad.jpg) \ No newline at end of file diff --git a/smoothmarketsabasicmechanismfororganizinggradientbasedlearners/images.zip b/smoothmarketsabasicmechanismfororganizinggradientbasedlearners/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..c03db2fa1434f47138dfa9d4105a3a5c26f21855 --- /dev/null +++ b/smoothmarketsabasicmechanismfororganizinggradientbasedlearners/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:09b3612c274bc4736b6704a8fb48d06096af11f07a615450e359869079a60de0 +size 438551 diff --git a/smoothmarketsabasicmechanismfororganizinggradientbasedlearners/layout.json b/smoothmarketsabasicmechanismfororganizinggradientbasedlearners/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..36cb5eade9f387e53c2e02efdc5ecb2313b162bd --- /dev/null +++ b/smoothmarketsabasicmechanismfororganizinggradientbasedlearners/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:68735afce1f0f7b78fc0456ab62a9b35ce08f3fe9591072c7b824557dc9f3532 +size 830810 diff --git a/smoothnessandstabilityingans/b80d0bc8-8774-47af-bb9b-c40854b9f0f6_content_list.json b/smoothnessandstabilityingans/b80d0bc8-8774-47af-bb9b-c40854b9f0f6_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..18daaadb97cc6e6c53e628c03cc09fc42bc2980d --- /dev/null +++ b/smoothnessandstabilityingans/b80d0bc8-8774-47af-bb9b-c40854b9f0f6_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a52ae5be6a81c34cd26abcbc7372c22cba2fc23d43282376c5d83f84de84aa20 +size 229384 diff --git a/smoothnessandstabilityingans/b80d0bc8-8774-47af-bb9b-c40854b9f0f6_model.json b/smoothnessandstabilityingans/b80d0bc8-8774-47af-bb9b-c40854b9f0f6_model.json new file mode 100644 index 0000000000000000000000000000000000000000..8643b45e4d872b507ee5752eea97e667b2fe555b --- /dev/null +++ 
b/smoothnessandstabilityingans/b80d0bc8-8774-47af-bb9b-c40854b9f0f6_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:29c9ea54c384ae82ff342a11dbaf6fd111fe7842296d52f52045fa972a454e4a +size 266179 diff --git a/smoothnessandstabilityingans/b80d0bc8-8774-47af-bb9b-c40854b9f0f6_origin.pdf b/smoothnessandstabilityingans/b80d0bc8-8774-47af-bb9b-c40854b9f0f6_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6b0bcbe8de793c203a24d6e2b408d45a0e7a4907 --- /dev/null +++ b/smoothnessandstabilityingans/b80d0bc8-8774-47af-bb9b-c40854b9f0f6_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ac96d082a98493eb8ce3e233a324be662ecfd647e56d5715d766ab7dd8d73790 +size 616159 diff --git a/smoothnessandstabilityingans/full.md b/smoothnessandstabilityingans/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a307da5d4d0d5d71980e2c65a58044d838746a48 --- /dev/null +++ b/smoothnessandstabilityingans/full.md @@ -0,0 +1,1244 @@ +# SMOOTHNESS AND STABILITY IN GANS + +Casey Chu + +Stanford University + +caseychu@stanford.edu + +Kentaro Minami + +Preferred Networks, Inc. + +minami@preferred.jp + +# Kenji Fukumizu + +The Institute of Statistical Mathematics / Preferred Networks, Inc. + +fukumizu@ism.ac.jp + +# ABSTRACT + +Generative adversarial networks, or GANs, commonly display unstable behavior during training. In this work, we develop a principled theoretical framework for understanding the stability of various types of GANs. In particular, we derive conditions that guarantee eventual stationarity of the generator when it is trained with gradient descent, conditions that must be satisfied by the divergence that is minimized by the GAN and the generator's architecture. We find that existing GAN variants satisfy some, but not all, of these conditions. 
Using tools from convex analysis, optimal transport, and reproducing kernels, we construct a GAN that fulfills these conditions simultaneously. In the process, we explain and clarify the need for various existing GAN stabilization techniques, including Lipschitz constraints, gradient penalties, and smooth activation functions.
+
+# 1 INTRODUCTION: TAMING INSTABILITY WITH SMOOTHNESS
+
+Generative adversarial networks (Goodfellow et al., 2014), or GANs, are a powerful class of generative models defined through a minimax game. GANs and their variants have shown impressive performance in synthesizing various types of datasets, especially natural images. Despite these successes, the training of GANs remains quite unstable in nature, and this instability remains difficult to understand theoretically.
+
+Since the introduction of GANs, there have been many techniques proposed to stabilize GAN training, including studies of new generator/discriminator architectures, loss functions, and regularization techniques. Notably, Arjovsky et al. (2017) proposed Wasserstein GAN (WGAN), which in principle avoids instability caused by mismatched generator and data distribution supports. In practice, this is enforced by Lipschitz constraints, which in turn motivated developments like gradient penalties (Gulrajani et al., 2017) and spectral normalization (Miyato et al., 2018). Indeed, these stabilization techniques have proven essential to achieving the latest state-of-the-art results (Karras et al., 2018; Brock et al., 2019).
+
+On the other hand, a solid theoretical understanding of training stability has not been established. Several empirical observations point to an incomplete understanding. For example, why does applying a gradient penalty together with spectral normalization seem to improve performance (Miyato et al., 2018), even though in principle they serve the same purpose?
Why does applying only spectral normalization with the Wasserstein loss fail (Miyato, 2018), even though the analysis of Arjovsky et al. (2017) suggests it should be sufficient? Why is applying gradient penalties effective, even outside their original context of the Wasserstein GAN (Fedus et al., 2018)?
+
+In this work, we develop a framework to analyze the stability of GAN training that resolves these apparent contradictions and clarifies the roles of these regularization techniques. Our approach considers the smoothness of the loss function used. In optimization, smoothness is a well-known condition that ensures that gradient descent and its variants become stable (see e.g., Bertsekas (1999)). For example, the following well-known proposition is the starting point of our stability analysis:
+
+Proposition 1 (Bertsekas (1999), Proposition 1.2.3). Suppose $f: \mathbb{R}^p \to \mathbb{R}$ is $L$ -smooth and bounded below. Let $x_{k+1} := x_k - \frac{1}{L} \nabla f(x_k)$ . Then $||\nabla f(x_k)|| \to 0$ as $k \to \infty$ .
+
+This proposition says that under a smoothness condition on the function, gradient descent with a constant step size $\frac{1}{L}$ approaches stationarity (i.e., the gradient norm approaches zero). This is a rather weak notion of convergence, as it does not guarantee that the iterates converge to a point, and even if the iterates do converge, the limit is a stationary point and not necessarily a minimizer.
+
+Nevertheless, empirically, not even this stationarity is satisfied by GANs, which are known to frequently destabilize and diverge during training. To diagnose this instability, we consider the smoothness of the GAN's loss function.
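Proposition 1 is easy to see in action. The toy sketch below is our own (the quadratic and all constants are arbitrary choices): it runs gradient descent with the constant step size $1/L$ on an $L$-smooth function and checks that the gradient norm decays monotonically toward zero.

```python
import math

# f(x, y) = 2*x**2 + y**2/2 is L-smooth with L = 4 (the largest Hessian
# eigenvalue) and bounded below, so Proposition 1 applies with step size 1/L.
L = 4.0

def grad_f(x, y):
    return 4.0 * x, y

x, y = 5.0, -3.0
grad_norms = []
for _ in range(200):
    gx, gy = grad_f(x, y)
    grad_norms.append(math.hypot(gx, gy))
    x, y = x - gx / L, y - gy / L

final_grad_norm = math.hypot(*grad_f(x, y))
print(final_grad_norm)  # effectively zero: the iterates approach stationarity
```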
GANs are typically framed as minimax problems of the form
+
+$$
+\inf _ {\theta} \sup _ {\varphi} \mathcal {J} (\mu_ {\theta}, \varphi), \tag {1}
+$$
+
+where $\mathcal{J}$ is a loss function that takes a generator distribution $\mu_{\theta}$ and discriminator $\varphi$ , and $\theta \in \mathbb{R}^p$ denotes the parameters of the generator. Unfortunately, the minimax nature of this problem makes stability and convergence difficult to analyze. To make the analysis more tractable, we define $J(\mu) = \sup_{\varphi} \mathcal{J}(\mu, \varphi)$ , so that (1) becomes simply
+
+$$
+\inf _ {\theta} J (\mu_ {\theta}). \tag {2}
+$$
+
+This choice corresponds to the common assumption that the discriminator is allowed to reach optimality at every training step. Now, the GAN algorithm can be regarded as simply gradient descent on the $\mathbb{R}^p\to \mathbb{R}$ function $\theta \mapsto J(\mu_{\theta})$ , which may be analyzed using Proposition 1. In particular, if this function $\theta \mapsto J(\mu_{\theta})$ satisfies the smoothness assumption, then the GAN training should be stable in that it should approach stationarity under the assumption of an optimal discriminator.
+
+In the remainder of this paper, we investigate whether the smoothness assumption is satisfied for various GAN losses. Our analysis answers two questions:
+
+Q1. Which existing GAN losses, if any, satisfy the smoothness condition in Proposition 1?
+Q2. Are there choices of loss, regularization, or architecture that enforce smoothness in GANs?
+
+As results of our analysis, our contributions are as follows:
+
+1. We derive sufficient conditions for the GAN algorithm to be stationary under certain assumptions (Theorem 1). Our conditions relate to the smoothness of the GAN loss used as well as the parameterization of the generator.
+2. We show that most common GAN losses do not satisfy all of the smoothness conditions, thereby corroborating their empirical instability.
+3.
We develop regularization techniques that enforce the smoothness conditions. These regularizers recover common GAN stabilization techniques such as gradient penalties and spectral normalization, thereby placing their use on a firmer theoretical foundation. +4. Our analysis provides several practical insights, suggesting for example the use of smooth activation functions, simultaneous spectral normalization and gradient penalties, and a particular learning rate for the generator. + +# 1.1 RELATED WORK + +Divergence minimization Our analysis regards the GAN algorithm as minimizing a divergence between the current generator distribution and the desired data distribution, under the assumption of an optimal discriminator at every training step. This perspective originates from the earliest GAN paper, in which Goodfellow et al. (2014) show that the original minimax GAN implicitly minimizes the Jensen-Shannon divergence. Since then, the community has introduced a large number of GAN or GAN-like variants that learn generative models by implicitly minimizing various divergences, including $f$ -divergences (Nowozin et al., 2016), Wasserstein distance (Arjovsky et al., 2017), and maximum-mean discrepancy (Li et al., 2015; Unterthiner et al., 2018). Meanwhile, the non-saturating GAN (Goodfellow et al., 2014) has been shown to minimize a certain Kullback-Leibler divergence (Arjovsky & Bottou, 2017). Several more theoretical works consider the topological, geometric, and convexity properties of divergence minimization (Arjovsky & Bottou, 2017; Liu et al., 2017; Bottou et al., 2018; Farnia & Tse, 2018; Chu et al., 2019), perspectives that we draw heavily upon. Sanjabi et al. (2018) also prove smoothness of GAN losses in the specific case of the regularized optimal transport loss. Their assumption for smoothness is entangled in that it involves a composite condition on generators and discriminators, while our analysis addresses them separately. 
+ +Table 1: Common GANs, their corresponding loss functions, and their optimal discriminators. + +
| | Loss function $J(\mu)$ | Optimal discriminator $\Phi_\mu(x)$ |
| --- | --- | --- |
| Minimax GAN | $D_{\mathrm{JS}}(\mu \,\Vert\, \mu_0)$ | $\frac{1}{2}\log \frac{\mu(x)}{\mu(x) + \mu_0(x)}$ |
| Non-saturating GAN | $D_{\mathrm{KL}}(\frac{1}{2}\mu + \frac{1}{2}\mu_0 \,\Vert\, \mu_0)$ | $-\frac{1}{2}\log \frac{\mu_0(x)}{\mu(x) + \mu_0(x)}$ |
| Wasserstein GAN | $W_1(\mu, \mu_0)$ | $\arg\max_{f \in \mathrm{Lip}_1} \mathbb{E}_{y \sim \mu}[f(y)] - \mathbb{E}_{y \sim \mu_0}[f(y)]$ |
| GMMN, Coulomb GAN | $\frac{1}{2}\mathrm{MMD}^2(\mu, \mu_0)$ | $\mathbb{E}_{y \sim \mu}[K(x,y)] - \mathbb{E}_{y \sim \mu_0}[K(x,y)]$ |
| IPM-GAN | $\mathrm{IPM}_{\mathcal{F}}(\mu, \mu_0)$ | $\arg\max_{f \in \mathcal{F}} \mathbb{E}_{y \sim \mu}[f(y)] - \mathbb{E}_{y \sim \mu_0}[f(y)]$ |
+
+Other approaches Even though many analyses, including ours, operate under the assumption of an optimal discriminator, this assumption is unrealistic in practice. Li et al. (2017b) contrast these optimal discriminator dynamics with first-order dynamics, which assume that the generator and discriminator use alternating gradient updates and which are what is used computationally. As this is a differing approach from ours, we only briefly mention some results in this area, which typically rely on game-theoretic notions (Kodali et al., 2017; Grnarova et al., 2018; Oliehoek et al., 2018) or local analysis (Nagarajan & Kolter, 2017; Mescheder et al., 2018). Some of these results rely on continuous dynamics approximations of gradient updates; in contrast, our work focuses on discrete dynamics.
+
+# 1.2 NOTATION
+
+Let $\bar{\mathbb{R}}\coloneqq \mathbb{R}\cup \{\infty , - \infty \}$ . We let $\mathcal{P}(X)$ denote the set of all probability measures on a compact set $X\subseteq \mathbb{R}^d$ . We let $\mathcal{M}(X)$ and $\mathcal{C}(X)$ denote the dual pair consisting of the set of all finite signed measures on $X$ and the set of all continuous functions $X\to \mathbb{R}$ . For any statement $A$ , we let $\chi \{A\}$ be 0 if $A$ is true and $\infty$ if $A$ is false. For a Euclidean vector $x$ , its Euclidean norm is denoted by $\| x\| _2$ , and the operator norm of a matrix $A$ is denoted by $\| A\| _2$ , i.e., $\| A\| _2 = \sup_{x \neq 0}\| Ax\| _2 / \| x\| _2$ . A function $f:X\to Y$ between two metric spaces is $\alpha$ -Lipschitz if $d_Y(f(x_1),f(x_2))\leq \alpha d_X(x_1,x_2)$ . A function $f:\mathbb{R}^d\rightarrow \mathbb{R}$ is $\beta$ -smooth if its gradients are $\beta$ -Lipschitz, that is, for all $x,y\in \mathbb{R}^d$ , $\| \nabla f(x) - \nabla f(y)\| _2\leq \beta \| x - y\| _2$ .
+
+# 2 SMOOTHNESS OF GAN LOSSES
+
+This section presents Theorem 1, which provides concise criteria for the smoothness of GAN losses.
In order to keep our analysis agnostic to the particular GAN used, let $J: \mathcal{P}(X) \to \bar{\mathbb{R}}$ be an arbitrary convex loss function, which takes a distribution over $X \subset \mathbb{R}^d$ and outputs a real number. Note that the typical minimax formulation of GANs can be recovered from just the loss function $J$ using convex duality. In particular, recall that the convex conjugate $J^\star: \mathcal{C}(X) \to \bar{\mathbb{R}}$ of $J$ satisfies the following remarkable duality, known as the Fenchel-Moreau theorem:

$$
J ^ {\star} (\varphi) := \sup _ {\mu \in \mathcal {M} (X)} \int \varphi (x) d \mu - J (\mu), \quad J (\mu) = \sup _ {\varphi \in \mathcal {C} (X)} \int \varphi (x) d \mu - J ^ {\star} (\varphi). \tag {3}
$$

Based on this duality, minimizing $J$ can be framed as the minimax problem

$$
\inf _ {\mu \in \mathcal {P} (X)} J (\mu) = \inf _ {\mu \in \mathcal {P} (X)} \sup _ {\varphi \in \mathcal {C} (X)} \int \varphi (x) d \mu - J ^ {\star} (\varphi) =: \inf _ {\mu \in \mathcal {P} (X)} \sup _ {\varphi \in \mathcal {C} (X)} \mathcal {J} (\mu , \varphi), \tag {4}
$$

recovering the well-known adversarial formulation of GANs. We now define the notion of an optimal discriminator for an arbitrary loss function $J$, based on this convex duality:

Definition 1 (Optimal discriminator). Let $J: \mathcal{M}(X) \to \bar{\mathbb{R}}$ be a convex, l.s.c., proper function. An optimal discriminator for a probability distribution $\mu \in \mathcal{P}(X)$ is a continuous function $\Phi_{\mu}: X \to \mathbb{R}$ that attains the maximum of the second equation in (3), i.e., $J(\mu) = \int \Phi_{\mu}(x) d\mu - J^{\star}(\Phi_{\mu})$.

This definition recovers the optimal discriminators of many existing GAN and GAN-like algorithms (Farnia & Tse, 2018; Chu et al., 2019), most notably those in Table 1. Our analysis will apply to any algorithm in this family. See Appendix B for more details on this perspective.
We also formalize the notion of a family of generators:

Definition 2 (Family of generators). A family of generators is a set of pushforward probability measures $\{\mu_{\theta} = f_{\theta \#}\omega : \theta \in \mathbb{R}^p\}$, where $\omega$ is a fixed probability distribution on $Z$ (the latent variable) and $f_{\theta}: Z \to X$ is a measurable function (the generator).

Now, in light of Proposition 1, we are interested in the smoothness of the mapping $\theta \mapsto J(\mu_{\theta})$, which would guarantee the stationarity of gradient descent on this objective, which in turn implies stationarity of the GAN algorithm under the assumption of an optimal discriminator. The following theorem is our central result, which decomposes the smoothness of $\theta \mapsto J(\mu_{\theta})$ into conditions on optimal discriminators and the family of generators.

Theorem 1 (Smoothness decomposition for GANs). Let $J: \mathcal{M}(X) \to \bar{\mathbb{R}}$ be a convex function whose optimal discriminators $\Phi_{\mu}: X \to \mathbb{R}$ satisfy the following regularity conditions:

(D1) $x\mapsto \Phi_{\mu}(x)$ is $\alpha$-Lipschitz,
(D2) $x\mapsto \nabla_{x}\Phi_{\mu}(x)$ is $\beta_{1}$-Lipschitz,
(D3) $\mu \mapsto \nabla_{x}\Phi_{\mu}(x)$ is $\beta_{2}$-Lipschitz w.r.t. the 1-Wasserstein distance.

Also, let $\mu_{\theta} = f_{\theta \#}\omega$ be a family of generators that satisfies:

(G1) $\theta \mapsto f_{\theta}(z)$ is $A$-Lipschitz in expectation for $z \sim \omega$, i.e., $\mathbb{E}_{z \sim \omega}[\|f_{\theta_1}(z) - f_{\theta_2}(z)\|_2] \leq A\|\theta_1 - \theta_2\|_2$, and
(G2) $\theta \mapsto D_{\theta}f_{\theta}(z)$ is $B$-Lipschitz in expectation for $z \sim \omega$, i.e., $\mathbb{E}_{z \sim \omega}[\|D_{\theta_1}f_{\theta_1}(z) - D_{\theta_2}f_{\theta_2}(z)\|_2] \leq B\|\theta_1 - \theta_2\|_2$.

Then $\theta \mapsto J(\mu_{\theta})$ is $L$-smooth, with $L = \alpha B + A^{2}(\beta_{1} + \beta_{2})$.
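To make the bound concrete, the constant $L$ and the implied stable step size $\frac{1}{L}$ can be computed directly once the five constants are known. The values below are hypothetical placeholders, not constants derived in the paper:

```python
# Smoothness bound of Theorem 1 with hypothetical constants.
# alpha, beta1, beta2 bound the discriminator via (D1)-(D3);
# A, B bound the generator via (G1)-(G2).
alpha, beta1, beta2 = 1.0, 7.0, 2.0     # assumed discriminator constants
A, B = 0.1, 0.0                         # assumed generator constants

L = alpha * B + A**2 * (beta1 + beta2)  # smoothness of theta -> J(mu_theta)
step_size = 1.0 / L                     # stable learning rate per Proposition 1
```

Note how a smoother discriminator (smaller $\beta_1 + \beta_2$) or a gentler generator parameterization (smaller $A$, $B$) directly permits a larger stable step size.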
Theorem 1 connects the smoothness properties of the loss function $J$ with the smoothness properties of the optimal discriminator $\Phi_{\mu}$, and once paired with Proposition 1, it suggests a quantitative value $\frac{1}{L}$ for a stable generator learning rate. In order to obtain claims of stability for practically sized learning rates, it is important to tightly bound the relevant constants.

In Sections 4 to 6, we carefully analyze which GAN losses satisfy (D1), (D2), and (D3), and with what constants. We summarize our results in Table 2: it turns out that none of the listed losses, except for one, satisfy (D1), (D2), and (D3) simultaneously with a finite constant. The MMD-based loss satisfies all three conditions, but its constant for (D1) grows as $\alpha = O(\sqrt{d})$, an unfavorable dependence on the data dimension $d$ that forces an unacceptably small learning rate. See Sections 4 to 6 for complete details of each condition. This failure of existing GANs to satisfy the stationarity conditions corroborates the observed instability of GANs.

Table 2: Regularity of common GAN losses.
| | (D1) | (D2) | (D3) |
| --- | --- | --- | --- |
| Minimax GAN | ✗ | ✗ | ✗ |
| Non-saturating GAN | ✗ | ✗ | ✗ |
| Wasserstein GAN | ✓ | ✗ | ? |
| $\mathrm{IPM}_{\mathcal{S}}$ | ✗ | ✓ | ? |
| $\mathrm{MMD}^2$ | ✓* | ✓ | ✓ |
Theorem 1 decomposes smoothness into conditions on the generator and conditions on the discriminator, allowing a clean separation of concerns. In this paper, we focus on the discriminator conditions (D1), (D2), and (D3) and only provide an extremely simple example of a generator that satisfies (G1) and (G2), in Section 7. Because analysis of the generator conditions may become quite complicated and will vary with the choice of architecture considered (feedforward, convolutional, ResNet, etc.), we leave a detailed analysis of the generator conditions (G1) and (G2) as a promising avenue for future work. Indeed, such analyses may lead to new generator architectures or generator regularization techniques that stabilize GAN training.

# 3 ENFORCING SMOOTHNESS WITH INF-CONVOLUTIONS

In this section, we present a generic regularization technique that imposes the three conditions sufficient for stable learning on an arbitrary loss function $J$, thereby stabilizing training. In Section 2, we observe that the Wasserstein, IPM, and MMD losses respectively satisfy (D1), (D2), and (D3) individually, but not all of them at the same time. Using techniques from convex analysis, we convert these three GAN losses into three regularizers that, when applied simultaneously, cause the resulting loss to satisfy all three conditions. Here, we only outline the technique; the specifics of each case are deferred to Sections 4 to 6.

We start with an arbitrary base loss function $J$ to be regularized. Next, we take an existing GAN loss that satisfies the desired regularity condition and convert it into a regularizer function $R: \mathcal{M}(X) \to \mathbb{R}$. Then, we consider $J \oplus R$, which denotes the inf-convolution defined as

$$
(J \oplus R) (\xi) = \inf _ {\tilde {\xi} \in \mathcal {M} (X)} J (\tilde {\xi}) + R (\xi - \tilde {\xi}). \tag {5}
$$

This new function $J \oplus R$ inherits the regularity of $R$, making it a stable candidate for a GAN loss.
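As a sanity check on definition (5), here is a discrete one-dimensional sketch (a toy illustration with a hypothetical nonsmooth $J$, not a GAN loss): inf-convolving with $R(x) = |x|$, the finite-dimensional analogue of the KR-norm regularizer of Section 4, yields a function that is finite and 1-Lipschitz even though $J$ is neither.

```python
import numpy as np

xs = np.linspace(-2.0, 2.0, 401)
dx = xs[1] - xs[0]

# Toy nonsmooth loss: x^2 - 1 on [-1, 1], +inf elsewhere (hypothetical).
J = np.where(np.abs(xs) <= 1.0, xs**2 - 1.0, np.inf)

def inf_conv(J_vals, R_fn, xs):
    # Discrete inf-convolution: (J ⊕ R)(x) = min_y J(y) + R(x - y),
    # with the infimum taken over grid points y.
    return np.array([np.min(J_vals + R_fn(x - xs)) for x in xs])

JR = inf_conv(J, np.abs, xs)  # R(x) = |x| plays the role of the KR norm
```

`JR` agrees with `J` wherever `J` already has slope at most 1 and extends it linearly elsewhere, exactly the behavior of the Pasch-Hausdorff envelope discussed in Section 4.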
Because the inf-convolution is a commutative operation, we can sequentially apply multiple regularizers $R_{1}, R_{2}$, and $R_{3}$ without destroying the added regularity. In particular, if we carefully choose the functions $R_{1}, R_{2}$, and $R_{3}$, then $\tilde{J} = J \oplus R_{1} \oplus R_{2} \oplus R_{3}$ will satisfy (D1), (D2), and (D3) simultaneously. Moreover, under some technical assumptions, this composite function $\tilde{J}$ inherits the original minimizers of $J$, making it a sensible GAN loss:

Proposition 2 (Invariance of minimizers). Let $R_{1}(\xi) \coloneqq \| \xi \|_{\mathrm{KR}}$, $R_{2}(\xi) \coloneqq \| \xi \|_{\mathcal{S}^{*}}$, and $R_{3}(\xi) \coloneqq \frac{1}{2} \| \hat{\xi} \|_{\mathcal{H}}^{2}$ be the three regularizers defined by (8), (12), and (19) respectively. Assume that $J: \mathcal{M}(X) \to \bar{\mathbb{R}}$ has a unique minimizer at $\mu_0$ with $J(\mu_0) = 0$, and that $J(\mu) \geq c \| \hat{\mu} - \hat{\mu}_0 \|_{\mathcal{H}}$ for some $c > 0$. Then the inf-convolution $\tilde{J} = J \oplus R_1 \oplus R_2 \oplus R_3$ has a unique minimizer at $\mu_0$ with $\tilde{J}(\mu_0) = 0$.

The duality formulation (4) provides a practical method for minimizing this composite function. We leverage the duality relation $(J\oplus R_1\oplus R_2\oplus R_3)^\star = J^\star +R_1^\star +R_2^\star +R_3^\star$ and apply (4):

$$
\inf _ {\mu} (J \oplus R _ {1} \oplus R _ {2} \oplus R _ {3}) (\mu) = \inf _ {\mu} \sup _ {\varphi} \int \varphi \, d \mu - J ^ {\star} (\varphi) - R _ {1} ^ {\star} (\varphi) - R _ {2} ^ {\star} (\varphi) - R _ {3} ^ {\star} (\varphi) \tag {6}
$$

$$
= \inf _ {\mu} \sup _ {\varphi} \mathcal {J} (\mu , \varphi) - R _ {1} ^ {\star} (\varphi) - R _ {2} ^ {\star} (\varphi) - R _ {3} ^ {\star} (\varphi). \tag {7}
$$

This minimax problem can be seen as a GAN whose discriminator objective has three added regularization terms.
The concrete forms of these regularizers are summarized in Table 3. Notably, we recover standard techniques for stabilizing GANs:

- (D1) is enforced by Lipschitz constraints (i.e., spectral normalization) on the discriminator.
- (D2) is enforced by spectral normalization together with a choice of Lipschitz, smooth activation functions for the discriminator.

Table 3: Smoothness-inducing regularizers and their convex conjugates. To enforce a regularity condition on a loss function $J$, we take a source loss that satisfies it and view it as a regularizer $R$. We then consider the inf-convolution $J \oplus R$, which corresponds to an added regularization term $R^{\star}$ on the discriminator $\varphi$. These regularization terms correspond to existing GAN techniques.
| Purpose | Source loss | $R(\xi)$ | $R^{\star}(\varphi)$ | GAN techniques |
| --- | --- | --- | --- | --- |
| (D1) | $W_1$ | $\|\xi\|_{\mathrm{KR}}$ | $\chi\{\|\varphi\|_{\mathrm{Lip}} \leq 1\}$ | spectral norm |
| (D2) | $\mathrm{IPM}_{\mathcal{S}}$ | $\|\xi\|_{\mathcal{S}^{*}}$ | $\chi\{\varphi \in \mathcal{S}\}$ | smooth activations, spectral norm |
| (D3) | $\mathrm{MMD}^2$ | $\frac{1}{2}\|\hat{\xi}\|_{\mathcal{H}}^{2}$ | $\frac{1}{2}\|\varphi\|_{\mathcal{H}}^{2}$ | gradient penalties |
- (D3) is enforced by gradient penalties on the discriminator.

Our analysis therefore puts these regularization techniques on a firm theoretical foundation (Proposition 1 and Theorem 1) and provides insight into their function.

# 4 ENFORCING (D1) WITH LIPSCHITZ CONSTRAINTS

In this section, we show that enforcing (D1) leads to techniques and notions commonly used to stabilize GANs, including the Wasserstein distance, Lipschitz constraints, and spectral normalization. Recall that (D1) demands that the optimal discriminator $\Phi_{\mu}$ is Lipschitz:

(D1) $x\mapsto \Phi_{\mu}(x)$ is $\alpha$-Lipschitz for all $\mu \in \mathcal{P}(X)$, i.e., $|\Phi_{\mu}(x) - \Phi_{\mu}(y)|\leq \alpha \|x - y\|_2.$

If $\Phi_{\mu}$ is differentiable, this is equivalent to requiring that the optimal discriminator's gradient has bounded norm. This is a sensible criterion, since a discriminator whose gradient norm is too large may push the generator too hard and destabilize its training.

To check (D1), the following proposition shows that it suffices to check whether $|J(\mu) - J(\nu)| \leq \alpha W_1(\mu, \nu)$ for all distributions $\mu, \nu$:

Proposition 3. (D1) holds if and only if $J$ is $\alpha$-Lipschitz w.r.t. the Wasserstein-1 distance.

Arjovsky et al. (2017) show that this property does not hold for common divergences based on the Kullback-Leibler or Jensen-Shannon divergence, while it does hold for the Wasserstein-1 distance. Indeed, it is this desirable property that motivates their introduction of the Wasserstein GAN. Framed in our context, their result is summarized as follows:

Proposition 4. The minimax and non-saturating GAN losses do not satisfy (D1) for some $\mu_0$.

Proposition 5. The Wasserstein GAN loss satisfies (D1) with $\alpha = 1$ for any $\mu_0$.

Our stability analysis therefore deepens the analysis of Arjovsky et al.
(2017) and provides an alternative reason that the Wasserstein distance is desirable as a metric: it is part of a sufficient condition that ensures stationarity of gradient descent.

# 4.1 FROM WASSERSTEIN DISTANCE TO LIPSCHITZ CONSTRAINTS

Having identified the Wasserstein GAN loss as one that satisfies (D1), we next follow the strategy outlined in Section 3 to convert it into a regularizer for an arbitrary loss function. Towards this, we define the regularizer $R_1: \mathcal{M}(X) \to \mathbb{R}$ and compute its convex conjugate $R_1^\star: \mathcal{C}(X) \to \bar{\mathbb{R}}$:

$$
R _ {1} (\xi) := \alpha \| \xi \| _ {\mathrm {KR}} = \alpha \sup _ {\substack{f \in \mathcal {C} (X) \\ \| f \| _ {\mathrm {Lip}} \leq 1}} \int f \, d \xi , \quad R _ {1} ^ {\star} (\varphi) = \begin{cases} 0 & \| \varphi \| _ {\mathrm {Lip}} \leq \alpha \\ \infty & \text {otherwise.} \end{cases} \tag {8}
$$

This norm is the Kantorovich-Rubinstein norm (KR norm), which extends the Wasserstein-1 distance to $\mathcal{M}(X)$; it holds that $\| \mu - \nu \|_{\mathrm{KR}} = W_1(\mu, \nu)$ for $\mu, \nu \in \mathcal{P}(X)$. Its inf-convolution with an arbitrary function inherits the Lipschitz property of $R_1$:

Proposition 6 (Pasch-Hausdorff). Let $J: \mathcal{M}(X) \to \bar{\mathbb{R}}$ be a function, and define $\tilde{J} \coloneqq J \oplus R_1$. Then $\tilde{J}$ is $\alpha$-Lipschitz w.r.t. the distance induced by the KR norm, and hence the Wasserstein-1 distance when restricted to $\mathcal{P}(X)$.

By Proposition 3, we thus obtain a transformed loss function $\tilde{J}$ that automatically satisfies (D1). This function is a generalization of the Pasch-Hausdorff envelope (see Chapter 9 of Rockafellar & Wets (1998)), also known as Lipschitz regularization or the McShane-Whitney extension (McShane, 1934; Whitney, 1934; Kirszbraun, 1934; Hiriart-Urruty, 1980).
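For reference, the spectral normalization mentioned above can be sketched in a few lines of NumPy via power iteration. This is a minimal illustration for a single dense weight matrix; library implementations instead carry the vector `u` across training steps and normalize every layer of the network.

```python
import numpy as np

def spectral_normalize(W, n_iters=1000):
    """Rescale W so that its spectral norm (the Lipschitz constant of
    the linear map x -> Wx) is 1, using power iteration to estimate
    the largest singular value."""
    u = np.ones(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # Rayleigh-quotient estimate of the top singular value
    return W / sigma

W = np.random.default_rng(0).standard_normal((64, 32))
W_sn = spectral_normalize(W)
```

Stacking such normalized layers with 1-Lipschitz activations yields a 1-Lipschitz discriminator, which is exactly the constraint $\|\varphi\|_{\mathrm{Lip}} \leq \alpha$ appearing in (8) up to the scalar $\alpha$.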
The convex conjugate computation in (8) shows that $\tilde{J}$ can be minimized in practice by imposing Lipschitz constraints on discriminators. Indeed, by (4),

$$
\inf _ {\mu} (J \oplus \alpha \| \cdot \| _ {\mathrm {KR}}) (\mu) = \inf _ {\mu} \sup _ {\varphi} \mathbb {E} _ {x \sim \mu} [ \varphi (x) ] - J ^ {\star} (\varphi) - \chi \{\| \varphi \| _ {\mathrm {Lip}} \leq \alpha \} \tag {9}
$$

$$
= \inf _ {\mu} \sup _ {\varphi : \| \varphi \| _ {\mathrm {Lip}} \leq \alpha} \mathcal {J} (\mu , \varphi). \tag {10}
$$

Farnia & Tse (2018) consider this loss $\tilde{J}$ in the special case of an $f$-GAN with $J(\mu) = D_f(\mu, \mu_0)$; they show that minimizing $\tilde{J}$ corresponds to training an $f$-GAN normally but constraining the discriminator to be $\alpha$-Lipschitz. We show that this technique is in fact generic for any $J$: minimizing the transformed loss can be achieved by training the GAN as normal, but imposing a Lipschitz constraint on the discriminator.

Our analysis therefore justifies the use of Lipschitz constraints, such as spectral normalization (Miyato et al., 2018) and weight clipping (Arjovsky et al., 2017), for general GAN losses. However, Theorem 1 also suggests that applying only Lipschitz constraints may not be enough to stabilize GANs, as (D1) alone does not ensure that the GAN objective is smooth.

# 5 ENFORCING (D2) WITH DISCRIMINATOR SMOOTHNESS

(D2) demands that the optimal discriminator $\Phi_{\mu}$ is smooth:

(D2) $x \mapsto \nabla \Phi_{\mu}(x)$ is $\beta_{1}$-Lipschitz for all $\mu \in \mathcal{P}(X)$, i.e., $\| \nabla \Phi_{\mu}(x) - \nabla \Phi_{\mu}(y) \|_2 \leq \beta_1 \| x - y \|_2$.

Intuitively, this says that for a fixed generator $\mu$, the optimal discriminator $\Phi_{\mu}$ should not provide gradients that change too much spatially.

Although the Wasserstein GAN loss satisfies (D1), it, along with the minimax and non-saturating GAN losses, does not satisfy (D2):

Proposition 7.
The Wasserstein, minimax, and non-saturating GAN losses do not satisfy (D2) for some $\mu_0$.

We now construct a loss that by definition satisfies (D2). Let $\mathcal{S}$ be the class of 1-smooth functions, that is, those for which $\| \nabla f(x) - \nabla f(y) \|_2 \leq \| x - y \|_2$, and consider the integral probability metric (IPM) (Müller, 1997) w.r.t. $\mathcal{S}$, defined by

$$
\operatorname {IPM} _ {\mathcal {S}} (\mu , \nu) := \sup _ {f \in \mathcal {S}} \int f d \mu - \int f d \nu . \tag {11}
$$

The optimal discriminator for the loss $\mathrm{IPM}_{\mathcal{S}}(\mu, \mu_0)$ is the function that maximizes the supremum in the definition. This function by definition belongs to $\mathcal{S}$ and is therefore 1-smooth. Hence, this IPM loss satisfies (D2) with $\beta_1 = 1$ by construction.

# 5.1 FROM INTEGRAL PROBABILITY METRIC TO SMOOTH DISCRIMINATORS

Having identified the IPM-based loss as one that satisfies (D2), we next follow the strategy outlined in Section 3 to convert it into a regularizer for an arbitrary loss function. To do so, we define a regularizer $R_2: \mathcal{M}(X) \to \bar{\mathbb{R}}$ and compute its convex conjugate $R_2^\star: \mathcal{C}(X) \to \bar{\mathbb{R}}$:

$$
R _ {2} (\xi) := \beta_ {1} \| \xi \| _ {\mathcal {S} ^ {*}} = \beta_ {1} \sup _ {f \in \mathcal {S}} \int f d \xi , \quad R _ {2} ^ {\star} (\varphi) = \begin{cases} 0 & \varphi \in \beta_ {1} \mathcal {S} \\ \infty & \text {otherwise.} \end{cases} \tag {12}
$$

This norm is the dual norm to $\mathcal{S}$, which extends the IPM to signed measures; it holds that $\mathrm{IPM}_{\mathcal{S}}(\mu, \nu) = \| \mu - \nu \|_{\mathcal{S}^*}$ for $\mu, \nu \in \mathcal{P}(X)$. Similar to the situation in the previous section, inf-convolution preserves the smoothness property of $R_2$:

Proposition 8. Let $J: \mathcal{M}(X) \to \bar{\mathbb{R}}$ be a convex, proper, lower semicontinuous function, and define $\tilde{J} := J \oplus R_2$.
Then the optimal discriminator for $\tilde{J}$ is $\beta_{1}$-smooth.

Applying (4) and (12), we see that we can minimize this transformed loss function by restricting the family of discriminators to only $\beta_{1}$-smooth discriminators:

$$
\inf _ {\mu} \left(J \oplus \beta_ {1} \| \cdot \| _ {\mathcal {S} ^ {*}}\right) (\mu) = \inf _ {\mu} \sup _ {\varphi} \mathbb {E} _ {x \sim \mu} [ \varphi (x) ] - J ^ {\star} (\varphi) - \chi \{\varphi \in \beta_ {1} \mathcal {S} \} \tag {13}
$$

$$
= \inf _ {\mu} \sup _ {\varphi \in \beta_ {1} \mathcal {S}} \mathcal {J} (\mu , \varphi). \tag {14}
$$

In practice, we can enforce this by applying spectral normalization (Miyato et al., 2018) and using a Lipschitz, smooth activation function such as ELU (Clevert et al., 2016) or sigmoid.

Proposition 9. Let $f: \mathbb{R}^d \to \mathbb{R}$ be a neural network consisting of $k$ layers whose linear transformations have spectral norm 1 and whose activation functions are 1-Lipschitz and 1-smooth. Then $f$ is $k$-smooth.

# 6 ENFORCING (D3) WITH GRADIENT PENALTIES

(D3) is the following smoothness condition:

(D3) $\mu \mapsto \nabla \Phi_{\mu}(x)$ is $\beta_{2}$-Lipschitz for any $x \in X$, i.e., $\| \nabla \Phi_{\mu}(x) - \nabla \Phi_{\nu}(x)\|_2 \leq \beta_2 W_1(\mu, \nu)$.

(D3) requires that the gradients of the optimal discriminator do not change too rapidly in response to changes in $\mu$. Indeed, if the discriminator's gradients are too sensitive to changes in the generator, the generator may not be able to accurately follow those gradients as it updates itself using a finite step size. In finite-dimensional optimization of a function $f: \mathbb{R}^d \to \mathbb{R}$, this condition is analogous to $f$ having a Lipschitz gradient.

We now present an equivalent characterization of (D3) that is easier to check in practice.
We define the Bregman divergence of a convex function $J: \mathcal{M}(X) \to \bar{\mathbb{R}}$ by

$$
\mathfrak {D} _ {J} (\nu , \mu) := J (\nu) - J (\mu) - \int \Phi_ {\mu} (x) d (\nu - \mu), \tag {15}
$$

where $\Phi_{\mu}$ is the optimal discriminator for $J$ at $\mu$. Then, (D3) is characterized in terms of the Bregman divergence and the KR norm as follows:

Proposition 10. Let $J:\mathcal{M}(X)\to \bar{\mathbb{R}}$ be a convex function. Then $J$ satisfies (D3) if and only if $\mathfrak{D}_J(\nu ,\mu)\leq \frac{\beta_2}{2}\| \mu -\nu \|_{\mathrm{KR}}^2$ for all $\mu ,\nu \in \mathcal{M}(X)$.

It is straightforward to compute the Bregman divergence corresponding to several popular GANs:

$$
\mathfrak {D} _ {D _ {\mathrm {JS}} (\cdot \| \mu_ {0})} (\nu , \mu) = D _ {\mathrm {KL}} \left(\frac {1}{2} \nu + \frac {1}{2} \mu_ {0} \;\middle\|\; \frac {1}{2} \mu + \frac {1}{2} \mu_ {0}\right) + \frac {1}{2} D _ {\mathrm {KL}} (\nu \| \mu), \tag {16}
$$

$$
\mathfrak {D} _ {D _ {\mathrm {KL}} \left(\frac {1}{2} \cdot + \frac {1}{2} \mu_ {0} \| \mu_ {0}\right)} (\nu , \mu) = D _ {\mathrm {KL}} \left(\frac {1}{2} \nu + \frac {1}{2} \mu_ {0} \;\middle\|\; \frac {1}{2} \mu + \frac {1}{2} \mu_ {0}\right), \tag {17}
$$

$$
\mathfrak {D} _ {\frac {1}{2} \mathrm {MMD} ^ {2} (\cdot , \mu_ {0})} (\nu , \mu) = \frac {1}{2} \mathrm {MMD} ^ {2} (\nu , \mu). \tag {18}
$$

The first two Bregman divergences are not bounded above by $\|\mu - \nu\|_{\mathrm{KR}}^2$ for reasons similar to those discussed in Section 4, and hence:

Proposition 11. The minimax and non-saturating GAN losses do not satisfy (D3) for some $\mu_0$.

Even so, the Bregman divergence for the non-saturating loss is always less than that of the minimax GAN, suggesting that the non-saturating loss should be stable in more situations than the minimax GAN.
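The comparison just made follows directly from (16) and (17): the minimax Bregman divergence exceeds the non-saturating one by exactly the nonnegative term $\frac{1}{2}D_{\mathrm{KL}}(\nu \| \mu)$. A quick numerical check on discrete distributions (purely illustrative; the distributions below are arbitrary):

```python
import numpy as np

def kl(p, q):
    # Kullback-Leibler divergence between discrete distributions p and q
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
nu, mu, mu0 = [w / w.sum() for w in rng.random((3, 5)) + 0.1]

shared = kl((nu + mu0) / 2, (mu + mu0) / 2)
breg_minimax = shared + 0.5 * kl(nu, mu)  # Bregman divergence (16)
breg_nonsat = shared                      # Bregman divergence (17)
```

Since $D_{\mathrm{KL}} \geq 0$, `breg_minimax >= breg_nonsat` holds for any choice of the three distributions.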
On the other hand, the MMD-based loss (Li et al., 2015) does satisfy (D3) when its kernel is the Gaussian kernel $K(x,y) = e^{-\pi \|x - y\|^2}$:

Proposition 12. The MMD loss with Gaussian kernel satisfies (D3) with $\beta_{2} = 2\pi$ for all $\mu_0$.

# 6.1 FROM MAXIMUM MEAN DISCREPANCY TO GRADIENT PENALTIES

Having identified the MMD-based loss as one that satisfies (D3), we next follow the strategy outlined in Section 3 to convert it into a regularizer for an arbitrary loss function. To do so, we define the regularizer $R_{3}:\mathcal{M}(X)\to \mathbb{R}$ and compute its convex conjugate $R_3^\star :\mathcal{C}(X)\to \bar{\mathbb{R}}$:

$$
R _ {3} (\xi) := \frac {\beta_ {2}}{4 \pi} \| \hat {\xi} \| _ {\mathcal {H}} ^ {2}, \quad R _ {3} ^ {\star} (\varphi) = \frac {\pi}{\beta_ {2}} \| \varphi \| _ {\mathcal {H}} ^ {2}. \tag {19}
$$

The norm here is that of a reproducing kernel Hilbert space (RKHS) $\mathcal{H}$ with Gaussian kernel; this norm extends the MMD to signed measures, as it holds that $\mathrm{MMD}(\mu ,\nu) = \| \hat{\mu} -\hat{\nu}\|_{\mathcal{H}}$ for $\mu ,\nu \in \mathcal{P}(X)$. Here, $\hat{\xi} = \int K(x,\cdot)\,\xi (dx)\in \mathcal{H}$ denotes the mean embedding of a signed measure $\xi \in \mathcal{M}(X)$; we also adopt the convention that $\|\varphi \|_{\mathcal{H}} = \infty$ if $\varphi \notin \mathcal{H}$. As in the previous sections, inf-convolution preserves the smoothness property of $R_{3}$:

Proposition 13 (Moreau-Yosida regularization). Suppose $J: \mathcal{M}(X) \to \bar{\mathbb{R}}$ is convex, and define $\tilde{J} \coloneqq J \oplus R_3$. Then $\tilde{J}$ is convex, and $\mathfrak{D}_{\tilde{J}}(\nu, \mu) \leq \frac{\beta_2}{2} \|\mu - \nu\|_{\mathrm{KR}}^2$.

By Proposition 10, this transformed loss function satisfies (D3), having inherited the regularity properties of the squared MMD.
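To make the quantities above concrete, the squared MMD with this Gaussian kernel can be estimated from samples. The following is a minimal biased (V-statistic) sketch, not the estimator used in the paper's experiments:

```python
import numpy as np

def gaussian_kernel(A, B):
    # K(x, y) = exp(-pi * ||x - y||^2), the kernel of Proposition 12
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-np.pi * sq_dists)

def mmd2(X, Y):
    # Biased V-statistic estimate of MMD^2 between the sample sets X and Y
    return (gaussian_kernel(X, X).mean()
            + gaussian_kernel(Y, Y).mean()
            - 2.0 * gaussian_kernel(X, Y).mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))
Y = rng.normal(2.0, 1.0, size=(200, 2))
```

As the squared RKHS norm of a difference of mean embeddings, this estimate is zero for identical samples and strictly positive for well-separated ones.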
This transformed function is a generalization of Moreau-Yosida regularization, or the Moreau envelope (see Chapter 1 of Rockafellar & Wets (1998)). It is well known that in the case of a function $f: \mathbb{R}^n \to \mathbb{R}$, this regularization results in a function with Lipschitz gradients, so it is unsurprising that this property carries over to the infinite-dimensional case.

Applying (4) and (19), we see that the transformed loss function can be minimized as a GAN by imposing an RKHS squared-norm penalty on the discriminator:

$$
\inf _ {\mu} \left(J \oplus \frac {\beta_ {2}}{4 \pi} \| \cdot \| _ {\mathcal {H}} ^ {2}\right) (\mu) = \inf _ {\mu} \sup _ {\varphi} \mathbb {E} _ {x \sim \mu} [ \varphi (x) ] - J ^ {\star} (\varphi) - \frac {\pi}{\beta_ {2}} \| \varphi \| _ {\mathcal {H}} ^ {2}. \tag {20}
$$

Computationally, the RKHS norm is difficult to evaluate. We propose taking advantage of the following infinite series representation of $\|f\|_{\mathcal{H}}^2$ in terms of the derivatives of $f$ (Fasshauer & Ye, 2011; Novak et al., 2018):

Proposition 14. Let $\mathcal{H}$ be an RKHS with the Gaussian kernel $K(x,y) = e^{-\pi \|x - y\|^2}$. Then for $f\in \mathcal{H}$,

$$
\| f \| _ {\mathcal {H}} ^ {2} = \sum_ {k = 0} ^ {\infty} (4 \pi) ^ {- k} \sum_ {k _ {1} + \dots + k _ {d} = k} \frac {1}{\prod_ {i = 1} ^ {d} k _ {i} !} \left\| \partial_ {x _ {1}} ^ {k _ {1}} \cdots \partial_ {x _ {d}} ^ {k _ {d}} f \right\| _ {L ^ {2} (\mathbb {R} ^ {d})} ^ {2} \tag {21}
$$

$$
= \| f \| _ {L ^ {2} (\mathbb {R} ^ {d})} ^ {2} + \frac {1}{4 \pi} \| \nabla f \| _ {L ^ {2} (\mathbb {R} ^ {d})} ^ {2} + \frac {1}{1 6 \pi^ {2}} \| \nabla^ {2} f \| _ {L ^ {2} (\mathbb {R} ^ {d})} ^ {2} + \text{other terms}. \tag {22}
$$

In an ideal world, we would use this expression as a penalty on the discriminator to enforce (D3).
Of course, as an infinite series, this formulation is computationally impractical. However, the first two terms are very close to common GAN techniques such as gradient penalties (Gulrajani et al., 2017) and penalizing the output of the discriminator (Karras et al., 2018). We therefore interpret these common practices as partially applying the penalty given by the squared RKHS norm, approximately enforcing (D3). We view keeping only the leading terms as an imperfect but practical necessity.

Interestingly, according to our analysis, gradient penalties and spectral normalization are not interchangeable, even though both techniques were designed to constrain the Lipschitz constant of the discriminator. Instead, our analysis suggests that they serve different purposes: gradient penalties enforce the variational smoothness (D3), while spectral normalization enforces Lipschitz continuity (D1). This demystifies the puzzling observation of Miyato (2018) that GANs using only spectral normalization with a WGAN loss do not seem to train well; it also explains why using both spectral normalization and a gradient penalty is a reasonable strategy. It also motivates the use of gradient penalties applied to losses other than the Wasserstein loss (Fedus et al., 2018).

# 7 VERIFYING THE THEORETICAL LEARNING RATE

In this section, we empirically test the theoretical learning rate given by Theorem 1 and Proposition 1, as well as our regularization scheme (7) based on inf-convolutions.
We approximately implement our composite regularization scheme (7) on a trivial base loss of $J(\mu) = \chi \{\mu = \mu_0\}$ by alternating stochastic gradient steps on

$$
\inf _ {\mu} \sup _ {\varphi} \mathbb {E} _ {x \sim \mu} [ \varphi (x) ] - \mathbb {E} _ {x \sim \mu_ {0}} [ \varphi (x) ] - \frac {\pi}{\beta_ {2}} \mathbb {E} _ {x \sim \tilde {\mu}} \left[ \varphi (x) ^ {2} + \frac {1}{4 \pi} \| \nabla \varphi (x) \| ^ {2} \right], \tag {23}
$$

where $\tilde{\mu}$ is a random interpolate between samples from $\mu$ and $\mu_0$, as used in Gulrajani et al. (2017). The regularization term is a truncation of the series for the squared RKHS norm (22) and approximately enforces (D3). The discriminator is a 7-layer convolutional neural network with spectral normalization and ELU activations, an architecture that enforces (D1) and (D2). We include a final scalar multiplication by $\alpha$ so that, by Proposition 9, $\beta_{1} = 7\alpha$. We take two discriminator steps for every generator step, to better approximate our assumption of an optimal discriminator.

![](images/b53ceac48d7bbda054e49c2ee7964bfa69b11e3de4b2abe5738ea555ada2aa6c.jpg)
Figure 1: FID of simple particle generators with various learning rates. The horizontal axis shows the ratio $\gamma / \gamma_0$ between the actual learning rate $\gamma$ and the theoretical learning rate $\gamma_0$ suggested by Theorem 1 and Proposition 1. The vertical axis shows the FID after 100,000 SGD iterations.

For the generator, we use an extremely simple particle-based generator that satisfies (G1) and (G2), in order to minimize the number of confounding factors in our experiment. Let $\omega$ be the discrete uniform distribution on $Z = \{1,\dots ,N\}$. For an $N\times d$ matrix $\theta$ and $z\in Z$, define $f_{\theta}:Z\to \mathbb{R}^{d}$ so that $f_{\theta}(z)$ is the $z$th row of $\theta$.
The particle generator $\mu_{\theta} = f_{\theta \#}\omega$ satisfies (G1) with $A = \frac{1}{\sqrt{N}}$, since

$$
\mathbb {E} _ {z} \left[ \| f _ {\theta} (z) - f _ {\theta^ {\prime}} (z) \| _ {2} \right] = \frac {1}{N} \sum_ {z = 1} ^ {N} \| \theta_ {z} - \theta_ {z} ^ {\prime} \| _ {2} \leq \frac {1}{\sqrt {N}} \| \theta - \theta^ {\prime} \| _ {\mathrm {F}}, \tag {24}
$$

and it satisfies (G2) with $B = 0$, since $D_{\theta}f_{\theta}(z)$ is constant w.r.t. $\theta$. With this setup, Theorem 1 suggests a theoretical learning rate of

$$
\gamma_ {0} = \frac {1}{L} = \frac {1}{\alpha B + A ^ {2} \left(\beta_ {1} + \beta_ {2}\right)} = \frac {N}{7 \alpha + \beta_ {2}}. \tag {25}
$$

We randomly generated hyperparameter settings for the Lipschitz constant $\alpha$, the smoothness constant $\beta_{2}$, the number of particles $N$, and the learning rate $\gamma$. We trained each model for 100,000 steps on CIFAR-10 and evaluated each model using the Fréchet Inception Distance (FID) of Heusel et al. (2017). We hypothesize that stability is correlated with image quality; Figure 1 plots the FID for each hyperparameter setting against the ratio of the true learning rate $\gamma$ to the theoretically motivated learning rate $\gamma_0$. We find that the best FID scores are obtained in the region where $\gamma/\gamma_0$ is between 1 and 1000. For small learning rates $\gamma/\gamma_0 \ll 1$, we observe that convergence is too slow to make reasonable progress on the objective, whereas for larger learning rates $\gamma/\gamma_0 \gg 1$, we observe a steady increase in FID, signalling unstable behavior. It also makes sense that learning rates slightly above the optimal rate produce good results, since our theoretical learning rate is a conservative lower bound. Note that our intention is to test our theory, not to generate good images, which is difficult due to our weak choice of generator.
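Both (24) and (25) are easy to check numerically. The sketch below verifies the Cauchy-Schwarz step behind (G1) for random particle matrices and evaluates the resulting $\gamma_0$; the hyperparameter values are arbitrary placeholders, not the settings sampled in the experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 128, 16
theta = rng.standard_normal((N, d))    # row z of theta is the particle f_theta(z)
theta2 = rng.standard_normal((N, d))

# Left side of (24): E_z ||f_theta(z) - f_theta'(z)||_2 for uniform z.
lhs = np.linalg.norm(theta - theta2, axis=1).mean()
# Right side: (1/sqrt(N)) * Frobenius distance, i.e. (G1) with A = 1/sqrt(N).
rhs = np.linalg.norm(theta - theta2) / np.sqrt(N)

# Theoretical learning rate (25) for hypothetical alpha and beta_2.
alpha, beta2 = 1.0, 2.0 * np.pi
gamma0 = N / (7.0 * alpha + beta2)
```

The inequality `lhs <= rhs` is exactly Cauchy-Schwarz applied to the $N$ row norms, and $\gamma_0$ grows linearly in the number of particles, matching (25).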
Overall, this experiment shows that our theory and regularization scheme are sensible.

# 8 FUTURE WORK

Inexact gradient descent In this paper, we employed several assumptions in order to regard the GAN algorithm as gradient descent. However, real-world GAN algorithms must be treated as "inexact" descent algorithms. As such, future work includes: (i) relaxing the optimal discriminator assumption (cf. Sanjabi et al. (2018)) or providing a stability result for discrete simultaneous gradient descent (cf. the continuous-time analyses in Nagarajan & Kolter (2017); Mescheder et al. (2018)), (ii) addressing stochastic approximations of gradients (i.e., SGD), and (iii) providing error bounds for the truncated gradient penalty used in (23).

Generator architectures Another important direction of research is to seek more powerful generator architectures that satisfy our smoothness assumptions (G1) and (G2). In practice, generators are often implemented as deep neural networks and involve specific architectures such as deconvolution layers (Radford et al., 2015) and residual blocks (e.g., Gulrajani et al. (2017); Miyato et al. (2018)). In this paper, we did not provide results on the smoothness of general classes of generators, since our focus is to analyze stability properties influenced by the choice of loss function $J$ (and therefore of optimal discriminators). However, our conditions (G1) and (G2) shed light on how to obtain smoothly parameterized neural networks, which we leave for future work.

# ACKNOWLEDGMENTS

We would like to thank Kohei Hayashi, Katsuhiko Ishiguro, Masanori Koyama, Shin-ichi Maeda, Takeru Miyato, Masaki Watanabe, and Shoichiro Yamaguchi for helpful discussions.

# REFERENCES

Luigi Ambrosio, Nicola Gigli, and Giuseppe Savaré. Gradient Flows in Metric Spaces and in the Space of Probability Measures. Birkhäuser, second edition, 2008.
Michael Arbel, Dougal Sutherland, Mikołaj Binkowski, and Arthur Gretton.
On gradient regularizers for MMD GANs. In Advances in Neural Information Processing Systems, pp. 6700-6710, 2018.
Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. In International Conference on Learning Representations, 2017.
Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In International Conference on Machine Learning, pp. 214-223, 2017.
Heinz H. Bauschke and Patrick L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, 2011.
Dimitri P. Bertsekas. Nonlinear Programming. Athena Scientific, 2nd edition, 1999.
Léon Bottou, Martin Arjovsky, David Lopez-Paz, and Maxime Oquab. Geometrical insights for implicit generative modeling. In Braverman Readings in Machine Learning: Key Ideas from Inception to Current State, pp. 229-268. Springer, 2018.
Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. In International Conference on Learning Representations, 2019.
Sébastien Bubeck. Convex optimization: Algorithms and complexity. Foundations and Trends in Machine Learning, 8(3-4):231-357, 2015.
Casey Chu, Jose Blanchet, and Peter Glynn. Probability functional descent: A unifying perspective on GANs, variational inference, and reinforcement learning. In International Conference on Machine Learning, 2019.
Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). In International Conference on Learning Representations, 2016.
Farzan Farnia and David Tse. A convex duality framework for GANs. In Advances in Neural Information Processing Systems, pp. 5248-5258, 2018.
Gregory E Fasshauer and Qi Ye. Reproducing kernels of generalized Sobolev spaces via a Green function approach with distributional operators. Numerische Mathematik, 119(3):585-611, 2011.
William Fedus, Mihaela Rosca, Balaji Lakshminarayanan, Andrew M Dai, Shakir Mohamed, and Ian Goodfellow. Many paths to equilibrium: GANs do not need to decrease a divergence at every step. In International Conference on Learning Representations, 2018.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.
Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. Journal of Machine Learning Research, 13(1):723-773, 2012.
Paulina Grnarova, Kfir Y Levy, Aurelien Lucchi, Thomas Hofmann, and Andreas Krause. An online learning approach to generative adversarial networks. In International Conference on Learning Representations, 2018.

Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of Wasserstein GANs. In Advances in Neural Information Processing Systems, pp. 5767-5777, 2017.
Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, 2017.
J-B Hiriart-Urruty. Extension of Lipschitz functions. Journal of Mathematical Analysis and Applications, 77(2):539-554, 1980.
Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. In International Conference on Learning Representations, 2018.
M. Kirszbraun. Über die zusammenziehende und Lipschitzsche Transformationen. Fundamenta Mathematicae, 22(1):77-108, 1934.
Naveen Kodali, Jacob Abernethy, James Hays, and Zsolt Kira. On convergence and stability of GANs. arXiv preprint arXiv:1705.07215, 2017.
Chun-Liang Li, Wei-Cheng Chang, Yu Cheng, Yiming Yang, and Barnabas Poczos.
MMD GAN: Towards deeper understanding of moment matching network. In Advances in Neural Information Processing Systems, pp. 2203-2213, 2017a. +Jerry Li, Aleksander Madry, John Peebles, and Ludwig Schmidt. On the limitations of first-order approximation in GAN dynamics. In International Conference on Machine Learning, 2017b. +Yujia Li, Kevin Swersky, and Rich Zemel. Generative moment matching networks. In International Conference on Machine Learning, pp. 1718-1727, 2015. +Shuang Liu, Olivier Bousquet, and Kamalika Chaudhuri. Approximation and convergence properties of generative adversarial learning. In Advances in Neural Information Processing Systems, pp. 5545-5553, 2017. +Edward James McShane. Extension of range of functions. Bulletin of the American Mathematical Society, 40(12):837-842, 1934. +Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for GANs do actually converge? In International Conference on Machine Learning, 2018. +Takeru Miyato. How to combine WGAN and spectral norm? https://github.com/pfnet-research/sngan_projection/issues/15, 2018. Accessed: 2019-09-24. +Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In International Conference on Learning Representations, 2018. +Alfred Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29(2):429-443, 1997. +Vaishnavh Nagarajan and J. Zico Kolter. Gradient descent GAN optimization is locally stable. In Advances in Neural Information Processing Systems, 2017. +Erich Novak, Mario Ullrich, Henryk Wozniakowski, and Shun Zhang. Reproducing kernels of Sobolev spaces on $\mathbb{R}^d$ and applications to embedding constants and tractability. Analysis and Applications, 16(05):693-715, 2018. +Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. 
In Advances in Neural Information Processing Systems, pp. 271-279, 2016.
Frans A Oliehoek, Rahul Savani, Jose Gallego, Elise van der Pol, and Roderich Groß. Beyond local Nash equilibria for adversarial networks. arXiv preprint arXiv:1806.07268, 2018.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
R. T. Rockafellar and R. J-B Wets. Variational Analysis, volume 317 of Grundlehren der mathematischen Wissenschaften. Springer-Verlag, Berlin, Heidelberg, 1998.
Walter Rudin. Principles of Mathematical Analysis. McGraw-Hill New York, 3rd edition, 1964.
Maziar Sanjabi, Jimmy Ba, Meisam Razaviyayn, and Jason D Lee. On the convergence and robustness of training GANs with regularized optimal transport. In Advances in Neural Information Processing Systems, 2018.
Aaron Sidford. MS&E 213 / CS 2690: Chapter 5 - smooth convex generalizations, 2017. URL http://www.aaronsidford.com/chap_5Extend_v7.pdf.
Bharath K. Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Gert R Lanckriet, and Bernhard Schölkopf. Kernel choice and classifiability for RKHS embeddings of probability distributions. In Advances in Neural Information Processing Systems, pp. 1750-1758, 2009.
Thomas Strömberg. The operation of infimal convolution. Instytut Matematyczny Polskiej Akademii Nauk, 1996. URL http://eudml.org/doc/271746.
Thomas Unterthiner, Bernhard Nessler, Calvin Seward, Günter Klambauer, Martin Heusel, Hubert Ramsauer, and Sepp Hochreiter. Coulomb GANs: Provably optimal Nash equilibria via potential fields. In International Conference on Learning Representations, 2018.
Cédric Villani. Optimal Transport: Old and New. Springer, 2009.
Hassler Whitney. Analytic extensions of differentiable functions defined in closed sets. Transactions of the American Mathematical Society, 36(1):63-89, 1934.
Xingyu Zhou.
On the Fenchel duality between strong convexity and Lipschitz continuous gradient. arXiv preprint arXiv:1803.06573, 2018.

# A INF-CONVOLUTION IN $\mathbb{R}^d$

To gain intuition on the inf-convolution, we present a finite-dimensional analogue of the techniques in Section 3. For simplicity of presentation, we will omit any regularity conditions (e.g., lower semicontinuity). We refer readers to Chapter 12 of Bauschke & Combettes (2011) for a detailed introduction.

Let $J$ and $R$ be convex functions on $\mathbb{R}^d$ . The inf-convolution of $J$ and $R$ is the function $J \oplus R$ defined as

$$
(J \oplus R) (x) := \inf _ {z \in \mathbb {R} ^ {d}} J (z) + R (x - z).
$$

The inf-convolution is often called the epigraphic sum since the epigraph of $J \oplus R$ coincides with the Minkowski sum of the epigraphs of $J$ and $R$ , as Figure 2 illustrates. The inf-convolution is an associative and commutative operation; that is, it is always true that $(J_1 \oplus J_2) \oplus J_3 = J_1 \oplus (J_2 \oplus J_3) =: J_1 \oplus J_2 \oplus J_3$ and $J_1 \oplus J_2 = J_2 \oplus J_1$ .

There are two important special cases of inf-convolutions: The first one is the Pasch-Hausdorff envelope $J_{\alpha}$ , which is the inf-convolution between $J$ and $\alpha \| \cdot \|_2$ ( $\alpha > 0$ ). It is known that $J_{\alpha}$ becomes $\alpha$ -Lipschitz. The second important example is the Moreau envelope $J^{\beta} = J \oplus \frac{1}{2\beta} \| \cdot \|_2^2$ , i.e., the inf-convolution with the quadratic regularizer $\frac{1}{2\beta} \| \cdot \|_2^2$ . The Moreau envelope $J^{\beta}$ is always differentiable, and the gradient of $J^{\beta}$ is $\beta$ -Lipschitz (thus $J^{\beta}$ is $\beta$ -smooth).

It is worth noting that the set of minimizers does not change after these two operations. More generally, we have the following result:

Proposition 15.
Let $J, R: \mathbb{R}^d \to \bar{\mathbb{R}}$ be proper and lower semicontinuous functions with $\min J > -\infty$ and $\min R = 0$ . Suppose $R(0) = 0$ and $R(x) \geq \psi(\|x\|_2)$ for some increasing function $\psi: \mathbb{R}_{\geq 0} \to \mathbb{R}$ . Then, $\min J \oplus R = \min J$ and $\arg \min J \oplus R = \arg \min J$ . + +![](images/c601716dcaf5404e9e18e10d2c617f87a5b1287b2876c9a785dec3d7442d4894.jpg) + +![](images/52b7058f125d541840ee62b38fa0ccc8e7794bfef67e7d7b48d83c2a5623e880.jpg) + +![](images/1b9482b216f8786d5a61bab1ee47874db93a1c769f3682bc7cc8790a420f2d12.jpg) + +![](images/6afe39455d3521bdd1687a0c44bed9c1dd02221248e4f3bf286a85c40a499646.jpg) +Figure 2: Illustration of inf-convolutions and their convex conjugates in $\mathbb{R}$ . Top row: Generally, inf-convolutions inherit the regularity of their parent functions. In this example, $J_{1}(x) = |x|$ is 1-Lipschitz but not smooth, while $J_{2}(x) = x^{2} / 2$ is 1-smooth but not Lipschitz. The inf-convolution $J_{1} \oplus J_{2}$ is the well-known Huber function, which is both 1-Lipschitz and 1-smooth. Bottom row: Convex conjugates of the functions in the top row. The conjugate of $J_{1} \oplus J_{2}$ is given as the sum of conjugates $J_{1}^{\star}(z) = \chi\{|z| \leq 1\}$ and $J_{2}^{\star}(z) = z^{2} / 2$ . + +![](images/746ee5118d446aaa615615547679497e32c5fe65b835fac3b29ea2597be1127a.jpg) + +![](images/e3f770521b0bf13336ed3c4ec6317677b2999b2abdc0bc05523522ec41327fb5.jpg) + +To sum up, given a function $J$ , we can always construct a regularized alternative $J_{\alpha}^{\beta}$ that is $\alpha$ -Lipschitz and $\beta$ -smooth and has the same minimizers as $J$ . + +The next question is how to implement the inf-convolution in GAN-like optimization problems. For this, it is convenient to consider the convex conjugate. 
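The Huber example in Figure 2 can be verified numerically. A minimal sketch (grid resolution is an arbitrary choice) that evaluates $(J_1 \oplus J_2)(x) = \inf_z J_1(z) + J_2(x - z)$ by brute force over a grid and compares it with the closed-form Huber function:

```python
import numpy as np

zs = np.linspace(-5.0, 5.0, 2001)  # grid over which the infimum is taken

def inf_conv(J1, J2, x):
    """Brute-force evaluation of (J1 (+) J2)(x) = inf_z J1(z) + J2(x - z)."""
    return float(np.min(J1(zs) + J2(x - zs)))

J1 = np.abs                   # 1-Lipschitz but not smooth
J2 = lambda u: u ** 2 / 2.0   # 1-smooth but not Lipschitz

def huber(x):
    """Closed-form inf-convolution of |.| and (.)^2/2 (the Huber function)."""
    return x ** 2 / 2.0 if abs(x) <= 1.0 else abs(x) - 0.5

for x in [-2.0, -0.5, 0.0, 0.7, 3.0]:
    assert abs(inf_conv(J1, J2, x) - huber(x)) < 1e-6
```

The same brute-force evaluation applies to the Pasch-Hausdorff and Moreau envelopes by swapping in $\alpha \| \cdot \|_2$ or $\frac{1}{2\beta} \| \cdot \|_2^2$ for one of the parent functions.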
Recall that the Fenchel-Moreau theorem says that there is a duality between a convex function $J$ and its convex conjugate $J^{\star}$ as $J(x) = \sup_{z\in \mathbb{R}^{d}}\langle x,z\rangle -J^{\star}(z)$ and $J^{\star}(z) = \sup_{x\in \mathbb{R}^{d}}\langle x,z\rangle -J(x)$ . The important property is that the convex conjugate of the inf-convolution is the sum of the convex conjugates; that is, we always have

$$
(J \oplus R) ^ {\star} (z) = J ^ {\star} (z) + R ^ {\star} (z).
$$

This property can be useful for implementing the regularized objective $J_{\alpha}^{\beta}$ as follows. First, we can check that the convex conjugates of the norm and the squared norm are given as $(\| \cdot \|_2)^\star = \chi \{ \| \cdot \|_2 \leq 1 \}$ and $(\frac{1}{2} \| \cdot \|_2^2)^\star = \frac{1}{2} \| \cdot \|_2^2$ . Hence, we have

$$
J _ {\alpha} ^ {\beta} (x) := \left(J \oplus \alpha \| \cdot \| _ {2} \oplus \frac {1}{2 \beta} \| \cdot \| _ {2} ^ {2}\right) (x) = \sup _ {z: \| z \| _ {2} \leq \alpha} \langle x, z \rangle - J ^ {\star} (z) - \frac {\beta}{2} \| z \| _ {2} ^ {2},
$$

which means that minimizing $J_{\alpha}^{\beta}$ can be recast as a min-max problem with norm clipping and $\ell_2$ -regularization on the dual variable $z$ .

# B COMMON GAN LOSSES

For completeness and clarity, we explicitly write out the expressions for the losses listed in Table 1. For more detailed computations of optimal discriminators, see Chu et al. (2019); for more details on the convex duality interpretation, see Farnia & Tse (2018).

Minimax GAN Goodfellow et al.
(2014) originally proposed the minimax GAN and showed that the corresponding loss function for the minimax GAN is the Jensen-Shannon divergence, defined as

$$
J (\mu) := D _ {\mathrm {J S}} (\mu \| \mu_ {0}) := \frac {1}{2} D _ {\mathrm {K L}} \left(\mu \Big \| \frac {1}{2} \mu + \frac {1}{2} \mu_ {0}\right) + \frac {1}{2} D _ {\mathrm {K L}} \left(\mu_ {0} \Big \| \frac {1}{2} \mu + \frac {1}{2} \mu_ {0}\right),
$$

where $\mu_0\in \mathcal{P}(X)$ is a fixed probability measure (usually the empirical measure of the data), and $D_{\mathrm{KL}}(\mu \| \nu)$ is the Kullback-Leibler divergence between $\mu$ and $\nu$ . The optimal discriminator in the sense of Definition 1 is given as

$$
\Phi_ {\mu} (x) = \frac {1}{2} \log \frac {d \mu}{d (\mu + \mu_ {0})} (x),
$$

where $\frac{d\mu}{d(\mu + \mu_0)}$ is the Radon-Nikodym derivative. If $\mu$ and $\mu_0$ have densities $\mu (x)$ and $\mu_0(x)$ , then

$$
\frac {d \mu}{d (\mu + \mu_ {0})} (x) = \frac {\mu (x)}{\mu (x) + \mu_ {0} (x)},
$$

so our optimal discriminator matches that of Goodfellow et al. (2014) up to a constant factor and logarithm. To recover the minimax formulation, the convex duality (4) yields:

$$
\begin{array}{l} \inf _ {\mu} D _ {\mathrm {J S}} (\mu \| \mu_ {0}) = \inf _ {\mu} \sup _ {\varphi} \mathbb {E} _ {x \sim \mu} [ \varphi (x) ] - (\underbrace {- \frac {1}{2} \mathbb {E} _ {x \sim \mu_ {0}} [ \log (1 - e ^ {2 \varphi (x) + \log 2}) ] - \frac {1}{2} \log 2} _ {(D _ {\mathrm {J S}} (\cdot \| \mu_ {0})) ^ {\star} (\varphi)}) \\ = \inf _ {\mu} \sup _ {D} \frac {1}{2} \mathbb {E} _ {x \sim \mu} [ \log (1 - D (x)) ] + \frac {1}{2} \mathbb {E} _ {x \sim \mu_ {0}} [ \log D (x) ], \\ \end{array}
$$

using the substitution $\varphi = \frac{1}{2}\log (1 - D) - \frac{1}{2}\log 2$ .

Non-saturating GAN Goodfellow et al. (2014) also proposed the heuristic non-saturating GAN.
Theorem 2.5 of Arjovsky & Bottou (2017) shows that the loss function minimized is + +$$ +J (\mu) := D _ {\mathrm {K L}} \left(\frac {1}{2} \mu + \frac {1}{2} \mu_ {0} | | \mu_ {0}\right) = \frac {1}{2} D _ {\mathrm {K L}} (\mu | | \mu_ {0}) - D _ {\mathrm {J S}} (\mu | | \mu_ {0}). +$$ + +The optimal discriminator is + +$$ +\Phi_ {\mu} (x) = - \frac {1}{2} \log \frac {d \mu_ {0}}{d (\mu + \mu_ {0})} (x). +$$ + +Wasserstein GAN Arjovsky et al. (2017) proposed the Wasserstein GAN, which minimizes the Wasserstein-1 distance between the input $\mu$ and a fixed measure $\mu_0$ : + +$$ +J (\mu) := W _ {1} (\mu , \mu_ {0}) := \inf _ {\pi} \mathbb {E} _ {(x, y) \sim \pi} [ | | x - y | | ], +$$ + +where the infimum is taken over all couplings $\pi$ , probability distributions over $X \times X$ whose marginals are $\mu$ and $\mu_0$ respectively. The optimal discriminator $\Phi_{\mu}$ is called the Kantorovich potential in the optimal transport literature (Villani, 2009). The convex duality (4) recovers the Wasserstein GAN: + +$$ +\begin{array}{l} \inf _ {\mu} W _ {1} (\mu , \mu_ {0}) = \inf _ {\mu} \sup _ {\varphi} \mathbb {E} _ {x \sim \mu} [ \varphi (x) ] - (\underbrace {\mathbb {E} _ {x \sim \mu_ {0}} [ \varphi (x) ] + \chi \{\| \varphi \| _ {\mathrm {L i p}} \leq 1 \}} _ {(W _ {1} (\cdot , \mu_ {0})) ^ {\star} (\varphi)}) \\ = \inf _ {\mu} \sup _ {| | \varphi | | _ {\mathrm {L i p}} \leq 1} \mathbb {E} _ {x \sim \mu} [ \varphi (x) ] - \mathbb {E} _ {x \sim \mu_ {0}} [ \varphi (x) ], \\ \end{array} +$$ + +an expression of Kantorovich-Rubinstein duality. The Lipschitz constraint on the discriminator is typically enforced by spectral normalization (Miyato et al., 2018), less frequently by weight clipping (Arjovsky et al., 2017), or heuristically by gradient penalties (Gulrajani et al., 2017) (although this work shows that gradient penalties may serve a different purpose altogether). 
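In one dimension, both sides of the Kantorovich-Rubinstein duality above are easy to evaluate, since the optimal coupling between two empirical measures with equally many atoms simply matches sorted samples. A minimal sketch (the Gaussian test distributions are an illustrative choice, not from our experiments):

```python
import numpy as np

def w1_empirical(x, y):
    """Primal W1 between empirical measures on R with equally many atoms:
    in 1D the optimal coupling pairs sorted samples."""
    return float(np.mean(np.abs(np.sort(x) - np.sort(y))))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=10000)  # samples from mu
y = rng.normal(2.0, 1.0, size=10000)  # samples from mu_0

primal = w1_empirical(x, y)

# Dual side: phi(u) = -u is 1-Lipschitz, and for these two distributions
# (a pure shift by 2) it attains the supremum E_mu[phi] - E_mu0[phi].
dual = float(np.mean(-x) - np.mean(-y))

assert abs(primal - 2.0) < 0.1 and abs(primal - dual) < 0.1
```

The near-agreement of `primal` and `dual` (both close to the mean shift of 2) is exactly the Kantorovich-Rubinstein duality displayed above.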
Maximum mean discrepancy Given a positive definite kernel $K: X \times X \to \mathbb{R}$ , the maximum mean discrepancy (MMD, Gretton et al. (2012)) between $\mu$ and $\mu_0$ is defined by

$$
J (\mu) := \frac {1}{2} \mathrm {M M D} _ {K} ^ {2} (\mu , \mu_ {0}) := \frac {1}{2} \int K (x, y) (\mu - \mu_ {0}) (d x) (\mu - \mu_ {0}) (d y),
$$

which equals $\frac{1}{2} \| \hat{\mu} - \hat{\mu}_0 \|_{\mathcal{H}}^2$ in terms of the kernel mean embeddings $\hat{\mu}, \hat{\mu}_0$ , where $(\mathcal{H},\| \cdot \|_{\mathcal{H}})$ is the reproducing kernel Hilbert space (RKHS) for $K$ . The generative moment-matching network (GMMN, Li et al. (2015)) and the Coulomb GAN (Unterthiner et al., 2018) use the squared MMD as the loss function. The optimal discriminator in this case is

$$
\Phi_ {\mu} (x) = \mathbb {E} _ {y \sim \mu} [ K (x, y) ] - \mathbb {E} _ {y \sim \mu_ {0}} [ K (x, y) ],
$$

which, in contrast to other GANs, may be approximated by simple Monte Carlo rather than by solving an auxiliary optimization problem.

Note that MMD-GANs (Li et al., 2017a; Arbel et al., 2018) minimize a modified version of the MMD, the Optimized MMD (Sriperumbudur et al., 2009; Arbel et al., 2018). These MMD-GANs are adversarial in a way that does not arise from convex duality, so our theory currently does not apply to these GANs.

Integral probability metrics An integral probability metric (Müller, 1997) is defined by

$$
J (\mu) := \operatorname {I P M} _ {\mathcal {F}} (\mu , \mu_ {0}) := \sup _ {f \in \mathcal {F}} \int f d \mu - \int f d \mu_ {0},
$$

where $\mathcal{F}$ is a class of functions. The optimal discriminator is the function that maximizes the supremum in the definition. The Wasserstein distance may be thought of as an IPM with $\mathcal{F}$ containing all 1-Lipschitz functions. The MMD may be thought of as an IPM with $\mathcal{F}$ containing all functions with RKHS norm at most 1, but no GANs based on the MMD are actually trained this way, as it is difficult to constrain the discriminator to such functions.
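The Monte Carlo approximation of the MMD discriminator mentioned above is a one-liner. A minimal sketch with a Gaussian kernel (bandwidth, distributions, and sample sizes are arbitrary illustrative choices):

```python
import numpy as np

def witness(x, mu_samples, mu0_samples, bw=1.0):
    """Monte Carlo estimate of Phi_mu(x) = E_{y~mu}[K(x,y)] - E_{y~mu0}[K(x,y)]
    for the Gaussian kernel K(x, y) = exp(-(x - y)^2 / (2 * bw^2))."""
    k = np.exp(-(x - mu_samples) ** 2 / (2.0 * bw ** 2))
    k0 = np.exp(-(x - mu0_samples) ** 2 / (2.0 * bw ** 2))
    return float(np.mean(k) - np.mean(k0))

rng = np.random.default_rng(0)
mu = rng.normal(-2.0, 1.0, size=5000)   # generator samples
mu0 = rng.normal(+2.0, 1.0, size=5000)  # data samples

# The witness is positive where mu overrepresents mass relative to mu0,
# and negative where mu0 dominates.
assert witness(-2.0, mu, mu0) > 0.0 > witness(+2.0, mu, mu0)
```

No inner optimization loop is needed; this is precisely why MMD-based losses avoid the auxiliary discriminator problem that the other losses in this appendix require.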
# C OPTIMAL DISCRIMINATORS ARE FUNCTIONAL DERIVATIVES

Let $J:\mathcal{P}(X)\to \bar{\mathbb{R}}$ be a convex function. Recall the definition of the optimal discriminator (Definition 1):

$$
\Phi_ {\mu} \in \operatorname* {argmax} _ {\varphi \in \mathcal {C} (X)} \int \varphi d \mu - J ^ {\star} (\varphi).
$$

This definition can be understood as an infinite-dimensional analogue of subgradients. In fact, in finite-dimensional convex analysis, $z$ is a subgradient of $f: \mathbb{R}^d \to \mathbb{R}$ if and only if it can be written as $z \in \arg \max_{z'} \langle z', x \rangle - f^*(z')$ . The calculus of subgradients shares many properties with the standard calculus of derivatives, such as chain rules (Rockafellar & Wets, 1998). This motivates us to investigate derivative-like features of optimal discriminators.

We introduce the functional derivative, also known as the von Mises influence function:

Definition 3 (Functional derivative). Let $J: \mathcal{P}(X) \to \bar{\mathbb{R}}$ be a function of probability measures. We say that a continuous function $\Phi_{\mu}: X \to \mathbb{R}$ is a functional derivative of $J$ at $\mu$ if

$$
J (\mu + \epsilon \xi) = J (\mu) + \epsilon \int \Phi_ {\mu} d \xi + O (\epsilon^ {2})
$$

holds for any $\xi = \nu -\mu$ with $\nu \in \mathcal{P}(X)$ .

Under this definition, optimal discriminators are actually functional derivatives.

Proposition 16 (Chu et al. (2019), Theorem 2). Let $J: \mathcal{M}(X) \to \bar{\mathbb{R}}$ be a proper, lower semicontinuous, and convex function. If there exists a maximizer $\Phi_{\mu}$ of $\varphi \mapsto \int \varphi d\mu - J^{\star}(\varphi)$ , then $\Phi_{\mu}$ is a functional derivative of $J$ at $\mu$ in the sense of Definition 3.

The following result relates the derivative of the loss function with the derivative of the optimal discriminator:

Proposition 17 (Chu et al. (2019), Theorem 1).
Let $J:\mathcal{P}(X)\to \mathbb{R}$ be continuously differentiable, in the sense that the functional derivative $\Phi_{\mu}$ exists and $(\mu ,\nu)\mapsto \mathbb{E}_{x\sim \nu}[\Phi_{\mu}(x)]$ is continuous. Let $\theta \mapsto \mu_{\theta}$ be continuous in the sense that $\frac{1}{\|h\|} (\mu_{\theta +h} - \mu_{\theta})$ converges to a weak limit as $\| h\| \to 0$ . Then, we have

$$
\nabla_ {\theta} J (\mu_ {\theta}) = \nabla_ {\theta} \mathbb {E} _ {x \sim \mu_ {\theta}} [ \hat {\Phi} (x) ],
$$

where $\hat{\Phi} = \Phi_{\mu_\theta}$ is treated as a function $X \to \mathbb{R}$ that does not depend on $\theta$ .

We use this important computational tool in many of our proofs. For the case of the generator model $\mu_{\theta} = f_{\theta \#}\omega$ , an important consequence of Proposition 17 is that

$$
\nabla_ {\theta} J (\mu_ {\theta}) = \nabla_ {\theta} \mathbb {E} _ {z \sim \omega} [ \hat {\Phi} (f _ {\theta} (z)) ] = \mathbb {E} _ {z \sim \omega} [ \nabla \hat {\Phi} (f _ {\theta} (z)) \cdot D _ {\theta} f _ {\theta} (z) ].
$$

We use this fact in the proof of Theorem 1.

# D PROOFS FOR SECTIONS 1 AND 2

The following result is well known in the dynamical systems and optimization literature. For the sake of completeness, we provide its proof.

Proposition 1 (Bertsekas (1999), Proposition 1.2.3). Suppose $f: \mathbb{R}^p \to \mathbb{R}$ is $L$ -smooth and bounded below. Let $x_{k+1} := x_k - \frac{1}{L} \nabla f(x_k)$ . Then $||\nabla f(x_k)|| \to 0$ as $k \to \infty$ .

Proof. It is known that if $f$ is $L$ -smooth, then

$$
f (y) \leq f (x) + \langle \nabla f (x), y - x \rangle + \frac {L}{2} | | y - x | | ^ {2}
$$

holds for any $x, y \in \mathbb{R}^p$ (see e.g. Lemma 3.4 in Bubeck (2015)).
Then, we have + +$$ +\begin{array}{l} f (x _ {k + 1}) \leq f (x _ {k}) + \langle \nabla f (x _ {k}), x _ {k + 1} - x _ {k} \rangle + \frac {L}{2} | | x _ {k + 1} - x _ {k} | | ^ {2} \\ \leq f (x _ {k}) - \frac {1}{L} | | \nabla f (x _ {k}) | | ^ {2} + \frac {1}{2 L} | | \nabla f (x _ {k}) | | ^ {2} \\ = f (x _ {k}) - \frac {1}{2 L} | | \nabla f (x _ {k}) | | ^ {2}. \\ \end{array} +$$ + +Summing this inequality over $k$ , we have + +$$ +\frac {1}{2 L} \sum_ {k = 0} ^ {n - 1} | | \nabla f (x _ {k}) | | ^ {2} \leq f (x _ {0}) - f (x _ {n}), +$$ + +from which we conclude that + +$$ +\min _ {0 \leq k \leq n - 1} | | \nabla f (x _ {k}) | | ^ {2} \leq \frac {2 L}{n} (f (x _ {0}) - \inf _ {x} f (x)). +$$ + +![](images/148826ae1398fe8c0eec9a75c444b7ada6763a1996808def003544f888d6f90f.jpg) + +Next, we move on to prove Theorem 1, which we restate here for readability. + +Theorem 1 (Smoothness decomposition for GANs). Let $J: \mathcal{M}(X) \to \bar{\mathbb{R}}$ be a convex function whose optimal discriminators $\Phi_{\mu}: X \to \mathbb{R}$ satisfy the following regularity conditions: + +(D1) $x\mapsto \Phi_{\mu}(x)$ is $\alpha$ -Lipschitz, +(D2) $x\mapsto \nabla_{x}\Phi_{\mu}(x)$ is $\beta_{1}$ -Lipschitz, +(D3) $\mu \mapsto \nabla_{x}\Phi_{\mu}(x)$ is $\beta_{2}$ -Lipschitz w.r.t. the 1-Wasserstein distance. + +Also, let $\mu_{\theta} = f_{\theta \#}\omega$ be a family of generators that satisfies: + +(G1) $\theta \mapsto f_{\theta}(z)$ is $A$ -Lipschitz in expectation for $z \sim \omega$ , i.e., $\mathbb{E}_{z \sim \omega}[\|f_{\theta_1}(z) - f_{\theta_2}(z)\|_2] \leq A\|\theta_1 - \theta_2\|_2$ , and +(G2) $\theta \mapsto D_{\theta}f_{\theta}(z)$ is $B$ -Lipschitz in expectation for $z \sim \omega$ , i.e., $\mathbb{E}_{z \sim \omega}[\|D_{\theta_1}f_{\theta_1}(z) - D_{\theta_2}f_{\theta_2}(z)\|_2] \leq B\|\theta_1 - \theta_2\|_2$ . + +Then $\theta \mapsto J(\mu_{\theta})$ is $L$ -smooth, with $L = \alpha B + A^{2}(\beta_{1} + \beta_{2})$ . 
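The scalar analogue of the smoothness constant in Theorem 1 is easy to check numerically. A minimal sketch with hypothetical choices $D = $ Huber (1-Lipschitz and 1-smooth) and $G = \sin$ (1-Lipschitz and 1-smooth), so that the bound $\alpha B + A^2 \beta$ on $|(D \circ G)''|$ equals 2:

```python
import numpy as np

# Hypothetical scalar instance: D is the Huber function (alpha = 1, beta = 1)
# and G = sin (A = 1, B = 1), so the smoothness bound on the composition
# is alpha*B + A**2*beta = 2.
def D(x):
    return np.where(np.abs(x) <= 1.0, x ** 2 / 2.0, np.abs(x) - 0.5)

G = np.sin

xs = np.linspace(-10.0, 10.0, 200001)
h = xs[1] - xs[0]
f = D(G(xs))

# Central second differences approximate (D o G)''.
second = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / h ** 2

assert np.max(np.abs(second)) <= 2.0 + 1e-3
```

Here the true value is $(D \circ G)''(x) = \cos 2x$ , since $|\sin x| \leq 1$ keeps $D$ in its quadratic region, so the bound holds with room to spare.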
Intuitively, Theorem 1 can be understood as the chain rule. For simplicity, let us consider the smoothness of a composite function $D \circ G$ , where $D$ and $G$ are functions on $\mathbb{R}$ . A sufficient condition for $D \circ G$ to be smooth is that its second derivative is bounded. For this, suppose that (i) $D$ is $\alpha$ -Lipschitz and $\beta$ -smooth and (ii) $G$ is $A$ -Lipschitz and $B$ -smooth. By the chain rule, the second derivative is bounded as $(D \circ G)^{\prime \prime} = (D^{\prime} \circ G)G^{\prime \prime} + (D^{\prime \prime} \circ G)(G^{\prime})^{2} \leq \alpha B + A^{2}\beta$ , which has the same form as the consequence of Theorem 1. For GANs, we need somewhat more involved conditions; we need two types of smoothness, (D2) and (D3), for the optimal discriminators. To this end, we utilize the functional gradient point of view explained in Appendix C.

Proof of Theorem 1. First, using the functional gradient interpretation of the optimal discriminator, we have

$$
\begin{array}{l} \left| \left| \nabla_ {\theta_ {1}} J \left(\mu_ {\theta_ {1}}\right) - \nabla_ {\theta_ {2}} J \left(\mu_ {\theta_ {2}}\right) \right| \right| \\ = \left\| \mathbb {E} _ {z \sim \omega} \left[ \nabla \Phi_ {\mu_ {\theta_ {1}}} \left(f _ {\theta_ {1}} (z)\right) \cdot D _ {\theta_ {1}} f _ {\theta_ {1}} - \nabla \Phi_ {\mu_ {\theta_ {2}}} \left(f _ {\theta_ {2}} (z)\right) \cdot D _ {\theta_ {2}} f _ {\theta_ {2}} \right] \right\| \quad \text {(Proposition 17)} \\ = \left\| \mathbb {E} _ {z \sim \omega} \left[ \nabla \Phi_ {\mu_ {\theta_ {1}}} \left(f _ {\theta_ {1}} (z)\right) \cdot \left(D _ {\theta_ {1}} f _ {\theta_ {1}} - D _ {\theta_ {2}} f _ {\theta_ {2}}\right) \right. \right. \\ + \left(\nabla \Phi_ {\mu_ {\theta_ {1}}} \left(f _ {\theta_ {1}} (z)\right) - \nabla \Phi_ {\mu_ {\theta_ {2}}} \left(f _ {\theta_ {1}} (z)\right)\right) \cdot D _ {\theta_ {2}} f _ {\theta_ {2}} \\ \left. 
+ \left(\nabla \Phi_ {\mu_ {\theta_ {2}}} \left(f _ {\theta_ {1}} (z)\right) - \nabla \Phi_ {\mu_ {\theta_ {2}}} \left(f _ {\theta_ {2}} (z)\right)\right) \cdot D _ {\theta_ {2}} f _ {\theta_ {2}} \right] \| \\ \leq \left\| \mathbb {E} _ {z \sim \omega} \left[ \nabla \Phi_ {\mu_ {\theta_ {1}}} \left(f _ {\theta_ {1}} (z)\right) \cdot \left(D _ {\theta_ {1}} f _ {\theta_ {1}} - D _ {\theta_ {2}} f _ {\theta_ {2}}\right) \right] \right\| \quad (a) \\ + \left\| \mathbb {E} _ {z \sim \omega} \left[ \left(\nabla \Phi_ {\mu_ {\theta_ {1}}} \left(f _ {\theta_ {1}} (z)\right) - \nabla \Phi_ {\mu_ {\theta_ {2}}} \left(f _ {\theta_ {1}} (z)\right)\right) \cdot D _ {\theta_ {2}} f _ {\theta_ {2}} \right] \right\| \quad (b) \\ + \left\| \mathbb {E} _ {z \sim \omega} \left[ \left(\nabla \Phi_ {\mu_ {\theta_ {2}}} \left(f _ {\theta_ {1}} (z)\right) - \nabla \Phi_ {\mu_ {\theta_ {2}}} \left(f _ {\theta_ {2}} (z)\right)\right) \cdot D _ {\theta_ {2}} f _ {\theta_ {2}} \right] \right\|. \quad (c) \\ \end{array}
$$

By assumption, there are bounded non-negative numbers $\alpha, \beta_1, \beta_2, A$ , and $B$ such that

$$
\begin{array}{l} \left(\mathrm {D} 1 ^ {\prime}\right) \quad \alpha = \sup _ {\mu \in \mathcal {P} (X)} \sup _ {x \sim \mu} \| \nabla_ {x} \Phi_ {\mu} (x) \| _ {2}, \\ \left(\mathrm {D} 2 ^ {\prime}\right) \quad \sup _ {\mu \in \mathcal {P} (X)} \| \nabla_ {x _ {1}} \Phi_ {\mu} (x _ {1}) - \nabla_ {x _ {2}} \Phi_ {\mu} (x _ {2}) \| _ {2} \leq \beta_ {1} \| x _ {1} - x _ {2} \| _ {2}, \\ \left(\mathrm {D} 3 ^ {\prime}\right) \quad \sup _ {x \in X} \| \nabla_ {x} \Phi_ {\mu_ {1}} (x) - \nabla_ {x} \Phi_ {\mu_ {2}} (x) \| _ {2} \leq \beta_ {2} W _ {1} (\mu_ {1}, \mu_ {2}), \\ \left(\mathrm {G} 1 ^ {\prime}\right) \quad A = \mathbb {E} _ {z \sim \omega} \sup _ {\theta} | | D _ {\theta} f _ {\theta} (z) | | _ {\mathrm {o p}}, \quad \text {and} \\ \left. 
\mathbb {E} _ {z \sim \omega} \right. \left\| D _ {\theta_ {1}} f _ {\theta_ {1}} (z) - D _ {\theta_ {2}} f _ {\theta_ {2}} (z) \right\| _ {\mathrm {o p}} \leq B \left\| \theta_ {1} - \theta_ {2} \right\|. \tag {G2'} \\ \end{array}
$$

Here, we wrote $\sup_{x\sim \mu}f(x)$ for the essential supremum of $f$ w.r.t. $\mu$ . From (D1') and (G2'), the first term (a) is bounded as

$$
\left\| \mathbb {E} _ {z \sim \omega} \left[ \nabla \Phi_ {\mu_ {\theta_ {1}}} \left(f _ {\theta_ {1}} (z)\right) \cdot \left(D _ {\theta_ {1}} f _ {\theta_ {1}} - D _ {\theta_ {2}} f _ {\theta_ {2}}\right) \right] \right\| \leq \alpha B \| \theta_ {1} - \theta_ {2} \|.
$$

From $(\mathrm{D}3^{\prime})$ , (G1'), and the Cauchy-Schwarz inequality, the second term (b) is bounded as

$$
\begin{array}{l} \left\| \mathbb {E} _ {z \sim \omega} \left[ \left(\nabla \Phi_ {\mu_ {\theta_ {1}}} \left(f _ {\theta_ {1}} (z)\right) - \nabla \Phi_ {\mu_ {\theta_ {2}}} \left(f _ {\theta_ {1}} (z)\right)\right) \cdot D _ {\theta_ {2}} f _ {\theta_ {2}} \right] \right\| \\ \leq A \beta_ {2} W _ {1} \left(\mu_ {\theta_ {1}}, \mu_ {\theta_ {2}}\right) \\ \leq A \beta_ {2} \mathbb {E} _ {z \sim \omega} \| f _ {\theta_ {1}} (z) - f _ {\theta_ {2}} (z) \| \leq A ^ {2} \beta_ {2} \| \theta_ {1} - \theta_ {2} \|, \\ \end{array}
$$

where the second inequality holds from the following optimal transport interpretation of $W_{1}$ :

$$
W_{1}(\mu_{\theta_{1}},\mu_{\theta_{2}}) = \inf_{\substack{\gamma \in \mathcal{P}(X\times X):\\ \text{coupling of}\ \mu_{\theta_{1}}\ \text{and}\ \mu_{\theta_{2}}}}\int \| x - y\| d\gamma \leq \mathbb{E}_{z\sim \omega}\| f_{\theta_{1}}(z) - f_{\theta_{2}}(z)\| . 
+$$ + +Lastly, from $(\mathrm{D}2^{\prime})$ , (G1') and the Cauchy-Schwarz inequality, the term (c) is bounded as + +$$ +\left\| \mathbb {E} _ {z \sim \omega} \big [ \big (\nabla \Phi_ {\mu_ {\theta_ {2}}} (f _ {\theta_ {1}} (z)) - \nabla \Phi_ {\mu_ {\theta_ {2}}} (f _ {\theta_ {2}} (z)) \big) \cdot D _ {\theta_ {2}} f _ {\theta_ {2}} \big ] \right\| \leq A \beta_ {1} \mathbb {E} _ {z \sim \omega} \| f _ {\theta_ {1}} (z) - f _ {\theta_ {2}} (z) \| \leq A ^ {2} \beta_ {1} \| \theta_ {1} - \theta_ {2} \|. +$$ + +Combining the above upper bounds for (a)-(c), we conclude that + +$$ +\left| \left| \nabla_ {\theta_ {1}} J \left(\mu_ {\theta_ {1}}\right) - \nabla_ {\theta_ {2}} J \left(\mu_ {\theta_ {2}}\right) \right| \right| \leq \left(\alpha B + A ^ {2} \left(\beta_ {1} + \beta_ {2}\right)\right) \| \theta_ {1} - \theta_ {2} \|. +$$ + +![](images/79fdd86d2b46aafb6933c195f7b11e0235bb8549a1252729493717727dc55777.jpg) + +# E PROOFS FOR SECTION 3 + +In the finite-dimensional case, Proposition 15 says that taking inf-convolution with a "coercive" regularizer does not change the set of minimizers of the original objective. A similar invariance holds for GAN objectives, which are defined on infinite-dimensional space of signed measures, for the regularizers $R_{1}$ , $R_{2}$ and $R_{3}$ introduced in Sections 4 to 6: + +Proposition 2 (Invariance of minimizers). Let $R_{1}(\xi) \coloneqq \| \xi \|_{\mathrm{KR}}$ , $R_{2}(\xi) \coloneqq \| \xi \|_{\mathcal{S}^*}$ , and $R_{3}(\xi) \coloneqq \frac{1}{2} \| \hat{\xi} \|_{\mathcal{H}}^{2}$ be the three regularizers defined by (8), (12), and (19) respectively. Assume that $J: \mathcal{M}(X) \to \bar{\mathbb{R}}$ has a unique minimizer at $\mu_0$ with $J(\mu_0) = 0$ , and $J(\mu) \geq c \| \hat{\mu} - \hat{\mu}_0 \|_{\mathcal{H}}$ for some $c > 0$ . Then the inf-convolution $\tilde{J} = J \oplus R_1 \oplus R_2 \oplus R_3$ has a unique minimizer at $\mu_0$ with $\tilde{J}(\mu_0) = 0$ . + +Proof. 
In the following, we endow $\mathcal{M}(X)$ with the Gaussian RKHS norm $\| \cdot \|_{\mathcal{H}}$ , and for convenience $\| \cdot \|$ will refer to this norm if not otherwise specified. To show the result, we apply Lemma 1 three times, first on $R_{3}$ and $R_{1}$ , and then again on $R_{3} \oplus R_{1}$ and $R_{2}$ , and then finally on $R_{3} \oplus R_{1} \oplus R_{2}$ and $J(\cdot + \mu_{0})$ . The result follows from noting that $\tilde{J}(\mu) = [R_{3} \oplus R_{1} \oplus R_{2} \oplus J(\cdot + \mu_{0})](\mu - \mu_{0})$ . + +In order to apply the lemma, it suffices to show that $R_{3}$ is uniformly continuous on bounded sets and coercive, and that there exist constants $c_{1}$ and $c_{2}$ such that $R_{1}(\xi) \geq c_{1}\| \xi \|$ and $R_{2}(\xi) \geq c_{2}\| \xi \|$ . $R_{3}$ is uniformly continuous on bounded sets: suppose $\| \xi \| < C$ and $\| \xi^{\prime}\| < C$ ; then $|R_3(\xi) - R_3(\xi ')| = |\frac{1}{2}\| \xi \| ^2 -\frac{1}{2}\| \xi '\| ^2 | = |\frac{1}{2}\langle \xi -\xi ',\xi +\xi '\rangle |\leq C\| \xi -\xi '\|$ by the Cauchy-Schwarz and triangle inequality. $R_{3}$ is also coercive, as $R_{3}(\xi) = \frac{1}{2}\| \xi \|^{2}\to \infty$ when $\| \xi \| \rightarrow \infty$ . $R_{1}$ satisfies $R_{1}(\xi)\geq c_{1}\| \xi \|$ by Lemma 7. $R_{2}$ satisfies $R_{2}(\xi)\geq c_{2}\| \xi \|$ by Lemma 5. + +Recall that a function $F$ is said to be coercive if $F(\xi)\to \infty$ when $\| \xi \| \to \infty$ . + +Lemma 1. Suppose $F: \mathcal{M}(X) \to \mathbb{R}$ has a unique minimizer at 0 with $F(0) = 0$ , and is uniformly continuous on bounded sets and coercive. Suppose $G: \mathcal{M}(X) \to \bar{\mathbb{R}}$ has a unique minimizer at 0 with $G(0) = 0$ , and $G(\cdot) \geq c \| \cdot \|$ for some $c > 0$ . Then the inf-convolution $F \oplus G$ has a unique minimizer at 0 with $(F \oplus G)(0) = 0$ , and is uniformly continuous on bounded sets, coercive, and real-valued. + +Proof. 
From Theorem 2.3 of Strömberg (1996), $\inf F \oplus G = 0$ , and $0 \in \arg \min F \oplus G$ . To show that $0$ is the unique minimizer, let $\xi \in \arg \min F \oplus G$ . From the definition of inf-convolution, there exists a sequence $(\xi_n)$ in $\mathcal{M}(X)$ that satisfies

$$
\lim _ {n \rightarrow \infty} F (\xi_ {n}) + G (\xi - \xi_ {n}) = (F \oplus G) (\xi) = 0.
$$

Since $F$ and $G$ are non-negative, $F(\xi_n) \to 0$ and $G(\xi - \xi_n) \to 0$ . By our assumption that $G(\cdot) \geq c \| \cdot \|$ , the latter limit implies that $\| \xi - \xi_n \| \to 0$ , which implies that $F(\xi_n) \to F(\xi)$ by continuity of $F$ . Comparing our two expressions for the limit of $F(\xi_n)$ , we find that $F(\xi) = 0$ , which implies that $\xi = 0$ , since $F$ has a unique minimizer at 0. Hence $\arg \min F \oplus G = \{0\}$ .

From Theorem 2.10 of Strömberg (1996), $F \oplus G$ is uniformly continuous on bounded sets and real-valued. To show that $F \oplus G$ is coercive, we show the equivalent condition that its sublevel sets are bounded (see Proposition 11.11 of Bauschke & Combettes (2011)). That is, we show that for all $a > 0$ , there exists a constant $b$ such that if $(F \oplus G)(\xi) \leq a$ , then $\| \xi \| \leq b$ . Let $a > 0$ , and let $\xi \in \mathcal{M}(X)$ be such that $(F \oplus G)(\xi) \leq a$ . Let $\epsilon > 0$ ; from the definition of inf-convolution, there exists a $\tilde{\xi} \in \mathcal{M}(X)$ such that

$$
F (\tilde {\xi}) + G (\xi - \tilde {\xi}) \leq (F \oplus G) (\xi) + \epsilon \leq a + \epsilon .
$$

Because $F$ and $G$ are non-negative, we have that

$$
F (\tilde {\xi}) \leq a + \epsilon , \quad G (\xi - \tilde {\xi}) \leq a + \epsilon .
$$

Using the fact that $F$ is coercive and hence its sublevel sets are bounded, let $b'$ be such that if $F(\nu) \leq a + \epsilon$ , then $\| \nu \| \leq b'$ . 
By the triangle inequality and our assumption that $G(\cdot) \geq c \| \cdot \|$ , + +$$ +\| \xi \| \leq \| \tilde {\xi} \| + \| \xi - \tilde {\xi} \| \leq b ^ {\prime} + \frac {G (\xi - \tilde {\xi})}{c} \leq b ^ {\prime} + \frac {a + \epsilon}{c}. +$$ + +Because this constant is independent of $\xi$ , this shows that $F \oplus G$ is coercive. + +# F PROOFS FOR SECTION 4 + +Proposition 3. (D1) holds if and only if $J$ is $\alpha$ -Lipschitz w.r.t. the Wasserstein-1 distance. + +Proof. First, we check the forward implication: + +$$ +\left| J (\mu) - J (\nu) \right| \leq \alpha W _ {1} (\mu , \nu) \quad \Rightarrow \quad \sup _ {x \sim \mu} \left\| \nabla_ {x} \Phi_ {\mu} (x) \right\| _ {2} \leq \alpha . +$$ + +From the duality between $L^{\infty}$ - and $L^1$ -norms, we have + +$$ +\sup _ {x \sim \mu} \| \nabla_ {x} \Phi_ {\mu} (x) \| _ {2} = \| \nabla_ {x} \Phi_ {\mu} (\cdot) \| _ {L ^ {\infty} (\mu ; \mathbb {R} ^ {d})} = \sup _ {\| v \| _ {L ^ {1} (\mu ; \mathbb {R} ^ {d})} = 1} \int \nabla \Phi_ {\mu} (x) \cdot v (x) d \mu (x). \tag {26} +$$ + +Below, we will abbreviate $\| \cdot \|_{L^1 (\mu ;\mathbb{R}^d)}$ as $\| \cdot \|_{L^1}$ . We utilize the fact that the optimal discriminator $\Phi_{\mu}$ is the functional gradient of $J$ in the sense of Definition 3. For any $t\geq 0$ , let $\mu_t = (\mathrm{id} + tv)_{\#}\mu$ be the law of $x + tv(x)$ with $x\sim \mu$ . Then, $\mu_t$ converges weakly to $\mu = \mu_0$ as $t\to 0$ since $W_{1}(\mu_{t},\mu)\leq t\| v\|_{L^{1}(\mu)}\to 0$ . Applying Proposition 17 to $J$ and $\mu_t$ , we have + +$$ +\left. \frac {d}{d t} J (\mu_ {t}) \right| _ {t = 0} = \left. \frac {d}{d t} \int \Phi_ {\mu} (x + t v (x)) d \mu (x) \right| _ {t = 0} = \int \nabla \Phi_ {\mu} (x) \cdot v (x) d \mu . +$$ + +In particular, the right-hand side of (26) is bounded from above as + +$$ +\begin{array}{l} \sup _ {\| v \| _ {L ^ {1} (\mu)} = 1} \int \nabla \Phi_ {\mu} (x) \cdot v (x) d \mu (x) \\ = \sup _ {\| v \| _ {L ^ {1} (\mu)} = 1} \left. 
\frac {d}{d t} J \left(\mu_ {t}\right) \right| _ {t = 0} \\ = \sup _ {\| v \| _ {L ^ {1} (\mu)} = 1} \lim _ {t \rightarrow 0} \frac {J (\mu_ {t}) - J (\mu)}{t} \\ \leq \sup _ {\| v \| _ {L ^ {1} (\mu)} = 1} \lim _ {t \rightarrow 0} \frac {\alpha W _ {1} (\mu_ {t} , \mu)}{t} \\ \leq \sup _ {\| v \| _ {L ^ {1} (\mu)} = 1} \lim _ {t \rightarrow 0} \frac {\alpha t \| v \| _ {L ^ {1} (\mu)}}{t} \\ = \alpha , \\ \end{array}
$$

which proves the first half of the statement.

For the converse, we borrow some techniques from optimal transport theory. Let $p > 1$ , and let $W_{p}(\mu ,\nu)$ denote the Wasserstein- $p$ distance between $\mu$ and $\nu$ . Suppose that $\mu_t:[0,1]\to \mathcal{P}(X)$ is a $\mathcal{P}_p(X)$ -absolutely continuous curve, that is, every $\mu_t$ has a finite $p$ -moment and there exists $v\in L^{1}([0,1])$ such that $W_{p}(\mu_{s},\mu_{t})\leq \int_{s}^{t}v(r)dr$ holds for any $0\leq s\leq t\leq 1$ . Then, the limit $|\mu^{\prime}|_{W_p}(t)\coloneqq \lim_{h\to 0}|h|^{-1}W_p(\mu_{t + h},\mu_t)$ exists for almost all $t\in [0,1]$ (see Ambrosio et al. (2008), Theorem 1.1.2). This limit $|\mu^{\prime}|_{W_p}$ is called the metric derivative of $\mu_t$ .

Lemma 2 (Ambrosio et al. (2008), Theorem 8.3.1). Let $\mu_t : [0,1] \to \mathcal{P}(X)$ be a $\mathcal{P}_p(X)$ -absolutely continuous curve, and let $|\mu'|_{W_p} \in L^1([0,1])$ be its metric derivative. Then, there exists a vector field $v : (x,t) \mapsto v_t(x) \in \mathbb{R}^d$ such that

$$
v _ {t} \in L ^ {p} \left(\mu_ {t}; X\right), \quad \| v _ {t} \| \leq | \mu^ {\prime} | _ {W _ {p}} (t)
$$

for almost all $t\in [0,1]$ , and

$$
\int_ {0} ^ {1} \frac {d}{d t} \varphi (x, t) \mu_ {t} (d x) d t = - \int_ {0} ^ {1} \langle v _ {t} (x), \nabla_ {x} \varphi (x, t) \rangle \mu_ {t} (d x) d t
$$

holds for any cylindrical function $\varphi (x,t)$ .

Given $p > 1$ , let $\mu_t$ be a $\mathcal{P}_p(X)$ -absolutely continuous curve such that $\mu_0 = \mu$ and $\mu_1 = \nu$ . 
Then + +$$ +\begin{array}{l} | J (\mu) - J (\nu) | = \left| \int_ {0} ^ {1} \frac {d}{d t} J (\mu_ {t}) d t \right| \\ = \left| \int_ {0} ^ {1} \int v _ {t} \cdot \nabla \Phi_ {\mu_ {t}} d \mu_ {t} d t \right| \\ \leq \int_ {0} ^ {1} \left| \left| v _ {t} \right| \right| _ {L ^ {p} \left(\mu_ {t}\right)} \left\| \nabla \Phi_ {\mu_ {t}} \right\| _ {L ^ {q} \left(\mu_ {t}\right)} d t \\ \leq \int_ {0} ^ {1} \| v _ {t} \| _ {L ^ {p} (\mu_ {t})} d t \cdot \sup _ {\tilde {\mu}} \| \nabla \Phi_ {\tilde {\mu}} \| _ {L ^ {q} (\mu)} \\ \leq \int_ {0} ^ {1} | \mu_ {t} ^ {\prime} | _ {W _ {p}} d t \cdot \sup _ {\tilde {\mu}} \| \nabla \Phi_ {\tilde {\mu}} \| _ {L ^ {q} (\mu)}, \\ \end{array} +$$ + +where $|\mu_t^{\prime}|$ is the metric derivative. The existence of $v_{t}$ and the bound $\| v_{t}\|_{L^{p}(\mu_{t})}\leq |\mu_{t}^{\prime}|_{W_{p}}$ is due to Lemma 2. Taking the infimum over all possible such curves, we find that + +$$ +| J (\mu) - J (\nu) | \leq W _ {p} (\mu , \nu) \cdot \sup _ {\tilde {\mu}} \| \nabla \Phi_ {\tilde {\mu}} \| _ {L ^ {q} (\mu)}. +$$ + +To conclude, we consider the limit $p \to 1$ . Using the fact that $q \mapsto ||f||_{L^q(\mu)}$ is increasing in $q$ , we have + +$$ +| J (\mu) - J (\nu) | \leq W _ {p} (\mu , \nu) \cdot \sup _ {\tilde {\mu}} \| \nabla \Phi_ {\tilde {\mu}} \| _ {L ^ {\infty} (\mu)} \leq \alpha W _ {p} (\mu , \nu). +$$ + +Then + +$$ +\begin{array}{l} |J(\mu) - J(\nu)|\leq \inf_{p > 1}\alpha W_{p}(\mu ,\nu) \\ = \alpha \inf _ {\pi \in \Pi (\mu , \nu)} \inf _ {p > 1} | | x - y | | _ {L ^ {p} (\pi)} \\ = \alpha \inf _ {\pi \in \Pi (\mu , \nu)} | | x - y | | _ {L ^ {1} (\pi)} \\ = \alpha W _ {1} (\mu , \nu). \\ \end{array} +$$ + +![](images/ba3ab0d9530abc69010f8e8bb4bf2fd8643df20cc612ce78f6459082c18885ea.jpg) + +Proposition 4. The minimax and non-saturating GAN losses do not satisfy (D1) for some $\mu_0$ . + +Proposition 5. The Wasserstein GAN loss satisfies (D1) with $\alpha = 1$ for any $\mu_0$ . 
Proof for Proposition 4 and Proposition 5. The counterexample for Proposition 4 is a slight simplification of Example 1 of Arjovsky et al. (2017). Let $\delta_x$ denote the distribution that outputs $x \in \mathbb{R}$ with probability 1. Then, we have

$$
D_{\mathrm{JS}}(\delta_{x}\mid \mid \delta_{0}) = \left\{ \begin{array}{ll}\log 2 & x\neq 0\\ 0 & x = 0, \end{array} \right.\quad D_{\mathrm{KL}}(\frac{1}{2}\delta_{x} + \frac{1}{2}\delta_{0}\mid \mid \delta_{0}) = \left\{ \begin{array}{ll}\infty & x\neq 0\\ 0 & x = 0, \end{array} \right.\quad \text{and}\quad W_{1}(\delta_{x},\delta_{0}) = |x|.
$$

Therefore, for $J(\mu) = D_{\mathrm{JS}}(\mu ||\delta_0)$ , the inequality $|J(\delta_x) - J(\delta_0)| \leq \alpha W_1(\delta_x,\delta_0)$ cannot hold for sufficiently small $x \neq 0$ , because the left-hand side equals $\log 2$ while the right-hand side equals $\alpha |x|$ . For $J(\mu) = D_{\mathrm{KL}}(\frac{1}{2}\mu +\frac{1}{2}\delta_0||\delta_0)$ , the inequality will not hold for any $x \neq 0$ .

Next, we verify Proposition 5. For $J(\mu) = W_{1}(\mu, \mu_{0})$ , we have $J(\mu) - J(\nu) = W_{1}(\mu, \mu_{0}) - W_{1}(\nu, \mu_{0}) \leq W_{1}(\mu, \nu)$ by the triangle inequality (see e.g., Section 6 in Villani (2009)). Reversing the roles of $\mu$ and $\nu$ , we can conclude that $|J(\mu) - J(\nu)| \leq W_{1}(\mu, \nu)$ . The result follows from Proposition 3.

Proposition 6 (Pasch-Hausdorff). Let $J: \mathcal{M}(X) \to \bar{\mathbb{R}}$ be a function, and define $\tilde{J} \coloneqq J \oplus R_1$ . Then $\tilde{J}$ is $\alpha$ -Lipschitz w.r.t. the distance induced by the KR norm, and hence the Wasserstein-1 distance when restricted to $\mathcal{P}(X)$ .

Proof. 
Observe that by the triangle inequality, + +$$ +\begin{array}{l} \tilde {J} (\mu) - \tilde {J} (\nu) = \sup _ {\tilde {\nu}} \inf _ {\tilde {\mu}} J (\tilde {\mu}) + \alpha W _ {1} (\mu , \tilde {\mu}) - J (\tilde {\nu}) - \alpha W _ {1} (\nu , \tilde {\nu}) \\ \leq \sup _ {\tilde {\nu}} \alpha W _ {1} (\mu , \tilde {\nu}) - \alpha W _ {1} (\nu , \tilde {\nu}) \\ \leq \sup _ {\tilde {\nu}} \alpha W _ {1} (\mu , \nu) \\ = \alpha W _ {1} (\mu , \nu). \\ \end{array} +$$ + +Interchanging the roles of $\mu$ and $\nu$ completes the proof. + +![](images/1019c39cfc89f30170cd5ac447112108f85cd49b85be485ca972bf4ab6b4e5c0.jpg) + +Lemma 3. The convex conjugate of $\mu \mapsto \alpha ||\mu ||_{\mathrm{KR}}$ is given as $\varphi \mapsto \chi \{\| \varphi \|_{\mathrm{Lip}}\leq \alpha \}$ + +Proof. The convex conjugate is + +$$ +\begin{array}{l} \sup _ {\mu} \left[ \int f d \mu - \alpha \| \mu \| _ {\mathrm {K R}} \right] = \sup _ {\mu} \left[ \int f d \mu - \alpha \sup _ {| | g | | _ {\mathrm {L i p}} \leq 1} \int g d \mu \right] \\ = \sup _ {\mu} \left[ \int f d \mu - \sup _ {| | g | | _ {\mathrm {L i p}} \leq \alpha} \int g d \mu \right] \\ = \sup _ {\mu} \left[ \inf _ {| | g | | _ {L i p} \leq \alpha} \int f d \mu - \int g d \mu \right] \\ = \inf _ {\left\| g \right\| _ {\mathrm {L i p}} \leq \alpha} \sup _ {\mu} \left[ \int f d \mu - \int g d \mu \right] \\ = \inf _ {| | g | | _ {\mathrm {L i p}} \leq \alpha} \chi \{f = g \} \\ = \chi \{\| f \| _ {\mathrm {L i p}} \leq \alpha \}, \\ \end{array} +$$ + +where Sion's minimax theorem ensures that we can swap the inf and the sup. + +![](images/fed12330783515c72c2985ee97b53580a69c4d5ac7737e95561c077320046185.jpg) + +# G PROOFS FOR SECTION 5 + +Proposition 7. The Wasserstein, minimax, and non-saturating GAN losses do not satisfy (D2) for some $\mu_0$ . + +Proof. First, we prove the statement for the Wasserstein GAN. 
Consider $J(\mu) = W_{1}(\mu, \delta_{0})$ evaluated at $\mu = \frac{1}{2}\delta_{-1} + \frac{1}{2}\delta_{1}$ , a mixture of spikes at $\pm 1$ . The optimal discriminator of $J$ at this mixture is the Kantorovich potential that transports $\mu$ to $\delta_{0}$ , which is $\Phi_{\mu}(x) = |x|$ . The gradient of this Kantorovich potential is discontinuous at 0, and hence not Lipschitz.

For the minimax GAN $J(\mu) = D_{\mathrm{JS}}(\mu ||\mu_0)$ , let $\mu$ and $\mu_0$ be probability distributions on $\mathbb{R}$ whose densities are $p(x) \propto \exp(-|x|/2)$ and $p_0(x) \propto \exp(-|x|)$ , respectively (i.e., Laplace distributions). Then, the optimal discriminator at $\mu$ is given as $\Phi_\mu(x) = \frac{1}{2}\log\frac{p(x)}{p(x) + p_0(x)} = -\frac{1}{2}\log(1 + 2\exp(-|x|/2))$ . By elementary calculations, we can see that this function is not differentiable at $x = 0$ , which implies that the minimax GAN does not satisfy (D2). For the non-saturating GAN, we obtain a similar result by swapping the roles of $\mu$ and $\mu_0$ .

Proposition 8. Let $J: \mathcal{M}(X) \to \bar{\mathbb{R}}$ be a convex, proper, lower semicontinuous function, and define $\tilde{J} := J \oplus R_2$ . Then the optimal discriminator for $\tilde{J}$ is $\beta_{1}$ -smooth.

Proof. $\tilde{J}$ is convex, proper, and lower semicontinuous. Hence, writing $R_{2} = \beta_{1}\| \cdot \|_{\mathcal{S}^{*}}$ and using Lemma 4,

$$
\tilde {J} (\mu) = (J \oplus \beta_ {1} \| \cdot \| _ {\mathcal {S} ^ {*}}) (\mu) = \sup _ {\varphi} \int \varphi d \mu - J ^ {\star} (\varphi) - \chi \left\{\frac {1}{\beta_ {1}} \varphi \in \mathcal {S} \right\}.
$$

Then by the envelope theorem, $\frac{\delta \tilde{J}}{\delta \mu}$ is the $\varphi$ that maximizes the right-hand side, and hence $\frac{1}{\beta_{1}} \frac{\delta \tilde{J}}{\delta \mu} \in \mathcal{S}$ , i.e., the optimal discriminator for $\tilde{J}$ is $\beta_{1}$ -smooth.

Lemma 4. The convex conjugate of $\mu \mapsto \beta_{1}\| \mu \|_{\mathcal{S}^{*}}$ is $\varphi \mapsto \chi \{\varphi \in \beta_1\mathcal{S}\}$ .

Proof. The proof of Lemma 4 is nearly identical to the proof of Lemma 3. 
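As a quick sanity check of the conjugate formulas in Lemmas 3 and 4, their finite-dimensional counterpart can be verified numerically: the convex conjugate of $\alpha \| \cdot \|_1$ is the indicator of the dual-norm ( $\ell_\infty$ ) ball of radius $\alpha$ , mirroring the Lipschitz-ball indicator. The following Python sketch is purely illustrative (the function name and grid parameters are ours, not from the paper); it exploits the fact that the objective is separable across coordinates.

```python
import numpy as np

def l1_conjugate(y, alpha, t_max=1e6, n=4001):
    """Numerically evaluate sup_x <y, x> - alpha * ||x||_1.

    The objective is separable across coordinates, so the supremum is
    the sum of one-dimensional suprema, searched on a symmetric grid
    (which contains t = 0 exactly).
    """
    t = np.linspace(-t_max, t_max, n)
    return sum(np.max(y_i * t - alpha * np.abs(t)) for y_i in y)

# Inside the dual-norm ball (|y_i| <= alpha): the conjugate vanishes.
inside = l1_conjugate(np.array([0.5, -0.3]), alpha=1.0)

# Outside the ball: the supremum grows with the grid radius -- the
# finite-dimensional stand-in for the indicator's value of +infinity.
outside = l1_conjugate(np.array([2.0, 0.0]), alpha=1.0)
```

Here the $\ell_\infty$ -ball plays the role of $\{\| \varphi \|_{\mathrm{Lip}} \leq \alpha \}$ : inside it the supremum is attained at $x = 0$ with value $0$ ; outside it, the supremum is unbounded.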
+ +![](images/fd6816f3a653e32ee2967eb3547555f669ed2473418fdcb04a4bd1a66d06a60c.jpg) + +Proposition 9. Let $f: \mathbb{R}^d \to \mathbb{R}$ be a neural network consisting of $k$ layers whose linear transformations have spectral norm 1 and whose activation functions are 1-Lipschitz and 1-smooth. Then $f$ is $k$ -smooth. + +Proof. In this section, let $\ell(f)$ and $s(f)$ denote the Lipschitz constant of $f$ and $\nabla f$ respectively. This inequality holds + +$$ +s (f \circ g) \leq s (f) \ell (g) ^ {2} + \ell (f) s (g), \tag {27} +$$ + +since + +$$ +\begin{array}{l} \left| \left| d (f \circ g) (x) - d (f \circ g) (y) \right| \right| = \left| \left| d f (g (x)) d g (x) - d f (g (y)) d g (y) \right| \right| \\ = \left\| d f (g (x)) d g (x) - d f (g (y)) d g (x) + d f (g (y)) d g (x) - d f (g (y)) d g (y) \right\| \\ \leq | | d f (g (x)) - d f (g (y)) | | | | d g (x) | | + | | d f (g (y)) | | | | d g (x) - d g (y) | | \\ \leq s (f) | | g (x) - g (y) | | \ell (g) + \ell (f) s (g) | | x - y | | \\ \leq \left(s (f) \ell (g) ^ {2} + \ell (f) s (g)\right) | | x - y | |. \\ \end{array} +$$ + +Let $\sigma$ be an elementwise activation function with $\ell(\sigma) = s(\sigma) = 1$ , and let $A$ be a linear layer with spectral norm 1. Then $\ell(A) = 1$ and $s(A) = 0$ , so + +$$ +\ell (\sigma \circ A) \leq \ell (\sigma) \ell (A) = 1 +$$ + +$$ +s (\sigma \circ A) \leq s (\sigma) \ell (A) ^ {2} + \ell (\sigma) s (A) = 1. 
$$

We note that we can use the same value of $s(\sigma)$ whether we consider $\sigma : \mathbb{R} \to \mathbb{R}$ or its elementwise extension $\sigma : \mathbb{R}^d \to \mathbb{R}^d$ , since for an elementwise $\sigma$ , we have

$$
\begin{array}{l} \left| \left| d \sigma (x) - d \sigma (y) \right| \right| _ {2} = \left| \left| \operatorname {diag} \sigma^ {\prime} (x) - \operatorname {diag} \sigma^ {\prime} (y) \right| \right| _ {2} \\ = \left\| \operatorname {diag} \left(\sigma^ {\prime} (x) - \sigma^ {\prime} (y)\right) \right\| _ {2} \\ = \max _ {i} \left| \sigma^ {\prime} \left(x _ {i}\right) - \sigma^ {\prime} \left(y _ {i}\right) \right| \\ \leq \max _ {i} s (\sigma) | x _ {i} - y _ {i} | \\ = s (\sigma) | | x - y | | _ {\infty} \\ \leq s (\sigma) \| x - y \| _ {2}. \\ \end{array}
$$

We apply the inequality (27) recursively for all $k$ layers to obtain that the entire network is $k$ -smooth.

Lemma 5. Let $\mathcal{H}$ be an RKHS with the Gaussian kernel. Then $||\hat{\mu} ||_{\mathcal{H}}\leq \frac{\sqrt{2}}{\sigma^2} ||\mu ||_{\mathcal{S}^*}$ .

Proof. 
It follows from $\frac{\partial f(x)}{\partial x_i} = \langle f, \frac{\partial K(x,\cdot)}{\partial x_i} \rangle$ that, for $f \in \mathcal{H}$ , + +$$ +\begin{array}{l} \| \nabla f (x) - \nabla f (y) \| ^ {2} = \sum_ {i = 1} ^ {d} \left| \frac {\partial f (x)}{\partial x _ {i}} - \frac {\partial f (y)}{\partial y _ {i}} \right| ^ {2} \\ \leq \sum_ {i = 1} ^ {d} \| f \| _ {\mathcal {H}} ^ {2} \left\| \frac {\partial K (x , \cdot)}{\partial x _ {i}} - \frac {\partial K (y , \cdot)}{\partial y _ {i}} \right\| _ {\mathcal {H}} ^ {2} \\ = \| f \| _ {\mathcal {H}} ^ {2} \sum_ {i = 1} ^ {d} \left\{\frac {\partial^ {2} K (x , \tilde {x})}{\partial x _ {i} \partial \tilde {x} _ {i}} - 2 \frac {\partial^ {2} K (x , y)}{\partial x _ {i} \partial y _ {i}} + \frac {\partial^ {2} K (y , \tilde {y})}{\partial y _ {i} \partial \tilde {y} _ {i}} \right\} \\ \leq \| f \| _ {\mathcal {H}} ^ {2} \cdot (\Gamma (x, y) + \Gamma (y, x)), \\ \end{array} +$$ + +where $\tilde{x}$ and $\tilde{y}$ denote copies of $x$ and $y$ , and + +$$ +\Gamma (x, y) = \sum_ {i = 1} ^ {d} \left| \frac {\partial^ {2} K (x , \tilde {x})}{\partial x _ {i} \partial \tilde {x} _ {i}} - \frac {\partial^ {2} K (x , y)}{\partial x _ {i} \partial \tilde {x} _ {i}} \right|. +$$ + +From (30), we find that for a Gaussian kernel, + +$$ +\Gamma (x, y) = \frac {1}{\sigma^ {4}} \exp \Bigl (- \frac {\| x - y \| ^ {2}}{2 \sigma^ {2}} \Bigr) \| x - y \| ^ {2}, +$$ + +and thus, + +$$ +\| \nabla f (x) - \nabla f (y) \| \leq \frac {\sqrt {2}}{\sigma^ {2}} \| f \| _ {\mathcal {H}} \| x - y \|. +$$ + +This implies that + +$$ +\| \hat {\mu} \| _ {\mathcal {H}} = \sup _ {f \in \mathcal {H}, \| f \| _ {\mathcal {H}} \leq 1} \int f d \mu \leq \sup _ {\| f \| _ {\mathcal {S}} \leq \frac {\sqrt {2}}{\sigma^ {2}}} \int f d \mu = \frac {\sqrt {2}}{\sigma^ {2}} \| \mu \| _ {\mathcal {S} ^ {*}}. +$$ + +![](images/754e31944d49127c31397286e268a7ba967c527bd571835495a405ada7233b78.jpg) + +# H PROOFS FOR SECTION 6 + +Lemma 6. 
The convex conjugate of $\mu \mapsto \frac{\lambda}{2} ||\mu||_{\mathrm{KR}}^2$ is $f\mapsto \frac{1}{2\lambda} ||f||_{\mathrm{Lip}}^2$ + +Proof. This proof is generalized from the finite-dimensional case (Boyd & Vandenberghe, 2004). We can bound the convex conjugate from above by + +$$ +\begin{array}{l} \sup _ {\mu} \left[ \int f d \mu - \frac {\lambda}{2} | | \mu | | _ {\mathrm {K R}} ^ {2} \right] \leq \sup _ {\mu} \left[ | | f | | _ {\mathrm {L i p}} | | \mu | | _ {\mathrm {K R}} - \frac {\lambda}{2} | | \mu | | _ {\mathrm {K R}} ^ {2} \right] \\ \leq \sup _ {z \in \mathbb {R}} \left[ | | f | | _ {\mathrm {L i p}} z - \frac {\lambda}{2} z ^ {2} \right] \\ = \left\| f \right\| _ {\mathrm {L i p}} \cdot \frac {\left\| f \right\| _ {\mathrm {L i p}}}{\lambda} - \frac {\lambda}{2} \cdot \frac {\left\| f \right\| _ {\mathrm {L i p}} ^ {2}}{\lambda^ {2}} \\ = \frac {1}{2 \lambda} | | f | | _ {\mathrm {L i p}} ^ {2}. \\ \end{array} +$$ + +If $f$ is constant, then we may choose $\mu = 0$ to see that + +$$ +\sup _ {\mu} \left[ \int f d \mu - \frac {\lambda}{2} | | \mu | | _ {\mathrm {K R}} ^ {2} \right] \geq 0 = \frac {1}{2 \lambda} | | f | | _ {\mathrm {L i p}} ^ {2}, +$$ + +and we are done. + +Otherwise, for $f(x) \neq f(y)$ , let $\mu_{x,y}$ be the signed measure given by + +$$ +\mu_ {x, y} = \frac {| | f | | _ {\mathrm {L i p}} ^ {2}}{\lambda (f (x) - f (y))} (\delta_ {x} - \delta_ {y}). 
$$

Note that

$$
\begin{array}{l} \| \mu_ {x, y} \| _ {\mathrm {K R}} = \sup _ {\| g \| _ {\mathrm {L i p}} \leq 1} \int g d \mu_ {x, y} \\ = \frac {\left\| f \right\| _ {\mathrm {L i p}} ^ {2}}{\lambda (f (x) - f (y))} \sup _ {\left\| g \right\| _ {\mathrm {L i p}} \leq 1} \Big (g (x) - g (y) \Big) \\ = \frac {\left\| f \right\| _ {\mathrm {L i p}} ^ {2}}{\lambda (f (x) - f (y))} \cdot d (x, y) \\ \end{array}
$$

and

$$
\int f d \mu_ {x, y} = \frac {| | f | | _ {\mathrm {L i p}} ^ {2}}{\lambda (f (x) - f (y))} \Big (f (x) - f (y) \Big) = \frac {1}{\lambda} | | f | | _ {\mathrm {L i p}} ^ {2}.
$$

Then

$$
\begin{array}{l} \sup _ {\mu} \left[ \int f d \mu - \frac {\lambda}{2} | | \mu | | _ {\mathrm {K R}} ^ {2} \right] \geq \sup _ {f (x) \neq f (y)} \left[ \int f d \mu_ {x, y} - \frac {\lambda}{2} | | \mu_ {x, y} | | _ {\mathrm {K R}} ^ {2} \right] \\ = \sup _ {f (x) \neq f (y)} \left[ \frac {1}{\lambda} \| f \| _ {\mathrm {L i p}} ^ {2} - \frac {\lambda}{2} \left(\frac {1}{\lambda} \| f \| _ {\mathrm {L i p}} ^ {2} \frac {d (x , y)}{f (x) - f (y)}\right) ^ {2} \right] \\ = \frac {1}{\lambda} | | f | | _ {\mathrm {L i p}} ^ {2} - \frac {\lambda}{2} \left(\frac {1}{\lambda} | | f | | _ {\mathrm {L i p}} ^ {2} \frac {1}{| | f | | _ {\mathrm {L i p}}}\right) ^ {2} \\ = \frac {1}{2 \lambda} | | f | | _ {\mathrm {L i p}} ^ {2}. \\ \end{array}
$$

![](images/87a215376d4edfd8860609b8952c779e2752f019859f4058b1562d4c1cda67f8.jpg)

Proposition 10. Let $J:\mathcal{M}(X)\to \mathbb{R}$ be a convex function. Then $J$ satisfies (D3) if and only if $\mathfrak{D}_J(\nu ,\mu)\leq \frac{\beta_2}{2}\| \mu -\nu \|_{\mathrm{KR}}^2$ for all $\mu ,\nu \in \mathcal{M}(X)$ .

Proof. Recall the definition of the Bregman divergence:

$$
\mathfrak {D} _ {J} (\nu , \mu) := J (\nu) - J (\mu) - \int \Phi_ {\mu} (x) d (\nu - \mu). 
$$

Proposition 10 claims that the optimal discriminator $\Phi_{\mu}$ is $\beta_{2}$ -smooth as a function of $\mu$ if and only if

$$
\forall \mu , \nu \in \mathcal {M} (X), \quad J (\nu) - J (\mu) - \int \Phi_ {\mu} (x) d (\nu - \mu) \leq \frac {\beta_ {2}}{2} \| \mu - \nu \| _ {\mathrm {K R}} ^ {2}. \tag {28}
$$

This proof is generalized from the finite-dimensional case (Zhou, 2018; Sidford, 2017). Suppose that $\| \nabla \Phi_{\mu}(x) - \nabla \Phi_{\nu}(x)\| _2\leq \beta_2\| \mu -\nu \|_{\mathrm{KR}}$ holds for all $x$ . For $0\leq t\leq 1$ , let $\mu_t$ be the mixture $\mu_t = (1 - t)\mu +t\nu$ , so that

$$
\mu_ {0} = \mu , \quad \mu_ {1} = \nu , \quad \text {and} \quad \| \mu_ {t} - \mu_ {s} \| _ {\mathrm {K R}} = | t - s | \cdot \| \mu - \nu \| _ {\mathrm {K R}}.
$$

Then, we have

$$
\begin{array}{l} | \mathfrak {D} _ {J} (\nu , \mu) | = \left| \int_ {0} ^ {1} \frac {d}{d t} J (\mu_ {t}) d t - \int \Phi_ {\mu} d (\nu - \mu) \right| \\ = \left| \int_ {0} ^ {1} \int \Phi_ {\mu_ {t}} d (\nu - \mu) d t - \int \Phi_ {\mu} d (\nu - \mu) \right| \\ = \left| \int_ {0} ^ {1} \int \left(\Phi_ {\mu_ {t}} - \Phi_ {\mu}\right) d (\nu - \mu) d t \right| \\ \leq \int_ {0} ^ {1} \| \Phi_ {\mu_ {t}} - \Phi_ {\mu} \| _ {\mathrm {L i p}} \| \mu - \nu \| _ {\mathrm {K R}} d t \\ \leq \int_ {0} ^ {1} \beta_ {2} \| \mu_ {t} - \mu \| _ {\mathrm {K R}} \| \mu - \nu \| _ {\mathrm {K R}} d t \\ \leq \int_ {0} ^ {1} t \beta_ {2} \| \mu - \nu \| _ {\mathrm {K R}} ^ {2} d t \\ = \frac {\beta_ {2}}{2} \| \mu - \nu \| _ {\mathrm {K R}} ^ {2}. \\ \end{array}
$$

Since the choice of $\mu$ and $\nu$ is arbitrary, we have proved (28) by assuming (D3).

Next, we move on to prove the converse. Before proceeding, note that the Bregman divergence $\mathfrak{D}_J(\nu ,\mu)$ is always non-negative. 
In fact, for any $\mu ,\nu \in \mathcal{M}(X)$ , we have + +$$ +\int \Phi_ {\mu} d (\nu - \mu) = \underbrace {\int \Phi_ {\mu} d \nu - J ^ {\star} (\Phi_ {\mu})} _ {\leq J (\nu)} - \underbrace {\left[ \int \Phi_ {\mu} d \mu - J ^ {\star} (\Phi_ {\mu}) \right]} _ {= J (\mu)} +$$ + +$$ +\leq J (\nu) - J (\mu), +$$ + +which implies $\mathfrak{D}_J(\nu ,\mu)\geq 0$ + +Choose $\xi \in \mathcal{M}(X)$ arbitrarily. If $\xi = \mu$ , the inequality $\|\nabla \Phi_{\mu} - \nabla \Phi_{\xi}\|_2 \leq \beta_2\|\xi - \mu\|_{\mathrm{KR}}$ is trivial, so we assume $\xi \neq \mu$ below. By the non-negativity of the Bregman divergence, we have + +$$ +\begin{array}{l} 0 \leq \mathfrak {D} _ {J} (\nu , \mu) \\ = J (\nu) - \left[ J (\mu) + \int \Phi_ {\mu} d (\nu - \mu) \right] \\ \leq \left[ J (\xi) + \int \Phi_ {\xi} d (\nu - \xi) + \frac {\beta_ {2}}{2} | | \xi - \nu | | _ {\mathrm {K R}} ^ {2} \right] - \left[ J (\mu) + \int \Phi_ {\mu} d (\nu - \mu) \right] \\ = J (\xi) - J (\mu) - \int \Phi_ {\mu} d (\xi - \mu) + \int \left[ \Phi_ {\xi} - \Phi_ {\mu} \right] d (\nu - \xi) + \frac {\beta_ {2}}{2} | | \xi - \nu | | _ {\mathrm {K R}} ^ {2}. \tag {29} \\ \end{array} +$$ + +Since the above inequality holds for all $\nu \in \mathcal{M}(X)$ , the last expression is still non-negative if we take the infimum over all $\nu$ . 
In particular,

$$
\begin{array}{l} \inf _ {\nu \in \mathcal {M} (X)} \int [ \Phi_ {\xi} - \Phi_ {\mu} ] d (\nu - \xi) + \frac {\beta_ {2}}{2} | | \xi - \nu | | _ {\mathrm {K R}} ^ {2} \\ = - \sup _ {\nu \in \mathcal {M} (X)} \left[ \int \left[ \Phi_ {\mu} - \Phi_ {\xi} \right] d (\nu - \xi) - \frac {\beta_ {2}}{2} | | \xi - \nu | | _ {\mathrm {K R}} ^ {2} \right] \\ = - \sup _ {\zeta \in \mathcal {M} (X)} \left[ \int \left[ \Phi_ {\mu} - \Phi_ {\xi} \right] d \zeta - \frac {\beta_ {2}}{2} | | \zeta | | _ {\mathrm {K R}} ^ {2} \right] \\ = - \frac {1}{2 \beta_ {2}} \left\| \Phi_ {\mu} - \Phi_ {\xi} \right\| _ {\mathrm {L i p}} ^ {2}, \\ \end{array}
$$

using Lemma 6 for the last equality.

Continuing from (29), we have

$$
0 \leq J (\xi) - J (\mu) - \int \Phi_ {\mu} d (\xi - \mu) - \frac {1}{2 \beta_ {2}} \| \Phi_ {\mu} - \Phi_ {\xi} \| _ {\mathrm {L i p}} ^ {2}.
$$

Swapping the roles of $\mu$ and $\xi$ , we obtain a similar inequality. Adding the two inequalities thus obtained, we get

$$
0 \leq \int \left[ \Phi_ {\xi} - \Phi_ {\mu} \right] d (\xi - \mu) - \frac {1}{\beta_ {2}} \| \Phi_ {\mu} - \Phi_ {\xi} \| _ {\mathrm {L i p}} ^ {2}.
$$

Finally, we have

$$
\| \Phi_ {\mu} - \Phi_ {\xi} \| _ {\mathrm {L i p}} ^ {2} \leq \beta_ {2} \int [ \Phi_ {\xi} - \Phi_ {\mu} ] d (\xi - \mu) \leq \beta_ {2} \| \Phi_ {\mu} - \Phi_ {\xi} \| _ {\mathrm {L i p}} \| \xi - \mu \| _ {\mathrm {K R}},
$$

and hence we have the desired result:

$$
\left\| \Phi_ {\mu} - \Phi_ {\xi} \right\| _ {\mathrm {L i p}} \leq \beta_ {2} \| \xi - \mu \| _ {\mathrm {K R}}.
$$

Lemma 7. Let $K(x,y) = \exp \left(-\frac{\|x - y\|^2}{2\sigma^2}\right)$ . Then $\mathrm{MMD}_K^2 (\mu ,\nu)\leq \frac{1}{\sigma^2} ||\mu -\nu ||_{\mathrm{KR}}^2$ .

Proof. 
First, we note that

$$
\begin{array}{l} \mathrm {M M D} _ {K} ^ {2} (\mu , \nu) = \int K (x, y) (\mu - \nu) (d x) (\mu - \nu) (d y) \\ \leq \left[ \sup _ {x \neq x ^ {\prime}} \frac {\int (K (x , y) - K \left(x ^ {\prime} , y\right)) (\mu - \nu) (d y)}{d \left(x , x ^ {\prime}\right)} \right] | | \mu - \nu | | _ {\mathrm {K R}} \\ \leq \left[ \sup _ {x \neq x ^ {\prime}, y \neq y ^ {\prime}} \frac {K (x , y) - K (x , y ^ {\prime}) - K (x ^ {\prime} , y) + K (x ^ {\prime} , y ^ {\prime})}{d (x , x ^ {\prime}) d (y , y ^ {\prime})} \right] | | \mu - \nu | | _ {\mathrm {K R}} ^ {2}. \\ \end{array}
$$

Next, we show that

$$
c = \sup _ {x \neq x ^ {\prime}, y \neq y ^ {\prime}} \frac {K (x , y) - K (x , y ^ {\prime}) - K (x ^ {\prime} , y) + K (x ^ {\prime} , y ^ {\prime})}{d (x , x ^ {\prime}) d (y , y ^ {\prime})} = \frac {1}{\sigma^ {2}}.
$$

Because $K$ is differentiable, it suffices to check that the operator norm of the matrix $\partial_{x_i}\partial_{y_j}K(x,y)$ is bounded. To see this, note that

$$
\begin{array}{l} c = \sup _ {x \neq x ^ {\prime}} \frac {\operatorname {L i p} \left(K (x , \cdot) - K \left(x ^ {\prime} , \cdot\right)\right)}{\left| \left| x - x ^ {\prime} \right| \right|} \\ = \sup _ {x \neq x ^ {\prime}, y} \frac {| | \nabla_ {y} K (x , y) - \nabla_ {y} K \left(x ^ {\prime} , y\right) | | _ {2}}{| | x - x ^ {\prime} | |} \\ \leq \sup _ {x, y} | | \nabla_ {x} \nabla_ {y} K (x, y) | | _ {\mathrm {o p}}, \\ \end{array}
$$

where the last inequality is obtained by applying the vector-valued mean value theorem (Rudin, 1964) to $t\mapsto \nabla_yK((1 - t)x + tx',y)$ , which gives

$$
\begin{array}{l} \left| \left| \nabla_ {y} K (x, y) - \nabla_ {y} K \left(x ^ {\prime}, y\right) \right| \right| _ {2} \leq \left| \left| \partial_ {t} \nabla_ {y} K ((1 - t) x + t x ^ {\prime}, y) \right| \right| _ {2} \\ = \left\| \left(x ^ {\prime} - x\right) \cdot \nabla_ {x} \nabla_ {y} K \left(\left(1 - t\right) x + t x ^ {\prime}, y\right) 
\right\| _ {2} \\ \leq \left\| x ^ {\prime} - x \right\| _ {2} \left\| \nabla_ {x} \nabla_ {y} K ((1 - t) x + t x ^ {\prime}, y) \right\| _ {\mathrm {o p}}. \\ \end{array}
$$

Now

$$
\begin{array}{l} \partial_ {x _ {i}} \partial_ {y _ {j}} K (x, y) = \partial_ {x _ {i}} \partial_ {y _ {j}} \exp \left(- \frac {1}{2 \sigma^ {2}} \sum_ {k} (x _ {k} - y _ {k}) ^ {2}\right) \\ = \partial_ {x _ {i}} \exp \left(- \frac {1}{2 \sigma^ {2}} \sum_ {k} \left(x _ {k} - y _ {k}\right) ^ {2}\right) \cdot \frac {1}{\sigma^ {2}} \left(x _ {j} - y _ {j}\right) \tag {30} \\ = \exp \left(- \frac {1}{2 \sigma^ {2}} \sum_ {k} (x _ {k} - y _ {k}) ^ {2}\right) \left[ \frac {1}{\sigma^ {2}} \delta_ {i j} - \frac {1}{\sigma^ {4}} (x _ {i} - y _ {i}) (x _ {j} - y _ {j}) \right]. \\ \end{array}
$$

This is the symmetric matrix

$$
\frac {1}{\sigma^ {2}} \exp \left(- \frac {\| x - y \| ^ {2}}{2 \sigma^ {2}}\right) \Big [ I - \frac {1}{\sigma^ {2}} (x - y) (x - y) ^ {T} \Big ].
$$

By inspection, we see that $x - y$ is an eigenvector with eigenvalue $\frac{1}{\sigma^2} \exp \left( -\frac{\| x - y \|^2}{2\sigma^2} \right) \left( 1 - \frac{1}{\sigma^2} \| x - y \| ^2 \right)$ , and the other eigenvectors are orthogonal to $x - y$ with eigenvalues $\frac{1}{\sigma^2} \exp \left( -\frac{\| x - y \|^2}{2\sigma^2} \right)$ .

Setting $z = \frac{\|x - y\|^2}{2\sigma^2}$ and taking the absolute value of the eigenvalues, we see the maximum operator norm over all $x, y$ is equal to

$$
\max \left\{\sup _ {z \geq 0} \frac {1}{\sigma^ {2}} e ^ {- z} | 2 z - 1 |, \sup _ {z \geq 0} \frac {1}{\sigma^ {2}} e ^ {- z} \right\} = \frac {1}{\sigma^ {2}}.
$$

Therefore

$$
c \leq \frac {1}{\sigma^ {2}}.
$$

It is not strictly necessary for the proof to show equality. However, to show equality, let $u$ be a unit vector, and set $x' = y' = 0$ , and $x = y = tu$ . 
Then + +$$ +\begin{array}{l} c \geq \sup _ {t \neq 0} \frac {K (t u , t u) - K (t u , 0) - K (0 , t u) + K (0 , 0)}{d (t u , 0) d (t u , 0)} \\ = \sup _ {t \neq 0} \frac {2 - 2 \exp \left(- \frac {t ^ {2}}{2 \sigma^ {2}}\right)}{t ^ {2}} \\ \geq \lim _ {t \rightarrow 0} \frac {2 - 2 \exp (- \frac {t ^ {2}}{2 \sigma^ {2}})}{t ^ {2}} \\ = \lim _ {t \rightarrow 0} \frac {- 2 \exp \left(- \frac {t ^ {2}}{2 \sigma^ {2}}\right) \cdot - \frac {2 t}{2 \sigma^ {2}}}{2 t} \\ = \frac {1}{\sigma^ {2}}, \\ \end{array} +$$ + +where we used l'Hopital's rule to evaluate the limit. + +![](images/1263cddcaa2e37967aacb5b54b4ae4a5d65a72cf35e0c77b254431f6abfaa135.jpg) + +Proposition 11. The minimax and non-saturating GAN losses do not satisfy (D3) for some $\mu_0$ . + +Proof. For the minimax loss, the Bregman divergence is: + +$$ +\begin{array}{l} \mathfrak {D} _ {D _ {\mathrm {J S}} (\cdot | | \mu_ {0})} (\nu , \mu) = \int \left[ \frac {1}{2} \log \frac {\mu_ {0}}{\frac {1}{2} \mu_ {0} + \frac {1}{2} \nu} d \mu_ {0} + \frac {1}{2} \log \frac {\nu}{\frac {1}{2} \mu_ {0} + \frac {1}{2} \nu} d \nu \right] \\ - \int \left[ \frac {1}{2} \log \frac {\mu_ {0}}{\frac {1}{2} \mu_ {0} + \frac {1}{2} \mu} d \mu_ {0} + \frac {1}{2} \log \frac {\mu}{\frac {1}{2} \mu_ {0} + \frac {1}{2} \mu} d \mu \right] \\ - \int \frac {1}{2} \log \frac {\mu}{\mu_ {0} + \mu} d (\nu - \mu) \\ = \frac {1}{2} \int \log {\frac {\mu_ {0} + \mu}{\mu_ {0} + \nu}} d \mu_ {0} + \frac {1}{2} \int \left[ \log {\frac {\nu}{\frac {1}{2} \mu_ {0} + \frac {1}{2} \nu}} - \log {\frac {\mu}{\mu_ {0} + \mu}} \right] d \nu + \frac {1}{2} \log {\frac {1}{2}} \\ = \frac {1}{2} \int \log \frac {\mu_ {0} + \mu}{\mu_ {0} + \nu} d (\mu_ {0} + \nu) + \frac {1}{2} \int \log \frac {\nu}{\mu} d \nu \\ = D _ {\mathrm {K L}} \left(\frac {1}{2} \nu + \frac {1}{2} \mu_ {0} | | \frac {1}{2} \mu + \frac {1}{2} \mu_ {0}\right) + \frac {1}{2} D _ {\mathrm {K L}} (\nu | | \mu), \\ \end{array} +$$ + +and for the non-saturating loss, the Bregman 
divergence is

$$
\begin{array}{l} \mathfrak {D} _ {D _ {\mathrm {K L}} \left(\frac {1}{2} \cdot + \frac {1}{2} \mu_ {0} | | \mu_ {0}\right)} (\nu , \mu) = \int \log \frac {\frac {1}{2} \mu_ {0} + \frac {1}{2} \nu}{\mu_ {0}} d \left(\frac {1}{2} \nu + \frac {1}{2} \mu_ {0}\right) \\ - \int \log \frac {\frac {1}{2} \mu_ {0} + \frac {1}{2} \mu}{\mu_ {0}} d \left(\frac {1}{2} \mu + \frac {1}{2} \mu_ {0}\right) \\ + \int \frac {1}{2} \log \frac {\mu_ {0}}{\mu_ {0} + \mu} d (\nu - \mu) \\ = \frac {1}{2} \int \log \frac {\mu_ {0} + \nu}{\mu_ {0} + \mu} d \mu_ {0} + \frac {1}{2} \int \log \frac {\frac {1}{2} \mu_ {0} + \frac {1}{2} \nu}{\mu_ {0} + \mu} d \nu - \frac {1}{2} \log \frac {1}{2} \\ = D _ {\mathrm {K L}} \left(\frac {1}{2} \nu + \frac {1}{2} \mu_ {0} \, \Big | \Big | \, \frac {1}{2} \mu + \frac {1}{2} \mu_ {0}\right). \\ \end{array}
$$

Choosing $\nu$ to be not absolutely continuous w.r.t. $\frac{1}{2}\mu +\frac{1}{2}\mu_0$ makes the Bregman divergence $\infty$, which is sufficient to show that the Bregman divergence is not bounded by $\|\mu -\nu \|_{\mathrm{KR}}^2$.

Proposition 12. The MMD loss with Gaussian kernel satisfies (D3) with $\beta_{2} = 2\pi$ for all $\mu_0$.

Proof. We work with the Gaussian kernel $K(x,y) = e^{-\frac{\|x - y\|^2}{2\sigma^2}}$ and use Proposition 10:

$$
\begin{array}{l} \mathfrak {D} _ {\frac {1}{2} \mathrm {M M D} ^ {2} (\cdot , \mu_ {0})} (\nu , \mu) = \frac {1}{2} \int K (x, y) (\nu - \mu_ {0}) (d x) (\nu - \mu_ {0}) (d y) \\ - \frac {1}{2} \int K (x, y) (\mu - \mu_ {0}) (d x) (\mu - \mu_ {0}) (d y) \\ - \int \left(\int K (x, y) \mu (d y) - \int K (x, y) \mu_ {0} (d y)\right) (\nu - \mu) (d x) \\ = \frac {1}{2} \int K (x, y) (\nu - \mu) (d x) (\nu - \mu) (d y) \\ = \frac {1}{2} \mathrm {M M D} ^ {2} (\mu , \nu) \\ \leq \frac {1}{2 \sigma^ {2}} \| \mu - \nu \| _ {\mathrm {K R}} ^ {2}, \\ \end{array}
$$

where the last line is from Lemma 7. The result follows with $\sigma = (2\pi)^{-1 / 2}$.
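Both ingredients of this proof are easy to check numerically on discrete measures, where $\mathrm{MMD}^2(p, q) = (p - q)^T K (p - q)$ for the kernel Gram matrix $K$. The sketch below (our own sanity check, not part of the proof) verifies that the Bregman divergence of $\frac{1}{2}\mathrm{MMD}^2(\cdot, \mu_0)$ equals $\frac{1}{2}\mathrm{MMD}^2(\mu, \nu)$ independently of $\mu_0$, and that the mixed-derivative matrix of the Gaussian kernel never exceeds operator norm $1/\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete measures on 8 shared atoms in R^2; MMD^2(p, q) = (p-q)^T K (p-q).
atoms = rng.normal(size=(8, 2))
sigma = (2 * np.pi) ** -0.5                       # the "critical" bandwidth
sq = ((atoms[:, None] - atoms[None]) ** 2).sum(-1)
K = np.exp(-sq / (2 * sigma**2))                  # = exp(-pi ||x - y||^2)

def simplex(n):
    w = rng.random(n)
    return w / w.sum()

nu, mu, mu0 = simplex(8), simplex(8), simplex(8)
F = lambda p: 0.5 * (p - mu0) @ K @ (p - mu0)     # F = (1/2) MMD^2(., mu0)
grad_F = K @ (mu - mu0)                           # first variation of F at mu
bregman = F(nu) - F(mu) - grad_F @ (nu - mu)
assert np.isclose(bregman, 0.5 * (nu - mu) @ K @ (nu - mu))  # mu0 dropped out

# The constant in the last inequality: ||grad_x grad_y K(x, y)||_op <= 1/sigma^2,
# with the bound approached as x -> y (the matrix tends to -I / sigma^2).
def mixed_hessian(x, y):
    diff = x - y
    scale = np.exp(-diff @ diff / (2 * sigma**2)) / sigma**2
    return scale * (np.outer(diff, diff) / sigma**2 - np.eye(len(x)))

norms = [np.linalg.norm(mixed_hessian(rng.normal(size=3), rng.normal(size=3)), 2)
         for _ in range(500)]
assert max(norms) <= 1 / sigma**2 + 1e-9
```

Because $F$ is quadratic in the weight vector, the $\mu_0$-dependence cancels exactly, which is the same cancellation the proof exploits.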
The result actually applies more generally for the MMD loss with a differentiable kernel $K$ that satisfies $\beta_{2} := \sup_{x,y} \| \nabla_{x} \nabla_{y} K(x,y) \|_{2} < \infty$.

Proposition 13 (Moreau-Yosida regularization). Suppose $J: \mathcal{M}(X) \to \bar{\mathbb{R}}$ is convex, and define $\tilde{J} := J \oplus R_3$. Then $\tilde{J}$ is convex, and $\mathfrak{D}_{\tilde{J}}(\nu, \mu) \leq \frac{\beta_2}{2} \|\mu - \nu\|_{\mathrm{KR}}^2$.

Proof. We work with the more general case of $K(x,y) = (2\pi \sigma^2)^{-d / 2}\exp (-\frac{\|x - y\|^2}{2\sigma^2})$, and define

$$
\tilde {J} (\mu) = \inf _ {\tilde {\mu}} J (\tilde {\mu}) + \frac {c L}{2} \mathrm {M M D} _ {K} ^ {2} (\mu , \tilde {\mu})
$$

for $c = \sigma^2 (2\pi \sigma^2)^{d / 2}$.

Let $\mu^{*}$ be the unique minimizer of the infimum, which exists because the objective is strongly convex. By the envelope theorem, we compute that

$$
\begin{array}{l} \left. \frac {d}{d \epsilon} \tilde {J} (\mu + \epsilon \chi) \right| _ {\epsilon = 0} = \left.
\frac {d}{d \epsilon} \frac {c L}{2} | | \mu + \epsilon \chi - \mu^ {*} | | _ {\mathcal {H}} ^ {2} \right| _ {\epsilon = 0} \\ = \frac {d}{d \epsilon} \frac {c L}{2} \left(\left| \left| \mu - \mu^ {*} \right| \right| _ {\mathcal {H}} ^ {2} + 2 \epsilon \langle \mu - \mu^ {*}, \chi \rangle_ {\mathcal {H}} + \epsilon^ {2} \left| \left| \chi \right| \right| _ {\mathcal {H}} ^ {2}\right) \Big | _ {\epsilon = 0} \\ = c L \langle \mu - \mu^ {*}, \chi \rangle_ {\mathcal {H}} \\ = c L \int (\mu - \mu^ {*}) d \chi , \\ \end{array} +$$ + +so $\frac{\delta\tilde{J}}{\delta\mu} = cL(\mu -\mu^{*}).$ + +Then + +$$ +\begin{array}{l} \mathfrak {D} _ {\tilde {J}} (\nu , \mu) = \tilde {J} (\nu) - \tilde {J} (\mu) - \int \frac {\delta \tilde {J}}{\delta \mu} d (\nu - \mu) \\ = \inf _ {\tilde {\nu}} J (\tilde {\nu}) + \frac {c L}{2} | | \nu - \tilde {\nu} | | _ {\mathcal {H}} ^ {2} - \left[ J \left(\mu^ {*}\right) + \frac {c L}{2} | | \mu - \mu^ {*} | | _ {\mathcal {H}} ^ {2} \right] - c L \langle \mu - \mu^ {*}, \nu - \mu \rangle_ {\mathcal {H}} \\ \leq J (\mu^ {*}) + \frac {c L}{2} | | \nu - \mu^ {*} | | _ {\mathcal {H}} ^ {2} - \left[ J (\mu^ {*}) + \frac {c L}{2} | | \mu - \mu^ {*} | | _ {\mathcal {H}} ^ {2} \right] - c L \langle \mu - \mu^ {*}, \nu - \mu \rangle_ {\mathcal {H}} \\ = \frac {c L}{2} | | \mu - \nu | | _ {\mathcal {H}} ^ {2} \\ \leq \frac {c L}{2} \cdot (2 \pi \sigma^ {2}) ^ {- d / 2} \cdot \frac {1}{\sigma^ {2}} | | \mu - \nu | | _ {\mathrm {K R}} ^ {2} \\ = \frac {L}{2} | | \mu - \nu | | _ {\mathrm {K R}} ^ {2}, \\ \end{array} +$$ + +where we used Lemma 7 for the second-to-last line. The proposition follows for $\sigma = (2\pi)^{-1 / 2}$ . + +Regarding this choice of $\sigma$ , it will turn out that in the general case, the dual penalty includes a numerically unfavorable factor of $(2\pi \sigma^2)^{-d/2}$ , dependent on the dimension of the problem. 
In practical applications, such as image generation, $d$ can be quite large, making the accurate computation of $(2\pi \sigma^2)^{-d/2}$ completely infeasible. For numerical stability, we propose choosing the critical parameter $\sigma = (2\pi)^{-1/2}$ , which corresponds to the dimension-free kernel $K(x,y) = e^{-\pi ||x - y||^2}$ . + +Lemma 8. Let $\mathcal{H}$ be an RKHS with a continuous kernel $K$ on a compact domain $X$ . The convex conjugate of $\mu \mapsto \frac{\lambda}{2} ||\mu||_{\mathcal{H}}^2$ is $f \mapsto \frac{1}{2\lambda} ||f||_{\mathcal{H}}^2 + \chi \{f \in \mathcal{H}\}$ . + +Proof. First we show an upper bound for $f \in \mathcal{H}$ : + +$$ +\begin{array}{l} \sup _ {\mu} \left[ \int f d \mu - \frac {\lambda}{2} | | \mu | | _ {\mathcal {H}} ^ {2} \right] = \sup _ {\mu} \left[ \langle f, \mu \rangle_ {\mathcal {H}} - \frac {\lambda}{2} | | \mu | | _ {\mathcal {H}} ^ {2} \right] \\ \leq \sup _ {\mu} \left[ | | f | | _ {\mathcal {H}} | | \mu | | _ {\mathcal {H}} - \frac {\lambda}{2} | | \mu | | _ {\mathcal {H}} ^ {2} \right] \\ \leq \sup _ {z \in \mathbb {R}} \left[ | | f | | _ {\mathcal {H}} z - \frac {\lambda}{2} z ^ {2} \right] \\ = | | f | | _ {\mathcal {H}} \cdot \frac {| | f | | _ {\mathcal {H}}}{\lambda} - \frac {\lambda}{2} \cdot \frac {| | f | | _ {\mathcal {H}} ^ {2}}{\lambda^ {2}} \\ = \frac {1}{2 \lambda} | | f | | _ {\mathcal {H}} ^ {2}. \\ \end{array} +$$ + +To derive a lower bound for general $f \in \mathcal{C}(X)$ , we use the Mercer decomposition of the positive definite kernel $K$ , + +$$ +K (x, y) = \sum_ {i = 1} ^ {\infty} \gamma_ {i} \phi_ {i} (x) \phi_ {i} (y) +$$ + +where $\gamma_{i}\geq 0$ are eigenvalues and $\{\phi_i\}_{i = 1}^{\infty}$ is a complete orthonormal sequence of $L^2 (X;dx)$ . 
It is well-known that the corresponding RKHS is given by

$$
\mathcal {H} = \left\{g \in L ^ {2} (X, d x) \mid \sum_ {i = 1} ^ {\infty} \frac {(g , \phi_ {i}) _ {L ^ {2} (X)} ^ {2}}{\gamma_ {i}} < \infty \right\},
$$

and the norm of $g\in \mathcal{H}$ by

$$
\| g \| _ {\mathcal {H}} ^ {2} = \sum_ {i = 1} ^ {\infty} \frac {(g , \phi_ {i}) _ {L ^ {2} (X)} ^ {2}}{\gamma_ {i}}.
$$

Let $f \in \mathcal{C}(X) \subset L^2(X, dx)$ be an arbitrary function. We restrict the supremum in the conjugate function

$$
\sup _ {\mu} \left[ \int f d \mu - \frac {\lambda}{2} \left\| \hat {\mu} \right\| _ {\mathcal {H}} ^ {2} \right] \tag {31}
$$

to measures $\mu \in \mathcal{M}(X)$ that have a square-integrable Radon-Nikodym derivative with respect to the Lebesgue measure $dx$, which yields a lower bound. We use $\mu(x)$ for the Radon-Nikodym derivative with slight abuse of notation.

Suppose $f(x) = \sum_{j=1}^{\infty} a_j \phi_j(x)$ and $\mu(x) = \sum_{j=1}^{\infty} b_j \phi_j(x)$ are the corresponding expansions. Note that, since the kernel embedding is given by

$$
\hat {\mu} (x) = \int K (x, y) \mu (y) d y = \sum_ {j = 1} ^ {\infty} \gamma_ {j} b _ {j} \phi_ {j} (x),
$$

the above maximization is reduced to

$$
\sup _ {b _ {j}} \sum_ {j = 1} ^ {\infty} a _ {j} b _ {j} - \frac {\lambda}{2} \gamma_ {j} b _ {j} ^ {2}.
$$

This is maximized when $b_{j} = a_{j} / (\lambda \gamma_{j})$, and the maximum value is

$$
\frac {1}{2 \lambda} \sum_ {j = 1} ^ {\infty} \frac {a _ {j} ^ {2}}{\gamma_ {j}}. \tag {32}
$$

If $f \in \mathcal{H}$, this value is finite, and the lower bound (32) of the conjugate equals $\frac{1}{2\lambda} \|f\|_{\mathcal{H}}^2$, which is the same as the upper bound. If $f \notin \mathcal{H}$, the value (32) is infinite, and the conjugate function takes the value $+\infty$.

Proposition 14. Let $\mathcal{H}$ be an RKHS with the Gaussian kernel $K(x,y) = e^{-\pi \|x - y\|^2}$.
Then for $f\in \mathcal{H}$,

$$
\begin{array}{l} \left\| f \right\| _ {\mathcal {H}} ^ {2} = \sum_ {k = 0} ^ {\infty} (4 \pi) ^ {- k} \sum_ {k _ {1} + \dots + k _ {d} = k} \frac {1}{\prod_ {i = 1} ^ {d} k _ {i} !} \left\| \partial_ {x _ {1}} ^ {k _ {1}} \dots \partial_ {x _ {d}} ^ {k _ {d}} f \right\| _ {L ^ {2} (\mathbb {R} ^ {d})} ^ {2} \tag {21} \\ = \left\| f \right\| _ {L ^ {2} \left(\mathbb {R} ^ {d}\right)} ^ {2} + \frac {1}{4 \pi} \left\| \nabla f \right\| _ {L ^ {2} \left(\mathbb {R} ^ {d}\right)} ^ {2} + \frac {1}{16 \pi^ {2}} \left\| \nabla^ {2} f \right\| _ {L ^ {2} \left(\mathbb {R} ^ {d}\right)} ^ {2} + \text {other terms}. \tag {22} \\ \end{array}
$$

Proof. We consider the more general case of $K(x,y) = (2\pi \sigma^2)^{-d / 2}e^{-\|x - y\|^2 / 2\sigma^2}$, where we have

$$
\begin{array}{l} \left\| f \right\| _ {\mathcal {H}} ^ {2} = \sum_ {k = 0} ^ {\infty} \left(\frac {1}{2} \sigma^ {2}\right) ^ {k} \sum_ {k _ {1} + \dots + k _ {d} = k} \frac {1}{\prod_ {i = 1} ^ {d} k _ {i} !} \left\| \partial_ {x _ {1}} ^ {k _ {1}} \dots \partial_ {x _ {d}} ^ {k _ {d}} f \right\| _ {L ^ {2} (\mathbb {R} ^ {d})} ^ {2} \\ = \| f \| _ {L ^ {2} (\mathbb {R} ^ {d})} ^ {2} + \frac {1}{2} \sigma^ {2} \| \nabla f \| _ {L ^ {2} (\mathbb {R} ^ {d})} ^ {2} + \frac {1}{4} \sigma^ {4} \| \nabla^ {2} f \| _ {L ^ {2} (\mathbb {R} ^ {d})} ^ {2} + \text {other terms}. \\ \end{array}
$$

Novak et al. (2018) prove this result for $\sigma = 1$. We sketch the proof for the general case here. We use the Fourier transform convention that

$$
\hat {f} (k) = (2 \pi) ^ {- d / 2} \int_ {\mathbb {R} ^ {d}} f (x) e ^ {- i k \cdot x} d x, \quad f (x) = (2 \pi) ^ {- d / 2} \int_ {\mathbb {R} ^ {d}} \hat {f} (k) e ^ {i k \cdot x} d k.
+$$ + +Consider the inner product + +$$ +\langle f, g \rangle = \int_ {\mathbb {R} ^ {d}} e ^ {\sigma^ {2} | | k | | ^ {2} / 2} \hat {f} (k) \overline {{\hat {g} (k)}} d k, +$$ + +defined for functions $||f|| < \infty$ . Expanding the exponential in Taylor series, this inner product gives the equation for the norm in the proposition. By use of the Fourier inversion formula, it can be shown that $\langle f, K_x \rangle = f(x)$ , where + +$$ +K _ {x} (y) = (2 \pi \sigma^ {2}) ^ {- d / 2} e ^ {- | | x - y | | ^ {2} / 2 \sigma^ {2}}, +$$ + +so this is an RKHS with the Gaussian kernel. + +![](images/9ce30e616693c00d6a9580bc1f169571d27ebccdc421d0731a396e1783f0b118.jpg) \ No newline at end of file diff --git a/smoothnessandstabilityingans/images.zip b/smoothnessandstabilityingans/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..4289edf67dc974f6b674c6550ec8b3a58cb8057d --- /dev/null +++ b/smoothnessandstabilityingans/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f8145745b89777df62b696c44d4609b0c3aabde01a6757878dace76f1a56aa56 +size 1498947 diff --git a/smoothnessandstabilityingans/layout.json b/smoothnessandstabilityingans/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..756a9fbb95b5a44bd72a3ee9651f461bd7bee154 --- /dev/null +++ b/smoothnessandstabilityingans/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ae52375dc23ad3c8ea900e5f0ba1d9ccee12a374dfebd0389d8265b6ba392faa +size 1493839 diff --git a/snodespectraldiscretizationofneuralodesforsystemidentification/2b7903cf-5bd7-4fd5-9792-39c448fa42d3_content_list.json b/snodespectraldiscretizationofneuralodesforsystemidentification/2b7903cf-5bd7-4fd5-9792-39c448fa42d3_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..6886ff881bad0e4ffbee6c44d991a12dda807720 --- /dev/null +++ 
b/snodespectraldiscretizationofneuralodesforsystemidentification/2b7903cf-5bd7-4fd5-9792-39c448fa42d3_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5a98e71184bef897210acc0bb25c9d85f5622f7205d993b7a0336bd290781453 +size 100934 diff --git a/snodespectraldiscretizationofneuralodesforsystemidentification/2b7903cf-5bd7-4fd5-9792-39c448fa42d3_model.json b/snodespectraldiscretizationofneuralodesforsystemidentification/2b7903cf-5bd7-4fd5-9792-39c448fa42d3_model.json new file mode 100644 index 0000000000000000000000000000000000000000..92a4171a7ff2608a6e3a0a912eb82a03be2f0f87 --- /dev/null +++ b/snodespectraldiscretizationofneuralodesforsystemidentification/2b7903cf-5bd7-4fd5-9792-39c448fa42d3_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b79dd047dbcbf7ca21a55c2207fa36a8226d05b140cac53d37c38cfd4130491c +size 123943 diff --git a/snodespectraldiscretizationofneuralodesforsystemidentification/2b7903cf-5bd7-4fd5-9792-39c448fa42d3_origin.pdf b/snodespectraldiscretizationofneuralodesforsystemidentification/2b7903cf-5bd7-4fd5-9792-39c448fa42d3_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f35020d2bddc8cb292f5cb5e22be3cc281976cc7 --- /dev/null +++ b/snodespectraldiscretizationofneuralodesforsystemidentification/2b7903cf-5bd7-4fd5-9792-39c448fa42d3_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:390976bbdfd15b053010a7d4d9e7d74e9367291ffb7e31263c9196c5d3c46fe8 +size 5270097 diff --git a/snodespectraldiscretizationofneuralodesforsystemidentification/full.md b/snodespectraldiscretizationofneuralodesforsystemidentification/full.md new file mode 100644 index 0000000000000000000000000000000000000000..db1fda077577f5aa86b916135f04ac9be7d50f20 --- /dev/null +++ b/snodespectraldiscretizationofneuralodesforsystemidentification/full.md @@ -0,0 +1,537 @@ +# SNODE: SPECTRAL DISCRETIZATION OF NEURAL ODES FOR SYSTEM IDENTIFICATION + +Alessio 
Quaglino,\* Marco Gallieri, Jonathan Masci, Jan Koutnik + +NNAISENSE, Lugano, Switzerland + +{alessio, marco, jonathan, jan}@nnaisense.com + +# ABSTRACT + +This paper proposes the use of spectral element methods (Canuto et al., 1988) for fast and accurate training of Neural Ordinary Differential Equations (ODE-Nets; Chen et al., 2018) for system identification. This is achieved by expressing their dynamics as a truncated series of Legendre polynomials. The series coefficients, as well as the network weights, are computed by minimizing the weighted sum of the loss function and the violation of the ODE-Net dynamics. The problem is solved by coordinate descent that alternately minimizes, with respect to the coefficients and the weights, two unconstrained sub-problems using standard backpropagation and gradient methods. The resulting optimization scheme is fully time-parallel and results in a low memory footprint. Experimental comparison to standard methods, such as backpropagation through explicit solvers and the adjoint technique (Chen et al., 2018), on training surrogate models of small and medium-scale dynamical systems shows that it is at least one order of magnitude faster at reaching a comparable value of the loss function. The corresponding testing MSE is one order of magnitude smaller as well, suggesting generalization capabilities increase. + +# 1 INTRODUCTION + +Neural Ordinary Differential Equations (ODE-Nets; Chen et al., 2018) can learn latent models from observations that are sparse in time. This property has the potential to enhance the performance of neural network predictive models in applications where information is sparse in time and it is important to account for exact arrival times and delays. In complex control systems and model-based reinforcement learning, planning over a long horizon is often needed, while high frequency feedback is necessary for maintaining stability (Franklin et al., 2014). 
Discrete-time models, including RNNs (Jain & Medsker, 1999), often struggle to fully meet the needs of such applications due to the fixed time resolution. ODE-Nets have been shown to provide superior performance with respect to classic RNNs on time series forecasting with sparse training data. However, learning their parameters can be computationally intensive. In particular, ODE-Nets are memory efficient but time inefficient. In this paper, we address this bottleneck and propose a novel alternative strategy for system identification.

Summary of contributions. We propose SNODE, a compact representation of ODE-Nets for system identification with full state information that makes use of a higher-order approximation of its states by means of Legendre polynomials. This is outlined in Section 4. In order to find the optimal polynomial coefficients and network parameters, we develop a novel optimization scheme, which does not require solving an ODE at each iteration. The resulting algorithm is detailed in Section 3 and is based on backpropagation (Linnainmaa, 1970; Werbos, 1981; Lecun, 1988) and automatic differentiation (Paszke et al., 2017). The proposed method is fully parallel with respect to time and its approximation error decreases exponentially with the Legendre polynomial order (Canuto et al., 1988).

Summary of numerical experiments. In Section 5, our method is tested on a 6-state vehicle problem, where it is at least one order of magnitude faster in each optimizer iteration than explicit and adjoint methods, while convergence is achieved in a third of the iterations. At test time, the MSE is reduced by one order of magnitude. In Section 6, the method is used for a 30-state system consisting of identical vehicles, coupled via a known collision avoidance policy. Again, our method converges in a third of the iterations required by backpropagation through a solver and each iteration is $50\times$ faster than the fastest explicit scheme.
# 2 NEURAL ORDINARY DIFFERENTIAL EQUATIONS

The minimization of a scalar-valued loss function that depends on the output of an ODE-Net can be formulated as a general constrained optimization problem:

$$
\min _ {\theta \in \mathbb {R} ^ {m}} \int_ {t _ {0}} ^ {t _ {1}} L (t, x (t)) d t, \tag {1}
$$

$$
\mathrm {s . t .} \quad \dot {x} (t) = f (t, x (t), u (t); \theta),
$$

$$
x (t _ {0}) = x _ {0},
$$

where $x(t) \in \mathbb{X}$ is the state, $u(t) \in \mathbb{U}$ is the input, the loss and ODE functions $L$ and $f$ are given, and the parameters $\theta$ have to be learned. The spaces $\mathbb{X}$ and $\mathbb{U}$ are typically Sobolev (e.g. Hilbert) spaces expressing the smoothness of $x(t)$ and $u(t)$ (see Section 8). Equation (1) can be used to represent several inverse problems, for instance in machine learning, estimation, and optimal control (Stengel, 1994; Law et al., 2015; Ross, 2009). Problem (1) can be solved using gradient-based optimization through several time-stepping schemes for solving the ODE. Chen et al. (2018) and Gholami et al. (2019) proposed using the adjoint method when $f$ is a neural network. These methods typically rely on explicit time-stepping schemes (Butcher & Wanner, 1996). The limitations of these approaches are briefly summarized below:

Limitations of backpropagation through an ODE solver. The standard approach for solving this problem is to compute the gradients $\frac{\partial L}{\partial\theta}$ using backpropagation through a discrete approximation of the constraints, such as Runge-Kutta methods (Runge, 1895; Butcher & Wanner, 1996) or multi-step solvers (Raissi et al., 2018). This ensures that the solution remains feasible (within a numerical tolerance) at each iteration of a gradient descent method.
However, it has several drawbacks: 1) the memory cost of storing intermediate quantities during backpropagation can be significant, 2) the application of implicit methods would require solving a nonlinear equation at each step, 3) the numerical error can significantly affect the solution, and 4) the problem topology can be unsuitable for optimization (Petersen et al., 2018).

Limitations of adjoint methods. ODE-Nets (Chen et al., 2018) solve (1) using the adjoint method, which consists of simulating a dynamical system defined by an appropriate augmented Hamiltonian (Ross, 2009), with an additional state referred to as the adjoint. In the backward pass the adjoint ODE is solved numerically to provide the gradients of the loss function. This means that intermediate states of the forward pass do not need to be stored. An additional step of the ODE solver is needed for the backward pass. This suffers from a few drawbacks: 1) the dynamics of either the hidden state or the adjoint might be unstable, due to the symplectic structure of the underlying Hamiltonian system, referred to as the curse of sensitivity in (Ross, 2009); 2) the procedure requires solving a differential algebraic equation and a boundary value problem, which is complex, time-consuming, and might not have a solution (Ross & Karpenko, 2012).

Limitations of hybrid methods. ANODE (Gholami et al., 2019) splits the problem into time batches, where the adjoint is used, storing in memory only a few intermediate states from the forward pass. This improves the robustness and generalization of the adjoint method. A similar improvement could be obtained using reversible integrators. However, its computational cost is of the same order as the adjoint method and it does not offer further opportunities for parallelization.
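To make the baseline concrete, here is a minimal toy example (ours, not from the paper) of "backpropagation through a solver": the scalar ODE $\dot{x} = -\theta x$ is discretized with explicit Euler, and $\theta$ is fit by gradient descent on the discretized loss. A central finite difference stands in for reverse-mode autodiff; the learning rate, step size, and horizon are arbitrary illustrative choices.

```python
import math

# Generate data from the true system x' = -theta_true * x, x(0) = x0.
theta_true, x0, dt, n = 1.5, 1.0, 0.01, 500
data = [x0 * math.exp(-theta_true * k * dt) for k in range(n)]

def rollout_loss(theta):
    """Discretized loss of a full explicit-Euler rollout."""
    x, loss = x0, 0.0
    for target in data:
        loss += (x - target) ** 2 * dt       # discretized integral of L
        x = x + dt * (-theta * x)            # explicit Euler step
    return loss

theta = 1.0
for _ in range(100):                         # plain gradient descent
    eps = 1e-6
    grad = (rollout_loss(theta + eps) - rollout_loss(theta - eps)) / (2 * eps)
    theta -= 5.0 * grad
assert abs(theta - theta_true) < 0.05
```

Even on this one-parameter problem, every gradient step requires full sequential rollouts of the solver; this per-iteration cost, and the need to store or recompute the whole trajectory, is the bottleneck discussed above.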
# 3 RELAXATION OF THE SUPERVISED LEARNING PROBLEM

Our algorithm is based on two ingredients: i) the discretization of the problem using spectral elements leading to SNODE, detailed in Section 4, and ii) the relaxation of the ODE constraint from (1), enabling efficient training through backpropagation. The latter can be applied directly at the continuous level and significantly reduces the difficulty of the optimization, as shown in our examples.

The problem in (1) is split into two smaller subproblems: one finds the trajectory $x(t)$ that minimizes an unconstrained relaxation of (1). The other trains the network weights $\theta$ such that the trajectory becomes a solution of the ODE. Both are addressed using standard gradient descent and backpropagation. In particular, a fixed number of ADAM or SGD steps is performed for each problem in an alternate fashion, until convergence. In the following, the details of each subproblem are discussed.

Step 0: Initial trajectory. The initial trajectory $x(t)$ is chosen by solving the problem

$$
\min _ {x (t) \in \mathbb {X}} \int_ {t _ {0}} ^ {t _ {1}} L (t, x (t)) d t, \tag {2}
$$

$$
\begin{array}{l l} \text {s . t .} & x (t _ {0}) = x _ {0}. \end{array}
$$

If this problem does not have a unique solution, a regularization term is added. For a quadratic loss, a closed-form solution is readily available. Otherwise, a prescribed number of SGD iterations is used.

Step 1: Coordinate descent on residual. Once the initial trajectory $x^{\star}(t)$ is found, $\theta$ is computed by solving the unconstrained problem:

$$
\min _ {\theta \in \mathbb {R} ^ {m}} \int_ {t _ {0}} ^ {t _ {1}} R (t, x ^ {\star} (t), \theta) d t, \quad R (t, x (t), \theta) := \| \dot {x} (t) - f (t, x (t), u (t); \theta) \| ^ {2} \tag {3}
$$

If the value of the residual at the optimum $\theta^{*}$ is smaller than a prescribed tolerance, then the algorithm stops. Otherwise, steps 1 and 2 are iterated until convergence.
Step 2: Coordinate descent on relaxation. Once the candidate parameters $\theta^*$ are found, the trajectory is updated by minimizing the relaxed objective:

$$
\min _ {x (t) \in \mathbb {X}} \int_ {t _ {0}} ^ {t _ {1}} \gamma L (t, x (t)) + R (t, x (t), \theta^ {\star}) d t, \tag {4}
$$

$$
\begin{array}{r l} \mathrm {s . t .} & x (t _ {0}) = x _ {0}. \end{array}
$$

Discussion. The proposed algorithm can be seen as an alternating coordinate gradient descent on the relaxed functional used in problem (4), i.e., by alternating a minimization with respect to $x(t)$ and $\theta$. If $\gamma = 0$, multiple minima can exist, since each choice of the parameters $\theta$ would induce a different dynamics $x(t)$ that solves the original constraint. For $\gamma \neq 0$, the loss function in (4) trades off the ODE solution residual for the data fitting, providing a unique solution. The choice of $\gamma$ implicitly introduces a satisfaction tolerance $\epsilon(\gamma)$, i.e., similar to regularized regression (Hastie et al., 2001), implying that $\| \dot{x}(t) - f(t, x(t); \theta) \| \leq \epsilon(\gamma)$. Concurrently, problem (3) reduces the residual.

# 4 SNODE - HIGH-ORDER DISCRETIZATION OF THE RELAXED PROBLEM

In order to numerically solve the problems presented in the previous section, a discretization of $x(t)$ is needed. Rather than updating the values at time points $t_i$ from the past to the future, we introduce a compact representation of the complete discrete trajectory by means of the spectral element method.

Spectral approximation.
We start by representing the scalar unknown trajectory, $x(t)$, and the known input, $u(t)$, as truncated series:

$$
x (t) = \sum_ {i = 0} ^ {p} x _ {i} \psi_ {i} (t), \quad u (t) = \sum_ {i = 0} ^ {z} u _ {i} \zeta_ {i} (t), \tag {5}
$$

where $x_{i}, u_{i} \in \mathbb{R}$ and $\psi_i(t), \zeta_i(t)$ are sets of given basis functions that span the spaces $\mathbb{X}_h \subset \mathbb{X}$ and $\mathbb{U}_h \subset \mathbb{U}$. In this work, we use orthogonal Legendre polynomials of order $p$ (Canuto et al., 1988) for $\psi_i(t)$, where $p$ is a hyperparameter, and the cosine Fourier basis for $\zeta_i(t)$, where $z$ is fixed.

Collocation and quadrature. In order to compute the coefficients $x_{i}$ of (5), we enforce the equation at a discrete set $\mathbb{Q}$ of collocation points $t_{q}$. Here, we choose $p + 1$ Gauss-Lobatto nodes, which include $t = t_0$. This directly enforces the initial condition. Other choices are also possible (Canuto et al., 1988). Introducing the vectors of series coefficients $x_{I} = \{x_{i}\}_{i = 0}^{p}$ and of evaluations at quadrature points $x(t_{\mathbb{Q}}) = \{x(t_q)\}_{q\in \mathbb{Q}}$, the collocation problem can be solved in matrix form as

$$
x _ {I} = M ^ {- 1} x \left(t _ {\mathbb {Q}}\right), \quad M _ {q i} := \psi_ {i} \left(t _ {q}\right). \tag {6}
$$

We approximate the integral (3) as a sum of residual evaluations over $\mathbb{Q}$. Assuming that $x(0) = x_0$, the integrand at all quadrature points $t_{\mathbb{Q}}$ can be computed as a component-wise norm

$$
R (t _ {\mathbb {Q}}, x (t _ {\mathbb {Q}}), \theta) = \| D M ^ {- 1} x (t _ {\mathbb {Q}}) - f (t _ {\mathbb {Q}}, x (t _ {\mathbb {Q}}), u (t _ {\mathbb {Q}}); \theta) \| ^ {2}, \quad D _ {q i} := \dot {\psi} _ {i} (t _ {q}). \tag {7}
$$

Fitting the input data. For the case when problem (2) admits a unique solution, we propose a new direct training scheme, $\delta$ -SNODE, which is summarized in Algorithm 1.
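Under our reading of (6)-(7), the matrices $M$ and $D$ can be assembled directly from the Legendre basis at Gauss-Lobatto nodes (the endpoints of $[-1, 1]$ plus the roots of $P_p'$); the sketch below is illustrative, not the authors' code:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

# Assemble the collocation matrices of (6)-(7) for a Legendre basis on [-1, 1].
# M maps series coefficients to nodal values; D maps them to nodal derivatives.
p = 14
interior = Legendre.basis(p).deriv().roots()               # roots of P_p'
nodes = np.concatenate(([-1.0], interior, [1.0]))          # p + 1 Gauss-Lobatto nodes
M = np.stack([Legendre.basis(i)(nodes) for i in range(p + 1)], axis=1)
D = np.stack([Legendre.basis(i).deriv()(nodes) for i in range(p + 1)], axis=1)

# D @ M^{-1} differentiates a trajectory given by its nodal values; for a
# smooth function the accuracy is spectral in p.
x_nodes = np.sin(nodes)
dx_nodes = D @ np.linalg.solve(M, x_nodes)
assert np.max(np.abs(dx_nodes - np.cos(nodes))) < 1e-8
```

The product $D M^{-1}$ is exactly the operator appearing in the residual (7); once $M$ and $D$ are precomputed, evaluating the residual at all nodes is a single matrix product.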
In general, a least-squares approach must be used instead. This entails computing the integral in (2), which can be done by evaluating the loss function $L$ at quadrature points $t_q$. If the input data is not available at $t_q$, we approximate the integral by evaluating $L$ at the available time points. The corresponding alternating coordinate descent scheme $\alpha$-SNODE is presented in Algorithm 2. In the next sections, we study the consequences of a low-data scenario on this approach.

We use fixed numbers $N_{t}$ and $N_{x}$ of updates for, respectively, $\theta$ and $x(t)$. Both are performed with standard routines, such as SGD. In our experiments, we use ADAM to optimize the parameters and an interpolation order $p = 14$, but any other orders and solvers are possible.

Algorithm 1 $\delta$-SNODE training

Input: $M, D$ from (6)-(7).

$$
x _ {I} ^ {*} \leftarrow \arg \min _ {x} L (M x)
$$

while $R(Mx_{I}^{*},\theta^{*}) > \delta$ do

$$
\theta^ {*} \leftarrow \operatorname {A D A M} _ {\theta} \left[ R \left(M x _ {I} ^ {*}, \theta\right) \right]
$$

end while

Output: $\theta^{*}$

Algorithm 2 $\alpha$-SNODE training

Input: $M, D, \theta^{*} \in \mathbb{R}^{m}, x_{I}^{*} \in \mathbb{R}^{p}, \gamma > 0$.

while $\gamma L(Mx_I^*) + R(Mx_I^*,\theta^*) > \delta$ do

for $i = 0\ldots N_{x}$ do

$$
x _ {I} ^ {*} \leftarrow \operatorname {S G D} _ {x} \left[ \gamma L (M x) + R (M x, \theta^ {*}) \right]
$$

end for

for $i = 0\ldots N_t$ do

$$
\theta^ {*} \leftarrow \operatorname {A D A M} _ {\theta} \left[ R \left(M x _ {I} ^ {*}, \theta\right) \right]
$$

end for

end while

Output: $\theta^{*}$

Ease of time parallelization. If $R(t_{q}) = 0$ is enforced explicitly at $q \in \mathbb{Q}$, then the resulting discrete system can be seen as an implicit time-stepping method of order $p$.
However, while ODE integrators can only be made parallel across the different components of $x(t)$, the assembly of the residual can be done in parallel also across time. This massively increases the parallelization capabilities of the proposed schemes compared to standard training routines.

Memory cost. If an ODE admits a regular solution, with regularity $r > p$, in the sense of Hilbert spaces, i.e., of the number of square-integrable derivatives, then the approximation error of the SNODE converges exponentially with $p$ (Canuto et al., 1988). Hence, it produces a very compact representation of an ODE-Net. Thanks to this property, $p$ is typically much lower than the equivalent number of time steps of explicit or implicit schemes with a fixed order. This greatly reduces the complexity and the memory requirements of the proposed method, which can be evaluated at any $t$ via (5) by only storing a few $x_{i}$ coefficients.

Stability and vanishing gradients. The forward Euler method is known to have a small region of convergence. In other words, integrating very fast dynamics requires a very small time step, $dt$, in order to provide accurate results. In particular, for the solver error to be bounded, the eigenvalues of the state Jacobian of the ODE need to lie in the circle of the complex plane centered at $(-1/dt, 0)$ with radius $1 / dt$ (Ciccone et al., 2018; Isermann, 1989). Higher-order explicit methods, such as Runge-Kutta (Runge, 1895), have larger but still limited convergence regions. Our algorithms, on the other hand, are implicit methods, which have a larger region of convergence than recursive (explicit) methods (Hairer et al., 1993). We claim that this results in more stable and robust training. This claim is supported by our experiments. Reducing the time step can improve the Euler accuracy but it can still lead to vanishing or exploding gradients (Zilly et al., 2016; Goodfellow et al., 2016).
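The limited stability region of forward Euler is easy to demonstrate on the linear test equation $\dot{x} = \lambda x$: the iteration $x_{k+1} = (1 + \lambda\, dt)\, x_k$ stays bounded only when $|1 + \lambda\, dt| \leq 1$. A small illustration (ours, not from the paper):

```python
# Forward Euler on x' = lam * x with lam < 0: the update multiplies the state
# by (1 + lam * dt) each step, so the iteration is stable only if
# |1 + lam * dt| <= 1, i.e. lam * dt lies in the unit circle centred at -1.
lam = -50.0                                  # a fast but stable mode

def euler_final(dt, steps=200):
    x = 1.0
    for _ in range(steps):
        x = x + dt * lam * x                 # one explicit Euler step
    return abs(x)

assert euler_final(0.01) < 1e-3              # lam*dt = -0.5: decays, as it should
assert euler_final(0.05) > 1e6               # lam*dt = -2.5: blows up
```

Even though the continuous mode is strongly stable, the discrete iteration diverges as soon as the time step crosses the stability boundary.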
In Appendix C, we show that our methods do not suffer from this problem.

Experiment setup and hyperparameters. For all experiments, a common setup was employed and no optimization of hyperparameters was performed. A time horizon of $T = 10\,\mathrm{s}$ and a batch size of 100 were used. Learning rates were set to $10^{-2}$ for ADAM (for all methods) and $10^{-3}$ for SGD (for $\alpha$-SNODE). For the $\alpha$-SNODE method, $\gamma = 3$ and 10 iterations were used for the SGD and ADAM algorithms at each epoch, as outlined in Algorithm 2. The initial trajectory was perturbed as $x_0 \leftarrow x_0 + \xi$, $\xi \sim U(-0.1, 0.1)$. This perturbation prevents the exact convergence of Algorithm 1 during initialization, allowing the alternating coordinate descent algorithm to proceed.

# 5 MODELING OF A PLANAR VEHICLE DYNAMICS

Let us consider the system

$$
\dot {\eta} = J (\eta) v, \quad M \dot {v} + d (v) + C (v) v = u, \tag {8}
$$

where $\eta, v \in \mathbb{R}^3$ are the states, $u = (F_x, 0, \tau_{xy})$ is the control, $C(v)$ is the Coriolis matrix, $d(v)$ is the (linear) damping force, and $J(\eta)$ encodes the coordinate transformation from the body to the world frame (Fossen, 2011). A gray-box model is built using a neural network for each matrix

$$
\hat {J} (\eta) = f _ {J} (\eta ; \theta_ {J}), \quad \hat {C} (v) = f _ {C} (v; \theta_ {C}), \quad \hat {d} (v) = f _ {d} (v; \theta_ {d}).
$$

Each network consists of two layers, the first with a tanh activation. Bias is excluded for $f_{C}$ and $f_{d}$. For $f_{J}$, $\sin(\phi)$ and $\cos(\phi)$ are used as input features, where $\phi$ is the vehicle orientation. When inserted in (8), these discrete networks produce an ODE-Net that is a surrogate model of the physical system. The trajectories of the system and the learning curves are shown in Appendix A.
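A hedged sketch of how such a gray-box right-hand side can be assembled is given below. The hidden width, the random weights, the choice of $\phi$ as the third component of $\eta$, and the known inertia matrix are our illustrative assumptions, not the paper's values:

```python
import numpy as np

# Gray-box right-hand side in the spirit of (8): each unknown term is a
# two-layer network (tanh first layer; no bias for f_C and f_d), and f_J
# sees (sin(phi), cos(phi)) as input features.
rng = np.random.default_rng(0)
H = 16                                        # hidden width (assumption)

def two_layer(out_dim, in_dim, bias=True):
    W1 = rng.normal(size=(H, in_dim)) * 0.1
    b1 = rng.normal(size=H) * 0.1 if bias else 0.0
    W2 = rng.normal(size=(out_dim, H)) * 0.1
    return lambda z: W2 @ np.tanh(W1 @ z + b1)

f_J = two_layer(9, 2)                         # J-hat(eta): 3x3 from (sin, cos)
f_C = two_layer(9, 3, bias=False)             # C-hat(v): 3x3, no bias
f_d = two_layer(3, 3, bias=False)             # d-hat(v): damping force, no bias
M_inv = np.eye(3)                             # inverse of the known inertia (assumption)

def x_dot(eta, v, u):
    phi = eta[2]                              # vehicle orientation (assumption)
    J_hat = f_J(np.array([np.sin(phi), np.cos(phi)])).reshape(3, 3)
    C_hat = f_C(v).reshape(3, 3)
    eta_dot = J_hat @ v                       # kinematics
    v_dot = M_inv @ (u - f_d(v) - C_hat @ v)  # dynamics solved for v'
    return np.concatenate([eta_dot, v_dot])   # 6-state ODE-Net right-hand side

assert x_dot(np.zeros(3), np.ones(3), np.zeros(3)).shape == (6,)
```

Plugging `x_dot` into any of the training schemes above then treats the concatenated $(\eta, v)$ as the 6-dimensional state of the surrogate ODE-Net.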
![](images/953368a1e39afed7a40f434fdbd641c6202eb817c83fac80456ba81811eb9695.jpg)
(a) $\delta$-SNODE, MSE = 0.109

![](images/0cd18919f716ac7fd8557829604215bb004bf9008ad07bda24098649aec9e745.jpg)
(b) $\alpha$-SNODE, MSE = 0.019

![](images/07bbdb8370968d54a0bf8d8f0450089ee0df41d3cc183c45d7080a550e5ec93b.jpg)
(c) DoPr5, MSE = 0.307

![](images/09a381773cf87d9a2303b6247a153b700a5aac7f86d0389e068bfa1f30a50d87.jpg)
(d) Euler, MSE = 0.476
Figure 1: Vehicle dynamics testing in the high-data regime. Each plot compares the true trajectory (dotted red) with the prediction from the surrogate (solid blue). The time horizon is $5\mathrm{x}$ longer than in training. The shown methods are: (a)-(b) the proposed $\delta$-SNODE and $\alpha$-SNODE schemes, (c) fifth-order Dormand-Prince, and (d) Explicit Euler. Models trained with our methods have better test MSE and forecast more precisely.

Comparison of methods in the high-data regime. In the case of $\delta$-SNODE and $\alpha$-SNODE, only $p + 1$ points are needed for the accurate integration of the loss function, provided that such points coincide with the Gauss-Lobatto quadrature points. We found that 100 equally-spaced points produce a comparable result. Therefore, the training performance of the novel and traditional training methods was compared by sampling the trajectories at 100 equally-spaced time points. Table 1a shows that $\delta$-SNODE outperforms BKPR-DoPr5 by a factor of 50, while producing significantly improved generalization. The speedup reduces to 20 for $\alpha$-SNODE, which, however, yields a further reduction of the testing MSE by a factor of 10, as can be seen in Figure 1.

Comparison in the low-data regime. The performance of the methods was compared using fewer time points, randomly sampled from a uniform distribution. For the baselines, evenly-spaced points

Table 1: Simulation results for the vehicle dynamics example.
The standard backpropagation schemes are denoted by BKPR, while the adjoint method is denoted by ADJ. The Dormand-Prince method of order 5 with adaptive time step is denoted by DoPr5, while Euler denotes explicit Euler with a time step equal to the data sampling time. For BKPR-DoPr5 we set rtol=10$^{-7}$, atol=10$^{-9}$. Experiments were performed on an i9 Apple laptop with 32GB of RAM. Reported times are per iteration and were averaged over the total number of iterations performed.

(a) High-data regime
| Method | Time [ms] (training) | # Iter | Final loss | Test MSE |
| --- | --- | --- | --- | --- |
| $\delta$-SNODE | 8.3 | 480 | 0.011 | 0.109 |
| $\alpha$-SNODE | 134 | 100 | 0.011 | 0.019 |
| BKPR-Euler | 97 | 1200 | 0.047 | 0.476 |
| BKPR-DoPr5 | 185 | 1140 | 0.011 | 0.307 |
| ADJ-Euler | 208 | 1200 | 0.047 | 0.492 |
| ADJ-DoPr5 | 3516 | 1140 | 0.011 | 0.35 |
+ +(b) Low-data regime + +
| Method | % data | # Iter | Final loss | Test MSE |
| --- | --- | --- | --- | --- |
| $\alpha$-SNODE | 50 | 140 | 0.010 | 0.029 |
| $\alpha$-SNODE | 25 | 190 | 0.009 | 0.052 |
| BKPR-Euler | 50 | 1200 | 0.163 | 1.056 |
| BKPR-Euler | 25 | 1200 | 0.643 | 3.771 |
| BKPR-DoPr5 | 50 | 1160 | 0.011 | 0.35 |
| BKPR-DoPr5 | 25 | 1180 | 0.011 | 0.291 |
were used. Table 1b shows that $\alpha$-SNODE preserves a good testing MSE, at the price of an increased number of iterations. With only $25\%$ of the data, $\alpha$-SNODE is $10x$ faster than BKPR-DoPr5. Moreover, its test MSE is $1/7$ that of BKPR-DoPr5 and up to $1/70$ that of BKPR-Euler, showing that the adaptive time step of DoPr5 significantly improves the baseline but is unable to match the accuracy of the proposed methods. The adjoint method produced the same results as backpropagation ($\pm 2\%$).

# 6 LEARNING A MULTI-AGENT SIMULATION

Consider the multi-agent system consisting of $N_{a}$ kinematic vehicles:

$$
\dot{\eta}_{i} = J(\eta_{i}) \tanh\left(w + K_{c}(\eta_{i}) + \frac{1}{N_{a}} \sum_{j \neq i}^{N_{a}} K_{o}(\eta_{i}, \eta_{j})\right), \tag{9}
$$

where $\eta \in \mathbb{R}^{3N_a}$ are the states (planar position and orientation) and $J(\eta_i)$ is the coordinate transform from the body to the world frame, common to all agents. The agents' velocities are determined by known arbitrary control and collision-avoidance policies, $K_{c}$ and $K_{o}$ respectively, plus an additional high-frequency measurable signal $w = w(t)$, shared by all vehicles. The control laws are non-linear and are described in detail in Appendix B. We wish to learn their kinematics matrix by means of a neural network as in Section 5. The task is simpler here, but the resulting ODE has $3N_{a}$ states, coupled by $K_{o}$. We simulate $N_{a} = 10$ agents in series.

![](images/1794352d4671700a2a189dc8bcfc6061deadf26f04896a85deb86b7a91b98423.jpg)
(a) $\delta$-SNODE MSE = 1.7 e-4

![](images/d07ad4865d8286505661189bb27399507ef977ac22a8a088903fe9ad5623ff7c.jpg)
(b) $\alpha$-SNODE MSE = 9.1 e-5

![](images/0254d400e864b6054bbfd95b2b87a45ce08767623bb4732c087708ad854a2b82.jpg)
(c) BKPR-Euler MSE = 1.3 e-3
Figure 2: Multi-agent testing in high-data regime.
Each plot compares the true trajectory (dotted red) with the prediction from the surrogate (solid blue). The time horizon is $4\mathrm{x}$ longer than in training. The shown methods are: (a)-(b) the proposed $\delta$-SNODE and $\alpha$-SNODE schemes, and (c) BKPR-Euler and (d) ADJ-Euler. Models trained with our methods have better test MSE and forecast more precisely.

![](images/4932d79ecf2f8c9db056800fbc8fa78e99956ff3cbda33407b6668eaa10a66d6.jpg)
(d) ADJ-Euler MSE = 5.4 e-4

Comparison of methods with full and sparse data. The learning curves for the high-data regime are in Figure 3. For the $\alpha$-SNODE method, training was terminated when the loss in (4) fell below $\gamma \bar{L} + \bar{R}$, with $\bar{L} = 0.11$ and $\bar{R} = 0.01$. For the case of $20\%$ data, we set $\bar{L} = 0.01$. Table 2 summarizes the results. $\delta$-SNODE is the fastest method, followed by $\alpha$-SNODE, which is the best performing. The iteration time of BKPR-Euler is 50x slower, with 14x worse test MSE. ADJ-Euler is the slowest, but its test MSE is in between BKPR-Euler and our methods. Random down-sampling of the data by $50\%$ and $20\%$ (evenly-spaced for the baselines) makes ADJ-Euler fall behind the most. BKPR-DoPr5 failed to find a time step meeting the tolerances, so they were increased to $\mathrm{rtol} = 10^{-5}$, $\mathrm{atol} = 10^{-7}$. Since the loss continued to increase, training was terminated at 200 epochs. ADJ-DoPr5 failed to compute gradients. Test trajectories are in Figure 2. Additional details are in Appendix B.

Table 2: Simulation results for the multi-agent example. Experiments were performed on a 2.9 GHz Intel Core i7 Apple laptop with 16GB of RAM. For the $\alpha$-SNODE method, each iteration consists of 10 steps of SGD and 10 of ADAM. In the high-data regime, $\alpha$-SNODE converges in 5x fewer iterations than Euler, each 2.6x faster, with a 14x smaller test MSE.
Method $\delta$-SNODE is even $12x$ faster, with similar performance. Gains are comparable in the low-data regime. Method BKPR-DoPr5 failed.

(a) High-data regime
| Method | Time [ms] (training) | # Iter | Loss | Test MSE |
| --- | --- | --- | --- | --- |
| $\delta$-SNODE | 106 | 500 | 0.12 | 1.7 e-4 |
| $\alpha$-SNODE | 1999 | 180 | 0.12 | 9.1 e-5 |
| BKPR-Euler | 5330 | 1200 | 0.177 | 1.3 e-3 |
| BKPR-DoPr5 | 9580 | Fail (200) | 21707 | - |
| ADJ-Euler | 8610 | 990 | 0.104 | 5.4 e-4 |
| ADJ-DoPr5 | - | Fail (0) | - | - |
+ +(b) Low-data regime + +
| Method | % data | # Iter | Loss | Test MSE |
| --- | --- | --- | --- | --- |
| $\alpha$-SNODE | 50 | 160 | 0.22 | 1.5 e-4 |
| $\alpha$-SNODE | 20 | 175 | 0.23 | 9.1 e-4 |
| BKPR-Euler | 50 | 990 | 0.43 | 2.8 e-3 |
| BKPR-Euler | 20 | 990 | 0.645 | 1.6 e-3 |
| ADJ-Euler | 50 | 2990 | 0.53 | 9.4 e-3 |
| ADJ-Euler | 20 | 1590 | 0.63 | 3.0 e-3 |
Robustness of the methods. The use of a high-order variable-step method (DoPr5), providing an accurate ODE solution, does not, however, lead to good training results. In particular, the loss function continued to increase over the iterations. On the other hand, despite being nearly 50 times slower than our methods, the fixed-step forward Euler solver was successfully used for learning the dynamics of a 30-state system in the training configuration described in Appendix B. One should, however, note that in this configuration the gains for the collision-avoidance policy $K_{o}$ (which couples the ODE) were set to small values. This makes the system simpler and more stable than it would be with larger gains. As a result, if one attempts to train with the test configuration from Appendix B, where the gains are increased and the system is less stable, then backpropagating through Euler simply fails. Comparing Figures 3 and 4, it can be seen that the learning curves of our methods are unaffected by the change in the gains, while BKPR-Euler and ADJ-Euler fail to decrease the loss.

# 7 RELATED WORK

RNN training pathologies. Among the first RNNs to be trained successfully were LSTMs (Greff et al., 2017), thanks to their particular architecture. Training an arbitrary RNN effectively is generally difficult, as standard RNN dynamics can become unstable or chaotic during training, which can cause the gradients to explode and SGD to fail (Pascanu et al., 2012). When RNNs consist of discretized ODEs, the stability of SGD is intrinsically related to the size of the convergence region of the solver (Ciccone et al., 2018). Since higher-order and implicit solvers have larger convergence regions (Hairer

![](images/717d54360322e98cea5af354149db8429406d96770daf00dee4750a149bc52d6.jpg)
Figure 3: Multi-agent learning using different methods. Training loss vs. iterations (left) and execution time (right). Our methods converge one order of magnitude faster and to a lower loss.
Note that changing the simulation parameters can make Euler unstable (see Figure 4).

![](images/77a3aa9a6da5640e52e4864e2914a88b91a4737bc4f499dbfdb20ac81ba62ae2.jpg)

![](images/34905fdfed1f7e0f620e27f213e409745567496568446ec3c5639b7bb5296e20.jpg)
Figure 4: Multi-agent learning with increased controller gains using different methods. Training loss vs. iterations (left) and execution time (right). Our methods remain stable and their learning curves are the same as in Figure 3. BKPR-Euler and ADJ-Euler are not able to train in this configuration. The proposed algorithms are inherently more stable and robust than the baselines.

![](images/8c60bfbd3b6a636c77bd144bd789b46acfaeff854c85fb1320c97e224c14dd58.jpg)

et al., 1993), following (Pascanu et al., 2012) it can be argued that our method has the potential to mitigate instabilities and hence to make learning more efficient. This is supported by our results.

Unrolled architectures. In (Graves, 2016), an RNN was used with a stopping criterion for iterative estimation with adaptive computation time. Highway (Srivastava et al., 2015) and residual networks (He et al., 2015) have been studied in (Greff et al., 2016) as unrolled estimators. In this context, (Haber & Ruthotto, 2017) treated residual networks as autonomous discrete ODEs and investigated their stability. Finally, in (Ciccone et al., 2018) a discrete-time non-autonomous ODE based on residual networks was made explicitly stable and convergent to an input-dependent equilibrium, then used for adaptive computation.

Training stable ODEs. In (Haber & Ruthotto, 2017; Ciccone et al., 2018), ODE stability conditions were used to train unrolled recurrent residual networks. Similarly, when using our method on the model of (Ciccone et al., 2018), ODE stability can be enforced by projecting the state weight matrices, $A$, onto the Hurwitz-stable space, i.e., $A \prec 0$.
At test time, overall stability will also depend on the solver (Durran, 2010; Isermann, 1989). Therefore, a high-order variable-step method (e.g., DoPr5) should be used at test time in order to minimize the approximation error.

Dynamics and machine learning. A physics prior on a neural network was used by (Jia et al., 2018) in the form of a consistency loss with data from a simulation. In (De Avila Belbute-Peres et al., 2018), a differentiable physics framework was introduced for point-mass planar models with contact dynamics. (Ruthotto & Haber, 2018) looked at Partial Differential Equations (PDEs) to analyze neural networks, while (Raissi & Karniadakis, 2018; Raissi et al., 2017) used Gaussian Processes (GPs) to model PDEs. The solution of a linear ODE was used in (Soleimani et al., 2017) in conjunction with a structured multi-output GP to model patients' outcomes under continuous treatment observed at random times. (Pathak et al., 2017) predicted the divergence rate of a chaotic system with RNNs.

# 8 SCOPE AND LIMITATIONS

Test time and cross-validation. At test time, since the future outputs are unknown, an explicit integrator is needed. For cross-validation, the loss instead needs to be evaluated on a different dataset. In order to do so, one needs to solve the ODE forward in time. However, since the output data is available during cross-validation, a corresponding polynomial representation of the form (5) can be found and the relaxed loss (4) can be evaluated efficiently.

Nonsmooth dynamics. We have assumed that the ODE-Net dynamics has a regularity $r > p$ in order to take advantage of the exponential convergence of spectral methods, i.e., that their approximation error reduces as $O(h^p)$, where $h$ is the size of the window used to discretize the interval. However, this might not be true in general.
In these cases, the optimal choice would be to use an $hp$-spectral approach (Canuto et al., 1988), where $h$ is reduced locally only near the discontinuities. This is very closely related to adaptive time-stepping for ODE solvers.

Topological properties, convergence, and better generalization. There are a few theoretical open questions stemming from this work. We argue that one reason for the performance improvement shown by our algorithms is that the set of functions generated by a fixed neural network topology does not possess favorable topological properties for optimization, as discussed in (Petersen et al., 2018). Therefore, the constraint relaxation proposed in this work may improve the properties of the optimization space. This is similar to interior-point methods and can help accelerate convergence as well as avoid local minima. A further explanation is that the proposed method suffers from neither vanishing nor exploding gradients, as shown in Appendix C. Moreover, our approach very closely resembles the MAC scheme, for which theoretical convergence results are available (Carreira-Perpinan & Wang, 2014).

Multiple ODEs: Synchronous vs Asynchronous. The proposed method can be used for an arbitrary cascade of dynamical systems, as they can be expressed as a single ODE. When only the final state of one ODE (or its trajectory) is fed into the next block, e.g. as in (Ciccone et al., 2018), the method could be extended by means of $2M$ smaller optimizations, where $M$ is the number of ODEs.

Hidden states. Latent states do not appear in the loss, so training, and particularly initializing the polynomial coefficients, is more difficult. A hybrid approach is to warm-start the optimizer using a few iterations of backpropagation. We plan to investigate a full spectral approach in the future.

# REFERENCES

J.C. Butcher and G. Wanner. Runge-Kutta methods: some historical notes.
Applied Numerical Mathematics, 22(1-3):113-151, November 1996. ISSN 01689274. doi: 10.1016/S0168-9274(96)00048-7. URL https://linkinghub.elsevier.com/retrieve/pii/S0168927496000487. +C.G. Canuto, M.Y. Hussaini, A. Quarteroni, and T.A. Zang. Spectral Methods in Fluid Dynamics. Springer-Verlag, 1988. +Miguel Carreira-Perpinan and Weiran Wang. Distributed optimization of deeply nested systems. In Artificial Intelligence and Statistics, pp. 10-19, 2014. +Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David K. Duvenaud. Neural Ordinary Differential Equations. In NeurIPS, 2018. +Marco Ciccone, Marco Gallieri, Jonathan Masci, Christian Osendorfer, and Faustino Gomez. Nais-net: Stable deep networks from non-autonomous differential equations. In NeurIPS, 2018. +Filipe De Avila Belbute-Peres, Kevin A. Smith, Kelsey Allen, Joshua B. Tenenbaum, and J. Zico Kolter. End-to-end differentiable physics for learning and control. In NeurIPS, 2018. +Dale R. Durran. Numerical Methods for Fluid Dynamics. Springer New York, 2010. doi: 10.1007/978-1-4419-6412-0. URL https://doi.org/10.1007/978-1-4419-6412-0. +Thor I Fossen. Handbook of marine craft hydrodynamics and motion control. John Wiley & Sons, 2011. +Gene F. Franklin, J. David Powell, and Abbas Emami-Naeini. Feedback Control of Dynamic Systems (7th Edition). Pearson, 2014. ISBN 0133496597. +Amir Gholami, Kurt Keutzer, and George Biros. ANODE: Unconditionally Accurate Memory-Efficient Gradients for Neural ODEs. Preprint at arXiv:1902.10298, 2019. +Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org. +Alex Graves. Adaptive Computation Time for Recurrent Neural Networks. arXiv:1603.08983 [cs], March 2016. URL http://arxiv.org/abs/1603.08983.arXiv:1603.08983. +Klaus Greff, Rupesh Kumar Srivastava, and Jürgen Schmidhuber. Highway and residual networks learn unrolled iterative estimation. CoRR, abs/1612.07771, 2016. URL http://arxiv.org/abs/1612.07771. 
+ +Klaus Greff, Rupesh Kumar Srivastava, Jan Koutnik, Bas R. Steunebrink, and Jürgen Schmidhuber. LSTM: A Search Space Odyssey. IEEE Transactions on Neural Networks and Learning Systems, 28(10):2222-2232, October 2017. ISSN 2162-237X, 2162-2388. doi: 10.1109/TNNLS.2016.2582924. URL http://arxiv.org/abs/1503.04069. arXiv: 1503.04069. +Eldad Haber and Lars Ruthotto. Stable Architectures for Deep Neural Networks. arXiv preprint arXiv:1705.03341, 2017. URL https://arxiv.org/abs/1705.03341. +E. Hairer, S. P. Nørsett, and G. Wanner. Solving Ordinary Differential Equations I (2Nd Revised. Ed.): Nonstiff Problems. Springer-Verlag, Berlin, Heidelberg, 1993. ISBN 0-387-56670-8. +Trevor Hastie, Robert Tibshirani, and Jerome Friedman. *The Elements of Statistical Learning*. Springer Series in Statistics. Springer New York Inc., New York, NY, USA, 2001. +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. arXiv:1512.03385 [cs], December 2015. URL http://arxiv.org/abs/1512.03385.arXiv:1512.03385. +Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, New York, NY, USA, 2nd edition, 2012. ISBN 0521548233, 9780521548236. +Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv:1502.03167 [cs], February 2015. URL http://arxiv.org/abs/1502.03167.arXiv:1502.03167. +Rolf Isermann. Digital Control Systems. Springer Berlin Heidelberg, 1989. doi: 10.1007/978-3-642-86417-9. URL https://doi.org/10.1007/978-3-642-86417-9. +L. C. Jain and L. R. Medsker. Recurrent Neural Networks: Design and Applications (International Series on Computational Intelligence). CRC Press, 1999. ISBN 0849371813. +Xiaowei Jia, Anuj Karpatne, Jared Willard, Michael Steinbach, Jordan S. Read, Paul C. Hanson, Hilary A. Dugan, and Vipin Kumar. 
Physics guided recurrent neural networks for modeling dynamical systems: Application to monitoring water temperature and quality in lakes. CoRR, abs/1810.02880, 2018. URL http://arxiv.org/abs/1810.02880. +Kody Law, Andrew Stuart, and Kostas Zygalakis. Data assimilation. Cham, Switzerland: Springer, 2015. +Yann Lecun. A theoretical framework for back-propagation. In D. Touretzky, G. Hinton, and T. Sejnowski (eds.), Proceedings of the 1988 Connectionist Models Summer School, CMU, Pittsburg, PA, pp. 21-28. Morgan Kaufmann, 1988. +S. Linnainmaa. The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors. Master's thesis, Univ. Helsinki, 1970. +Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. CoRR, abs/1211.5063, 2012. +Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. In NIPS 2017 Workshop Autodiff, 2017. +Jaideep Pathak, Zhixin Lu, Brian R. Hunt, Michelle Girvan, and Edward Ott. Using Machine Learning to Replicate Chaotic Attractors and Calculate Lyapunov Exponents from Data. Chaos: An Interdisciplinary Journal of Nonlinear Science, 27(12):121102, December 2017. ISSN 1054-1500, 1089-7682. doi: 10.1063/1.5010300. URL http://arxiv.org/abs/1710.07313.arXiv: 1710.07313. +Philipp Petersen, Mones Raslan, and Felix Voigtlaender. Topological properties of the set of functions generated by neural networks of fixed size. arXiv preprint arXiv:1806.08459, 2018. + +Maziar Raissi and George Em Karniadakis. Hidden Physics Models: Machine Learning of Nonlinear Partial Differential Equations. Journal of Computational Physics, 357:125-141, March 2018. ISSN 00219991. doi: 10.1016/j.jcp.2017.11.039. URL http://arxiv.org/abs/1708.00588.arXiv: 1708.00588. +Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. 
Numerical Gaussian Processes for Time-dependent and Non-linear Partial Differential Equations. arXiv:1703.10230 [cs, math, stat], March 2017. URL http://arxiv.org/abs/1703.10230.
Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Multistep Neural Networks for Data-driven Discovery of Nonlinear Dynamical Systems. arXiv:1801.01236 [nlin, physics:physics, stat], January 2018. URL http://arxiv.org/abs/1801.01236.
I. Michael Ross. A primer on Pontryagin's principle in optimal control, volume 2. Collegiate Publishers, San Francisco, 2009.
I. Michael Ross and Mark Karpenko. A review of pseudospectral optimal control: From theory to flight. Annual Reviews in Control, 36(2):182-197, 2012.
C. Runge. Ueber die numerische Auflösung von Differentialgleichungen. Mathematische Annalen, 46(2):167-178, June 1895. ISSN 0025-5831, 1432-1807. doi: 10.1007/BF01446807. URL http://link.springer.com/10.1007/BF01446807.
Lars Ruthotto and Eldad Haber. Deep Neural Networks Motivated by Partial Differential Equations. arXiv:1804.04272 [cs, math, stat], April 2018. URL http://arxiv.org/abs/1804.04272.
Bruno Siciliano, Lorenzo Sciavicco, Luigi Villani, and Giuseppe Oriolo. Robotics: Modelling, Planning and Control (Advanced Textbooks in Control and Signal Processing). Springer, 2008.
Hossein Soleimani, Adarsh Subbaswamy, and Suchi Saria. Treatment-Response Models for Counterfactual Reasoning with Continuous-time, Continuous-valued Interventions. arXiv:1704.02038 [cs, stat], April 2017. URL http://arxiv.org/abs/1704.02038.
Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway Networks. arXiv:1505.00387 [cs], May 2015. URL http://arxiv.org/abs/1505.00387.
Robert F. Stengel. Optimal control and estimation. Dover books on advanced mathematics. Dover Publications, New York, 1994. ISBN 978-0-486-68200-6.
P. J. Werbos. Applications of advances in nonlinear sensitivity analysis.
In Proceedings of the 10th IFIP Conference, 31.8 - 4.9, NYC, pp. 762-770, 1981.
Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutnik, and Jürgen Schmidhuber. Recurrent Highway Networks. arXiv:1607.03474 [cs], July 2016. URL http://arxiv.org/abs/1607.03474.

# APPENDIX

# A ADDITIONAL MATERIAL FOR THE VEHICLE DYNAMICS EXAMPLE

The model is formulated in a concentrated-parameter form (Siciliano et al., 2008). We follow the notation of (Fossen, 2011). Recall the system definition:

$$
\dot{\eta} = J(\eta) v, \quad M \dot{v} + d(v) + C(v) v = u,
$$

where $\eta, v \in \mathbb{R}^3$ are the states, namely, the $x$ and $y$ coordinates in a fixed (world) frame, the vehicle orientation with respect to this frame, $\phi$, and the body-frame velocities, $v_x, v_y$, and angular rate, $\omega$. The input is a set of body-frame forces and torque, $u = (F_x, 0, \tau_{xy})$. The kinematic matrix is

$$
J(\eta) = \left[ \begin{array}{c c c} \cos(\phi) & -\sin(\phi) & 0 \\ \sin(\phi) & \cos(\phi) & 0 \\ 0 & 0 & 1 \end{array} \right],
$$

the mass matrix is

$$
M = \left[ \begin{array}{c c c} m & 0 & 0 \\ 0 & m & 0 \\ 0 & 0 & I \end{array} \right],
$$

where $m$ is the vehicle mass and $I$ represents the rotational inertia. The Coriolis matrix is

$$
C(v) = \left[ \begin{array}{c c c} 0 & -m\omega & 0 \\ m\omega & 0 & 0 \\ 0 & 0 & 0 \end{array} \right],
$$

and the damping force is $d(v) = k_{d} v$. We set $m = 1$ and $k_{d} = 1$. The input, $u$, comes from a Fourier series with fundamental amplitude 1.
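For reference, the ground-truth right-hand side above can be written directly in numpy. The values $m = 1$ and $k_d = 1$ are as stated in the text; the rotational inertia is not specified, so $I = 1$ here is an assumption. A single forward-Euler step from rest under a pure surge force illustrates the model.

```python
import numpy as np

m, Iz, kd = 1.0, 1.0, 1.0   # m and k_d from Appendix A; Iz = 1 is an assumption

def vehicle_rhs(eta, v, u):
    """Right-hand side of (8): eta_dot = J(eta) v,  M v_dot = u - d(v) - C(v) v."""
    phi = eta[2]
    J = np.array([[np.cos(phi), -np.sin(phi), 0.0],
                  [np.sin(phi),  np.cos(phi), 0.0],
                  [0.0,          0.0,         1.0]])
    M = np.diag([m, m, Iz])
    w = v[2]                     # angular rate
    C = np.array([[0.0,   -m * w, 0.0],
                  [m * w,  0.0,   0.0],
                  [0.0,    0.0,   0.0]])
    d = kd * v                   # linear damping
    return J @ v, np.linalg.solve(M, u - d - C @ v)

# One forward-Euler step from rest under a surge force u = (F_x, 0, tau_xy) = (1, 0, 0)
eta, v = np.zeros(3), np.zeros(3)
u = np.array([1.0, 0.0, 0.0])
dt = 0.01
eta_dot, v_dot = vehicle_rhs(eta, v, u)
eta, v = eta + dt * eta_dot, v + dt * v_dot
```

Starting from rest, the Coriolis and damping terms vanish, so the first step only builds up surge velocity while the pose is still unchanged.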
+ +![](images/b25d1462b76a084b3b5a4d119dbac53467e70978f4cf74c11426ab44b781e321.jpg) +(a) $i = 20$ + +![](images/697d908dee4c85e187a98927df4c08f7754d881f94c8d817ddbd29f8db360006.jpg) +(b) $i = 100$ + +![](images/fb8b43fd38f9224fa828093e477c398554e996a6633298fb29ded89bcddb3568.jpg) +(c) $i = 300$ + +![](images/08b2c3b65d8bc858933137eff8e601031bb8dd53ae74d00283bc576f3640d0fe.jpg) +(d) Loss function + +![](images/34595851b3c22eafbde125c66cae73e91e1451b2f174b9abec6c2780ffcdf065.jpg) +(e) $i = 20$ + +![](images/6ce1675f995ec9ab2951984684658b56f85baee5031d5c879aa29e8cb27f2082.jpg) +(f) $i = 100$ + +![](images/0538ac7565ab69c16df71294f3936af5a0a39ffa156d9daf97005c6a87ea3d66.jpg) +(g) $i = 300$ + +![](images/88cb9fd68f88eb761348fa58e0a8a7b0017948c3e38fdae6fafc83a0d0cc7975.jpg) +(h) Loss function +Figure 5: Vehicle dynamics learning in high-data regime. Top row: $\delta$ -SNODE method. Bottom row: fifth-order Dormand-Prince method (Butcher & Wanner, 1996) with backpropagation. First three columns: comparison of true trajectories (red) with the prediction from the surrogate (black) at different iterations of the optimization. Last column: loss function at each iteration. $\delta$ -SNODE has faster convergence than Euler. 
+ +![](images/fa9e6591575d61fbdc1c8481decd11b0c6c9d0c12fe8b2bad830e7d840cbf035.jpg) +(a) $i = 20$ + +![](images/b61c24f25c2470a63d3430619beb9188297f553683e73acc0bb6142e56fffee7.jpg) +(b) $i = 60$ + +![](images/538dfc759e500922898ea16dc3e5a103f54f4b56b6f4035f7ea70f12be61d318.jpg) +(c) $i = 100$ + +![](images/10927f8725b3feddfd0c37eba2d57d27200aeb8595169c50f6c7d1cf7985edd9.jpg) +(d) $i = 100$ + +![](images/49f2ced0a870b6e1d96d4c0caf5e2697152eddcbd3f29f9c779ebfbe3e16c2eb.jpg) +(e) $i = 20$ + +![](images/fd7b29309eeba194bdb8f8a31df25fd1a1fff7bf99e9e38451537e2e6aaec656.jpg) +(f) $i = 100$ + +![](images/350bf3a3c06471e5dcadb154776bce667d3a5c236a3a84b2e135834ed52f5b13.jpg) +(g) $i = 300$ + +![](images/e551f7b352d1eaaaed2c4460f063abd60807356434eee75bb96be120666243a9.jpg) +(h) Loss function + +![](images/3b0967687715204384ff58285391353844b0fc0d0b6ec59d71efc980c4022e65.jpg) +Figure 6: Vehicle dynamics learning in high-data regime. Top row: $\alpha$ -SNODE method. Bottom row: Explicit Euler method with back-propagation. First three columns: comparison of true trajectories (red) with the prediction from the surrogate (black) at different iterations of the optimization. Last column: value of the loss function at each iteration. + +![](images/7be467c88b0d855f52349b3350794b181a92cf77d254c7871634080060bb68f8.jpg) +Figure 7: Vehicle dynamics learning with perturbed input data for the $\alpha$ -SNODE method. Left: comparison of true trajectories (red) with the input data (black). Right: comparison in the time domain for the states of the first sample of the batch. 
+ +# B ADDITIONAL MATERIAL FOR THE MULTI-AGENT EXAMPLE + +The multi-agent simulation consists of $N_{a}$ kinematic vehicles: + +$$ +\dot {\eta_ {i}} = J (\eta_ {i}) v _ {i}, \quad v _ {i} = \tanh \left(w + K _ {c} (\eta_ {i}) + \frac {1}{N _ {a}} \sum_ {j \neq i} ^ {N _ {a}} K _ {o} (\eta_ {i}, \eta_ {j})\right), +$$ + +where $\eta \in \mathbb{R}^{3N_a}$ are the states for each vehicle, namely, the $x_{i},y_{i}$ positions and the orientation $\phi_i$ of vehicle $i$ in the world frame, while $v_{i}\in \mathbb{R}^{2N_{a}}$ are the controls signals, in the form of linear and angular velocities, $\nu_{i},\omega_{i}$ . The kinematics matrix is + +$$ +J (\eta_ {i}) = \left[ \begin{array}{c c} \cos (\phi_ {i}) & 0 \\ \sin (\phi_ {i}) & 0 \\ 0 & 1 \end{array} \right]. +$$ + +The agents' velocities are determined by known arbitrary control and collision avoidance policies, respectively, $K_{c}$ and $K_{o}$ . In particular: + +$$ +K _ {c} (\eta_ {i}) = \left[ \begin{array}{c} k _ {v} \\ k _ {\phi} \delta \phi_ {i} \end{array} \right], K _ {o} (\eta_ {i}, \eta_ {j}) = \left[ \begin{array}{c} - k _ {v o} e ^ {- \frac {d}{l _ {s}}} e ^ {- | \delta \phi_ {i j} + \pi / 2 |} \\ k _ {\phi o} \delta \phi_ {i j} \end{array} \right], +$$ + +where $\delta \phi_{i} = \operatorname{atan2}\left((-y_{i}), (-x_{i})\right) - \phi_{i}$ , and $\delta \phi_{ij} = \operatorname{atan2}\left((y_{i} - y_{j}), (x_{i} - x_{j})\right) - \phi_{i}$ . + +Training configuration. We set + +$$ +k _ {v} = 0. 0 5 \quad k _ {\phi} = 0. 1, \quad k _ {v o} = 0. 0 0 1, \quad k _ {\phi o} = 0. 0 1, \quad l _ {s} = 0. 0 1. +$$ + +The signal $w = w(t)$ is generated by a Fourier series with fundamental amplitude 0.1. + +Test configuration. We change $K_{o}(\eta_{i},\eta_{j})$ to: + +$$ +K _ {\mathrm {t e s t}} (\eta_ {i}, \eta_ {j}) = \left[ \begin{array}{c} - k _ {v o} e ^ {- \frac {d}{l _ {s}}} \\ k _ {\phi o} \delta \phi_ {i j} \end{array} \right], +$$ + +with $k_{v o} = 0.05$ and $k_{\phi o} = 0.1$ . 
We also set $w = 0$.

![](images/0630bf69eb0db36510aff9d280661969aef3791ad6b98294e1f3db8254281b18.jpg)
(a) $\alpha$-SNODE $50\%$ data

![](images/e053771cc7b813a2c9829da039f72c817b4f6a3631f907bc4f034aa343255ba5.jpg)
(b) Euler $50\%$ data

![](images/a8ad3365ad42a6234e6a3727ef94388d5090037a43b44b2b804602b0ed0f40f6.jpg)
(c) $\alpha$-SNODE $20\%$ data

![](images/383a1bcd6e93c5f80e56612055510414f39de382e56cd939e62b247f9240f964.jpg)
(d) Euler $20\%$ data
Figure 8: Multi-agent testing in low-data regime.

# C GRADIENT ANALYSIS

Consider the classic discrete-time RNN:

$$
x(t + 1) = f(x(t), u(t); \theta). \tag{10}
$$

Then, given a loss $\mathcal{L} = \sum_{t = t_0}^{t_f} L(x(t))$, the following gradients are used during training:

$$
\frac{\partial \mathcal{L}}{\partial \theta} = \sum_{t = t_0}^{t_f} \frac{\partial L(x(t))}{\partial x(t)} \frac{\partial x(t)}{\partial \theta}, \tag{11}
$$

where, for any $t$, the chain rule gives:

$$
\frac{\partial x(t + 1)}{\partial \theta} = \frac{\partial f(x(t), u(t); \theta)}{\partial \theta} + J(x(t)) \frac{\partial x(t)}{\partial \theta}, \tag{12}
$$

$$
J(x(t)) = \frac{\partial f(x(t), u(t); \theta)}{\partial x(t)}.
$$

Iteration of (12) is the main principle of backpropagation through time (Goodfellow et al., 2016). A known fact is that iterating (12) behaves like a geometric series. Therefore, depending on the spectral radius of the Jacobian, $\rho(J(x(t)))$, it can result in vanishing ($\rho < 1$) or exploding ($\rho > 1$) gradients (Zilly et al., 2016).

We can now demonstrate that, by avoiding function compositions, our approach is immune to the exploding gradient problem. In particular, our gradients are fully time-independent and their accuracy is not affected by the sequence length. Recall the ODE:

$$
\dot{x}(t) = f(x(t), u(t); \theta). \tag{13}
$$

The following result is obtained:

Theorem 1.
Assume the full Jacobian of $f(x(t), u(t); \theta)$ has a finite spectral radius for all $t$. Then, the norms of the gradients used in Algorithm 1 and Algorithm 2 are also finite for all $t$.

We will prove the theorem for Algorithm 2, since it includes the relevant parts of Algorithm 1.

Proof. Given the Legendre basis functions, $\psi_i(t)$, $i = 0, \dots, p$, define the ODE residual as:

$$
r(x(t), \theta) = \sum_{i = 0}^{p} x_{i} \dot{\psi}_{i}(t) - f\left(\sum_{i = 0}^{p} x_{i} \psi_{i}(t), u(t); \theta\right),
$$

for which the residual loss is $R(x(t), \theta) = \| r(x(t), \theta) \|^2$. Then, Algorithm 2 consists of the concurrent minimization of the relaxed loss function:

$$
\int_{t_0}^{t_1} \gamma L(x(t)) + R(x(t), \theta^{\star}) \, dt,
$$

with respect to the coefficients $x_{i}$ of the Legendre polynomial given $\theta^{\star}$, and of the residual loss:

$$
\int_{t_0}^{t_1} R(x(t), \theta) \, dt,
$$

with respect to $\theta$ given the set of coefficients $x_{i}^{\star}$, $i = 0, \ldots, p$. For the residual loss, the gradients are:

$$
\frac{\partial R(x^{\star}(t), \theta)}{\partial \theta} = -2 r(x^{\star}(t), \theta)^{T} \frac{\partial f\left(\sum_{i = 0}^{p} x_{i}^{\star} \psi_{i}(t), u(t); \theta\right)}{\partial \theta}, \tag{14}
$$

where there is no recursion over the previous values $x(t - \tau)$, since the basis functions $\psi_i(t)$ are given and the points $x_i^\star$ are treated as data. By assumption, the Jacobian of $f(x(t), u; \theta)$ has finite singular values. Hence, by standard matrix norm identities (see Chapter 5 of Horn & Johnson (2012)), the gradient of $R(x(t), \theta)$ with respect to $\theta$ has a finite norm for all $t$.
Similarly, the absence of time recursions in the gradient for the $x_{i}$ update using the relaxed loss also follows trivially from the fact that the coefficients $x_{i}$ are independent variables, that we assume a given $\theta^{\star}$ , and that the basis functions $\psi_{i}(t)$ are fixed. Then, the claim follows again from the assumption on the Jacobian of $f(x,u;\theta)$ .

Note that the result of Theorem 1 cannot easily be achieved when training ODEs using backpropagation through the solver or the adjoint method unless some gradient conditioning, such as clipping or batch normalization (Ioffe & Szegedy, 2015), is performed after applying $f$ . On the other hand, our result relies only on the gradient of $f$ being finite. This is trivial for a shallow $f$ and, for a very deep $f$ , the methods in (Ioffe & Szegedy, 2015; Srivastava et al., 2015; Ciccone et al., 2018; He et al., 2015) can be applied just inside the architecture of $f$ , if necessary. This fundamental difference from standard RNN training allows $f$ to have unrestricted Jacobian magnitudes, which are needed to effectively model instabilities and long-short term dynamics, similar to LSTMs (Greff et al., 2017).
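To make the contrast concrete, the following toy scalar example (our own construction, not the paper's models) iterates the BPTT recursion (12) for a linear system, where the gradient grows or shrinks geometrically with the spectral radius of the Jacobian, and evaluates a recursion-free pointwise residual gradient in the spirit of (14), whose norm does not depend on the horizon:

```python
import math

def bptt_grad(rho, T):
    """Iterate recursion (12) for the scalar linear system
    x(t+1) = rho * x(t) + theta, where df/dtheta = 1 and the 1x1
    Jacobian J = rho has spectral radius |rho|.  Returns dx(T)/dtheta."""
    g = 0.0
    for _ in range(T):
        g = 1.0 + rho * g  # geometric accumulation over time steps
    return g

def residual_grad(theta, t):
    """Gradient in the spirit of (14) for the pointwise residual loss
    R = r^2 with r = x_dot - theta * x, evaluated on the trajectory
    x(t) = exp(t): no recursion over earlier time steps appears."""
    x = math.exp(t)
    r = x - theta * x          # x_dot = exp(t) = x
    return -2.0 * r * x

# Spectral radius > 1: the BPTT gradient explodes with sequence length.
assert abs(bptt_grad(1.5, 50)) > 1e6
# Spectral radius < 1: the geometric series stays bounded (sums to ~2).
assert abs(bptt_grad(0.5, 50)) < 2.1
# The residual gradient is finite regardless of any sequence length.
assert math.isfinite(residual_grad(0.5, 0.3))
```

Doubling `T` changes nothing for `residual_grad`, which only sees the fitted point, while `bptt_grad` compounds the Jacobian once per step.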
\ No newline at end of file diff --git a/snodespectraldiscretizationofneuralodesforsystemidentification/images.zip b/snodespectraldiscretizationofneuralodesforsystemidentification/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..e78900dfed3f3517521c328db453e58de027e664 --- /dev/null +++ b/snodespectraldiscretizationofneuralodesforsystemidentification/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a75cccd514de00a4812657c146bfb5b6d49a29de0fb4c18f867f3b244cd3b0a6 +size 766778 diff --git a/snodespectraldiscretizationofneuralodesforsystemidentification/layout.json b/snodespectraldiscretizationofneuralodesforsystemidentification/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..b10b41da4fade9ef7fb289eeaa02c53a31dde56b --- /dev/null +++ b/snodespectraldiscretizationofneuralodesforsystemidentification/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9afd3b2e6bd277dd2695b8ff8aba77b32f3f0642b5e6da6bb867003b6da87128 +size 673384 diff --git a/snowsubscribingtoknowledgeviachannelpoolingfortransferlifelonglearningofconvolutionalneuralnetworks/dd2644c2-051a-405c-8652-49bc2e209226_content_list.json b/snowsubscribingtoknowledgeviachannelpoolingfortransferlifelonglearningofconvolutionalneuralnetworks/dd2644c2-051a-405c-8652-49bc2e209226_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..4692f77c77d28da574c56652e881d65f00aab543 --- /dev/null +++ b/snowsubscribingtoknowledgeviachannelpoolingfortransferlifelonglearningofconvolutionalneuralnetworks/dd2644c2-051a-405c-8652-49bc2e209226_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:82f09224f062eb016bad435ded353dfe2285c545ea0df308324126cf7fe26876 +size 77136 diff --git a/snowsubscribingtoknowledgeviachannelpoolingfortransferlifelonglearningofconvolutionalneuralnetworks/dd2644c2-051a-405c-8652-49bc2e209226_model.json 
b/snowsubscribingtoknowledgeviachannelpoolingfortransferlifelonglearningofconvolutionalneuralnetworks/dd2644c2-051a-405c-8652-49bc2e209226_model.json new file mode 100644 index 0000000000000000000000000000000000000000..b93dbeefe5907ac8973207bb198a874cd4185dfe --- /dev/null +++ b/snowsubscribingtoknowledgeviachannelpoolingfortransferlifelonglearningofconvolutionalneuralnetworks/dd2644c2-051a-405c-8652-49bc2e209226_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0a2e132064aa624b0f571592bdfe52db8e180199f842a745796c768c76f7fee3 +size 93099 diff --git a/snowsubscribingtoknowledgeviachannelpoolingfortransferlifelonglearningofconvolutionalneuralnetworks/dd2644c2-051a-405c-8652-49bc2e209226_origin.pdf b/snowsubscribingtoknowledgeviachannelpoolingfortransferlifelonglearningofconvolutionalneuralnetworks/dd2644c2-051a-405c-8652-49bc2e209226_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..c75f53e94e1b62ba99eec4a8b7fb6f69ff579c4d --- /dev/null +++ b/snowsubscribingtoknowledgeviachannelpoolingfortransferlifelonglearningofconvolutionalneuralnetworks/dd2644c2-051a-405c-8652-49bc2e209226_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ef7780e55e9565319e469aaf532bf80f70218154d60acf4108135d91ba52217c +size 1040847 diff --git a/snowsubscribingtoknowledgeviachannelpoolingfortransferlifelonglearningofconvolutionalneuralnetworks/full.md b/snowsubscribingtoknowledgeviachannelpoolingfortransferlifelonglearningofconvolutionalneuralnetworks/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a1d2eca660956ff8c29cfcc66bce2ef8dca97c20 --- /dev/null +++ b/snowsubscribingtoknowledgeviachannelpoolingfortransferlifelonglearningofconvolutionalneuralnetworks/full.md @@ -0,0 +1,283 @@ +# SNOW: SUBSCRIBING TO KNOWLEDGE VIA CHANNEL POOLING FOR TRANSFER & LIFE LONG LEARNING OF CONVOLUTIONAL NEURAL NETWORKS + +Chungkuk Yoo1 Bumsoo Kang1,2 Minsik Cho1 + +$^{1}$ IBM, Austin TX, USA 
$^{2}$ KAIST, Daejeon, South Korea

ckyoo@ibm.com bumsoo@nclab.kaist.ac.kr minsikcho@us.ibm.com

# ABSTRACT

SNOW is an efficient learning method to improve training/serving throughput as well as accuracy for transfer and lifelong learning of convolutional neural networks, based on knowledge subscription. SNOW selects the top-K useful intermediate feature maps for a target task from a pre-trained and frozen source model through a novel channel pooling scheme, and utilizes them in the task-specific delta model. The source model is responsible for generating a large number of generic feature maps. Meanwhile, the delta model selectively subscribes to those feature maps and fuses them with its local ones to deliver high accuracy for the target task. Since a source model takes part in both training and serving of all target tasks in an inference-only mode, one source model can serve multiple delta models, enabling significant computation sharing. The sizes of such delta models are a fraction of the source model's, so SNOW also provides model-size efficiency. Our experimental results show that SNOW offers a superior balance between accuracy and training/inference speed for various image classification tasks compared with existing transfer and lifelong learning practices.

# 1 INTRODUCTION

Learning new tasks from old tasks over time, as natural intelligence does, is a key challenge in artificial intelligence, and transfer and lifelong learning are two popular strategies in this direction. Transfer learning delivers a neural network with good predictive power by duplicating and tuning the parameters of a pre-trained source model against a dataset for a new task (Dauphin et al., 2012; Donahue et al., 2014). However, transfer learning from a source task to many target tasks incurs significant overall training and space overhead due to the multiple large models to customize and store (Mudrakarta et al., 2019).
On the other hand, lifelong learning can enable substantial parameter sharing and deliver multiple target tasks with less training time and smaller model size, but may suffer from catastrophic forgetting or lower accuracy (McCloskey & Cohen, 1989). Comprehensive efforts to tackle such challenges have been proposed, but in general at significant computational overhead (Rusu et al., 2016; Guo et al., 2019; Mudrakarta et al., 2019; Li & Hoiem, 2018).

In this work, we propose SNOW for efficient transfer and lifelong learning, which consists of an inference-only source model, multiple task-specific delta models, and channel pooling in-between. Unlike transfer learning, we let multiple delta models subscribe to a source model through a channel pooling layer. SNOW is fundamentally different from the prior arts in transfer and lifelong learning in the following aspects: a) the source model is frozen during both training and inference; b) the top-K intermediate feature maps of the source model are learned for each delta model using channel pooling powered by the Gaussian reparameterization technique (Kingma & Welling, 2014). Moreover, SNOW requires neither persistent training data nor episodic memories (Sprechmann et al., 2018; Li & Hoiem, 2018) to overcome catastrophic forgetting. Table 1 compares SNOW with prior arts in transfer/lifelong learning, and Fig. 1 visualizes the key structural differences between SNOW and transfer/lifelong learning, where the source model for task0 is leveraged to build the target models for task1-3 without persistent datasets, as follows:
| | FO$^{\mathrm{a}}$ | FE$^{\mathrm{b}}$ | FT$^{\mathrm{c}}$ | MP$^{\mathrm{d}}$ | LF$^{\mathrm{e}}$ | PN$^{\mathrm{f}}$ | SNOW |
|---|---|---|---|---|---|---|---|
| Overall accuracy | poor | medium | good | good | medium | good | good |
| Catastrophic forgetting | no | no | no | no | yes | no | no |
| Training efficiency | good | good | poor | poor | poor | poor | good |
| Serving efficiency | good | good | poor | poor | good | poor | good |
| Model-size efficiency | good | good | poor | good | good | poor | good |
$^{\mathrm{a}}$ Final output only: re-training the final output layer of the duplicated source model (Yosinski et al., 2014).
$^{\mathrm{b}}$ Feature extraction: re-training a few output layers of the duplicated source model (Whatmough et al., 2019).
$^{\mathrm{c}}$ Fine-tuning: re-training the entire duplicated source model (Dauphin et al., 2012).
$^{\mathrm{d}}$ Model patch: training only the patch layers inserted into the source model (Mudrakarta et al., 2019).
$^{\mathrm{e}}$ Learning w/o forgetting: regularizing hidden activations without persistent datasets (Li & Hoiem, 2018).
$^{\mathrm{f}}$ Progressive Net: propagating combined feature maps via lateral connections (Rusu et al., 2016).

Table 1: Comparison of various transfer/lifelong schemes and SNOW.

![](images/6cec57c2d90c8cd9324af78ab6fb877628ab7d6376c0e4641d1b482da34869f5.jpg)
(a) Transfer learning: FO, FE, FT

![](images/e1d332a7a6b44347cdbee41598e1c4b905c3d9389560b70da158510adb5ea2ec.jpg)
(b) Lifelong learning: MP, LF, PN

![](images/2fdebe1db09094fd243f9a4a7e9a5c2b7b9f062b2498ac8d86cbeb7ce2367a29.jpg)
(c) SNOW

Figure 1: SNOW vs. transfer/lifelong learning to build task1-3 from task0. The dotted box indicates a model which stays frozen (i.e., no parameter update), skipping back-propagation during training.

- Transfer learning in (a) tunes the parameters in a copy of the source model for a new task. In transfer learning, updating more parameters improves the prediction accuracy but at higher computational overhead (Yosinski et al., 2014). For example, $\text{task}1^a$ can be obtained faster (as the source model is frozen), but may show lower accuracy than $\text{task}1^b$ where the entire model is customized. Therefore, when high accuracy is desired, transfer learning can involve significant training and storage overhead, as separate models are trained for different target tasks.
- Lifelong learning in (b) allows a common structure for all the tasks, increasing the parameter efficiency of the entire system.
Although lifelong learning could be a faster methodology than conventional transfer learning for multi-task delivery, it may still demand quite some training time, as the shared part needs to be trained for all the target tasks, and/or it can suffer from catastrophic forgetting (unless an expensive and complex persistent dataset or episodic memory is available). In (Rusu et al., 2016), a constructive method for lifelong learning is proposed to avoid catastrophic forgetting, but at substantial computational and memory footprint overhead.
- As in (c) for SNOW, the source model serves as a static feature supplier (i.e., no back-propagation) to all the target-specific models or delta models (significantly enhancing training and serving speed). Hence, each delta model generates only a handful of its own local features and obtains all other features for free (greatly improving parameter efficiency). However, instead of supplying all the features to the delta models, SNOW has a channel pooling layer which learns and selects the top-K features for each individual delta model (delivering excellent prediction). In the example of (c), the channel pooling layer independently selects the top-3 intermediate feature maps for task1-3. Unlike conventional lifelong learning, SNOW does not suffer from catastrophic forgetting, as the delta models are independent of each other.
- Cost benefits of the SNOW architecture are threefold. First, training time and memory footprint can be significantly reduced since the source model runs only in an inference mode (thus no back-propagation). Second, the overhead of channel pooling is negligible because it requires only one parameter for each feature map of the source model.
Lastly, a delta model needs to be large enough to produce target-specific features, but can be much smaller than the source model. Fusing those features with the $K$ features from the source model leads to significant parameter count savings compared with existing transfer learning practices.

![](images/ffccc754c1ecbda72d4d40be04acde12f694b48ba326bfa2077775cb734f78c8.jpg)
Figure 2: In the SNOW architecture, the source model produces intermediate features and the delta models selectively subscribe to them via channel pooling in their training and serving processes. The local features from a delta model and the subscribed features are combined through concatenation.

The rest of the paper is organized as follows. Section 2 details SNOW, and experimental results are in Section 3. A review of prior arts is in Section 4, followed by the conclusion in Section 5.

# 2 SNOW ARCHITECTURE

In this section, we present SNOW, which aims at scalable training/serving and high accuracy without catastrophic forgetting, as an efficient framework for transfer and lifelong learning. We describe SNOW from two perspectives: a) how to connect the source model and the delta models efficiently (discussed in Section 2.1) and b) how to effectively subscribe to the intermediate features of the source model from the delta models for higher accuracy (discussed in Section 2.2). The two challenges look orthogonal yet are highly correlated, as over-subscription of features degrades model-size efficiency while under-subscription would harm model accuracy.

# 2.1 ARCHITECTURE AND LATERAL CONNECTIONS

SNOW proposes an architecture where a pre-trained/frozen source model and independently to-be-trained delta models for target tasks are connected via channel pooling, which enables uni-directional knowledge flow, or knowledge subscription, from the source to the delta models, as in Fig. 2.
The goal of uni-directional knowledge subscription is to capture universal features at a one-time computation cost which is amortized across multiple target tasks, assuming the source model has been trained with a large and comprehensive dataset (as in a typical transfer learning scenario). Meanwhile, target task-specific features are computed in the delta models with local feature extractors. In SNOW, therefore, the delta neural network can be of a much smaller size than the source model as long as the source model provides a comprehensive set of feature maps. Since these two kinds of features are fused and further co-processed in the delta models, we can intuitively see that the delta models are encouraged to learn a set of task-specific features complementary to the subscribed features. In case a delta model cannot extract a sufficient amount of features from a source net, the delta model capacity would need to grow in order to generate the target-specific features sufficiently by itself. However, making the delta model too large can degrade parameter efficiency and further result in parameter over-fitting when the target dataset is small, like any other neural network (Heaton, 2008).

In detail, the source model produces intermediate features for a given sample during its feed-forward path (which is the same as inference). The delta model is then trained to use some of these feature maps as inputs for the next layer. The input to the $L_{i}$ layer of the delta model is given as the concatenation of the subscribed features from the source model and the output features of the $L_{i-1}$ layer of the delta model. In our experiments (see Section 3 for details), we used a down-sized version of the source model for the delta models, but there is no restriction on how to design a delta model because concatenation is more flexible than feature superposition.
More specifically, we used ResNet50 (He et al., 2016) trained on ImageNet (Deng et al., 2009) as a source model, and used the same ResNet50 architecture as a delta model for a new task after reducing its channel size by a factor of eight. Note that such a lateral connection has been proposed in existing works (Feichtenhofer et al., 2018), but the context is vastly different in the sense that it is mainly to accelerate the inference of a single task during neural network architecture design, while SNOW is for efficient transfer and lifelong learning.

In our training process for SNOW, all parameters in the source model are frozen while the parameters in the delta models are updated from random initialization. Therefore, we do not need to compute back-propagation for the source model parameters, greatly improving training memory footprint (i.e., nothing to keep for gradient computation) and throughput. Moreover, the delta models have far fewer parameters than the source model. As a result, SNOW training offers much shorter training time and demands fewer computing resources compared with other conventional transfer/lifelong learning techniques, e.g., FT, MP, LF, PN (see Table 1). Furthermore, we also minimize inference time for multiple tasks by sharing the intermediate features of a single source model across all the target tasks.

The serving efficiency of SNOW is maximized when all the target tasks take the same input. By sharing a feature map from a source network across more tasks, the one-time feature map generation overhead can be amortized better, increasing the overall computational efficiency of the SNOW architecture. Such a single-input-multi-output scenario is highly realistic in practice: for instance, multiple neural networks take in the feed from a single camera, microphone, and sensors (Mathur et al., 2017).

![](images/4792a2a563bf5f69f2848bc83e2aa27b94ed54f7f2e9e330068896f4ebaf0dde.jpg)
Figure 3: Forward and backward propagation paths for channel pooling.
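The concatenation-based fusion described in this section can be sketched as follows (a minimal numpy illustration; shapes and names are our own, not from a released implementation):

```python
import numpy as np

def fuse(local_feats, subscribed_feats):
    """Concatenate the delta model's local feature maps with the top-K
    feature maps subscribed from the frozen source model, channel-wise.
    Both tensors have shape (channels, height, width)."""
    assert local_feats.shape[1:] == subscribed_feats.shape[1:]
    return np.concatenate([local_feats, subscribed_feats], axis=0)

# Illustrative sizes: a source layer emits N = 64 maps, channel pooling
# keeps K = N // 8 = 8 of them, and the delta layer adds 8 local maps
# of the same spatial size; the fused input has 16 channels.
local = np.random.randn(8, 14, 14)
subscribed = np.random.randn(8, 14, 14)
fused = fuse(local, subscribed)
assert fused.shape == (16, 14, 14)
```

The concatenated tensor is what the next delta layer $L_i$ consumes; because concatenation only stacks channels, the local and subscribed maps need not have the same channel count, only the same spatial size.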
# 2.2 SELECTIVE FEATURE SUBSCRIPTION BY CHANNEL POOLING

The purpose of channel pooling is to find a small subset of intermediate features from a source model such that the prediction accuracy of the delta model is maximized. Therefore, each delta model is accompanied by one channel pooling layer. In order to find the best subset, each channel pooling layer has a learnable scalar for each source feature map, and SNOW selects the top-K channels in a stochastic way, using normal distribution reparameterization, to limit the computation and model-size overhead. During inference, SNOW directly feeds the best K feature maps based on the training result, without the stochastic step.

Fig. 3 shows the forward and backward paths inside a channel pooling layer. In the $i$ -th forward path, we first compute the following for any channel $x$ : $w_{x} = m_{x} + \mathcal{N}(0,\sigma^{2})$ . Then, we select the top-3 weights $(w_0,w_1,w_3)$ (in white) and perform channel-wise multiplication between the selected weights and the corresponding channels at the same indices, so that the results can be consumed by the corresponding delta model. The purpose of the stochasticity is to give every channel enough chance to be selected, but we keep $\sigma$ small so that the channels remain easily separable. For example, even if $m_2 > m_3$ , $w_2$ is ignored at this iteration due to stochasticity.

In the $i$ -th backward path, the gradients for all the mean variables are computed except $m_2$ , as $w_2$ did not participate in the forward execution. Thus, the computed gradients will update $(m_0, m_1, m_3)$ accordingly, and affect the top-K selection in the $(i + 1)$ -th forward path. Assume that the gradient for $m_3$ is -0.2, as $c_3$ hurts the specific target task; then $m_3$ becomes 0.0 (now the lowest).

In the subsequent $(i + 1)$ -th forward path, $(w_0, w_1, w_2, w_3)$ are computed in the same way.
This time, the difference between $m_3$ and $m_2$ is big enough to keep $w_3$ from being selected. If $c_2$ is indeed helpful to the target task, then $m_2$ will get bigger, which would in turn help $w_2$ be selected again in the subsequent iterations for further tuning. In this way, while the less relevant feature maps (for a target task) are ignored with high probability over time due to their decreased channel pooling weights, the critical channels will not only survive the top-K filtering but also be thoroughly tuned to minimize the training loss. Note that we remove the stochastic part, $\mathcal{N}(0, \sigma^2)$ , after training is over, so that we directly select the top-K trained weights for an inference task.

Determining $K$ (i.e., the number of feature maps to subscribe) has a critical impact on both size and prediction accuracy of the target models. If a delta model undersubscribes to the intermediate features, it would not have sufficient information for higher prediction power. If a delta model oversubscribes, it would require more neurons and computational resources to process larger feature maps, not to mention that it could introduce unwanted noise to the delta model.
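The stochastic top-K selection just described can be sketched as follows (a simplified numpy rendition under our own naming; the paper's layer is a PyTorch module that also backpropagates through the selected weights):

```python
import numpy as np

def channel_pool(features, means, k, sigma2=1e-3, rng=None, train=True):
    """Select the top-k source channels by perturbed pooling weights.

    features: (N, h, w) source feature maps; means: (N,) learnable
    pooling weights m_x.  During training each weight is perturbed as
    w_x = m_x + N(0, sigma2) so every channel keeps some chance of
    being picked; at inference the stochastic part is dropped.
    Returns (indices, pooled), where pooled = w_x * c_x for selected x.
    """
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.normal(0.0, np.sqrt(sigma2), means.shape) if train else 0.0
    w = means + noise
    idx = np.argsort(w)[-k:]                     # top-k by (perturbed) weight
    pooled = w[idx, None, None] * features[idx]  # channel-wise scalar mult.
    return idx, pooled

feats = np.random.randn(16, 4, 4)
means = np.linspace(0.0, 1.0, 16)
idx, pooled = channel_pool(feats, means, k=2, train=False)
assert pooled.shape == (2, 4, 4)
assert set(int(i) for i in idx) == {14, 15}  # inference: largest means win
```

Note the overhead matches the counts given below: one scalar per source channel ($N$ parameters) and $K \times h \times w$ multiplications per forward pass.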
| | 1×1 conv | Channel-wise attention | Channel pooling |
|---|---|---|---|
| Model-size overhead | $O(N \times K)$ | $O(N^2)$ | $O(N)$ |
| Computation pattern | convolution | matrix mult. | ch.-wise scalar mult. |
There are naive alternative approaches to selectively subscribe to features, such as $1 \times 1$ convolution (Rusu et al., 2016) or channel-wise attention (Woo et al., 2018), as in Table 2, but they require more parameters/computations and/or offer worse accuracy than the proposed channel pooling (see Section 3.4 for a comparison). In detail, when downsizing a feature map of $N \times h \times w$ size to $K \times h \times w$ , a channel pooling layer needs $N$ parameters with $K \times h \times w$ multiplications, while $1 \times 1$ convolution requires $N \times K$ parameters with $N \times K \times h \times w$ multiplications. The channel pooling operation may look similar to channel-wise attention (Woo et al., 2018), but the attention block keeps the channel size the same as the number of given feature maps, while the channel pooling layer prunes irrelevant features out to obtain a smaller delta model. Finally, the size of the channel attention module is $O(N^2)$ , which is bigger than that of channel pooling, $O(N)$ , and channel-wise attention requires expensive matrix multiplication, whereas channel pooling only needs channel-wise scalar multiplication.

# 3 EXPERIMENTS

We implemented SNOW in PyTorch 1.1 (Paszke et al., 2017), and compared its training and serving performance with prior arts in terms of accuracy and computational overhead in the following setup:

- Source model: a pre-trained ResNet50 with ImageNet available in torchvision.
- Workload: five classification tasks for Food-101 (Bossard et al., 2014), Describable texture dataset (DTD) (Cimpoi et al., 2014), Stanford 40 actions (Yao et al., 2011), Car dataset (Krause et al., 2013), and Caltech-UCSD birds-200-2011 (CUB) (Wah et al., 2011) (see Table 3 for details) without any persistent dataset storage.
- Comparison: SNOW and five other transfer/lifelong schemes FO, FE, FT, MP, PN in Table 1 (implemented per the corresponding reference in the table note) for training the models.
Table 2: Overhead of each channel subscription approach for a feature map of $N \times h \times w$ size.
| Dataset (#class) | Food (101) | DTD (47) | Action (40) | Car (196) | CUB (200) |
|---|---|---|---|---|---|
| train/test | 75,750/25,250 | 3,760/1,880 | 4,000/5,532 | 8,144/8,041 | 5,994/5,794 |
Table 3: Datasets for experiments.

We could not compare with LF because LF exposed catastrophic forgetting on the tested datasets, preventing a fair comparison (see Appendix for more discussion and LF results).

- Environment: IBM POWER9 @ 3.10GHz (44 cores) machine with Red Hat Enterprise Linux Server 7.6, CUDA10.1, CUDA7.5 and 4 Tesla V100 GPUs (16GB on each).

# 3.1 TRAINING PERFORMANCE

In this section, we report the training performance of all the experimented schemes in terms of training throughput, GPU runtime memory footprint, and Top-1 test accuracy. We trained each algorithm on a single GPU but with a different batch size in order to fit each algorithm's memory footprint into a single GPU's memory capacity. While we set the initial learning rate to 0.1 for all other schemes under comparison, we applied 1.0 to SNOW, as SNOW has channel pooling parts that need training from scratch. We decayed the initial learning rate by a factor of 0.1 every 30 epochs. For the exact batch sizes and initial learning rates, please refer to Figure 4. We used an SGD optimizer with a momentum of 0.9 and a weight decay factor of $10^{-4}$ . We set $K = N/8$ for $N$ intermediate channels from ResNet50 and $\sigma^2 = 0.001$ , except for the Action dataset (Yao et al., 2011) ( $\sigma^2 = 10^{-8}$ ). For each algorithm in Table 1, training is done sequentially over all the datasets to reflect the nature of transfer and lifelong learning.

SNOW shows the best balance between training throughput and prediction accuracy over all the tested datasets, as shown in Figure 4. Overall, SNOW-256 delivered comparable accuracy with up to $2.4\mathrm{x}$ higher throughput than FT or MP and $5.2\mathrm{x}$ more than PN. This implies that SNOW enables transfer or lifelong learning to be completed significantly faster under the same computing resources, yet preserves its quality.
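The step-decay schedule described above amounts to the following helper (the function name is ours; the rates follow the text):

```python
def step_decay_lr(initial_lr, epoch, decay=0.1, step=30):
    """Learning rate after `epoch` epochs: the initial rate is
    multiplied by `decay` once per `step` epochs."""
    return initial_lr * decay ** (epoch // step)

# SNOW starts at 1.0 (its channel pooling is trained from scratch),
# while the other schemes start at 0.1.
assert step_decay_lr(1.0, 0) == 1.0
assert step_decay_lr(1.0, 30) == 0.1
assert abs(step_decay_lr(0.1, 60) - 0.001) < 1e-12
```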
+ +![](images/9250963aae8d6c4c72b37fbb56bd32bb20b6f7a9880d74cf5c474d46d0be48ce.jpg) + +![](images/072fca4d29787cf6631e25cf12bd87e7c45028caf557b4737b714f0ae999c02e.jpg) + +![](images/10fb626fd42e81dde9cc3dd39bbf9acf6920d9999a17d9583dd17b3482a1301f.jpg) + +![](images/6683eaeefc4f3ba3558f27b5526c35a8b4976a468a163bed48d7b65a04a7ab3d.jpg) +Figure 4: Training performance comparison on five different datasets on a single GPU. + +![](images/1e40a542a837028a4ecd0ea89d42150a2687d2c6b27413639a1fbf3d19377448.jpg) + +![](images/295010fd1fbbc8b97fca1218987fc1412f1e55a141ad7f94b4fdf02ab1eba966.jpg) + +
| hyper-parameters | FO | FE | FT | MP | PN | SNOW-128 | SNOW-256 |
|---|---|---|---|---|---|---|---|
| batch size | 256 | 256 | 128 | 96 | 96 | 128 | 256 |
| initial learning rate | 0.1 | 0.1 | 0.05 | 0.0375 | 0.0375 | 0.5 | 1.0 |
The reason why FT and MP show poor throughput compared with other light-weight approaches is that nearly full back-propagation is required in both cases. In particular, MP is slower than FT (despite the majority of its parameters being frozen) because it has more layers to deal with, yet back-propagation needs to reach the very first patched layer in the model (thus, the gradients for the intermediate features must still be computed along the way). Meanwhile, SNOW performs back-propagation only for the delta models, which are substantially smaller than the source model, resulting in a significant reduction in training time and computing overhead, e.g., memory footprint for a given batch size (note that the circle size in Figure 4 represents the memory footprint size). Although PN does not require back-propagation on the source model either, its large target model size and heavy lateral connections make it the slowest scheme (Chaudhry et al., 2019).

FO and FE show marginally higher throughput than SNOW-256, but we noticed significant accuracy drops of up to $31.8\%$ with FO and $9.7\%$ with FE from SNOW-256's accuracy. Both approaches look similar to SNOW in attaching a small model and training it while the source model is frozen. However, instead of subscribing to intermediate features, they use only a single feature map of the third residual block (FE) or the last residual block (FO) of the source model, which prevents them from exploiting the intermediate features. From this comparison, we can see that feature subscription actually contributes to higher accuracy for a new task, as we expected in Section 2, at only a small throughput overhead thanks to cost-efficient channel pooling.

Figure 4 (bottom right) shows that SNOW as well as all other techniques converge around 60 epochs.
Since the end-to-end training time is proportional to the number of epochs to run and inversely proportional to the throughput, SNOW is much faster than FT, MP, and PN in terms of time-to-convergence, because of its $2\mathrm{x}\sim 5\mathrm{x}$ higher throughput for the same number of epochs.

# 3.2 HYPER-PARAMETER SENSITIVITY

In this section, we discuss the hyper-parameter sensitivity of SNOW in terms of model accuracy on the Car dataset. We performed additional training on the Car dataset while varying $K$ , the delta model size, and $\sigma^2$ , one at a time, to understand their impacts on accuracy. We summarize the outcomes in Table 4 (where $M$ is the source model size) and make the following observations, with practical strategies for hyper-parameter tuning:
| Parameter changes | $K$: N/4 | $K$: N/8 | $K$: N/16 | delta size: M/4 | delta size: M/8 | delta size: M/16 | $\sigma^2$: 1e-1 | $\sigma^2$: 1e-3 | $\sigma^2$: 1e-5 | $\sigma^2$: 1e-7 |
|---|---|---|---|---|---|---|---|---|---|---|
| Top-1 accuracy | 83.10 | 83.79 | 83.39 | 79.02 | 83.79 | 80.36 | 81.27 | 83.79 | 83.45 | 83.35 |
| Top-5 accuracy | 96.93 | 96.94 | 96.93 | 96.00 | 96.94 | 95.93 | 95.85 | 96.94 | 96.99 | 96.88 |
Table 4: Accuracy changes over varying hyper-parameter settings.

- The accuracy is somewhat sensitive to the $K$ value. As discussed in Section 2.2, both under-subscription and over-subscription can degrade predictive power: under-subscription means insufficient feature sets for a delta model, while over-subscription may introduce unwanted noise to a delta model. Since over-subscription also incurs model-size overhead, it is desirable to start with small $K$ values and increase $K$ until the peak accuracy is reached.
- The results show that the performance is sensitive to the delta model size, and clearly expose the existence of an ideal size. The same rule of thumb in neural network architecture design applies here too: having too few target-specific features hurts accuracy, but having too many (or too big a delta net) may lead to over-fitting.
- Too large or too small a $\sigma^2$ can lead to sub-optimal predictive power. Therefore, it is important to develop a method to find good sigma values. We are currently researching the idea of examining the early weight distribution (i.e., over a few epochs) and then determining the sigma value to balance exploration and stabilization.

# 3.3 SERVING PERFORMANCE

We measured the computation costs of an inference process by serving all six tasks concurrently (i.e., classifiers for ImageNet and the other five in Table 3) against the same input (Zamir et al., 2018; Mathur et al., 2017).
| Serving approach: ImageNet, Food, DTD, Action, Cars, CUB | FO | FE | FT | MP | PN | SNOW |
|---|---|---|---|---|---|---|
| Total model parameters (M) | 26.8 | 28.3 | 144.3 | 26.9 | 257.4 | 30.8 |
| Throughput (images/sec) | 7060.8 | 5626.6 | 632.9 | 304.3 | 169.5 | 4731.0 |
Table 5: Computation costs for inference for the ImageNet (source model), Food, DTD, Action, Car, and CUB tasks. We run all tasks together on a single Tesla V100.

Overall inference results are summarized in Table 5 and indicate that SNOW has significantly less overhead than other high-accuracy approaches such as FT, MP, and PN in terms of total model size and inference throughput.

- FO is the smallest and shows the best inference throughput due to its architectural simplicity, but note that FO has the worst accuracy on all five datasets.
- SNOW yields comparable inference throughput to FE (yet shows better accuracy, as in Fig. 4) with slightly more parameters.
- FT has the 2nd biggest penalty in terms of parameter count, as expected, because each task needs a full set of parameters tuned for that specific task.
- MP does not have much overhead in terms of total model size, but suffers from poor inference performance. This penalty comes from the expensive scatter-gather computation patterns required to re-use the source model parameters, in addition to the increased number of layers on the critical execution path.
- PN has the biggest penalty in terms of total number of parameters, as expected, which leads to poor inference throughput.

In short, SNOW has critical advantages in a multi-task setup, as it only marginally increases the total model size by sharing the source model among all the individual tasks and keeps the inference performance intact, not to mention that it delivers high-quality accuracy.

# 3.4 CHANNEL POOLING EFFECT

In this section, we first examine how the top-K selection changes during the course of training, and then compare the channel pooling scheme in SNOW with other popular dimensionality reduction and feature amplification methods in deep learning.
![](images/071a743e83b16022a0e2e469e16cd4092fb5bae594e9c304f620d02f393a74dd.jpg)
Figure 5: Selected channel index distribution for the Car dataset during training.

Figure 5 shows how the top-K features selected by the proposed channel pooling for the Car dataset change throughout training, where the X-axis shows the channel indices, the Y-axis shows the training progress (in iterations), the solid vertical lines mark the selection of the corresponding channels, and the horizontal dotted lines indicate the start of each new epoch. Overall, we found that the top-K feature selections change frequently at the beginning of training, for exploration, but stabilize after additional epochs, as discussed in Section 2.2.

We applied these alternative methods for feature subscription in SNOW and compared their performance in terms of parameter overhead and test accuracy, as summarized in Table 6, which clearly highlights that the proposed channel pooling has the minimum overhead while showing the overall best performance. As we discussed in Section 2.2, our channel pooling requires the least number of parameters for selective
| Feature subscription method | Param. overhead (M) | Food | DTD | Action | Car | CUB |
| --- | --- | --- | --- | --- | --- | --- |
| 1 × 1 Convolution | 14.0 | 82.82 | 74.73 | 78.29 | 77.07 | 73.20 |
| Channel-wise attention | 35.0 | 84.05 | 74.06 | 79.50 | 82.94 | 75.51 |
| Subscribing to all | 17.5 | 82.80 | 73.78 | 78.85 | 76.47 | 72.80 |
| Channel pooling | 5.5 | 84.06 | 72.37 | 78.48 | 83.79 | 75.81 |
Table 6: Parameter overhead and Top-1 accuracy (%) of SNOW for each feature subscription method.

feature subscription among other alternative methods, including $1 \times 1$ convolution, channel-wise attention (Woo et al., 2018), and subscribing to all features. Notably, the channel pooling approach shows the highest accuracy for most of the datasets (Food, Car, CUB), and still achieves comparable accuracy for the Action dataset.

Although the $1 \times 1$ convolution uses more parameters than the channel pooling block, its accuracy on the Car dataset is $6.72\%$ lower than channel pooling's, while slightly higher than that of subscribing to all. This result indicates that the $1 \times 1$ convolution learns which channels are important for a new task, but its performance is worse than that of channel pooling in all aspects, including accuracy, the number of required parameters, and computational overhead.

The channel-wise attention module (Woo et al., 2018) shows good accuracy for most of the datasets; however, its computation cost for inference is high. For the multi-task setup, its inference throughput is 1101.8 images/sec and its model parameter overhead is 35.0M, while channel pooling achieves 4731.0 images/sec throughput with only 5.5M parameter overhead. Our channel pooling thus delivers $4.3\times$ higher throughput and an $84.2\%$ reduction in parameter overhead for multi-task inference compared to channel-wise attention.

# 4 RELATED WORK

A large body of research has been done to address such challenges by exploring the various trade-offs between training/serving/parameter efficiency and accuracy. On the transfer learning side, the drawbacks in popular schemes such as feature extraction (Donahue et al., 2014) or fine-tuning (Dauphin et al., 2012; Cui et al., 2018) are addressed in various efforts. Mudrakarta et al.
(2019) enhanced the parameter efficiency by inserting multiple patch layers into a frozen source model to capture the task-specific features, at the cost of increased training and serving overhead. Knowledge flow (Liu et al., 2019) proposed a method that trains a learner from multiple experts by gradually distilling the layer-wise features from the experts into the student, at the cost of a lengthened critical execution path for both training and inference. FixyNN (Whatmough et al., 2019) revisited the conventional feature-extraction-based transfer learning approach from the computation perspective by re-designing and fine-tuning the last few top layers, which could degrade the model accuracy substantially unless a sufficient number of top layers is optimized. Recently, Guo et al. (2019) improved the accuracy of transfer learning by adaptively selecting feature extraction blocks per sample, at the cost of an extra policy network and low training throughput.

On the lifelong learning side, catastrophic forgetting has been researched from various angles. Weight regularizations during re-training to retain the old-task performance were studied in Lee et al. (2016); Weinshall & Cohen (2018). Li & Hoiem (2018) mitigated catastrophic forgetting without old datasets by regularizing the final hidden activations from all the old tasks. Progressive Neural Networks (Rusu et al., 2016), proposed in reinforcement learning, propagate the feature maps through a columnized set of models for different tasks, which greatly increases the total model size, not to mention the slower inference performance due to the complex per-layer inter-column adapters and expensive lateral connections (Guo et al., 2019; Chaudhry et al., 2019).

# 5 CONCLUSION

In this paper, we propose SNOW, an innovative way of transfer and lifelong learning that subscribes to the knowledge of a source model for a new task through channel pooling.
The SNOW architecture enables the delta model to subscribe to the intermediate features from each layer of the source model in a uni-directional manner. This way, we can effectively train a model for a new task by updating the parameters of the delta model only, while keeping the source model untouched. SNOW offers excellent training/serving speed by sharing the same features of the source model across all the delta models, yet achieves competitive prediction accuracy through the design of a simple yet effective channel pooling block. In our comprehensive experiments with multiple datasets, SNOW shows the best balance between computing performance and prediction accuracy across all the datasets among the state-of-the-art transfer and lifelong learning methods, during both training and serving.

# REFERENCES

Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101 - mining discriminative components with random forests. In Proceedings of the European Conference on Computer Vision (ECCV), 2014.
Arslan Chaudhry, Marc'Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. Efficient lifelong learning with A-GEM. In ICLR, 2019.
M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi. Describing textures in the wild. In Proceedings of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2014.
Yin Cui, Yang Song, Chen Sun, Andrew Howard, and Serge J. Belongie. Large scale fine-grained categorization and domain-specific transfer learning. CoRR, abs/1806.06193, 2018.
Grégoire Mesnil, Yann Dauphin, Xavier Glorot, Salah Rifai, Yoshua Bengio, Ian Goodfellow, Erick Lavoie, Xavier Muller, Guillaume Desjardins, David Warde-Farley, Pascal Vincent, Aaron Courville, and James Bergstra. Unsupervised and transfer learning challenge: a deep learning approach. In Proceedings of ICML Workshop on Unsupervised and Transfer Learning, Proceedings of Machine Learning Research, pp. 97-110, 2012.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei.
ImageNet: A Large-Scale Hierarchical Image Database. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In Proceedings of the 31st International Conference on Machine Learning - Volume 32, ICML'14, 2014.
Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. Slowfast networks for video recognition. arXiv preprint arXiv:1812.03982, 2018.
Yunhui Guo, Honghui Shi, Abhishek Kumar, Kristen Grauman, Tajana Rosing, and Rogério Schmidt Feris. SpotTune: Transfer learning through adaptive fine-tuning. In CVPR, 2019.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016.
Jeff Heaton. Introduction to Neural Networks for Java, 2nd Edition. 2008.
Diederik Kingma and Max Welling. Auto-encoding variational bayes. In ICLR, 2014.
Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In 4th International IEEE Workshop on 3D Representation and Recognition (3dRR), Sydney, Australia, 2013.
Giwoong Lee, Eunho Yang, and Sung Ju Hwang. Asymmetric Multi-task Learning Based on Task Relatedness and Loss. In ICML, 2016.
Zhizhong Li and Derek Hoiem. Learning without Forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40:2935-2947, 2018.
Iou-Jen Liu, Jian Peng, and Alexander G. Schwing. Knowledge Flow: Improve Upon Your Teachers. In ICLR, 2019.
Akhil Mathur, Nicholas D Lane, Sourav Bhattacharya, Aidan Boran, Claudio Forlivesi, and Fahim Kawsar. Deepeye: Resource efficient local execution of multiple deep vision models using wearable commodity hardware.
In Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services, pp. 68-81. ACM, 2017. +Michael McCloskey and Neal J. Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. volume 24 of Psychology of Learning and Motivation, pp. 109 - 165. Academic Press, 1989. +Pramod Kaushik Mudrakarta, Mark Sandler, Andrey Zhmoginov, and Andrew G. Howard. K For The Price Of 1: Parameter Efficient Multi-task And Transfer Learning. In ICLR, 2019. +Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. In NIPS Autodiff Workshop, 2017. +Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. CoRR, 2016. +Pablo Sprechmann, Siddhant M. Jayakumar, Jack W. Rae, Alexander Pritzel, Adrià Puigdomènech Badia, Benigno Uria, Oriol Vinyals, Demis Hassabis, Razvan Pascanu, and Charles Blundell. Memory-based Parameter Adaptation. In ICLR, 2018. +C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011. +Daphna Weinshall and Gad Cohen. Curriculum learning by transfer learning: Theory and experiments with deep networks. In ICML, 2018. +Paul N. Whatmough, Chuteng Zhou, Patrick Hansen, Shreyas K. Venkataramanaiah, Jae-sun Seo, and Matthew Mattina. Fixynn: Efficient hardware for mobile computer vision via transfer learning. In SysML, 2019. +Sanghyun Woo, Jongchan Park, Joon-Young Lee, and In So Kweon. Cbam: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 3-19, 2018. +Bangpeng Yao, Xiaoye Jiang, Aditya Khosla, Andy Lai Lin, Leonidas Guibas, and Li Fei-Fei. 
Human action recognition by learning bases of action attributes and parts. In 2011 International Conference on Computer Vision, pp. 1331-1338. IEEE, 2011.
Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In NIPS, 2014.
Amir R. Zamir, Alexander Sax, William Shen, Leonidas J. Guibas, Jitendra Malik, and Silvio Savarese. Taskonomy: Disentangling task transfer learning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.

# A APPENDIX

# A.1 CHANNEL POOLING

We present the channel pooling block in Section 2.2 with Figure 3. To aid understanding, we provide the details of how the channel pooling block works at an operation level as follows.

Algorithm 1 Channel Pooling Forward Function
```
w : channel pooling weight (N × 1 vector)
x : feature map from a source model (N channels)
N → M pooling

procedure ChannelPoolingForward(x)
    if training then
        rand_v = w + N(0, σ²)
        _, idx = topK(rand_v, M)
        selected_weight = w[idx]
    else
        selected_weight, idx = topK(w, M)
    end if
    x = x[idx]
    x = x ⊗ selected_weight    // channel-wise scalar multiplication
    x = BatchNorm(x)           // optional
    x = ReLU(x)                // optional
    return x
end procedure
```

# A.2 EXPERIMENT DETAILS OF TRAINING PERFORMANCE

Figure 4 shows training performance in terms of accuracy, throughput, and memory footprint for each dataset and training method. In the following tables, we report the precise numbers behind the graphs in Figure 4, as well as validation accuracy curves during training for all datasets.
| Training approach | Food | DTD | Action | Car | CUB |
| --- | --- | --- | --- | --- | --- |
| FO | 70.76 (90.55) | 70.81 (91.65) | 76.41 (95.58) | 51.96 (78.68) | 65.96 (89.90) |
| FE | 82.24 (95.84) | 71.45 (92.40) | 73.97 (93.64) | 74.05 (94.12) | 67.16 (91.27) |
| FT | 87.63 (97.32) | 61.56 (85.01) | 75.40 (93.02) | 90.26 (98.62) | 77.93 (93.37) |
| MP | 81.97 (95.68) | 72.41 (92.88) | 81.51 (95.44) | 81.42 (96.00) | 77.22 (94.24) |
| PN | 86.19 (96.97) | 74.32 (92.34) | 79.99 (94.41) | 86.26 (97.71) | 76.51 (93.63) |
| SNOW-256 | 84.06 (96.47) | 72.37 (92.02) | 78.48 (95.04) | 83.79 (96.94) | 75.81 (94.05) |
| SNOW-128 | 84.20 (96.52) | 73.98 (93.06) | 79.28 (95.25) | 83.69 (97.04) | 75.51 (93.87) |

Table 7: Top-1 (top-5) test accuracy (%) of each approach on the five datasets.
Each cell reports throughput (images/sec) / memory usage (GB):

| Training approach | Food | DTD | Action | Car | CUB |
| --- | --- | --- | --- | --- | --- |
| FO | 1076.08 / 4.12 | 1089.83 / 4.40 | 1106.31 / 4.38 | 1085.67 / 4.24 | 1102.50 / 4.63 |
| FE | 1205.84 / 4.05 | 1205.84 / 4.31 | 1230.77 / 4.02 | 1209.26 / 4.17 | 1219.63 / 4.31 |
| FT | 356.84 / 13.74 | 357.94 / 13.74 | 359.13 / 13.83 | 358.74 / 13.75 | 358.24 / 14.66 |
| MP | 391.36 / 14.57 | 393.60 / 14.68 | 394.25 / 14.69 | 388.66 / 14.67 | 392.96 / 13.72 |
| PN | 189.69 / 15.29 | 189.09 / 15.60 | 189.50 / 15.29 | 185.01 / 15.72 | 187.24 / 15.56 |
| SNOW-256 | 947.80 / 4.40 | 959.16 / 4.40 | 949.55 / 4.26 | 965.31 / 4.25 | 970.06 / 4.27 |
| SNOW-128 | 871.34 / 2.92 | 873.12 / 2.87 | 868.97 / 2.80 | 885.20 / 2.87 | 880.94 / 2.94 |
Table 8: Computational performance for training on a single GPU.

![](images/a9fbc3f5921e775c4545d2e6e877eb3ad993344d14e8206e72111477e9427123.jpg)

![](images/41d2bf8af30828b9f96c5051b0666794bfdaaab8c5c22f680aa753b06f6b0ff5.jpg)

![](images/accfa2a7d1013d2eeeb5d2589801c06f97f1db6d736edcee7b33ec00a304b3f0.jpg)

![](images/9699d9e8a0b7bfc76837322bf4a7594af23ac7dc3963ba7db191a51375913a57.jpg)

![](images/da5c52a8aeb632250f33486a81a37583222d4dfa4746d9e018356dfa0468d91e.jpg)
Figure 6: Validation accuracy curves during training for each dataset.

# A.3 LEARNING WITHOUT FORGETTING

In our main experiment, we do not include the results from SNOW with Learning without Forgetting (LF) (Li & Hoiem, 2018) due to the catastrophic forgetting issue shown in Table 9. Simply put, the accuracies from LF on the experimented datasets are far below those of the other competing algorithms, making it hard to put them on the same charts. We explored a few training sequences to get the best outcome for LF and report the resulting accuracies in Table 9. We used a batch size of 128, a learning rate of 0.005, and $\lambda_{o} = 1$ as suggested in (Li & Hoiem, 2018). As can be seen in the result for training order A, once the Action dataset is used, the accuracy on the Car dataset drops significantly, as the two are very heterogeneous (in terms of target domain and dataset size). Yet, adding CUB has a somewhat limited impact on Action. Also, we observe that the accuracy of LF depends on the training order: the accuracies from training orders A and B differ significantly before the Food data is added. Overall, the combination of datasets in our experiment seems to be a very challenging scenario for LF in terms of catastrophic forgetting.
Training Order A (starting from ImageNet):

| Task | → Car | → Action | → CUB | → DTD | → Food |
| --- | --- | --- | --- | --- | --- |
| Car | 81.87 | 28.40 | 22.78 | 22.44 | 15.31 |
| Action | - | 77.08 | 68.87 | 62.60 | 56.36 |
| CUB | - | - | 70.94 | 46.12 | 32.91 |
| DTD | - | - | - | 71.93 | 39.29 |
| Food | - | - | - | - | 79.71 |
+ +
Training Order B (starting from ImageNet):

| Task | → Action | → CUB | → DTD | → Car | → Food |
| --- | --- | --- | --- | --- | --- |
| Action | 80.63 | 72.31 | 66.74 | 55.55 | 7.36 |
| CUB | - | 75.91 | 54.33 | 52.19 | 36.00 |
| DTD | - | - | 73.26 | 43.27 | 40.78 |
| Car | - | - | - | 73.77 | 16.33 |
| Food | - | - | - | - | 79.10 |
Table 9: Top-1 accuracy of Learning without Forgetting (LF)

LF overall showed a training throughput of 272.69 images/sec with a 13.99GB memory footprint. The reason LF is slower than FT is that LF needs to perform extra forward passes to compute the loss for the old tasks.
\ No newline at end of file
diff --git a/snowsubscribingtoknowledgeviachannelpoolingfortransferlifelonglearningofconvolutionalneuralnetworks/images.zip b/snowsubscribingtoknowledgeviachannelpoolingfortransferlifelonglearningofconvolutionalneuralnetworks/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..e3dcd57506bf8643386d405ecad7f8bbc078ba87 --- /dev/null +++ b/snowsubscribingtoknowledgeviachannelpoolingfortransferlifelonglearningofconvolutionalneuralnetworks/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f71522c0b81984ef9626faf3eea5846f399b1fb5f3e97748abaabef42cb8f589 +size 923129 diff --git a/snowsubscribingtoknowledgeviachannelpoolingfortransferlifelonglearningofconvolutionalneuralnetworks/layout.json b/snowsubscribingtoknowledgeviachannelpoolingfortransferlifelonglearningofconvolutionalneuralnetworks/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..21d6c547159ee1bedcf79a6640aebcaa14c6e595 --- /dev/null +++ b/snowsubscribingtoknowledgeviachannelpoolingfortransferlifelonglearningofconvolutionalneuralnetworks/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1856c1626e05989ea011aebe0f2ffc8a440e497702ec6ded7fea17d5b3cb06c7 +size 377181 diff --git a/spaceunsupervisedobjectorientedscenerepresentationviaspatialattentionanddecomposition/d215859c-b0db-4957-b64b-9f1ca31e6117_content_list.json b/spaceunsupervisedobjectorientedscenerepresentationviaspatialattentionanddecomposition/d215859c-b0db-4957-b64b-9f1ca31e6117_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..cdfda1ca76d676c607ad2574237e1dfb365d7281 --- /dev/null +++
b/spaceunsupervisedobjectorientedscenerepresentationviaspatialattentionanddecomposition/d215859c-b0db-4957-b64b-9f1ca31e6117_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0df3f6c66c552d6e520cbf726bfbd42e5ae2f16e143fdcdc0ee209eea9d1e937 +size 107827 diff --git a/spaceunsupervisedobjectorientedscenerepresentationviaspatialattentionanddecomposition/d215859c-b0db-4957-b64b-9f1ca31e6117_model.json b/spaceunsupervisedobjectorientedscenerepresentationviaspatialattentionanddecomposition/d215859c-b0db-4957-b64b-9f1ca31e6117_model.json new file mode 100644 index 0000000000000000000000000000000000000000..bfd998844f0ab6a286fd1080ecef38289f1cf507 --- /dev/null +++ b/spaceunsupervisedobjectorientedscenerepresentationviaspatialattentionanddecomposition/d215859c-b0db-4957-b64b-9f1ca31e6117_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:260c2263c86cf93be747269633eb68a7d633796cd91967aa4b287680abe20e3a +size 124592 diff --git a/spaceunsupervisedobjectorientedscenerepresentationviaspatialattentionanddecomposition/d215859c-b0db-4957-b64b-9f1ca31e6117_origin.pdf b/spaceunsupervisedobjectorientedscenerepresentationviaspatialattentionanddecomposition/d215859c-b0db-4957-b64b-9f1ca31e6117_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f606262b7a7231efddb2e53c460c87daacddfb91 --- /dev/null +++ b/spaceunsupervisedobjectorientedscenerepresentationviaspatialattentionanddecomposition/d215859c-b0db-4957-b64b-9f1ca31e6117_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aa9bd53e9069db505e670c0ac3c487baacddf8235855b2c839212c97f5612307 +size 4085529 diff --git a/spaceunsupervisedobjectorientedscenerepresentationviaspatialattentionanddecomposition/full.md b/spaceunsupervisedobjectorientedscenerepresentationviaspatialattentionanddecomposition/full.md new file mode 100644 index 
0000000000000000000000000000000000000000..c3d122c336898ffa416ad7cd93dd65a2c9241705 --- /dev/null +++ b/spaceunsupervisedobjectorientedscenerepresentationviaspatialattentionanddecomposition/full.md @@ -0,0 +1,388 @@

# SPACE: UNSUPERVISED OBJECT-ORIENTED SCENE REPRESENTATION VIA SPATIAL ATTENTION AND DECOMPOSITION

Zhixuan Lin $^{1,2*}$ , Yi-Fu Wu $^{1}$ , Skand Vishwanath Peri $^{1}$ , Weihao Sun $^{1}$ , Gautam Singh $^{1}$ , Fei Deng $^{1}$ , Jindong Jiang $^{1}$ , Sungjin Ahn $^{1}$

$^{1}$ Rutgers University & $^{2}$ Zhejiang University

# ABSTRACT

The ability to decompose complex multi-object scenes into meaningful abstractions like objects is fundamental to achieving higher-level cognition. Previous approaches for unsupervised object-oriented scene representation learning are based on either spatial-attention or scene-mixture approaches and are limited in scalability, which is a main obstacle to modeling real-world scenes. In this paper, we propose a generative latent variable model, called SPACE, that provides a unified probabilistic modeling framework combining the best of spatial-attention and scene-mixture approaches. SPACE can explicitly provide factorized object representations for foreground objects while also decomposing background segments of complex morphology. Previous models are good at either of these, but not both. SPACE also resolves the scalability problems of previous methods by incorporating parallel spatial-attention and thus is applicable to scenes with a large number of objects without performance degradation. We show through experiments on Atari and 3D-Rooms that SPACE achieves the above properties consistently in comparison to SPAIR, IODINE, and GENESIS.
Results of our experiments can be found on our project website: https://sites.google.com/view/space-project-page + +# 1 INTRODUCTION + +One of the unsolved key challenges in machine learning is unsupervised learning of structured representation for a visual scene containing many objects with occlusion, partial observability, and complex background. When properly decomposed into meaningful abstract entities such as objects and spaces, this structured representation brings many advantages of abstract (symbolic) representation to areas where contemporary deep learning approaches with a global continuous vector representation of a scene have not been successful. For example, a structured representation may improve sample efficiency for downstream tasks such as a deep reinforcement learning agent (Mnih et al., 2013). It may also enable visual variable binding (Sun, 1992) for reasoning and causal inference over the relationships between the objects and agents in a scene. Structured representations also provide composability and transferability for better generalization. + +Recent approaches to this problem of unsupervised object-oriented scene representation can be categorized into two types of models: scene-mixture models and spatial-attention models. In scene-mixture models (Greff et al., 2017; 2019; Burgess et al., 2019; Engelcke et al., 2019), a visual scene is explained by a mixture of a finite number of component images. This type of representation provides flexible segmentation maps that can handle objects and background segments of complex morphology. However, since each component corresponds to a full-scale image, important physical features of objects like position and scale are only implicitly encoded in the scale of a full image and further disentanglement is required to extract these useful features. 
Also, since it does not explicitly reflect useful inductive biases like the locality of an object in the Gestalt principles (Koffka, 2013), + +the resulting component representation is not necessarily a representation of a local area. Moreover, to obtain a complete scene, a component needs to refer to other components, and thus inference is inherently performed sequentially, resulting in limitations in scaling to scenes with many objects. + +In contrast, spatial-attention models (Eslami et al., 2016; Crawford & Pineau, 2019) can explicitly obtain the fully disentangled geometric representation of objects such as position and scale. Such features are grounded on the semantics of physics and should be useful in many ways (e.g., sample efficiency, interpretability, geometric reasoning and inference, transferability). However, these models cannot represent complex objects and background segments that have too flexible morphology to be captured by spatial attention (i.e. based on rectangular bounding boxes). Similar to scene-mixture models, previous models in this class show scalability issues as objects are processed sequentially. + +In this paper, we propose a method, called Spatially Parallel Attention and Component Extraction (SPACE), that combines the best of both approaches. SPACE learns to process foreground objects, which can be captured efficiently by bounding boxes, by using parallel spatial-attention while decomposing the remaining area that includes both morphologically complex objects and background segments by using component mixtures. Thus, SPACE provides an object-wise disentangled representation of foreground objects along with explicit properties like position and scale per object while also providing decomposed representations of complex background components. Furthermore, by fully parallelizing the foreground object processing, we resolve the scalability issue of existing spatial attention methods. 
In experiments on 3D-room scenes and Atari game scenes, we quantitatively and qualitatively compare the representation of SPACE to other models and show that SPACE combines the benefits of both approaches in addition to significant speed-ups due to the parallel foreground processing. + +The contributions of the paper are as follows. First, we introduce a model that unifies the benefits of spatial-attention and scene-mixture approaches in a principled framework of probabilistic latent variable modeling. Second, we introduce a spatially parallel multi-object processing module and demonstrate that it can significantly mitigate the scalability problems of previous methods. Lastly, we provide an extensive comparison with previous models where we illustrate the capabilities and limitations of each method. + +# 2 THE PROPOSED MODEL: SPACE + +In this section, we describe our proposed model, Spatially Parallel Attention and Component Extraction (SPACE). The main idea of SPACE, presented in Figure 1, is to propose a unified probabilistic generative model that combines the benefits of the spatial-attention and scene-mixture models. + +# 2.1 GENERATIVE PROCESS + +SPACE assumes that a scene $\mathbf{x}$ is decomposed into two independent latents: foreground $\mathbf{z}^{\mathrm{fg}}$ and background $\mathbf{z}^{\mathrm{bg}}$ . The foreground is further decomposed into a set of independent foreground objects $\mathbf{z}^{\mathrm{fg}} = \{\mathbf{z}_i^{\mathrm{fg}}\}$ and the background is also decomposed further into a sequence of background segments $\mathbf{z}^{\mathrm{bg}} = \mathbf{z}_{1:K}^{\mathrm{bg}}$ . While our choice of modeling the foreground and background independently worked well empirically, for better generation, it may also be possible to condition one on the other. 
The image distributions of the foreground objects and the background components are combined with a pixel-wise mixture model to produce the complete image distribution:

$$
p\left(\mathbf{x} \mid \mathbf{z}^{\mathrm{fg}}, \mathbf{z}^{\mathrm{bg}}\right) = \alpha \underbrace{p\left(\mathbf{x} \mid \mathbf{z}^{\mathrm{fg}}\right)}_{\text{Foreground}} + (1 - \alpha) \sum_{k=1}^{K} \pi_{k} \underbrace{p\left(\mathbf{x} \mid \mathbf{z}_{k}^{\mathrm{bg}}\right)}_{\text{Background}}. \tag{1}
$$

Here, the foreground mixing probability $\alpha$ is computed as $\alpha = f_{\alpha}(\mathbf{z}^{\mathrm{fg}})$. This way, the foreground is given precedence in assigning its own mixing weight, and the remainder is apportioned to the background. The mixing weight assigned to the background is further sub-divided among the $K$ background components. These weights are computed as $\pi_k = f_{\pi_k}(\mathbf{z}_{1:k}^{\mathrm{bg}})$ with $\sum_{k}\pi_{k} = 1$. With these notations, the complete generative model can be described as follows.

$$
p(\mathbf{x}) = \iint p(\mathbf{x} \mid \mathbf{z}^{\mathrm{fg}}, \mathbf{z}^{\mathrm{bg}})\, p(\mathbf{z}^{\mathrm{bg}})\, p(\mathbf{z}^{\mathrm{fg}})\, d\mathbf{z}^{\mathrm{fg}}\, d\mathbf{z}^{\mathrm{bg}} \tag{2}
$$
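At the level of decoded mean images, the pixel-wise mixing in Equation 1 can be sketched as below. This is a simplified NumPy sketch with illustrative shapes; the actual model mixes per-pixel component densities, not just the mean images:

```python
import numpy as np

def pixelwise_mixture(mu_fg, alpha, mu_bg, pi):
    """Combine a foreground mean image with K background component means,
    following Eq. (1): alpha gives the foreground its mixing weight first,
    and the remaining (1 - alpha) is split among the backgrounds by pi."""
    # mu_fg, alpha: (H, W); mu_bg, pi: (K, H, W), with pi summing to 1 over k
    background = (pi * mu_bg).sum(axis=0)     # sum_k pi_k * (k-th background mean)
    return alpha * mu_fg + (1.0 - alpha) * background

rng = np.random.default_rng(0)
H, W, K = 4, 4, 3
alpha = rng.uniform(size=(H, W))              # foreground mask in [0, 1]
pi = rng.uniform(size=(K, H, W))
pi /= pi.sum(axis=0, keepdims=True)           # normalize the K mixing weights
img = pixelwise_mixture(rng.uniform(size=(H, W)), alpha, rng.uniform(size=(K, H, W)), pi)
print(img.shape)  # (4, 4)
```

Because every weight lies in $[0, 1]$ and the background weights sum to one, the output at each pixel is a convex combination of the component values at that pixel.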
The model selects patches using the bounding boxes and reconstructs them using a VAE from all the foreground latents $z^{\text{fg}}$ . The background module segments the scene into $K$ components (4 in the figure) using a pixel-wise mixture model. Each component consists of a set of latents $z^{\text{bg}} = (z^{m}, z^{c})$ where $z^{m}$ models the mixing probability of the component and $z^{c}$ models the RGB distribution of the component. The components are combined to reconstruct the background using a VAE. The reconstructed background and foreground are then combined using a pixel-wise mixture model to generate the full reconstructed image. + +We now describe the foreground and background models in more detail. + +Foreground. SPACE implements $\mathbf{z}^{\mathrm{fg}}$ as a structured latent. In this structure, an image is treated as if it were divided into $H\times W$ cells and each cell is tasked with modeling at most one (nearby) object in the scene. This type of structuring has been used in (Redmon et al., 2016; Santoro et al., 2017; Crawford & Pineau, 2019). Similar to SPAIR, in order to model an object, each cell $i$ is associated with a set of latents $(\mathbf{z}_i^{\mathrm{pres}},\mathbf{z}_i^{\mathrm{where}},\mathbf{z}_i^{\mathrm{depth}},\mathbf{z}_i^{\mathrm{what}})$ . In this notation, $\mathbf{z}^{\mathrm{pres}}$ is a binary random variable denoting if the cell models any object or not, $\mathbf{z}^{\mathrm{where}}$ denotes the size of the object and its location relative to the cell, $\mathbf{z}^{\mathrm{depth}}$ denotes the depth of the object to resolve occlusions and $\mathbf{z}^{\mathrm{what}}$ models the object appearance and its mask. These latents may then be used to compute the foreground image component $p(\mathbf{x}|\mathbf{z}^{\mathrm{fg}})$ which is modeled as a Gaussian distribution $\mathcal{N}(\mu^{\mathrm{fg}},\sigma_{\mathrm{fg}}^{2})$ . 
In practice, we treat $\sigma_{\mathrm{fg}}^{2}$ as a hyperparameter and decode only the mean image $\mu^{\mathrm{fg}}$. In this process, SPACE reconstructs the objects associated with each cell having $\mathbf{z}_i^{\mathrm{pres}} = 1$. For each such cell, the model uses the $\mathbf{z}_i^{\mathrm{what}}$ to decode the object glimpse and its mask, and the glimpse is then positioned on a full-resolution canvas using $\mathbf{z}_i^{\mathrm{where}}$ via the Spatial Transformer (Jaderberg et al., 2015). Using the object masks and $\mathbf{z}_i^{\mathrm{depth}}$, all the foreground objects are combined into a single foreground mean-image $\mu^{\mathrm{fg}}$ and the foreground mask $\alpha$ (see Appendix D for more details).

SPACE imposes a prior distribution on these latents as follows:

$$
p\left(\mathbf{z}^{\mathrm{fg}}\right) = \prod_{i=1}^{H \times W} p\left(\mathbf{z}_{i}^{\mathrm{pres}}\right) \left(p\left(\mathbf{z}_{i}^{\mathrm{where}}\right) p\left(\mathbf{z}_{i}^{\mathrm{depth}}\right) p\left(\mathbf{z}_{i}^{\mathrm{what}}\right)\right)^{\mathbf{z}_{i}^{\mathrm{pres}}} \tag{3}
$$

Here, only $\mathbf{z}_i^{\mathrm{pres}}$ is modeled using a Bernoulli distribution while the rest are modeled as Gaussian.

Background. To model the background, SPACE implements $\mathbf{z}_k^{\mathrm{bg}}$, similar to GENESIS, as $(\mathbf{z}_k^m,\mathbf{z}_k^c)$, where $\mathbf{z}_k^m$ models the mixing probability $\pi_{k}$ of the component and $\mathbf{z}_k^c$ models the RGB distribution $p(\mathbf{x}|\mathbf{z}_k^{\mathrm{bg}})$ of the $k^{\mathrm{th}}$ background component as a Gaussian $\mathcal{N}(\mu_k^{\mathrm{bg}},\sigma_{\mathrm{bg}}^2)$. The following prior is
$$
p\left(\mathbf{z}^{\mathrm{bg}}\right) = \prod_{k=1}^{K} p\left(\mathbf{z}_k^{c} \mid \mathbf{z}_k^{m}\right) p\left(\mathbf{z}_k^{m} \mid \mathbf{z}_{<k}^{m}\right) \tag{4}
$$

# 2.2 INFERENCE AND TRAINING

Since we cannot analytically evaluate the integrals in equation 2 due to the continuous latents $\mathbf{z}^{\mathrm{fg}}$ and $\mathbf{z}_{1:K}^{\mathrm{bg}}$ , we train the model using a variational approximation. The true posterior on these variables is approximated as follows.

$$
p\left(\mathbf{z}_{1:K}^{\mathrm{bg}}, \mathbf{z}^{\mathrm{fg}} \mid \mathbf{x}\right) \approx q\left(\mathbf{z}^{\mathrm{fg}} \mid \mathbf{x}\right) \prod_{k=1}^{K} q\left(\mathbf{z}_k^{\mathrm{bg}} \mid \mathbf{z}_{<k}^{\mathrm{bg}}, \mathbf{x}\right) \tag{5}
$$

This is used to derive the following ELBO, which we optimize using the reparameterization trick and SGD (Kingma & Welling, 2013).

$$
\begin{aligned}
\mathcal{L}(\mathbf{x}) = \mathbb{E}_{q\left(\mathbf{z}^{\mathrm{fg}}, \mathbf{z}^{\mathrm{bg}} \mid \mathbf{x}\right)} \Bigg[ \log p\left(\mathbf{x} \mid \mathbf{z}^{\mathrm{fg}}, \mathbf{z}^{\mathrm{bg}}\right) &- \sum_{k=1}^{K} D_{\mathrm{KL}}\left(q\left(\mathbf{z}_k^{\mathrm{bg}} \mid \mathbf{z}_{<k}^{\mathrm{bg}}, \mathbf{x}\right) \,\Big\|\, p\left(\mathbf{z}_k^{\mathrm{bg}} \mid \mathbf{z}_{<k}^{\mathrm{bg}}\right)\right) \\
&- \sum_{i=1}^{H \times W} D_{\mathrm{KL}}\left(q\left(\mathbf{z}_i^{\mathrm{fg}} \mid \mathbf{x}\right) \,\Big\|\, p\left(\mathbf{z}_i^{\mathrm{fg}}\right)\right) \Bigg]
\end{aligned} \tag{6}
$$

See Appendix B for the detailed decomposition of the ELBO and the related details.

Parallel Inference of Cell Latents.
SPACE uses a mean-field approximation when inferring the cell latents, so $\mathbf{z}_i^{\mathrm{fg}} = (\mathbf{z}_i^{\mathrm{pres}},\mathbf{z}_i^{\mathrm{where}},\mathbf{z}_i^{\mathrm{depth}},\mathbf{z}_i^{\mathrm{what}})$ for each cell does not depend on the other cells.

$$
q\left(\mathbf{z}^{\mathrm{fg}} \mid \mathbf{x}\right) = \prod_{i=1}^{H \times W} q\left(\mathbf{z}_i^{\mathrm{pres}} \mid \mathbf{x}\right) \left(q\left(\mathbf{z}_i^{\mathrm{where}} \mid \mathbf{x}\right) q\left(\mathbf{z}_i^{\mathrm{depth}} \mid \mathbf{x}\right) q\left(\mathbf{z}_i^{\mathrm{what}} \mid \mathbf{z}_i^{\mathrm{where}}, \mathbf{x}\right)\right)^{\mathbf{z}_i^{\mathrm{pres}}} \tag{7}
$$

As shown in Figure 1, this allows each cell to act as an independent object detector, spatially attending to its own local region in parallel. This is in contrast to inference in SPAIR, where each cell's latents auto-regressively depend on some or all of the previously traversed cells in a row-major order, i.e., $q(\mathbf{z}^{\mathrm{fg}}|\mathbf{x}) = \prod_{i=1}^{HW} q(\mathbf{z}_i^{\mathrm{fg}}|\mathbf{z}_{<i}^{\mathrm{fg}},\mathbf{x})$ .

These improvements apply to both SPAIR and SPAIR-P. Thus, because of these improvements, our SPAIR implementation can be seen as a stronger baseline than the original SPAIR. More details of the baselines are given in Appendix D.

![](images/ac56b131c8bac3cead6ea015abf24caa4aa1666f769417daaeadfee56232055c.jpg)
Figure 2: Qualitative comparison between SPACE, SPAIR, SPAIR-P, IODINE and GENESIS for the 3D-Room dataset.

# 4.1 QUALITATIVE COMPARISON OF INFERRED REPRESENTATIONS

In this section, we provide a qualitative analysis of the representations inferred by the different models. For each model, we performed a hyperparameter search and present results for the best hyperparameter settings in each environment.
Figure 2 shows sample scene decompositions of our baselines on the 3D-Room dataset and Figure 3 shows the results on Atari. Note that SPAIR does not use component masks, and IODINE and GENESIS do not separate foreground from background, hence the corresponding cells are left empty. Additionally, we only show a few representative components for IODINE and GENESIS since we ran those experiments with a larger $K$ than can be displayed. More qualitative results of SPACE can be found in Appendix A.

IODINE & GENESIS. In the 3D-Room environment, IODINE is able to segment the objects and the background into separate components. However, it occasionally fails to decompose objects properly (in the Large 3D-Room results, the orange sphere on the right is not reconstructed) and may generate blurry objects. GENESIS is able to segment the background walls, floor, and sky into multiple components. It is able to capture blurry foreground objects in the Small 3D-Room, but cannot cleanly capture foreground objects with the larger number of objects in the Large 3D-Room. In Atari, both IODINE and GENESIS fail to capture the foreground properly or try to encode all objects in a single component. We believe this is because the objects in Atari games are smaller and less regular, and lack the obvious latent factors such as color and shape found in the 3D dataset, so detection-based approaches are more appropriate in this case.

SPAIR & SPAIR-P. SPAIR is able to detect tight bounding boxes in both 3D-Rooms and the two Atari games. SPAIR-P, however, often fails to detect the foreground objects in proper bounding boxes, frequently uses multiple bounding boxes for one object, and redundantly detects parts of the background as foreground objects.
This is a limitation of the patch training, as the receptive field of each patch is limited to a $32 \times 32$ glimpse, prohibiting it from detecting larger objects and making it difficult to distinguish the background from the foreground. These two properties are illustrated well in Space Invaders, where it is able to detect the small aliens but detects the long strip of background ground at the bottom of the image as foreground objects.

![](images/d080f61a827072ba41eba8d78b1b3de62b6660f4d4710e1cdbdad21bcaf6ae05.jpg)
Figure 3: Qualitative comparison between SPACE, SPAIR, IODINE and GENESIS for Space Invaders, Air Raid, and River Raid.

![](images/7deefa60b94449a4dec64509d89a964222ede5603749622d8a69001cd7c6b7ef.jpg)
Figure 4: Qualitative demonstration of SPACE trained jointly on a selection of 10 Atari games. We show 6 games with complex backgrounds here.

SPACE. In both 3D-Room datasets, SPACE is able to accurately detect almost all objects despite the large variations in object positions, colors, and shapes, while producing a clean segmentation of the background walls, ground, and sky. This is in contrast to the SPAIR model, which, while providing similar foreground detection quality, encodes the whole background into a single component, making the representation less disentangled. Notably, in River Raid, where the background is constantly changing, SPACE is able to perfectly segment the blue river while accurately detecting all foreground objects, whereas SPAIR often cannot properly separate foreground and background.

Joint Training. Figure 4 shows the results of training SPACE jointly across 10 Atari games. We see that even in this setting, SPACE is able to correctly detect foreground objects and cleanly segment the background.

Foreground vs Background. Typically, the foreground is the dynamic, local part of the scene that we are interested in, and the background is the relatively static and global part. This definition, though intuitive, is ambiguous.
Some objects, such as the red shields in Space Invaders and the key in Montezuma's Revenge (Figure 6), are static but important to detect as foreground objects. We found that SPACE tends to detect these as foreground objects while SPAIR considers them background. Similar behavior is observed in Atlantis (Figure 4), where SPACE tends to detect some foreground objects on the middle base above the water. One reason for this behavior is that we limit the capacity of the background module by using a spatial broadcast decoder (Watters et al., 2019), which is much weaker than other decoders such as sub-pixel convolutional networks (Shi et al., 2016). This favors modeling static objects as foreground rather than background.

# 4.2 QUANTITATIVE COMPARISON

In this section we compare SPACE with the baselines on several quantitative metrics. We first note that each of the baseline models has a different decomposition capacity $(\mathcal{C})$ , which we define as the capability of the model to decompose the scene into its semantic constituents, such as the foreground objects and the segmented background components. For SPACE, the decomposition capacity is equal to the number of grid cells $H \times W$ (the maximum number of foreground objects that can be detected) plus the number of background components $K$ . For SPAIR, the decomposition capacity is equal to the number of grid cells $H \times W$ plus 1 for the background. For IODINE and GENESIS, it is equal to the number of components $K$ .

For each experiment, we compare the metrics for each model with similar decomposition capacities. This way, each model can decompose the image into the same number of components. For a setting in SPACE with a grid size of $H \times W$ and $K_{\mathrm{SPACE}}$ background components, the equivalent setting in IODINE and GENESIS would be $\mathcal{C} = (H \times W) + K_{\mathrm{SPACE}}$ . The equivalent setting in SPAIR would be a grid size of $H \times W$ .
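The capacity-matching bookkeeping above can be written out directly. The helper below is our own sketch (the function name and interface are not from the paper), encoding the three definitions of $\mathcal{C}$:

```python
# Hypothetical helper matching the decomposition capacities (C) defined above,
# so that each model decomposes an image into the same number of components.
def capacity(model, grid=(0, 0), k=0):
    h, w = grid
    if model == "SPACE":                 # H*W foreground cells + K bg components
        return h * w + k
    if model == "SPAIR":                 # H*W cells + a single background component
        return h * w + 1
    if model in ("IODINE", "GENESIS"):   # K mixture components
        return k
    raise ValueError(model)

# Matching a SPACE setting of an 8x8 grid with 4 background components:
c = capacity("SPACE", grid=(8, 8), k=4)   # C = 68
assert capacity("IODINE", k=c) == 68      # run IODINE/GENESIS with K = 68
assert capacity("SPAIR", grid=(8, 8)) == 65
```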
Table 1: Comparison of SPACE with the SPAIR baseline with respect to the quality of the bounding boxes in the 3D-Room setting. Results are averaged over the 5 best random seeds and standard deviations are given.
| Model | Dataset | Avg. Precision (IoU threshold = 0.5) | Avg. Precision (IoU threshold ∈ [0.5 : 0.05 : 0.95]) | Object Count Error Rate |
| --- | --- | --- | --- | --- |
| SPACE (16 × 16) | 3D-Room Large | 0.8927 ± 0.0027 | 0.4445 ± 0.0075 | 0.0446 ± 0.0026 |
| SPAIR (16 × 16) | 3D-Room Large | 0.9072 ± 0.0003 | 0.4364 ± 0.0179 | 0.0360 ± 0.0072 |
| SPACE (8 × 8) | 3D-Room Small | 0.9027 ± 0.0009 | 0.5069 ± 0.0030 | 0.0397 ± 0.0026 |
| SPAIR (8 × 8) | 3D-Room Small | 0.9081 ± 0.0004 | 0.5068 ± 0.0081 | 0.0209 ± 0.0039 |
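For reference, the bounding-box quality metrics reported in Table 1 can be computed along the following lines. This is a minimal sketch using greedy one-to-one matching; the paper's exact matching and averaging protocol may differ:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def precision_at_iou(preds, gts, thresh=0.5):
    """Fraction of predicted boxes that greedily match a distinct
    ground-truth box with IoU >= thresh."""
    matched, tp = set(), 0
    for p in preds:
        best, best_j = 0.0, None
        for j, g in enumerate(gts):
            if j not in matched and iou(p, g) > best:
                best, best_j = iou(p, g), j
        if best_j is not None and best >= thresh:
            matched.add(best_j)
            tp += 1
    return tp / len(preds) if preds else 0.0

# A perfect prediction and a spurious one: precision 0.5 at IoU 0.5.
assert precision_at_iou([(0, 0, 2, 2), (5, 5, 6, 6)], [(0, 0, 2, 2)]) == 0.5
```

Averaging the per-threshold precision over IoU thresholds 0.5, 0.55, ..., 0.95 yields the second column of the table.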
Gradient Step Latency. The leftmost chart of Figure 5 shows the time taken to complete one gradient step (forward and backward propagation) at different decomposition capacities for each of the models. We see that SPAIR's latency grows with the number of cells because of the sequential nature of its latent inference step. Similarly, the latency of GENESIS and IODINE grows with the number of components $K$ because each component is processed sequentially in both models. IODINE is the slowest overall with its computationally expensive iterative inference procedure. Furthermore, both IODINE and GENESIS require storing data for each of the $K$ components, so we were unable to run experiments with 256 or more components before running out of memory on our 22 GB GPU. On the other hand, SPACE employs parallel processing for the foreground, which makes it scalable to large grid sizes, allowing it to detect a large number of foreground objects without any significant performance degradation. Although this data was collected for the gradient step, the comparison implies a similar relationship for inference time, which is the main component of a gradient step.

![](images/ade5e16a8405d84fa734a9a541611202ac275ead9c45e41e079afd340b52d55.jpg)
Figure 5: Quantitative performance comparison between SPACE, SPAIR, IODINE and GENESIS in terms of batch-processing time during training, training convergence and converged pixel MSE. Convergence plots showing pixel MSE were computed on a held-out set during training.

Time for Convergence. The remaining three charts in Figure 5 show the amount of time each model takes to converge in different experimental settings. We use the pixel-wise mean squared error (MSE) as a measure of how close a model is to convergence. In all settings, SPAIR and SPACE converge much faster than IODINE and GENESIS. In the $4 \times 4$ and $8 \times 8$ settings, SPAIR and SPACE converge equally fast.
But as we scale up to $16 \times 16$ , SPAIR becomes much slower than SPACE.

Average Precision and Error Rate. In order to assess the quality of our bounding box predictions, we measure the average precision and object count error rate of our predictions. Our results are shown in Table 1. We only report these metrics for 3D-Room since we have access to the ground-truth bounding boxes for each of the objects in the scene. Both models have very similar average precision and error rate. Despite being parallel in its inference, SPACE has a count error rate comparable to that of SPAIR.

From our experiments, we can assert that SPACE produces bounding boxes of similar quality to SPAIR's while 1) having orders-of-magnitude faster inference and gradient step times, 2) scaling to a large number of objects without significant performance degradation, and 3) providing complex background segmentation.

# 5 CONCLUSION

We propose SPACE, a unified probabilistic model that combines the benefits of object representation models based on spatial attention and scene decomposition models based on component mixtures. SPACE explicitly provides a factorized object representation for each foreground object while also decomposing complex background segments. SPACE also achieves a significant speed-up, which makes the model applicable to scenes with a much larger number of objects without performance degradation. Moreover, the objects detected by SPACE are more intuitive than those of other methods. We demonstrate the above properties of SPACE on Atari and 3D-Rooms. Interesting future directions are to replace the sequential processing of the background with a parallel one and to improve the model for natural images. Our next plan is to apply SPACE to object-oriented model-based reinforcement learning.

# ACKNOWLEDGMENTS
The authors would also like to thank Chang Chen and Zhuo Zhi for their insightful discussions and help in generating the 3D-Room dataset. + +# REFERENCES + +Jonathan T. Barron. Continuously differentiable exponential linear units. ArXiv, abs/1704.07483, 2017. +M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253-279, jun 2013. +Christopher P Burgess, Loic Matthew, Nicholas Watters, Rishabh Kabra, Irina Higgins, Matt Botvinick, and Alexander Lerchner. Monet: Unsupervised scene decomposition and representation. arXiv preprint arXiv:1901.11390, 2019. +Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (elus). In *ICLR*, 2016. +Eric Crawford and Joelle Pineau. Spatially invariant unsupervised object detection with convolutional neural networks. In Proceedings of AAAI, 2019. +Eric Crawford and Joelle Pineau. Exploiting spatial invariance for scalable unsupervised object tracking. In Proceedings of AAAI, 2020. +Martin Engelcke, Adam R. Kosiorek, Oiwi Parker Jones, and Ingmar Posner. Genesis: Generative scene inference and sampling with object-centric latent representations, 2019. +SM Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, David Szepesvari, and Geoffrey E Hinton. Attend, infer, repeat: Fast scene understanding with generative models. In Advances in Neural Information Processing Systems, pp. 3225-3233, 2016. +Klaus Greff, Sjoerd van Steenkiste, and Jurgen Schmidhuber. Neural expectation maximization. In Advances in Neural Information Processing Systems, pp. 6691-6701, 2017. +Klaus Greff, Raphaël Lopez Kaufmann, Rishab Kabra, Nick Watters, Chris Burgess, Daniel Zoran, Loic Matthew, Matthew Botvinick, and Alexander Lerchner. Multi-object representation learning with iterative variational inference. arXiv preprint arXiv:1903.00450, 2019. 
+Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, 2015. +Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial transformer networks. In Advances in neural information processing systems, pp. 2017-2025, 2015. +Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144, 2016. +Jindong Jiang, Sepehr Janghorbani, Gerard De Melo, and Sungjin Ahn. Scalor: Scalable object-oriented sequential generative models. In International Conference on Learning Representations, 2020. +Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2014. +Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. +Kurt Koffka. Principles of Gestalt psychology. Routledge, 2013. + +Adam Kosiorek, Hyunjik Kim, Yee Whye Teh, and Ingmar Posner. Sequential attend, infer, repeat: Generative modelling of moving objects. In Advances in Neural Information Processing Systems, pp. 8606-8616, 2018. +Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin A. Riedmiller. Playing atari with deep reinforcement learning. *ArXiv*, abs/1312.5602, 2013. +Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 779-788, 2016. +Adam Santoro, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. A simple neural network module for relational reasoning. In Advances in neural information processing systems, pp. 4967-4976, 2017. +Wenzhe Shi, Jose Caballero, Ferenc Huszar, Johannes Totz, Andrew P. Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. 
Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
Ron Sun. On variable binding in connectionist networks. Connection Science, 4(2):93-124, 1992.
Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop, neural networks for machine learning. Tech report, 2012. URL https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf.
Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026-5033. IEEE, 2012.
Sjoerd van Steenkiste, Michael Chang, Klaus Greff, and Jürgen Schmidhuber. Relational neural expectation maximization: Unsupervised discovery of objects and their interactions. arXiv preprint arXiv:1802.10353, 2018.
Nicholas Watters, Loic Matthey, Christopher P. Burgess, and Alexander Lerchner. Spatial broadcast decoder: A simple architecture for learning disentangled representations in VAEs. ArXiv, abs/1901.07017, 2019.
Yuxin Wu and Kaiming He. Group normalization. In The European Conference on Computer Vision (ECCV), September 2018.
Yuxin Wu et al. Tensorpack. https://github.com/tensorpack/, 2016.

# A ADDITIONAL RESULTS OF SPACE

![](images/50112361d944797f649c5da90e7737523a1fd22a17f95c96fdfc72383a793b81.jpg)
Figure 6: Case illustration of Montezuma's Revenge comparing object-detection behaviour in SPACE and SPAIR.

![](images/01978f9c2894bb598087028f37a06bc9a0505d03e7365f50d81eb1b732c192b2.jpg)
Figure 7: Qualitative demonstration of SPACE trained jointly on a selection of 10 Atari games.

![](images/ce614db45bbfc0372500354150a8c78d0122111f95674413cec954b1e83b9d8f.jpg)
Figure 8: Object detection and background segmentation using SPACE on the 3D-Room dataset with a small number of objects. Each row corresponds to one input image.
![](images/2acc7a6c49910937f30142458cd3a9ff13708f2309cbca7d6ddcd2aeebc92c98.jpg)
Figure 9: Object detection and background segmentation using SPACE on the 3D-Room dataset with a large number of objects.

# B ELBO DERIVATIONS

In this section, we derive the ELBO for the log-likelihood $\log p(\mathbf{x})$ .

$$
\begin{aligned}
\log p(\mathbf{x}) &\geq \mathbb{E}_{q(\mathbf{z}^{\mathrm{fg}}, \mathbf{z}^{\mathrm{bg}} \mid \mathbf{x})}\left[\log p(\mathbf{x} \mid \mathbf{z}^{\mathrm{fg}}, \mathbf{z}^{\mathrm{bg}})\right] - D_{\mathrm{KL}}\left(q(\mathbf{z}^{\mathrm{fg}} \mid \mathbf{x}) \,\|\, p(\mathbf{z}^{\mathrm{fg}})\right) - D_{\mathrm{KL}}\left(q(\mathbf{z}^{\mathrm{bg}} \mid \mathbf{x}) \,\|\, p(\mathbf{z}^{\mathrm{bg}})\right) \\
&= \mathbb{E}_{q(\mathbf{z}^{\mathrm{fg}}, \mathbf{z}^{\mathrm{bg}} \mid \mathbf{x})}\left[\log p(\mathbf{x} \mid \mathbf{z}^{\mathrm{fg}}, \mathbf{z}^{\mathrm{bg}}) - D_{\mathrm{KL}}\left(q(\mathbf{z}^{\mathrm{fg}} \mid \mathbf{x}) \,\|\, p(\mathbf{z}^{\mathrm{fg}})\right) - D_{\mathrm{KL}}\left(q(\mathbf{z}^{\mathrm{bg}} \mid \mathbf{x}) \,\|\, p(\mathbf{z}^{\mathrm{bg}})\right)\right] \\
&= \mathbb{E}_{q(\mathbf{z}^{\mathrm{fg}}, \mathbf{z}^{\mathrm{bg}} \mid \mathbf{x})}\Bigg[\log p(\mathbf{x} \mid \mathbf{z}^{\mathrm{fg}}, \mathbf{z}^{\mathrm{bg}}) - \sum_{k=1}^{K} D_{\mathrm{KL}}\left(q\left(\mathbf{z}_k^{\mathrm{bg}} \mid \mathbf{z}_{<k}^{\mathrm{bg}}, \mathbf{x}\right) \,\Big\|\, p\left(\mathbf{z}_k^{\mathrm{bg}} \mid \mathbf{z}_{<k}^{\mathrm{bg}}\right)\right) - \sum_{i=1}^{H \times W} D_{\mathrm{KL}}\left(q\left(\mathbf{z}_i^{\mathrm{fg}} \mid \mathbf{x}\right) \,\Big\|\, p\left(\mathbf{z}_i^{\mathrm{fg}}\right)\right)\Bigg]
\end{aligned}
$$

KL Divergence for the Foreground Latents. Under SPACE's approximate inference, the term $D_{\mathrm{KL}}(q(\mathbf{z}_i^{\mathrm{fg}}|\mathbf{x})\parallel p(\mathbf{z}_i^{\mathrm{fg}}))$ inside the expectation can be evaluated as follows.

$$
\begin{aligned}
&\mathbb{E}_{q(\mathbf{z}^{\mathrm{fg}}, \mathbf{z}^{\mathrm{bg}} \mid \mathbf{x})}\left[D_{\mathrm{KL}}\left(q(\mathbf{z}_i^{\mathrm{fg}} \mid \mathbf{x}) \,\|\, p(\mathbf{z}_i^{\mathrm{fg}})\right)\right] \\
&= \mathbb{E}_{q(\mathbf{z}^{\mathrm{fg}}, \mathbf{z}^{\mathrm{bg}} \mid \mathbf{x})}\bigg[D_{\mathrm{KL}}\left(q(\mathbf{z}_i^{\mathrm{pres}} \mid \mathbf{x}) \,\|\, p(\mathbf{z}_i^{\mathrm{pres}})\right) + \mathbb{E}_{q(\mathbf{z}_i^{\mathrm{pres}} \mid \mathbf{x})}\, \mathbf{z}_i^{\mathrm{pres}}\Big(D_{\mathrm{KL}}\left(q(\mathbf{z}_i^{\mathrm{where}} \mid \mathbf{x}) \,\|\, p(\mathbf{z}_i^{\mathrm{where}})\right) \\
&\qquad + \mathbb{E}_{q(\mathbf{z}_i^{\mathrm{where}} \mid \mathbf{x})}\, D_{\mathrm{KL}}\left(q(\mathbf{z}_i^{\mathrm{what}} \mid \mathbf{x}, \mathbf{z}_i^{\mathrm{where}}) \,\|\, p(\mathbf{z}_i^{\mathrm{what}})\right) + D_{\mathrm{KL}}\left(q(\mathbf{z}_i^{\mathrm{depth}} \mid \mathbf{x}) \,\|\, p(\mathbf{z}_i^{\mathrm{depth}})\right)\Big)\bigg] \\
&= \mathbb{E}_{q(\mathbf{z}^{\mathrm{fg}}, \mathbf{z}^{\mathrm{bg}} \mid \mathbf{x})}\bigg[D_{\mathrm{KL}}\left(q(\mathbf{z}_i^{\mathrm{pres}} \mid \mathbf{x}) \,\|\, p(\mathbf{z}_i^{\mathrm{pres}})\right) + \mathbf{z}_i^{\mathrm{pres}}\Big(D_{\mathrm{KL}}\left(q(\mathbf{z}_i^{\mathrm{where}} \mid \mathbf{x}) \,\|\, p(\mathbf{z}_i^{\mathrm{where}})\right) \\
&\qquad + D_{\mathrm{KL}}\left(q(\mathbf{z}_i^{\mathrm{what}} \mid \mathbf{x}, \mathbf{z}_i^{\mathrm{where}}) \,\|\, p(\mathbf{z}_i^{\mathrm{what}})\right) + D_{\mathrm{KL}}\left(q(\mathbf{z}_i^{\mathrm{depth}} \mid \mathbf{x}) \,\|\, p(\mathbf{z}_i^{\mathrm{depth}})\right)\Big)\bigg]
\end{aligned}
$$

KL Divergence for the Background Latents. Under our GENESIS-like modeling of inference for the background latents, the KL term inside the expectation for the background is evaluated as follows.

$$
\begin{aligned}
&\mathbb{E}_{q(\mathbf{z}^{\mathrm{fg}}, \mathbf{z}^{\mathrm{bg}} \mid \mathbf{x})}\left[D_{\mathrm{KL}}\left(q\left(\mathbf{z}_k^{\mathrm{bg}} \mid \mathbf{z}_{<k}^{\mathrm{bg}}, \mathbf{x}\right) \,\Big\|\, p\left(\mathbf{z}_k^{\mathrm{bg}} \mid \mathbf{z}_{<k}^{\mathrm{bg}}\right)\right)\right] \\
&= \mathbb{E}_{q(\mathbf{z}^{\mathrm{fg}}, \mathbf{z}^{\mathrm{bg}} \mid \mathbf{x})}\left[D_{\mathrm{KL}}\left(q\left(\mathbf{z}_k^m \mid \mathbf{z}_{<k}^m, \mathbf{x}\right) \,\Big\|\, p\left(\mathbf{z}_k^m \mid \mathbf{z}_{<k}^m\right)\right) + \mathbb{E}_{q\left(\mathbf{z}_k^m \mid \mathbf{z}_{<k}^m, \mathbf{x}\right)}\, D_{\mathrm{KL}}\left(q\left(\mathbf{z}_k^c \mid \mathbf{z}_k^m, \mathbf{x}\right) \,\Big\|\, p\left(\mathbf{z}_k^c \mid \mathbf{z}_k^m\right)\right)\right] \\
&= \mathbb{E}_{q(\mathbf{z}^{\mathrm{fg}}, \mathbf{z}^{\mathrm{bg}} \mid \mathbf{x})}\left[D_{\mathrm{KL}}\left(q\left(\mathbf{z}_k^m \mid \mathbf{z}_{<k}^m, \mathbf{x}\right) \,\Big\|\, p\left(\mathbf{z}_k^m \mid \mathbf{z}_{<k}^m\right)\right) + D_{\mathrm{KL}}\left(q\left(\mathbf{z}_k^c \mid \mathbf{z}_k^m, \mathbf{x}\right) \,\Big\|\, p\left(\mathbf{z}_k^c \mid \mathbf{z}_k^m\right)\right)\right]
\end{aligned}
$$

Relaxed treatment of $\mathbf{z}^{\mathrm{pres}}$ . In our implementation, we model the Bernoulli random variable $\mathbf{z}_i^{\mathrm{pres}}$ using the Gumbel-Softmax distribution (Jang et al., 2016). We use the relaxed value of $\mathbf{z}^{\mathrm{pres}}$ throughout training and use hard samples only for the visualizations.

# C BOUNDARY LOSS

In this section we elaborate on the implementation details of the boundary loss.
We construct a kernel of the size of the glimpse, $gs \times gs$ (we use $gs = 32$ ), with a boundary gap of $b = 6$ , having negative uniform weights inside the boundary and a zero weight in the region between the boundary and the glimpse edge. This ensures that the model is penalized when the object is outside the boundary. This kernel is first mapped onto the global space via an STN (Jaderberg et al., 2015) to obtain the global kernel. This is then multiplied element-wise with the global object mask $\alpha$ to obtain the boundary loss map. The objective is to minimize the mean of this boundary loss map. In addition to the ELBO, this loss is also back-propagated via RMSProp (Tieleman & Hinton, 2012). Due to the boundary constraint, this loss makes the bounding boxes less tight and results in lower average precision, so we disable it and optimize only the ELBO after the model has converged well.

# D IMPLEMENTATION DETAILS

# D.1 ALGORITHMS

Algorithm 1 and Algorithm 3 present SPACE's inference for the foreground and background. Algorithm 2 describes the rescale$_i$ function in Algorithm 1, which transforms the local shift $\mathbf{z}_i^{\mathrm{shift}}$ to the global shift $\hat{\mathbf{z}}_i^{\mathrm{shift}}$ . Algorithm 4 shows the generation process of the background module. For foreground generation, we simply sample the latent variables from the priors instead of conditioning on the input. Note that, for convenience, the algorithms for the foreground and background modules are presented with for loops, but inference for all variables of the foreground module is implemented as parallel convolution operations, and most operations of the background module (barring the LSTM) are parallel as well.
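To make this parallelism concrete: a convolutional readout head over the feature grid is a single linear map applied to every cell at once, so no cell's latent parameters depend on any other cell (the mean-field factorization of equation 7). A minimal NumPy sketch follows; the head names and sizes are illustrative and the weights are random stand-ins, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, F = 8, 8, 64                      # 8x8 grid of F-dim cell features
feat = rng.standard_normal((H, W, F))   # output of a convolutional encoder

# One 1x1-conv head per latent group (weights are illustrative stand-ins).
heads = {
    "pres":  rng.standard_normal((F, 1)),   # Bernoulli logit per cell
    "depth": rng.standard_normal((F, 2)),   # Gaussian mean / log-std
    "scale": rng.standard_normal((F, 4)),
    "shift": rng.standard_normal((F, 4)),
}

# One einsum per head computes the parameters for all H*W cells in parallel.
params = {name: np.einsum("hwf,fd->hwd", feat, w) for name, w in heads.items()}

assert params["pres"].shape == (8, 8, 1)
assert params["shift"].shape == (8, 8, 4)
```

Contrast this with the background module, where the LSTM over the $K$ components forces a sequential loop.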
Algorithm 1: Foreground Inference

Input: image $\pmb{x}$
Output: foreground mask $\alpha$, appearance $\mu^{\mathrm{fg}}$; grid height $H$ and width $W$

$\hat{e}^{\mathrm{img}} = \mathrm{ImageEncoderFg}(\pmb{x})$
$r^{\mathrm{img}} = \mathrm{ResidualConnection}(\hat{e}^{\mathrm{img}})$
$e^{\mathrm{img}} = \mathrm{ResidualEncoder}([\hat{e}^{\mathrm{img}}, r^{\mathrm{img}}])$
for $i \gets 1$ to $HW$ do
&nbsp;&nbsp;/\* The following is performed in parallel \*/
&nbsp;&nbsp;$\rho_{i} = \mathrm{ZPresNet}(e_{i}^{\mathrm{img}})$
&nbsp;&nbsp;$[\mu_i^{\mathrm{depth}}, \sigma_i^{\mathrm{depth}}] = \mathrm{ZDepthNet}(e_i^{\mathrm{img}})$
&nbsp;&nbsp;$[\mu_i^{\mathrm{scale}}, \sigma_i^{\mathrm{scale}}] = \mathrm{ZScaleNet}(e_i^{\mathrm{img}})$
&nbsp;&nbsp;$[\mu_i^{\mathrm{shift}}, \sigma_i^{\mathrm{shift}}] = \mathrm{ZShiftNet}(e_i^{\mathrm{img}})$
&nbsp;&nbsp;$\mathbf{z}_{i}^{\mathrm{pres}} \sim \mathrm{Bern}(\rho_i)$
&nbsp;&nbsp;$\mathbf{z}_{i}^{\mathrm{depth}} \sim \mathcal{N}(\mu_{i}^{\mathrm{depth}}, \sigma_{i}^{\mathrm{depth},2})$
&nbsp;&nbsp;$\mathbf{z}_{i}^{\mathrm{scale}} \sim \mathcal{N}(\mu_{i}^{\mathrm{scale}}, \sigma_{i}^{\mathrm{scale},2})$
&nbsp;&nbsp;$\mathbf{z}_{i}^{\mathrm{shift}} \sim \mathcal{N}(\mu_{i}^{\mathrm{shift}}, \sigma_{i}^{\mathrm{shift},2})$
&nbsp;&nbsp;/\* Rescale local shift to global shift as in SPAIR \*/
&nbsp;&nbsp;$\hat{\mathbf{z}}_i^{\mathrm{scale}} = \sigma(\mathbf{z}_i^{\mathrm{scale}})$
&nbsp;&nbsp;$\hat{\mathbf{z}}_i^{\mathrm{shift}} = \operatorname{rescale}_i(\mathbf{z}_i^{\mathrm{shift}})$
&nbsp;&nbsp;$\mathbf{z}_{i}^{\mathrm{where}} = [\hat{\mathbf{z}}_{i}^{\mathrm{scale}}, \hat{\mathbf{z}}_{i}^{\mathrm{shift}}]$
&nbsp;&nbsp;/\* Extract glimpses with a Spatial Transformer \*/
&nbsp;&nbsp;$\hat{x}_i = \mathrm{ST}(\pmb{x}, \mathbf{z}_i^{\mathrm{where}})$
&nbsp;&nbsp;$[\mu_i^{\mathrm{what}}, \sigma_i^{\mathrm{what}}] = \mathrm{GlimpseEncoder}(\hat{x}_i)$
&nbsp;&nbsp;$\mathbf{z}_{i}^{\mathrm{what}} \sim \mathcal{N}(\mu_{i}^{\mathrm{what}}, \sigma_{i}^{\mathrm{what},2})$
&nbsp;&nbsp;/\* Foreground mask and appearance of glimpse size \*/
&nbsp;&nbsp;$[\pmb{o}_i^{\mathrm{att}}, \alpha_i^{\mathrm{att}}] = \mathrm{GlimpseDecoder}(\mathbf{z}_i^{\mathrm{what}})$
&nbsp;&nbsp;$\hat{\alpha}_{i}^{\mathrm{att}} = \alpha_{i}^{\mathrm{att}} \odot \mathbf{z}_{i}^{\mathrm{pres}}$
&nbsp;&nbsp;$\pmb{y}_i^{\mathrm{att}} = \hat{\alpha}_i^{\mathrm{att}} \odot \pmb{o}_i^{\mathrm{att}}$
&nbsp;&nbsp;/\* Transform both to canvas size \*/
&nbsp;&nbsp;$\hat{\alpha}_{i}^{\mathrm{att}} = \mathrm{ST}^{-1}(\hat{\alpha}_{i}^{\mathrm{att}}, \mathbf{z}_{i}^{\mathrm{where}})$
&nbsp;&nbsp;$\pmb{y}_{i}^{\mathrm{att}} = \mathrm{ST}^{-1}(\pmb{y}_{i}^{\mathrm{att}}, \mathbf{z}_{i}^{\mathrm{where}})$
end
/\* Compute weights for each component \*/
$\pmb{w} = \operatorname{softmax}(-100 \cdot \sigma(\mathbf{z}^{\mathrm{depth}}) \odot \hat{\alpha}^{\mathrm{att}})$
/\* Compute global weighted mask and foreground appearance \*/
$\alpha = \operatorname{sum}(\pmb{w} \odot \hat{\alpha}^{\mathrm{att}})$
$\mu^{\mathrm{fg}} = \operatorname{sum}(\pmb{w} \odot \pmb{y}^{\mathrm{att}})$

Algorithm 2: Rescale $\mathbf{z}_i^{\mathrm{shift}}$

Input: shift latent $\mathbf{z}_i^{\mathrm{shift}}$, cell index $i$, grid height $H$ and width $W$
Output: rescaled shift latent $\hat{\mathbf{z}}_i^{\mathrm{shift}}$

/\* Get width and height index of cell i \*/
$[k, j] = [i \% H, i \div H]$
/\* Center of this cell \*/
$\mathbf{c}_i = [k + 0.5, j + 0.5]$
/\* Get global shift \*/
$\tilde{\mathbf{z}}_i^{\mathrm{shift}} = \mathbf{c}_i + \tanh(\mathbf{z}_i^{\mathrm{shift}})$
/\* Normalize to range (-1, 1) \*/
$\hat{\mathbf{z}}_i^{\mathrm{shift}} = 2 \cdot \tilde{\mathbf{z}}_i^{\mathrm{shift}} / [W, H] - 1$

Algorithm 3: Background Inference

Input: image $\pmb{x}$, initial LSTM states $h_0, c_0$, initial dummy mask $\mathbf{z}_0^m$
Output: background masks $\pi_{k}$, appearance $\mu_k^{\mathrm{bg}}$, for $k = 1, \dots, K$

$e^{\mathrm{img}} = \mathrm{ImageEncoderBg}(\pmb{x})$
for $k \gets 1$ to $K$ do
&nbsp;&nbsp;$h_k, c_k = \mathrm{LSTM}([\mathbf{z}_{k-1}^m, e^{\mathrm{img}}], c_{k-1}, h_{k-1})$
&nbsp;&nbsp;$[\mu_k^m, \sigma_k^m] = \mathrm{PredictMask}(h_k)$
&nbsp;&nbsp;$\mathbf{z}_k^m \sim \mathcal{N}(\mu_k^m, \sigma_k^{m,2})$
&nbsp;&nbsp;/\* Actually decoded in parallel \*/
&nbsp;&nbsp;$\hat{\pi}_k = \mathrm{MaskDecoder}(\mathbf{z}_k^m)$
&nbsp;&nbsp;/\* Stick breaking process as described in GENESIS \*/
&nbsp;&nbsp;$\pi_k = \mathrm{SBP}(\hat{\pi}_{1:k})$
&nbsp;&nbsp;$[\mu_k^c, \sigma_k^c] = \mathrm{CompEncoder}([\pi_k, \pmb{x}])$
&nbsp;&nbsp;$\mathbf{z}_k^c \sim \mathcal{N}(\mu_k^c, \sigma_k^{c,2})$
&nbsp;&nbsp;$\mu_k^{\mathrm{bg}} = \mathrm{CompDecoder}(\mathbf{z}_k^c)$
end

Algorithm 4: Background Generation

Input: initial LSTM states $h_0, c_0$, initial dummy mask $\mathbf{z}_0^m$
Output: background masks $\pi_k$, appearance $\mu_k^{\mathrm{bg}}$, for $k = 1, \dots, K$

for $k \gets 1$ to $K$ do
&nbsp;&nbsp;$h_k, c_k = \mathrm{LSTM}(\mathbf{z}_{k-1}^m, c_{k-1}, h_{k-1})$
&nbsp;&nbsp;$[\mu_k^m, \sigma_k^m] = \mathrm{PredictMaskPrior}(h_k)$
&nbsp;&nbsp;$\mathbf{z}_k^m \sim \mathcal{N}(\mu_k^m, \sigma_k^{m,2})$
&nbsp;&nbsp;/\* Actually decoded in parallel \*/
&nbsp;&nbsp;$\hat{\pi}_k = \mathrm{MaskDecoder}(\mathbf{z}_k^m)$
&nbsp;&nbsp;/\* Stick breaking process as described in GENESIS \*/
&nbsp;&nbsp;$\pi_k = \mathrm{SBP}(\hat{\pi}_{1:k})$
&nbsp;&nbsp;$[\mu_k^c, \sigma_k^c] = \mathrm{PredictComp}(\mathbf{z}_k^m)$
&nbsp;&nbsp;$\mathbf{z}_k^c \sim \mathcal{N}(\mu_k^c, \sigma_k^{c,2})$
&nbsp;&nbsp;$\mu_k^{\mathrm{bg}} = \mathrm{CompDecoder}(\mathbf{z}_k^c)$
end

# D.2 TRAINING REGIME AND HYPERPARAMETERS

For all experiments we use an image size of $128 \times 128$ and a batch size of 12 to 16 depending on memory usage. For the foreground module, we use the RMSProp (Tieleman & Hinton, 2012) optimizer with a learning rate of $1 \times 10^{-5}$, except for Figure 5, for which we use a learning rate of $1 \times 10^{-4}$ as in SPAIR. For the background module, we use the Adam (Kingma & Ba, 2014) optimizer with a learning rate of $1 \times 10^{-3}$.
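The rescaling in Algorithm 2 above can be sketched in scalar Python (a minimal illustration of our own; the actual model applies the same mapping to batched tensors):

```python
import math

def rescale_shift(z_shift, i, H, W):
    """Map a cell-local shift latent to a global shift in (-1, 1),
    following Algorithm 2 (scalar sketch, one cell at a time)."""
    k, j = i % H, i // H               # width and height index of cell i
    cx, cy = k + 0.5, j + 0.5          # center of this cell in grid coordinates
    gx = cx + math.tanh(z_shift[0])    # global shift in grid coordinates
    gy = cy + math.tanh(z_shift[1])
    # normalize to the range (-1, 1)
    return (2.0 * gx / W - 1.0, 2.0 * gy / H - 1.0)
```

Since $\tanh$ keeps the offset within one cell of the cell center, the returned shift always stays inside the normalized canvas.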
We use gradient clipping with a maximum norm of 1.0. For quantitative results, SPACE is trained for up to 160,000 steps. For Atari games, we find it beneficial to keep $\alpha$ fixed for the first several thousand steps, varying the actual value and number of steps per game. This allows both the foreground and the background module to learn in the early stage of training.

We list our hyperparameters for the large 3D-Room dataset and for joint training on 10 static Atari games below. Hyperparameters for the other experiments are similar, but are finetuned for each dataset individually. In the tables below, $(m\to n):(p\to q)$ denotes annealing the hyperparameter value from $m$ to $n$, starting from step $p$ until step $q$.

3D Room Large
| Name | Symbol | Value |
| --- | --- | --- |
| $z^{\mathrm{pres}}$ prior prob | $\rho$ | (0.1 → 0.01): (4000 → 10000) |
| $z^{\mathrm{scale}}$ prior mean | $\mu^{\mathrm{scale}}$ | (-1.0 → -2.0): (10000 → 20000) |
| $z^{\mathrm{scale}}$ prior stdev | $\sigma^{\mathrm{scale}}$ | 0.1 |
| $z^{\mathrm{shift}}$ prior | $\mu^{\mathrm{shift}}, \sigma^{\mathrm{shift}}$ | $\mathcal{N}(0, I)$ |
| $z^{\mathrm{depth}}$ prior | $\mu^{\mathrm{depth}}, \sigma^{\mathrm{depth}}$ | $\mathcal{N}(0, I)$ |
| $z^{\mathrm{what}}$ prior | $\mu^{\mathrm{what}}, \sigma^{\mathrm{what}}$ | $\mathcal{N}(0, I)$ |
| foreground stdev | $\sigma_{\mathrm{fg}}$ | 0.15 |
| background stdev | $\sigma_{\mathrm{bg}}$ | 0.15 |
| component number | $K$ | 5 |
| gumbel-softmax temperature | $\tau$ | (2.5 → 0.5): (0 → 20000) |
| #steps to fix $\alpha$ |  | N/A |
| fixed $\alpha$ value |  | N/A |
| boundary loss |  | Yes |
| turn off boundary loss at step |  | 100000 |
+ +Joint Training on 10 Atari Games + +
| Name | Symbol | Value |
| --- | --- | --- |
| $z^{\mathrm{pres}}$ prior prob | $\rho$ | $1 \times 10^{-10}$ |
| $z^{\mathrm{scale}}$ prior mean | $\mu^{\mathrm{scale}}$ | -2.5 |
| $z^{\mathrm{scale}}$ prior stdev | $\sigma^{\mathrm{scale}}$ | 0.1 |
| $z^{\mathrm{shift}}$ prior | $\mu^{\mathrm{shift}}, \sigma^{\mathrm{shift}}$ | $\mathcal{N}(0, I)$ |
| $z^{\mathrm{depth}}$ prior | $\mu^{\mathrm{depth}}, \sigma^{\mathrm{depth}}$ | $\mathcal{N}(0, I)$ |
| $z^{\mathrm{what}}$ prior | $\mu^{\mathrm{what}}, \sigma^{\mathrm{what}}$ | $\mathcal{N}(0, I)$ |
| foreground stdev | $\sigma_{\mathrm{fg}}$ | 0.20 |
| background stdev | $\sigma_{\mathrm{bg}}$ | 0.10 |
| component number | $K$ | 3 |
| gumbel-softmax temperature | $\tau$ | (2.5 → 1.0): (0 → 10000) |
| #steps to fix $\alpha$ |  | 4000 |
| fixed $\alpha$ value |  | 0.1 |
| boundary loss |  | No |
| turn off boundary loss at step |  | N/A |
# D.3 MODEL ARCHITECTURE

Here we describe the architecture of our $16 \times 16$ SPACE model. The model for $8 \times 8$ grid cells is the same but with a stride-2 convolution for the last layer of the image encoder.

All modules that output distribution parameters are implemented with a single fully connected or convolutional layer with the appropriate output size. Image encoders are fully convolutional networks that output a feature map of shape $H \times W$, and the glimpse encoder comprises convolutional layers followed by a final linear layer that computes the parameters of a Gaussian distribution. For the glimpse decoder of the foreground module and the mask decoder of the background module we use the sub-pixel convolution layer (Shi et al., 2016). Following GENESIS (Engelcke et al., 2019) and IODINE (Greff et al., 2019), we adopt the Spatial Broadcast Network (Watters et al., 2019) as the component decoder to decode $\mathbf{z}_k^c$ into background components.

For inference and generation of the background module, the dependence of $\mathbf{z}_k^m$ on $\mathbf{z}_{1:k-1}^m$ is implemented with LSTMs with hidden size 64. The dependence of $\mathbf{z}_k^c$ on $\mathbf{z}_k^m$ is implemented with an MLP with two hidden layers of 64 units each. We apply softplus when computing standard deviations for all Gaussian distributions, and apply sigmoid when computing reconstructions and masks. We use either Group Normalization (GN) (Wu & He, 2018) and CELU (Barron, 2017) or Batch Normalization (BN) (Ioffe & Szegedy, 2015) and ELU (Clevert et al., 2016) depending on the module type.

The rest of the architecture details are described below. In the following tables, $\mathrm{ConvSub}(n)$ denotes a sub-pixel convolution layer implemented as a stride-1 convolution and a PyTorch PixelShuffle(n) layer, and $\mathrm{GN}(n)$ denotes Group Normalization with $n$ groups.
| Name | Value | Comment |
| --- | --- | --- |
| $z^{\mathrm{depth}}$ dim | 1 |  |
| $z^{\mathrm{pres}}$ dim | 1 |  |
| $z^{\mathrm{scale}}$ dim | 2 | for x and y axis |
| $z^{\mathrm{shift}}$ dim | 2 | for x and y axis |
| $z^{\mathrm{what}}$ dim | 32 |  |
| $z^m$ dim | 32 |  |
| $z^c$ dim | 32 |  |
| glimpse shape | (32, 32) | for $o^{\mathrm{att}}$, $\alpha^{\mathrm{att}}$ |
| image shape | (128, 128) |  |
+ +Foreground Image Encoder + +
| Layer | Size/Ch. | Stride | Norm./Act. |
| --- | --- | --- | --- |
| Input | 3 |  |  |
| Conv 4 × 4 | 16 | 2 | GN(4)/CELU |
| Conv 4 × 4 | 32 | 2 | GN(8)/CELU |
| Conv 4 × 4 | 64 | 2 | GN(8)/CELU |
| Conv 3 × 3 | 128 | 1 | GN(16)/CELU |
| Conv 3 × 3 | 256 | 1 | GN(32)/CELU |
| Conv 1 × 1 | 128 | 1 | GN(16)/CELU |
+ +Residual Connection + +
| Layer | Size/Ch. | Stride | Norm./Act. |
| --- | --- | --- | --- |
| Input | 128 |  |  |
| Conv 3 × 3 | 128 | 1 | GN(16)/CELU |
| Conv 3 × 3 | 128 | 1 | GN(16)/CELU |
+ +Residual Encoder + +
| Layer | Size/Ch. | Stride | Norm./Act. |
| --- | --- | --- | --- |
| Input | 128 + 128 |  |  |
| Conv 3 × 3 | 128 | 1 | GN(16)/CELU |
+ +Glimpse Encoder + +
| Layer | Size/Ch. | Stride | Norm./Act. |
| --- | --- | --- | --- |
| Input | 3 |  |  |
| Conv 3 × 3 | 16 | 1 | GN(4)/CELU |
| Conv 4 × 4 | 32 | 2 | GN(8)/CELU |
| Conv 3 × 3 | 32 | 1 | GN(4)/CELU |
| Conv 4 × 4 | 64 | 2 | GN(8)/CELU |
| Conv 4 × 4 | 128 | 2 | GN(8)/CELU |
| Conv 4 × 4 | 256 | 1 | GN(16)/CELU |
| Linear | 32 + 32 |  |  |
+ +Glimpse Decoder + +
| Layer | Size/Ch. | Stride | Norm./Act. |
| --- | --- | --- | --- |
| Input | 32 |  |  |
| Conv 1 × 1 | 256 | 1 | GN(16)/CELU |
| ConvSub(2) | 128 | 1 | GN(16)/CELU |
| Conv 3 × 3 | 128 | 1 | GN(16)/CELU |
| ConvSub(2) | 128 | 1 | GN(16)/CELU |
| Conv 3 × 3 | 128 | 1 | GN(16)/CELU |
| ConvSub(2) | 64 | 1 | GN(8)/CELU |
| Conv 3 × 3 | 64 | 1 | GN(8)/CELU |
| ConvSub(2) | 32 | 1 | GN(8)/CELU |
| Conv 3 × 3 | 32 | 1 | GN(8)/CELU |
| ConvSub(2) | 16 | 1 | GN(4)/CELU |
| Conv 3 × 3 | 16 | 1 | GN(4)/CELU |
+ +Background Image Encoder + +
| Layer | Size/Ch. | Stride | Norm./Act. |
| --- | --- | --- | --- |
| Input | 3 |  |  |
| Conv 3 × 3 | 64 | 2 | BN/ELU |
| Conv 3 × 3 | 64 | 2 | BN/ELU |
| Conv 3 × 3 | 64 | 2 | BN/ELU |
| Conv 3 × 3 | 64 | 2 | BN/ELU |
| Flatten |  |  |  |
| Linear | 64 |  | ELU |
+ +Mask Decoder + +
| Layer | Size/Ch. | Stride | Norm./Act. |
| --- | --- | --- | --- |
| Input | 32 |  |  |
| Conv 1 × 1 | 256 | 1 | GN(16)/CELU |
| ConvSub(4) | 256 | 1 | GN(16)/CELU |
| Conv 3 × 3 | 256 | 1 | GN(16)/CELU |
| ConvSub(2) | 128 | 1 | GN(16)/CELU |
| Conv 3 × 3 | 128 | 1 | GN(16)/CELU |
| ConvSub(4) | 64 | 1 | GN(8)/CELU |
| Conv 3 × 3 | 64 | 1 | GN(8)/CELU |
| ConvSub(4) | 16 | 1 | GN(4)/CELU |
| Conv 3 × 3 | 16 | 1 | GN(4)/CELU |
| Conv 3 × 3 | 16 | 1 | GN(4)/CELU |
| Conv 3 × 3 | 1 | 1 |  |
+ +Component Encoder + +
| Layer | Size/Ch. | Stride | Norm./Act. |
| --- | --- | --- | --- |
| Input | 3+1 (RGB+mask) |  |  |
| Conv 3 × 3 | 32 | 2 | BN/ELU |
| Conv 3 × 3 | 32 | 2 | BN/ELU |
| Conv 3 × 3 | 64 | 2 | BN/ELU |
| Conv 3 × 3 | 64 | 2 | BN/ELU |
| Flatten |  |  |  |
| Linear | 32+32 |  |  |
+ +Component Decoder + +
| Layer | Size/Ch. | Stride | Norm./Act. |
| --- | --- | --- | --- |
| Input | 32 (1d) |  |  |
| Spatial Broadcast | 32+2 (3d) |  |  |
| Conv 3 × 3 | 32 | 1 | BN/ELU |
| Conv 3 × 3 | 32 | 1 | BN/ELU |
| Conv 3 × 3 | 32 | 1 | BN/ELU |
| Conv 3 × 3 | 3 | 1 |  |
# D.4 BASELINES

Here we give the details of the background decoder used when training SPAIR (both full-image and patch-wise training). The foreground and background image encoders are the same as those of SPACE, with the only difference being that the inferred latents are conditioned on the previous cells' latents as described in Section 2.2. For the background image encoder, we add an additional linear layer so that the encoded background latent is one-dimensional.

SPAIR Background Decoder
| Layer | Size/Ch. | Stride | Norm./Act. |
| --- | --- | --- | --- |
| Input | 32 |  |  |
| Conv 1 × 1 | 256 | 1 | GN(16)/CELU |
| ConvSub(4) | 256 | 1 | GN(16)/CELU |
| Conv 3 × 3 | 256 | 1 | GN(16)/CELU |
| ConvSub(4) | 128 | 1 | GN(16)/CELU |
| Conv 3 × 3 | 128 | 1 | GN(16)/CELU |
| ConvSub(2) | 64 | 1 | GN(8)/CELU |
| Conv 3 × 3 | 64 | 1 | GN(8)/CELU |
| ConvSub(4) | 16 | 1 | GN(4)/CELU |
| Conv 3 × 3 | 16 | 1 | GN(4)/CELU |
| Conv 3 × 3 | 16 | 1 | GN(4)/CELU |
| Conv 3 × 3 | 3 | 1 |  |
+ +SPAIR Background Encoder For Patch Training + +
| Layer | Size/Ch. | Stride | Norm./Act. |
| --- | --- | --- | --- |
| Input | 3 |  |  |
| Conv 2 × 2 | 16 | 2 | GN(4)/CELU |
| Conv 2 × 2 | 32 | 2 | GN(8)/CELU |
| Conv 2 × 2 | 64 | 2 | GN(8)/CELU |
| Conv 2 × 2 | 128 | 2 | GN(16)/CELU |
| Conv 2 × 2 | 32 | 2 | GN(4)/CELU |
+ +SPAIR Background Decoder For Patch Training + +
| Layer | Size/Ch. | Stride | Norm./Act. |
| --- | --- | --- | --- |
| Input | 16 |  |  |
| Conv 1 × 1 | 256 | 1 | GN(16)/CELU |
| Conv 1 × 1 | 2048 | 1 |  |
| ConvSub(4) | 128 | 1 | GN(16)/CELU |
| Conv 3 × 3 | 128 | 1 | GN(16)/CELU |
| Conv 1 × 1 | 256 | 1 |  |
| ConvSub(2) | 64 | 1 | GN(8)/CELU |
| Conv 3 × 3 | 64 | 1 | GN(8)/CELU |
| Conv 1 × 1 | 256 | 1 |  |
| ConvSub(4) | 16 | 1 | GN(4)/CELU |
| Conv 3 × 3 | 16 | 1 | GN(4)/CELU |
| Conv 3 × 3 | 16 | 1 | GN(4)/CELU |
| Conv 3 × 3 | 3 | 1 |  |
For IODINE, we use our own implementation following the details described in (Greff et al., 2019). For GENESIS, we also use our own implementation following the same architecture as in (Engelcke et al., 2019), but the details of the individual networks are similar to those of SPACE's background module.

# E DATASET DETAILS

Atari. For each game, we sample 60,000 random images from a pretrained agent (Wu et al., 2016). We split the images into 50,000 for the training set, 5,000 for the validation set, and 5,000 for the test set. Each image is preprocessed to a size of $128 \times 128$ pixels with BGR color channels. We present results for the following games: Space Invaders, Air Raid, River Raid, and Montezuma's Revenge.

We also train our model jointly on a dataset of 10 games, with 8,000 training images, 1,000 validation images, and 1,000 test images per game. We use the following games: Asterix, Atlantis, Carnival, Double Dunk, Kangaroo, Montezuma's Revenge, Pacman, Pooyan, Qbert, and Space Invaders.

Room 3D. We use MuJoCo (Todorov et al., 2012) to generate this dataset. Each image consists of a walled enclosure with a random number of objects on the floor. The possible objects are randomly sized spheres, cubes, and cylinders. The small 3D-Room dataset has 4-8 objects and the large 3D-Room dataset has 18-24 objects. The colors of the objects are chosen randomly from 8 different colors, and the colors of the background (wall, ground, sky) are chosen randomly from 5 different colors. The camera angle is also selected randomly. We use a training set of 63,000 images, a validation set of 7,000 images, and a test set of 7,000 images. We use a 2-D projection from the camera to determine the ground-truth bounding boxes of the objects so that we can report the average precision of the different models.
\ No newline at end of file diff --git a/spaceunsupervisedobjectorientedscenerepresentationviaspatialattentionanddecomposition/images.zip b/spaceunsupervisedobjectorientedscenerepresentationviaspatialattentionanddecomposition/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..db449d9d6ff136b68a20310ee61f0cfa151d055a --- /dev/null +++ b/spaceunsupervisedobjectorientedscenerepresentationviaspatialattentionanddecomposition/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c6c1e40113733307a401f1d8d1a85ba759c2487bfa1f543d760ab9ff92b5da3 +size 1687712 diff --git a/spaceunsupervisedobjectorientedscenerepresentationviaspatialattentionanddecomposition/layout.json b/spaceunsupervisedobjectorientedscenerepresentationviaspatialattentionanddecomposition/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..a7479a9f71029dd534a8b68eb68e1fd7d39d3d19 --- /dev/null +++ b/spaceunsupervisedobjectorientedscenerepresentationviaspatialattentionanddecomposition/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c8a07989659c6259019c81fc7b048015312d1be9e9f8c7ffaeda80307be45562 +size 526469 diff --git a/spanrecoveryfordeepneuralnetworkswithapplicationstoinputobfuscation/87954272-f891-4984-a9a5-e965f577a53e_content_list.json b/spanrecoveryfordeepneuralnetworkswithapplicationstoinputobfuscation/87954272-f891-4984-a9a5-e965f577a53e_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..e0ab3e97176937add691c02179e33ece369d9342 --- /dev/null +++ b/spanrecoveryfordeepneuralnetworkswithapplicationstoinputobfuscation/87954272-f891-4984-a9a5-e965f577a53e_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:778b817550d679df0055a099a625095b9acbd026ca01ea595c2c1a941b356145 +size 134352 diff --git a/spanrecoveryfordeepneuralnetworkswithapplicationstoinputobfuscation/87954272-f891-4984-a9a5-e965f577a53e_model.json 
b/spanrecoveryfordeepneuralnetworkswithapplicationstoinputobfuscation/87954272-f891-4984-a9a5-e965f577a53e_model.json new file mode 100644 index 0000000000000000000000000000000000000000..6e6bd1a4622de15004ce61ef8c984c9c1282fa72 --- /dev/null +++ b/spanrecoveryfordeepneuralnetworkswithapplicationstoinputobfuscation/87954272-f891-4984-a9a5-e965f577a53e_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0bf3fa251c315e5b099cc75da6139a53e1d74879d2f61c93126c8c1863eff1bf +size 154320 diff --git a/spanrecoveryfordeepneuralnetworkswithapplicationstoinputobfuscation/87954272-f891-4984-a9a5-e965f577a53e_origin.pdf b/spanrecoveryfordeepneuralnetworkswithapplicationstoinputobfuscation/87954272-f891-4984-a9a5-e965f577a53e_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..41d92906a5eace4fbe55eb60a6e59de055654e6c --- /dev/null +++ b/spanrecoveryfordeepneuralnetworkswithapplicationstoinputobfuscation/87954272-f891-4984-a9a5-e965f577a53e_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f3352f6fe349780527d75b73bb6d43df66dbd9ce83de14eb63b7753c1f42bcb2 +size 507596 diff --git a/spanrecoveryfordeepneuralnetworkswithapplicationstoinputobfuscation/full.md b/spanrecoveryfordeepneuralnetworkswithapplicationstoinputobfuscation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..29011d787f596b91b7c470a74f6b0a9e520d58b3 --- /dev/null +++ b/spanrecoveryfordeepneuralnetworkswithapplicationstoinputobfuscation/full.md @@ -0,0 +1,489 @@ +# SPAN RECOVERY FOR DEEP NEURAL NETWORKS WITH APPLICATIONS TO INPUT OBFUSCATION + +Rajesh Jayaram + +Computer Science Department + +Carnegie Mellon University + +Pittsburgh, PA 15213, USA + +rkjayara@cs.cmu.edu + +David Woodruff + +Computer Science Department + +Carnegie Mellon University + +Pittsburgh, PA 15213, USA + +dwoodruf@cs.cmu.edu + +Qiuyi Zhang + +Google Brain + +qiuyiz@google.com + +# ABSTRACT + +The tremendous success of deep neural 
networks has motivated the need to better understand the fundamental properties of these networks, but many of the theoretical results proposed have only been for shallow networks. In this paper, we study an important primitive for understanding the meaningful input space of a deep network: span recovery. For $k < n$ , let $\mathbf{A} \in \mathbb{R}^{k \times n}$ be the innermost weight matrix of an arbitrary feed forward neural network $M : \mathbb{R}^n \to \mathbb{R}$ , so $M(x)$ can be written as $M(x) = \sigma(\mathbf{A}x)$ , for some network $\sigma : \mathbb{R}^k \to \mathbb{R}$ . The goal is then to recover the row span of $\mathbf{A}$ given only oracle access to the value of $M(x)$ . We show that if $M$ is a multi-layered network with ReLU activation functions, then partial recovery is possible: namely, we can provably recover $k/2$ linearly independent vectors in the row span of $\mathbf{A}$ using poly( $n$ ) non-adaptive queries to $M(x)$ . Furthermore, if $M$ has differentiable activation functions, we demonstrate that full span recovery is possible even when the output is first passed through a sign or $0/1$ thresholding function; in this case our algorithm is adaptive. Empirically, we confirm that full span recovery is not always possible, but only for unrealistically thin layers. For reasonably wide networks, we obtain full span recovery on both random networks and networks trained on MNIST data. Furthermore, we demonstrate the utility of span recovery as an attack by inducing neural networks to misclassify data obfuscated by controlled random noise as sensical inputs. + +# 1 INTRODUCTION + +Consider the general framework in which we are given an unknown function $f: \mathbb{R}^n \to \mathbb{R}$ , and we want to learn properties about this function given only access to the value $f(x)$ for different inputs $x$ . 
There are many contexts where this framework is applicable, such as blackbox optimization in which we are learning to optimize $f(x)$ (Djolonga et al., 2013), PAC learning in which we are learning to approximate $f(x)$ (Denis, 1998), adversarial attacks in which we are trying to find adversarial inputs to $f(x)$ (Szegedy et al., 2013), or structure recovery in which we are learning the structure of $f(x)$ . For example in the case when $f(x)$ is a neural network, one might want to recover the underlying weights or architecture (Arora et al., 2014; Zhang et al., 2017). In this work, we consider the setting when $f(x) = M(x)$ is a neural network that admits a latent low-dimensional structure, namely $M(x) = \sigma(\mathbf{A}x)$ where $\mathbf{A} \in \mathbb{R}^{k \times n}$ is a rank $k$ matrix for some $k < n$ , and $\sigma: \mathbb{R}^k \to \mathbb{R}$ is some neural network. In this setting, we focus primarily on the goal of recovering the row-span of the weight matrix $\mathbf{A}$ . We remark that all our results generalize in a straightforward manner to the case when $\mathbf{A}$ is rank $r < k$ . + +Span recovery of general functions $f(x) = g(\mathbf{A}x)$ , where $g$ is arbitrary, has been studied in some contexts, and is used to gain important information about the underlying function $f$ . By learning Span(A), we in essence are capturing the relevant subspace of the input to $f$ ; namely, $f$ behaves identically on $x$ as it does on the projection of $x$ onto the row-span of A. In statistics, this is known as effective dimension reduction or the multi-index model Li (1991); Xia et al. (2002). Another important motivation for span recovery is for designing adversarial attacks. Given the span of A, we compute the kernel of A, which can be used to fool the function into behaving incorrectly on inputs which are perturbed by vectors in the kernel. 
Specifically, if $x$ is a legitimate input correctly classified by $f$ and $y$ is a large random vector in the kernel of A, then $x + y$ will be indistinguishable from noise but we will have $f(x) = f(x + y)$ . + +Several works have considered the problem from an approximation-theoretic standpoint, where the goal is to output a hypothesis function $\widetilde{f}$ which approximates $f$ well on a bounded domain. For instance, in the case that $\mathbf{A} \in \mathbb{R}^n$ is a rank 1 matrix and $g(\mathbf{A}x)$ is a smooth function with bounded derivatives, Cohen et al. (2012) gives an adaptive algorithm to approximate $f$ . Their results also give an approximation $\widetilde{\mathbf{A}}$ to $\mathbf{A}$ , under the assumption that $\mathbf{A}$ is a stochastic vector $(\mathbf{A}_i \geq 0$ for each $i$ and $\sum_{i} \mathbf{A}_i = 1$ ). Extending this result to more general rank $k$ matrices $\mathbf{A} \in \mathbb{R}^{k \times n}$ , Tyagi & Cevher (2014) and Fornasier et al. (2012) give algorithms with polynomial sample complexity to find approximations $\widetilde{f}$ to twice differentiable functions $f$ . However, their results do not provide any guarantee that the original matrix $\mathbf{A}$ itself or a good approximation to its span will be recovered. Specifically, the matrix $\widetilde{\mathbf{A}}$ used in the hypothesis function $\widetilde{f}(x) = \widetilde{g}(\widetilde{\mathbf{A}}x)$ of Tyagi & Cevher (2014) only has moderate correlation with the true row span of $\mathbf{A}$ , and always admits some constant factor error (which can translate into very large error in any subspace approximation). + +Furthermore, all aforementioned works require the strong assumption that the matrix of gradients is well-conditioned (and full rank) in order to obtain good approximations $\tilde{f}$ . 
In contrast, when $f(x)$ is a non-differentiable ReLU deep network with only mild assumptions on the weight matrices, we prove that the gradient matrix has rank at least $k / 2$ , which significantly strengthens span recovery guarantees since we do not make any assumptions on the gradient matrix. Finally, Hardt & Woodruff (2013) gives an adaptive approximate span recovery algorithm with $\mathrm{poly}(n)$ samples under the assumption that the function $g$ satisfies a norm-preserving condition, which is restrictive and need not (and does not) hold for the deep neural networks we consider here. + +On the empirical side, the experimental results of Tyagi & Cevher (2014) for function approximation were only carried out for simple one-layer functions, such as the logistic function in one dimension (where $\mathbf{A}$ has rank $k = 1$ ), and on linear functions $g^{T}(\mathbf{A}x + b)$ , where $g \in \mathbb{R}^k$ has i.i.d. Gaussian entries. Moreover, their experiments only attempted to recover approximations $\widetilde{\mathbf{A}}$ to $\mathbf{A}$ when $\mathbf{A}$ was orthonormal. In addition, Fornasier et al. (2012) experimentally considers the approximation problem for $f$ when $f(\mathbf{A}x)$ is a third degree polynomial of the input. This leaves an experimental gap in understanding the performance of span recovery algorithms on non-smooth, multi-layer deep neural networks. + +When $f(x)$ is a neural network, there have been many results that allow for weight or architecture recovery under additional assumptions; however nearly all such results are for shallow networks. Arora et al. (2014) shows that layerwise learning can recover the architecture of random sparse neural networks. Janzamin et al. (2015) applies tensor methods to recover the weights of a two-layer neural network with certain types of smooth activations and vector-valued output, whereas Ge et al. (2019); Bakshi et al. (2019) obtain weight recovery for ReLU activations. Zhang et al. 
(2017) shows that SGD can learn the weights of two-layer neural networks with some specific activations. There is also a line of work on improperly learning two-layer networks, where the algorithm outputs an arbitrary hypothesis function which behaves similarly to the network under a fixed distribution (Goel et al., 2017; Goel & Klivans, 2019).

Learning properties of the network can also lead to so-called model extraction attacks or enhance classical adversarial attacks on neural networks (Jagielski et al., 2019). Adversarial attacks are often differentiated into two settings: the whitebox setting, where the trained network weights are known, and the blackbox setting, where the network weights are unknown but attacks can still be achieved using external information, such as knowledge of the dataset, training algorithm, network architecture, or network predictions. Whitebox attacks are well-studied and usually use explicit gradients or optimization procedures to compute adversarial inputs for various tasks such as classification and reinforcement learning (Szegedy et al., 2013; Huang et al., 2017; Goodfellow et al., 2014). However, blackbox attacks are more realistic, and it is clear that model recovery can enhance these attacks. The work of Papernot et al. (2017) attacks practical neural networks by observing predictions on adaptively chosen inputs, training a substitute neural network on the observed data, and applying a whitebox attack on the substitute. This setting, nicknamed the practical blackbox setting (Chen et al., 2017), is what we work in, as we only observe adaptively chosen predictions without knowledge of the network architecture, dataset, or algorithms. We note that, perhaps surprisingly, some of our algorithms are in fact entirely non-adaptive.
# 1.1 OUR CONTRIBUTIONS

In this paper, we provably show that span recovery for deep neural networks can be accomplished efficiently and with high precision using $\mathrm{poly}(n)$ function evaluations, even when the networks have $\mathrm{poly}(n)$ layers and the output of the network is a scalar in some finite set. Specifically, for deep networks $M(x): \mathbb{R}^n \to \mathbb{R}$ with ReLU activation functions, we prove that we can recover a subspace $V \subset \operatorname{Span}(\mathbf{A})$ of dimension at least $k/2$ with polynomially many non-adaptive queries. First, we use a volume bounding technique to show that a ReLU network has sufficiently large piece-wise linear sections and that gradient information can be derived from function evaluations. Next, by using a novel combinatorial analysis of the sign patterns of the ReLU network along with facts in polynomial algebra, we show that the gradient matrix has sufficient rank to allow for partial span recovery.

Theorem 3.4 (informal) Suppose we have the network $M(x) = w^{T}\phi(\mathbf{W}_{1}\phi(\mathbf{W}_{2}\phi(\cdots \mathbf{W}_{d}\phi(\mathbf{A}x)\cdots)))$, where $\mathbf{A} \in \mathbb{R}^{k \times n}$ is rank $k$, $\phi$ is the ReLU, and $\mathbf{W}_i \in \mathbb{R}^{k_i \times k_{i+1}}$ are weight matrices, with $k_i$ possibly much smaller than $k$. Then, under mild assumptions, there is a non-adaptive algorithm that makes $O(kn\log k)$ queries to $M(x)$ and returns in $\mathrm{poly}(n,k)$ time a subspace $V \subseteq \operatorname{Span}(\mathbf{A})$ of dimension at least $\frac{k}{2}$ with probability $1 - \delta$.
Moreover, our algorithm is non-adaptive, which means that the points $x_{i}$ at which $M(x_{i})$ needs to be evaluated can be chosen in advance and span recovery will succeed with high probability. This has the benefit of being parallelizable, and possibly more difficult to detect when being used for an adversarial attack. In addition, we note that this result generalize to the case when $\mathbf{A}$ is rank $r < k$ , in which setting our guarantee will instead be that we recover a subspace of dimension at least $\frac{r}{2}$ contained within the span of $A$ . + +In contrast with previous papers, we do not assume that the gradient matrix has large rank; rather our main focus and novelty is to prove this statement under minimal assumptions. We require only two mild assumptions on the weight matrices. The first assumption is on the orthant probabilities of the matrix $\mathbf{A}$ , namely that the distribution of sign patterns of a vector $\mathbf{A}g$ , where $g \sim \mathcal{N}(0, \mathbb{I}_n)$ , is not too far from uniform. Two examples of matrices which satisfy this property are random matrices and matrices with nearly orthogonal rows. The second assumption is a non-degeneracy condition on the matrices $\mathbf{W}_i$ , which enforces that products of rows of the matrices $\mathbf{W}_i$ result in vectors with non-zero coordinates. + +Our next result is to show that full span recovery is possible for thresholded networks $M(x)$ with twice differentiable activation functions in the inner layers, when the network has a $0/1$ threshold function in the last layer and becomes therefore non-differentiable, i.e., $M(x) \in \{0,1\}$ . Since the activation functions can be arbitrarily non-linear, our algorithm only provides an approximation of the true subspace $\operatorname{Span}(\mathbf{A})$ , although the distance between the subspace we output and $\operatorname{Span}(\mathbf{A})$ can be made exponentially small. 
We need only assume bounds on the first and second derivatives of the activation functions, as well as the fact that we can find inputs $x \in \mathbb{R}^n$ such that $M(x) \neq 0$ with good probability, and that the gradients of the network near certain points where the threshold evaluates to one are not arbitrarily small. We refer the reader to Section 4 for further details on these assumptions. Under these assumptions, we can apply a novel gradient-estimation scheme to approximately recover the gradient of $M(x)$ and the span of $\mathbf{A}$.

Theorem 4.3 (informal) Suppose we have the network $M(x) = \tau(\sigma(\mathbf{A}x))$, where $\tau: \mathbb{R} \to \{0,1\}$ is a threshold function and $\sigma: \mathbb{R}^k \to \mathbb{R}$ is a neural network with twice differentiable activation functions, and such that $M$ satisfies the conditions sketched above (formally defined in Section 4). Then there is an algorithm that runs in $\mathrm{poly}(N)$ time, making at most $\mathrm{poly}(N)$ queries to $M(x)$, where $N = \mathrm{poly}(n,k,\log(\frac{1}{\epsilon}),\log(\frac{1}{\delta}))$, and returns with probability $1 - \delta$ a subspace $V \subset \mathbb{R}^n$ of dimension $k$ such that for any $x \in V$, we have

$$
\|\mathbf{P}_{\operatorname{Span}(\mathbf{A})} x\|_{2} \geq (1 - \epsilon) \|x\|_{2}
$$

where $\mathbf{P}_{\operatorname{Span}(\mathbf{A})}$ is the orthogonal projection onto the span of $\mathbf{A}$.

Empirically, we verify our theoretical findings by running our span recovery algorithms on randomly generated networks and trained networks. First, we confirm that full recovery is not possible for all architectures when the network layer sizes are small. This implies that the standard assumption that the gradient matrix is full rank does not always hold. However, we see that realistic network architectures lend themselves easily to full span recovery on both random and trained instances.
We emphasize that this holds even when the network has many small layers, for example a ReLU network that has 6 hidden layers with [784, 80, 40, 30, 20, 10] nodes, in that order, can still admit full span recovery of the rank 80 weight matrix. + +Furthermore, we observe that we can effortlessly apply input obfuscation attacks after a successful span recovery and cause misclassifications by tricking the network into classifying noise as normal inputs with high confidence. Specifically, we can inject large amounts of noise in the null space of $\mathbf{A}$ to arbitrarily obfuscate the input without changing the output of the network. We demonstrate the utility of this attack on MNIST data, where we use span recovery to generate noisy images that are classified by the network as normal digits with high confidence. We note that this veers away from traditional adversarial attacks, which aim to drastically change the network output with humanly-undetectable changes in the input. In our case, we attempt the arguably more challenging problem of drastically changing the input without affecting the output of the network. + +# 2 PRELIMINARIES + +Notation For a vector $x \in \mathbb{R}^k$ , the sign pattern of $x$ , denoted $\mathrm{sign}(x) \in \{0,1\}^k$ , is the indicator vector for the nonzero coordinates of $x$ . Namely, $\mathrm{sign}(x)_i = 1$ if $x_i \neq 0$ and $\mathrm{sign}(x)_i = 0$ otherwise. Given a matrix $\mathbf{A} \in \mathbb{R}^{n \times m}$ , we denote its singular values as $\sigma_{\min}(\mathbf{A}) = \sigma_{\min\{n,m\}}, \ldots, \sigma_1(\mathbf{A}) = \sigma_{\max}(\mathbf{A})$ . The condition number of $\mathbf{A}$ is denoted $\kappa(\mathbf{A}) = \sigma_{\max}(\mathbf{A}) / \sigma_{\min}(\mathbf{A})$ . We let $\mathbb{I}_n \in \mathbb{R}^{n \times n}$ denote the $n \times n$ identity matrix. For a subspace $V \subset \mathbb{R}^n$ , we write $\mathbf{P}_V \in \mathbb{R}^{n \times n}$ to denote the orthogonal projection matrix onto $V$ . 
If $\mu \in \mathbb{R}^n$ and $\Sigma \in \mathbb{R}^{n \times n}$ is a PSD matrix, we write $\mathcal{N}(\mu, \Sigma)$ to denote the multi-variate Gaussian distribution with mean $\mu$ and covariance $\Sigma$.

Gradient Information For any function $f(x) = g(\mathbf{A}x)$, note that $\nabla f(x) = \mathbf{A}^\top \nabla g(\mathbf{A}x)$ must be a vector in the row span of $\mathbf{A}$. Therefore, span recovery boils down to understanding the span of the gradient matrix as $x$ varies. Specifically, note that if we can find points $x_1, \ldots, x_k$ such that $\{\nabla f(x_i)\}$ are linearly independent, then the full span of $\mathbf{A}$ can be recovered using the span of the gradients. To our knowledge, previous span recovery algorithms rely heavily on the assumption that the gradient matrix is full rank and in fact well-conditioned. Specifically, for some distribution $\mathcal{D}$, it is assumed that $H_f = \int_{x \sim \mathcal{D}} \nabla f(x) \nabla f(x)^\top dx$ is a rank $k$ matrix with minimum non-zero singular value bounded below by $\alpha$, and the number of gradient or function evaluations needed depends inverse polynomially on $\alpha$. In contrast, in this paper, when $f(x)$ is a neural network, we provably show that $H_f$ is a matrix of sufficiently high rank or large minimum non-zero singular value under mild assumptions, using tools from polynomial algebra.

# 3 DEEP NETWORKS WITH RELU ACTIVATIONS

In this section, we demonstrate that partial span recovery is possible for deep ReLU networks. Specifically, we consider neural networks $M(x): \mathbb{R}^n \to \mathbb{R}$ of the form

$$
M(x) = w^{T} \phi(\mathbf{W}_{1} \phi(\mathbf{W}_{2} \phi(\cdots \mathbf{W}_{d} \phi(\mathbf{A} x) \cdots)))
$$

where $\phi(x)_i = \max\{x_i, 0\}$ is the ReLU (applied coordinate-wise to each of its inputs), $\mathbf{W}_i \in \mathbb{R}^{k_i \times k_{i+1}}$, $w \in \mathbb{R}^{k_d}$, and $\mathbf{A}$ has rank $k$.
We note that $k_{i}$ can be much smaller than $k$. In order to obtain partial span recovery, we make the following assumptions, parameterized by a value $\gamma > 0$ (our algorithms will be polynomial in $1/\gamma$).

- Assumption 1: For every sign pattern $S \in \{0,1\}^k$, we have $\mathbf{Pr}_{g \sim \mathcal{N}(0,I_n)}[\mathrm{sign}(\phi(\mathbf{A}g)) = S] \geq \gamma / 2^k$.
- Assumption 2: For any $S_1, \ldots, S_d \neq \emptyset$ where $S_i \subseteq [k_i]$, the vector $w^T\left(\prod_{i=1}^d (\mathbf{W}_i)_{S_i}\right) \in \mathbb{R}^k$ is entry-wise non-zero. Here $(\mathbf{W}_i)_{S_i}$ is the matrix $\mathbf{W}_i$ with the rows $j \notin S_i$ set equal to 0. Moreover, we assume

$$
\mathbf{Pr}_{g \sim \mathcal{N}(0, I_{n})}[M(g) = 0] \leq \frac{\gamma}{8}.
$$

Our first assumption is an assumption on the orthant probabilities of the distribution of $\mathbf{A}g$. Specifically, observe that $\mathbf{A}g \in \mathbb{R}^k$ follows a multi-variate Gaussian distribution with covariance matrix $\mathbf{A}\mathbf{A}^T$. Assumption 1 then states that the probability that a random vector $x \sim \mathcal{N}(0, \mathbf{A}\mathbf{A}^T)$ lies in a given orthant of $\mathbb{R}^k$ is not too far from uniform. We remark that orthant probabilities of multivariate Gaussian distributions are well-studied (see e.g., Miwa et al. (2003); Bacon (1963); Abrahamson et al. (1964)), which may allow for the application of this assumption to a larger class of matrices. In particular, we show it is satisfied by both random matrices and orthogonal matrices. Our second assumption is a non-degeneracy condition on the weight matrices $\mathbf{W}_i$: namely, that products of $w^T$ with non-empty sets of rows of the $\mathbf{W}_i$ result in entry-wise non-zero vectors. In addition, Assumption 2 requires that the network is non-zero with probability that is not arbitrarily small; otherwise we cannot hope to find even a single $x$ with $M(x) \neq 0$.
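As a quick empirical sanity check on Assumption 1 (this is an illustration, not part of the paper's proofs), the following sketch estimates the orthant probabilities of $\mathrm{sign}(\phi(\mathbf{A}g))$ by Monte Carlo for a small random Gaussian $\mathbf{A}$; the dimensions and trial count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, trials = 3, 100, 20000

# A random Gaussian A; for n >> k its rows are nearly orthogonal, so A g
# has close to identity covariance in R^k.
A = rng.standard_normal((k, n))

# sign(phi(A g)) is the indicator of the positive coordinates of A g.
signs = (A @ rng.standard_normal((n, trials)) > 0).astype(int)  # shape (k, trials)
patterns, counts = np.unique(signs, axis=1, return_counts=True)

# Assumption 1 asks that every orthant receive mass at least gamma / 2^k.
gamma_hat = (counts / trials).min() * 2**k
print(f"orthants observed: {patterns.shape[1]} of {2**k}, gamma ≈ {gamma_hat:.2f}")
```

With these settings, all $2^k$ sign patterns appear with near-uniform frequency, consistent with the claim that random matrices satisfy Assumption 1 with constant $\gamma$.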
In the following lemma, we demonstrate that these conditions are satisfied by randomly initialized networks, even when the entries of the $\mathbf{W}_i$ are not identically distributed.

Lemma 3.1. If $\mathbf{A} \in \mathbb{R}^{k \times n}$ is an arbitrary matrix with orthogonal rows, or if $n > \Omega(k^3)$ and $\mathbf{A}$ has entries that are drawn i.i.d. from some sub-Gaussian distribution $\mathcal{D}$ with expectation 0, unit variance, and constant sub-Gaussian norm $\| \mathcal{D} \|_{\psi_2} = \sup_{p \geq 1} p^{-1/2} (\mathbf{E}_{X \sim \mathcal{D}} |X|^p)^{1/p}$, then with probability at least $1 - e^{-k^2}$, $\mathbf{A}$ satisfies Assumption 1 with $\gamma \geq 1/2$. Moreover, if the weight matrices $w, \mathbf{W}_1, \mathbf{W}_2, \dots, \mathbf{W}_d$ with $\mathbf{W}_i \in \mathbb{R}^{k_i \times k_{i+1}}$ have entries that are drawn independently (and possibly non-identically) from continuous symmetric distributions, and if $k_i \geq \log \left( \frac{16d}{\delta\gamma} \right)$ for each $i \in [d]$, then Assumption 2 holds with probability $1 - \delta$.

# 3.1 ALGORITHM FOR SPAN RECOVERY

The algorithm for recovery is given in Algorithm 1. Our algorithm computes the gradient $\nabla M(g^i)$ for different Gaussian vectors $g^i \sim \mathcal{N}(0, I_n)$, and returns the subspace spanned by these gradients.

Algorithm 1: Span Recovery with Non-Adaptive Gradients
Input: function $M(x): \mathbb{R}^n \to \mathbb{R}$; latent dimension $k$; probability parameter $\gamma$
1. Set $r = O(k \log(k) / \gamma)$
2. for $i = 0, \dots, r$ do
3. Generate a random Gaussian vector $g^{i} \sim \mathcal{N}(0, I_{n})$
4. Compute the gradient $z^i = \nabla M(g^i)$
5. end
6. return $\mathrm{Span}(z^1, z^2, \ldots, z^r)$

To implement this procedure, we must show that it is possible to compute gradients via the perturbational method (i.e., finite differences), given only oracle queries to the network $M$.
Namely, we must first show that if $g \sim \mathcal{N}(0, \mathbb{I}_n)$ then $\nabla M(g)$ exists, and moreover, that $\nabla M(x)$ exists for all $x \in \mathcal{B}_{\epsilon}(g)$, where $\mathcal{B}_{\epsilon}(g)$ is a ball of radius $\epsilon$ centered at $g$, and $\epsilon$ is some value with polynomial bit complexity which we can bound. To demonstrate this, we show that for any fixing of the sign patterns of the network, we can write the region of $\mathbb{R}^n$ which satisfies this sign pattern and is $\epsilon$-close to one of the $O(dk)$ ReLU thresholds of the network as a linear program. We then show that the feasible polytope of this linear program is contained inside a Euclidean box in $\mathbb{R}^n$ which has one side of length $\epsilon$. Using this containment, we upper bound the volume of the polytope in $\mathbb{R}^n$ which is $\epsilon$-close to each ReLU, and union bound over all sign patterns and ReLUs to show that the probability that a Gaussian lands in one of these polytopes is exponentially small.

Lemma 3.2. There is an algorithm which, given $g \sim \mathcal{N}(0, I_n)$, with probability $1 - \exp(-n^c)$ for any constant $c > 1$ (over the randomness in $g$), computes $\nabla M(g) \in \mathbb{R}^n$ with $O(n)$ queries to the network, and in $\mathrm{poly}(n)$ runtime.

Now observe that the gradients of the network lie in the row-span of $\mathbf{A}$. To see this, for a given input $x \in \mathbb{R}^n$, let $S_0(x) \in \mathbb{R}^k$ be the sign pattern of $\phi(\mathbf{A}x) \in \mathbb{R}^k$, and more generally define $S_{i}(x) \in \mathbb{R}^{k_{i}}$ via

$$
S_{i}(x) = \operatorname{sign}\left(\phi\left(\mathbf{W}_{i} \phi\left(\mathbf{W}_{i+1} \phi(\cdots \mathbf{W}_{d} \phi(\mathbf{A}x)) \cdots\right)\right)\right)
$$

Then $\nabla M(x) = \left(w^T \prod_{i=1}^d (\mathbf{W}_i)_{S_i}\right)\mathbf{A}_{S_0}$, which demonstrates the claim that the gradients lie in the row-span of $\mathbf{A}$.
Now define $z^i = \nabla M(g^i)$ where $g^i \sim \mathcal{N}(0, \mathbb{I}_n)$, and let $\mathbf{Z}$ be the matrix whose $i$-th row is equal to $z^i$. We will prove that $\mathbf{Z}$ has rank at least $k/2$. To see this, first note that we can write $\mathbf{Z} = \mathbf{V}\mathbf{A}$, where $\mathbf{V}$ is some matrix such that the non-zero entries in the $i$-th row are precisely the coordinates in the set $S_0^i$, where $S_j^i = S_j(g^i)$ for any $j = 0, 1, 2, \ldots, d$ and $i = 1, 2, \ldots, r$. We first show that $\mathbf{V}$ has rank at least $ck$ for a constant $c > 0$. To see this, suppose we have computed $r$ gradients so far, and the rank of $\mathbf{V}$ is less than $ck$ for some $0 < c < 1/2$. Now $\mathbf{V} \in \mathbb{R}^{r \times k}$ is a fixed rank-$ck$ matrix, so the span of the matrix can be expressed as a linear combination of some fixed subset of $ck$ of its rows. We use this fact to show in the following lemma that the set of all possible sign patterns obtainable in the row span of $\mathbf{V}$ is much smaller than $2^k$. Thus a gradient $z^{r+1}$ with a uniform (or nearly uniform) sign pattern will land outside this set with good probability, and thus will increase the rank of $\mathbf{Z}$ when appended.

Lemma 3.3. Let $\mathbf{V} \in \mathbb{R}^{r \times k}$ be a fixed matrix of rank at most $ck$ for $c \leq 1/2$. Then the number of sign patterns $S \subset [k]$ with at most $k/2$ non-zero entries that are spanned by the rows of $\mathbf{V}$ is at most $\frac{2^k}{\sqrt{k}}$. In other words, the set $S(\mathbf{V}) = \{\mathrm{sign}(w) \mid w \in \operatorname{span}(\mathbf{V}), \mathrm{nnz}(w) \leq \frac{k}{2}\}$ has size at most $\frac{2^k}{\sqrt{k}}$.

Theorem 3.4. Suppose the network $M(x) = w^{T}\phi(\mathbf{W}_{1}\phi(\mathbf{W}_{2}\phi(\cdots \mathbf{W}_{d}\phi(\mathbf{A}x))\cdots))$, where $\phi$ is the ReLU, satisfies Assumptions 1 and 2.
Then Algorithm 1 makes $O(kn\log(k/\delta)/\gamma)$ queries to $M(x)$ and returns, in $\mathrm{poly}(n, k, 1/\gamma, \log(1/\delta))$ time, a subspace $V \subseteq \operatorname{span}(\mathbf{A})$ of dimension at least $\frac{k}{2}$ with probability $1 - \delta$.

# 4 NETWORKS WITH THRESHOLDING ON DIFFERENTIABLE ACTIVATIONS

In this section, we consider networks that have a threshold function at the output node, as is often done for classification. Specifically, let $\tau: \mathbb{R} \to \{0,1\}$ be the threshold function: $\tau(x) = 1$ if $x \geq 1$, and $\tau(x) = 0$ otherwise. Again, we let $\mathbf{A} \in \mathbb{R}^{k \times n}$, where $k < n$, be the innermost weight matrix. The networks $M: \mathbb{R}^n \to \mathbb{R}$ we consider are then of the form:

$$
M(x) = \tau\left(\mathbf{W}_{1} \phi_{1}\left(\mathbf{W}_{2} \phi_{2}(\cdots \phi_{d}(\mathbf{A}x) \cdots)\right)\right)
$$

where $\mathbf{W}_i \in \mathbb{R}^{k_i \times k_{i+1}}$ and each $\phi_i$ is a continuous, differentiable activation function applied entry-wise to its input. We will demonstrate that even for such functions, with a binary threshold placed at the end giving us minimal information about the network, we can still achieve full span recovery of the weight matrix $\mathbf{A}$, albeit at the cost of an $\epsilon$-approximation to the subspace. Note that the latter fact is inherent, since the gradient of any function that is not linear in some ball around each point cannot be obtained exactly without infinitely small perturbations of the input, which we do not allow in our model.

We can simplify the above notation and write $\sigma(x) = \mathbf{W}_1\phi_1(\mathbf{W}_2\phi_2(\cdots \phi_d(\mathbf{A}x)\cdots))$, so that $M(x) = \tau(\sigma(x))$. Our algorithm will involve building a subspace $V \subset \mathbb{R}^n$ which is a good approximation to the span of $\mathbf{A}$.
At each step, we attempt to recover a new vector which is very close to a vector in $\operatorname{Span}(\mathbf{A})$, but which is nearly orthogonal to the vectors in $V$. Specifically, after building $V$, on an input $x \in \mathbb{R}^n$, we will query $M$ on inputs of the form $M((\mathbb{I}_n - \mathbf{P}_V)x)$. Recall that $\mathbf{P}_V$ is the projection matrix onto $V$, and $\mathbf{P}_{V^\perp}$ is the projection matrix onto the subspace orthogonal to $V$. Thus, it will help here to think of the functions $M, \sigma$ as being functions of $x$ and not of $(\mathbb{I}_n - \mathbf{P}_V)x$, and so we define $\sigma_V(x) = \sigma(\mathbf{A}(\mathbb{I}_n - \mathbf{P}_V)x)$, and similarly $M_V(x) = \tau(\sigma_V(x))$. For the results of this section, we make the following assumptions on the activation functions.

# Assumptions:

1. The function $\phi_i: \mathbb{R} \to \mathbb{R}$ is continuous and twice differentiable, and $\phi_i(0) = 0$.
2. $\phi_{i}$ and $\phi_i'$ are $L_{i}$-Lipschitz, meaning:

$$
\sup_{x \in \mathbb{R}} \left| \frac{d}{dx} \phi_{i}(x) \right| \leq L_{i}, \quad \sup_{x \in \mathbb{R}} \left| \frac{d^{2}}{dx^{2}} \phi_{i}(x) \right| \leq L_{i}
$$

3. The network is non-zero with bounded probability: for every subspace $V \subset \mathbb{R}^n$ of dimension $\dim(V) < k$, we have that $\mathbf{Pr}_{g \sim \mathcal{N}(0, \mathbb{I}_n)}[\sigma_V(g) \geq 1] \geq \gamma$ for some value $\gamma > 0$.
4. Gradients are not arbitrarily small near the boundary: for every subspace $V \subset \mathbb{R}^n$ of dimension $\dim(V) < k$,

$$
\Pr_{g \sim \mathcal{N}(0, \mathbb{I}_{n})}\left[|\nabla_{g}\sigma_{V}(cg)| \geq \eta \ \text{ for all } c > 0 \text{ such that } \sigma_{V}(cg) = 1, \text{ and } \sigma_{V}(g) \geq 1\right] \geq \gamma
$$

for some values $\eta, \gamma > 0$, where $\nabla_g \sigma_V(cg)$ is the directional derivative of $\sigma_V$ in the direction $g$.
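To make the query model concrete, the following sketch (a toy stand-in, not the paper's construction) implements a thresholded query $M_V(x) = \tau(\sigma_V(x))$ for a small hypothetical differentiable network and uses binary search on the 0/1 answers to locate a scaling at which $\sigma_V$ crosses the threshold; the inner network `sigma`, its scale factor, and the dimensions are all arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 2
A = rng.standard_normal((k, n))
w = np.array([1.0, -0.5])          # hypothetical outer weights

def sigma(x):
    # Toy differentiable inner network standing in for W_1 phi_1(... A x);
    # only the thresholded value tau(sigma(x)) is observable to the attacker.
    return float(3.0 * abs(w @ np.tanh(A @ x)))

def M_V(x, V):
    # Query the network on the input projected away from span(V).
    P = V @ V.T if V.size else np.zeros((n, n))   # orthogonal projector onto V
    return 1 if sigma((np.eye(n) - P) @ x) >= 1.0 else 0

V = np.zeros((n, 0))               # no directions recovered yet
g = rng.standard_normal(n)
while M_V(g, V) == 0:              # find a direction where the network fires
    g = rng.standard_normal(n)

lo, hi = 0.0, 1.0                  # M_V(0) = 0 and M_V(g) = 1 bracket a crossing
for _ in range(60):                # 60 bisections: interval width 2^-60
    mid = (lo + hi) / 2
    if M_V(mid * g, V) == 1:
        hi = mid
    else:
        lo = mid
print(f"sigma at the recovered crossing: {sigma(hi * g):.9f}")
```

Even though only one bit is returned per query, bisection pins down a point with $\sigma_V$ arbitrarily close to the threshold value 1, which is the basic primitive used repeatedly below.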
The first two conditions are standard and straightforward: namely, $\phi_{i}$ is differentiable and has bounded first and second derivatives (note that for our purposes, they need only be bounded in a ball of radius $\mathrm{poly}(n)$). Since $M(x)$ is a threshold applied to $\sigma(x)$, the third condition states that it is possible to find inputs $x$ with non-zero network evaluation $M(x)$. Our condition is slightly stronger, in that we would like this to be possible even when $x$ is projected away from any $k' < k$ dimensional subspace (note that this ensures that $\mathbf{A}x$ is non-zero, since $\mathbf{A}$ has rank $k$).

The last condition simply states that if we pick a random direction $g$ where the network is non-zero, then the gradients of the network are not arbitrarily small along that direction at the threshold points where $\sigma(c \cdot g) = 1$. Observe that if the gradients at such points are vanishingly small, then we cannot hope to recover them. Moreover, since $M$ only changes value at these points, these points are the only points where information about $\sigma$ can be learned. Thus, the gradients at these points are the only gradients which could possibly be learned. We note that the running time of our algorithms will be polynomial in $\log(1/\eta)$, and thus we can even allow the gradient size $\eta$ to be exponentially small.

# 4.1 THE APPROXIMATE SPAN RECOVERY ALGORITHM

We now formally describe and analyze our span recovery algorithm for networks with differentiable activation functions and $0/1$ thresholding. Let $\kappa_{i}$ be the condition number of the $i$-th weight matrix $\mathbf{W}_{i}$, let $\delta > 0$ be a failure probability, and let $\epsilon > 0$ be a precision parameter which will affect how well the subspace we output approximates $\operatorname{Span}(\mathbf{A})$.
Now fix $N = \mathrm{poly}(n, k, \frac{1}{\gamma}, \sum_{i=1}^{d} \log(L_{i}), \sum_{i=1}^{d} \log(\kappa_{i}), \log(\frac{1}{\eta}), \log(\frac{1}{\epsilon}), \log(\frac{1}{\delta}))$ . The running time and query complexity of our algorithm will be polynomial in $N$ . Our algorithm for approximate span recovery is given formally in Algorithm 2. + +Proposition 4.1. Let $V \subset \mathbb{R}^n$ be a subspace of dimension $k' < k$ , and fix any $\epsilon_0 > 0$ . Then we can find a vector $x$ with $0 \leq \sigma_V(x) - 1 \leq 2\epsilon_0$ in expected $O(1 / \gamma + N\log (1 / \epsilon_0))$ time. Moreover, with probability $\gamma / 2$ we have that $\nabla_x\sigma_V(x) > \eta / 4$ and the tighter bound of $\epsilon_0\eta 2^{-N} \leq \sigma_V(x) - 1 \leq 2\epsilon_0$ . + +We will apply the above proposition as input to the following Lemma 4.2, which is the main technical result of this section. Our approach involves first taking the point $x$ from Proposition 4.1 such that $\sigma_V(x)$ is close but bounded away from the boundary, and generating $n$ perturbations at this point $M_V(x + u_i)$ for carefully chosen $u_i$ . While we do not know the value of $\sigma_V(x + u_i)$ , we can tell for a given scaling $c > 0$ if $\sigma_V(x + cu_i)$ has crossed the boundary, since we will then have $M_V(x + cu_i) = 0$ . Thus, we can estimate the directional derivative $\nabla_{u_i}\sigma (x)$ by finding a value $c_i$ via a binary search such that $\sigma_V(x + c_iu_i)$ is exponentially closer to the boundary than $\sigma_V(x)$ . In order for our estimate to be accurate, we must carefully upper and lower bound the gradients and Hessian of $\sigma_v$ near $x$ , and demonstrate that the linear approximation of $\sigma_v$ at $x$ is still accurate at the point $x + c_iu_i$ where the boundary is crossed. 
Algorithm 2: Span Recovery with Adaptive Gradients
1. $V \gets \emptyset$; $N = \mathrm{poly}\big(n, k, \frac{1}{\gamma}, \sum_{i=1}^{d}\log(L_i), \sum_{i=1}^{d}\log(\kappa_i), \log(\frac{1}{\eta}), \log(\frac{1}{\epsilon}), \log(\frac{1}{\delta})\big)$; $\epsilon_0 \gets 2^{-\mathrm{poly}(N)}$
2. for $i = 0, \ldots, k$ do
3. Generate $g \sim \mathcal{N}(0, \mathbb{I}_n)$ until $M((\mathbb{I}_n - \mathbf{P}_V)g) = 1$
4. Find a scaling $\alpha > 0$ via binary search on values $\tau(\sigma_V(\alpha g))$ such that $x = \alpha g$ satisfies $\epsilon_0\eta 2^{-N} \leq \sigma_V(x) - 1 \leq 2\epsilon_0$. ▷ Proposition 4.1
5. Generate $g_{1}, \dots, g_{n} \sim \mathcal{N}(0, \mathbb{I}_{n})$, and set $u_{i} = \left(g_{i}2^{-N} - x/\|x\|_{2}\right)$
6. For each $i \in [n]$, binary search over values $c$ to find $c_{i}$ such that $1 - \beta \leq \sigma_V(x + c_iu_i) \leq 1$, where $\beta = 2^{-N^2}\epsilon_0^2$
7. If any $c_{i}$ satisfies $|c_i| \geq 10 \cdot 2^{-N}\epsilon_0/\eta$, restart from line 5 (regenerate the Gaussians $g_i$)
8. Otherwise, define $\mathbf{B} \in \mathbb{R}^{n \times n}$ via $\mathbf{B}_{*,i} = u_i$, where $\mathbf{B}_{*,i}$ is the $i$-th column of $\mathbf{B}$, and define $b \in \mathbb{R}^n$ by $b_{i} = 1/c_{i}$
9. Let $y^{*}$ be the solution to $\min_{y \in \mathbb{R}^n}\|y^T\mathbf{B} - b^T\|_2^2$; set $v_i = y^{*}$ and $V \gets \operatorname{Span}(V, v_i)$
10. end
11. return $V$

Since each value of $1/c_{i}$ is precisely proportional to $\nabla_{u_i}\sigma(x) = \langle \nabla \sigma(x), u_i\rangle$, we can then set up a linear system to approximately solve for the gradient $\nabla \sigma(x)$ (lines 8 and 9 of Algorithm 2).

Lemma 4.2. Fix any $\epsilon, \delta > 0$, and let $N$ be defined as above.
Then given any subspace $V \subset \mathbb{R}^n$ with dimension $\dim(V) < k$, and given $x \in \mathbb{R}^n$ such that $\epsilon_0 \eta 2^{-N} \leq \sigma_V(x) - 1 \leq 2\epsilon_0$, where $\epsilon_0 = \Theta(2^{-N^C}/\epsilon)$ for a sufficiently large constant $C = O(1)$, and such that $\nabla_x \sigma_V(x) > \eta/2$, then with probability $1 - 2^{-N/n^2}$, we can find a vector $v \in \mathbb{R}^n$ in expected $\mathrm{poly}(N)$ time, such that $\|\mathbf{P}_{\operatorname{Span}(\mathbf{A})} v\|_2 \geq (1 - \epsilon)\|v\|_2$, and such that $\|\mathbf{P}_V v\|_2 \leq \epsilon\|v\|_2$.

Theorem 4.3. Suppose the network $M(x) = \tau(\sigma(\mathbf{A}x))$ satisfies the conditions described at the beginning of this section. Then Algorithm 2 runs in $\mathrm{poly}(N)$ time, making at most $\mathrm{poly}(N)$ queries to $M(x)$, where $N = \mathrm{poly}\big(n, k, \frac{1}{\gamma}, \sum_{i=1}^{d}\log(L_i), \sum_{i=1}^{d}\log(\kappa_i), \log(\frac{1}{\eta}), \log(\frac{1}{\epsilon}), \log(\frac{1}{\delta})\big)$, and returns with probability $1 - \delta$ a subspace $V \subset \mathbb{R}^n$ of dimension $k$ such that for any $x \in V$, we have $\|\mathbf{P}_{\operatorname{Span}(\mathbf{A})}x\|_2 \geq (1 - \epsilon)\|x\|_2$.

# 5 EXPERIMENTS

![](images/4039ef59f9ce1fa42253e2cbfac5896767590b264d3b5fd75939f67de7eebc43.jpg)
Figure 1: Partial span recovery of small networks with layer sizes specified in the legend. Note that $784->80->[6,3]$ indicates a 4 layer neural network with hidden layer sizes 784, 80, 6, and 3, in that order. Full span recovery is not always possible and recovery deteriorates as width decreases and depth increases.

![](images/b045fb53906c7562d50ddb556eedd551ee1d54d62e59977db9760f7a8c9be385.jpg)

![](images/5350846af9f9d4b5dbc5e151fc692f64fab1aa63857fefd1ccd4f1504639f3e0.jpg)

![](images/6affbea328709467a598639e445230154dbc084f89c328435de58eeba4ef0d80.jpg)
Figure 2: Full span recovery of realistic networks with moderate widths and reasonable architectures. Full recovery occurs with only 100 samples for a rank 80 weight matrix in all settings.
+ +![](images/7f854d76bf762223fe6e9c1bbce0b25286c4c75b001a759f55b49be1b000a067.jpg) + +![](images/80ae65810efd3bc408baee4e62b38e35cbab5c88b4eb41bb72cd14214d3e6d03.jpg) + +When applying span recovery for a given network, we first calculate the gradients analytically via auto-differentiation at a fixed number of sample points distributed according to a standard Gaussian. Our networks are feedforward, fully-connected with ReLU units; therefore, as mentioned above, using analytic gradients is as precise as using finite differences due to piecewise linearity. Then, we compute the rank of the resulting gradient matrix, where the rank is defined to be the number of singular values that are above 1e-5 of the maximum singular value. In our experiments, we attempt to recover the full span of a 784-by-80 matrix with decreasing layer sizes for varying sample complexity, as specified in the figures. For the MNIST dataset, we use a size 10 vector output and train according to the softmax cross entropy loss, but we only calculate the gradient with respect to the first output node. + +Our recovery algorithms are GradientsRandom (Algorithm 1), GradientsRandomAda (Algorithm 2), and GradientsMNIST. GradientsRandom is a direct application of our first span recovery algorithm and calculates gradients via perturbations at random points for a random network. GradientsRandomAda uses our adaptive span recovery algorithm for a random network. Finally, GradientsMNIST is an application of GradientsRandom on a network with weights trained on MNIST data. In general, we note that the experimental outcomes are very similar among all three scenarios. + +![](images/b41685a9289a3719811243364660232772e40633542c3363d0b6dba3863af88f.jpg) +Figure 3: Fooling ReLU networks into misclassifying noise as digits by introducing Gaussian noise into the null space after span recovery. The prediction of the network is presented above the images, along with its softmax probability. 
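The rank computation described above can be sketched as follows; the toy dimensions, the two-hidden-layer architecture, and all weights here are arbitrary stand-ins (not the 784-by-80 MNIST setting of the experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, width, samples = 50, 8, 16, 100
A = rng.standard_normal((k, n))        # rank-k first-layer matrix to recover
W = rng.standard_normal((width, k))    # hypothetical second layer
w = rng.standard_normal(width)

def grad_M(x):
    # Analytic gradient of M(x) = w^T relu(W relu(A x)); off the ReLU
    # boundaries this coincides exactly with finite differences.
    h1 = A @ x
    h2 = W @ np.maximum(h1, 0.0)
    return ((w * (h2 > 0)) @ W * (h1 > 0)) @ A   # chain rule through both ReLUs

Z = np.stack([grad_M(rng.standard_normal(n)) for _ in range(samples)])

# Numerical rank: singular values above 1e-5 of the maximum singular value.
s = np.linalg.svd(Z, compute_uv=False)
rank = int(np.sum(s > 1e-5 * s.max()))
print(f"dimension of recovered span: {rank} (rank of A: {k})")
```

With a moderate width, the recovered dimension typically matches the full rank of $\mathbf{A}$, mirroring Figure 2; shrinking `width` well below $k$ reproduces the degradation of Figure 1.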
For networks with very small widths and multiple layers, we see that span recovery deteriorates as depth increases, supporting our theory (see Figure 1). This holds whether the networks are randomly initialized with Gaussian weights or trained on a real dataset (MNIST), and whether we use adaptive or non-adaptive recovery algorithms. However, we note that these small networks have unrealistically small widths (less than 10), and when trained on MNIST, they fail to achieve high accuracy, all falling below 80 percent. We therefore use the small-width case only as empirical evidence for why our theory cannot guarantee full span recovery under every network architecture.

For more realistic networks with moderate or high widths, however, full span recovery seems easy and implies a real possibility for attack (see Figure 2). Although we tried a variety of widths and depths, the results are robust to reasonable settings of layer sizes and depths. Therefore, we only present experimental results with sub-networks of a network with layer sizes [784, 80, 40, 30, 20, 10]. Note that full span recovery of the first-layer weight matrix with rank 80 is achieved almost immediately in all cases, with fewer than 100 samples.

On the real dataset MNIST, we demonstrate the utility of span recovery algorithms as an attack that fools neural networks into misclassifying noisy inputs (see Figure 3). We train a ReLU network (to around 95 percent accuracy) and recover its span by computing the span of the resulting gradient matrix. Then, we recover the null space of the matrix and generate random Gaussian noise projected onto the null space. We see that our attack successfully converts images into noisy versions without changing the output of the network, implying that allowing a full (or even partial) span recovery on a classification network can lead to various adversarial attacks, despite not knowing the exact weights of the network.
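The null-space attack above can be sketched end-to-end on a toy network, under the simplifying assumption that the row span of $\mathbf{A}$ (and hence its null space) has already been recovered exactly; in the real attack the null space would instead come from the recovered gradient span, and the network and dimensions here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 64, 8                       # stand-ins for 784 pixels / the rank-80 layer
A = rng.standard_normal((k, n))
w = rng.standard_normal(k)

def M(x):
    # Toy classifier head w^T relu(A x); only the span of A matters here.
    return float(w @ np.maximum(A @ x, 0.0))

# Orthonormal basis for the null space of A: directions invisible to M.
null_basis = np.linalg.svd(A)[2][k:]        # (n - k) x n

x = rng.standard_normal(n)                  # "clean" input
noise = 10.0 * (null_basis.T @ rng.standard_normal(n - k))
x_noisy = x + noise                         # heavily obfuscated input

# The obfuscated input is far from x, yet the network output is unchanged.
print(f"|noise| = {np.linalg.norm(noise):.1f}, "
      f"M(x) = {M(x):.6f}, M(x_noisy) = {M(x_noisy):.6f}")
```

Because the noise lies entirely in the null space of $\mathbf{A}$, the first-layer activations (and therefore every subsequent layer) are untouched, no matter how large the noise is.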
# ACKNOWLEDGMENTS

Rajesh Jayaram and David Woodruff gratefully acknowledge partial support from the National Science Foundation under Grant No. CCF-1815840.

# REFERENCES

IG Abrahamson et al. Orthant probabilities for the quadrivariate normal distribution. The Annals of Mathematical Statistics, 35(4):1685-1703, 1964.
Sanjeev Arora, Aditya Bhaskara, Rong Ge, and Tengyu Ma. Provable bounds for learning some deep representations. In International Conference on Machine Learning, pp. 584-592, 2014.
Ralph Hoyt Bacon. Approximations to multivariate normal orthant probabilities. The Annals of Mathematical Statistics, 34(1):191-198, 1963. ISSN 00034851. URL http://www.jstor.org/stable/2991294.
Ainesh Bakshi, Rajesh Jayaram, and David P Woodruff. Learning two layer rectified neural networks in polynomial time. In Alina Beygelzimer and Daniel Hsu (eds.), Proceedings of the Thirty-Second Conference on Learning Theory, volume 99 of Proceedings of Machine Learning Research, pp. 195-268, Phoenix, USA, 25-28 Jun 2019. PMLR. URL http://proceedings.mlr.press/v99/bakshi19a.html.
Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 15-26. ACM, 2017.
Albert Cohen, Ingrid Daubechies, Ronald DeVore, Gerard Kerkyacharian, and Dominique Picard. Capturing ridge functions in high dimensions from point queries. Constructive Approximation, 35(2):225-243, 2012.
Nicholas Cook et al. Lower bounds for the smallest singular value of structured random matrices. The Annals of Probability, 46(6):3442-3500, 2018.
François Denis. PAC learning from positive statistical queries. In International Conference on Algorithmic Learning Theory, pp. 112-126. Springer, 1998.
Josip Djolonga, Andreas Krause, and Volkan Cevher. High-dimensional Gaussian process bandits.
In Advances in Neural Information Processing Systems, pp. 1025-1033, 2013. +Massimo Fornasier, Karin Schnass, and Jan Vybiral. Learning functions of few arbitrary linear parameters in high dimensions. Foundations of Computational Mathematics, 12(2):229-262, 2012. +Rong Ge, Rohith Kuditipudi, Zhize Li, and Xiang Wang. Learning two-layer neural networks with symmetric inputs. In International Conference on Learning Representations, 2019. +Surbhi Goel and Adam R. Klivans. Learning neural networks with two nonlinear layers in polynomial time. In Alina Beygelzimer and Daniel Hsu (eds.), Proceedings of the Thirty-Second Conference on Learning Theory, volume 99 of Proceedings of Machine Learning Research, pp. 1470–1499, Phoenix, USA, 25–28 Jun 2019. PMLR. URL http://proceedings.mlr.press/v99/goel19b.html. +Surbhi Goel, Varun Kanade, Adam Klivans, and Justin Thaler. Reliably learning the relu in polynomial time. In Conference on Learning Theory, pp. 1004-1042, 2017. +Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014. +H Tracy Hall, Leslie Hogben, Ryan Martin, and Bryan Shader. Expected values of parameters associated with the minimum rank of a graph. Linear Algebra and its Applications, 433(1):101-117, 2010. +Moritz Hardt and David P Woodruff. How robust are linear sketches to adaptive inputs? In Proceedings of the forty-fifth annual ACM symposium on Theory of computing, pp. 121-130. ACM, 2013. +Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, and Pieter Abbeel. Adversarial attacks on neural network policies. arXiv preprint arXiv:1702.02284, 2017. +Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, and Nicolas Papernot. High-fidelity extraction of neural network models. arXiv preprint arXiv:1909.01838, 2019. +Majid Janzamin, Hanie Sedghi, and Anima Anandkumar. Beating the perils of non-convexity: Guaranteed training of neural networks using tensor methods. 
arXiv preprint arXiv:1506.08473, 2015.

B. Laurent and P. Massart. Adaptive estimation of a quadratic functional by model selection. Ann. Statist., 28(5):1302-1338, 10 2000. doi: 10.1214/aos/1015957395. URL https://doi.org/10.1214/aos/1015957395.
Ker-Chau Li. Sliced inverse regression for dimension reduction. Journal of the American Statistical Association, 86(414):316-327, 1991. ISSN 01621459. URL http://www.jstor.org/stable/2290563.
Tetsuhisa Miwa, AJ Hayter, and Satoshi Kuriki. The evaluation of general non-centred orthant probabilities. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 65(1):223-234, 2003.
Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia conference on computer and communications security, pp. 506-519. ACM, 2017.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
Hemant Tyagi and Volkan Cevher. Learning non-parametric basis independent models from point queries via low-rank methods. Applied and Computational Harmonic Analysis, 37(3):389-412, 2014.
Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010.
Yingcun Xia, Howell Tong, Wai Keung Li, and Li-Xing Zhu. An adaptive estimation of dimension reduction space. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 64(3):363-410, 2002.
Qiuyi Zhang, Rina Panigrahy, Sushant Sachdeva, and Ali Rahimi. Electron-proton dynamics in deep learning. arXiv preprint arXiv:1702.00458, pp. 1-31, 2017.

# A MISSING PROOFS FROM SECTION 3

We first restate the results which have had their proofs omitted, and include their proofs subsequently.
+ +Lemma 3.1 If $\mathbf{A} \in \mathbb{R}^{k \times n}$ has orthogonal rows, or if $n > \Omega(k^3)$ and $\mathbf{A}$ has entries that are drawn i.i.d. from some sub-Gaussian distribution $\mathcal{D}$ with expectation 0, unit variance, and constant sub-gaussian norm $\| \mathcal{D} \|_{\psi_2} = \sup_{p \geq 1} p^{-1/2} (\mathbf{E}_{X \sim \mathcal{D}} |X|^p)^{1/p}$ then with probability at least $1 - e^{-k^2}$ , $\mathbf{A}$ satisfies Assumption 1 with $\gamma \geq 1/2$ . + +Moreover, if the weight matrices $w, \mathbf{W}_1, \mathbf{W}_2, \dots, \mathbf{W}_d$ with $\mathbf{W}_i \in \mathbb{R}^{k_i \times k_{i+1}}$ have entries that are drawn independently (and possibly non-identically) from continuous symmetric distributions, and if $k_i \geq \log\left(\frac{16d}{\delta\gamma}\right)$ for each $i \in [d]$ , then Assumption 2 holds with probability $1 - \delta$ . + +Proof. By Theorem 5.58 of Vershynin (2010), if the entries $\mathbf{A}$ are drawn i.i.d. from some sub-Gaussian isotropic distribution $\mathcal{D}$ over $R^n$ such that $\| \mathbf{A}_j\| _2 = \sqrt{n}$ almost surely, then $\sqrt{n} -C\sqrt{k} -t\leq \sigma_{\min}(\mathbf{A})\leq \sigma_{\max}(\mathbf{A})\leq$ $\sqrt{n} +C\sqrt{k} +t$ with probability at least $1 - 2e^{-ct^2}$ , for some constants $c,C > 0$ depending only on $\| \mathcal{D}\|_{\psi_2}$ . Since the entries are i.i.d. with variance 1, it follows that the rows of $\mathbf{A}$ are isotropic. Moreover, we can always condition on the rows having norm exactly $\sqrt{n}$ , and pulling out a positive diagonal scaling through the first Relu of $M(x)$ and absorbing this scaling into $\mathbf{W}_d$ . It follows that the conditions of the theorem hold, and we have $\sqrt{n} -C\sqrt{k}\le$ $\sigma_{\mathrm{min}}(\mathbf{A})\leq \sigma_{\mathrm{max}}(\mathbf{A})\leq \sqrt{n} +C\sqrt{k}$ with probability at least $1 - e^{-k^2}$ for a suitably large re scaling of the constant $C$ . 
Setting $n > \Omega(k^3)$, it follows that $\kappa(\mathbf{A}) < (1 + 1/(100k))$, a bound which holds immediately if $\mathbf{A}$ has orthogonal rows.

Now observe that $\mathbf{A}g$ is distributed as a multi-variate Gaussian with covariance $\mathbf{A}\mathbf{A}^T$, and is therefore given by the probability density function (pdf)

$$
p'(x) = \frac{1}{(2\pi)^{k/2}\det(\mathbf{A}\mathbf{A}^{T})^{1/2}} \exp\left(-\frac{1}{2} x^{T}(\mathbf{A}\mathbf{A}^{T})^{-1}x\right)
$$

Let $p(x) = \frac{1}{(2\pi)^{k/2}} \exp\left(-\frac{1}{2} x^T x\right)$ be the pdf of an identity covariance Gaussian $\mathcal{N}(0, I_k)$. We lower bound $p'(x)/p(x)$ for $x$ with $\|x\|_2^2 \leq 16k$. In this case, we have

$$
\begin{aligned}
\frac{p'(x)}{p(x)} &= \frac{1}{\det(\mathbf{A}\mathbf{A}^{T})^{1/2}} \exp\left(-\frac{1}{2}\left(x^{T}(\mathbf{A}\mathbf{A}^{T})^{-1}x - \|x\|_{2}^{2}\right)\right) \\
&\geq \kappa(\mathbf{A})^{-k} \exp\left(-\frac{1}{2}\left(\|x\|_{2}^{2}(1 + 1/(100k)) - \|x\|_{2}^{2}\right)\right) \\
&\geq \kappa(\mathbf{A})^{-k} \exp\left(-\frac{1}{2}\|x\|_{2}^{2}/(100k)\right) \\
&\geq (1 + 1/(100k))^{-k} \exp\left(-\frac{1}{2}(16/100)\right) \\
&\geq 1/2
\end{aligned} \tag{1}
$$

Thus for any sign pattern $S$, $\mathbf{Pr}[\mathrm{sign}(\mathbf{A}g) = S \text{ and } \|\mathbf{A}g\|_2^2 \leq 16k] \geq \frac{1}{2}\mathbf{Pr}[\mathrm{sign}(g) = S \text{ and } \|g\|_2^2 \leq 16k]$, where the right-hand side is over $g \sim \mathcal{N}(0, I_k)$. Now $\mathbf{Pr}[\mathrm{sign}(g) = S] = 2^{-k}$, and by spherical symmetry of Gaussians the sign pattern of $g$ is independent of its norm, so $\mathbf{Pr}[\mathrm{sign}(g) = S \text{ and } \|g\|_2^2 \leq 16k] = 2^{-k}\mathbf{Pr}[\|g\|_2^2 \leq 16k]$, and thus $\mathbf{Pr}[\mathrm{sign}(\mathbf{A}g) = S \text{ and } \|\mathbf{A}g\|_2^2 \leq 16k] \geq 2^{-k-1}\mathbf{Pr}[\|g\|_2^2 \leq 16k]$. Now $\|\mathbf{A}g\|_2^2 \leq 2\|g\|_2^2$, which is distributed as a $\chi^2$ random variable with $k$ degrees of freedom.
By standard concentration results for $\chi^2$ distributions (Lemma 1 of Laurent & Massart (2000)), we have $\mathbf{Pr}[\|g\|_2^2 \geq 8k] \leq e^{-2k}$, so $\mathbf{Pr}[\|\mathbf{A}g\|_2^2 \geq 16k] \leq \mathbf{Pr}[\|g\|_2^2 \geq 8k] \leq e^{-2k}$. By a union bound, $\mathbf{Pr}[\mathrm{sign}(\mathbf{A}g) = S : \|\mathbf{A}g\|_2^2 \leq 16k] \geq 2^{-k-1} - e^{-2k} \geq \frac{1}{4}2^{-k}$, which completes the proof of the first claim with $\gamma = 1/4$.

For the second claim, by an inductive argument, the entries in the rows $i \in S_j$ of the product $\mathbf{W}_{S_j}\left(\prod_{i = j + 1}^d (\mathbf{W}_i)_{S_i}\right)$ are drawn from a continuous distribution. Thus each column of $\mathbf{W}_{S_j}\left(\prod_{i = j + 1}^d (\mathbf{W}_i)_{S_i}\right)$ is non-zero with probability 1. It follows that $\langle w, \left(\prod_{i = 1}^d (\mathbf{W}_i)_{S_i}\right)_{*,j} \rangle$ is the inner product of a non-zero vector with a vector $w$ with continuous, independent entries, and is thus non-zero with probability 1. By a union bound over all possible non-empty sets $S_j$, the desired result follows.

We now show that the second part of Assumption 2 holds. To do so, first let $g \sim \mathcal{N}(0, I_n)$. We demonstrate that $\Pr_{\mathbf{W}_1, \mathbf{W}_2, \ldots, \mathbf{W}_d, g}[M(g) = 0] \leq \delta\gamma/8$. Here the entries of the $\mathbf{W}_i$'s are drawn independently but not necessarily identically from a continuous symmetric distribution. To see this, note that we can condition on the value of $g$, and condition at each step on the non-zero value of $y_i = \phi(\mathbf{W}_{i+1}\phi(\mathbf{W}_{i+2}\phi(\cdots\phi(\mathbf{A}g)\cdots)))$. Then, over the randomness of $\mathbf{W}_i$, note that the inner product of a row of $\mathbf{W}_i$ and $y_i$ is strictly positive with probability at least $1/2$, and so each coordinate of $\mathbf{W}_i y_i$ is strictly positive independently with probability $\geq 1/2$. It follows that $\phi(\mathbf{W}_i y_i)$ is non-zero with probability at least $1 - 2^{-k_i}$.
Thus

$$
\begin{array}{l}
\Pr_{\mathbf{W}_1, \mathbf{W}_2, \dots, \mathbf{W}_d, g}[M(g) \neq 0] \geq \prod_{i=1}^{d}\left(1 - 2^{-k_i}\right) \\
\geq \left(1 - \frac{\delta\gamma}{16d}\right)^d \tag{2} \\
\geq 1 - \delta\gamma/8 \\
\end{array}
$$

where the second inequality is by assumption. It follows that

$$
\Pr_{\mathbf{W}_1, \mathbf{W}_2, \dots, \mathbf{W}_d, g}[M(g) = 0] \leq \delta\gamma/8
$$

So by Markov's inequality,

$$
\Pr_{\mathbf{W}_1, \mathbf{W}_2, \dots, \mathbf{W}_d}\left[\Pr_g[M(g) = 0] \geq \gamma/8\right] \leq \delta
$$

Thus with probability $1 - \delta$ over the choice of $\mathbf{W}_1, \ldots, \mathbf{W}_d$, we have $\Pr_g\left[M(g) = 0\right] \leq \gamma/8$ as desired.

![](images/8421aa60a2a6242e3993de8da007be79ed122dab152b9ec97f42742479c17c3b.jpg)

Lemma 3.2 There is an algorithm which, given $g \sim \mathcal{N}(0, I_n)$, with probability $1 - \exp(-n^c)$ for any constant $c > 1$ (over the randomness in $g$), computes $\nabla M(g) \in \mathbb{R}^n$ with $O(n)$ queries to the network, and in $\mathrm{poly}(n)$ running time.

Proof. Let $M_i(x) = \phi(\mathbf{W}_i\phi(\mathbf{W}_{i+1}\phi(\cdots\phi(\mathbf{A}x)\cdots)))$, where $\phi$ is the ReLU. If $\nabla M(g)$ exists, there is an $\epsilon > 0$ such that $M$ is differentiable on $\mathcal{B}_{\epsilon}(g)$. We show that with good probability, if $g \sim \mathcal{N}(0, I_n)$ (or in fact, if $g$ is drawn from almost any continuous distribution), then $M$ is differentiable on the ball $\mathcal{B}_{\epsilon}(g) = \{x \in \mathbb{R}^n \mid \|x - g\|_2 < \epsilon\}$ for some $\epsilon$ which we will now compute.

First, we can condition on the event that $\|g\|_2^2 \leq (nd)^{10c}$, which occurs with probability at least $1 - \exp(-(nd)^{5c})$ by concentration results for $\chi^2$ distributions (Laurent & Massart (2000)).
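Before continuing the formal argument, the mechanism behind Lemma 3.2 — a ReLU network is linear in a neighborhood of a generic point, so finite differences are exact and $n$ directional derivatives determine the gradient through one linear solve — can be illustrated on a toy one-layer network (an illustration only, not the paper's full algorithm; the network sizes and seed are arbitrary):

```python
import numpy as np

# Toy illustration of the idea behind Lemma 3.2 (not the paper's full
# algorithm): a ReLU network is piecewise linear, so at a generic point g the
# finite difference (M(g + c*v) - M(g)) / c is exact for small c, and n such
# directional derivatives along independent directions recover the full
# gradient through one linear solve.
rng = np.random.default_rng(2)
n, k = 6, 4
relu = lambda z: np.maximum(z, 0.0)
A = rng.standard_normal((k, n))
w = rng.standard_normal(k)
M = lambda x: float(w @ relu(A @ x))

g = rng.standard_normal(n)
c = 1e-6                            # small enough to stay inside g's linear region
V = rng.standard_normal((n, n))     # n random directions, full rank almost surely
d = np.array([(M(g + c * V[i]) - M(g)) / c for i in range(n)])
grad = np.linalg.solve(V, d)        # since d = V @ grad M(g)

# Closed-form gradient of this one-layer network, for comparison.
grad_exact = (w * (A @ g > 0)) @ A
print(np.allclose(grad, grad_exact, atol=1e-4))
```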
Now, fix any sign pattern $S_i \subseteq [k_i]$ for the $i$-th layer $M_i(x) = \phi(\mathbf{W}_i\phi(\cdots\phi(\mathbf{A}x)\cdots))$, and let $\mathcal{S} = (S_1, S_2, \ldots, S_{d+1})$. We note that we can enforce the constraint that for an input $x \in \mathbb{R}^n$, the sign pattern of $M_i(x)$ is precisely $S_i$. To see this, note that after conditioning on a sign pattern for each layer, the entire network becomes linear. Thus each constraint that $\langle(\mathbf{W}_i)_{j,*}, M_{i+1}(x)\rangle \geq 0$ or $\langle(\mathbf{W}_i)_{j,*}, M_{i+1}(x)\rangle \leq 0$ can be enforced as a linear constraint on the coordinates of $x$.

Now fix any layer $i \in [d+1]$ and neuron $j \in [k_i]$, WLOG $j \in S_i$. We now add the additional constraint that $\langle(\mathbf{W}_i)_{j,*}, M_{i+1}(x)\rangle \leq \eta$, where $\eta = \exp(-\mathrm{poly}(nd))$ is a value we will later choose. Thus, we obtain a linear program with $k + \sum_{i=1}^{d} k_i$ constraints and $n$ variables. The feasible polytope $\mathcal{P}$ represents the set of input points which satisfy the activation patterns $\mathcal{S}$ and are $\eta$-close to the discontinuity given by the $j$-th neuron in the $i$-th layer.

We can now introduce the additional non-linear constraint on the input that $\|x\|_2 \leq (nd)^{10c}$. Let $\mathcal{B} = \mathcal{B}_{(nd)^{10c}}(\vec{0})$ be the feasible region of this last constraint, and let $\mathcal{P}^* = \mathcal{P} \cap \mathcal{B}$. We now bound the Lebesgue measure (volume) $V(\mathcal{P}^*)$ of the region $\mathcal{P}^*$.
First note that $V(\mathcal{P}^*) \leq V(\mathcal{P}')$, where $\mathcal{P}'$ is the region defined by the set of points which satisfy:

$$
\langle y, x\rangle \geq 0
$$

$$
\langle y, x\rangle \leq \eta \tag{3}
$$

$$
\|x\|_2^2 \leq (nd)^{10c}
$$

where each coordinate of the vector $y \in \mathbb{R}^n$ is a linear combination of products of the weight matrices $\mathbf{W}_{\ell}$, $\ell \geq i$. The first two constraints for $\mathcal{P}'$ are also constraints for $\mathcal{P}^*$, and the last constraint is precisely $\mathcal{B}$; thus $\mathcal{P}^* \subset \mathcal{P}'$, which gives $V(\mathcal{P}^*) \leq V(\mathcal{P}')$. Now we can rotate $\mathcal{P}'$ by the rotation which sends $y \to \|y\|_2 \cdot e_1 \in \mathbb{R}^n$ without changing the volume of the feasible region. The resulting region is contained in the region $\mathcal{P}''$ given by

$$
0 \leq x_1 \leq \eta, \qquad \|x\|_{\infty}^2 \leq (nd)^{10c} \tag{4}
$$

Finally, note that $\mathcal{P}'' \subset \mathbb{R}^n$ is a Euclidean box with $n - 1$ side lengths equal to $(nd)^{10c}$ and one side length of $\|y\|_2\eta$, and thus $V(\mathcal{P}') \leq \|y\|_2\eta(nd)^{10nc}$. Now note we can assume that the entries of the weight matrices $\mathbf{A}, \mathbf{W}_1, \ldots, \mathbf{W}_d$ are specified in polynomially many (in $n$) bits, since if this were not the case the output $M(x)$ of the network would not have polynomial bit complexity, and could not even be read in $\mathrm{poly}(n)$ time. Equivalently, we can assume that our running time is allowed to be polynomial in the number of bits in any value of $M(x)$, since this is the size of the input to the problem.
Given this, since the coordinates of $y$ are linear combinations of products of the coordinates of the weight matrices, each of which is at most $2^{n^C}$ for some constant $C$ (since the matrices have polynomial bit-complexity), we have that $V(\mathcal{P}^*) \leq \eta 2^{n^C}(nd)^{10nc}$ as needed.

Now the pdf of a multivariate Gaussian is upper bounded by 1, so $\mathbf{Pr}_{g\sim\mathcal{N}(0,I_n)}[g \in \mathcal{P}^*] \leq V(\mathcal{P}^*) \leq \eta 2^{n^C}(nd)^{10nc}$. It follows that the probability that a multivariate Gaussian $g \sim \mathcal{N}(0, I_n)$ satisfies the sign pattern $\mathcal{S}$ and is $\eta$-close to the boundary for the $j$-th neuron in the $i$-th layer is at most $\eta 2^{n^C}(nd)^{10nc}$. Now since there are at most $2^k \cdot \prod_{i=1}^d 2^{k_i} \leq 2^{nd}$ possible combinations of sign patterns $\mathcal{S}$, it follows that the probability that a multivariate Gaussian $g \sim \mathcal{N}(0, I_n)$ is $\eta$-close to the boundary for the $j$-th neuron in the $i$-th layer is at most $\eta 2^{n^C}(nd)^{10nc}2^{nd}$. Union bounding over each of the $k_i$ neurons in layer $i$, and then over each of the $d$ layers, it follows that the probability that $g \sim \mathcal{N}(0, I_n)$ is $\eta$-close to the boundary for any discontinuity in $M(x)$ is at most $\eta 2^{n^C}(nd)^{10nc+1}2^{nd}$. Setting $\eta \leq 2^{-(nd)^{20c+1}}2^{-n^C}$, it follows that with probability at least $1 - \exp(-(nd)^c)$, the network evaluated at $g \sim \mathcal{N}(0, I_n)$ is at least $\eta$ far from all boundaries (note that $C$ is known to us by assumption).

Now we must show that perturbing the point $g$ by any vector with norm at most $\epsilon$ results in a new point $g'$ which still has not hit one of the boundaries. Note that $M$ is linear in an open ball around $g$, so the change that can occur in any intermediate neuron after perturbing $g$ by some $v \in \mathbb{R}^n$ is at most $\|v\|_2 \cdot \|\mathbf{A}\|_2\prod_{i=1}^d\|\mathbf{W}_i\|_2$, where $\|\cdot\|_2$ is the spectral norm.
Now since each entry in the weight matrices can be specified in polynomially many bits, the Frobenius norm of each matrix (and therefore the spectral norm) is bounded by $n^2 2^{n^C}$ for some constant $C$. Thus

$$
\|\mathbf{A}\|_2\prod_{i=1}^d\|\mathbf{W}_i\|_2 \leq \left(n^2 2^{n^C}\right)^{d+1} = \beta
$$

and setting $\epsilon = \eta/\beta$, it follows that $M(x)$ is differentiable in the ball $\mathcal{B}_{\epsilon}(g)$ as needed.

We now generate $u_1, u_2, \ldots, u_n \sim \mathcal{N}(0, I_n)$, which are linearly independent almost surely. We set $v_i = \frac{\epsilon u_i}{2\|u_i\|_2}$. Since $M$ is a ReLU network which is differentiable on $\mathcal{B}_{\epsilon}(g)$, it follows that $M$ is a linear function on $\mathcal{B}_{\epsilon}(g)$, and moreover $g + v_i \in \mathcal{B}_{\epsilon}(g)$ for each $i \in [n]$. Thus for any $c < 1$ we have $\frac{M(g + cv_i) - M(g)}{c} = \nabla_{v_i}M(g)$, so we can compute $\nabla_{v_i}M(g)$ for each $i \in [n]$. Finally, since the directional derivative is given by $\nabla_{v_i}M(g) = \langle\nabla M(g), v_i\rangle$, and since $v_1, \ldots, v_n$ are linearly independent, we can set up a linear system to solve for $\nabla M(g)$ exactly in polynomial time, which completes the proof.

Lemma 3.3 Let $\mathbf{V} \in \mathbb{R}^{r \times k}$ be a fixed matrix of rank at most $ck$ for $c \leq 1/2$. Then the number of sign patterns $S \subset [k]$ with at most $k/2$ non-zeros spanned by the rows of $\mathbf{V}$ is at most $\frac{2^k}{\sqrt{k}}$. In other words, the set $S(\mathbf{V}) = \{\mathrm{sign}(w) \mid w \in \mathrm{span}(\mathbf{V}), \mathrm{nnz}(w) \leq \frac{k}{2}\}$ has size at most $\frac{2^k}{\sqrt{k}}$.

Proof. Any vector $w$ in the span of the rows of $\mathbf{V}$ can be expressed as a linear combination of at most $ck$ rows of $\mathbf{V}$.
So create a variable $x_i$ for each coefficient $i \in [ck]$ in this linear combination, and let $f_j(x)$ be the linear function of the $x_i$'s which gives the value of the $j$-th coordinate of $w$. Then $f(x) = (f_1(x), \ldots, f_k(x))$ is a $k$-tuple of polynomials, each in $ck$ variables, where each polynomial has degree 1. By Theorem 4.1 of Hall et al. (2010), it follows that the number of sign patterns which contain at most $k/2$ non-zero entries is at most $\binom{ck + k/2}{ck}$. Setting $c \leq 1/2$, this is at most $\binom{k}{k/2} \leq \frac{2^k}{\sqrt{k}}$.

Theorem 3.4 Suppose the network $M(x) = w^T\phi(\mathbf{W}_1\phi(\mathbf{W}_2\phi(\cdots\mathbf{W}_d\phi(\mathbf{A}x))\cdots))$, where $\phi$ is the ReLU, satisfies Assumptions 1 and 2. Then the algorithm given in Figure 1 makes $O(kn\log(k/\delta)/\gamma)$ queries to $M(x)$ and returns, in $\mathrm{poly}(n, k, 1/\gamma, \log(1/\delta))$ time, a subspace $V \subseteq \mathrm{Span}(\mathbf{A})$ of dimension at least $\frac{k}{2}$ with probability $1 - \delta$.

Proof. First note that by Lemma 3.2, we can efficiently compute each gradient $\nabla M(g^i)$ using $O(n)$ queries to the network. After querying for the gradient $z^i = \nabla M(g^i)$ for $r' \leq r$ independent Gaussian vectors $g^i \in \mathbb{R}^n$, we obtain the matrix of gradients $\mathbf{Z} = \mathbf{V} \cdot \mathbf{A} \in \mathbb{R}^{r' \times n}$. Now suppose that $\mathbf{V}$ had rank $ck$ for some $c \leq 1/2$. Now consider the gradient $\nabla M(g^{r'+1})$, which can be written as $z^{r'+1} = w^T\left(\prod_{i=1}^d(\mathbf{W}_i)_{S_i^{r'+1}}\right)\mathrm{Diag}(\mathrm{sign}(\mathbf{A}g^{r'+1}))\mathbf{A}$. Thus we can write $z^{r'+1} = \mathbf{V}_{r'+1}\mathbf{A}$, where $\mathbf{V}_{r'+1} \in \mathbb{R}^k$ is a row vector which will be appended to the matrix $\mathbf{V}$ to form a new $\mathbf{Z} = \mathbf{V}\mathbf{A} \in \mathbb{R}^{(r'+1)\times n}$ after the $(r'+1)$-th gradient is computed.
Specifically, for any $j \in [r]$, we have: $\mathbf{V}_j = w^T\left(\prod_{i=1}^d(\mathbf{W}_i)_{S_i^j}\right)\mathrm{Diag}(\mathrm{sign}(\mathbf{A}g^j))$.

Let $\mathcal{E}_{r'+1}$ be the event that $M(g^{r'+1}) \neq 0$. Conditioned on $\mathcal{E}_{r'+1}$, we necessarily have $\mathrm{sign}(\mathbf{V}_{r'+1}) = S_0^{r'+1}$. The reason is as follows: if $M(g^{r'+1}) \neq 0$, then we could not have $S_j^{r'+1} = \emptyset$ for any $j \in \{0, 1, 2, \ldots, d\}$, since this would result in $M(g^{r'+1}) = 0$. It follows that the conditions for Assumption 2 to apply hold, and we have that $w^T\left(\prod_{i=1}^d(\mathbf{W}_i)_{S_i^{r'+1}}\right) \in \mathbb{R}^k$ is entry-wise non-zero. Given this, it follows that the sign pattern of

$$
\mathbf{V}_{r'+1} = w^T\left(\prod_{i=1}^d\left(\mathbf{W}_i\right)_{S_i^{r'+1}}\right)\operatorname{Diag}\left(\operatorname{sign}\left(\mathbf{A}g^{r'+1}\right)\right)
$$

will be precisely $S_0^{r'+1}$, as needed.

Now by Lemma 3.3, the number of sign patterns of $k$-dimensional vectors with at most $k/2$ non-zero entries which are contained in the span of $\mathbf{V}$ is at most $\frac{2^k}{\sqrt{k}}$. Let $\mathcal{S}$ be the set of sign patterns with at most $k/2$ non-zeros which cannot be realized in the row span of $\mathbf{V}$. It follows that $|\mathcal{S}| \geq \frac{1}{2}2^k - \frac{2^k}{\sqrt{k}} \geq \frac{1}{4}2^k$, so by Assumption 1, we have that $\Pr\left[\mathrm{sign}(\mathbf{A}g^{r'+1}) \in \mathcal{S}\right] \geq \gamma/4$.
By a union bound, we have $\Pr\left[\mathrm{sign}(\mathbf{A}g^{r'+1}) \in \mathcal{S}, \mathcal{E}_{r'+1}\right] \geq \gamma/8$.

Conditioned on $\mathrm{sign}(\mathbf{A}g^{r'+1}) \in \mathcal{S}$ and $\mathcal{E}_{r'+1}$ occurring simultaneously, it follows that adding the row vector $\mathbf{V}_{r'+1}$ to the matrix $\mathbf{V}$ will increase its rank by 1. Thus after $O(\log(k/\delta)/\gamma)$ repetitions, the rank of $\mathbf{V}$ will be increased by at least 1 with probability $1 - \delta/k$. By a union bound, after $r = O(k\log(k/\delta)/\gamma)$ repetitions, $\mathbf{V} \in \mathbb{R}^{r\times k}$ will have rank at least $k/2$ with probability at least $1 - \delta$, which implies that the same will hold for $\mathbf{Z}$, since $\mathrm{rank}(\mathbf{Z}) \geq \mathrm{rank}(\mathbf{V})$, which is the desired result.

# B MISSING PROOFS FROM SECTION 4

Proposition 4.1 Let $V \subset \mathbb{R}^n$ be a subspace of dimension $k' < k$, and fix any $\epsilon_0 > 0$. Then we can find a vector $x$ with

$$
0 \leq \sigma_V(x) - 1 \leq 2\epsilon_0
$$

in expected $O(1/\gamma + N\log(1/\epsilon_0))$ time. Moreover, with probability $\gamma/2$ we have that $\nabla_x\sigma_V(x) > \eta/4$ and the tighter bound of

$$
\epsilon_0\eta 2^{-N} \leq \sigma_V(x) - 1 \leq 2\epsilon_0
$$

Proof. We begin by generating Gaussians $g_1, g_2, \ldots$ and computing $M_V(g_i)$. By Property 3 of the network assumptions, we need only repeat the process $1/\gamma$ times in expectation until we obtain an input $y = (\mathbb{I}_n - \mathbf{P}_V)g_i$ with $M(y) = M_V(g_i) = 1$. Since all activation functions satisfy $\phi_i(0) = 0$, we know that $\sigma_V(0 \cdot g_i) = 0$, and $\sigma_V(g_i) \geq 1$. Since $\sigma$ is continuous, it follows that $\psi_{g_i}(c) = \sigma_V(c \cdot g_i)$ is a continuous function $\psi: \mathbb{R} \mapsto \mathbb{R}$.
By the intermediate value theorem, there exists a $c^* \leq 1$ such that $\sigma_V(c^* g_i) = 1$. We argue we can find a $c$ with $|c - c^*| \leq \epsilon_0 2^{-N}$ in time $O(N\log(1/\epsilon_0))$.

To find $c$, we can perform a binary search. We first try $c_0 = 1/2$, and if $M_V(c_0 g_i) = 0$, we recurse into $[1/2, 1]$; otherwise, if $M_V(c_0 g_i) = 1$, we recurse into $[0, 1/2]$. Thus, we always recurse into an interval where the output $M_V(c g_i)$ switches values. It follows that we can find a $c$ with $|c - c^*| \leq \epsilon_0\|g_i\|_2 2^{-N}$ in time $O(N\log(\|g_i\|_2/\epsilon_0))$ for some $c^*$ with $\sigma_V(c^* g_i) = 1$. Now observe that it suffices to binary search a total of $O(N\log(\|g_i\|_2/\epsilon_0))$ times, since $2^N$ is an upper bound on the Lipschitz constant of $\psi_{g_i}$, which gives $0 \leq \sigma_V(cg_i) - 1 \leq \epsilon_0$. Now the expected running time to do this is $O(1/\gamma + N\log(\|g_i\|_2/\epsilon_0))$, but since $\|g_i\|_2$ has Gaussian tails, the expectation of the maximum value of $\|g_i\|_2$ over $1/\gamma$ repetitions is $O(\log(1/\gamma)\sqrt{n})$, and thus the expected running time reduces to the stated bound, which completes the first claim of the Proposition.

For the second claim, note that $\nabla_{g_i}\sigma_V(c^* g_i) > 0$ by construction of the binary search, and since $\sigma_V(c^* g_i) = 1$, by Property 4 with probability $\gamma$ we have that $\nabla_{g_i}\sigma_V(g_i) > \eta$. Now with probability $1 - \gamma/2$, we have that $\|g_i\|_2^2 \leq O(n\log(1/\gamma))$ (see Lemma 1 of Laurent & Massart (2000)), so by a union bound both of these events occur with probability $\gamma/2$.
Now since $\|(c^* - c)g_i\|_2 \leq \epsilon_0 2^{-N}$ (after rescaling $N$ by a factor of $\log(\|g_i\|_2) = O(\log(n))$), and since $2^N$ is also an upper bound on the spectral norm of the Hessian of $\sigma$ by construction, it follows that $\nabla_{g_i}\sigma_V(cg_i) > \eta/2$.

Now we set $x \gets cg_i + c\epsilon_0 2^{-N}g_i/(\|cg_i\|_2)$. First note that this increases $\sigma_V(x) - 1$ by at most $\epsilon_0$ over $\sigma_V(cg_i) - 1$, so $\sigma_V(x) - 1 \leq 2\epsilon_0$, and this does not affect the first claim of the Proposition. But in addition, note that conditioned on the event in the prior paragraph, we now have that $\sigma_V(x) > 1 + \eta\epsilon_0 2^{-N}$. The above facts follow since $2^N$ is polynomially larger than the spectral norm of the Hessian of $\sigma$, so perturbing $cg_i$ by an additive $c\epsilon_0 2^{-N}$ in the direction of $g_i$ results in a positive change of at least $\frac{1}{2}(\eta/4)(\epsilon_0 2^{-N})$ in $\sigma$. Moreover, by applying a similar argument as in the last paragraph, we will have $\nabla_x\sigma_V(x) > \eta/4$ still after this update to $x$.

Lemma 4.2 Fix any $\epsilon, \delta > 0$, and let $N = \mathrm{poly}(n, k, \frac{1}{\gamma}, \sum_{i=1}^{d}\log(L_i), \sum_{i=1}^{d}\log(\kappa_i), \log(\frac{1}{\eta}), \log(\frac{1}{\epsilon}), \log(\frac{1}{\delta}))$. Then given any subspace $V \subset \mathbb{R}^n$ with dimension $\dim(V) < k$, and given $x \in \mathbb{R}^n$ such that $\epsilon_0\eta 2^{-N} \leq \sigma_V(x) - 1 \leq 2\epsilon_0$, where $\epsilon_0 = \Theta(2^{-N^C}/\epsilon)$ for a sufficiently large constant $C = O(1)$, and $\nabla_x\sigma_V(x) > \eta/2$, then with probability $1 - 2^{-N/n^2}$, we can find a vector $v \in \mathbb{R}^n$ in expected $\mathrm{poly}(N)$ time, such that

$$
\|\mathbf{P}_{\mathrm{Span}(\mathbf{A})}v\|_2 \geq (1 - \epsilon)\|v\|_2
$$

and such that $\|\mathbf{P}_V v\|_2 \leq \epsilon\|v\|_2$.

Proof.
We generate $g_1, g_2, \ldots, g_n \sim \mathcal{N}(0, I_n)$, and set $u_i = g_i 2^{-N} - x/\|x\|_2$. We first condition on the event that $\|g_i\|_2 \leq N$ for all $i$, which occurs with probability $1 - 2^{-N/n^2}$. Note $\nabla_{u_i}[\sigma_V(x)] = w^T\mathbf{A}(\mathbb{I}_n - \mathbf{P}_V)u_i$, where $w^T = \nabla[\sigma(\mathbf{A}(\mathbb{I}_n - \mathbf{P}_V)x)]^T \in \mathbb{R}^k$, which does not depend on $u_i$. Thus $w^T\mathbf{A}(\mathbb{I}_n - \mathbf{P}_V)$ is a vector in the row span of $\mathbf{A}(\mathbb{I}_n - \mathbf{P}_V)$. We can write

$$
M_V(x + cu_i) = \tau\left(\sigma_V(x) + cw^T\mathbf{A}\left(\mathbb{I}_n - \mathbf{P}_V\right)u_i + \Xi(cu_i)\right)
$$

where $\Xi(cu_i) = O(\|c(\mathbb{I}_n - \mathbf{P}_V)u_i\|_2^2 2^N) = O(\|cu_i\|_2^2 2^N)$ is the error term for the linear approximation. Note that the factor of $2^N$ comes from the fact that the spectral norm of the Hessian of $\sigma: \mathbb{R}^n \to \mathbb{R}$ can be bounded by $\prod_i\kappa_i L_i \leq 2^N$. Fix some $\beta > 0$. We can now binary search again, as in Proposition 4.1, with $O(\log(N/\beta))$ iterations over $c$, querying the values $M_V(x + cu_i)$ to find a value $c = c_i > 0$ such that

$$
1 - \beta \leq \sigma_V(x + c_i u_i) \leq 1
$$

so

$$
1 - \beta \leq \left(\sigma_V(x) + c_i w^T\mathbf{A}(\mathbb{I}_n - \mathbf{P}_V)u_i + \Xi(c_i u_i)\right) \leq 1
$$

We first claim that the $c_i$ which achieves this value satisfies $\|c_i u_i\|_2 \leq (10\cdot 2^{-N}\epsilon_0/\eta)$. To see this, first note that by Proposition 4.1, we have $\nabla_x\sigma_V(x) > \eta/4$ with probability $\gamma$. We will condition on this occurring, and if it fails to occur we argue that we can detect this and regenerate $x$.
Now conditioned on the above, we first claim that $\nabla_{u_i}\sigma_V(x) \geq \eta/8$, which follows from the fact that we can bound the angle between the unit vectors in the directions of $u_i$ and $x$ by

$$
\cos\left(\operatorname{angle}\left(u_i, x\right)\right) = \left\langle\frac{u_i}{\|u_i\|_2}, \frac{x}{\|x\|_2}\right\rangle \geq (1 - n2^{-N}) > (1 - \eta 2^{-N/2})
$$

along with the fact that we have $\nabla_x\sigma_V(x) > \eta/4$. Since $|\sigma_V(x) - 1| < 2\epsilon_0 < 2^{-N^C}$, and since $2^N$ is an upper bound on the spectral norm of the Hessian of $\sigma$, we have that $\nabla_{u_i}\sigma_V(x + cu_i) > \eta/8 - 2^{-N} > \eta/10$ for all $c < 2^{-2N}$. In other words, if $H$ is the Hessian of $\sigma$, then perturbing $x$ by a point with norm $O(c) \leq 2^{-2N}$ can change the value of the gradient by a vector of norm at most $2^{-2N}\|H\|_2 \leq 2^{-N}$, where $\|H\|_2$ is the spectral norm of the Hessian. It follows that setting $c = (10\cdot 2^{-N}\epsilon_0/\eta)$ is sufficient for $\sigma_V(x + cu_i) < 1$, which completes the above claim.

Now observe that if, after binary searching, the property that $c \leq (10\cdot 2^{-N}\epsilon_0/\eta)$ does not hold, then this implies that we did not have $\nabla_x\sigma_V(x) > \eta/4$ to begin with, so we can throw away this $x$ and repeat until this condition does hold. By Property 4, we must only repeat $O(1/\gamma)$ times in expectation in order for the condition to hold.

Next, also note that we can bound $c_i \geq \epsilon_0\eta 2^{-N}/N$, since $2^N$ again is an upper bound on the norm of the gradient of $\sigma$ and we know that $\sigma_V(x) - 1 > \epsilon_0\eta 2^{-N}$. Altogether, we now have that $|\Xi(c_i u_i)| \leq c_i^2 2^N \leq (10\cdot 2^{-N}\epsilon_0/\eta)^2 2^N$.
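The binary search used above queries only the thresholded output of the network; a simplified sketch, with a hypothetical monotone function standing in for $\sigma_V$ (everything below is illustrative, not the paper's exact procedure):

```python
import math

# Simplified sketch of the binary search over c: only the thresholded output
# tau(sigma(c)) in {0, 1} is observable, and we localize the crossing point c*
# where sigma(c*) = 1 to additive precision 2^-50. The function sigma below
# is a hypothetical increasing stand-in for sigma_V.
sigma = lambda c: math.exp(c) - 1.0          # sigma(0) = 0 and sigma is increasing
oracle = lambda c: 1 if sigma(c) >= 1.0 else 0

lo, hi = 0.0, 1.0                            # sigma(0) < 1 <= sigma(1)
for _ in range(50):                          # ~50 bits of precision
    mid = (lo + hi) / 2
    if oracle(mid):                          # output 1: crossing is at or below mid
        hi = mid
    else:                                    # output 0: crossing is above mid
        lo = mid
c_star = math.log(2.0)                       # true crossing: exp(c) - 1 = 1
print(hi - c_star)
```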
We can repeat this binary search to find $c_i$ for $n$ different perturbations $u_1, \ldots, u_n$, and obtain the resulting $c_1, \ldots, c_n$ such that for each $i \in [n]$ we have

$$
1 - \sigma_V(x) - \Xi\left(c_i u_i\right) - \beta_i = c_i w^T\mathbf{A}\left(\mathbb{I}_n - \mathbf{P}_V\right)u_i
$$

where $\beta_i$ is the error obtained from the binary search on $c_i$, and therefore satisfies $|\beta_i| \leq 2^{-N^2}\epsilon_0^2$ after taking $\mathrm{poly}(N)$ iterations in the search. Now we know $c_i$ and $u_i$, so we can set up a linear system

$$
\min_y\|y^T\mathbf{B} - b^T\|_2^2
$$

for an unknown $y \in \mathbb{R}^n$, where the $i$-th column of $\mathbf{B}$ is given by $\mathbf{B}_i = u_i \in \mathbb{R}^n$, and $b_i = 1/c_i$ for each $i \in [n]$. First note that the set $\{u_1, u_2, \ldots, u_n\}$ is linearly independent with probability 1, since each $u_i$ differs from the fixed vector $-x/\|x\|_2$ by the independent scaled Gaussian vector $2^{-N}g_i$. Thus $\mathbf{B}$ has rank $n$, and the above system has a unique solution $y^*$.
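The linear-system step can be illustrated in the noiseless case, with the error terms $\Xi$ and $\beta_i$ set to zero (all names and sizes below are illustrative only):

```python
import numpy as np

# Noiseless illustration of the linear-system step (error terms Xi and beta_i
# set to zero): if 1/c_i = <y_hat, u_i> for n linearly independent directions
# u_i, then solving y^T B = b^T with columns B_i = u_i recovers y_hat. The
# vector y_hat stands in for w^T A (I - P_V) / (1 - sigma_V(x)).
rng = np.random.default_rng(3)
n = 8
y_hat = rng.standard_normal(n)
B = rng.standard_normal((n, n))   # column i plays the role of u_i
b = y_hat @ B                     # b_i = <y_hat, u_i>, i.e. the measured 1 / c_i

y_star, *_ = np.linalg.lstsq(B.T, b, rcond=None)
print(np.allclose(y_star, y_hat))
```

In the proof the right-hand side carries the small $\Xi$ and $\beta_i$ perturbations, and the argument below bounds how far the least-squares solution can drift from $\widehat{y}$ in terms of $\sigma_{\min}(\mathbf{B})$.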
Next, observe that setting $\widehat{y} = \frac{w^T\mathbf{A}(\mathbb{I}_n - \mathbf{P}_V)}{(1 - \sigma_V(x))}$, we obtain that for each $i \in [n]$:

$$
(\widehat{y}^T\mathbf{B})_i = \frac{1}{c_i} - \frac{1}{c_i}\frac{\Xi(c_i u_i) + \beta_i}{1 - \sigma_V(x)}
$$

Thus

$$
\begin{array}{l}
\left\|\widehat{y}^T\mathbf{B} - b^T\right\|_2 \leq \left(\sum_{i=1}^n\left(\frac{1}{c_i}\cdot\frac{\Xi\left(c_i u_i\right) + \beta_i}{1 - \sigma_V(x)}\right)^2\right)^{1/2} \\
\leq \left(n\left(\frac{1}{c_i}\cdot\frac{\left(10\cdot 2^{-N}\epsilon_0/\eta\right)^2 2^N + 2^{-N^2}\epsilon_0^2}{1 - \sigma_V(x)}\right)^2\right)^{1/2} \tag{5} \\
\leq (2^{-N})^{O(1)}\frac{\epsilon_0^2}{c_i(1 - \sigma_V(x))} \\
\end{array}
$$

Now taking $y^*$ such that $(y^*)^T\mathbf{B} = b^T$, the cost of the optimal solution is 0, and $\|\widehat{y}^T\mathbf{B} - b^T\|_2 \leq (2^{-N})^{O(1)}\frac{\epsilon_0^2}{c_i(1 - \sigma_V(x))}$, so $\|(y^* - \widehat{y})^T\mathbf{B}\|_2 \leq (2^{-N})^{O(1)}\frac{\epsilon_0^2}{c_i(1 - \sigma_V(x))}$. By the definition of the minimum singular value of $\mathbf{B}$, it follows that $\|y^* - \widehat{y}\|_2 \leq \frac{1}{\sigma_{\min}(\mathbf{B})}(2^{-N})^{O(1)}\frac{\epsilon_0^2}{c_i(1 - \sigma_V(x))}$. Now using the fact that $\mathbf{B} = 2^{-N}\mathbf{G} + \mathbf{X}$, where $\mathbf{G}$ is a Gaussian matrix and $\mathbf{X}$ is the matrix each of whose columns equals the fixed vector $-x/\|x\|_2$, we can apply Theorem 1.6 of Cook et al. (2018) to obtain $\sigma_{\min}(\mathbf{B}) \geq (2^{-N})^{O(1)}$, so $\|y^* - \widehat{y}\|_2 \leq (2^{-N})^{O(1)}\frac{\epsilon_0^2}{c_i(1 - \sigma_V(x))}$.
Thus we have $\|y^*\|_2 \leq \|\widehat{y}\|_2 + (2^{-N})^{O(1)}\frac{\epsilon_0^2}{c_i(1 - \sigma_V(x))}$, and moreover note that $\|\widehat{y}\|_2 \geq \|\nabla\sigma_V(x)\|_2\frac{1}{1 - \sigma_V(x)} > \frac{\eta}{8(1 - \sigma_V(x))}$. So we have that

$$
\begin{array}{l}
\frac{\|y^* - \widehat{y}\|_2}{\|\widehat{y}\|_2} \leq \frac{(2^{-N})^{O(1)}\frac{\epsilon_0^2}{c_i(1 - \sigma_V(x))}}{\|\widehat{y}\|_2} \\
\leq \left(2^{-N}\right)^{O(1)}\frac{\epsilon_0^2}{c_i} \tag{6} \\
\leq \left(2^{-N}\right)^{O(1)}\epsilon_0 \\
\leq 2^{-N/2} \\
\end{array}
$$

where the last inequality holds by taking $C$ larger than some constant in the definition of $\epsilon_0$. Thus $\|y^* - \widehat{y}\|_2 \leq 2^{-N/2}\|\widehat{y}\|_2$, and hence $\|y^* - \widehat{y}\|_2 \leq 2\cdot 2^{-N/2}\|y^*\|_2 \leq \epsilon\|y^*\|_2$ after scaling $N$ up by a factor of $\log^2(1/\epsilon)$. Thus by setting $v = y^*$, and observing that $\widehat{y}$ is in the span of $\mathbf{A}$, we ensure $\|\mathbf{P}_{\mathrm{Span}(\mathbf{A})}v\|_2 \geq (1 - \epsilon)\|v\|_2$ as desired. For the final claim, note that if we had $y^* = \widehat{y}$ exactly, we would have $\|\mathbf{P}_V y^*\|_2 = 0$, since $\widehat{y}$ is orthogonal to the subspace $V$. It follows that since $\|y^* - \widehat{y}\|_2^2 \leq \epsilon^2\|y^*\|_2^2$, we have $\|(\mathbb{I}_n - \mathbf{P}_V)y^*\|_2^2 \geq (1 - \epsilon^2)\|y^*\|_2^2$, so by the Pythagorean theorem, we have $\|\mathbf{P}_V y^*\|_2^2 = \|y^*\|_2^2 - \|(\mathbb{I}_n - \mathbf{P}_V)y^*\|_2^2 \leq \epsilon^2\|y^*\|_2^2$ as desired.

Theorem 4.3 Suppose the network $M(x) = \tau(\sigma(\mathbf{A}x))$ satisfies the conditions described at the beginning of this section.
Then Algorithm 2 runs in $\mathrm{poly}(N)$ time, making at most $\mathrm{poly}(N)$ queries to $M(x)$, where $N = \mathrm{poly}(n, k, \frac{1}{\gamma}, \sum_{i=1}^{d}\log(L_i), \sum_{i=1}^{d}\log(\kappa_i), \log(\frac{1}{\eta}), \log(\frac{1}{\epsilon}), \log(\frac{1}{\delta}))$, and returns with probability $1 - \delta$ a subspace $V \subset \mathbb{R}^n$ of dimension $k$ such that for any $x \in V$, we have

$$
\|\mathbf{P}_{\mathrm{Span}(\mathbf{A})}x\|_2 \geq (1 - \epsilon)\|x\|_2
$$

Proof. We iteratively apply Lemma 4.2, each time appending the output $v \in \mathbb{R}^n$ of the lemma to the subspace $V \subset \mathbb{R}^n$ constructed so far. WLOG we can assume $v$ is a unit vector by scaling it. At any intermediate point, when $\dim(V) = k' < k$, we have $V = \mathrm{Span}(v_1, \ldots, v_{k'})$, where each $v_i$ satisfies $\|\mathbf{P}_{\mathrm{Span}\{v_1, \ldots, v_{i-1}\}}v_i\|_2 \leq \epsilon$. Note that the latter fact implies that $v_1, \ldots, v_{k'}$ are linearly independent. Thus at the end, we recover a rank $k$ subspace $V = \mathrm{Span}(v_1, \ldots, v_k)$ with the property that $\|\mathbf{P}_{\mathrm{Span}(\mathbf{A})}v_i\|_2^2 \geq (1 - \epsilon)\|v_i\|_2^2$ for each $i \in [k]$.

Now let $\mathbf{V} \in \mathbb{R}^{n\times k}$ be the matrix with $i$-th column equal to $v_i$. Fix any unit vector $x = \mathbf{V}a \in V$, where $a \in \mathbb{R}^k$ is uniquely determined by $x$. Let $\mathbf{V} = \mathbf{V}^+ + \mathbf{V}^-$, where $\mathbf{V}^+ = \mathbf{P}_{\mathrm{Span}(\mathbf{A})}\mathbf{V}$ and $\mathbf{V}^- = \mathbf{V} - \mathbf{V}^+$.
Then $x = \mathbf{V}^{+}a + \mathbf{V}^{-}a$, and

$$
\begin{array}{rl} \left\| \left(\mathbb{I}_{n} - \mathbf{P}_{\mathrm{Span}(\mathbf{A})}\right) x \right\|_{2} & \leq \left\| \left(\mathbb{I}_{n} - \mathbf{P}_{\mathrm{Span}(\mathbf{A})}\right) \mathbf{V}^{+} a \right\|_{2} + \left\| \left(\mathbb{I}_{n} - \mathbf{P}_{\mathrm{Span}(\mathbf{A})}\right) \mathbf{V}^{-} a \right\|_{2} \\ & \leq \left\| \left(\mathbb{I}_{n} - \mathbf{P}_{\mathrm{Span}(\mathbf{A})}\right) \mathbf{V}^{-} a \right\|_{2} \\ & \leq \| \mathbf{V}^{-} \|_{2} \, \| a \|_{2} \end{array} \tag{7}
$$

First note that by the construction of the $v_{i}$'s, each column of $\mathbf{V}^{-}$ has norm $O(\epsilon)$, thus $\|\mathbf{V}^{-}\|_{2} \leq O(\sqrt{n}\epsilon)$. Moreover, since $\|x\|_{2} = 1$, it follows that $\|a\|_{2} \leq \frac{1}{\sigma_{\min}(\mathbf{V})}$, which we now bound. Since $\|\mathbf{P}_{\mathrm{Span}\{v_{1},\ldots,v_{i-1}\}} v_{i}\|_{2} \leq \epsilon$ for each $i \in [k]$, we have

$$
\begin{array}{rl} \|\mathbf{V} a\|_{2} & = \Big\| \sum_{i=1}^{n} v_{i} a_{i} \Big\|_{2} \\ & = \Big\| \sum_{i=1}^{n} \left(\mathbb{I}_{n} - \mathbf{P}_{\mathrm{Span}\{v_{1},\dots,v_{i-1}\}}\right) v_{i} a_{i} + \sum_{i=1}^{n} \mathbf{P}_{\mathrm{Span}\{v_{1},\dots,v_{i-1}\}} v_{i} a_{i} \Big\|_{2} \\ & \geq \Big\| \sum_{i=1}^{n} \left(\mathbb{I}_{n} - \mathbf{P}_{\mathrm{Span}\{v_{1},\dots,v_{i-1}\}}\right) v_{i} a_{i} \Big\|_{2} - \Big\| \sum_{i=1}^{n} \mathbf{P}_{\mathrm{Span}\{v_{1},\dots,v_{i-1}\}} v_{i} a_{i} \Big\|_{2} \\ & \geq \Big\| \sum_{i=1}^{n} \left(\mathbb{I}_{n} - \mathbf{P}_{\mathrm{Span}\{v_{1},\dots,v_{i-1}\}}\right) v_{i} a_{i} \Big\|_{2} - O(\epsilon) \|a\|_{2} \\ & = \Big( \Big\| \sum_{i=1}^{n} \left(\mathbb{I}_{n} - \mathbf{P}_{\mathrm{Span}\{v_{1},\dots,v_{i-1}\}}\right) v_{i} a_{i} \Big\|_{2}^{2} - O(\epsilon) \|a\|_{2} \Big\| \sum_{i=1}^{n} \left(\mathbb{I}_{n} - \mathbf{P}_{\mathrm{Span}\{v_{1},\dots,v_{i-1}\}}\right) v_{i} a_{i} \Big\|_{2} + O(\epsilon^{2}) \|a\|_{2}^{2} \Big)^{1/2} \\ & = \Big( \Big\| \sum_{i=1}^{n} \left(\mathbb{I}_{n} - \mathbf{P}_{\mathrm{Span}\{v_{1},\dots,v_{i-1}\}}\right) v_{i} a_{i} \Big\|_{2}^{2} - O(\epsilon) \|a\|_{2}^{2} \Big)^{1/2} \\ & = \Big( \sum_{i=1}^{n} \Big\| \left(\mathbb{I}_{n} - \mathbf{P}_{\mathrm{Span}\{v_{1},\dots,v_{i-1}\}}\right) v_{i} a_{i} \Big\|_{2}^{2} - O(\epsilon) \|a\|_{2}^{2} \Big)^{1/2} \\ & \geq \Big( \sum_{i=1}^{n} \left(1 - O(\epsilon^{2})\right) |a_{i}|^{2} - O(\epsilon) \|a\|_{2}^{2} \Big)^{1/2} \\ & \geq \Big( \|a\|_{2}^{2} - O(\epsilon) \|a\|_{2}^{2} \Big)^{1/2} \\ & \geq (1 - O(\epsilon)) \|a\|_{2} \end{array} \tag{8}
$$

Thus $\sigma_{\min}(\mathbf{V}) \geq 1 - O(\epsilon)$, so we have $\| (\mathbb{I}_n - \mathbf{P}_{\mathrm{Span}(\mathbf{A})})x \|_2 \leq \| \mathbf{V}^- \|_2 \frac{1}{\sigma_{\min}(\mathbf{V})} \leq 2\sqrt{n}\epsilon$. By the Pythagorean theorem: $\| \mathbf{P}_{\mathrm{Span}(\mathbf{A})}x \|_2^2 = 1 - \| (\mathbb{I}_n - \mathbf{P}_{\mathrm{Span}(\mathbf{A})})x \|_2^2 \geq 1 - O(n\epsilon^2)$. Thus we can scale $\epsilon$ by a factor of $\Theta(1/\sqrt{n})$ in the call to Lemma 4.2, which gives the desired result of $\| \mathbf{P}_{\mathrm{Span}(\mathbf{A})}x \|_2 \geq 1 - \epsilon$.
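The chain of inequalities in Equation 8 can be sanity-checked numerically. Below is a small sketch (not part of the proof): the construction of the nearly orthonormal $v_i$'s is an illustrative assumption, and the constants in the assertions stand in for the $O(\cdot)$ factors.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 30, 1e-3

# Nearly orthonormal columns: v_i = q_i + eps * (unit-norm perturbation), so
# each v_i has only O(eps) overlap with the span of v_1, ..., v_{i-1}.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
E = rng.standard_normal((n, n))
E /= np.linalg.norm(E, axis=0)  # unit-norm columns
V = Q + eps * E

# Hypothesis: || P_{Span{v_1,...,v_{i-1}}} v_i ||_2 = O(eps) for every i.
for i in range(1, n):
    B, _ = np.linalg.qr(V[:, :i])  # orthonormal basis of the partial span
    assert np.linalg.norm(B @ (B.T @ V[:, i])) <= 3 * eps

# Conclusion of Equation 8: sigma_min(V) >= 1 - O(eps)
# (with the O(.) here absorbing a sqrt(n) factor).
s_min = np.linalg.svd(V, compute_uv=False)[-1]
print(s_min)  # close to 1
assert s_min >= 1 - 3 * np.sqrt(n) * eps
```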
\ No newline at end of file diff --git a/spanrecoveryfordeepneuralnetworkswithapplicationstoinputobfuscation/images.zip b/spanrecoveryfordeepneuralnetworkswithapplicationstoinputobfuscation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..78ec911137562f6134ac5aed63ec58196907b3c2 --- /dev/null +++ b/spanrecoveryfordeepneuralnetworkswithapplicationstoinputobfuscation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c3ee16e47b5150e27c36a8f4038589686554ac4837b8af7e7fe026e05f915ff9 +size 481349 diff --git a/spanrecoveryfordeepneuralnetworkswithapplicationstoinputobfuscation/layout.json b/spanrecoveryfordeepneuralnetworkswithapplicationstoinputobfuscation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..f51f30a56906baa4b699dee8d340e365669505df --- /dev/null +++ b/spanrecoveryfordeepneuralnetworkswithapplicationstoinputobfuscation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:54917e0b7f54aa845867562cb78eb1f2112e15a4441819607c9bf06204b348fa +size 1258203 diff --git a/spikebasedcausalinferenceforweightalignment/978c16cb-88fe-4d72-bcb9-c95a05331c9c_content_list.json b/spikebasedcausalinferenceforweightalignment/978c16cb-88fe-4d72-bcb9-c95a05331c9c_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..58459fa28d0d0d1750f95a3fdd8d02a6224cdd46 --- /dev/null +++ b/spikebasedcausalinferenceforweightalignment/978c16cb-88fe-4d72-bcb9-c95a05331c9c_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:499a0cedf72242314cd0cd873ad1b0c9753d991a286fa4ef608adc7f791f862a +size 67615 diff --git a/spikebasedcausalinferenceforweightalignment/978c16cb-88fe-4d72-bcb9-c95a05331c9c_model.json b/spikebasedcausalinferenceforweightalignment/978c16cb-88fe-4d72-bcb9-c95a05331c9c_model.json new file mode 100644 index 0000000000000000000000000000000000000000..8956f884ea90c0c6834f1e4fc6e8a62cb58c455b --- 
/dev/null +++ b/spikebasedcausalinferenceforweightalignment/978c16cb-88fe-4d72-bcb9-c95a05331c9c_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0a0a6dabe9d9d074645c8d3ed25f01adfba1ea26c47226a7acf83d86c52abea8 +size 81596 diff --git a/spikebasedcausalinferenceforweightalignment/978c16cb-88fe-4d72-bcb9-c95a05331c9c_origin.pdf b/spikebasedcausalinferenceforweightalignment/978c16cb-88fe-4d72-bcb9-c95a05331c9c_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5fe98162fde301bcca264ca4ec20a1e5c924004d --- /dev/null +++ b/spikebasedcausalinferenceforweightalignment/978c16cb-88fe-4d72-bcb9-c95a05331c9c_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:23fca51cd8e9c1c2c5d74aaadf3df1626775f189a60755292c60723dc393c0bc +size 1721556 diff --git a/spikebasedcausalinferenceforweightalignment/full.md b/spikebasedcausalinferenceforweightalignment/full.md new file mode 100644 index 0000000000000000000000000000000000000000..54d86e702a400ec10b174d77bd71a5d2ce1cb66e --- /dev/null +++ b/spikebasedcausalinferenceforweightalignment/full.md @@ -0,0 +1,325 @@ +# SPIKE-BASED CAUSAL INFERENCE FOR WEIGHT ALIGNMENT + +Jordan Guerguiev $^{1,2}$ , Konrad P. Kording $^{3}$ , Blake A. 
Richards $^{4,5,6,7,*}$

$^{1}$ Department of Biological Sciences, University of Toronto Scarborough, Toronto, ON, Canada
$^{2}$ Department of Cell and Systems Biology, University of Toronto, Toronto, ON, Canada
$^{3}$ Department of Bioengineering, University of Pennsylvania, PA, United States
$^{4}$ Mila, Montreal, QC, Canada
$^{5}$ Department of Neurology & Neurosurgery, McGill University, Montreal, QC, Canada
$^{6}$ School of Computer Science, McGill University, Montreal, QC, Canada
$^{7}$ Canadian Institute for Advanced Research, Toronto, ON, Canada
* Corresponding author, email: blake.richards@mcgill.ca

# ABSTRACT

In artificial neural networks trained with gradient descent, the weights used for processing stimuli are also used during backward passes to calculate gradients. For the real brain to approximate gradients, gradient information would have to be propagated separately, such that one set of synaptic weights is used for processing and another set is used for backward passes. This produces the so-called "weight transport problem" for biological models of learning, where the backward weights used to calculate gradients need to mirror the forward weights used to process stimuli. This weight transport problem has been considered so hard that popular proposals for biological learning assume that the backward weights are simply random, as in the feedback alignment algorithm. However, such random weights do not appear to work well for large networks. Here we show how the discontinuity introduced in a spiking system can lead to a solution to this problem. The resulting algorithm is a special case of an estimator used for causal inference in econometrics, regression discontinuity design. We show empirically that this algorithm rapidly makes the backward weights approximate the forward weights. As the backward weights become correct, this improves learning performance over feedback alignment on tasks such as Fashion-MNIST, SVHN, CIFAR-10 and VOC.
Our results demonstrate that a simple learning rule in a spiking network can allow neurons to produce the right backward connections and thus solve the weight transport problem.

# 1 INTRODUCTION

Any learning system that makes small changes to its parameters will only improve if the changes are correlated to the gradient of the loss function. Given that people and animals can show clear behavioral improvements on specific tasks (Shadmehr et al., 2010), then however the brain determines its synaptic updates, those changes must, on average, correlate with the gradients of some loss function related to the task (Raman et al., 2019). As such, the brain may have some way of calculating at least an estimator of gradients.

To date, the bulk of models for how the brain may estimate gradients are framed in terms of setting up a system where there are both bottom-up, feedforward and top-down, feedback connections. The feedback connections are used for propagating activity that can be used to estimate a gradient (Williams, 1992; Lillicrap et al., 2016; Akrout et al., 2019; Roelfsema & Ooyen, 2005; Lee et al., 2015; Scellier & Bengio, 2017; Sacramento et al., 2018). In all such models, the gradient estimator is less biased the more the feedback connections mirror the feedforward weights. For example, in the REINFORCE algorithm (Williams, 1992), and related algorithms like AGREL (Roelfsema & Ooyen, 2005), learning is optimal when the feedforward and feedback connections are perfectly symmetric, such that for any two neurons $i$ and $j$ the synaptic weight from $i$ to $j$ equals the weight from $j$ to $i$, e.g. $W_{ji} = W_{ij}$ (Figure 1). Some algorithms simply assume weight symmetry, such as Equilibrium Propagation (Scellier & Bengio, 2017).
The requirement for synaptic weight symmetry is sometimes referred to as the "weight transport problem", since it seems to mandate that the values of the feedforward synaptic weights are somehow transported into the feedback weights, which is not biologically realistic (Crick, 1989; Grossberg, 1987). Solving the weight transport problem is crucial to biologically realistic gradient estimation algorithms (Lillicrap et al., 2016), and is thus an important topic of study.

Several solutions to the weight transport problem have been proposed for biological models, including hard-wired sign symmetry (Moskovitz et al., 2018), random fixed feedback weights (Lillicrap et al., 2016), and learning to make the feedback weights symmetric (Lee et al., 2015; Sacramento et al., 2018; Akrout et al., 2019; Kolen & Pollack, 1994). Learning to make the weights symmetric is promising because it is both more biologically feasible than hard-wired sign symmetry (Moskovitz et al., 2018) and it leads to less bias in the gradient estimator (and thereby, better training results) than using fixed random feedback weights (Bartunov et al., 2018; Akrout et al., 2019). However, of the current proposals for learning weight symmetry, some do not actually work well in practice (Bartunov et al., 2018) and others still rely on some biologically unrealistic assumptions, including scalar-valued activation functions (as opposed to all-or-none spikes) and separate error feedback pathways with one-to-one matching between processing neurons for the forward pass and error propagation neurons for the backward pass (Akrout et al., 2019; Sacramento et al., 2018).

Interestingly, learning weight symmetry is implicitly a causal inference problem: the feedback weights need to represent the causal influence of the upstream neuron on its downstream partners. As such, we may look to the causal inference literature to develop better, more biologically realistic algorithms for learning weight symmetry.
In econometrics, which focuses on quasi-experiments, researchers have developed various means of estimating causality without the need to actually randomize and control the variables in question (Angrist & Pischke, 2008; Marinescu et al., 2018). Among such quasi-experimental methods, regression discontinuity design (RDD) is particularly promising. It uses the discontinuity introduced by a threshold to estimate causal effects. For example, RDD can be used to estimate the causal impact of getting into a particular school (which is a discontinuous, all-or-none variable) on later earning power. RDD is also potentially promising for estimating causal impact in biological neural networks, because real neurons communicate with discontinuous, all-or-none spikes. Indeed, it has been shown that the RDD approach can produce unbiased estimators of causal effects in a system of spiking neurons (Lansdell & Kording, 2019). Given that learning weight symmetry is fundamentally a causal estimation problem, we hypothesized that RDD could be used to solve the weight transport problem in biologically realistic, spiking neural networks.

Here, we present a learning rule for feedback synaptic weights that is a special case of the RDD algorithm previously developed for spiking neural networks (Lansdell & Kording, 2019). Our algorithm takes advantage of a neuron's spiking discontinuity to infer the causal effect of its spiking on the activity of downstream neurons. Since this causal effect is proportional to the feedforward synaptic weight between the two neurons, by estimating it, feedback synapses can align their weights to be symmetric with the reciprocal feedforward weights, thereby overcoming the weight transport problem.
We demonstrate that this algorithm reduces a cost function that measures weight symmetry (or the lack thereof), that it achieves better weight symmetry in spiking neural networks than other algorithms for weight alignment (Akrout et al., 2019), and that it leads to better learning in deep neural networks than the use of fixed feedback weights (Lillicrap et al., 2016). Altogether, these results demonstrate a novel algorithm for solving the weight transport problem that takes advantage of discontinuous spiking, and which could be used in future models of biologically plausible gradient estimation.

# 2 RELATED WORK

Previous work has shown that even when feedback weights in a neural network are initialized randomly and remain fixed throughout training, the feedforward weights learn to partially align themselves to the feedback weights, an algorithm known as feedback alignment (Lillicrap et al., 2016).

![](images/408732b6bd4a6a76f8f94c9c0d54a41828eb6bdeaef6f139f6a5e3ef87b157d2.jpg)
Figure 1: Illustration of weight symmetry in a neural network with feedforward and feedback connections. Processing of inputs to outputs is mirrored by backward flow of gradient information. Gradient estimation is best when feedback synapses have symmetric weights to feedforward synapses (illustrated with colored circles).

While feedback alignment is successful at matching the learning performance of true gradient descent in relatively shallow networks, it does not scale well to deeper networks and performs poorly on difficult computer vision tasks (Bartunov et al., 2018).

The gap in learning performance between feedback alignment and gradient descent can be overcome if feedback weights are continually updated to match the sign of the reciprocal feedforward weights (Moskovitz et al., 2018).
Furthermore, learning the feedback weights in order to make them more symmetric to the feedforward weights has been shown to improve learning over feedback alignment (Akrout et al., 2019).

To understand the underlying dynamics of learning weight symmetry, Kunin et al. (2019) define the symmetric alignment cost function, $\mathcal{R}_{\mathrm{SA}}$, as one possible cost function that, when minimized, leads to weight symmetry:

$$
\begin{array}{rl} \mathcal{R}_{\mathrm{SA}} & := \left\| W - Y^{T} \right\|_{F}^{2} \\ & = \| W \|_{F}^{2} + \| Y \|_{F}^{2} - 2 \operatorname{tr}(W Y) \end{array} \tag{1}
$$

where $W$ are feedforward weights and $Y$ are feedback weights. The first two terms are simply weight regularization terms that can be minimized using techniques like weight decay. But the third term is the critical one for ensuring weight alignment.

In this paper we present a biologically plausible method of minimizing the third term. This method is based on the work of Lansdell & Kording (2019), who demonstrated that neurons can estimate their causal effect on a global reward signal using the discontinuity introduced by spiking. This is accomplished using RDD, wherein a piecewise linear model is fit around a discontinuity, and the differences in the regression intercepts indicate the causal impact of the discontinuous variable. In Lansdell & Kording (2019), neurons learn a piecewise linear model of a reward signal as a function of their input drive, and estimate the causal effect of spiking by looking at the discontinuity at the spike threshold. Here, we modify this technique to perform causal inference on the effect of spiking on downstream neurons, rather than a reward signal. We leverage this to develop a learning rule for feedback weights that induces weight symmetry and improves training.
+ +# 3 OUR CONTRIBUTIONS + +The primary contributions of this paper are as follows: + +- We demonstrate that spiking neurons can accurately estimate the causal effect of their spiking on downstream neurons by using a piece-wise linear model of the feedback as a function of the input drive to the neuron. +- We present a learning rule for feedback weights that uses the causal effect estimator to encourage weight symmetry. We show that when feedback weights update using this algorithm it minimizes the symmetric alignment cost function, $\mathcal{R}_{\mathrm{SA}}$ . +- We demonstrate that this learning weight symmetry rule improves training and test accuracy over feedback alignment, approaching gradient-descent-level performance on Fashion-MNIST, SVHN, CIFAR-10 and VOC in deeper networks. + +# 4 METHODS + +# 4.1 GENERAL APPROACH + +In this work, we utilize a spiking neural network model for aligning feedforward and feedback weights. However, due to the intense computational demands of spiking neural networks, we only use spikes for the RDD algorithm. We then use the feedback weights learned by the RDD algorithm for training a non-spiking convolutional neural network. We do this because the goal of our work here is to develop an algorithm for aligning feedback weights in spiking networks, not for training feedforward weights in spiking networks on other tasks. Hence, in the interest of computational expediency, we only used spiking neurons when learning to align the weights. Additional details on this procedure are given below. + +# 4.2 RDD FEEDBACK TRAINING PHASE + +At the start of every training epoch of a convolutional neural network, we use an RDD feedback weight training phase, during which all fully-connected sets of feedback weights in the network are updated. To perform these updates, we simulate a separate network of leaky integrate-and-fire (LIF) neurons. 
LIF neurons incorporate key elements of real neurons such as voltages, spiking thresholds and refractory periods. Each epoch, we begin by training the feedback weights in the LIF network. These weights are then transferred to the convolutional network, which is used for training the feedforward weights. The new feedforward weights are then transferred to the LIF net, and another feedback training phase with the LIF net starts the next epoch (Figure 2A). During the feedback training phase, the LIF network undergoes a training phase lasting $90\mathrm{~s}$ of simulated time $(30\mathrm{~s}$ per set of feedback weights) (Figure 2B). We find that the spiking network used for RDD feedback training and the convolutional neural network are very closely matched in the activity of the units (Figure S1), which gives us confidence that this approach of using a separate non-spiking network for training the feedforward weights is legitimate. + +During the feedback training phase, a small subset of neurons in the first layer receive driving input that causes them to spike, while other neurons in this layer receive no input (see Appendix A.2). The subset of neurons that receive driving input is randomly selected every $100\mathrm{ms}$ of simulated time. This continues for $30~\mathrm{s}$ in simulated time, after which the same process occurs for the subsequent hidden layers in the network. This protocol enforces sparse, de-correlated firing patterns that improve the causal inference procedure of RDD. + +# 4.3 LIF DYNAMICS + +During the RDD feedback training phase, each unit in the network is simulated as a leaky integrate-and-fire neuron. Spiking inputs from the previous layer arrive at feedforward synapses, where they are convolved with a temporal exponential kernel to simulate post-synaptic spike responses $\mathbf{p} = [p_1,p_2,\dots,p_m]$ (see Appendix A.1). The neurons can also receive driving input $\tilde{p}_i$ , instead of synaptic inputs. 
The total feedforward input to neuron $i$ is thus defined as: + +![](images/6cc9d1d29946d8a1638f4ac34fe98d897087eab0d36eaf4851b08d88a7ffbbe7.jpg) +A + +![](images/bd2ee95dc4197fa1f6ed6b6ebd9fec6ec9295f37c0ea6edd2884c1cc778ca386.jpg) +B + +![](images/a1ec0d06e89bac2393f47738968c83adbd98bdcb6a32bc29b630d58343745a30.jpg) +C + +![](images/2e7469a01db6ed01c1da98964847d4c1bf68482f9f756cbfeaab5d48e1db3173.jpg) +D +Figure 2: A. Layers of the convolutional network trained on CIFAR-10 and the corresponding network of LIF neurons that undergoes RDD feedback training. Fully-connected feedforward weights (blue) and feedback weights (red) are shared between the two networks. Every training epoch consists of an RDD feedback training phase where feedback weights in the LIF net are updated (bold red arrows) and transferred to the convolutional net, and a regular training phase where feedforward weights in the convolutional net are updated (bold blue arrows) and transferred back to the LIF net. B. RDD feedback training protocol. Every $30\mathrm{s}$ , a different layer in the LIF network receives driving input and updates its feedback weights (red) using the RDD algorithm. C. Top: Sample voltage $(v_{i}$ , solid line) and input drive $(u_{i}$ , dashed line) traces. Whenever $v_{i}$ approaches the spiking threshold, an RDD window lasting $T$ ms is triggered. $u_{i}^{\max}$ is the maximum input drive during this window of time. Bottom: Feedback received at a synapse, $q_{k}$ . $q_{k}^{\mathrm{pre}}$ is the feedback signal at the start of an RDD window, while $q_{k}^{\mathrm{avg}}$ is the average of the feedback signal during the time window. D. Samples of $\Delta q_{k}^{\mathrm{avg}}$ vs. $u_{i}^{\max}$ are used to update a piece-wise linear function of $u_{i}^{\max}$ , and the causal effect $\beta_{ik}$ is defined as the difference of the left and right limits of the function at the spiking threshold. 
+ +$$ +I _ {i} := \left\{ \begin{array}{l l} \omega \tilde {p} _ {i} & \text {i f} \tilde {p} _ {i} > 0 \\ \sum_ {j = 1} ^ {m} W _ {i j} p _ {j} & \text {o t h e r w i s e} \end{array} \right. \tag {2} +$$ + +where $W_{ij}$ is the feedforward weight to neuron $i$ from neuron $j$ in the previous layer, and $\omega$ is a hyperparameter. The voltage of the neuron, $v_{i}$ , evolves as: + +$$ +\frac {d v _ {i}}{d t} = - g _ {L} v _ {i} + g _ {D} \left(I _ {i} - v _ {i}\right) \tag {3} +$$ + +where $g_{L}$ and $g_{D}$ are leak and dendritic conductance constants, respectively. The input drive to the neuron, $u_{i}$ , is similarly modeled: + +$$ +\frac {d u _ {i}}{d t} = - g _ {L} u _ {i} + g _ {D} \left(I _ {i} - u _ {i}\right) \tag {4} +$$ + +If the voltage $v_{i}$ passes a spiking threshold $\theta$ , the neuron spikes and the voltage is reset to a value $v_{\mathrm{reset}} = -1$ (Figure 2C). Note that the input drive does not reset. This helps us to perform regressions both above and below the spike threshold. + +In addition to feedforward inputs, spiking inputs from the downstream layer arrive at feedback synapses, where they create post-synaptic spike responses $\mathbf{q} = [q_1, q_2, \dots, q_n]$ . These responses are used in the causal effect estimation (see below). + +# 4.4 RDD ALGORITHM + +Whenever the voltage approaches the threshold $\theta$ (i.e. $|v_{i} - \theta| < \alpha$ where $\alpha$ is a constant), an RDD window is initiated, lasting $T = 30$ ms in simulated time (Figure 2C). At the end of this time window, at each feedback synapse, the maximum input drive during the RDD window, $u_{i}^{\mathrm{max}}$ , and the average change in feedback from downstream neuron $k$ during the RDD window, $\Delta q_{k}^{\mathrm{avg}}$ , are recorded. 
$\Delta q_{k}^{\mathrm{avg}}$ is defined as the difference between the average feedback received during the RDD window, $q_{k}^{\mathrm{avg}}$, and the feedback at the start of the RDD window, $q_{k}^{\mathrm{pre}}$:

$$
\Delta q_{k}^{\mathrm{avg}} := q_{k}^{\mathrm{avg}} - q_{k}^{\mathrm{pre}} \tag{5}
$$

Importantly, $u_{i}^{\mathrm{max}}$ provides a measure of how strongly neuron $i$ was driven by its inputs (and whether or not it passed the spiking threshold $\theta$), while $\Delta q_{k}^{\mathrm{avg}}$ is a measure of how the input received as feedback from neuron $k$ changed after neuron $i$ was driven close to its spiking threshold. These two values are then used to fit a piecewise linear model of $\Delta q_{k}^{\mathrm{avg}}$ as a function of $u_{i}^{\mathrm{max}}$ (Figure 2D). This piecewise linear model is defined as:

$$
f_{ik}(x) := \left\{ \begin{array}{ll} c_{ik}^{1} x + c_{ik}^{2} & \text{if } x < \theta \\ c_{ik}^{3} x + c_{ik}^{4} & \text{if } x \geq \theta \end{array} \right. \tag{6}
$$

The parameters $c_{ik}^{1}$, $c_{ik}^{2}$, $c_{ik}^{3}$ and $c_{ik}^{4}$ are updated to perform linear regression using gradient descent:

$$
L = \frac{1}{2} \left\| f_{ik}\left(u_{i}^{\max}\right) - \Delta q_{k}^{\mathrm{avg}} \right\|^{2} \tag{7}
$$

$$
\Delta c_{ik}^{l} \propto -\frac{\partial L}{\partial c_{ik}^{l}} \quad \text{for } l \in \{1, 2, 3, 4\} \tag{8}
$$

An estimate of the causal effect of neuron $i$ spiking on the activity of neuron $k$, $\beta_{ik}$, is then defined as the difference in the two sides of the piecewise linear function at the spiking threshold:

$$
\beta_{ik} := \lim_{x \rightarrow \theta^{+}} f_{ik}(x) - \lim_{x \rightarrow \theta^{-}} f_{ik}(x) \tag{9}
$$

Finally, the weight at the feedback synapse, $Y_{ik}$, is updated to be a scaled version of $\beta_{ik}$:

$$
Y_{ik} = \beta_{ik} \frac{\gamma}{\sigma_{\beta}^{2}} \tag{10}
$$

where $\gamma$ is a hyperparameter and $\sigma_{\beta}^{2}$ is the variance of $\beta$ values for all feedback synapses in the layer. This ensures that the scale of the full set of feedback weights between two layers in the network remains stable during training.

# 5 RESULTS

# 5.1 ALIGNMENT OF FEEDBACK AND FEEDFORWARD WEIGHTS

To measure how well the causal effect estimate at each feedback synapse, $\beta_{ik}$, and thus the feedback weight $Y_{ik}$, reflects the reciprocal feedforward weight $W_{ki}$, we can measure the percentage of feedback weights that have the same sign as the reciprocal feedforward weights (Figure 3A). When training on CIFAR-10 with no RDD feedback training phase (i.e., feedback weights remain fixed throughout training), the feedback alignment effect somewhat increases the sign alignment during training, but it is ineffective at aligning the signs of weights in earlier layers in the network.
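The estimator of Equations 5-10 can be sketched as follows: the piecewise-linear model of Equation 6 is trained by stochastic gradient descent on the loss of Equations 7-8, and the causal effect (Equation 9) is read off as the jump at the threshold. The synthetic samples, learning rate, and epoch count here are illustrative assumptions; a known jump of 0.5 is injected so the estimate can be checked:

```python
import numpy as np

theta = 1.0  # spiking threshold
eta = 0.05   # regression learning rate (illustrative)

def fit_piecewise(u_max, dq_avg, epochs=500):
    """Fit the piecewise-linear model of Equation 6 by SGD (Equations 7-8)."""
    c = np.zeros(4)  # c1, c2 below threshold; c3, c4 at/above threshold
    for _ in range(epochs):
        for x, y in zip(u_max, dq_avg):
            if x < theta:
                err = c[0] * x + c[1] - y  # residual of the loss, Equation 7
                c[0] -= eta * err * x      # gradient steps, Equation 8
                c[1] -= eta * err
            else:
                err = c[2] * x + c[3] - y
                c[2] -= eta * err * x
                c[3] -= eta * err
    return c

# Synthetic (u_i^max, dq_k^avg) samples with a known causal jump of 0.5
# injected at the threshold; purely illustrative data.
rng = np.random.default_rng(0)
u_max = rng.uniform(theta - 0.2, theta + 0.2, size=500)
dq_avg = 0.3 * u_max + 0.5 * (u_max >= theta) + 0.01 * rng.standard_normal(500)

c = fit_piecewise(u_max, dq_avg)
# Equation 9: the causal effect is the jump between the one-sided limits.
beta = (c[2] * theta + c[3]) - (c[0] * theta + c[1])
print(beta)  # close to 0.5, the injected jump
```

In the full algorithm, Equation 10 then scales each $\beta_{ik}$ by $\gamma / \sigma_{\beta}^{2}$ computed over all feedback synapses in the layer.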
Compared to feedback alignment, the addition of an RDD feedback training phase greatly increases the sign alignment between feedback and feedforward weights for all layers in the network, especially at earlier layers. In addition, the RDD algorithm increases sign alignment throughout the hierarchy more than the current state-of-the-art algorithm for weight alignment introduced recently by Akrout et al. (2019) (Figure 3A). Furthermore, RDD feedback training changes feedback weights to not only match the sign but also the magnitude of the reciprocal feedforward weights (Figure 3B), which makes it better for weight alignment than hard-wired sign symmetry (Moskovitz et al., 2018).

# 5.2 DESCENDING THE SYMMETRIC ALIGNMENT COST FUNCTION

The symmetric alignment cost function (Kunin et al., 2019) (Equation 1) can be broken down as:

$$
\mathcal{R}_{\mathrm{SA}} = \mathcal{R}_{\mathrm{decay}} + \mathcal{R}_{\mathrm{self}} \tag{11}
$$

where we define $\mathcal{R}_{\mathrm{decay}}$ and $\mathcal{R}_{\mathrm{self}}$ as:

$$
\mathcal{R}_{\mathrm{decay}} := \| W \|_{F}^{2} + \| Y \|_{F}^{2} \tag{12}
$$

$$
\mathcal{R}_{\mathrm{self}} := -2 \operatorname{tr}(W Y) \tag{13}
$$

$\mathcal{R}_{\mathrm{decay}}$ is simply a weight regularization term that can be minimized using techniques like weight decay. $\mathcal{R}_{\mathrm{self}}$, in contrast, measures how well aligned in direction the two matrices are. Our learning rule for feedback weights minimizes the $\mathcal{R}_{\mathrm{self}}$ term for weights throughout the network (Figure 4). By comparison, feedback alignment decreases $\mathcal{R}_{\mathrm{self}}$ to a smaller extent, and its ability to do so diminishes at earlier layers in the network. This helps to explain why our algorithm induces weight alignment, and can improve training performance (see below).
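The decomposition in Equations 11-13 is straightforward to compute directly; a minimal sketch (the random matrices below stand in for trained weights and are purely illustrative):

```python
import numpy as np

def symmetric_alignment(W, Y):
    """Equations 11-13: R_SA = R_decay + R_self for W of shape (n, m), Y of (m, n)."""
    r_decay = np.linalg.norm(W, 'fro') ** 2 + np.linalg.norm(Y, 'fro') ** 2
    r_self = -2.0 * np.trace(W @ Y)
    return r_decay + r_self, r_decay, r_self

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))

# Perfect symmetry (Y = W^T) makes R_SA vanish.
r_sa, _, _ = symmetric_alignment(W, W.T)
print(r_sa)  # 0 up to floating-point rounding

# And for any Y, R_SA equals the squared Frobenius distance of Equation 1.
Y = rng.standard_normal((3, 4))
r_sa, r_decay, r_self = symmetric_alignment(W, Y)
assert np.isclose(r_sa, np.linalg.norm(W - Y.T, 'fro') ** 2)
```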
+ +# 5.3 PERFORMANCE ON FASHION-MNIST, SVHN, CIFAR-10 AND VOC + +We trained the same network architecture (see Appendix A.3) on the Fashion-MNIST, SVHN, CIFAR-10 and VOC datasets using standard autograd techniques (backprop), feedback alignment and our RDD feedback training phase. RDD feedback training substantially improved the network's performance over feedback alignment, and led to backprop-level accuracy on the train and test sets (Figure 5). + +![](images/091dd27e7a61f6c22ba3d257fe7e6d8103e2ed80f0ab413af5d1fde905166d70.jpg) +A + +![](images/52a576971bc0f24847ab8e459e2341d1ca8440edf2c43ecbe4c7bdb87dfb6fff.jpg) +FC 2 + +![](images/832db38873469a771f640a7290ea6b70ee29dedff0e523b475c356f88c5bb765.jpg) +FC 3 + +![](images/b6e1663a029856ae4854236430e5377270d0cb51005708ee497768f1b987f91d.jpg) +B + +![](images/c16c3a74f65c090e39993fe8e222a4aaad6f518e505bce01d971efff602132b8.jpg) +FC 2 + +![](images/b248fcbe126c5d13c21e89d22ea232235e4a35a6031754329f95696190882b42.jpg) +FC3 + +![](images/af9ed96e0e6709bbf276bbca1a9bf71698593e57e6be2ddc4737d709286deb17.jpg) +Figure 3: A. Evolution of sign alignment (the percent of feedforward and feedback weights that have the same sign) for each fully-connected layer in the network when trained on CIFAR-10 using RDD feedback training (blue), using the algorithm proposed by Akrout et al. (2019) (purple), and using feedback alignment (red). B. Feedforward vs. feedback weights for each fully-connected layer at the end of training, with RDD feedback training (blue), the Akrout algorithm (purple), and feedback alignment (red). + +![](images/0464ac256b6d50270e7a50d694df1d8edbee6d194070a923a41b1fc08f4e221c.jpg) + +![](images/33eb4c1fea09bf27fc5454d075b04cdb27aedba98188fb298d657d3d8b198199.jpg) + +# 6 DISCUSSION + +In order to understand how the brain learns complex tasks that require coordinated plasticity across many layers of synaptic connections, it is important to consider the weight transport problem. 
Here, we presented an algorithm for updating feedback weights in a network of spiking neurons that takes advantage of the spiking discontinuity to estimate the causal effect between two neurons (Figure 2). We showed that this algorithm enforces weight alignment (Figure 3), and identified a loss function, $\mathcal{R}_{\mathrm{self}}$, that is minimized by our algorithm (Figure 4). Finally, we demonstrated that our algorithm

![](images/b64a5b0086437d13c7319aae0a92599ca242e179ec44f2ab3e7ac9c69fd6dd88.jpg)
FC1

![](images/ad2fe7f9b81c7a80c6ef911a7097926789bb998376930e1c78b16bf893b7054e.jpg)
FC2
Figure 4: Evolution of $\mathcal{R}_{\mathrm{self}}$ for each fully-connected layer in the network when trained on CIFAR-10 using RDD feedback training (solid lines) and using feedback alignment (dashed lines). RDD feedback training dramatically decreases this loss compared to feedback alignment, especially in earlier layers.

![](images/da28b927bd8a30d266d27ba30979ab3c980411398f9e72bb5eec10bfd62a36a7.jpg)
FC3

![](images/6a5c853c10e0b31abcaf556616be28f2bc8d4bf072e07e358c2cb34a18574bb4.jpg)
Fashion-MNIST

![](images/f78978fa0dc3b901b87fdc1c078cacc4414539ed31ece2c5596d3b70d658e3da.jpg)
SVHN

![](images/233c525c78aac755ce51f6ffb122dec0f0101589d35af6a60ce00151d2f37ecd.jpg)
CIFAR-10

![](images/94aedfb323c453dfeaaa31e3e95186790099b213cd339d60a3dee601ce35ecc6.jpg)
VOC

![](images/5bf28cb925e9c834ca28d5498c22ed187b7497e95431e0a0e6f52afee611318d.jpg)
Figure 5: Comparison of Fashion-MNIST, SVHN, CIFAR-10 and VOC train error (top row) and test error (bottom row). RDD feedback training substantially improves test error performance over feedback alignment across these learning tasks.
allows deep neural networks to achieve better learning performance than feedback alignment on Fashion-MNIST and CIFAR-10 (Figure 5). These results demonstrate the potential power of RDD as a means for solving the weight transport problem in biologically plausible deep learning models.

One aspect of our algorithm that is still biologically implausible is that it does not adhere to Dale's principle, which states that a neuron performs the same action on all of its target cells (Strata & Harvey). This means that a neuron's outgoing connections cannot include both positive and negative weights. However, even under this constraint, a neuron can have an excitatory effect on one downstream target and an inhibitory effect on another by activating intermediary inhibitory interneurons. Because our algorithm provides a causal estimate of one neuron's impact on another, it could in principle capture such polysynaptic effects, making it compatible with Dale's principle. Future work should test the effects of this algorithm when implemented in a network of neurons that are explicitly excitatory or inhibitory.

# REFERENCES

Mohamed Akrout, Collin Wilson, Peter C Humphreys, Timothy Lillicrap, and Douglas Tweed. Using weight mirrors to improve feedback alignment. arXiv preprint arXiv:1904.05391, 2019.
Joshua D Angrist and Jörn-Steffen Pischke. Mostly harmless econometrics: An empiricist's companion. Princeton University Press, 2008.
Sergey Bartunov, Adam Santoro, Blake Richards, Luke Marris, Geoffrey E Hinton, and Timothy Lillicrap. Assessing the scalability of biologically-motivated deep learning algorithms and architectures. In Advances in Neural Information Processing Systems, pp.
9368-9378, 2018.
Francis Crick. The recent excitement about neural networks. Nature, 337(6203):129-132, 1989. doi: 10.1038/337129a0. URL http://dx.doi.org/10.1038/337129a0.
Stephen Grossberg. Competitive learning: From interactive activation to adaptive resonance. Cognitive Science, 11(1):23-63, 1987. ISSN 1551-6709.
John F Kolen and Jordan B Pollack. Backpropagation without weight transport. In Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94), volume 3, pp. 1375-1380. IEEE, 1994.
Daniel Kunin, Jonathan M Bloom, Aleksandrina Goeva, and Cotton Seed. Loss landscapes of regularized linear autoencoders. arXiv preprint arXiv:1901.08168, 2019.
Benjamin James Lansdell and Konrad Paul Kording. Spiking allows neurons to estimate their causal effect. bioRxiv, pp. 253351, 2019.
Dong-Hyun Lee, Saizheng Zhang, Asja Fischer, and Yoshua Bengio. Difference target propagation. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 498-515. Springer, 2015.
Timothy P Lillicrap, Daniel Cownden, Douglas B Tweed, and Colin J Akerman. Random synaptic feedback weights support error backpropagation for deep learning. Nature Communications, 7:13276, 2016.
Ioana E Marinescu, Patrick N Lawlor, and Konrad P Kording. Quasi-experimental causality in neuroscience and behavioural research. Nature Human Behaviour, pp. 1, 2018.
Theodore H Moskovitz, Ashok Litwin-Kumar, and LF Abbott. Feedback alignment in deep convolutional networks. arXiv preprint arXiv:1812.06488, 2018.
Dhruva Venkita Raman, Adriana Perez Rotondo, and Timothy O'Leary. Fundamental bounds on learning performance in neural circuits. Proceedings of the National Academy of Sciences, 116(21):10537-10546, 2019. ISSN 0027-8424.
Pieter R Roelfsema and Arjen van Ooyen. Attention-gated reinforcement learning of internal representations for classification. Neural Computation, 17(10):2176-2214, 2005.
João Sacramento, Rui Ponte Costa, Yoshua Bengio, and Walter Senn.
Dendritic cortical microcircuits approximate the backpropagation algorithm. In Advances in Neural Information Processing Systems, pp. 8735-8746, 2018.
Benjamin Scellier and Yoshua Bengio. Equilibrium propagation: Bridging the gap between energy-based models and backpropagation. Frontiers in Computational Neuroscience, 11:24, 2017. ISSN 1662-5188. doi: 10.3389/fncom.2017.00024. URL http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5415673/.
Reza Shadmehr, Maurice A Smith, and John W Krakauer. Error correction, sensory prediction, and adaptation in motor control. Annual Review of Neuroscience, 33:89-108, 2010. ISSN 0147-006X.
Piergiorgio Strata and Robin Harvey. Dale's principle. Brain Research Bulletin, 50(5):349-350, 1999. ISSN 0361-9230. doi: 10.1016/S0361-9230(99)00100-8. URL http://www.sciencedirect.com/science/article/pii/S0361923099001008.
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.

# A APPENDIX

# A.1 LIF NEURON SIMULATION DETAILS

Post-synaptic spike responses at feedforward synapses, $\mathbf{p}$, were calculated from pre-synaptic binary spikes using an exponential kernel function $\kappa$:

$$
p_j(t) = \sum_k \kappa\left(t - \tilde{t}_{jk}\right) \tag{14}
$$

where $\tilde{t}_{jk}$ is the $k^{\mathrm{th}}$ spike time of input neuron $j$ and $\kappa$ is given by:

$$
\kappa(t) = \left(e^{-t/\tau_L} - e^{-t/\tau_s}\right) \Theta(t) / \left(\tau_L - \tau_s\right) \tag{15}
$$

where $\tau_s = 0.003\,\mathrm{s}$ and $\tau_L = 0.01\,\mathrm{s}$ are short and long time constants, and $\Theta$ is the Heaviside step function. Post-synaptic spike responses at feedback synapses, $\mathbf{q}$, were computed in the same way.
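As an illustration, the kernel of equations 14 and 15 can be sketched in NumPy. The time constants come from the text; the function names and the vectorization over time are our own choices:

```python
import numpy as np

TAU_S = 0.003  # short time constant tau_s (s), from the text
TAU_L = 0.01   # long time constant tau_L (s), from the text

def kappa(t):
    """Exponential difference kernel of equation 15, zero for t < 0."""
    t = np.asarray(t, dtype=float)
    heaviside = (t >= 0).astype(float)
    return (np.exp(-t / TAU_L) - np.exp(-t / TAU_S)) * heaviside / (TAU_L - TAU_S)

def spike_response(t, spike_times):
    """Post-synaptic response p_j(t) of equation 14: a sum of kernels over spikes."""
    return sum(kappa(t - tk) for tk in spike_times)
```

Note that the kernel vanishes at $t = 0$ (both exponentials equal one) and rises to a positive peak before decaying, so each spike contributes a smooth, delayed bump to the response.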
# A.2 RDD FEEDBACK TRAINING IMPLEMENTATION

# A.2.1 WEIGHT SCALING

Weights were shared between the convolutional network and the network of LIF neurons, but feedforward weights in the LIF network were scaled versions of the convolutional network weights:

$$
W_{ij}^{\mathrm{LIF}} = \psi m W_{ij}^{\mathrm{Conv}} / \sigma_{W^{\mathrm{Conv}}}^2 \tag{16}
$$

where $W^{\mathrm{Conv}}$ is a feedforward weight matrix in the convolutional network, $W^{\mathrm{LIF}}$ is the corresponding weight matrix in the LIF network, $m$ is the number of units in the upstream layer (i.e. the number of columns in $W^{\mathrm{Conv}}$), $\sigma_{W^{\mathrm{Conv}}}^2$ is the variance of $W^{\mathrm{Conv}}$ and $\psi$ is a hyperparameter. This rescaling ensures that spike rates in the LIF network stay within an optimal range for the RDD algorithm to converge quickly, even if the scale of the feedforward weights in the convolutional network changes during training. This avoids situations where the scale of feedforward weights is so small that little or no spiking occurs in the LIF neurons.

# A.2.2 FEEDBACK TRAINING PARADIGM

The RDD feedback training paradigm is implemented as follows. We start by providing driving input to the first layer in the network of LIF neurons. To create this driving input, we choose a subset of $20\%$ of the neurons in that layer and create a unique input spike train for each of these neurons using a Poisson process with a rate of $200\,\mathrm{Hz}$. All other neurons in the layer receive no driving input. Every $100\,\mathrm{ms}$, a new set of neurons to receive driving input is randomly chosen. After $30\,\mathrm{s}$, this layer stops receiving driving input, and the process repeats for the next layer in the network.

# A.3 NETWORK AND TRAINING DETAILS

The network architectures used to train on Fashion-MNIST, SVHN, CIFAR-10 and VOC are described in Table 1.
Inputs were randomly cropped and flipped during training, and batch normalization was used at each layer. Networks were trained using a minibatch size of 32.

# A.4 AKROUT ET AL. (2019) ALGORITHM IMPLEMENTATION

In experiments that compared sign alignment using our RDD algorithm with the Akrout et al. (2019) algorithm, we kept the same RDD feedback training paradigm (i.e. layers were sequentially driven, and a small subset of neurons in each layer was active at once). However, rather than updating feedback weights using RDD, we recorded the mean firing rates of the active neurons in the upstream
| Layer | Fashion-MNIST | SVHN & CIFAR-10 | VOC |
|---|---|---|---|
| Input | 28 × 28 × 1 | 32 × 32 × 3 | 32 × 32 × 3 |
| 1 | Conv2D 5 × 5, 64, ReLU | Conv2D 5 × 5, 64, ReLU | Conv2D 5 × 5, 64, ReLU |
| 2 | MaxPool 2 × 2, stride 2 | MaxPool 2 × 2, stride 2 | MaxPool 2 × 2, stride 2 |
| 3 | Conv2D 5 × 5, 64, ReLU | Conv2D 5 × 5, 64, ReLU | Conv2D 5 × 5, 64, ReLU |
| 4 | MaxPool 2 × 2, stride 2 | MaxPool 2 × 2, stride 2 | MaxPool 2 × 2, stride 2 |
| 5 | FC 384, ReLU | FC 384, ReLU | FC 384, ReLU |
| 6 | FC 192, ReLU | FC 192, ReLU | FC 192, ReLU |
| 7 | FC 10, ReLU | FC 10, ReLU | FC 21, ReLU |
+ +Table 1: Network architectures used to train on Fashion-MNIST, SVHN, CIFAR-10 and VOC. + +layer, $\mathbf{r}^l$ , and the mean firing rates in the downstream layer, $\mathbf{r}^{l + 1}$ . We then used the following feedback weight update rule: + +$$ +\Delta \mathbf {Y} = \eta \mathbf {r} ^ {l} \mathbf {r} ^ {(l + 1) ^ {T}} - \lambda_ {\mathrm {W D}} \mathbf {Y} \tag {17} +$$ + +where $Y$ are the feedback weights between layers $l + 1$ and $l$ , and $\eta$ and $\lambda_{\mathrm{WD}}$ are learning rate and weight decay hyperparameters, respectively. + +![](images/f8a43524da3500dafe69b159a2a16ebec99e7c1a1efb2a113179eacad69b2b99.jpg) +Figure S1: Comparison of average spike rates in the fully-connected layers of the LIF network vs. activities of the same layers in the convolutional network, when both sets of layers were fed the same input. Spike rates in the LIF network are largely correlated with activities of units in the convolutional network. + +![](images/6b3d96db0f78537314615d37b23e413744e83c525b019d96b3c7eef4da35350b.jpg) + +![](images/6aaf13b3902867324dabbfefcb0cbdbe4fc0a9711ce45d10deb765306d43f461.jpg) \ No newline at end of file diff --git a/spikebasedcausalinferenceforweightalignment/images.zip b/spikebasedcausalinferenceforweightalignment/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..9eb89ccfb9a759068f1c8e1ea540bfa729b2b44a --- /dev/null +++ b/spikebasedcausalinferenceforweightalignment/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bc400d421f2839abc6d9ff02f2523ee237a0603cc597c4df7019bd5a9d5d935f +size 482409 diff --git a/spikebasedcausalinferenceforweightalignment/layout.json b/spikebasedcausalinferenceforweightalignment/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..ee53f3fd9e719ad754d46040f56a2e317be7ed32 --- /dev/null +++ b/spikebasedcausalinferenceforweightalignment/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:031d458d16abcb63ae345d5c54ce5b84b9d777927ccf7612964a00672de7df07 +size 400906 diff --git a/spikegradanannequivalentcomputationmodelforimplementingbackpropagationwithspikes/115e2f66-7356-458e-9af5-93ab025579b5_content_list.json b/spikegradanannequivalentcomputationmodelforimplementingbackpropagationwithspikes/115e2f66-7356-458e-9af5-93ab025579b5_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..4393015fbe2d5a20ae6f2afdfd6a225981860aaa --- /dev/null +++ b/spikegradanannequivalentcomputationmodelforimplementingbackpropagationwithspikes/115e2f66-7356-458e-9af5-93ab025579b5_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f6694f343ddfcc10ab0b8b0acc35d4b9fc23367b702cac63214f497c3b27f13b +size 85639 diff --git a/spikegradanannequivalentcomputationmodelforimplementingbackpropagationwithspikes/115e2f66-7356-458e-9af5-93ab025579b5_model.json b/spikegradanannequivalentcomputationmodelforimplementingbackpropagationwithspikes/115e2f66-7356-458e-9af5-93ab025579b5_model.json new file mode 100644 index 0000000000000000000000000000000000000000..2d6f7de4b4f057483b9d1d312c76ff36017ea7ed --- /dev/null +++ b/spikegradanannequivalentcomputationmodelforimplementingbackpropagationwithspikes/115e2f66-7356-458e-9af5-93ab025579b5_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5260d1432ae892f406cec26fce1b83f2ed91c85e081e20a83fb72683234b4dc1 +size 101047 diff --git a/spikegradanannequivalentcomputationmodelforimplementingbackpropagationwithspikes/115e2f66-7356-458e-9af5-93ab025579b5_origin.pdf b/spikegradanannequivalentcomputationmodelforimplementingbackpropagationwithspikes/115e2f66-7356-458e-9af5-93ab025579b5_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..44bf70a9c41d2e364e59af3e40678a18ca1c258b --- /dev/null +++ 
b/spikegradanannequivalentcomputationmodelforimplementingbackpropagationwithspikes/115e2f66-7356-458e-9af5-93ab025579b5_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:27749cce62a7e87fc01767324a0b3bb31e4003124ae462f1d130f3864199be00 +size 574689 diff --git a/spikegradanannequivalentcomputationmodelforimplementingbackpropagationwithspikes/full.md b/spikegradanannequivalentcomputationmodelforimplementingbackpropagationwithspikes/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a2275263ce6a9ef3ee3c84dcf4adfd5b74f6daf6 --- /dev/null +++ b/spikegradanannequivalentcomputationmodelforimplementingbackpropagationwithspikes/full.md @@ -0,0 +1,419 @@ +# SPIKEGRAD: AN ANN-EQUIVALENT COMPUTATION MODEL FOR IMPLEMENTING BACK PROPAGATION WITH SPIKES + +Johannes C. Thiele, Olivier Bichler & Antoine Dupret +CEA, LIST + +91191 Gif-sur-Yvette, France +{johannes.thiele, olivier.bichler, antoine.dupret}@cea.fr + +# ABSTRACT + +Event-based neuromorphic systems promise to reduce the energy consumption of deep neural networks by replacing expensive floating point operations on dense matrices by low energy, sparse operations on spike events. While these systems can be trained increasingly well using approximations of the backpropagation algorithm, this usually requires high precision errors and is therefore incompatible with the typical communication infrastructure of neuromorphic circuits. In this work, we analyze how the gradient can be discretized into spike events when training a spiking neural network. To accelerate our simulation, we show that using a special implementation of the integrate-and-fire neuron allows us to describe the accumulated activations and errors of the spiking neural network in terms of an equivalent artificial neural network, allowing us to largely speed up training compared to an explicit simulation of all spike events. 
This way we are able to demonstrate that, even for deep networks, the gradients can be discretized sufficiently well with spikes if the gradient is properly rescaled. This form of spike-based backpropagation enables us to achieve equivalent or better accuracies on the MNIST and CIFAR10 datasets than comparable state-of-the-art spiking neural networks trained with full precision gradients. The algorithm, which we call SpikeGrad, is based on accumulation and comparison operations and can naturally exploit sparsity in the gradient computation, which makes it an interesting choice for spiking neuromorphic systems with on-chip learning capabilities.

# 1 INTRODUCTION

Spiking neural networks (SNNs) are a new generation of artificial neural network models (Maass, 1997) that try to harness properties of biological neurons to build energy-efficient spiking neuromorphic systems. Processing in traditional artificial neural networks (ANNs) is based on parallel processing of operations on dense tensors of fixed length. In contrast, spiking neuromorphic systems communicate with asynchronous events, which allows dynamic, data-dependent computation that can exploit high temporal and spatial sparsity.

Recent years have seen a large number of approaches devoted to the optimization of spiking neural networks with the backpropagation algorithm, either by converting ANNs to SNNs (Diehl et al., 2015; Esser et al., 2016; Rueckauer et al., 2017; Sengupta et al., 2019) or by simulating spikes explicitly in the forward pass and optimizing these dynamics with floating-point gradients (Lee et al., 2016; Yin et al., 2017; Wu et al., 2018b;c; Severa et al., 2019; Jin et al., 2018; Bellec et al., 2018; Zenke & Ganguli, 2018; Shrestha & Orchard, 2018). These methods aim to optimize SNNs for efficient inference, and backpropagation is performed offline on a standard computing system.
It would however be desirable to also enable on-chip learning in neuromorphic chips using the power of the backpropagation algorithm, and to maintain the advantages of spike-based processing in the error propagation phase as well.

Previous work on the implementation of backpropagation with spikes is mostly concerned with biological plausibility. A non-spiking version of biologically inspired backpropagation is presented by Sacramento et al. (2018). Guerguiev et al. (2017), Neftci et al. (2017) and Samadi et al. (2017) introduce spike-based versions of the backpropagation algorithm using variants of (direct) feedback alignment (Lillicrap et al., 2016; Nøkland, 2016). The exact backpropagation algorithm, which backpropagates through symmetric weights, might however be required to achieve good inference performance on large-scale deep neural networks (Baldi & Sadowski, 2016; Bartunov et al., 2018). O'Connor & Welling (2016) and Thiele et al. (2019) present implementations of standard backpropagation where the gradient is coded into spikes and propagated through symmetric weights. In the same spirit, our work is mostly concerned with exploiting spike-based information encoding for energy-efficient processing, which means that inference performance and operational simplicity are preferred over biological plausibility and complex neuron models.

We demonstrate how backpropagation can be seamlessly integrated into the spiking neural network framework by using a second accumulation compartment that discretizes the error into spikes. By additionally weighting the activity counters by the learning rate, we obtain a system that is able to perform learning and inference based on accumulations and comparisons alone. As for the forward pass, this allows us to use the dynamic precision computation provided by the discretization of all operations into spike events, and to exploit the sparsity of the gradient. Using reasoning similar to that applied by Binas et al. (2016) and Wu et al. (2019) to forward propagation in SNNs, we show that the system obtained in this way can be mapped to an integer-activation ANN whose activations are equivalent to the accumulated neuron responses for both the forward and the backward propagation phase. This allows us to simulate training of large-scale SNNs efficiently on graphics processing units (GPUs), using their equivalent ANN. Additionally, in contrast to conversion methods that approximate pre-trained ANNs with SNNs, this method guarantees that the inference precision of the SNN will be equivalent to that of the ANN. In contrast to O'Connor & Welling (2016), this holds for any number of spikes and arbitrary spike order. We demonstrate classification accuracies equivalent or superior to existing implementations of SNNs trained with full precision gradients, and comparable to the accuracy of standard ANNs using similar topologies. This is the first time competitive classification performance has been reported on the CIFAR10 and CIFAR100 datasets using a large-scale SNN where both training and inference are fully implemented with spikes. To the best of our knowledge, our work also provides the first demonstration of how the sparsity of the gradient during backpropagation could be exploited within a large-scale SNN processing structure.

# 2 THE SpikeGrad ALGORITHM

We begin with the description of SpikeGrad, the spike-based backpropagation algorithm. The algorithm mainly consists of special implementations of the integrate-and-fire (IF) neuron model that are used to encode and propagate information in the forward and backward pass.

For each training example/mini-batch, integration is performed from $t = 0$ to $t = T$ for the forward pass and from $t = T + \Delta t$ to $t = \mathcal{T}$ in the backward pass.
Since no explicit time is used in the algorithm, $\Delta t$ symbolically represents the (very short) time between the arrival of an incoming spike and the response of the neuron, and is only used here to describe causality.

Spike discretization of the forward pass For forward propagation, the architecture is described by multiple layers (labeled by $l \in [0, L]$) of IF neurons with integration variable $V_i^l(t)$ and threshold $\Theta_{\mathrm{ff}}$:

$$
V_i^l(t + \Delta t) = V_i^l(t) - \Theta_{\mathrm{ff}} s_i^l(t) + \sum_j w_{ij}^l s_j^{l-1}(t), \quad V_i^l(0) = b_i^l. \tag{1}
$$

The variable $w_{ij}^l$ is the weight and $b_i^l$ a bias value. The spike activation function $s_i^l(t) \in \{-1, 0, 1\}$ triggers a signed spike event depending on the internal variables of the neuron. It will be shown later that the specific choice of the activation function is fundamental for the mapping to an equivalent ANN. After a neuron has fired, its integration variable is decremented or incremented by the threshold value $\Theta_{\mathrm{ff}}$, which is represented by the second term on the r.h.s. of equation 1.

As a representation of the neuron activity, we use a trace $x_i^l(t)$ which accumulates spike information over a single example:

$$
x_i^l(t + \Delta t) = x_i^l(t) + \eta s_i^l(t). \tag{2}
$$

By weighting the activity with the learning rate $\eta$, we avoid performing a multiplication with the learning rate in the weight update (equation 8).

Implementation of implicit ReLU and surrogate activation function derivative It is possible to define an implicit activation function based on how the neuron variables affect the spike activation function $s_i^l(t)$.
In our implementation, we use the following fully symmetric function to represent linear activation functions (used for instance in pooling layers):

$$
s_i^{l,\mathrm{lin}}\left(V_i^l(t)\right) := \begin{cases} 1 & \text{if } V_i^l(t) \geq \Theta_{\mathrm{ff}} \\ -1 & \text{if } V_i^l(t) \leq -\Theta_{\mathrm{ff}} \\ 0 & \text{otherwise} \end{cases} \tag{3}
$$

The following function corresponds to the rectified linear unit (ReLU) activation function:

$$
s_i^{l,\mathrm{ReLU}}\left(V_i^l(t), x_i^l(t)\right) := \begin{cases} 1 & \text{if } V_i^l(t) \geq \Theta_{\mathrm{ff}} \\ -1 & \text{if } V_i^l(t) \leq -\Theta_{\mathrm{ff}} \text{ and } x_i^l(t) > 0 \\ 0 & \text{otherwise} \end{cases} \tag{4}
$$

The pseudo-derivative of the activation function is denoted symbolically by $S_i'^l$. We use $S_i'^{l,\mathrm{lin}}(T) = 1$ for the linear case. For the ReLU, we use a surrogate of the form:

$$
S_i'^{l,\mathrm{ReLU}}(T) := \begin{cases} 1 & \text{if } V_i^l(T) > 0 \text{ or } x_i^l(T) > 0 \\ 0 & \text{otherwise} \end{cases} \tag{5}
$$

These choices will be motivated in the following sections. Note that the derivatives depend only on the final states of the neurons at time $T$.

Discretization of the error into spikes For gradient backpropagation, we introduce a second compartment with threshold $\Theta_{\mathrm{bp}}$ in each neuron, which integrates error signals from higher layers. This process discretizes errors in the same fashion as the forward pass discretizes an input signal into a sequence of signed spike events:

$$
U_i^l(t + \Delta t) = U_i^l(t) - \Theta_{\mathrm{bp}} z_i^l(t) + \sum_k w_{ki}^{l+1} \delta_k^{l+1}(t). \tag{6}
$$

To this end, we introduce a ternary error spike activation function $z_i^l(t) \in \{-1, 0, 1\}$ which is defined in analogy to equation 3, using the error integration variable $U_i^l(t)$ and the backpropagation threshold $\Theta_{\mathrm{bp}}$. The error is then obtained by gating this ternarized variable $z_i^l(t)$ with one of the surrogate activation function derivatives of the previous section (linear or ReLU):

$$
\delta_i^l(t) = z_i^l(t)\, S_i'^l(T). \tag{7}
$$

This ternary spike signal is backpropagated through the weights to the lower layers and is also applied in the update rule of the weight increment accumulator $\omega_{ij}^l$:

$$
\omega_{ij}^l(t + \Delta t) = \omega_{ij}^l(t) - \delta_i^l(t)\, x_j^{l-1}(T), \tag{8}
$$

which is triggered every time an error spike (equation 7) is backpropagated. The weight updates are accumulated during error propagation and applied after propagation has finished, so that all weights are updated simultaneously. In this way, the backpropagation of errors and the weight update, exactly like forward propagation, only involve additions and comparisons of floating-point numbers.

The SpikeGrad algorithm can also be expressed in an event-based formulation, described in algorithms 1, 2 and 3. This formulation is closer to how the algorithm would be implemented in an actual SNN hardware implementation of the IF firing dynamics.

Loss function and error scale We use the cross-entropy loss function in the final layer, applied to the softmax of the total integration $V_i^L(T)$ (no spikes are triggered in the top layer during inference). This requires more complex operations than accumulations, but the cost is negligible if the number of classes is small.
To make sure that sufficient error spikes are triggered in the top layer, and that error spikes arrive even in the lowest layer of the network, we apply a scaling factor $\alpha$ to the error values before transferring them to $U_i^L$. This scaling factor also implicitly sets the precision of the gradient, since a higher number of spikes means that a larger range of values can be represented. To counteract the relative increase of the gradient scale, the learning rates have to be rescaled by a factor $1/\alpha$.

Input encoding As pointed out in Rueckauer et al. (2017) and Wu et al. (2018c), it is crucial to maintain the full precision of the input image to obtain good performance on complex standard benchmarks with SNNs. One possibility is to encode the input in a large number of spikes (Sengupta et al., 2019). Another possibility, which has been shown to require a much lower number of spikes in the network, is to multiply the input values directly with the weights of the first layer (just like in a standard ANN). The drawback is that the first layer then requires multiplication operations. The additional cost of this procedure may however be negligible if all other layers can profit from spike-based computation. This problem does not exist for stimuli which are natively encoded in spikes.
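As a concrete illustration of equations 3-5, the ternary spike activation functions and the surrogate ReLU derivative can be sketched in Python. The threshold value and function names here are our own illustrative choices, not part of the original formulation:

```python
THETA_FF = 1.0  # feedforward threshold (illustrative value)

def s_lin(V):
    """Equation 3: fully symmetric (linear) ternary spike activation."""
    if V >= THETA_FF:
        return 1
    if V <= -THETA_FF:
        return -1
    return 0

def s_relu(V, x):
    """Equation 4: ReLU-like activation; negative spikes fire only while the trace x is positive."""
    if V >= THETA_FF:
        return 1
    if V <= -THETA_FF and x > 0:
        return -1
    return 0

def s_prime_relu(V_T, x_T):
    """Equation 5: surrogate ReLU derivative, computed from the final state at time T."""
    return 1 if (V_T > 0 or x_T > 0) else 0
```

The asymmetry between `s_lin` and `s_relu` is the key point: a neuron with a zero trace cannot emit negative spikes, which is what keeps the accumulated output non-negative, as a ReLU requires.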
# Algorithm 1 Forward

    function PROPAGATE([l, i, j], s)
        V_i^l ← V_i^l + s · w_ij^l
        s_i^l ← s_i^l(V_i^l, x_i^l)                ▷ spike activation function
        if s_i^l ≠ 0 then
            V_i^l ← V_i^l − s_i^l · Θ_ff
            x_i^l ← x_i^l + η · s_i^l
            for k in layer l+1 connected to i do
                PROPAGATE([l+1, k, i], s_i^l)

# Algorithm 2 Backward

    function BACKPROPAGATE([l, i, k], δ)
        U_i^l ← U_i^l + δ · w_ki^(l+1)
        z_i^l ← z_i^l(U_i^l)                       ▷ error activation function
        δ_i^l ← z_i^l · S′_i^l
        if z_i^l ≠ 0 then
            U_i^l ← U_i^l − z_i^l · Θ_bp
            for j in layer l−1 connected to i do
                BACKPROPAGATE([l−1, j, i], δ_i^l)
                ω_ij^l ← ω_ij^l − δ_i^l · x_j^(l−1)

# Algorithm 3 Training of single example/batch

    init: V ← b, U ← 0, x ← 0, ω ← 0               ▷ variables in bold describe all neurons in network/layer
    while input spikes s_i^in do                   ▷ spikes corresponding to training input
        for k in l = 0 receiving s_i^in do
            PROPAGATE([0, k, i], s_i^in)
    S′ ← S′(V, x)                                  ▷ calculate surrogate derivatives
    U^L ← α · ∂L/∂V^L                              ▷ calculate classification error
    for i in l = L do
        while |U_i^L| ≥ Θ_bp do                    ▷ backpropagate error spikes; last layer receives no error
            BACKPROPAGATE([L, i, −], 0)
    w ← w + ω                                      ▷ update weights with weight update accumulator

# 3 FORMULATION OF THE EQUIVALENT ANN

The simulation of the temporal dynamics of spikes in SpikeGrad requires a large number of time steps or events if activation or error values are large. It would therefore be extremely beneficial if we were able to map the SNN to an equivalent ANN that can be trained much faster on standard hardware. In this section, we demonstrate that it is possible to find such an ANN using the forward and backward propagation dynamics of the SpikeGrad IF neuron model described in the previous section.

Spike discretization error We start our analysis with equation 1. We reorder the terms and sum over the increments $\Delta V_i^l(t) = V_i^l(t + \Delta t) - V_i^l(t)$ every time the integration variable is changed, either by a spike that arrives at time $t_j^s \in [0, T]$ via connection $j$, or by a spike that is triggered at time $t_i^s \in [0, T]$. With the initial conditions $V_i^l(0) = b_i^l$, $s_i^l(0) = 0$, we obtain the final value $V_i^l(T)$:

$$
V_i^l(T) = \sum_{t_j^s, t_i^s} \Delta V_i^l = -\Theta_{\mathrm{ff}} \sum_{t_i^s} s_i^l\left(t_i^s\right) + \sum_j w_{ij}^l \sum_{t_j^s} s_j^{l-1}\left(t_j^s\right) + b_i^l \tag{9}
$$

By defining the total transmitted output of a neuron as $S_i^l := \sum_{t_i^s} s_i^l(t_i^s)$ we obtain:

$$
\frac{1}{\Theta_{\mathrm{ff}}} V_i^l(T) = \mathbb{S}_i^l - S_i^l, \quad \mathbb{S}_i^l := \frac{1}{\Theta_{\mathrm{ff}}} \left(\sum_j w_{ij}^l S_j^{l-1} + b_i^l\right) \tag{10}
$$

The same reasoning can be applied to backpropagation of the gradient.
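Before turning to the backward pass, the forward-pass identity of equation 10 can be checked numerically with a minimal simulation of a single IF neuron with the linear activation of equation 3. The seed, weights, and threshold below are arbitrary illustrative values of our own choosing:

```python
import numpy as np

THETA_FF = 1.0                       # feedforward threshold (illustrative)
rng = np.random.default_rng(0)

w = rng.normal(0.0, 0.4, size=5)     # weights of one IF neuron (illustrative)
b = 0.2                              # bias b_i^l
# input spike trains: each time step carries a signed spike (-1, 0 or 1) per input
spikes_in = rng.integers(-1, 2, size=(40, 5))

V = b          # integration variable, V(0) = b (equation 1)
S_out = 0      # accumulated signed output spikes S_i^l
for s_t in spikes_in:
    V += float(w @ s_t)
    # linear spike activation (equation 3), applied until V is back inside (-Θ, Θ)
    while abs(V) >= THETA_FF:
        s = 1 if V >= THETA_FF else -1
        V -= s * THETA_FF
        S_out += s

# equivalent ANN response (the term 𝕊 in equation 10) on the accumulated inputs
S_in = spikes_in.sum(axis=0)
S_bb = (float(w @ S_in) + b) / THETA_FF

sde = S_bb - S_out                       # spike discretization error SDE_ff
assert abs(sde) < 1.0                    # bound |SDE_ff| < 1 from the text
assert abs(sde - V / THETA_FF) < 1e-9    # residual integration equals the SDE
```

The two assertions hold for any spike order: the bookkeeping of equation 1 guarantees that $\Theta_{\mathrm{ff}} S_i^l + V_i^l(T)$ equals the total weighted input plus bias, which is exactly $\Theta_{\mathrm{ff}} \mathbb{S}_i^l$.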
We define the summed responses over error spike times $\tau_i^s \in [T + \Delta t, \mathcal{T}]$ as $Z_i^l := \sum_{\tau_i^s} z_i^l(\tau_i^s)$ to obtain:

$$
\frac{1}{\Theta_{\mathrm{bp}}} U_i^l(\mathcal{T}) = \mathbb{Z}_i^l - Z_i^l, \quad \mathbb{Z}_i^l := \frac{1}{\Theta_{\mathrm{bp}}} \left(\sum_k w_{ki}^{l+1} E_k^{l+1}\right) \tag{11}
$$

$$
E_k^{l+1} = \sum_{\tau_k^s} \delta_k^{l+1}\left(\tau_k^s\right) = \sum_{\tau_k^s} S_k'^{l+1}(T)\, z_k^{l+1}\left(\tau_k^s\right) = S_k'^{l+1}(T)\, Z_k^{l+1}. \tag{12}
$$

In both equation 10 and equation 11, the terms $\mathbb{S}_i^l$ and $\mathbb{Z}_i^l$ are equivalent to the output of an ANN with signed integer inputs $S_j^{l-1}$ and $E_k^{l+1}$. $1/\Theta_{\mathrm{ff}}$ and $1/\Theta_{\mathrm{bp}}$ are implicit scaling factors of activations and gradients. If gradients are not to be explicitly rescaled, backpropagation requires $\Theta_{\mathrm{bp}} = \Theta_{\mathrm{ff}}$. The values of the residual integrations $\frac{1}{\Theta_{\mathrm{ff}}} V_i^l(T)$ and $\frac{1}{\Theta_{\mathrm{bp}}} U_i^l(\mathcal{T})$ therefore represent the spike discretization error $\mathrm{SDE}_{\mathrm{ff}} := \mathbb{S}_i^l - S_i^l$ or $\mathrm{SDE}_{\mathrm{bp}} := \mathbb{Z}_i^l - Z_i^l$ between the ANN outputs $\mathbb{S}_i^l$ and $\mathbb{Z}_i^l$ and the accumulated SNN outputs $S_i^l$ and $Z_i^l$. Since we know that $V_i^l(T) \in (-\Theta_{\mathrm{ff}}, \Theta_{\mathrm{ff}})$ and $U_i^l(\mathcal{T}) \in (-\Theta_{\mathrm{bp}}, \Theta_{\mathrm{bp}})$, this gives bounds of $|\mathrm{SDE}_{\mathrm{ff}}| < 1$ and $|\mathrm{SDE}_{\mathrm{bp}}| < 1$.

So far we can only represent linear functions. We now consider an implementation where the ANN applies a ReLU activation function instead.
The SDE in this case is:

$$
\mathrm{SDE}_{\mathrm{ff}}^{\mathrm{ReLU}} := \operatorname{ReLU}\left(\mathbb{S}_i^l\right) - S_i^l. \tag{13}
$$

We can calculate the error by considering that equation 4 forces the neuron into one of two regimes (note that $x_{i}^{l} > 0 \Leftrightarrow S_{i}^{l} > 0$): In one case, $S_{i}^{l} = 0$, $V_{i}^{l}(T) < \Theta_{\mathrm{ff}}$ (this includes $V_{i}^{l}(T) \leq -\Theta_{\mathrm{ff}}$). This implies $\mathbb{S}_{i}^{l} = 1/\Theta_{\mathrm{ff}}\, V_{i}^{l}(T)$ and therefore $|\mathrm{SDE}_{\mathrm{ff}}^{\mathrm{ReLU}}| < 1$ (or even $|\mathrm{SDE}_{\mathrm{ff}}^{\mathrm{ReLU}}| = 0$ if $V_{i}^{l}(T) \leq 0$). In the other case, $S_{i}^{l} > 0$, $V_{i}^{l}(t) \in (-\Theta_{\mathrm{ff}}, \Theta_{\mathrm{ff}})$, where equation 4 is equivalent to equation 3.

This equivalence motivates the choice of equation 5 as a surrogate derivative for the SNN: the condition ($V_{i}^{l}(T) > 0$ or $x_{i}^{l}(T) > 0$) can be seen to be equivalent to $\mathbb{S}_i^l > 0$, which defines the derivative of a ReLU. Finally, for the total weight increment $\Delta w_{ij}^{l}$, it can be seen from equation 2 and equation 8 that:

$$
x_i^l(T) = \sum_{t_i^s} \Delta x_i^l\left(t_i^s\right) = \eta S_i^l \quad \Rightarrow \quad \Delta w_{ij}^l(\mathcal{T}) = \sum_{\tau_i^s} \Delta \omega_{ij}^l\left(\tau_i^s\right) = -\eta S_j^{l-1} E_i^l, \tag{14}
$$

which is exactly the weight update formula of an ANN defined on the accumulated variables. We have therefore demonstrated that the SNN can be represented by an ANN by replacing all $S$ and $Z$ by $\mathbb{S}$ and $\mathbb{Z}$ and applying the corresponding activation function directly on these variables. The error that will be caused by this substitution, compared to using the accumulated variables $S$ and $Z$ of an SNN, is described by the SDE.
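The forward-pass equivalence (equations 9, 10 and 13) can be checked with a small event-driven simulation. The sketch below is an illustrative Python reimplementation (not the paper's CUDA/C++ code), assuming $\Theta_{\mathrm{ff}} = 1$ and zero bias: it drives a single IF neuron with random signed input spikes, accumulates its output $S_i^l$, and compares it with the ReLU output of the equivalent ANN computed from the accumulated inputs.

```python
import random

def if_neuron(weights, events, theta=1.0, bias=0.0):
    """Event-driven IF neuron with signed spikes.

    events: list of (input_index, sign) pairs, sign in {+1, -1}.
    Returns the accumulated output count S and residual potential V(T).
    S is kept non-negative (ReLU-like regime, cf. equation 4).
    """
    V, S = bias, 0
    for j, sign in events:
        V += sign * weights[j]
        while V >= theta:             # emit a +1 spike, subtract threshold
            V -= theta
            S += 1
        while V <= -theta and S > 0:  # emit a -1 spike only while S > 0
            V += theta
            S -= 1
    return S, V

random.seed(0)
w = [random.uniform(-0.2, 0.2) for _ in range(8)]
events = [(random.randrange(8), random.choice((1, -1))) for _ in range(200)]

S_snn, V_T = if_neuron(w, events)

# Equivalent ANN input: accumulated signed spike counts S_j^{l-1}
S_in = [0] * 8
for j, sign in events:
    S_in[j] += sign
S_ann = max(0.0, sum(wj * sj for wj, sj in zip(w, S_in)))  # ReLU(S_i^l)

print(S_snn, S_ann, S_ann - S_snn)
assert abs(S_ann - S_snn) < 1.0  # |SDE_ff^ReLU| < 1, equation 13
```

The final assertion holds for any input sequence, since the residual potential stays strictly inside $(-\Theta_{\mathrm{ff}}, \Theta_{\mathrm{ff}})$ whenever $S_i^l > 0$.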
This ANN can now be used for training of the SNN on GPUs. The SpikeGrad algorithm formulated on the variables $s$, $z$, $\delta$ and $x$ represents the algorithm that would be implemented on a dedicated, event-based spiking neural network hardware implementation of the IF neurons. We will now demonstrate how the SDE can be further reduced to obtain an ANN and SNN that are exactly equivalent.

Response equivalence For a large number of spikes, the SDE may be negligible compared to the activation of the ANN. However, in a framework whose objective it is to minimize the number of spikes emitted by each neuron, this error can have a potentially large impact.

One option to reduce the error between the ANN and the SNN output is to constrain the ANN during training to integer values. One possibility is to round the ANN outputs:

$$
\mathbb{S}_i^{l,\text{round}} := \operatorname{round}\left[\mathbb{S}_i^l\right] = \operatorname{round}\left[\frac{1}{\Theta_{\mathrm{ff}}}\left(\sum_j w_{ij}^l S_j^{l-1} + b_i^l\right)\right]. \tag{15}
$$

The round function here rounds to the nearest integer value, with boundary cases rounded away from zero. This behavior can be implemented in the SNN by a modified spike activation function which is applied after the full stimulus has been propagated. To obtain exactly the same response as the ANN, we have to take into account the current value of $S_i^l$ and modify the threshold values:

$$
s_i^{l,\text{res}}\left(V_i^l(T), S_i^l\right) := \left\{\begin{array}{ll} 1 & \text{if } V_i^l(T) > \Theta_{\mathrm{ff}}/2 \text{ or } \left(S_i^l \geq 0,\ V_i^l(T) = \Theta_{\mathrm{ff}}/2\right) \\ -1 & \text{if } V_i^l(T) < -\Theta_{\mathrm{ff}}/2 \text{ or } \left(S_i^l \leq 0,\ V_i^l(T) = -\Theta_{\mathrm{ff}}/2\right) \\ 0 & \text{otherwise} \end{array}\right.
\tag{16}
$$

Because this spike activation function is applied only to the residual values, we call it the residual spike activation function. The function is applied to a layer after all spikes have been propagated with the standard spike activation function (equation 3 or equation 4). We start with the lowest layer and propagate all residual spikes to the higher layers, which use the standard activation function. We then proceed with setting the next layer to residual mode and propagating the residual spikes. This is continued until we arrive at the last layer of the network.

By considering all possible rounding scenarios, it can be seen that equation 16 indeed implies:

$$
S_i^l + s_i^{l,\text{res}}\left(V_i^l(T), S_i^l\right) = \operatorname{round}\left[S_i^l + 1/\Theta_{\mathrm{ff}}\, V_i^l(T)\right] = \operatorname{round}\left[\mathbb{S}_i^l\right]. \tag{17}
$$

The same principle can be applied to obtain integer-rounded error propagation:

$$
\mathbb{Z}_i^{l,\text{round}} := \operatorname{round}\left[\mathbb{Z}_i^l\right] = \operatorname{round}\left[\frac{1}{\Theta_{\mathrm{bp}}}\left(\sum_k w_{ki}^{l+1} E_k^{l+1}\right)\right].
\tag{18}
$$

We have to apply the following modified spike activation function in the SNN after the full error has been propagated by the standard error spike activation function:

$$
z_i^{l,\text{res}}\left(U_i^l(\mathcal{T}), Z_i^l\right) := \left\{\begin{array}{ll} 1 & \text{if } U_i^l(\mathcal{T}) > \Theta_{\mathrm{bp}}/2 \text{ or } \left(Z_i^l \geq 0,\ U_i^l(\mathcal{T}) = \Theta_{\mathrm{bp}}/2\right) \\ -1 & \text{if } U_i^l(\mathcal{T}) < -\Theta_{\mathrm{bp}}/2 \text{ or } \left(Z_i^l \leq 0,\ U_i^l(\mathcal{T}) = -\Theta_{\mathrm{bp}}/2\right) \\ 0 & \text{otherwise} \end{array}\right. \tag{19}
$$

which implies:

$$
Z_i^l + z_i^{l,\text{res}}\left(U_i^l(\mathcal{T}), Z_i^l\right) = \operatorname{round}\left[Z_i^l + 1/\Theta_{\mathrm{bp}}\, U_i^l(\mathcal{T})\right] = \operatorname{round}\left[\mathbb{Z}_i^l\right]. \tag{20}
$$

We have therefore shown that the SNN will, after each propagation phase, have exactly the same accumulated responses as the corresponding integer activation ANN. The same principle can be applied to obtain other forms of rounding (e.g. floor and ceil), if equation 16 and equation 19 are modified accordingly.

Computational complexity estimation Note that we have only demonstrated the equivalence of the accumulated neuron responses. However, for each of the response values, there is a large number of possible combinations of $1$ and $-1$ values that lead to the same response. The computational complexity of the event-based algorithm therefore depends on the total number $n$ of these events. The best possible case is when the accumulated response value $S_{i}^{l}$ is represented by exactly $|S_{i}^{l}|$ spikes. In the worst case, a large number of additional redundant spikes is emitted which sum up to 0.
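The rounding identity of equations 16 and 17 (and its backward analogue, equations 19 and 20) lends itself to exhaustive checking. The sketch below is an illustrative reimplementation, assuming $\Theta_{\mathrm{ff}} = 1$: it implements the residual spike activation and verifies equation 17 for all output counts $S_i^l \in \{-3, \dots, 3\}$ and a grid of residual potentials in $(-\Theta_{\mathrm{ff}}, \Theta_{\mathrm{ff}})$, including the tie cases $\pm\Theta_{\mathrm{ff}}/2$.

```python
import math

def round_away(x):
    """Round to the nearest integer, ties rounded away from zero (equation 15)."""
    return math.floor(x + 0.5) if x >= 0 else math.ceil(x - 0.5)

def residual_spike(V_T, S, theta=1.0):
    """Residual spike activation function (equation 16)."""
    if V_T > theta / 2 or (S >= 0 and V_T == theta / 2):
        return 1
    if V_T < -theta / 2 or (S <= 0 and V_T == -theta / 2):
        return -1
    return 0

theta = 1.0
for S in range(-3, 4):
    for k in range(-9, 10):
        V_T = k * theta / 10  # residuals in (-theta, theta), incl. +-theta/2
        lhs = S + residual_spike(V_T, S, theta)
        rhs = round_away(S + V_T / theta)  # round[S + V(T)/theta], equation 17
        assert lhs == rhs
print("equation 17 holds on the tested grid")
```

A single residual spike always suffices, because $|V_i^l(T)/\Theta_{\mathrm{ff}}| < 1$ means rounding can shift the accumulated count by at most one.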
The maximal number of spikes in each layer is bounded by the largest possible integration value that can be obtained. This depends on the maximal absolute weight value $w_{\mathrm{max}}^l$, the number of connections $N_{\mathrm{in}}^l$ and the number of spike events $n^{l-1}$ each connection receives, which is given by the maximal value of the previous layer (or the input in the first layer):

$$
n_{\min}^l = \left|S_i^l\right|, \quad n_{\max}^l = \left\lfloor \frac{1}{\Theta_{\mathrm{ff}}} N_{\mathrm{in}}^l w_{\max}^l n_{\max}^{l-1} \right\rfloor. \tag{21}
$$

The same reasoning applies to backpropagation. Our experiments show that for input encodings where the input is provided in a continuous fashion, and weight values that are much smaller than the threshold value, the deviation from the best-case scenario is rather small. This is because in this case the sub-threshold integration averages out the fluctuations in the signal. This way the firing rate stays rather close to its long-term average and few redundant spikes are emitted. For the total number of spikes $n$ in the full network on the CIFAR10 test set, we obtain empirically $(n - n_{\min})/n_{\min} < 0.035$.

# 4 EXPERIMENTS

Classification performance Tables 1 and 2 compare the state-of-the-art results for SNNs on the MNIST and CIFAR10 datasets. It can be seen that in both cases, our results are competitive with respect to the state-of-the-art results of other SNNs trained with high precision gradients. Compared to results using the same topology, our algorithm performs at least equivalently.

Table 1: Comparison of different state-of-the-art spiking CNN architectures on MNIST. * indicates that the same topology (28x28-15C5-P2-40C5-P2-300-10) was used.
| Architecture | Method | Rec. Rate (max[mean±std]) |
| --- | --- | --- |
| Wu et al. (2018b)* | BP float gradient | 99.42% |
| Rueckauer et al. (2017) | CNN converted to SNN | 99.44% |
| Jin et al. (2018)* | BP float gradient | 99.49% |
| This work* | BP float gradient | 99.48[99.36 ± 0.06]% |
| This work* | BP spike gradient | 99.52[99.38 ± 0.06]% |
Table 2: Comparison of different state-of-the-art spiking CNN architectures on CIFAR10. * indicates that the same topology (32x32-128C3-256C3-P2-512C3-P2-1024C3-512C3-1024-512-10) was used.
| Architecture | Method | Rec. Rate (max[mean±std]) |
| --- | --- | --- |
| Rueckauer et al. (2017) | CNN converted SNN (with BN) | 90.85% |
| Sengupta et al. (2019) | VGG-16 converted to SNN | 91.55% |
| Wu et al. (2018c)* | BP float gradient (no NeuNorm) | 89.32% |
| This work* | BP float gradient | 89.72[89.38 ± 0.25]% |
| This work* | BP spike gradient | 89.99[89.49 ± 0.28]% |
The final classification performance of the network as a function of the error scaling term $\alpha$ in the final layer can be seen in figure 1. Previous work on low bitwidth gradients (Zhou et al., 2018) found that gradients usually require a higher precision than both weights and activations. Our results also indicate that a certain minimum number of error spikes is necessary to achieve convergence. This strongly depends on the depth of the network and on whether enough spikes are triggered to provide sufficient gradient signal in the bottom layers. For the CIFAR10 network, convergence becomes unstable for approximately $\alpha < 300$. If the number of operations is large enough for convergence, the required precision for the gradient does not seem to be extremely high. On the MNIST task, the difference in test performance between a gradient rescaled by a factor of 50 and a gradient rescaled by a factor of 100 becomes insignificant. In the CIFAR10 task, this is true for a rescaling by 400 or 500. The results obtained with the float precision gradients in tables 1 and 2 likewise show the same performance within the margin of error.

To investigate the performance of SpikeGrad on databases with a larger number of classes, we also ran an additional experiment on the CIFAR100 dataset. Using exactly the same architecture as for CIFAR10 with $\alpha = 500$ (besides the final layer, which is increased to 100 classes), we obtain a maximal classification score of $64.40\%$. Running the same experiment with only the forward pass encoded in spikes, but floating point gradients, we obtain $64.69\%$. This result could probably be improved by adapting the architecture better to the dataset. However, it demonstrates that coding the gradient into spikes does not lead to a large precision loss even in this scenario.
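The instability for small $\alpha$ has a simple arithmetic interpretation: if the scaled error of an output neuron stays below the discretization step, no error spike is emitted at all and the gradient signal vanishes. The toy calculation below illustrates this with an invented error vector (the values and the assumption $\Theta_{\mathrm{bp}} = 1$ are for illustration only, not taken from the experiments):

```python
# Hypothetical output-layer error vector for a 10-class network;
# the magnitudes are invented for illustration.
err = [0.0012, -0.0007, 0.0003, -0.0018, 0.0005,
       0.0009, -0.0002, 0.0015, -0.0004, 0.0001]

for alpha in (10, 100, 500):
    # Signed error-spike counts per output neuron, assuming the error is
    # multiplied by alpha and then discretized to integers (Theta_bp = 1.0).
    spikes = [round(alpha * e) for e in err]
    print(alpha, spikes, sum(abs(s) for s in spikes))
```

With these illustrative values, $\alpha = 10$ and $\alpha = 100$ yield no error spikes at all, while $\alpha = 500$ produces three; only beyond some minimum scaling does any gradient signal survive the discretization.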
Sparsity in backpropagated gradient To evaluate the potential efficiency of the spike coding scheme relative to an ANN, we use the metric of relative synaptic operations. A synaptic operation corresponds to a multiply-accumulate (MAC) in the case of an ANN, and a simple accumulation (ACC) in the case of an SNN. This metric allows us to compare networks based on their fundamental operation. Its advantage is that it does not depend on the exact implementation of the operations (for instance, the number of bits used to represent each number). Since an ACC is, however, generally cheaper and easier to implement than a MAC, we can be sure that an SNN is more efficient in terms of its operations than the corresponding ANN if the number of ACCs is smaller than the number of MACs (without considering potential hardware overheads of event-based computing).

Numbers were obtained with the integer activations of the equivalent ANN to keep simulation times tractable. As previously explained, the integer response of the equivalent ANN represents the best-case scenario for the SNN, i.e. the activation encoding with the lowest number of spikes. The actual

![](images/163eee5148c4227ed19ec7b8673e4057bc7ae3543d708f61955ceac72398ea38.jpg)
Figure 1: Number of relative synaptic operations during backpropagation for different error scaling factors $\alpha$ as a function of the epoch. Numbers are based on activation values of the equivalent integer activation ANN. Test performance with error is given for each $\alpha$. (a) MNIST. (b) CIFAR10.

![](images/eb2aa6115daf220b1716df08bf09378f46711f15ff9dfe4ac3bc6192d9cfe27d.jpg)

![](images/be982a36b791b03650b00bdb3f6be767b21c525df44dbc2782969fd520f048ac.jpg)
Figure 2: Number of relative synaptic operations during backpropagation in each layer (connections in direction of backpropagation) for different epochs. Numbers are based on activation values of the equivalent integer activation ANN.
(a) MNIST with $\alpha = 100$. (b) CIFAR10 with $\alpha = 500$.

![](images/553da5bd4416ff39d76ca5009bd5b8bcc5d02d348f81c60d3d5523d7aff89ae3.jpg)

number of events and synaptic operations in an SNN may therefore slightly deviate from these numbers.

In figure 1, it can be seen that the number of operations decreases with increasing inference precision of the network. This is a result of the decrease of the error in the classification layer, which leads to the emission of a smaller number of error spikes. Figure 2 demonstrates how the number of operations during the backpropagation phase is distributed over the layers of the network (the float-input bottom layer and average pooling layers were omitted). While propagating deeper down the network, the relative number of operations decreases and the error becomes increasingly sparse. This tendency is consistent throughout the whole training process for different epochs.

# 5 DISCUSSION AND CONCLUSION

Using spike-based propagation of the gradient, we demonstrated that the paradigm of event-based information propagation can be easily translated to the backpropagation algorithm. We have not only shown that competitive inference performance can be achieved, but also that gradient propagation seems particularly suitable to leverage spike-based processing by exploiting high signal sparsity. For both forward and backward propagation, SpikeGrad requires a similar event-based communication infrastructure between neurons, which simplifies a possible spiking hardware implementation. One restriction of our algorithm is the need for negative spikes, which could be problematic for some neuromorphic hardware platforms.

In particular, the topology used for CIFAR10 classification is rather large for the given task. We decided to use the same topologies as other state-of-the-art SNNs to allow for better comparison.
The relatively large number of parameters of these topologies may to a certain extent explain the very low number of relative synaptic operations we observe during backpropagation. It has been demonstrated that the number of parameters in many ANNs can be strongly reduced without a significant impact on performance (Han et al., 2016). It remains to be shown that the spike coding approach also provides sufficient gains in topologies that are optimized for minimal computation and memory requirements. This seems at least possible, since the reduction of operation count in SNNs is mainly caused by two factors. The first factor is actual sparsity, that is, the number of zeros that are produced by thresholding and the ReLU activation function. The second factor is the more fine-grained distribution of activation and error values: if most values are represented by small integers, only a few spikes are necessary to encode them. The consequence is that even topologies that are compressed, and therefore have lower sparsity, may be able to profit from spike coding if most of the activation or error values are small. This is typically the case for networks quantized to low-precision integer values (see for instance Zhou et al. (2018) or Wu et al. (2018a)) and could lead to interesting synergies between our spike-based computation model and ANN quantization methods.

Our analysis of the potential energy efficiency of SpikeGrad in a dedicated SNN architecture is based on counting the number of basic operations that are performed at the synapses of a neuron. This does not take into account the potential hardware overhead that is introduced by the implementation of the IF neuron model and the event-based processing scheme. Whether the low operation count can in the end be translated into higher processing speed or increased energy efficiency, compared to a normal ANN, depends on the particular SNN hardware implementation.
Such dedicated neuromorphic hardware is the subject of current research efforts (see for instance Merolla et al. (2014) and Davies et al. (2018)). We assumed in this work that such a system might be built and used operation count as our principal metric, since it enables us to easily approximate the energy consumption of a spiking neuromorphic chip if the average cost of such an operation is known. Our work shows that a hardware implementation of SpikeGrad might profit sufficiently from the low operation count that energy or speed improvements could be possible at least in certain cases. + +# REFERENCES + +Pierre Baldi and Peter Sadowski. A theory of local learning, the learning channel, and the optimality of backpropagation. *Neural Networks*, 38:51-74, 2016. +Sergey Bartunov, Adam Santoro, Blake A. Richards, Geoffrey E. Hinton, and Timothy P. Lillicrap. Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures. In Advances in Neural Information Processing Systems (NIPS), 2018. +Guillaume Bellec, Darjan Salaj, Anand Subramoney, Robert Legenstein, and Wolfgang Maass. Long short-term memory and learning-to-learn in networks of spiking neurons. In Advances in Neural Information Processing Systems (NIPS), 2018. +Jonathan Binas, Giacomo Indiveri, and Michael Pfeiffer. Deep counter networks for asynchronous event-based processing. arXiv:1611.00710v1, NIPS 2016 workshop "Computing with Spikes", 2016. +M. Davies, N. Srinivasa, T. Lin, G. Chinya, Y. Cao, S. H. Choday, G. Dimou, P. Joshi, N. Imam, S. Jain, Y. Liao, C. Lin, A. Lines, R. Liu, D. Mathaikutty, S. McCoy, A. Paul, J. Tse, G. Venkataramanan, Y. Weng, A. Wild, Y. Yang, and H. Wang. Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro, 38(1):82-99, January 2018. +Peter U. Diehl, Daniel Neil, Jonathan Binas, Matthew Cook, Shih-Chii Liu, and Michael Pfeiffer. Fast-Classifying, High-Accuracy Spiking Deep Networks Through Weight and Threshold Balancing. 
In International Joint Conference on Neural Networks (IJCNN), 2015. +Steven K. Esser, Paul A. Merolla, John V. Arthur, Andrew S. Cassidy, Rathinakumar Appuswamy, Alexander Andreopoulos, David J. Berg, Jeffrey L. McKinstry, Timothy Melano, Davis R. Barch, Carmelo di Nolfo, Pallab Datta, Arnon Amir, Brian Taba, Myron D. Flickner, and Dharmendra S. Modha. Convolutional networks for fast, energy-efficient neuromorphic computing. Proceedings of the National Academy of Sciences, 113(41):11441-11446, 2016. +Jordan Guerguiev, Timothy P. Lillicrap, and Blake Richards. Towards deep learning with segregated dendrites. eLife, 2017. +Song Han, Huizi Mao, and William J. Dally. Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffmann Coding. In International Conference on Learning Representations (ICLR), 2016. +Yingyezhe Jin, Peng Li, and Wenrui Zhang. Hybrid Macro/Micro Level Backpropagation for Training Deep Spiking Neural Networks. In Advances in Neural Information Processing Systems (NIPS), pp. 7005-7015, 2018. +Jun Haeng Lee, Tobi Delbruck, and Michael Pfeiffer. Training Deep Spiking Neural Networks Using Backpropagation. Frontiers in Neuroscience, (10:508), 2016. +Timothy P. Lillicrap, Daniel Cownden, Douglas B. Tweed, and Colin J. Akerman. Random synaptic feedback weights support error backpropagation for deep learning. Nature Communications, 2016. +Wolfgang Maass. Networks of Spiking Neurons: The Third Generation of Neural Network Models. Electronic Colloquium on Computational Complexity, (9):1659-1671, 1997. +Paul A. Merolla, John V. Arthur, Rodrigo Alvarez-Icaza, Andrew S. Cassidy, Jun Sawada, Filipp Akopyan, Bryan L. Jackson, Nabil Imam, Chen Guo, Yutaka Nakamura, Bernard Brezzo, Ivan Vo, Steven K. Esser, Rathinakumar Appuswamy, Brian Taba, Arnon Amir, Myron D. Flickner, William P. Risk, Rajit Manohar, and Dharmendra S. Modha. A million spiking-neuron integrated circuit with a scalable communication network and interface. 
Science, 345(6197):668-673, 2014. +Emre O. Neftci, Charles Augustine, Paul Somnath, and Georgios Detorakis. Event-Driven Random Backpropagation: Enabling Neuromorphic Deep Learning Machines. Frontiers in Neuroscience, 11(324), 2017. +Arild Nøkland. Direct Feedback Alignment Provides Learning in Deep Neural Networks. In Advances in Neural Information Processing Systems (NIPS), 2016. + +Peter O'Connor and Max Welling. Deep Spiking Networks. arXiv:1602.08323v2, NIPS 2016 workshop "Computing with Spikes", 2016. +Bodo Rueckauer, Iulia-Alexandra Lungu, Yuhuang Hu, Michael Pfeiffer, and Shih-Chii Liu. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification. Frontiers in Neuroscience, 11(682), 2017. +Joo Sacramento, Rui Costa, Y Bengio, and Walter Senn. Dendritic cortical microcircuits approximate the backpropagation algorithm. In Advances in Neural Information Processing Systems (NIPS), 2018. +Arash Samadi, Timothy P. Lillicrap, and Douglas B. Tweed. Deep Learning with Dynamic Spiking Neurons and Fixed Feedback Weights. Neural Computation, (29):578-602, 2017. +Abhronil Sengupta, Yuting Ye, Robert Wang, Chiao Liu, and Kaushik Roy. Going Deeper in Spiking Neural Networks: VGG and Residual Architectures. Frontiers in Neuroscience, 13:95, 2019. +William Severa, Craig M. Vineyard, Ryan Dellana, Stephen J. Verzi, and James B. Aimone. Training deep neural networks for binary communication with the Whetstone method. Nature Machine Intelligence, (1):86-94, 2019. +Sumit Bam Shrestha and Garrick Orchard. Slayer: Spike layer error reassignment in time. In Advances in Neural Information Processing Systems (NIPS), pp. 1412-1421, 2018. +Johannes C. Thiele, Olivier Bichler, Antoine Dupret, Sergio Solinas, and Giacomo Indiveri. A Spiking Network for Inference of Relations Trained with Neuromorphic Backpropagation. In (to appear) International Joint Conference on Neural Networks (IJCNN), arXiv:1903.04341, 2019. 
Jibin Wu, Yansong Chua, Malu Zhang, Qu Yang, Guoqi Li, and Haizhou Li. Deep Spiking Neural Network with Spike Count based Learning Rule. arXiv:1902.05705v1, 2019.
Yupei Wu, Lei Deng, Guoqi Li, Chen Feng, and Luping Shi. Training and Inference with Integers in Deep Neural Networks. In International Conference on Learning Representations (ICLR), Vancouver, Canada, number 12:331, 2018a.
Yupei Wu, Lei Deng, Guoqi Li, Jun Zhu, and Luping Shi. Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks. Frontiers in Neuroscience, (12:331), 2018b.
Yupei Wu, Lei Deng, Guoqi Li, Jun Zhu, and Luping Shi. Direct Training for Spiking Neural Networks: Faster, Larger, Better. arXiv:1809.05793v1, 2018c.
Shihui Yin, Shreyas K. Venkataramanaiah, Gregory K. Chen, Ram Krishnamurthy, Yu Cao, Chaitali Chakrabarti, and Jae-sun Seo. Algorithm and Hardware Design of Discrete-Time Spiking Neural Networks Based on Back Propagation with Binary Activations. arXiv:1709.06206v1, 2017.
Friedemann Zenke and Surya Ganguli. Superspike: Supervised learning in multilayer spiking neural networks. Neural Computation, (30):1514-1541, 2018.
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv:1606.06160v3, 2018.

# A APPENDIX

# B ADDITIONAL EXPLANATIONS

# B.1 ROUND FUNCTION

It can be seen that the residual spike activation function indeed covers all possible cases. In particular, the boundary cases are rounded correctly: for $V_{i} = \Theta_{\mathrm{ff}}/2$, we obtain $\mathbb{S} = S_{i} + 0.5$. For $S_{i} \geq 0$, this should be rounded to $\operatorname{round}[\mathbb{S}] = S_{i} + 1$, and for $S_{i} < 0$, we should obtain $\operatorname{round}[\mathbb{S}] = S_{i}$. Similarly, for $V_{i} = -\Theta_{\mathrm{ff}}/2$, we obtain $\mathbb{S} = S_{i} - 0.5$.
For $S_{i} \leq 0$ , this should be rounded to round $[\mathbb{S}] = S_{i} - 1$ and for $S_{i} > 0$ , we should obtain round $[\mathbb{S}] = S_{i}$ . + +The same reasoning applies to error propagation. + +# C ADDITIONAL ARCHITECTURE DETAILS + +# C.1 AVERAGE POOLING + +All pooling layers are average pooling layers, since these are much easier to implement in a spiking neural network. Average pooling can simply be implemented as an IF neuron with constant weights: + +$$ +w _ {i j} ^ {l} = \frac {1}{p _ {i} ^ {l}}, \tag {22} +$$ + +where $p_i^l$ is the number of neurons in the pooling window of the neuron. + +# C.2 DROPOUT + +Dropout can be implemented by an additional gating variable which randomly removes some neurons during forward and backward propagation. During learning, the activations of the other neurons have to be rescaled by the inverse of the dropout probability $1 / (1 - p_{\mathrm{drop}})$ . This can be implemented in the SNN framework by rescaling the threshold values by $(1 - p_{\mathrm{drop}})$ . + +# C.3 MOMENTUM + +While momentum is not required for the basic algorithm, we used it to speed up training. In a SNN implementation, this could be implemented by adding additional accumulators that save past gradient information. + +# C.4 PARAMETER PRECISION + +All real valued variables are coded with 32 bit floating point variables. + +# D EXPERIMENTS + +# D.1 COMPUTING FRAMEWORK + +All experiments are performed with custom CUDA/cuDNN accelerated C++ code. Training is performed on RTX 2080 Ti graphic cards. + +# D.2 ERROR BARS + +For all experiments, the means, errors and maximal values are calculated over 20 simulation runs. + +# D.3 PREPROCESSING + +No preprocessing is used on the MNIST dataset. We separate the training set of size 60000 into 50000 training and 10000 validation examples, which are used to monitor convergence. Testing is performed on the test set of 10000 examples. 
For CIFAR10, the values of all color channels are divided by 255 and then rescaled by a factor of 20 to trigger sufficient activation in the network. The usual preprocessing and data augmentation are applied. For data augmentation, images are padded with the image mean value by two pixels on each side and random slices of $32 \times 32$ are extracted. Additionally, images are flipped randomly along the vertical axis. We separate the training set of size 50000 into 40000 training and 10000 validation examples, which are used to monitor convergence. Testing is performed on the test set of 10000 examples.

Final scores were obtained without retraining on the validation set.

Table 3: Parameters used for training of MNIST.
| Parameter | Value |
| --- | --- |
| Epochs | 60 |
| Batch size | 128 |
| $\Theta_{\mathrm{ff}}$ | 1.0 |
| $\Theta_{\mathrm{bp}}$ | 1.0 |
| Base learning rate $\eta$ | 0.1 |
| Momentum | 0.9 |
| Decay policy | multiply by 0.1 every 20 epochs |
| Dropout (fc1 only) | 0.5 |
Table 4: Parameters used for training of CIFAR10.
| Parameter | Value |
| --- | --- |
| Epochs | 300 |
| Batch size | 16 |
| $\Theta_{\mathrm{ff}}$ | 1.0 |
| $\Theta_{\mathrm{bp}}$ | 1.0 |
| Base learning rate $\eta$ | 0.001 |
| Momentum | 0.9 |
| Decay policy | multiply by 0.1 after 150 epochs |
| Dropout (all except pool and top) | 0.2 |
+ +# D.4 HYPERPARAMETERS + +The hyperparameters for training can be seen in tables 3 and 4. + +The maximal inference performances in the results were achieved with $\alpha = 100$ for MNIST and $\alpha = 400$ for CIFAR10. \ No newline at end of file diff --git a/spikegradanannequivalentcomputationmodelforimplementingbackpropagationwithspikes/images.zip b/spikegradanannequivalentcomputationmodelforimplementingbackpropagationwithspikes/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..acca286e87851a0cf13d0eb1542c42b257f37192 --- /dev/null +++ b/spikegradanannequivalentcomputationmodelforimplementingbackpropagationwithspikes/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:840c6e7ee2257d79b304382dbddbddce1b456da8789911583558e2713d85c315 +size 422690 diff --git a/spikegradanannequivalentcomputationmodelforimplementingbackpropagationwithspikes/layout.json b/spikegradanannequivalentcomputationmodelforimplementingbackpropagationwithspikes/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..dc6fe5fa5b6bf287f6410398d2303fd7e2f74615 --- /dev/null +++ b/spikegradanannequivalentcomputationmodelforimplementingbackpropagationwithspikes/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c058c37313805d3363832f90d201dbe495548282428de016f2fc994b677ffb4d +size 443538 diff --git a/sqilimitationlearningviareinforcementlearningwithsparserewards/cee46d5e-1bfb-4498-9896-eab3fc64a057_content_list.json b/sqilimitationlearningviareinforcementlearningwithsparserewards/cee46d5e-1bfb-4498-9896-eab3fc64a057_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..ca775e69685ebb2ea780dcdc81aacac952e20f27 --- /dev/null +++ b/sqilimitationlearningviareinforcementlearningwithsparserewards/cee46d5e-1bfb-4498-9896-eab3fc64a057_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:3785d8fbd2316414a3030c75d2e2834bc1d3c9bbce814293908ae4dabae6185b +size 86903 diff --git a/sqilimitationlearningviareinforcementlearningwithsparserewards/cee46d5e-1bfb-4498-9896-eab3fc64a057_model.json b/sqilimitationlearningviareinforcementlearningwithsparserewards/cee46d5e-1bfb-4498-9896-eab3fc64a057_model.json new file mode 100644 index 0000000000000000000000000000000000000000..24a7a00b6953d3b470768a1277ccd395e2f45c74 --- /dev/null +++ b/sqilimitationlearningviareinforcementlearningwithsparserewards/cee46d5e-1bfb-4498-9896-eab3fc64a057_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b249f14e3dea2804298732af5f82276cdd5cfe665ac344f02131056315989409 +size 101273 diff --git a/sqilimitationlearningviareinforcementlearningwithsparserewards/cee46d5e-1bfb-4498-9896-eab3fc64a057_origin.pdf b/sqilimitationlearningviareinforcementlearningwithsparserewards/cee46d5e-1bfb-4498-9896-eab3fc64a057_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d6592b4b0301908fe3163d2b02e50299089a108c --- /dev/null +++ b/sqilimitationlearningviareinforcementlearningwithsparserewards/cee46d5e-1bfb-4498-9896-eab3fc64a057_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7bf6c578b75fd1ac71245cee818c26ca96d0ad0aadac78bd0719e5636b884221 +size 2258461 diff --git a/sqilimitationlearningviareinforcementlearningwithsparserewards/full.md b/sqilimitationlearningviareinforcementlearningwithsparserewards/full.md new file mode 100644 index 0000000000000000000000000000000000000000..dde2c6334a7faffc1df869bf71987975d08e9a67 --- /dev/null +++ b/sqilimitationlearningviareinforcementlearningwithsparserewards/full.md @@ -0,0 +1,325 @@ +# SQIL: IMITATION LEARNING VIA REINFORCEMENT LEARNING WITH SPARSE REWARDS + +Siddharth Reddy, Anca D. 
Dragan, Sergey Levine + +Department of Electrical Engineering and Computer Science + +University of California, Berkeley + +{sgr,anca,svlevine}@berkeley.edu + +# ABSTRACT + +Learning to imitate expert behavior from demonstrations can be challenging, especially in environments with high-dimensional, continuous observations and unknown dynamics. Supervised learning methods based on behavioral cloning (BC) suffer from distribution shift: because the agent greedily imitates demonstrated actions, it can drift away from demonstrated states due to error accumulation. Recent methods based on reinforcement learning (RL), such as inverse RL and generative adversarial imitation learning (GAIL), overcome this issue by training an RL agent to match the demonstrations over a long horizon. Since the true reward function for the task is unknown, these methods learn a reward function from the demonstrations, often using complex and brittle approximation techniques that involve adversarial training. We propose a simple alternative that still uses RL, but does not require learning a reward function. The key idea is to provide the agent with an incentive to match the demonstrations over a long horizon, by encouraging it to return to demonstrated states upon encountering new, out-of-distribution states. We accomplish this by giving the agent a constant reward of $r = +1$ for matching the demonstrated action in a demonstrated state, and a constant reward of $r = 0$ for all other behavior. Our method, which we call soft $Q$ imitation learning (SQIL), can be implemented with a handful of minor modifications to any standard Q-learning or off-policy actor-critic algorithm. Theoretically, we show that SQIL can be interpreted as a regularized variant of BC that uses a sparsity prior to encourage long-horizon imitation. 
Empirically, we show that SQIL outperforms BC and achieves competitive results compared to GAIL, on a variety of image-based and low-dimensional tasks in Box2D, Atari, and MuJoCo. This paper is a proof of concept that illustrates how a simple imitation method based on RL with constant rewards can be as effective as more complex methods that use learned rewards. + +# 1 INTRODUCTION + +Many sequential decision-making problems can be tackled by imitation learning: an expert demonstrates near-optimal behavior to an agent, and the agent attempts to replicate that behavior in novel situations (Argall et al., 2009). This paper considers the problem of training an agent to imitate an expert, given expert action demonstrations and the ability to interact with the environment. The agent does not observe a reward signal or query the expert, and does not know the state transition dynamics. + +Standard approaches based on behavioral cloning (BC) use supervised learning to greedily imitate demonstrated actions, without reasoning about the consequences of actions (Pomerleau, 1991). As a result, compounding errors cause the agent to drift away from the demonstrated states (Ross et al., 2011). The problem with BC is that, when the agent drifts and encounters out-of-distribution states, the agent does not know how to return to the demonstrated states. Recent methods based on inverse reinforcement learning (IRL) overcome this issue by training an RL agent not only to imitate demonstrated actions, but also to visit demonstrated states (Ng et al., 2000; Wulfmeier et al., 2015; Finn et al., 2016b; Fu et al., 2017). This is also the core idea behind generative adversarial imitation learning (GAIL) (Ho & Ermon, 2016), which implements IRL using generative adversarial networks (Goodfellow et al., 2014; Finn et al., 2016a).
Since the true reward function for the task is unknown, these methods construct a reward signal from the demonstrations through adversarial training, making them difficult to implement and use in practice (Kurach et al., 2018). + +The main idea in this paper is that the effectiveness of adversarial imitation methods can be achieved by a much simpler approach that does not require adversarial training, or indeed learning a reward function at all. Intuitively, adversarial methods encourage long-horizon imitation by providing the agent with (1) an incentive to imitate the demonstrated actions in demonstrated states, and (2) an incentive to take actions that lead it back to demonstrated states when it encounters new, out-of-distribution states. One of the reasons why adversarial methods outperform greedy methods, such as BC, is that greedy methods only do (1), while adversarial methods do both (1) and (2). Our approach is intended to do both (1) and (2) without adversarial training, by using constant rewards instead of learned rewards. The key idea is that, instead of using a learned reward function to provide a reward signal to the agent, we can simply give the agent a constant reward of $r = +1$ for matching the demonstrated action in a demonstrated state, and a constant reward of $r = 0$ for all other behavior. + +We motivate this approach theoretically, by showing that it implements a regularized variant of BC that learns long-horizon imitation by (a) imposing a sparsity prior on the reward function implied by the imitation policy, and (b) incorporating information about the state transition dynamics into the imitation policy. Intuitively, our method accomplishes (a) by training the agent using an extremely sparse reward function – $+1$ for demonstrations, $0$ everywhere else – and accomplishes (b) by training the agent with RL instead of supervised learning.
+ +We instantiate our approach with soft Q-learning (Haarnoja et al., 2017) by initializing the agent's experience replay buffer with expert demonstrations, setting the rewards to a constant $r = +1$ in the demonstration experiences, and setting rewards to a constant $r = 0$ in all of the new experiences the agent collects while interacting with the environment. Since soft Q-learning is an off-policy algorithm, the agent does not necessarily have to visit the demonstrated states in order to experience positive rewards. Instead, the agent replays the demonstrations that were initially added to its buffer. Thus, our method can be applied in environments with stochastic dynamics and continuous states, where the demonstrated states are not necessarily reachable by the agent. We call this method soft Q imitation learning (SQIL). + +The main contribution of this paper is SQIL: a simple and general imitation learning algorithm that is effective in MDPs with high-dimensional, continuous observations and unknown dynamics. We run experiments in four image-based environments – Car Racing, Pong, Breakout, and Space Invaders – and three low-dimensional environments – Humanoid, HalfCheetah, and Lunar Lander – to compare SQIL to two prior methods: BC and GAIL. We find that SQIL outperforms BC and achieves competitive results compared to GAIL. Our experiments illustrate two key benefits of SQIL: (1) that it can overcome the state distribution shift problem of BC without adversarial training or learning a reward function, which makes it easier to use, e.g., with images, and (2) that it is simple to implement using existing Q-learning or off-policy actor-critic algorithms.
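The buffer setup just described can be sketched in a few lines. This is a minimal illustration: the transition format and helper names are assumptions for exposition, not the authors' implementation.

```python
import random

def make_demo_buffer(demos):
    """Store expert transitions (s, a, s') with a constant reward of +1."""
    return [(s, a, 1.0, s_next) for (s, a, s_next) in demos]

def add_agent_transition(samp_buffer, s, a, s_next):
    """New experiences collected by the agent get a constant reward of 0."""
    samp_buffer.append((s, a, 0.0, s_next))

def sample_balanced(demo_buffer, samp_buffer, batch_size):
    """Each training batch mixes demonstration and new experiences 50/50."""
    half = batch_size // 2
    return (random.choices(demo_buffer, k=half)
            + random.choices(samp_buffer, k=batch_size - half))
```

The 50/50 split mirrors the balanced sampling the method uses, so the effective reward for mimicking the expert does not vanish as the agent's own zero-reward experiences accumulate.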
+ +# 2 SOFT Q IMITATION LEARNING + +SQIL performs soft Q-learning (Haarnoja et al., 2017) with three small, but important, modifications: (1) it initially fills the agent's experience replay buffer with demonstrations, where the rewards are set to a constant $r = +1$ ; (2) as the agent interacts with the world and accumulates new experiences, it adds them to the replay buffer, and sets the rewards for these new experiences to a constant $r = 0$ ; and (3) it balances the number of demonstration experiences and new experiences (50% each) in each sample from the replay buffer. These three modifications are motivated theoretically in Section 3, via an equivalence to a regularized variant of BC. Intuitively, these modifications create a simple reward structure that gives the agent an incentive to imitate the expert in demonstrated states, and to take actions that lead it back to demonstrated states when it strays from the demonstrations. + +Algorithm 1 Soft Q Imitation Learning (SQIL) +1: Require $\lambda_{\mathrm{samp}}\in \mathbb{R}_{\geq 0},\mathcal{D}_{\mathrm{demo}}$ +2: Initialize $\mathcal{D}_{\mathrm{samp}}\gets \emptyset$ +3: while $Q_{\theta}$ not converged do +4: $\pmb {\theta}\gets \pmb {\theta} - \eta \nabla_{\pmb{\theta}}(\delta^{2}(\mathcal{D}_{\mathrm{demo}},1) + \lambda_{\mathrm{samp}}\delta^{2}(\mathcal{D}_{\mathrm{samp}},0))$ {See Equation 1} +5: Sample transition $(s,a,s^{\prime})$ with imitation policy $\pi (a|s)\propto \exp (Q_{\theta}(s,a))$ +6: $\mathcal{D}_{\mathrm{samp}}\gets \mathcal{D}_{\mathrm{samp}}\cup \{(s,a,s^{\prime})\}$ +7: end while + +Crucially, since soft Q-learning is an off-policy algorithm, the agent does not necessarily have to visit the demonstrated states in order to experience positive rewards. Instead, the agent replays the demonstrations that were initially added to its buffer. 
Thus, SQIL can be used in stochastic environments with high-dimensional, continuous states, where the demonstrated states may never actually be encountered by the agent. + +SQIL is summarized in Algorithm 1, where $Q_{\theta}$ is the soft Q function, $D_{\mathrm{demo}}$ are demonstrations, $\delta^2$ is the squared soft Bellman error, + +$$ +\delta^ {2} (\mathcal {D}, r) \triangleq \frac {1}{| \mathcal {D} |} \sum_ {(s, a, s ^ {\prime}) \in \mathcal {D}} \left(Q _ {\boldsymbol {\theta}} (s, a) - \left(r + \gamma \log \left(\sum_ {a ^ {\prime} \in \mathcal {A}} \exp \left(Q _ {\boldsymbol {\theta}} \left(s ^ {\prime}, a ^ {\prime}\right)\right)\right)\right)\right) ^ {2}, \tag {1} +$$ + +and $r \in \{0,1\}$ is a constant reward. The experiments in Section 4 use a convolutional neural network or multi-layer perceptron to model $Q_{\theta}$ , where $\theta$ are the weights of the neural network. Section A.3 in the appendix contains additional implementation details, including values for the hyperparameter $\lambda_{\mathrm{samp}}$ ; note that the simple default value of $\lambda_{\mathrm{samp}} = 1$ works well across a variety of environments. + +As the imitation policy in line 5 of Algorithm 1 learns to behave more like the expert, a growing number of expert-like transitions get added to the buffer $\mathcal{D}_{\mathrm{samp}}$ with an assigned reward of zero. This causes the effective reward for mimicking the expert to decay over time. Balancing the number of demonstration experiences and new experiences (50% each) sampled for the gradient step in line 4 ensures that this effective reward remains at least $1 / (1 + \lambda_{\mathrm{samp}})$ , instead of decaying to zero. In practice, we find that this reward decay does not degrade performance if SQIL is halted once the squared soft Bellman error loss converges to a minimum (e.g., see Figure 8 in the appendix). 
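The squared soft Bellman error in Equation 1, the objective in line 4 of Algorithm 1, and the Boltzmann sampling in line 5 can be sketched as follows. This is an illustrative NumPy sketch, assuming a discrete action space and batches of precomputed Q-values; function names are ours, and in practice the objective would be minimized with an autodiff framework.

```python
import numpy as np

def soft_value(q_next):
    """log sum_a' exp Q(s', a'), computed row-wise with the max trick."""
    m = q_next.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(q_next - m).sum(axis=1, keepdims=True)))[:, 0]

def sq_soft_bellman_error(q_sa, q_next, r, gamma=0.99):
    """delta^2(D, r) from Equation 1.
    q_sa:   Q_theta(s, a) for each transition in the batch, shape (N,)
    q_next: Q_theta(s', a') over all actions a', shape (N, |A|)
    r:      constant reward, 1 for demonstrations and 0 for new samples"""
    target = r + gamma * soft_value(q_next)
    return np.mean((q_sa - target) ** 2)

def sqil_objective(demo, samp, lam_samp=1.0, gamma=0.99):
    """The loss minimized in line 4 of Algorithm 1."""
    return (sq_soft_bellman_error(*demo, r=1.0, gamma=gamma)
            + lam_samp * sq_soft_bellman_error(*samp, r=0.0, gamma=gamma))

def sample_action(q_s, rng=np.random):
    """Line 5: draw a ~ pi(a|s) proportional to exp(Q_theta(s, a))."""
    p = np.exp(q_s - q_s.max())
    return rng.choice(len(q_s), p=p / p.sum())
```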
Note that prior methods also require similar techniques: both GAIL and adversarial IRL (AIRL) (Fu et al., 2017) balance the number of positive and negative examples in the training set of the discriminator, and AIRL tends to require early stopping to avoid overfitting. + +# 3 INTERPRETING SQIL AS REGULARIZED BEHAVIORAL CLONING + +To understand why SQIL works, we sketch a surprising theoretical result: SQIL is equivalent to a variant of behavioral cloning (BC) that uses regularization to overcome state distribution shift. + +BC is a simple approach that seeks to imitate the expert's actions using supervised learning – in particular, greedily maximizing the conditional likelihood of the demonstrated actions given the demonstrated states, without reasoning about the consequences of actions. Thus, when the agent makes small mistakes and enters states that are slightly different from those in the demonstrations, the distribution mismatch between the states in the demonstrations and those actually encountered by the agent leads to compounding errors (Ross et al., 2011). We show that SQIL is equivalent to augmenting BC with a regularization term that incorporates information about the state transition dynamics into the imitation policy, and thus enables long-horizon imitation. + +# 3.1 PRELIMINARIES + +Maximum entropy model of expert behavior. SQIL is built on soft Q-learning, which assumes that expert behavior follows the maximum entropy model (Ziebart et al., 2010; Levine, 2018). In an infinite-horizon Markov Decision Process (MDP) with a continuous state space $\mathcal{S}$ and discrete action space $\mathcal{A}$ , the expert is assumed to follow a policy $\pi$ that maximizes reward $R(s, a)$ . The policy $\pi$ forms a Boltzmann distribution over actions, + +$$ +\pi (a | s) \triangleq \frac {\exp (Q (s , a))}{\sum_ {a ^ {\prime} \in \mathcal {A}} \exp (Q (s , a ^ {\prime}))}, \tag {2} +$$ + +where $Q$ is the soft Q function.
The soft Q values are a function of the rewards and dynamics, given by the soft Bellman equation, + +$$ +Q (s, a) \triangleq R (s, a) + \gamma \mathbb {E} _ {s ^ {\prime}} \left[ \log \left(\sum_ {a ^ {\prime} \in \mathcal {A}} \exp \left(Q \left(s ^ {\prime}, a ^ {\prime}\right)\right)\right) \right]. \tag {3} +$$ + +In our imitation setting, the rewards and dynamics are unknown. The expert generates a fixed set of demonstrations $\mathcal{D}_{\mathrm{demo}}$ , by rolling out their policy $\pi$ in the environment and generating state transitions $(s,a,s^{\prime})\in \mathcal{D}_{\mathrm{demo}}$ . + +Behavioral cloning (BC). Training an imitation policy with standard BC corresponds to fitting a parametric model $\pi_{\theta}$ that minimizes the negative log-likelihood loss, + +$$ +\ell_ {\mathrm {B C}} (\boldsymbol {\theta}) \triangleq \sum_ {(s, a) \in \mathcal {D} _ {\mathrm {d e m o}}} - \log \pi_ {\boldsymbol {\theta}} (a | s). \tag {4} +$$ + +In our setting, instead of explicitly modeling the policy $\pi_{\theta}$ , we can represent the policy $\pi$ in terms of a soft Q function $Q_{\theta}$ via Equation 2: + +$$ +\pi (a | s) \triangleq \frac {\exp \left(Q _ {\boldsymbol {\theta}} (s , a)\right)}{\sum_ {a ^ {\prime} \in \mathcal {A}} \exp \left(Q _ {\boldsymbol {\theta}} (s , a ^ {\prime})\right)}. \tag {5} +$$ + +Using this representation of the policy, we can train $Q_{\theta}$ via the maximum-likelihood objective in Equation 4: + +$$ +\ell_ {\mathrm {B C}} (\boldsymbol {\theta}) \triangleq \sum_ {(s, a) \in \mathcal {D} _ {\mathrm {d e m o}}} - \left(Q _ {\boldsymbol {\theta}} (s, a) - \log \left(\sum_ {a ^ {\prime} \in \mathcal {A}} \exp \left(Q _ {\boldsymbol {\theta}} (s, a ^ {\prime})\right)\right)\right). 
\tag {6} +$$ + +However, optimizing the BC loss in Equation 6 does not in general yield a valid soft Q function $Q_{\theta}$ – i.e., a soft Q function that satisfies the soft Bellman equation (Equation 3) with respect to the dynamics and some reward function. The problem is that the BC loss does not incorporate any information about the dynamics into the learning objective, so $Q_{\theta}$ learns to greedily assign high values to demonstrated actions, without considering the state transitions that occur as a consequence of actions. As a result, $Q_{\theta}$ may output arbitrary values in states that are off-distribution from the demonstrations $\mathcal{D}_{\mathrm{demo}}$ . + +In Section 3.2, we describe a regularized BC algorithm that adds constraints to ensure that $Q_{\theta}$ is a valid soft Q function with respect to some implicitly-represented reward function, and further regularizes the implicit rewards with a sparsity prior. In Section 3.3, we show that this approach recovers an algorithm similar to SQIL. + +# 3.2 REGULARIZED BEHAVIORAL CLONING + +Under the maximum entropy model described in Section 3.1, expert behavior is driven by a reward function, a soft Q function that computes expected future returns, and a policy that takes actions with high soft Q values. In the previous section, we used these assumptions to represent the imitation policy in terms of a model of the soft Q function $Q_{\theta}$ (Equation 5). In this section, we represent the reward function implicitly in terms of $Q_{\theta}$ , as shown in Equation 7. This allows us to derive SQIL as a variant of BC that imposes a sparsity prior on the implicitly-represented rewards. + +Sparsity regularization. The issue with BC is that, when the agent encounters states that are out-of-distribution with respect to $\mathcal{D}_{\mathrm{demo}}$ , $Q_{\theta}$ may output arbitrary values.
One solution from prior work (Piot et al., 2014) is to regularize $Q_{\theta}$ with a sparsity prior on the implied rewards – in particular, a penalty on the magnitude of the rewards $\sum_{s \in \mathcal{S}, a \in \mathcal{A}} |R_q(s, a)|$ implied by $Q_{\theta}$ via the soft Bellman equation (Equation 3), where + +$$ +R _ {q} (s, a) \triangleq Q _ {\boldsymbol {\theta}} (s, a) - \gamma \mathbb {E} _ {s ^ {\prime}} \left[ \log \left(\sum_ {a ^ {\prime} \in \mathcal {A}} \exp \left(Q _ {\boldsymbol {\theta}} \left(s ^ {\prime}, a ^ {\prime}\right)\right)\right) \right]. \tag {7} +$$ + +Note that the reward function $R_{q}$ is not explicitly modeled in this method. Instead, we directly minimize the magnitude of the right-hand side of Equation 7, which is equivalent to minimizing $|R_{q}(s,a)|$ . + +The purpose of the penalty on $|R_q(s, a)|$ is two-fold: (1) it imposes a sparsity prior motivated by prior work (Piot et al., 2013), and (2) it incorporates information about the state transition dynamics into the imitation learning objective, since $R_q(s, a)$ is a function of an expectation over next state $s'$ . (2) is critical for learning long-horizon behavior that imitates the demonstrations, instead of greedy maximization of the action likelihoods in standard BC. For details, see Piot et al. (2014). + +Approximations for continuous states. Unlike the discrete environments tested in Piot et al. (2014), we assume the continuous state space $\mathcal{S}$ cannot be enumerated. Hence, we approximate the penalty $\sum_{s \in \mathcal{S}, a \in \mathcal{A}} |R_q(s, a)|$ by estimating it from samples: transitions $(s, a, s')$ observed in the demonstrations $\mathcal{D}_{\mathrm{demo}}$ , as well as additional rollouts $\mathcal{D}_{\mathrm{samp}}$ periodically sampled during training using the latest imitation policy.
This approximation, which follows the standard approach to constraint sampling (Calafiore & Dabbene, 2006), ensures that the penalty covers the state distribution actually encountered by the agent, instead of only the demonstrations. + +To make the penalty continuously differentiable, we introduce an additional approximation: instead of penalizing the absolute value $|R_q(s,a)|$ , we penalize the squared value $(R_q(s,a))^2$ . Note that since the reward function $R_{q}$ is not explicitly modeled, but instead defined via $Q_{\theta}$ in Equation 7, the squared penalty $(R_q(s,a))^2$ is equivalent to the squared soft Bellman error $\delta^2 (\mathcal{D}_{\mathrm{demo}}\cup \mathcal{D}_{\mathrm{samp}},0)$ from Equation 1. + +Regularized BC algorithm. Formally, we define the regularized BC loss function adapted from Piot et al. (2014) as + +$$ +\ell_ {\mathrm {R B C}} (\boldsymbol {\theta}) \triangleq \ell_ {\mathrm {B C}} (\boldsymbol {\theta}) + \lambda \delta^ {2} \left(\mathcal {D} _ {\mathrm {d e m o}} \cup \mathcal {D} _ {\mathrm {s a m p}}, 0\right), \tag {8} +$$ + +where $\lambda \in \mathbb{R}_{\geq 0}$ is a constant hyperparameter, and $\delta^2$ denotes the squared soft Bellman error defined in Equation 1. The BC loss encourages $Q_{\theta}$ to output high values for demonstrated actions at demonstrated states, and the penalty term propagates those high values to nearby states. In other words, $Q_{\theta}$ outputs high values for actions that lead to states from which the demonstrated states are reachable. Hence, when the agent finds itself far from the demonstrated states, it takes actions that lead it back to the demonstrated states. + +The RBC algorithm follows the same procedure as Algorithm 1, except that in line 4, RBC takes a gradient step on the RBC loss from Equation 8 instead of the SQIL loss. 
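For concreteness, the BC loss of Equation 6 and the RBC loss of Equation 8 can be sketched as follows. This is an illustrative NumPy sketch, assuming a discrete action space and precomputed Q-values; the Bellman penalty term is taken as precomputed (e.g., via Equation 1 with $r = 0$), and the function names are ours.

```python
import numpy as np

def log_policy(q):
    """Row-wise log pi(a|s) from Equation 5: a log-softmax over Q-values."""
    m = q.max(axis=1, keepdims=True)
    return q - m - np.log(np.exp(q - m).sum(axis=1, keepdims=True))

def bc_loss(q_demo, a_demo):
    """Equation 6: negative log-likelihood of the demonstrated actions
    under the Boltzmann policy induced by Q_theta."""
    logp = log_policy(q_demo)
    return -logp[np.arange(len(a_demo)), a_demo].sum()

def rbc_loss(q_demo, a_demo, bellman_penalty, lam=1.0):
    """Equation 8: BC loss plus lambda times the squared soft Bellman
    error penalty delta^2(D_demo ∪ D_samp, 0)."""
    return bc_loss(q_demo, a_demo) + lam * bellman_penalty
```

The BC term pushes up the Q-values of demonstrated actions, while the penalty term ties each Q-value to the soft value of the next state, which is how the regularizer propagates high values to states from which the demonstrations are reachable.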
+ +# 3.3 CONNECTION BETWEEN SQIL AND REGULARIZED BEHAVIORAL CLONING + +The gradient of the RBC loss in Equation 8 is proportional to the gradient of the SQIL loss in line 4 of Algorithm 1, plus an additional term that penalizes the soft value of the initial state $s_0$ (full derivation in Section A.1 of the appendix): + +$$ +\nabla_ {\boldsymbol {\theta}} \ell_ {\mathrm {R B C}} (\boldsymbol {\theta}) \propto \nabla_ {\boldsymbol {\theta}} \left(\delta^ {2} \left(\mathcal {D} _ {\mathrm {d e m o}}, 1\right) + \lambda_ {\mathrm {s a m p}} \delta^ {2} \left(\mathcal {D} _ {\mathrm {s a m p}}, 0\right) + V (s _ {0})\right). \tag {9} +$$ + +In other words, SQIL solves a similar optimization problem to RBC. The reward function in SQIL also has a clear connection to the sparsity prior in RBC: SQIL imposes the sparsity prior from RBC, by training the agent with an extremely sparse reward function – $r = +1$ at the demonstrations, and $r = 0$ everywhere else. Thus, SQIL can be motivated as a practical way to implement the ideas for regularizing BC proposed in Piot et al. (2014). + +The main benefit of using SQIL instead of RBC is that SQIL is trivial to implement, since it only requires a few small changes to any standard Q-learning implementation (see Section 2). Extending SQIL to MDPs with a continuous action space is also easy, since we can simply replace Q-learning with an off-policy actor-critic method (Haarnoja et al., 2018) (see Section 4.3). Given the difficulty of implementing deep RL algorithms correctly (Henderson et al., 2018), this flexibility makes SQIL more practical to use, since it can be built on top of existing implementations of deep RL algorithms. Furthermore, the ablation study in Section 4.4 suggests that SQIL actually performs better than RBC.
+ +# 4 EXPERIMENTAL EVALUATION + +Our experiments aim to compare SQIL to existing imitation learning methods on a variety of tasks with high-dimensional, continuous observations, such as images, and unknown dynamics. We benchmark SQIL against BC and GAIL on four image-based games – Car Racing, Pong, Breakout, and Space Invaders – and three state-based tasks – Humanoid, HalfCheetah, and Lunar Lander (Brockman et al., 2016; Bellemare et al., 2013; Todorov et al., 2012). We also investigate which components of SQIL contribute most to its performance via an ablation study on the Lunar Lander game. Section A.3 in the appendix contains additional experimental details. + +# 4.1 TESTING GENERALIZATION IN IMAGE-BASED CAR RACING + +The goal of this experiment is to study not only how well each method can mimic the expert demonstrations, but also how well they can acquire policies that generalize to new states that are not seen in the demonstrations. To do so, we train the imitation agents in an environment with a different initial state distribution $S_0^{\mathrm{train}}$ than that of the expert demonstrations $S_0^{\mathrm{demo}}$ , allowing us to systematically control the mismatch between the distribution of states in the demonstrations and the states actually encountered by the agent. We run experiments on the Car Racing game from OpenAI Gym. To create $S_0^{\mathrm{train}}$ , the car is rotated 90 degrees so that it begins perpendicular to the track, instead of parallel to the track as in $S_0^{\mathrm{demo}}$ . This intervention presents a significant generalization challenge to the imitation learner, since the expert demonstrations do not contain any examples of states where the car is perpendicular to the road, or even significantly off the road axis. The agent must learn to make a tight turn to get back on the road, then stabilize its orientation so that it is parallel to the road, and only then proceed forward to mimic the expert demonstrations.
+ +The results in Figure 1 show that SQIL and BC perform equally well when there is no variation in the initial state. The task is easy enough that even BC achieves a high reward. Note that, in the unperturbed condition (right column), BC substantially outperforms GAIL, despite the well-known shortcomings of BC. This indicates that the adversarial optimization in GAIL can substantially hinder + +
| | Domain Shift ($S_0^{\mathrm{train}}$) | No Shift ($S_0^{\mathrm{demo}}$) |
| --- | --- | --- |
| Random | -21 ± 56 | -68 ± 4 |
| BC | -45 ± 18 | 698 ± 10 |
| GAIL-DQL | -97 ± 3 | -66 ± 8 |
| SQIL (Ours) | 375 ± 19 | 704 ± 6 |
| Expert | 480 ± 11 | 704 ± 79 |
+ +Figure 1: Average reward on 100 episodes after training. Standard error on three random seeds. + +learning, even in settings where standard BC is sufficient. SQIL performs much better than BC when starting from $S_0^{\mathrm{train}}$ , showing that SQIL is capable of generalizing to a new initial state distribution, while BC is not. SQIL learns to make a tight turn that takes the car through the grass and back onto the road, then stabilizes the car's orientation so that it is parallel to the track, and then proceeds forward like the expert does in the demonstrations. BC tends to drive straight ahead into the grass instead of turning back onto the road. + +![](images/3001f31e2170451859746facfcb7b85e70f99b2fbe9ba760f6573121775b48b6.jpg) +Figure 2: Image-based Atari. Smoothed with a rolling window of 100 episodes. Standard error on three random seeds. X-axis represents amount of interaction with the environment (not expert demonstrations). + +![](images/92da9c3ebd487193a586c353814dfb1039a960867159b01d781d8f2e5d116d3f.jpg) + +![](images/74961e3752126b985b4b5aeedff0efaeeb95b2e93956030dcc132fc1a0f5aab6.jpg) + +SQIL outperforms GAIL in both conditions. Since SQIL and GAIL both use deep Q-learning for RL in this experiment, the gap between them may be attributed to the difference in the reward functions they use to train the agent. SQIL benefits from providing a constant reward that does not require fitting a discriminator, while GAIL struggles to train a discriminator to provide learned rewards directly from images. + +# 4.2 IMAGE-BASED EXPERIMENTS ON ATARI + +The results in Figure 2 show that SQIL outperforms BC on Pong, Breakout, and Space Invaders - additional evidence that BC suffers from compounding errors, while SQIL does not. SQIL also outperforms GAIL on all three games, illustrating the difficulty of using GAIL to train an image-based discriminator, as in Section 4.1. 
+ +# 4.3 INSTANTIATING SQIL FOR CONTINUOUS CONTROL IN LOW-DIMENSIONAL MUJOCO + +The experiments in the previous sections evaluate SQIL on MDPs with a discrete action space. This section illustrates how SQIL can be adapted to continuous actions. We instantiate SQIL using soft actor-critic (SAC) – an off-policy RL algorithm that can solve continuous control tasks (Haarnoja et al., 2018). In particular, SAC is modified in the following ways: (1) the agent's experience replay buffer is initially filled with expert demonstrations, where rewards are set to $r = +1$ , (2) when taking gradient steps to fit the agent's soft Q function, a balanced number of demonstration experiences and new experiences (50% each) are sampled from the replay buffer, and (3) the agent observes rewards of $r = 0$ during its interactions with the environment, instead of an extrinsic reward signal that specifies the desired task. This instantiation of SQIL is compared to GAIL on the Humanoid (17 DoF) and HalfCheetah (6 DoF) tasks from MuJoCo. + +The results show that SQIL outperforms BC and performs comparably to GAIL on both tasks, demonstrating that SQIL can be successfully deployed on problems with continuous actions, and that SQIL can perform well even with a small number of demonstrations. This experiment also illustrates how SQIL can be run on top of SAC or any other off-policy value-based RL algorithm. + +![](images/92eb7f24192f823ca937f42c4173edb66074375d07f42ce4e6e3ea0054e788b8.jpg) + +![](images/5bd70d806c3dd8d8494824ad4a0409737294cd82d3a1bb077529943a54bd78ef.jpg) +Figure 3: SQIL: best performance on 10 consecutive training episodes. BC, GAIL: results from Dhariwal et al. (2017). + +# 4.4 ABLATION STUDY ON LOW-DIMENSIONAL LUNAR LANDER + +We hypothesize that SQIL works well because it combines information about the expert's policy from demonstrations with information about the environment dynamics from rollouts of the imitation policy periodically sampled during training.
We also expect RBC to perform comparably to SQIL, since their objectives are similar. To test these hypotheses, we conduct an ablation study using the Lunar Lander game from OpenAI Gym. As in Section 4.1, we control the mismatch between the distribution of states in the demonstrations and the states encountered by the agent by manipulating the initial state distribution. To create $S_0^{\mathrm{train}}$ , the agent is placed in a starting position never visited in the demonstrations. + +In the first variant of SQIL, $\lambda_{\mathrm{samp}}$ is set to zero, to prevent SQIL from using additional samples drawn from the environment (see line 4 of Algorithm 1). This comparison tests if SQIL really needs to interact with the environment, or if it can rely solely on the demonstrations. In the second condition, $\gamma$ is set to zero to prevent SQIL from accessing information about state transitions (see Equation 1 and line 4 of Algorithm 1). This comparison tests if SQIL is actually extracting information about the dynamics from the samples, or if it can perform just as well with a naive regularizer (setting $\gamma$ to zero effectively imposes a penalty on the L2-norm of the soft Q values instead of the squared soft Bellman error). In the third condition, a uniform random policy is used to sample additional rollouts, instead of the imitation policy $\pi_{\theta}$ (see line 6 of Algorithm 1). This comparison tests how important it is that the samples cover the states encountered by the agent during training. In the fourth condition, we use RBC to optimize the loss in Equation 8, instead of using SQIL to optimize the loss in line 4 of Algorithm 1. This comparison tests the effect of the additional $V(s_0)$ term in RBC vs. SQIL (see Equation 9). + +The results in Figure 4 show that all methods perform well when there is no variation in the initial state.
When the initial state is varied, SQIL performs significantly better than BC, GAIL, and the ablated variants of SQIL. This confirms our hypothesis that SQIL needs to sample from the environment using the imitation policy, and relies on information about the dynamics encoded in the samples. Surprisingly, SQIL outperforms RBC by a large margin, suggesting that the penalty on the soft value of the initial + +
| | Domain Shift ($S_0^{\mathrm{train}}$) | No Shift ($S_0^{\mathrm{demo}}$) |
| --- | --- | --- |
| Random | 0.10 ± 0.30 | 0.04 ± 0.02 |
| BC | 0.07 ± 0.03 | 0.93 ± 0.03 |
| GAIL-TRPO | 0.67 ± 0.04 | 0.93 ± 0.03 |
| SQIL (Ours) | 0.89 ± 0.02 | 0.88 ± 0.03 |
| Ablation: $\lambda_{\mathrm{samp}} = 0$ | 0.12 ± 0.02 | 0.87 ± 0.02 |
| Ablation: $\gamma = 0$ | 0.41 ± 0.02 | 0.84 ± 0.02 |
| Ablation: $\pi = \mathrm{Unif}$ | 0.47 ± 0.02 | 0.82 ± 0.02 |
| Ablation: RBC | 0.66 ± 0.02 | 0.89 ± 0.01 |
| Expert | 0.93 ± 0.03 | 0.89 ± 0.31 |
+ +Figure 4: Best success rate on 100 consecutive episodes during training. Standard error on five random seeds. Performance bolded if at least within one standard error of expert. + +state $V(s_0)$ , which is present in RBC but not in SQIL (see Equation 9), degrades performance. + +# 5 DISCUSSION AND RELATED WORK + +Related work. Concurrently with SQIL, two other imitation learning algorithms that use constant rewards instead of a learned reward function were developed (Sasaki et al., 2019; Wang et al., 2019). We see our paper as contributing additional evidence to support this core idea, rather than proposing a competing method. First, SQIL is derived from sparsity-regularized BC, while the prior methods are derived from an alternative formulation of the IRL objective (Sasaki et al., 2019) and from support estimation methods (Wang et al., 2019), showing that different theoretical approaches independently lead to using RL with constant rewards as an alternative to adversarial training – a sign that this idea may be a promising direction for future work. Second, SQIL is shown to outperform BC and GAIL in domains that were not evaluated in Sasaki et al. (2019) or Wang et al. (2019) – in particular, tasks with image observations and significant shift in the state distribution between the demonstrations and the training environment. + +Summary. We contribute the SQIL algorithm: a general method for learning to imitate an expert given action demonstrations and access to the environment. Simulation experiments on tasks with high-dimensional, continuous observations and unknown dynamics show that our method outperforms BC and achieves competitive results compared to GAIL, while being simple to implement on top of existing off-policy RL algorithms. + +Limitations and future work. We have not yet proven that SQIL matches the expert's state occupancy measure in the limit of infinite demonstrations.
One direction for future work would be to rigorously show whether or not SQIL has this property. Another direction would be to extend SQIL to recover not just the expert's policy, but also their reward function; e.g., by using a parameterized reward function to model rewards in the soft Bellman error terms, instead of using constant rewards. This could provide a simpler alternative to existing adversarial IRL algorithms.

# REFERENCES

Brenna D Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. A survey of robot learning from demonstration. Robotics and autonomous systems, 57(5):469-483, 2009.
Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253-279, 2013.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym, 2016.
Giuseppe Calafiore and Fabrizio Dabbene. Probabilistic and randomized methods for design under uncertainty. Springer, 2006.
Prafulla Dhariwal, Christopher Hesse, Oleg Klimov, Alex Nichol, Matthias Plappert, Alec Radford, John Schulman, Szymon Sidor, Yuhuai Wu, and Peter Zhokhov. Openai baselines. https://github.com/openai/baselines, 2017.
Chelsea Finn, Paul Christiano, Pieter Abbeel, and Sergey Levine. A connection between generative adversarial networks, inverse reinforcement learning, and energy-based models. arXiv preprint arXiv:1611.03852, 2016a.
Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In International Conference on Machine Learning, pp. 49-58, 2016b.
Justin Fu, Katie Luo, and Sergey Levine. Learning robust rewards with adversarial inverse reinforcement learning. arXiv preprint arXiv:1710.11248, 2017.
Yang Gao, Ji Lin, Fisher Yu, Sergey Levine, Trevor Darrell, et al. Reinforcement learning from imperfect demonstrations.
arXiv preprint arXiv:1802.05313, 2018.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672-2680, 2014.
David Ha and Jürgen Schmidhuber. Recurrent world models facilitate policy evolution. arXiv preprint arXiv:1809.01999, 2018.
Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. arXiv preprint arXiv:1702.08165, 2017.
Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290, 2018.
Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel Dulac-Arnold, et al. Deep q-learning from demonstrations. arXiv preprint arXiv:1704.03732, 2017.
Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems, pp. 4565-4573, 2016.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Ilya Kostrikov, Kumar Krishna Agrawal, Debidatta Dwibedi, Sergey Levine, and Jonathan Tompson. Discriminator-actor-critic: Addressing sample inefficiency and reward bias in adversarial imitation learning. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Hk4fpoA5Km.
Karol Kurach, Mario Lucic, Xiaohua Zhai, Marcin Michalski, and Sylvain Gelly. The gan landscape: Losses, architectures, regularization, and normalization. arXiv preprint arXiv:1807.04720, 2018.
Sergey Levine.
Reinforcement learning and control as probabilistic inference: Tutorial and review. arXiv preprint arXiv:1805.00909, 2018.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015.
Andrew Y Ng, Stuart J Russell, et al. Algorithms for inverse reinforcement learning. In ICML, pp. 663-670, 2000.
Bilal Piot, Matthieu Geist, and Olivier Pietquin. Learning from demonstrations: Is it worth estimating a reward function? In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 17-32. Springer, 2013.
Bilal Piot, Matthieu Geist, and Olivier Pietquin. Boosted and reward-regularized classification for apprenticeship learning. In Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems, pp. 1249–1256. International Foundation for Autonomous Agents and Multiagent Systems, 2014.
Dean A Pomerleau. Efficient training of artificial neural networks for autonomous navigation. Neural Computation, 3(1):88-97, 1991.
Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pp. 627-635, 2011.
Fumihiro Sasaki, Tetsuya Yohira, and Atsuo Kawaguchi. Sample efficient imitation learning for continuous control. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=BkN5UoAqF7.
Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pp. 5026-5033. IEEE, 2012.
Ruohan Wang, Carlo Ciliberto, Pierluigi Amadori, and Yiannis Demiris.
Random expert distillation: Imitation learning via expert policy support estimation. arXiv preprint arXiv:1905.06750, 2019.
Markus Wulfmeier, Peter Ondruska, and Ingmar Posner. Maximum entropy deep inverse reinforcement learning. arXiv preprint arXiv:1507.04888, 2015.
Brian D Ziebart, J Andrew Bagnell, and Anind K Dey. Modeling interaction via the principle of maximum causal entropy. In Proceedings of the 27th International Conference on International Conference on Machine Learning, pp. 1255-1262. Omnipress, 2010.

# A APPENDIX

# A.1 DERIVATION OF RBC GRADIENT

Let $\tau = (s_0, a_0, s_1, \dots, s_T)$ denote a rollout, where $s_T$ is an absorbing state. Let $V$ denote the soft value function.

$$
V (s) \triangleq \log \left(\sum_ {a \in \mathcal {A}} \exp \left(Q _ {\boldsymbol {\theta}} (s, a)\right)\right). \tag {10}
$$

Splitting up the squared soft Bellman error terms for $\mathcal{D}_{\mathrm{demo}}$ and $\mathcal{D}_{\mathrm{samp}}$ in Equation 8,

$$
\begin{array}{l} \nabla \ell_ {\mathrm {R B C}} (\boldsymbol {\theta}) = \sum_ {\tau \in \mathcal {D} _ {\mathrm {d e m o}}} \sum_ {t = 0} ^ {T - 1} - \left(\nabla Q _ {\boldsymbol {\theta}} \left(s _ {t}, a _ {t}\right) - \nabla V \left(s _ {t}\right)\right) \\ + \lambda_ {\mathrm {d e m o}} \sum_ {\tau \in \mathcal {D} _ {\mathrm {d e m o}}} \sum_ {t = 0} ^ {T - 1} \nabla \left(Q _ {\boldsymbol {\theta}} \left(s _ {t}, a _ {t}\right) - \gamma V \left(s _ {t + 1}\right)\right) ^ {2} + \lambda_ {\mathrm {s a m p}} \nabla \delta^ {2} \left(\mathcal {D} _ {\mathrm {s a m p}}, 0\right) \\ = \sum_ {\tau \in \mathcal {D} _ {\mathrm {d e m o}}} \sum_ {t = 0} ^ {T - 1} \nabla V (s _ {t}) - \gamma \nabla V (s _ {t + 1}) \\ + \lambda_ {\mathrm {d e m o}} \nabla \delta^ {2} \left(\mathcal {D} _ {\mathrm {d e m o}}, \frac {1}{2 \lambda_ {\mathrm {d e m o}}}\right) + \lambda_ {\mathrm {s a m p}} \nabla \delta^ {2} \left(\mathcal {D} _ {\mathrm {s a m p}}, 0\right).
\tag {11} \\ \end{array}
$$

Setting $\gamma \triangleq 1$ turns the inner sum in the first term into a telescoping sum:

$$
(11) = \sum_ {\tau \in \mathcal {D} _ {\mathrm {d e m o}}} (\nabla V (s _ {0}) - \nabla V (s _ {T})) + \lambda_ {\mathrm {d e m o}} \nabla \delta^ {2} \left(\mathcal {D} _ {\mathrm {d e m o}}, \frac {1}{2 \lambda_ {\mathrm {d e m o}}}\right) + \lambda_ {\mathrm {s a m p}} \nabla \delta^ {2} \left(\mathcal {D} _ {\mathrm {s a m p}}, 0\right). \tag {12}
$$

Since $s_T$ is assumed to be absorbing, $V(s_{T})$ is zero. Thus,

$$
(12) = \sum_ {s _ {0} \in \mathcal {D} _ {\mathrm {d e m o}}} \nabla V (s _ {0}) + \lambda_ {\mathrm {d e m o}} \nabla \delta^ {2} \left(\mathcal {D} _ {\mathrm {d e m o}}, \frac {1}{2 \lambda_ {\mathrm {d e m o}}}\right) + \lambda_ {\mathrm {s a m p}} \nabla \delta^ {2} \left(\mathcal {D} _ {\mathrm {s a m p}}, 0\right), \tag {13}
$$

In our experiments, we have that all the demonstration rollouts start at the same initial state $s_0$ .5 Thus,

$$
(13) \propto \nabla \left(\delta^ {2} \left(\mathcal {D} _ {\text {d e m o}}, 1\right) + \lambda_ {\text {s a m p}} \delta^ {2} \left(\mathcal {D} _ {\text {s a m p}}, 0\right) + V \left(s _ {0}\right)\right), \tag {14}
$$

where $\lambda_{\mathrm{samp}} \in \mathbb{R}_{\geq 0}$ is a constant hyperparameter.

# A.2 COMPARING THE BIASED AND UNBIASED VARIANTS OF GAIL

As discussed in Section 4, to correct the original GAIL method's biased handling of rewards at absorbing states, we implement the suggested changes to GAIL in Section 4.2 of Kostrikov et al. (2019): adding a transition to an absorbing state and a self-loop at the absorbing state to the end of each rollout sampled from the environment, and adding a binary feature to the observations indicating whether or not a state is absorbing. This enables GAIL to learn a non-zero reward for absorbing states.
We refer to the original, biased GAIL method as GAIL-DQL-B and GAIL-TRPO-B, and the unbiased version as GAIL-DQL-U and GAIL-TRPO-U.

The mechanism for learning terminal rewards proposed in Kostrikov et al. (2019) does not apply to SQIL, since SQIL does not learn a reward function. SQIL implicitly assumes a reward of zero at absorbing states in demonstrations. This is the case in all our experiments, which include some environments where terminating the episode is always undesirable (e.g., walking without falling down) and other environments where success requires terminating the episode (e.g., landing at a target), suggesting that SQIL is not sensitive to the choice of termination reward, and neither significantly benefits nor is significantly harmed by setting the termination reward to zero.
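As a concrete illustration, the absorbing-state construction described above can be implemented as a rollout post-processing step. The sketch below is our own, not code from Kostrikov et al. (2019); the function name, the zero-vector absorbing state, and the transition layout are assumptions:

```python
import numpy as np

def wrap_rollout(obs, acts, null_act):
    """Append a binary 'is absorbing' feature to each observation, then add
    a transition into a canonical absorbing state plus one self-loop there."""
    pad = lambda o, flag: np.append(o, flag)       # extra indicator feature
    absorbing = pad(np.zeros_like(obs[0]), 1.0)    # canonical absorbing state
    wrapped = [pad(o, 0.0) for o in obs]           # ordinary states: flag = 0
    transitions = list(zip(wrapped[:-1], acts, wrapped[1:]))
    transitions.append((wrapped[-1], null_act, absorbing))  # enter absorbing
    transitions.append((absorbing, null_act, absorbing))    # self-loop
    return transitions
```

Because the indicator feature is part of the discriminator's input, the discriminator can now assign a learned, non-zero reward to absorbing states instead of an implicit zero.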
| | Domain Shift ($S_0^{\text{train}}$) | No Shift ($S_0^{\text{demo}}$) |
| --- | --- | --- |
| Random | -21 ± 56 | -68 ± 4 |
| BC | -45 ± 18 | 698 ± 10 |
| GAIL-DQL-B | -91 ± 4 | -34 ± 21 |
| GAIL-DQL-U | -97 ± 3 | -66 ± 8 |
| SQIL (Ours) | 375 ± 19 | 704 ± 6 |
| Expert | 480 ± 11 | 704 ± 79 |
![](images/0147534f110f9ca825ecaacd61a491648575705dce8bd4d079b503e7c5ea2686.jpg)
Figure 5: Image-based Car Racing. Average reward on 100 episodes after training. Standard error on three random seeds.

![](images/3e8069ba5e812385c96e29e3cbd37337a6617da024ccd167b174b46f83a91b9e.jpg)
Figure 6: Image-based Atari. Smoothed with a rolling window of 100 episodes. Standard error on three random seeds. X-axis represents amount of interaction with the environment (not expert demonstrations).

![](images/b33b67338a69852f3b3af8398d8a2b830a9119bc3edff4d83c2080a67b077189.jpg)

Car Racing. The results in Figure 5 show that both the biased (GAIL-DQL-B) and unbiased (GAIL-DQL-U) versions of GAIL perform equally poorly. The problem of training an image-based discriminator for this task may be difficult enough that even with an unfair bias toward avoiding crashes that terminate the episode, GAIL-DQL-B does not perform better than GAIL-DQL-U.

Atari. The results in Figure 6 show that SQIL outperforms both variants of GAIL on Pong and the unbiased version of GAIL (GAIL-DQL-U) on Breakout and Space Invaders, but performs comparably to the biased version of GAIL (GAIL-DQL-B) on Space Invaders and worse than it on Breakout. This may be due to the fact that in Breakout and Space Invaders, the agent has multiple lives – five in Breakout, and three in Space Invaders – and receives a termination signal that the episode has ended after losing each life. Thus, the agent experiences many more episode terminations than in Pong, exacerbating the bias in the way the original GAIL method handles rewards at absorbing states. Our implementation of GAIL-DQL-B in this experiment provides a learned reward of $r(s,a) = -\log(1 - D(s,a))$ , where $D$ is the discriminator (see Section A.3 in the appendix for details). The learned reward is always positive, while the implicit reward at an absorbing state is zero. Thus, the agent is inadvertently encouraged to avoid terminating the episode.
For Breakout and Space Invaders, this just happens to be the right incentive, since the objective is to stay alive as long as possible. GAIL-DQL-B outperforms SQIL in Breakout and performs comparably to SQIL in Space Invaders because GAIL-DQL-B is accidentally biased in the right way.

Lunar Lander. The results in Figure 7 show that when the initial state is varied, SQIL outperforms the unbiased variant of GAIL (GAIL-TRPO-U), but underperforms the biased version of GAIL (GAIL-TRPO-B). The latter result is likely due to the fact that the implementation of GAIL-TRPO-B we used in this experiment provides a learned reward of $r(s,a) = \log (D(s,a))$ , where $D$ is the discriminator (see Section A.3 in the appendix for details). The learned reward is always negative, while the implicit reward at an absorbing state is zero. Thus, the agent is inadvertently encouraged to terminate the episode quickly. For the Lunar Lander game, this just happens to be the right incentive, since the objective is to land on the ground and thereby terminate the episode. As in the Atari experiments, GAIL-TRPO-B performs better than SQIL in this experiment because GAIL-TRPO-B is accidentally biased in the right way.
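The direction of each bias can be read off directly from the sign of the corresponding learned-reward form; a quick numerical check (the discriminator outputs here are arbitrary values in $(0, 1)$, and the function names are ours):

```python
import math

def reward_survival(d):
    # r(s, a) = -log(1 - D(s, a)): strictly positive for D in (0, 1), so
    # every non-terminal step beats the implicit zero reward at absorbing
    # states, encouraging the agent to keep the episode going.
    return -math.log(1.0 - d)

def reward_termination(d):
    # r(s, a) = log(D(s, a)): strictly negative for D in (0, 1), so ending
    # the episode (and collecting the implicit zero) is always preferable.
    return math.log(d)

for d in (0.1, 0.5, 0.9):
    assert reward_survival(d) > 0.0
    assert reward_termination(d) < 0.0
```

Whether either sign is "right" depends on the task: survival rewards suit Breakout/Space Invaders, termination rewards suit Lunar Lander.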
| | Domain Shift ($S_0^{\text{train}}$) | No Shift ($S_0^{\text{demo}}$) |
| --- | --- | --- |
| Random | 0.10 ± 0.30 | 0.04 ± 0.02 |
| BC | 0.07 ± 0.03 | 0.93 ± 0.03 |
| GAIL-TRPO-B (HE'16) | 0.98 ± 0.01 | 0.95 ± 0.02 |
| GAIL-TRPO-U | 0.67 ± 0.04 | 0.93 ± 0.03 |
| SQIL (Ours) | 0.89 ± 0.02 | 0.88 ± 0.03 |
| Ablation: λsamp = 0 | 0.12 ± 0.02 | 0.87 ± 0.02 |
| Ablation: γ = 0 | 0.41 ± 0.02 | 0.84 ± 0.02 |
| Ablation: π = Unif | 0.47 ± 0.02 | 0.82 ± 0.02 |
| Ablation: RBC | 0.66 ± 0.02 | 0.89 ± 0.01 |
| Expert | 0.93 ± 0.03 | 0.89 ± 0.31 |
![](images/723ba8a05aaad803303884fd26d68112cae33be8e8db0233cb058bc11ebcbce7.jpg)
Figure 7: Low-dimensional Lunar Lander. Best success rate on 100 consecutive episodes during training. Standard error on five random seeds. Performance bolded if at least within one standard error of expert.
Figure 8: Standard error over two random seeds. No smoothing across training steps.

# A.3 IMPLEMENTATION DETAILS

To ensure fair comparisons, the same network architectures were used to evaluate SQIL, GAIL, and BC. For Lunar Lander, we used a network architecture with two fully-connected layers containing 128 hidden units each to represent the Q network in SQIL, the policy and discriminator networks in GAIL, and the policy network in BC. For Car Racing, we used four convolutional layers (following (Ha & Schmidhuber, 2018)) and two fully-connected layers containing 256 hidden units each. For Humanoid and HalfCheetah, we used two fully-connected layers containing 256 hidden units each. For Atari, we used the convolutional neural network described in (Mnih et al., 2015) to represent the Q network in SQIL, as well as the Q network and discriminator network in GAIL.

To ensure fair comparisons, the same demonstration data were used to train SQIL, GAIL, and BC. For Lunar Lander, we collected 100 demonstration rollouts. For Car Racing, Pong, Breakout, and Space Invaders, we collected 20 demonstration rollouts. Expert demonstrations were generated from scratch for Lunar Lander using DQN (Mnih et al., 2015), and collected from open-source pretrained policies for Car Racing (Ha & Schmidhuber, 2018) as well as Humanoid and HalfCheetah (Dhariwal et al., 2017). The Humanoid demonstrations were generated by a stochastic expert policy, while the HalfCheetah demonstrations were generated by a deterministic expert policy; both experts were trained using TRPO.6 We used two open-source implementations of GAIL: (Fu et al., 2017) for Lunar Lander, and (Dhariwal et al., 2017) for MuJoCo.
We adapted the OpenAI Baselines implementation of GAIL to use soft Q-learning for Car Racing and Atari. Expert demonstrations were generated from scratch for Atari using DQN.

For Lunar Lander, we set $\lambda_{\mathrm{samp}} = 10^{-6}$ . For Car Racing, we set $\lambda_{\mathrm{samp}} = 0.01$ . For all other environments, we set $\lambda_{\mathrm{samp}} = 1$ .

SQIL was not pre-trained in any of the experiments. GAIL was pre-trained using BC for HalfCheetah, but was not pre-trained in any other experiments.

In standard implementations of soft Q-learning and SAC, the agent's experience replay buffer typically has a fixed size, and once the buffer is full, old experiences are deleted to make room for new experiences. In SQIL, we never delete demonstration experiences from the replay buffer, but otherwise follow the standard implementation.

We use Adam (Kingma & Ba, 2014) to take the gradient step in line 4 of Algorithm 1.

The BC and GAIL performance metrics in Section 4.3 are taken from (Dhariwal et al., 2017).7

The GAIL and SQIL policies in Section 4.3 are set to be deterministic during the evaluation rollouts used to measure performance.
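The replay-buffer behavior described above can be sketched as follows; the class name and the way the capacity is split between demonstrations and agent experience are our own illustrative assumptions, not the authors' implementation:

```python
from collections import deque

class SQILReplayBuffer:
    """Keeps demonstration transitions permanently; applies the usual FIFO
    eviction only to the agent's own experience."""

    def __init__(self, capacity, demos):
        self.demos = list(demos)  # demonstration experiences: never deleted
        # FIFO buffer for agent experience; old items fall out when full
        self.agent = deque(maxlen=max(1, capacity - len(self.demos)))

    def add(self, transition):
        # Only agent experience is subject to eviction.
        self.agent.append(transition)

    def __len__(self):
        return len(self.demos) + len(self.agent)
```

A training batch can then mix transitions from `self.demos` and `self.agent`, matching the constant-reward scheme SQIL trains on.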
\ No newline at end of file diff --git a/sqilimitationlearningviareinforcementlearningwithsparserewards/images.zip b/sqilimitationlearningviareinforcementlearningwithsparserewards/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..d75c5cba45c9daef1ed2bd5550b0dbf978408b7b --- /dev/null +++ b/sqilimitationlearningviareinforcementlearningwithsparserewards/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c6fd0bd0963ef011784274f067f16a65329a5f49eef04c93e38e9958a42ae131 +size 384907 diff --git a/sqilimitationlearningviareinforcementlearningwithsparserewards/layout.json b/sqilimitationlearningviareinforcementlearningwithsparserewards/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..cfec4ce91a6770b73150b08aa508e9865af673f3 --- /dev/null +++ b/sqilimitationlearningviareinforcementlearningwithsparserewards/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:70ed7e5830364bb1d109f04ccb85ba6ef6e25c3f9e23e134a0bb05773807f544 +size 401275 diff --git a/statealignmentbasedimitationlearning/f67ee5a1-1732-40ad-97f6-15a45d92a3d6_content_list.json b/statealignmentbasedimitationlearning/f67ee5a1-1732-40ad-97f6-15a45d92a3d6_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..a8e27eefa590fc3424ba0a99e9d09bdaa88bf5aa --- /dev/null +++ b/statealignmentbasedimitationlearning/f67ee5a1-1732-40ad-97f6-15a45d92a3d6_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:03fea96d0fef660c8247d1f1a8645312521c12d5603b0d2c67e35ce70edb6474 +size 96581 diff --git a/statealignmentbasedimitationlearning/f67ee5a1-1732-40ad-97f6-15a45d92a3d6_model.json b/statealignmentbasedimitationlearning/f67ee5a1-1732-40ad-97f6-15a45d92a3d6_model.json new file mode 100644 index 0000000000000000000000000000000000000000..72311bf9995d289a5dec329dba456693d7296cfc --- /dev/null +++ 
b/statealignmentbasedimitationlearning/f67ee5a1-1732-40ad-97f6-15a45d92a3d6_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:28c3fc17c9458acb47f2dda4a3c6ddfcd98c02563f08f3c89d007d5fb34af2b3 +size 119190 diff --git a/statealignmentbasedimitationlearning/f67ee5a1-1732-40ad-97f6-15a45d92a3d6_origin.pdf b/statealignmentbasedimitationlearning/f67ee5a1-1732-40ad-97f6-15a45d92a3d6_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..51bfdddcdaf37ff8d5d018fff958ebd84604ee11 --- /dev/null +++ b/statealignmentbasedimitationlearning/f67ee5a1-1732-40ad-97f6-15a45d92a3d6_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b6d76b4972e5388f460d6deea92301daabfaa05244064bac2d4de2ee2e5843be +size 2199526 diff --git a/statealignmentbasedimitationlearning/full.md b/statealignmentbasedimitationlearning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..4388f2a98c02080c11d669df43831c2f8e335217 --- /dev/null +++ b/statealignmentbasedimitationlearning/full.md @@ -0,0 +1,445 @@

# STATE ALIGNMENT-BASED IMITATION LEARNING

Fangchen Liu Zhan Ling Tongzhou Mu Hao Su

University of California San Diego

La Jolla, CA 92093, USA

{fliu,z6ling,t3mu,haosu}@eng.ucsd.edu

# ABSTRACT

Consider an imitation learning problem in which the imitator and the expert have different dynamics models. Most current imitation learning methods fail because they focus on imitating actions. We propose a novel state alignment based imitation learning method to train the imitator to follow the state sequences in expert demonstrations as much as possible. The state alignment comes from both local and global perspectives, and we combine them into a reinforcement learning framework by a regularized policy update objective. We show the superiority of our method in standard imitation learning settings and in imitation learning settings where the expert and imitator have different dynamics models.
# 1 INTRODUCTION

Learning from demonstrations (imitation learning, abbr. as IL) is a basic strategy to train agents for solving complicated tasks. Imitation learning methods can be generally divided into two categories: behavior cloning (BC) and inverse reinforcement learning (IRL). Behavior cloning (Ross et al., 2011b) formulates a supervised learning problem to learn a policy that maps states to actions using demonstration trajectories. Inverse reinforcement learning (Russell, 1998; Ng et al., 2000) tries to find a proper reward function that can induce the given demonstration trajectories. GAIL (Ho & Ermon, 2016) and its variants (Fu et al.; Qureshi et al., 2018; Xiao et al., 2019) are recently proposed IRL-based methods, which use a GAN-based reward to align the distribution of state-action pairs between the expert and the imitator.

Although state-of-the-art BC and IRL methods have demonstrated compelling performance in standard imitation learning settings, e.g. control tasks (Ho & Ermon, 2016; Fu et al.; Qureshi et al., 2018; Xiao et al., 2019) and video games (Aytar et al., 2018b), these approaches are developed based on a strong assumption: the expert and the imitator share the same dynamics model; specifically, they have the same action space, and any feasible state-action pair leads to the same next state in probability for both agents. This assumption brings severe limitations in practical scenarios: imagine a robot with a low speed limit navigating through a maze by imitating another robot that moves fast; it is impossible for the slow robot to execute the exact actions of the fast robot. However, the demonstration from the fast robot should still be useful because it shows the path through the maze.
We are interested in the imitation learning problem under a relaxed assumption: given an imitator that shares the same state space with the expert but whose dynamics may be different, we train the imitator to follow the state sequence in expert demonstrations as much as possible. This is a more general formulation since it poses fewer requirements on the experts and makes demonstration collection easier. Due to the dynamics mismatch, the imitator becomes more likely to deviate from the demonstrations compared with the traditional imitation learning setting. Therefore, it is very important that the imitator be able to return to the demonstration trajectory by itself. Note that neither BC-based methods nor GAIL-based IRL methods are designed to handle dynamics misalignment or deviation correction.

To address these issues, we propose a novel approach with four main features: 1) State-based. Compared to the majority of the imitation learning literature, our approach is state-based rather than action-based. Unlike BC and IRL, which essentially match state-action pairs between the expert and the imitator, we match only states. An inverse model of the imitator dynamics is learned to recover the action; 2) Deviation Correction. A state-based $\beta$ -VAE (Higgins et al., 2017) is learned as the prior for the next state to visit. Compared with ordinary behavior cloning, this VAE-based next state predictor can advise the imitator to return to the demonstration trajectory when it deviates. The robustness benefits from the VAE's latent stochastic sampling; 3) Global State Alignment. While the VAE can help the agent correct its trajectory to some extent, the agent may still occasionally enter states that are far away from demonstrations, where the VAE has no clue how to correct it. We therefore add a global constraint to align the states in demonstration and imitation.
Inspired by GAIL, which uses a reward to align the distribution of state-action pairs, we also formulate an IRL problem whose maximal cumulative reward is the Wasserstein distance between states of demonstration and imitation. Note that we choose not to involve state-action pairs as in GAIL (Ho & Ermon, 2016), or state-state pairs as in an observation-based GAIL (Torabi et al., 2018a), because our state-only formulation imposes weaker constraints than the two options above, thus providing more flexibility to handle different agent dynamics; 4) Regularized Policy Update. We combine the prior for the next state learned from the VAE and the Wasserstein distance-based global constraint from IRL in a unified framework, by imposing a Kullback-Leibler divergence based regularizer on the policy update in the Proximal Policy Optimization algorithm.

To empirically justify our ideas, we conduct experiments in two different settings. We first show that our approach can achieve similar or better results in the standard imitation learning setting, which assumes the same dynamics between the expert and the imitator. We then evaluate our approach in the more challenging setting in which the dynamics of the expert and the imitator are different. In a number of control tasks, we either change the physical properties of the imitators or cripple them by changing their geometries. Existing approaches either fail or can only achieve very low rewards, but our approach can still exhibit decent performance. Finally, we show that even for imitation across agents with completely different actuators, it is still possible for the state-alignment based method to work. Surprisingly, a point mass and an ant in MuJoCo (Todorov et al., 2012) can imitate each other to navigate in a maze environment.

Our contributions can be summarized as follows:

- Propose to use a state alignment based method in imitation learning problems where the expert's and the imitator's dynamics are different.
- Propose a local state alignment method based on $\beta$ -VAE and a global state alignment method based on Wasserstein distance.
- Combine the local alignment and global alignment components into a reinforcement learning framework by a regularized policy update objective.

# 2 RELATED WORK

Imitation learning is widely used in solving complicated tasks where pure reinforcement learning might suffer from high sample complexity, like robotics control (Le et al., 2017; Ye & Alterovitz, 2017; Pathak et al., 2018), autonomous vehicles (Fu et al.; Pomerleau, 1989), and playing video games (Hester et al., 2018; Pohlen et al., 2018; Aytar et al., 2018a). Behavioral cloning (Bain & Sammut, 1999) is a straightforward method to learn a policy in a supervised way. However, behavioral cloning suffers from the problem of compounding errors as shown by (Ross & Bagnell, 2010), and this can be somewhat alleviated by interactive learning, such as DAGGER (Ross et al., 2011b). Another important line in imitation learning is inverse reinforcement learning (Russell, 1998; Ng et al., 2000; Abbeel & Ng, 2004; Ziebart et al., 2008; Fu et al.), which finds a cost function under which the expert is uniquely optimal.

Since IRL can be connected to min-max formulations, works like GAIL and SAM (Ho & Ermon, 2016; Blondé & Kalousis, 2018) utilize this to directly recover policies. Its connections with GANs (Goodfellow et al., 2014) also lead to $f$ -divergence minimization (Ke et al., 2019; Nowozin et al., 2016) and Wasserstein distance minimization (Xiao et al., 2019). One can also extend the framework from matching state-action pairs to state distribution matching, such as Torabi et al. (2018a); Sun et al. (2019); Schroecker & Isbell (2017). Other works (Aytar et al., 2018b; Liu et al., 2018; Peng et al., 2018) also learn from observation alone, by defining reward on states and using IRL to solve the tasks. Works like (Lee et al., 2019; Lee et al.)
also use state-based reward for exploration. Torabi et al. (2018b); Edwards et al. (2018) recover actions from observations by learning an inverse model or latent actions. In contrast, our work combines global state distribution matching with local state transition alignment, uniting the advantages of BC and IRL in a novel framework.

# 3 BACKGROUNDS

Variational Autoencoders Kingma & Welling (2013); Rezende et al. (2014) provide a framework to learn both a probabilistic generative model $p_{\theta}(\mathbf{x}|\mathbf{z})$ and an approximated posterior distribution $q_{\phi}(\mathbf{z}|\mathbf{x})$ . $\beta$ -VAE is a VAE variant that introduces an adjustable hyperparameter $\beta$ to the original objective:

$$
\mathcal {L} (\theta , \phi ; \mathbf {x}, \mathbf {z}, \beta) = \mathbb {E} _ {q _ {\phi} (\mathbf {z} | \mathbf {x})} [ \log p _ {\theta} (\mathbf {x} | \mathbf {z}) ] - \beta D _ {K L} (q _ {\phi} (\mathbf {z} | \mathbf {x}) \| p (\mathbf {z})) \tag {1}
$$

A larger $\beta$ penalizes the total correlation (Chen et al., 2018) to encourage more disentangled latent representations, while a smaller $\beta$ often results in sharper and more precise reconstructions.

![](images/82ca23911f56aacc01efec1afb807a45e68c2d339b8b9d06a7c0d224a73af1e0.jpg)
Figure 2: Visualization of state alignment

Wasserstein distance The Wasserstein distance between two density functions $p(x)$ and $q(x)$ with support on a compact metric space $(M, d)$ has an alternative form due to Kantorovich-Rubenstein duality (Villani, 2008):

$$
\mathcal {W} (p, q) = \sup _ {\phi \in \mathcal {L} _ {1}} \mathbb {E} _ {p (x)} [ \phi (x) ] - \mathbb {E} _ {q (x)} [ \phi (x) ] \tag {2}
$$

Here, $\mathcal{L}_1$ is the set of all 1-Lipschitz functions from $\mathcal{M}$ to $\mathbb{R}$ .
Compared with the prevalent KL-divergence and its extension, the $f$ -divergence family, the Wasserstein distance has a number of theoretical and numerical advantages. Please refer to Arjovsky et al. (2017) and Solomon (2018) for a detailed discussion.

# 4 SAIL: STATE ALIGNMENT BASED IMITATION LEARNING

# 4.1 OVERVIEW

Our imitation learning method is based on state alignment from both local and global perspectives. For local alignment, the goal is to follow the transition of the demonstration as much as possible, and to allow the return to the demonstration trajectory whenever the imitation deviates. To achieve both goals, we use a $\beta$ -VAE (Higgins et al., 2017) to generate the next state (Figure 2 Left). For global alignment, we set up an objective to minimize the Wasserstein distance between the states in the current trajectory and the demonstrations (Figure 2 Right). There has to be a framework that naturally combines the local alignment and global alignment components; we resort to the reinforcement learning framework by encoding the local alignment as a policy prior and encoding the global alignment as a reward over states. Using Proximal Policy Optimization (PPO) by Schulman et al. (2017) as the backbone RL solver, we derive a regularized policy update. To maximally exploit the knowledge from demonstrations and reduce interactions with the environment, we adopt a pre-training stage to produce a good initialization based on the same policy prior induced by the local alignment. Our method is summarized in Algorithm 1. In the rest of this section, we introduce all the components of our method in detail.

![](images/497f52737ff3cbb9affd3b71a552bad7f9e6738ab2bba68567eabc9e7e59d399.jpg)
Figure 1: Using a VAE as a state predictive model is more self-correcting because of the stochastic sampling mechanism; this does not happen when we use a VAE to predict actions.
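For concreteness, the $\beta$ -VAE objective of Equation 1 can be written in closed form for a diagonal-Gaussian encoder $q_{\phi}(\mathbf{z}|\mathbf{x})$ with a standard normal prior. The following minimal sketch (the function name is ours) shows how $\beta$ scales the KL penalty:

```python
import numpy as np

def beta_vae_objective(recon_log_prob, mu, logvar, beta):
    # Equation 1: E_q[log p(x|z)] - beta * KL(q(z|x) || p(z)), where
    # q = N(mu, diag(exp(logvar))) and p = N(0, I), so the KL term has the
    # closed form 0.5 * sum(exp(logvar) + mu^2 - 1 - logvar).
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    return recon_log_prob - beta * kl

# A larger beta penalizes deviation from the prior more heavily:
mu, logvar = np.ones(2), np.zeros(2)
assert beta_vae_objective(0.0, mu, logvar, 4.0) < beta_vae_objective(0.0, mu, logvar, 1.0)
```

This is the trade-off discussed in Section 3: raising $\beta$ pushes the posterior toward the prior (more disentanglement), while lowering it favors reconstruction accuracy.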
# 4.2 LOCAL ALIGNMENT BY STATE PREDICTIVE VAE

To align state transitions locally, we need a predictive model that generates the next state the agent should target. We can then train an inverse dynamics model to recover the corresponding action, which provides direct supervision for the policy.

Algorithm 1 SAIL: State Alignment based Imitation Learning
Require: Expert trajectories $\tau_{e}:[s_{1},a_{1},s_{2},a_{2},\ldots]\sim\pi_{e}$ , initial policy $\pi$ , inverse dynamics model $g$ , discriminator $\phi$ , total episodes $T$ , memory capacity $S$
1: if Imitator and Expert have the same dynamics model then
2: Pre-train $g$ using $\tau_{e}$ and transitions collected by a random policy
3: else
4: Pre-train $g$ using transitions collected by a random policy
5: end if
6: Pre-train VAE using $\tau_{e}$ , and obtain the policy prior
7: Pre-train $\pi$ using the policy prior as described in Sec 4.5
8: while episode $\leq T$ do
9: while $|\tau|\leq S$ do
10: Collect trajectory $\{(s,a,s',r,done)\}$ using $\pi$
11: Update $r$ using (4)
12: Add $\{(s,a,s',r,done)\}$ to $\tau$
13: end while
14: Train $\phi$ using $\max_{\phi\in\mathcal{L}_{1}}E_{s\sim\tau_{e}}[\phi(s)]-E_{s\sim\tau}[\phi(s)]$
15: Update inverse dynamics model $g$
16: Update policy using (5)
17: end while

It is worth noting that, while training an inverse dynamics model is generally challenging, it is much easier if we focus only on the agent's own dynamics, especially when low-dimensional control states are accessible, as in many practical scenarios. How to learn high-quality inverse/forward dynamics models is an active research topic in its own right.

Instead of using an ordinary network to memorize the subsequent states, which suffers from the same compounding-error issue as behavioral cloning (Ross & Bagnell, 2010; Ross et al., 2011a), we propose to use a VAE to generate the next state, for the following two reasons.
First, as shown in Dai et al. (2018), a VAE is more robust to outliers and regularizes itself to find the support set of the data manifold, so it generalizes better to unseen data. Second, because of the stochastic latent sampling, points in a local neighborhood of a data point receive almost the same prediction, which is self-correcting when combined with a precise inverse dynamics model, as illustrated in Figure 1.

One could instead use a VAE to generate actions based on the current state. However, if the agent deviates slightly from the demonstration trajectory, the predicted action does not necessarily guide the agent back to the trajectory, as shown in Figure 1. In Sec 5.3.2, we conduct experiments comparing the state predictive VAE and the action predictive VAE.

Instead of the vanilla VAE, we use a $\beta$ -VAE to balance the KL penalty and the prediction error, with the formulation shown in (1). In Sec 5, we discuss the effect of the hyperparameter $\beta$ in different experimental settings as one of the ablation studies.

# 4.3 GLOBAL ALIGNMENT BY WASSERSTEIN DISTANCE

Due to the difference in dynamics between the expert and the imitator, the VAE-based local alignment cannot fully prevent the imitator from deviating from the demonstrations. In such circumstances, we still need to assess whether the imitator is making progress in learning from the demonstrations. We therefore seek to control the difference between the state visitation distributions of the demonstration and imitator trajectories, which is a global constraint.

Note that using this global constraint alone will not induce policies that follow the demonstration. Consider the simple case of learning an imitator from an expert with the same dynamics that takes cyclic actions. If the expert runs for 100 cycles at a high velocity and the imitator runs for only 10 cycles at a low velocity within the same time span, their state distributions would still roughly align.
That is why existing work such as GAIL aligns state-action occupancy measures. However, as shown later, our state-based distribution matching is combined with the local alignment component, which naturally resolves this issue. The advantage of this state-based distribution matching over the state-action pair matching in GAIL or the state/next-state pair matching in Torabi et al. (2018a) is that the constraint is looser.

We use an IRL approach to achieve the state distribution matching by introducing a reinforcement learning problem. The task is to design a reward such that the trained imitator matches the state distribution of the expert.

Before introducing the reward design, we first explain the computation of the Wasserstein distance between the expert trajectories $\{\tau_e\}$ and the imitator trajectory $\{\tau\}$ using the Kantorovich duality:

$$
\mathcal {W} \left(\tau_ {e}, \tau\right) = \sup _ {\phi \in \mathcal {L} _ {1}} \mathbb {E} _ {s \sim \tau_ {e}} [ \phi (s) ] - \mathbb {E} _ {s \sim \tau} [ \phi (s) ] \tag {3}
$$

where $\phi$ is the Kantorovich potential, which serves as the discriminator in WGAN (Arjovsky et al., 2017). $\phi$ is trained with a gradient penalty term, as in WGAN-GP (Gulrajani et al., 2017).

After the rollout of the imitator policy is obtained, the potential $\phi$ is updated by (3). Consider a transition $(s_i,s_{i + 1})$ within an imitator rollout of length $T$ . To provide a dense signal at every timestep, we assign the reward:

$$
r \left(s _ {i}, s _ {i + 1}\right) = \frac {1}{T} \left[ \phi \left(s _ {i + 1}\right) - \mathbb {E} _ {s \sim \tau_ {e}} \phi (s) \right] \tag {4}
$$

We now explain the intuition behind this reward. By solving (3), states with higher probability under the demonstration distribution have larger $\phi$ values. The reward in (4) thus encourages the imitator to visit such states.
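The reward assignment (4) can be sketched in a few lines of numpy (our illustration; `phi` here is a fixed toy potential on 1-D states, not a trained discriminator):

```python
import numpy as np

def wasserstein_rewards(phi, rollout_states, expert_states):
    """Per-step reward (4): r(s_i, s_{i+1}) = (phi(s_{i+1}) - E_{s~tau_e}[phi(s)]) / T."""
    T = len(rollout_states) - 1                 # number of transitions in the rollout
    baseline = np.mean(phi(expert_states))      # E_{s ~ tau_e}[phi(s)]
    return np.array([(phi(rollout_states[i + 1]) - baseline) / T for i in range(T)])

phi = lambda s: s                               # toy 1-Lipschitz potential
expert_states = np.array([1.0, 1.0, 1.0])
rollout_states = np.array([0.0, 0.2, 0.4, 0.6])  # s_1 .. s_{T+1}, so T = 3

r = wasserstein_rewards(phi, rollout_states, expert_states)
# Summing the per-step rewards gives (1/T) * sum_t phi(s_{t+1}) - E_{tau_e}[phi],
# i.e. the negated dual objective (3) evaluated on this rollout.
total = r.sum()
```

Since the rollout's mean potential here is 0.4 against the expert baseline of 1.0, the summed reward is negative, and it increases as the imitator's visited states move toward the expert's.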
Maximizing the cumulative reward is then equivalent to maximizing

$$
J (\pi) = \sum_ {t = 1} ^ {T} \mathbb {E} _ {s _ {t}, s _ {t + 1} \sim \pi} [ r (s _ {t}, s _ {t + 1}) ] = \sum_ {t = 1} ^ {T} \frac {\mathbb {E} _ {s _ {t + 1}} [ \phi (s _ {t + 1}) - \mathbb {E} _ {s \sim \tau_ {e}} [ \phi (s) ] ]}{T} = - \mathcal {W} (\tau_ {e}, \tau)
$$

In other words, the optimal policy of this MDP best matches the state visitation distributions w.r.t. the Wasserstein distance.

Compared with AIRL (Fu et al., 2018), which also defines rewards on states only, our approach enjoys advantages in certain cases. We provide a theoretical justification in Appendix D.

# 4.4 REGULARIZED PPO POLICY UPDATE OBJECTIVE

As mentioned in the second paragraph of Sec 4.3, the global alignment has to be combined with the local alignment. This is achieved by adding a prior to the original clipped PPO objective. We maximize the following unified objective function:

$$
J (\pi_ {\theta}) = L ^ {C L I P} (\theta) - \lambda D _ {K L} \left(\pi_ {\theta} (\cdot | s _ {t}) \| p _ {a}\right) \tag {5}
$$

We explain the two terms in detail. $L^{CLIP}(\theta)$ denotes the clipped surrogate objective used in the original PPO algorithm:

$$
L ^ {C L I P} (\theta) = \hat {\mathbb {E}} _ {t} \left[ \min \left(\frac {\pi_ {\theta} (a | s)}{\pi_ {\theta_ {o l d}} (a | s)} \hat {A} _ {t}, \operatorname {c l i p} \left(\frac {\pi_ {\theta} (a | s)}{\pi_ {\theta_ {o l d}} (a | s)}, 1 - \epsilon , 1 + \epsilon\right) \hat {A} _ {t}\right) \right], \tag {6}
$$

where $\hat{A}_t$ is an estimator of the advantage function at timestep $t$ . The advantage function is calculated from the reward function described in Sec 4.3.

The $D_{KL}$ term in (5) serves as a regularizer that keeps the policy close to a learned policy prior $p_a$ . This policy prior $p_a$ is derived from the state predictive VAE and an inverse dynamics model.
Assume the $\beta$-VAE is $f(s_t) = s_{t+1}$ and the inverse dynamics model is $g_{inv}(s_t, s_{t+1}) = a$. To handle the case where the agents have different dynamics, we learn a state prediction network and use a learned inverse dynamics model to decode the action. We define the action prior as

$$
p _ {a} \left(a _ {t} \mid s _ {t}\right) \propto \exp \left(- \left\| \frac {g _ {\text {inv}} \left(s _ {t} , f \left(s _ {t}\right)\right) - a _ {t}}{\sigma} \right\| ^ {2}\right) \tag {7}
$$

where the RHS is a pre-defined policy prior: a Gaussian distribution centered at $g_{inv}(s_t,f(s_t))$ . The hyperparameter $\sigma$ controls how strongly the action prior regularizes the policy update. Note that the inverse model can be further adjusted during interaction.

$L^{CLIP}$ is computed through the advantage $\hat{A}_t$ and reflects the global alignment. The policy prior is obtained from the inverse model and the local $\beta$ -VAE, which makes the $D_{KL}$ term serve as a local alignment constraint. Furthermore, our method can be regarded as a combination of BC and IRL: the KL-divergence-based action prior encodes the BC policy, and we update the policy using the reward. We note that our state-alignment method augments state distribution matching by taking the relationship between two consecutive states into account, with robustness in mind.

# 4.5 PRE-TRAINING

We pretrain the state predictive VAE and the inverse dynamics model, and then obtain the policy prior in (7), which is a Gaussian distribution. For pre-training, we initialize PPO's Gaussian policy $\pi$ with this prior $p_a$ by minimizing the KL-divergence between them. In practice, we use supervision from $g_{inv}(s_t,f(s_t))$ and $\sigma$ in (7) to train the mean and variance of the policy network directly, which is more efficient during the pre-training stage.
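To make (5) and (7) concrete, here is a minimal numpy sketch (our illustration, not the paper's code): `f` and `g_inv` are hypothetical stand-ins for the state-predictive VAE and the inverse dynamics model, and the KL between the Gaussian policy and the Gaussian prior (7) is computed in closed form.

```python
import numpy as np

def gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    """KL( N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2) ), summed over action dims."""
    return np.sum(np.log(sigma_p / sigma_q)
                  + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2.0 * sigma_p ** 2) - 0.5)

def regularized_objective(ratio, advantage, kl, lam=1.0, eps=0.2):
    """Objective (5): clipped PPO surrogate (6) minus lambda * KL to the prior."""
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    l_clip = np.minimum(ratio * advantage, clipped * advantage)
    return np.mean(l_clip) - lam * kl

# One-step toy example; f and g_inv are assumptions, not the paper's networks.
f = lambda s: s + 0.1                      # VAE's next-state prediction
g_inv = lambda s, s_next: s_next - s       # inverse dynamics: recover the action
s = np.array([0.0, 0.0])
prior_mu, prior_sigma = g_inv(s, f(s)), 0.5          # prior (7)
policy_mu, policy_sigma = np.array([0.1, 0.1]), 0.5  # current Gaussian policy

kl = gaussian_kl(policy_mu, policy_sigma, prior_mu, prior_sigma)
J = regularized_objective(ratio=np.array([1.5]), advantage=np.array([1.0]), kl=kl)
```

Here the policy already matches the prior, so the KL term vanishes and the clipped surrogate dominates; as the policy drifts from `prior_mu`, the $\lambda$-weighted KL term pulls it back toward the local alignment.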
During the online interaction, the policy is updated by optimizing (5), and the variance is further adjusted for all dimensions of the action space.

# 5 EXPERIMENTS

We conduct two kinds of experiments to demonstrate the strengths of our method. In Sec 5.1, we compare our method with behavior cloning (Bain & Sommut, 1999), GAIL (Ho & Ermon, 2016), and AIRL (Fu et al., 2018) in a control setting where the expert and the imitator have different dynamics models, e.g., both are ant robots but the imitator has shorter legs. In Sec 5.2, we further evaluate in the traditional imitation learning setting. Finally, in Sec 5.3, we conduct ablation studies to show the contribution of each component.

# 5.1 IMITATION LEARNING ACROSS AGENTS OF DIFFERENT ACTION DYNAMICS

# 5.1.1 ACTORS OF MODIFIED PHYSICS AND GEOMETRY PROPERTIES

We create environments using MuJoCo (Todorov et al., 2012) by changing properties of the experts, such as the density and geometry of the body. We choose two environments, Ant and Swimmer, and augment them into six variants: Heavy/Light/Disabled Ant and Swimmer. The Heavy/Light agents have modified density, and the disabled agents have modified head/tail/leg lengths. The demonstrations are collected from the standard Ant-v2 and Swimmer-v2. More details on the environments and the demonstration collection process can be found in the Appendix. We then evaluate our method on these variants.
![](images/728e20be876a15982920599d5377dcea32222e1e26c3e6e369f824c21f9a4d66.jpg)
(a) DisabledAnt

![](images/696f96e93b1a9cecc30730530c3299422352e84f4e7af768622209ab320e9750.jpg)
(b) LightAnt

![](images/e38e9f70fdf072a9ca7c90e9778648b12ac4f5d030e4e815800fa77c962a66cc.jpg)
(c) HeavyAnt

![](images/285701f24568334c67d4e4c77dfbab7a1987969558bc7722f2d5e4559c9b754f.jpg)
(d) DisabledSwimmer

![](images/f9c1c1d9a374f1ecaf1c04c8cda766337747ade4c37d64f94c2249cb695d6a7f.jpg)
(e) LightSwimmer

![](images/9078fa4c26990b3336cb5e053c3a89fc0b134448051131f6c2f916ecd4446189.jpg)
(f) HeavySwimmer
Figure 3: Comparison with BC, GAIL and AIRL when dynamics are different from the experts.

Figure 3 demonstrates the superiority of our method over all the baselines. Our approach is the most stable across all six environments and shows the leading performance in each of them. GAIL appears to be the most sensitive to differences in dynamics. AIRL, which is designed to handle imitation learning across actors of different dynamics, performs on par with our method in the two swimmer-based environments (DisabledSwimmer and HeavySwimmer), which have a relatively low-dimensional action space (2D for the swimmer versus 8D for the ants).

Interestingly, the stability and performance of vanilla behavior cloning are quite reasonable in four of the environments, although it fails to move about in the DisabledAnt and HeavyAnt environments. For these two tasks, the agent reaches dangerous states by cloning actions, whereas our state-based imitation avoids approaching such states. In the other four games, BC agents do not die but simply move less efficiently, so they achieve sub-optimal yet still reasonable scores.

# 5.1.2 ACTORS OF HETEROGENEOUS ACTION DYNAMICS

We consider an extremely challenging setting in which the imitator and demonstrator are functionally different. One typical expert/imitator pair in practice is a human and a humanoid robot.
We consider a much simplified version of similar nature: a Point and an Ant in MuJoCo. In this task, even though the state spaces cannot be matched exactly, some dimensions are still shared across the state spaces of the imitator and the actor, e.g., the location of the center of mass, and the demonstration should still teach the imitator in these dimensions.

We use the same setting as many hierarchical RL papers, such as HIRO and Near-Optimal RL (Nachum et al., 2018a;b). The agent needs to reach a goal position in a maze, represented by $(\mathrm{x},\mathrm{y})$ coordinates. We also know that the first two dimensions of the state are the position of the agent. The prior knowledge thus includes: (1) the goal space (i.e., the common space that needs to be matched); (2) the projection from the state space to the goal space (selecting the first two dimensions of the state).

![](images/681a9dd8b3775d0635eac1dc106f721593684348f31d1104b629df570a0dec66.jpg)
(a) Original Ant

![](images/0f1248fa2c12f56ca27d566ce76e5716c69e7f59a3b9f5f154fdc8038fd97f25.jpg)
(b) Disabled Ant

![](images/4b35a721712fd2ada267306aec56fb93aa485fbaa145c44832126827882e63c6.jpg)
(c) PointMaze

![](images/04d5b7e03491611bd580c5eab44aa627e5dc8669820b9a9f4e84e326cf5ab6ab.jpg)
(d) AntMaze
Figure 4: Imitation Learning of Actors with Heterogeneous Action Dynamics.

The first task is for the Ant to reach the other side of the maze given several successful demonstrations from a Point robot. As shown in Figures 4(c) and 4(d), the maze structure for the ant and the point mass is exactly the same.

To solve this problem, we first pre-train a VAE on the demonstrations and use it to propose the next "subgoal" for the Ant. This VAE is trained on the goal space (i.e., the first two dimensions) of the Point robot's trajectories.
Then we train an inverse model for the Ant, which generates an action based on the Ant's current (high-dimensional) state and the (2-dimensional) goal predicted by the VAE.

Our performance is shown in Figure 5(c). After 1M training steps, the agent has a success rate of 0.8 for reaching the other side of the maze.

# 5.2 ACTORS OF THE SAME DYNAMICS (STANDARD IMITATION LEARNING)

We also evaluate our algorithm on six non-trivial control tasks in MuJoCo: Swimmer, Hopper, Walker, Ant, HalfCheetah, and Humanoid. We first collect demonstration trajectories with Soft Actor-Critic, which learns policies that achieve high scores in most of these environments. For comparison, we evaluate our method against three baselines: behavior cloning, GAIL, and AIRL. Also, to create even stronger baselines in terms of cumulative reward and imitator run-time sample complexity, we initialize GAIL with behavior cloning, which obtains higher scores in Swimmer and Walker. Lastly, to evaluate how much each algorithm depends on the amount of demonstrations, we sample demonstration sets of ten and fifty trajectories.

Table 1 depicts representative results on Hopper and HalfCheetah. The advantage of our method over BC should be attributed to the inherent data augmentation provided by the VAE. On Hopper-v2, we are significantly better with 10 demos but only on par when the demos are increased to 50. On HalfCheetah-v2, the demo cheetah runs almost perfectly (score 12294); in other words, the demonstrations provide limited instruction once the imitator is even slightly off the demonstrated states, so the robustness from the VAE becomes critical.

Table 1: Performance on Hopper-v2 and HalfCheetah-v2
| # Demo | Hopper-v2, 10 | Hopper-v2, 50 | HalfCheetah-v2, 10 | HalfCheetah-v2, 50 |
| --- | --- | --- | --- | --- |
| Expert | 3566 ± 1.24 | | 12294.22 ± 273.59 | |
| BC | 1318.76 ± 804.36 | 3525.87 ± 160.74 | 971.42 ± 249.62 | 4813.20 ± 1949.26 |
| GAIL | 3372.66 ± 130.75 | 3363.97 ± 262.77 | 474.42 ± 389.30 | -175.83 ± 26.76 |
| BC-GAIL | 3132.11 ± 520.65 | 3130.82 ± 554.54 | 578.85 ± 934.34 | 1597.51 ± 1173.93 |
| AIRL | 3.07 ± 0.02 | 3.31 ± 0.02 | -146.46 ± 23.57 | 755.46 ± 10.92 |
| Our init | 3412.58 ± 450.97 | 3601.16 ± 300.14 | 1064.44 ± 227.32 | 7102.29 ± 910.54 |
| Our final | 3539.56 ± 130.36 | 3614.19 ± 150.74 | 1616.34 ± 180.76 | 8817.32 ± 860.55 |
# 5.3 ABLATION STUDY

# 5.3.1 COEFFICIENT $\beta$ IN $\beta$ -VAE

$\beta$ -VAE introduces an additional parameter to the original VAE. It controls the variance of the randomly sampled latent variable, which in turn affects the reconstruction quality and robustness. Theoretically, a smaller $\beta$ leads to better state prediction quality, at the cost of losing the deviation-correction ability (Dai et al., 2018).

To empirically show the role of $\beta$ and check the sensitivity of our algorithm to it, we evaluate the VAE both when the imitator has the same dynamics as the expert and when the dynamics differ, taking HalfCheetah-v2 and HeavyAnt as examples. For HalfCheetah-v2, we pretrain the inverse dynamics model and the VAE using the given demonstrations, so the initial performance indicates the quality of the VAE's predictions. For HeavyAnt, we pretrain the dynamics with random trials, which yields less accurate forward/inverse dynamics estimates; in this case, we examine both the initialized performance and the final performance. The results are shown in Table 2. We find that the performance is better for $\beta$ in [0.01, 0.1]. Specifically, when the imitator differs from the expert, a smaller $\beta$ results in poor performance, as it overfits the demonstration data.

We also compare our method with an ordinary MLP trained by MSE loss and find that the VAE outperforms the MLP in all settings. Note that the MLP-based approach is very similar to the state-based behavior cloning work of Torabi et al. (2018b).

# 5.3.2 ACTION PREDICTIVE $\beta$ -VAE

In Figure 1, we argued that using a VAE to predict the next action is less favorable. To justify this claim, we compare a VAE-based BC with a vanilla BC, both predicting actions, as shown in Table 3. The experiments show that VAE-BC is outperformed even by vanilla BC, especially when $\beta$ is larger than 0.001.
Compared with the last line in Table 2, we conclude that the VAE is more useful when predicting states, which confirms that the advantage comes from our state-based approach and not merely from the robustness of the VAE.

# 5.3.3 EFFECT OF WASSERSTEIN DISTANCE AND KL REGULARIZATION

In our policy update, we use the Wasserstein distance with KL regularization. To analyze their effects on performance, we use HalfCheetah-v2 and Humanoid-v2 with

Table 2: The role of the VAE coefficient $\beta$ . "None" means replacing the VAE with an ordinary network with linear layers.
| β | HalfCheetah-50 | HalfCheetah-20 | HeavyAnt-Initial | HeavyAnt-Final |
| --- | --- | --- | --- | --- |
| 0.2 | 2007.86 | 1289.21 | 258.91 | 282.13 |
| 0.15 | 2653.04 | 1151.93 | 1149.65 | 1502.68 |
| 0.1 | 7102.29 | 1797.44 | 1219.34 | 5208.45 |
| 0.05 | 5933.28 | 2215.71 | 987.72 | 4850.62 |
| 0.01 | 5893.17 | 1982.62 | 740.54 | 1921.26 |
| 0.005 | 4415.04 | 1369.57 | 320.54 | 399.31 |
| None | 4759.69 | 1123.79 | 359.15 | -62.13 |
Table 3: Comparison of behavior cloning and variational behavior cloning

| β | HalfCheetah-50 | Hopper-50 |
| --- | --- | --- |
| 0.1 | 230.52 ± 13.26 | 203.87 ± 14.39 |
| 0.01 | 1320.04 ± 15.43 | 438.10 ± 20.43 |
| 0.001 | 3306.91 ± 12.51 | 3303.72 ± 10.46 |
| None | 4813.20 ± 1949.26 | 3525.87 ± 6.74 |
![](images/341a107e2dab00d7c1eb5e72659298290e3f4d8ec0b9023dc3c93fe247631e07.jpg)
(a) HalfCheetah-v2

![](images/c357aa668a68b70156d749008d22d44e12362bafca2379f7db89cf85c7590abd.jpg)
(b) Humanoid-v2

![](images/85f172bfeeaa4dca7ef0c47447c987dd8b7723a7ddafb9e0a7522244b7340488.jpg)
(c) AntMaze
Figure 5: (a), (b) show the effects of the Wasserstein distance and KL regularization on HalfCheetah-v2 and Humanoid-v2 given 20 demonstration trajectories. (c) presents the result on AntMaze.

20 expert trajectories. For each environment, all variants use the same pretrained inverse model and VAE, and thus behave identically after pretraining.

As shown in Figure 5(a)(b), the Wasserstein distance combined with KL regularization performs best. The Wasserstein objective is used in our inverse-RL mechanism and heavily penalizes exploration once the agent deviates far from the demonstration. However, this objective alone lacks constraints over consecutive states and thus performs worst. The KL objective adds constraints over consecutive states using the VAE prior; however, the VAE cannot extrapolate to states far from the demonstrations (the green line gradually fails in Fig 5(b)). This is exactly the scenario the Wasserstein distance disfavors, so the reward derived from it pushes the imitator back toward the demonstration states.

# 6 CONCLUSION

We proposed SAIL, a flexible and practical imitation learning algorithm that uses state alignment from both local and global perspectives. We demonstrated the superiority of our method in MuJoCo environments, especially when the action dynamics differ from those of the demonstrations.

# REFERENCES

Pieter Abbeel and Andrew Y Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the twenty-first international conference on Machine learning, pp. 1. ACM, 2004.

Martin Arjovsky, Soumith Chintala, and Léon Bottou.
Wasserstein gan. arXiv preprint arXiv:1701.07875, 2017. + +Yusuf Aytar, Tobias Pfaff, David Budden, Thomas Paine, Ziyu Wang, and Nando de Freitas. Playing hard exploration games by watching youtube. In Advances in Neural Information Processing Systems, pp. 2930-2941, 2018a. +Yusuf Aytar, Tobias Pfaff, David Budden, Thomas Paine, Ziyu Wang, and Nando de Freitas. Playing hard exploration games by watching youtube. In Advances in Neural Information Processing Systems, pp. 2930-2941, 2018b. +Michael Bain and Claude Sommut. A framework for behavioural cloning. Machine intelligence, 15 (15):103, 1999. +Lionel Blondé and Alexandros Kalousis. Sample-efficient imitation learning via generative adversarial nets. arXiv preprint arXiv:1809.02064, 2018. +Tian Qi Chen, Xuechen Li, Roger B Grosse, and David K Duvenaud. Isolating sources of disentanglement in variational autoencoders. In Advances in Neural Information Processing Systems, pp. 2610-2620, 2018. +Bin Dai, Yu Wang, John Aston, Gang Hua, and David Wipf. Connections with robust pca and the role of emergent sparsity in variational autoencoder models. The Journal of Machine Learning Research, 19(1):1573-1614, 2018. +Ashley D Edwards, Himanshu Sahni, Yannick Schroecker, and Charles L Isbell. Imitating latent policies from observation. arXiv preprint arXiv:1805.07914, 2018. +Justin Fu, Katie Luo, and Sergey Levine. Learning robust rewards with adversarial inverse reinforcement learning. *ICLR* 2018. +Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672-2680, 2014. +Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. In Advances in neural information processing systems, pp. 5767-5777, 2017. +Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. 
Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290, 2018. +Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Ian Osband, et al. Deep q-learning from demonstrations. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. +Irina Higgins, Loic Matthew, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. *ICLR*, 2(5):6, 2017. +Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In Advances in neural information processing systems, pp. 4565-4573, 2016. +Liyiming Ke, Matt Barnes, Wen Sun, Gilwoo Lee, Sanjiban Choudhury, and Siddhartha Srinivasa. Imitation learning as $f$ -divergence minimization. arXiv preprint arXiv:1905.12888, 2019. +Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. +Hoang M Le, Yisong Yue, Peter Carr, and Patrick Lucey. Coordinated multi-agent imitation learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1995-2003. JMLR.org, 2017. +Lisa Lee, Benjamin Eysenbach, Emilio Parisotto, Ruslan Salakhutdinov, and Sergey Levine. State marginal matching with mixtures of policies. +Lisa Lee, Benjamin Eysenbach, Emilio Parisotto, Eric Xing, Sergey Levine, and Ruslan Salakhutdinov. Efficient exploration via state marginal matching. arXiv preprint arXiv:1906.05274, 2019. + +YuXuan Liu, Abhishek Gupta, Pieter Abbeel, and Sergey Levine. Imitation from observation: Learning to imitate behaviors from raw video via context translation. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1118-1125. IEEE, 2018. +Ofir Nachum, Shixiang Gu, Honglak Lee, and Sergey Levine. 
Near-optimal representation learning for hierarchical reinforcement learning. arXiv preprint arXiv:1810.01257, 2018a. +Ofir Nachum, Shixiang Shane Gu, Honglak Lee, and Sergey Levine. Data-efficient hierarchical reinforcement learning. In Advances in Neural Information Processing Systems, pp. 3303-3313, 2018b. +Andrew Y Ng, Stuart J Russell, et al. Algorithms for inverse reinforcement learning. In Icml, volume 1, pp. 2, 2000. +Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural samplers using variational divergence minimization. In Advances in neural information processing systems, pp. 271-279, 2016. +Deepak Pathak, Parsa Mahmoudieh, Michael Luo, Pulkit Agrawal, Dian Chen, Fred Shentu, Evan Shelhamer, Jitendra Malik, Alexei A Efros, and Trevor Darrell. Zero-shot visual imitation. international conference on learning representations, 2018. +Xue Bin Peng, Angjoo Kanazawa, Jitendra Malik, Pieter Abbeel, and Sergey Levine. Sfv: Reinforcement learning of physical skills from videos. ACM Trans. Graph., 37(6), November 2018. +Tobias Pohlen, Bilal Piot, Todd Hester, Mohammad Gheshlaghi Azar, Dan Horgan, David Budden, Gabriel Barth-Maron, Hado van Hasselt, John Quan, Mel Večerík, et al. Observe and look further: Achieving consistent performance on atari. arXiv preprint arXiv:1805.11593, 2018. +Dean A Pomerleau. Alvinn: An autonomous land vehicle in a neural network. In Advances in neural information processing systems, pp. 305-313, 1989. +Ahmed H Qureshi, Byron Boots, and Michael C Yip. Adversarial imitation via variational inverse reinforcement learning. arXiv preprint arXiv:1809.06404, 2018. +Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014. +Stéphane Ross and Drew Bagnell. Efficient reductions for imitation learning. 
In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 661-668, 2010. +Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pp. 627-635, 2011a. +Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pp. 627-635, 2011b. +Stuart J Russell. Learning agents for uncertain environments. In $COLT$ , volume 98, pp. 101-103, 1998. +Yannick Schroecker and Charles L Isbell. State aware imitation learning. In Advances in Neural Information Processing Systems, pp. 2911-2920, 2017. +John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In International Conference on Machine Learning, pp. 1889-1897, 2015. +John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017. +Justin Solomon. Optimal transport on discrete domains. AMS Short Course on Discrete Differential Geometry, 2018. +Wen Sun, Anirudh Vemula, Byron Boots, and J Andrew Bagnell. Provably efficient imitation learning from observation alone. arXiv preprint arXiv:1905.10948, 2019. + +Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026-5033. IEEE, 2012. +Faraz Torabi, Garrett Warnell, and Peter Stone. Generative adversarial imitation from observation. arXiv preprint arXiv:1807.06158, 2018a. +Faraz Torabi, Garrett Warnell, and Peter Stone. Behavioral cloning from observation. arXiv preprint arXiv:1805.01954, 2018b. 
Cédric Villani. Optimal transport: old and new, volume 338. Springer Science & Business Media, 2008.
Huang Xiao, Michael Herman, Joerg Wagner, Sebastian Ziesche, Jalal Etesami, and Thai Hong Linh. Wasserstein adversarial imitation learning. arXiv preprint arXiv:1906.08113, 2019.
Gu Ye and Ron Alterovitz. Guided motion planning. In Robotics research, pp. 291-307. Springer, 2017.
Brian D Ziebart, Andrew Maas, J Andrew Bagnell, and Anind K Dey. Maximum entropy inverse reinforcement learning. 2008.

# A LEARNING ACROSS DIFFERENT ENVIRONMENTS

PointMaze & AntMaze As shown in Figure 4, a point mass or an ant is placed in a $24 \times 24$ U-maze. The task is for the agent to reach the other side of the U-maze given demonstrations from the point mass. The ant is trained to reach a random goal in the maze from a random location, and should then reach the other side of the maze. The state space of the ant is 30-dimensional and contains the positions and velocities.

HeavyAnt Two times the original Ant's density; two times the original armature gear.

LightAnt One tenth of the original Ant's density.

DisabledAnt The two front legs are three quarters the length of the original Ant's legs.

HeavySwimmer 2.5 times the original Swimmer's density.

LightSwimmer One twentieth of the original Swimmer's density.

DisabledSwimmer The last joint is 1.2 times and the first joint 0.7 times the original length.

The exact results for these environments are listed in Tables 4 and 5. All statistics are calculated from 20 trials.

Table 4: Performance on modified Swimmer
| | DisabledSwimmer | LightSwimmer | HeavySwimmer |
| --- | --- | --- | --- |
| BC | 249.09 ± 1.53 | 277.99 ± 3.41 | 255.95 ± 2.5 |
| GAIL | 228.46 ± 2.02 | -4.11 ± 0.51 | 254.91 ± 1.35 |
| AIRL | 283.42 ± 3.69 | 67.58 ± 25.09 | 301.27 ± 5.21 |
| SAIL (Ours) | 287.71 ± 2.31 | 342.61 ± 6.14 | 286.4 ± 3.2 |
Table 5: Performance on modified Ant

| | DisabledAnt | HeavyAnt | LightAnt |
| --- | --- | --- | --- |
| BC | 1042.45 ± 75.13 | 550.6 ± 77.62 | 4936.59 ± 53.42 |
| GAIL | -1033.54 ± 254.36 | -1089.34 ± 174.13 | -971.74 ± 123.14 |
| AIRL | -3252.69 ± 153.47 | -62.02 ± 5.33 | -626.44 ± 104.31 |
| SAIL (Ours) | 3305.71 ± 67.21 | 5608.47 ± 57.67 | 4335.46 ± 82.34 |
# B IMITATION BENCHMARK EXPERIMENT SETTINGS AND RESULTS

We use six MuJoCo (Todorov et al., 2012) control tasks. The name and version of each environment are listed in Table 6, which also lists the state and action dimensions of each task, the expert performance, and the reward threshold indicating the minimum score needed to solve the task. All experts are trained using SAC (Haarnoja et al., 2018), except for Swimmer-v2, where TRPO (Schulman et al., 2015) achieves higher performance.

Table 6: Performance on benchmark control tasks
| Environment | State Dim | Action Dim | Reward threshold | Expert Performance |
| --- | --- | --- | --- | --- |
| Swimmer-v2 | 8 | 2 | 360 | 332 |
| Hopper-v2 | 11 | 3 | 3800 | 3566 |
| Walker2d-v2 | 17 | 6 | - | 4924 |
| Ant-v2 | 111 | 8 | 6000 | 6157 |
| HalfCheetah-v2 | 17 | 6 | 4800 | 12294 |
| Humanoid-v2 | 376 | 17 | 1000 | 5187 |
The exact performance of all methods is listed in Tables 7-12. We compare GAIL (Ho & Ermon, 2016), behavior cloning, GAIL with behavior-cloning initialization, and AIRL against our method. Means and standard deviations are calculated from 20 trajectories after the agents converge; the total number of interactions with the environment is less than one million environment steps.

Table 7: Performance on Swimmer-v2 with different numbers of demonstration trajectories
| Swimmer-v2 (#Demo) | 5 | 10 | 20 | 50 |
| --- | --- | --- | --- | --- |
| Expert | 332.88 ± 1.24 | | | |
| BC | 328.85 ± 2.26 | 331.17 ± 2.4 | 332.17 ± 2.4 | 330.65 ± 2.42 |
| GAIL | 304.64 ± 3.16 | 271.59 ± 11.77 | 56.16 ± 5.99 | 246.73 ± 5.76 |
| BC-GAIL | 313.80 ± 3.42 | 326.58 ± 7.87 | 294.93 ± 12.21 | 315.68 ± 9.99 |
| AIRL | 332.11 ± 2.57 | 338.43 ± 3.65 | 335.67 ± 2.72 | 340.08 ± 2.70 |
| Our init | 332.36 ± 3.62 | 335.78 ± 0.34 | 336.23 ± 2.53 | 334.03 ± 2.11 |
| Our final | 332.22 ± 3.23 | 339.67 ± 3.21 | 336.18 ± 1.87 | 336.31 ± 3.20 |
Table 8: Performance on Hopper-v2 with different numbers of demonstration trajectories
| Hopper-v2 (#Demo) | 5 | 10 | 20 | 50 |
| --- | --- | --- | --- | --- |
| Expert | 3566 ± 1.24 | | | |
| BC | 1471.40 ± 637.25 | 1318.76 ± 804.36 | 1282.46 ± 772.24 | 3525.87 ± 160.74 |
| GAIL | 3300.32 ± 331.61 | 3372.66 ± 130.75 | 3201.97 ± 295.27 | 3363.97 ± 262.77 |
| BC-GAIL | 3122.23 ± 358.65 | 3132.11 ± 520.65 | 3111.42 ± 414.28 | 3130.82 ± 554.54 |
| AIRL | 4.12 ± 0.01 | 3.07 ± 0.02 | 4.11 ± 0.01 | 3.31 ± 0.02 |
| Our init | 2322.49 ± 300.93 | 3412.58 ± 450.97 | 3314.03 ± 310.32 | 3601.16 ± 300.14 |
| Our final | 3092.26 ± 670.72 | 3539.56 ± 130.36 | 3516.81 ± 280.98 | 3610.19 ± 150.74 |
Table 9: Performance on Walker2d-v2 with different numbers of demonstration trajectories
| Walker2d-v2 (#Demo) | 5 | 10 | 20 | 50 |
| --- | --- | --- | --- | --- |
| Expert | 5070.97 ± 209.19 | | | |
| BC | 1617.34 ± 693.63 | 4425.50 ± 930.62 | 4689.30 ± 372.33 | 4796.24 ± 490.05 |
| GAIL | 1307.21 ± 388.55 | 692.16 ± 145.34 | 1991.58 ± 446.66 | 751.21 ± 150.18 |
| BC-GAIL | 3454.91 ± 792.40 | 2094.68 ± 1425.05 | 3482.31 ± 828.21 | 2896.50 ± 828.18 |
| AIRL | -7.13 ± 0.11 | -7.39 ± 0.09 | -3.74 ± 0.13 | -4.64 ± 0.09 |
| Our init | 1859.10 ± 720.44 | 2038.90 ± 260.78 | 4509.82 ± 1470.65 | 4757.58 ± 880.45 |
| Our final | 2681.20 ± 530.67 | 3764.14 ± 470.01 | 4778.82 ± 760.34 | 4780.73 ± 360.66 |
Table 10: Performance on Ant-v2 with different numbers of demonstration trajectories
| Ant-v2 (#Demo) | 5 | 10 | 20 | 50 |
| --- | --- | --- | --- | --- |
| Expert | 6190.90 ± 254.18 | | | |
| BC | 3958.20 ± 661.28 | 3948.88 ± 753.41 | 5424.01 ± 473.05 | 5852.79 ± 572.97 |
| GAIL | 340.02 ± 59.02 | 335.25 ± 89.19 | 314.35 ± 52.13 | 284.18 ± 32.40 |
| BC-GAIL | -1081.30 ± 673.65 | -1177.27 ± 618.67 | -13618.45 ± 4237.79 | -1166.16 ± 1246.79 |
| AIRL | -839.32 ± 301.54 | -386.43 ± 156.98 | -586.07 ± 145.43 | -393.90 ± 145.13 |
| Our init | 1150.82 ± 200.87 | 3015.43 ± 300.70 | 5200.58 ± 870.74 | 5849.88 ± 890.56 |
| Our final | 1693.59 ± 350.74 | 3983.34 ± 250.99 | 5980.37 ± 420.16 | 5988.65 ± 470.03 |
Table 11: Performance on HalfCheetah-v2 with different numbers of demonstration trajectories
| HalfCheetah-v2 (#Demo) | 5 | 10 | 20 | 50 |
| --- | --- | --- | --- | --- |
| Expert | 12294.22 ± 208.41 | | | |
| BC | 225.42 ± 147.16 | 971.42 ± 249.62 | 2782.76 ± 959.67 | 4813.20 ± 1949.26 |
| GAIL | -84.92 ± 43.29 | 474.42 ± 389.30 | -116.70 ± 34.14 | -175.83 ± 26.76 |
| BC-GAIL | 1362.59 ± 1255.57 | 578.85 ± 934.34 | 3744.32 ± 1471.90 | 1597.51 ± 1173.93 |
| AIRL | 782.36 ± 48.98 | -146.46 ± 23.57 | 1437.25 ± 25.45 | 755.46 ± 10.92 |
| Our init | 267.71 ± 90.38 | 1064.44 ± 227.32 | 3200.80 ± 520.04 | 7102.74 ± 910.54 |
| Our final | 513.66 ± 15.31 | 1616.34 ± 180.76 | 6059.27 ± 344.41 | 8817.32 ± 860.55 |
Table 12: Performance on Humanoid-v2 with different numbers of demonstration trajectories
| Humanoid-v2 (#Demo) | 5 | 10 | 20 | 50 |
| --- | --- | --- | --- | --- |
| Expert | 5286.21 ± 145.98 | | | |
| BC | 1521.55 ± 272.14 | 3491.07 ± 518.64 | 4686.05 ± 355.74 | 4746.88 ± 605.61 |
| GAIL | 485.92 ± 27.59 | 486.44 ± 27.18 | 477.15 ± 22.07 | 481.14 ± 24.37 |
| BC-GAIL | 363.68 ± 44.44 | 410.03 ± 33.07 | 487.99 ± 30.77 | 464.91 ± 33.21 |
| AIRL | 79.72 ± 4.27 | 87.15 ± 5.01 | -1293.86 ± 10.70 | 84.84 ± 6.46 |
| Our init | 452.31 ± 190.12 | 1517.63 ± 110.45 | 4610.25 ± 2750.86 | 4776.83 ± 1320.46 |
| Our final | 1225.58 ± 210.88 | 2190.43 ± 280.18 | 4716.91 ± 680.29 | 4780.07 ± 700.01 |
# C HYPER-PARAMETERS AND NETWORK ARCHITECTURE

When we pretrain the policy network with our method, we choose $\beta = 0.05$ in the $\beta$-VAE. We use Adam with learning rate 3e-4 as the optimizer for all experiments. The policy and value networks in all algorithms are three-layer ReLU networks with hidden size 256. We choose $\sigma = 0.1$ for the policy prior in all environments.

# D COMPARISON WITH AIRL (FU ET AL.) FROM A THEORETICAL PERSPECTIVE

Here we illustrate, by an example, the theoretical advantage of our SAIL algorithm over AIRL in certain scenarios.

The theory of AIRL shows that, when the adversarial learning reaches its equilibrium, AIRL recovers the ground-truth reward of an MDP up to a constant, provided the reward of the MDP is defined on states only. Next we show a basic case that violates this theoretical assumption of AIRL but can be solved by our algorithm.

![](images/33fc3b0fceaaf9513cad700c1be78c654f17bb68f26b412d4330e5895c6df133.jpg)
Figure 6: Two-ring MDP with deterministic transitions

Figure 6 shows the states and transitions of an MDP. The demonstration policy jumps back and forth between $s_1$ and $s_2$ periodically. Because our algorithm has the action prior (local alignment), it is clear that it can solve this problem. The dynamics of many periodic games, such as Walker and HalfCheetah in MuJoCo, are extensions of this two-ring graph.

It is easy to show that it is impossible for the adversarial game in AIRL to solve this problem at equilibrium.
According to Sec. 6 of Fu et al., the reward family of AIRL is parameterized as

$$
f _ {\theta} \left(s, s ^ {\prime}\right) = g (s) + \gamma h \left(s ^ {\prime}\right) - h (s) \tag {8}
$$

For simplicity of notation, let $\phi (s) = g(s) - h(s)$ and $\psi (s) = \gamma h(s)$; then

$$
f _ {\theta} (s, s ^ {\prime}) = \phi (s) + \psi \left(s ^ {\prime}\right) \tag {9}
$$

In other words, the reward of AIRL decomposes into the sum of two functions defined on states only.

Again, for simplicity, we omit the arguments of functions and use subscripts to denote states; for example, $f_{12} = f(s_1,s_2)$ and $\phi_{1} = \phi (s_{1})$. Then,

$$
f _ {1 2} = \phi_ {1} + \psi_ {2}, \quad f _ {1 1} = \phi_ {1} + \psi_ {1} \tag {10}
$$

$$
f _ {2 1} = \phi_ {2} + \psi_ {1}, \quad f _ {2 2} = \phi_ {2} + \psi_ {2}
$$

Assume that AIRL has reached the equilibrium and learned the optimal policy; then it must be true that $f_{12} > f_{11}$ and $f_{21} > f_{22}$ (otherwise, there exist other optimal policies). But $f_{12} > f_{11}$ implies that $\psi_2 > \psi_1$, while $f_{21} > f_{22}$ implies that $\psi_1 > \psi_2$, which is a contradiction.
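The contradiction can also be checked mechanically: for any choice of $\phi$ and $\psi$, the two margins $f_{12} - f_{11}$ and $f_{21} - f_{22}$ cancel exactly, so they can never both be positive. A minimal numeric sketch (the sampled values of $\phi$ and $\psi$ are arbitrary, purely for illustration):

```python
import random

def f(phi, psi, i, j):
    # AIRL's state-only reward family (Eq. 9): f(s_i, s_j) = phi(s_i) + psi(s_j)
    return phi[i] + psi[j]

random.seed(0)
for _ in range(1000):
    phi = [random.uniform(-10, 10) for _ in range(2)]
    psi = [random.uniform(-10, 10) for _ in range(2)]
    m12 = f(phi, psi, 0, 1) - f(phi, psi, 0, 0)  # f12 - f11 = psi2 - psi1
    m21 = f(phi, psi, 1, 0) - f(phi, psi, 1, 1)  # f21 - f22 = psi1 - psi2
    # The two margins always cancel, so both can never be strictly positive:
    assert abs(m12 + m21) < 1e-9
    assert not (m12 > 0 and m21 > 0)
```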
\ No newline at end of file diff --git a/statealignmentbasedimitationlearning/images.zip b/statealignmentbasedimitationlearning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..b6b867d3fd96396e037ef3ff8d9dbe78078d47ff --- /dev/null +++ b/statealignmentbasedimitationlearning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61bed8476a2e74c81d347d324f1b32e4e7507c87bd0744ead7a0cf6f035b91e4 +size 816661 diff --git a/statealignmentbasedimitationlearning/layout.json b/statealignmentbasedimitationlearning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d09970e7dfc62a17e2a7c1ced17810cf18f04068 --- /dev/null +++ b/statealignmentbasedimitationlearning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5b36c6793f6ba866d604f6be5ec6d038fa7ab90768657eedf806a648092ec2d1 +size 493098 diff --git a/stateonlyimitationwithtransitiondynamicsmismatch/e3efde75-117b-4385-86dd-e13a52c794d3_content_list.json b/stateonlyimitationwithtransitiondynamicsmismatch/e3efde75-117b-4385-86dd-e13a52c794d3_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..9ce5f47c8d6f9378e685ad4bbbb9644c8600c6a6 --- /dev/null +++ b/stateonlyimitationwithtransitiondynamicsmismatch/e3efde75-117b-4385-86dd-e13a52c794d3_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c4684e6be63416d06e3d8ae68a8f5ecb377de564c2d70a15cfa42d57ba68ebc3 +size 96997 diff --git a/stateonlyimitationwithtransitiondynamicsmismatch/e3efde75-117b-4385-86dd-e13a52c794d3_model.json b/stateonlyimitationwithtransitiondynamicsmismatch/e3efde75-117b-4385-86dd-e13a52c794d3_model.json new file mode 100644 index 0000000000000000000000000000000000000000..0a32acbaf2947192dde0a7b07b5bf28897add87b --- /dev/null +++ b/stateonlyimitationwithtransitiondynamicsmismatch/e3efde75-117b-4385-86dd-e13a52c794d3_model.json @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:ea94848ede4da2c5ba1871dcee3e7e3a1eb56784149aa67f01e26a6d33c4b9aa +size 112992 diff --git a/stateonlyimitationwithtransitiondynamicsmismatch/e3efde75-117b-4385-86dd-e13a52c794d3_origin.pdf b/stateonlyimitationwithtransitiondynamicsmismatch/e3efde75-117b-4385-86dd-e13a52c794d3_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..17154775598abfc2064d6ce3d7ba3aad906ddd53 --- /dev/null +++ b/stateonlyimitationwithtransitiondynamicsmismatch/e3efde75-117b-4385-86dd-e13a52c794d3_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f0bfd36ab6f2219e3531747bb917247e4f7c43bcf554b352fad228ce265f5002 +size 3158017 diff --git a/stateonlyimitationwithtransitiondynamicsmismatch/full.md b/stateonlyimitationwithtransitiondynamicsmismatch/full.md new file mode 100644 index 0000000000000000000000000000000000000000..813dba50adaec435adebe20e01507b676b351670 --- /dev/null +++ b/stateonlyimitationwithtransitiondynamicsmismatch/full.md @@ -0,0 +1,417 @@ +# STATE-ONLY IMITATION WITH TRANSITION DYNAMICS MISMATCH + +Tanmay Gangwani + +Department of Computer Science + +University of Illinois, Urbana-Champaign + +gangwan2@illinois.edu + +Jian Peng + +Department of Computer Science + +University of Illinois, Urbana-Champaign + +jianpeng@illinois.edu + +# ABSTRACT + +Imitation Learning (IL) is a popular paradigm for training agents to achieve complicated goals by leveraging expert behavior, rather than dealing with the hardships of designing a correct reward function. With the environment modeled as a Markov Decision Process (MDP), most of the existing IL algorithms are contingent on the availability of expert demonstrations in the same MDP as the one in which a new imitator policy is to be learned. This is uncharacteristic of many real-life scenarios where discrepancies between the expert and the imitator MDPs are common, especially in the transition dynamics function. 
Furthermore, obtaining expert actions may be costly or infeasible, making the recent trend towards state-only IL (where expert demonstrations constitute only states or observations) ever so promising. Building on recent adversarial imitation approaches that are motivated by the idea of divergence minimization, we present a new state-only IL algorithm in this paper. It divides the overall optimization objective into two subproblems by introducing an indirect step and solves the subproblems iteratively. We show that our algorithm is particularly effective when there is a transition dynamics mismatch between the expert and imitator MDPs, while the baseline IL methods suffer from performance degradation. To analyze this, we construct several interesting MDPs by modifying the configuration parameters for the MuJoCo locomotion tasks from OpenAI Gym1. + +# 1 INTRODUCTION + +In the Reinforcement Learning (RL) framework, the objective is to train policies that maximize a certain reward criterion. Deep-RL, which combines RL with the recent advances in the field of deep-learning, has produced algorithms demonstrating remarkable success in areas such as games (Mnih et al., 2015; Silver et al., 2016), continuous control (Lillicrap et al., 2015), and robotics (Levine et al., 2016), to name a few. However, the application of these algorithms beyond controlled simulation environments has been fairly modest; one of the reasons being that manual specification of a good reward function is a hard problem. Imitation Learning (IL) algorithms (Pomerleau, 1991; Ng et al., 2000; Ziebart et al., 2008; Ho & Ermon, 2016) address this issue by replacing reward functions with expert demonstrations, which are easier to collect in most scenarios. 
The conventional setting used in most of the IL literature is the availability of state-action trajectories from the expert, $\tau := \{s_0, a_0, \dots, s_T, a_T\}$, collected in an environment modeled as a Markov decision process (MDP) with transition dynamics $\mathcal{T}^{\mathrm{exp}}$. These dynamics govern the distribution over the next state, given the current state and action. The IL objective is to leverage $\tau$ to train an imitator policy in the same MDP as the expert. This is a severe requirement that impedes the wider applicability of IL algorithms. In many practical scenarios, the transition dynamics of the environment in which the imitator policy is learned (henceforth denoted by $\mathcal{T}^{\mathrm{pol}}$) is different from the dynamics of the environment used to collect expert behavior, $\mathcal{T}^{\mathrm{exp}}$. Consider self-driving cars as an example, where the goal is to learn autonomous navigation on a vehicle with slightly different gear-transmission characteristics than the vehicle used to obtain human driving demonstrations. We therefore strive

![](images/de6275347512fe77f4f6d67abbddf1bb5faeae3efc0f5235ef793639697415fc.jpg)
(a)

![](images/1f9ef9fdcabf9e1b4b6e9f2628f3882303d3860c9d35c69bdd20e0ad857481b1.jpg)
Figure 1: (a) A different amount of gravitational pull is one example of a transition dynamics mismatch between the expert and the imitator MDPs. (b) An expert policy $\pi_{e}^{*}$ trained in $\mathcal{T}^{\mathrm{exp}}$ transfers poorly to an environment with dissimilar dynamics $\mathcal{T}^{\mathrm{pol}}$ (gravity $0.5\times$). (c) IL performance with GAIL degrades when $\mathcal{T}^{\mathrm{exp}} \neq \mathcal{T}^{\mathrm{pol}}$, compared to the conventional IL setting of imitating in the same environment as the expert.
+ +![](images/abc81b5e2b044bf48263bdee331a51fdc53f38d3e842bf34903230fc8c305985.jpg) +(b) + +![](images/c12c33e8cbe230f833bf2d40ff59e5d80088f956f150d375ef28b5ec7ab00e41.jpg) +(c) + +for an IL method that could train agents under a transition dynamics mismatch, $\mathcal{T}^{\mathrm{exp}}\neq \mathcal{T}^{\mathrm{pol}}$ . We assume that other MDP attributes are the same for the expert and imitator environments. + +Beyond the dynamics equivalence, another assumption commonly used in IL literature is the availability of expert actions (along with the states). A few recent works (Torabi et al., 2018a;b; Sun et al., 2019) have proposed "state-only" IL algorithms, where expert demonstrations do not include the actions. This opens up the possibility of employing IL to situations such as kinesthetic teaching in robotics and learning from weak-supervision sources such as videos. Moreover, if $\mathcal{T}^{\mathrm{exp}}$ and $\mathcal{T}^{\mathrm{pol}}$ differ, then the expert actions, even if available, are not quite useful for imitation anyway, since the application of an expert action from any state leads to different next-state distributions for the expert and the imitator. Hence, our algorithm uses state-only expert demonstrations. + +We build on previous IL literature inspired by GAN-based adversarial learning - GAIL (Ho & Ermon, 2016) and AIRL (Fu et al., 2017). In both these methods, the objective is to minimize the distance between the visitation distributions $(\rho)$ induced by the policy and expert, under some suitable metric $d$ , such as Jensen-Shannon divergence. We classify GAIL and AIRL as direct imitation methods as they directly reduce $d(\rho_{\pi}, \rho^{*})$ . Different from these, we propose an indirect imitation approach which introduces another distribution $\tilde{\rho}$ as an intermediate or indirection step. 
In slight detail, starting with the Max-Entropy Inverse-RL objective (Ziebart et al., 2008), we derive a lower bound which transforms the overall IL problem into two sub-parts which are solved iteratively: the first is to train a policy to imitate a distribution $\tilde{\rho}$ represented by a trajectory buffer, and the second is to move the buffer distribution closer to expert's $(\rho^{*})$ over the course of training. The first part, which is policy imitation by reducing $d(\rho_{\pi}, \tilde{\rho})$ is done with AIRL, while the second part, which is reducing $d(\tilde{\rho}, \rho^{*})$ , is achieved using a Wasserstein critic (Arjovsky et al., 2017). We abbreviate our approach as I2L, for indirect-imitation-learning. + +We test the efficacy of our algorithm with continuous-control locomotion tasks from MuJoCo. Figure 1a depicts one example of the dynamics mismatch which we evaluate in our experiments. For the Ant agent, an expert walking policy $\pi_e^*$ is trained under the default dynamics provided in the OpenAI Gym, $\mathcal{T}^{\mathrm{exp}} = \mathrm{Earth}$ . The dynamics under which to learn the imitator policy are curated by modifying the gravity parameter to half its default value (i.e. $\frac{9.81}{2}$ ), $\mathcal{T}^{\mathrm{pol}} = \mathrm{PlanetX}$ . Figure 1b plots the average episodic returns of $\pi_e^*$ in the original and modified environments, and proves that direct policy transfer is infeasible. For Figure 1c, we just assume access to state-only expert demonstrations from $\pi_e^*$ , and do IL with the GAIL algorithm. GAIL performs well if the imitator policy is learned in the same environment as the expert ( $\mathcal{T}^{\mathrm{exp}} = \mathcal{T}^{\mathrm{pol}} = \mathrm{Earth}$ ), but does not succeed under mismatched transition dynamics, ( $\mathcal{T}^{\mathrm{exp}} = \mathrm{Earth}$ , $\mathcal{T}^{\mathrm{pol}} = \mathrm{PlanetX}$ ). 
In our experiments section, we consider other sources of dynamics mismatch as well, such as agent density and joint friction. We show that I2L trains much better policies than baseline IL algorithms in these tasks, leading to a successful transfer of expert skills to an imitator in an environment dissimilar to the expert's.

We start by reviewing the relevant background on Max-Entropy IRL, GAIL, and AIRL, since these methods form an integral part of our overall algorithm.

# 2 BACKGROUND

An RL environment modeled as an MDP is characterized by the tuple $(\mathcal{S},\mathcal{A},\mathcal{R},\mathcal{T},\gamma)$, where $\mathcal{S}$ is the state-space and $\mathcal{A}$ is the action-space. Given an action $a_{t} \in \mathcal{A}$, the next state is governed by the transition dynamics $s_{t+1} \sim \mathcal{T}(s_{t+1}|s_t,a_t)$, and the reward is computed as $r_t = \mathcal{R}(s_t,a_t)$. The RL objective is to maximize the expected discounted sum of rewards, $\eta(\pi_\theta) = \mathbb{E}_{p_0,\mathcal{T},\pi}\left[\sum_{t=0}^\infty \gamma^t r(s_t,a_t)\right]$, where $\gamma \in (0,1]$ is the discount factor, and $p_0$ is the initial state distribution. We define the unnormalized $\gamma$-discounted state-visitation distribution for a policy $\pi$ by $\rho_\pi(s) = \sum_{t=0}^\infty \gamma^t P(s_t=s|\pi)$, where $P(s_t=s|\pi)$ is the probability of being in state $s$ at time $t$ when following policy $\pi$, starting from $s_0 \sim p_0$. The expected policy return $\eta(\pi_\theta)$ can then be written as $\mathbb{E}_{\rho_\pi(s,a)}[r(s,a)]$, where $\rho_\pi(s,a) = \rho_\pi(s)\pi(a|s)$ is the state-action visitation distribution (also referred to as the occupancy measure). For any policy $\pi$, there is a one-to-one correspondence between $\pi$ and its occupancy measure (Puterman, 1994).

# 2.1 MAXIMUM ENTROPY IRL

Designing reward functions that adequately capture the task intentions is a laborious and error-prone procedure.
An alternative is to train agents to solve a particular task by leveraging demonstrations of that task by experts. Inverse Reinforcement Learning (IRL) algorithms (Ng et al., 2000; Russell, 1998) aim to infer the reward function from expert demonstrations, and then use it for RL or planning. The IRL method, however, has an inherent ambiguity, since many expert policies could explain a set of provided demonstrations. To resolve this, Ziebart (2010) proposed the Maximum Causal Entropy (MaxEnt) IRL framework, where the objective is to learn a reward function such that the resulting policy matches the provided expert demonstrations in the expected feature counts $f$ , while being as random as possible: + +$$ +\max _ {\pi} \mathcal {H} (\pi) \quad \mathrm {s . t .} \quad \mathbb {E} _ {s, a \sim \pi} [ \mathbf {f} (s, a) ] = \hat {\mathbf {f}} _ {\mathrm {d e m o}} +$$ + +where $\mathcal{H}(\pi) = \mathbb{E}_{\pi}[-\log \pi(a|s)]$ is the $\gamma$ -discounted causal entropy, and $\hat{\mathbf{f}}_{\mathrm{demo}}$ denotes the empirical feature counts of the expert. 
This constrained optimization problem is solved by minimizing the Lagrangian dual, resulting in the maximum entropy policy: $\pi_{\theta}(a|s) = \exp(Q_{\theta}^{\mathrm{soft}}(s,a) - V_{\theta}^{\mathrm{soft}}(s))$ , where $\theta$ is the Lagrangian multiplier on the feature matching constraint, and $Q_{\theta}^{\mathrm{soft}}, V_{\theta}^{\mathrm{soft}}$ are the soft value functions such that the following equations hold (please see Theorem 6.8 in Ziebart (2010)): + +$$ +Q _ {\theta} ^ {\mathrm {s o f t}} (s, a) = \underbrace {\theta^ {T} \mathbf {f} (s , a)} _ {r (s, a)} + \mathbb {E} _ {p (s ^ {\prime} | s, a)} [ V _ {\theta} ^ {\mathrm {s o f t}} (s ^ {\prime}) ] \quad , \quad V _ {\theta} ^ {\mathrm {s o f t}} (s) = \operatorname {s o f t m a x} _ {a} Q _ {\theta} ^ {\mathrm {s o f t}} (s, a) +$$ + +Inspired by the energy-based formulation of the maximum entropy policy described above, $\pi_{\theta}(a|s) = \exp (Q_{\theta}^{\mathrm{soft}}(s,a) - V_{\theta}^{\mathrm{soft}}(s))$ , recent methods (Finn et al., 2016; Haarnoja et al., 2017; Fu et al., 2017) have proposed to model complex, multi-modal action distributions using energy-based policies, $\pi (a|s)\propto \exp (f_{\omega}(s,a))$ , where $f_{\omega}(s,a)$ is represented by a universal function approximator, such as a deep neural network. We can then interpret the IRL problem as a maximum likelihood estimation problem: + +$$ +\max _ {\omega} \mathbb {E} _ {\tau \sim \operatorname {d e m o}} [ \log p _ {\omega} (\tau) ] \quad \text {w i t h}, \quad p _ {\omega} (\tau) = \frac {p \left(s _ {0}\right) \prod_ {t} p \left(s _ {t + 1} \mid s _ {t} , a _ {t}\right) e ^ {f _ {\omega} \left(s _ {t} , a _ {t}\right)}}{Z (\omega)} \tag {1} +$$ + +# 2.2 ADVERSARIAL IRL + +An important implication of casting IRL as maximum likelihood estimation is that it connects IRL to adversarial training. We now briefly discuss AIRL (Fu et al., 2017) since it forms a component of our proposed algorithm. 
AIRL builds on GAIL (Ho & Ermon, 2016), a well-known adversarial imitation learning algorithm. GAIL frames IL as an occupancy-measure matching (or divergence minimization) problem. Let $\rho_{\pi}(s,a)$ and $\rho_{E}(s,a)$ represent the state-action visitation distributions of the policy and the expert, respectively. Minimizing the Jensen-Shannon divergence, $\min_{\pi}D_{JS}[\rho_{\pi}(s,a)\,\|\,\rho_{E}(s,a)]$, recovers a policy with a trajectory distribution similar to the expert's. GAIL iteratively trains a policy $(\pi_{\theta})$ and a discriminator $(D_{\omega}:\mathcal{S}\times \mathcal{A}\to (0,1))$ to optimize a min-max objective similar to GANs (Goodfellow et al., 2014):

$$
\min _ {\theta} \max _ {\omega} \mathbb {E} _ {(s, a) \sim \rho_ {E}} \left[ \log D _ {\omega} (s, a) \right] + \mathbb {E} _ {(s, a) \sim \pi_ {\theta}} \left[ \log \left(1 - D _ {\omega} (s, a)\right) \right] - \lambda \mathcal {H} (\pi_ {\theta}) \tag {2}
$$

GAIL learns a policy that behaves similarly to the expert demonstrations, but it bypasses the process of recovering the expert reward function. Finn et al. (2016) showed that imposing a special structure on the discriminator makes the adversarial GAN training equivalent to optimizing the MLE objective (Equation 1). Furthermore, if trained to optimality, it is proved that the expert reward (up to a constant) can be recovered from the discriminator. They operate in a trajectory-centric formulation, which can be inefficient for high-dimensional state- and action-spaces. Fu et al. (2017) present AIRL, which remedies this by proposing analogous changes to the discriminator, but operating on a single state-action pair:

$$
D _ {\omega} (s, a) = \frac {e ^ {f _ {\omega} (s , a)}}{e ^ {f _ {\omega} (s , a)} + \pi_ {\theta} (a | s)} \tag {3}
$$

Similar to GAIL, the discriminator is trained to maximize the objective in Equation 2; $f_{\omega}$ is learned, whereas the value of $\pi(a|s)$ is "filled in".
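The discriminator structure in Equation 3, together with the reward $\log D_{\omega} - \log(1 - D_{\omega})$ that the policy is trained on, can be sketched as follows; note that $\log D - \log(1-D)$ reduces algebraically to $f_{\omega}(s,a) - \log \pi_{\theta}(a|s)$ (the scalar inputs below are placeholders, not values from the paper):

```python
import math

def airl_discriminator(f_sa, log_pi):
    # Equation 3: D(s, a) = exp(f(s, a)) / (exp(f(s, a)) + pi(a|s))
    return math.exp(f_sa) / (math.exp(f_sa) + math.exp(log_pi))

def airl_policy_reward(f_sa, log_pi):
    # log D - log(1 - D); algebraically equal to f(s, a) - log pi(a|s)
    d = airl_discriminator(f_sa, log_pi)
    return math.log(d) - math.log(1.0 - d)

# Placeholder values for f_omega(s, a) and log pi_theta(a|s):
f_sa, log_pi = 1.3, -0.7
assert abs(airl_policy_reward(f_sa, log_pi) - (f_sa - log_pi)) < 1e-9
```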
The policy is optimized jointly using any RL algorithm with $\log D_{\omega} - \log (1 - D_{\omega})$ as rewards. When trained to optimality, $\exp(f_{\omega}(s,a)) = \pi^{*}(a|s) = \exp(A_{\mathrm{soft}}^{*}(s,a)/\alpha)$ ; hence $f_{\omega}$ recovers the soft advantage of the expert policy (up to a constant). + +# 2.3 STATE-ONLY IMITATION + +State-only IL algorithms extend the scope of applicability of IL by relieving the need for expert actions in the demonstrations. The original GAIL approach could be modified to work in the absence of actions. Specifically, Equation 2 could be altered to use a state-dependent discriminator $D_{\omega}(s)$ , and state-visitation (instead of state-action-visitation) distributions $\rho_E(s)$ and $\rho_{\pi_\theta}(s)$ . The AIRL algorithm, however, requires expert actions due to the special structure enforced on the discriminator (Equation 3), deeming it incompatible with state-only IL. This is because, even though $f_{\omega}$ could potentially be made a function of only the state $s$ , actions are still needed for the "filled in" $\pi_\theta(a|s)$ component. Inspired by GAIL, Torabi et al. (2018b) proposed GAIfO for state-only IL. The motivation is to train the imitator to perform actions that have similar effects in the environment, rather than mimicking the expert action. Algorithmically, GAIL is modified to make the discriminator a function of state transitions $D_{\omega}(s,s')$ , and include state-transition distributions $\rho(s,s')$ . + +# 3 INDIRECT IMITATION LEARNING (I2L) + +We now detail our I2L algorithm which alters the standard IL routine (used by GAIL, AIRL) by introducing an intermediate or indirection step, through a new distribution represented by a trajectory buffer. For this section, we ignore the properties of the transition dynamics for the expert and the imitator MDPs $(\mathcal{T}^{\mathrm{exp}},\mathcal{T}^{\mathrm{pol}})$ ; they can be the same or different, I2L has no specific dependence on this. 
$\tau$ denotes a trajectory, which is a sequence of state-action pairs, $\{s_0,a_0,\dots ,s_T,a_T\}$. We begin with the expert's (unknown) trajectory distribution, although our final algorithm works with state-only expert demonstrations.

Let the trajectory distribution induced by the expert be $p^*(\tau)$, and its state-action visitation distribution be $\rho^*(s, a)$. Using the parameterization from Equation 1, the likelihood objective to maximize for reward learning in MaxEnt-IRL can be written as (ignoring constants w.r.t $\omega$):

$$
\mathbb {E} _ {\tau \sim p ^ {*} (\tau)} [ \log p _ {\omega} (\tau) ] = \mathbb {E} _ {(s, a) \sim \rho^ {*}} [ f _ {\omega} (s, a) ] - \log Z (\omega) \tag {4}
$$

As alluded to in Sections 2.2-2.3, if expert actions were available, one could optimize for $\omega$ by solving an equivalent adversarial min-max objective, as done in AIRL. To handle state-only IL, we proceed to derive a lower bound to this objective and optimize that instead. Let there be a surrogate policy $\tilde{\pi}$ with a state-action distribution $\tilde{\rho}(s,a)$. The following proposition provides a lower bound to the likelihood objective in Equation 4.

Proposition. Under mild assumptions of Lipschitz continuity of the function $f_{\omega}$, we have that for two different state-action distributions $\rho^{*}$ and $\tilde{\rho}$,

$$
\mathbb {E} _ {(s, a) \sim \rho^ {*}} [ f _ {\omega} (s, a) ] \geq \mathbb {E} _ {(s, a) \sim \tilde {\rho}} [ f _ {\omega} (s, a) ] - L W _ {1} (\rho^ {*}, \tilde {\rho})
$$

where $L$ is the Lipschitz constant, and $W_{1}(\rho^{*},\tilde{\rho})$ is the 1-Wasserstein (or Earth Mover's) distance between the state-action distributions.

Proof. Let $x \coloneqq s \oplus a$ denote the concatenation of state and action.
Under the Lipschitz continuity assumption on $f_{\omega}(x)$, for any two inputs $x \sim X$ and $x' \sim X'$, we have

$$
f _ {\omega} \left(x ^ {\prime}\right) - f _ {\omega} (x) \leq L \| \left(x ^ {\prime} - x\right) \| _ {1}
$$

Let $\mu(X, X')$ be any joint distribution over the random variables representing the two inputs, such that the marginals are $\rho^{*}(X)$ and $\tilde{\rho}(X')$. Taking the expectation w.r.t $\mu$ on both sides, we get

$$
\mathbb {E} _ {x ^ {\prime} \sim \tilde {\rho}} [ f _ {\omega} (x ^ {\prime}) ] - \mathbb {E} _ {x \sim \rho^ {*}} [ f _ {\omega} (x) ] \leq L \mathbb {E} _ {\mu} \| (x ^ {\prime} - x) \| _ {1}
$$

Since the above inequality holds for any $\mu$, it also holds for $\mu^{*} = \arg \min_{\mu}\mathbb{E}_{\mu}\| (x^{\prime} - x)\|_{1}$, which gives us the 1-Wasserstein distance

$$
\mathbb {E} _ {x ^ {\prime} \sim \tilde {\rho}} \left[ f _ {\omega} \left(x ^ {\prime}\right) \right] - \mathbb {E} _ {x \sim \rho^ {*}} \left[ f _ {\omega} (x) \right] \leq L W _ {1} \left(\rho^ {*}, \tilde {\rho}\right)
$$

Rearranging terms,

$$
\mathbb {E} _ {x \sim \rho^ {*}} [ f _ {\omega} (x) ] \geq \mathbb {E} _ {x ^ {\prime} \sim \tilde {\rho}} [ f _ {\omega} \left(x ^ {\prime}\right) ] - L W _ {1} (\rho^ {*}, \tilde {\rho})
$$

![](images/77fa45067044477f025d4b2c878b964b168ab2b13d868b798175ea3337e71c09.jpg)

We can therefore lower bound the likelihood objective (Equation 4) as:

$$
\mathbb {E} _ {\tau \sim p ^ {*} (\tau)} [ \log p _ {\omega} (\tau) ] \geq \mathbb {E} _ {\tau \sim \tilde {p} (\tau)} [ \log p _ {\omega} (\tau) ] - L W _ {1} (\rho^ {*}, \tilde {\rho})
$$

where $\tilde{p} (\tau)$ is the trajectory distribution induced by the surrogate policy $\tilde{\pi}$.
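The proposition can be sanity-checked numerically in one dimension, where the 1-Wasserstein distance equals the integral of the absolute difference of the CDFs (the grid, distributions, and 1-Lipschitz test function below are toy stand-ins, not the paper's):

```python
import numpy as np

rng = np.random.RandomState(1)
x = np.linspace(-3, 3, 61)          # common 1-D support grid ("states")
dx = x[1] - x[0]
p = rng.rand(61); p /= p.sum()      # stand-in for rho* (expert)
q = rng.rand(61); q /= q.sum()      # stand-in for rho~ (surrogate)

f = np.abs(x)                       # a 1-Lipschitz test function (L = 1)
# In 1-D, W1 between two distributions is the integral of |CDF difference|.
w1 = np.sum(np.abs(np.cumsum(p) - np.cumsum(q))) * dx

lhs = np.sum(p * f)                 # E_{rho*}[f]
rhs = np.sum(q * f) - 1.0 * w1      # E_{rho~}[f] - L * W1
assert lhs >= rhs - 1e-12           # the proposition's lower bound holds
```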
Since the original optimization (Equation 1) is infeasible under the AIRL framework in the absence of expert actions, we instead maximize the lower bound, which is to solve the surrogate problem: + +$$ +\max _ {\omega , \tilde {\rho}} \mathbb {E} _ {\tau \sim \tilde {p} (\tau)} \left[ \log p _ {\omega} (\tau) \right] - L W _ {1} \left(\rho^ {*}, \tilde {\rho}\right) \tag {5} +$$ + +This objective can be intuitively understood as follows. Optimizing w.r.t $\omega$ recovers the reward (or soft advantage) function of the surrogate policy $\tilde{\pi}$ , in the same spirit as MaxEnt-IRL. Optimizing w.r.t $\tilde{\rho}$ brings the state-action distribution of $\tilde{\pi}$ close (in 1-Wasserstein metric) to the expert's, along with a bias term that increases the log-likelihood of trajectories from $\tilde{\pi}$ , under the current reward model $\omega$ . We now detail the practical implementation of these optimizations. + +Surrogate policy. We do not use a separate explicit parameterization for $\tilde{\pi}$ . Instead, $\tilde{\pi}$ is implicitly represented by a buffer $\mathcal{B}$ , with a fixed capacity of $k$ trajectories2. In this way, $\tilde{\pi}$ can be viewed as a mixture of deterministic policies, each representing a delta distribution in trajectory space. $\mathcal{B}$ is akin to experience replay (Lin, 1992), in that it is filled with trajectories generated from the agent's interaction with the environment during the learning process. The crucial difference is that inclusion to $\mathcal{B}$ is governed by a priority-based protocol (explained below). Optimization w.r.t $\omega$ can now be done using adversarial training (AIRL), since the surrogate policy actions are available in $\mathcal{B}$ . 
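The buffer $\mathcal{B}$ described above can be sketched as a fixed-capacity container that keeps only the $k$ highest-priority trajectories; the priority score here is a stand-in for whatever the inclusion protocol assigns (the paper's protocol is explained below):

```python
import heapq

class TrajectoryBuffer:
    """Fixed-capacity buffer B holding the k highest-priority trajectories."""

    def __init__(self, k):
        self.k = k
        self._heap = []       # min-heap of (priority, counter, trajectory)
        self._counter = 0     # tie-breaker so trajectories are never compared

    def maybe_add(self, trajectory, priority):
        item = (priority, self._counter, trajectory)
        self._counter += 1
        if len(self._heap) < self.k:
            heapq.heappush(self._heap, item)
        elif priority > self._heap[0][0]:
            heapq.heapreplace(self._heap, item)  # evict lowest-priority entry

    def trajectories(self):
        return [t for _, _, t in self._heap]

buf = TrajectoryBuffer(k=2)
buf.maybe_add([(0, 0)], priority=0.1)
buf.maybe_add([(1, 1)], priority=0.9)
buf.maybe_add([(2, 2)], priority=0.5)   # evicts the 0.1-priority trajectory
assert sorted(p for p, _, _ in buf._heap) == [0.5, 0.9]
```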
Following Equation 3, the objective for the discriminator is: + +$$ +\max _ {\omega} \mathbb {E} _ {(s, a) \sim \mathcal {B}} \left[ \log \frac {e ^ {f _ {\omega} (s , a)}}{e ^ {f _ {\omega} (s , a)} + \pi_ {\theta} (a | s)} \right] + \mathbb {E} _ {(s, a) \sim \pi_ {\theta}} \left[ \log \frac {\pi_ {\theta} (a | s)}{e ^ {f _ {\omega} (s , a)} + \pi_ {\theta} (a | s)} \right] \tag {6} +$$ + +where $\pi_{\theta}$ is the learner (imitator) policy that is trained with $\log D_{\omega} - \log (1 - D_{\omega})$ as rewards. + +**Optimizing $\tilde{\rho}$ .** Since $\tilde{\rho}$ is characterized by the state-action tuples in the buffer $\mathcal{B}$ , updating $\tilde{\rho}$ amounts to refreshing the trajectories in $\mathcal{B}$ . For the sake of simplicity, we only consider the Wasserstein distance objective and ignore the other bias term, when updating for $\tilde{\rho}$ in Equation 5. Note that $\rho^{*}, \tilde{\rho}$ denote the state-action visitation distributions of the expert and the surrogate, respectively. Since we have state-only demonstrations from the expert (no expert actions), we minimize the Wasserstein distance between state visitations, rather than state-action visitations. Following the approach in WGANs (Arjovsky et al., 2017), we estimate $W_{1}$ using the Kantorovich-Rubinstein duality, and train a critic network $g_{\phi}$ with Lipschitz continuity constraint, + +$$ +W _ {1} \left(\rho^ {*} (s), \tilde {\rho} (s)\right) = \sup _ {\| g _ {\phi} \| _ {L} \leq 1} \mathbb {E} _ {s \sim \rho^ {*}} \left[ g _ {\phi} (s) \right] - \mathbb {E} _ {s \sim \tilde {\rho}} \left[ g _ {\phi} (s) \right] \tag {7} +$$ + +The empirical estimate of the first expectation term is done with the states in the provided expert demonstrations; for the second term, the states in $\mathcal{B}$ are used. 
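Both pieces above have compact forms that can be sketched in pure Python (a toy illustration under our own naming; the real critics are neural networks): the discriminator of Equation 6 yields the policy reward $\log D_{\omega} - \log(1 - D_{\omega}) = f_{\omega}(s,a) - \log \pi_{\theta}(a|s)$ in closed form, and Equation 7 can be approximated with a 1-D linear critic whose Lipschitz constraint is enforced by weight clipping, in the spirit of WGAN.

```python
import math
import random

# Equation 6: AIRL discriminator D = e^f / (e^f + pi(a|s)), and the
# policy reward log D - log(1 - D), which simplifies to f - log pi.
def discriminator(f_val, pi_val):
    return math.exp(f_val) / (math.exp(f_val) + pi_val)

def policy_reward(f_val, pi_val):
    d = discriminator(f_val, pi_val)
    return math.log(d) - math.log(1.0 - d)

assert abs(policy_reward(1.3, 0.25) - (1.3 - math.log(0.25))) < 1e-9

# Equation 7: estimate W1 with a linear critic g_phi(s) = phi * s,
# keeping |phi| <= 1 by clipping (WGAN-style weight clipping).
random.seed(0)
expert_states = [random.gauss(2.0, 0.5) for _ in range(500)]  # s ~ rho*
buffer_states = [random.gauss(0.0, 0.5) for _ in range(500)]  # s ~ rho~

mean_gap = (sum(expert_states) / len(expert_states)
            - sum(buffer_states) / len(buffer_states))
phi, lr = 0.0, 0.1
for _ in range(100):
    # gradient of E_expert[phi*s] - E_buffer[phi*s] w.r.t. phi
    phi = max(-1.0, min(1.0, phi + lr * mean_gap))

w1_est = phi * mean_gap
```

With the two state distributions centered two units apart, the clipped critic saturates at $\phi = 1$ and `w1_est` approaches the true distance of 2 between the underlying Gaussians.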
With the trained critic $g_{\phi}$ , we obtain a

![](images/aac0c3c48ee2e5f1538bcbccd07353c48d23543344364c039225e850d98349ba.jpg)

![](images/cc2685eb41c409412d1d7aabafd11cfdc72f813be603c49ea36bf9c35ef08613.jpg)

![](images/44a3586a9a71f69a8f997c574a992fdabf0b8f9aa8618bff63e24b716eadf0e5.jpg)

![](images/572d8b8bd1dbd87b87cd005de53b608bf1cf3e2858d02ffd0e9f9ab05c6b2fd1.jpg)
Figure 2: Environments for training an imitator policy are obtained by changing the default Gym configuration settings, one at a time.

Table 1: Average episodic returns when $\mathcal{T}^{\mathrm{exp}} = \mathcal{T}^{\mathrm{pol}}$
| Environment | GAIL-S | I2L | Expert (Traj. Return) |
| --- | --- | --- | --- |
| Walker2d | 3711 | 4107 | 6200 |
| Hopper | 2130 | 2751 | 3700 |
| Ant | 3217 | 3320 | 4800 |
| Half-Cheetah | 5974 | 5240 | 7500 |
Algorithm 1: Indirect Imitation Learning (I2L)
1 Networks: Policy $(\theta)$ , Discriminator $(\omega)$ , Wasserstein critic $(\phi)$
2 $\mathcal{B}\gets$ empty buffer
3 $\tau_{states}^{*}\coloneqq \{s_0,s_1,\dots ,s_T\}$ /* State-only expert demonstration */
4 for each iteration do
5 Run $\pi_{\theta}$ in the environment and collect a few trajectories $\tau$
6 Update Wasserstein critic $\phi$ using $\mathcal{B}$ and $\tau_{states}^{*}$ /* Equation 7 */
7 Obtain trajectory score $\frac{1}{|\tau|}\sum_{s\in \tau}g_{\phi}(s)$ for each $\tau$ using $\phi$
8 Add $\tau$ to $\mathcal{B}$ with the priority-based protocol, using the score as priority
9 Update the AIRL discriminator $\omega$ using $\tau$ and $\mathcal{B}$ /* Equation 6 */
10 Update policy $\theta$ with PPO using $\log D_{\omega}-\log(1-D_{\omega})$ as rewards
11 end

score for each trajectory generated by the agent. The score is calculated as $\frac{1}{|\tau|}\sum_{s\in \tau}g_{\phi}(s)$ , where $|\tau|$ is the length of the trajectory. Our buffer $\mathcal{B}$ is a priority-queue structure holding a fixed number of trajectories, the priority value being the score of the trajectory. This way, over the course of training, $\mathcal{B}$ is updated only with trajectories with higher scores, and by construction of the score function, these trajectories are closer to the expert's in terms of the Wasserstein metric. Further details on the update algorithm for the buffer and its alignment with the Wasserstein distance minimization objective are provided in Appendix 7.3.

Algorithm. The major steps of the training procedure are outlined in Algorithm 1. The policy parameters $(\theta)$ are updated with the clipped-ratio version of PPO (Schulman et al., 2017). State-value function baselines and GAE (Schulman et al., 2015) are used for reducing the variance of the estimated policy-gradients. The priority buffer $\mathcal{B}$ uses the heap-queue algorithm (Appendix 7.3).
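Lines 7-8 of Algorithm 1 can be sketched with Python's standard-library `heapq`. This is only our own minimal illustration (the class, variable names, and toy critic are assumptions), not the paper's implementation:

```python
import heapq

class PriorityTrajectoryBuffer:
    """Fixed-capacity buffer B keyed by trajectory score; a min-heap
    keeps the lowest-scoring trajectory at the root for eviction."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []    # entries: (score, counter, trajectory)
        self._count = 0   # tie-breaker so trajectories are never compared

    def score(self, trajectory, critic):
        # Line 7: score = (1/|tau|) * sum over states of g_phi(s)
        return sum(critic(s) for s in trajectory) / len(trajectory)

    def add(self, trajectory, critic):
        # Line 8: admit with priority = score, evicting the minimum
        entry = (self.score(trajectory, critic), self._count, trajectory)
        self._count += 1
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, entry)
        elif entry[0] > self.heap[0][0]:
            heapq.heapreplace(self.heap, entry)

# Toy critic g_phi(s) = -|s - 1|: states near s = 1 score higher.
buf = PriorityTrajectoryBuffer(capacity=2)
for traj in ([0.0, 0.2], [1.0, 0.9], [1.1, 1.0], [5.0, 4.0]):
    buf.add(traj, critic=lambda s: -abs(s - 1.0))
assert sorted(t for _, _, t in buf.heap) == [[1.0, 0.9], [1.1, 1.0]]
```

Only the two trajectories whose states are closest to $s = 1$ survive, mirroring how $\mathcal{B}$ retains the rollouts scored closest to the expert's state distribution.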
The Lipschitz constant $L$ in Equation 5 is unknown and task-dependent. If $f_{\omega}$ is fairly smooth, $L$ is a small constant that can be treated as a hyper-parameter and absorbed into the learning rate. Please see Appendix 7.2 for details on the hyper-parameters. + +# 4 RELATED WORK + +There is an extensive amount of literature on IL with state-action expert demonstrations, and also on integrating IL and RL to bootstrap learning (Billard et al., 2008; Argall et al., 2009). Our work is most closely related to state-only IL and adversarial Inverse-RL methods discussed in Section 2. Here, we mention other related prior literature. BCO (Torabi et al., 2018a) is a state-only IL approach that learns an inverse dynamics model $p(a|s,s')$ by running a random exploration policy. The inverse model is then applied to infer actions from the state-only demonstrations, which in turn are used for imitation via Behavioral Cloning, making the approach vulnerable to the well-known issue of compounding errors (Ross et al., 2011). Kimura et al. (2018) learn an internal model $p(s'|s)$ on state-only demonstrations; the imitator policy is then trained with RL using rewards derived from the model. Imitation under a domain shift has been considered in Stadie et al. (2017); Liu et al. (2018). These methods incorporate raw images as observations and are designed to handle differences in context (such as viewpoints, visual appearance, object positions, surroundings) between the expert and the imitator environments. Gupta et al. (2017) propose learning invariant feature mappings to transfer skills from an expert to an imitator with a different morphology. However, the reward function for such a transfer is contingent on the assumption of time-alignment in episodic tasks. 
In our Algorithm 1, the adversarial training between the policy and buffer trajectories (AIRL, Line 9) bears some resemblance to the adversarial self-imitation approaches in (Guo et al., 2018; Gangwani

![](images/ec47d2719c367dd9614e6a7f58e675a29a9eef60b156a93b1ba91acbe3f5c9f6.jpg)
Figure 3: Training progress for I2L and GAIL-S when the imitator and expert MDPs differ in the configuration of the gravity parameter. Gravity in $\mathcal{T}^{\mathrm{pol}}$ is $0.5\times$ the gravity in $\mathcal{T}^{\mathrm{exp}}$ .

![](images/ef9d89d0404a8913ac0eb99f6e01da40f65b9c5d7aa07fda08751332a2062a28.jpg)
Figure 4: Training progress for I2L and GAIL-S when the imitator and expert MDPs differ in the configuration of the density parameter. Density of the bot in $\mathcal{T}^{\mathrm{pol}}$ is $2\times$ the density in $\mathcal{T}^{\mathrm{exp}}$ .

et al., 2018). Those self-imitation methods are applicable to RL from sparse rewards, while our focus is IL from expert behavior, under transition dynamics mismatch.

# 5 EXPERIMENTS

In this section, we compare the performance of I2L to baseline methods for state-only IL from Section 2.3, namely GAIL with a state-dependent discriminator, denoted by GAIL-S, and GAIfO (Torabi et al., 2018b). We do the evaluation by modifying continuous-control locomotion tasks from MuJoCo to introduce various types of transition dynamics mismatch between the expert and the imitator MDPs $(\mathcal{T}^{\mathrm{exp}} \neq \mathcal{T}^{\mathrm{pol}})$ . It should be noted that the other aspects of the MDP $(S, A, R, \gamma)$ are assumed to be the same. We therefore use dynamics and MDP interchangeably in this section. While the expert demonstrations are collected under the default configurations provided in OpenAI Gym, we construct the environments for the imitator by changing some parameters independently: a.) gravity in $\mathcal{T}^{\mathrm{pol}}$ is $0.5 \times$ the gravity in $\mathcal{T}^{\mathrm{exp}}$ , b.)
density of the bot in $\mathcal{T}^{\mathrm{pol}}$ is $2 \times$ the density in $\mathcal{T}^{\mathrm{exp}}$ , and c.) the friction coefficient on all the joints of the bot in $\mathcal{T}^{\mathrm{pol}}$ is $3 \times$ the coefficient in $\mathcal{T}^{\mathrm{exp}}$ . Figure 2 provides a visual depiction. For all our experiments and tasks, we assume a single expert state-only demonstration of length 1000. We do not assume any access to the expert MDP beyond this.

Performance when $\mathcal{T}^{\mathrm{exp}} = \mathcal{T}^{\mathrm{pol}}$ . Table 1 shows the average episodic returns for a policy trained for 5M timesteps using GAIL-S and I2L in the standard IL setting. The policy learning curves are included in Appendix 7.1. All our experiments are averaged over 8 independent runs with different random seeds. Both algorithms work fairly well in this scenario, though I2L achieves higher scores in 3 out of 4 tasks. These numbers serve as a benchmark when we evaluate performance with transition dynamics mismatch. The table also contains the expert demonstration score for each task.

![](images/6fbfc2a223a39a70cd5cf44927c68f6b1a3475afb27994265c0405067aa27897.jpg)
Figure 5: Training progress for I2L and GAIL-S when the imitator and expert MDPs differ in the configuration of the friction parameter. The friction coefficient on all the joints of the bot in $\mathcal{T}^{\mathrm{pol}}$ is $3\times$ the coefficient in $\mathcal{T}^{\mathrm{exp}}$ .

Performance when $\mathcal{T}^{\mathrm{exp}}\neq \mathcal{T}^{\mathrm{pol}}$ . Figures 3, 4 and 5 plot the training progress (mean and standard deviation) with GAIL-S and I2L under mismatched transition dynamics with the low gravity, high density and high friction settings, respectively, as described above. We observe that I2L learns faster

![](images/89721c944320849feb84a6528bd57998deec5eb67145fda6a6bb324136bb441f.jpg)
Figure 6: Ablation on capacity of buffer $\mathcal{B}$ using low-gravity Half-Cheetah.
| Method | HalfCheetah | Walker2d | Hopper | Ant |
| --- | --- | --- | --- | --- |
| **No dynamics mismatch** | | | | |
| GAIfO | 5082 | 3122 | 2121 | 3452 |
| I2L | 5240 | 4107 | 2751 | 3320 |
| **Low gravity** | | | | |
| GAIfO | 1518 | 2995 | 1683 | 594 |
| I2L | 4155 | 3547 | 2566 | 1617 |
| **High density** | | | | |
| GAIfO | -234 | 378 | 440 | 3667 |
| I2L | 3975 | 1988 | 1999 | 3319 |
| **High friction** | | | | |
| GAIfO | 2883 | 3858 | 876 | 380 |
| I2L | 5554 | 3825 | 2084 | 1145 |
Table 2: Comparing the performance of I2L with GAIfO (Torabi et al., 2018b), a state-only IL baseline.

than GAIL-S in most situations. GAIL-S degrades severely in some cases. For instance, for Half-Cheetah under high density, GAIL-S drops to 923 (compared to 5974 with no dynamics change, Table 1), while I2L attains a score of 3975 (compared to 5240 with no dynamics change). Similarly, with Hopper under high friction, the GAIL-S score reduces to 810 (from 2130 with no dynamics change), and the I2L score is 2084 (2751 with no dynamics change). The plots also indicate the final average performance achieved using the original GAIL (marked as GAIL-SA) and AIRL algorithms. Both of these methods require extra supervision in the form of expert actions. Even so, they generally perform worse than I2L, which can be attributed to the fact that the expert actions generated in $\mathcal{T}^{\mathrm{exp}}$ are not very useful when the dynamics shift to $\mathcal{T}^{\mathrm{pol}}$ .

Comparison with GAIfO baseline. GAIfO (Torabi et al., 2018b) is a recent state-only IL method which we discuss in Section 2.3. Table 2 contrasts the performance of I2L with GAIfO for imitation tasks both with and without transition dynamics mismatch. We find GAIfO to be in the same ballpark as GAIL-S. It can learn good imitation policies if the dynamics are the same between the expert and the imitator, but loses performance with mismatched dynamics. Learning curves for GAIfO are included in Appendix 7.7. Furthermore, in Appendix 7.6, we compare to BCO (Torabi et al., 2018a).

Ablation on buffer capacity. Algorithm 1 uses a priority-queue buffer $\mathcal{B}$ with a fixed number of trajectories to represent the surrogate state-action visitation $\tilde{\rho}$ . All our experiments up to this point fixed the buffer capacity to 5 trajectories. To gauge the sensitivity of our approach to the capacity $|\mathcal{B}|$ , we ablate on it and report the results in Figure 6.
The experiment is done with the low-gravity Half-Cheetah environment. We observe that the performance of I2L is fairly robust to $|\mathcal{B}|$ . Surprisingly, even a capacity of 1 trajectory works well, and having a large buffer ( $|\mathcal{B}| = 50$ ) also does not hurt performance much. The GAIL-S baseline on the same task is included for comparison.

Empirical measurements of the lower-bound and Wasserstein approximations. Section 3 introduces a lower bound on the expected value of a function $f_{\omega}(s,a)$ under the expert's state-action visitation. In Appendix 7.4, we analyze the quality of the lower bound by plotting the approximation gap for the different $\tilde{\rho}$ distributions obtained during training. We observe that the gap generally reduces. Finally, in Appendix 7.5, we plot the empirical estimate of the Wasserstein distance between the state-visitations of the buffer distribution and the expert, and note that this value also typically decreases over the training iterations.

# 6 CONCLUSION

In this paper, we presented I2L, an indirect imitation-learning approach that utilizes state-only expert demonstrations collected in the expert MDP, to train an imitator policy in an MDP with a dissimilar transition dynamics function. We derive a lower bound to the Max-Ent IRL objective that transforms it into two subproblems. We then provide a practical algorithm that trains a policy to imitate the distribution characterized by a trajectory-buffer using AIRL, whilst reducing the Wasserstein distance between the state-visitations of the buffer and expert, over the course of training. Our experiments in a variety of MuJoCo-based MDPs indicate that I2L is an effective mechanism for successful skill transfer from the expert to the imitator, especially under mismatched transition dynamics.

# REFERENCES

Brenna D Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. A survey of robot learning from demonstration.
Robotics and Autonomous Systems, 57(5):469-483, 2009.
Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.
Aude Billard, Sylvain Calinon, Ruediger Dillmann, and Stefan Schaal. Robot programming by demonstration. Springer Handbook of Robotics, pp. 1371-1394, 2008.
Chelsea Finn, Paul Christiano, Pieter Abbeel, and Sergey Levine. A connection between generative adversarial networks, inverse reinforcement learning, and energy-based models. arXiv preprint arXiv:1611.03852, 2016.
Justin Fu, Katie Luo, and Sergey Levine. Learning robust rewards with adversarial inverse reinforcement learning. arXiv preprint arXiv:1710.11248, 2017.
Tanmay Gangwani, Qiang Liu, and Jian Peng. Learning self-imitating diverse policies. arXiv preprint arXiv:1805.10309, 2018.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.
Yijie Guo, Junhyuk Oh, Satinder Singh, and Honglak Lee. Generative adversarial self-imitation learning. arXiv preprint arXiv:1812.00950, 2018.
Abhishek Gupta, Coline Devin, YuXuan Liu, Pieter Abbeel, and Sergey Levine. Learning invariant feature spaces to transfer skills with reinforcement learning. arXiv preprint arXiv:1703.02949, 2017.
Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1352-1361. JMLR.org, 2017.
Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems, pp. 4565-4573, 2016.
Daiki Kimura, Subhajit Chaudhury, Ryuki Tachibana, and Sakyasingha Dasgupta. Internal model from observations for reward shaping. arXiv preprint arXiv:1806.01267, 2018.
Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research, 17(1):1334-1373, 2016.
Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
Long-Ji Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine Learning, 8(3-4):293-321, 1992.
YuXuan Liu, Abhishek Gupta, Pieter Abbeel, and Sergey Levine. Imitation from observation: Learning to imitate behaviors from raw video via context translation. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1118-1125. IEEE, 2018.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015.
Andrew Y Ng, Stuart J Russell, et al. Algorithms for inverse reinforcement learning. In ICML, pp. 663-670, 2000.
Dean A Pomerleau. Efficient training of artificial neural networks for autonomous navigation. Neural Computation, 3(1):88-97, 1991.
Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., New York, NY, USA, 1st edition, 1994. ISBN 0471619779.
Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 627-635, 2011.
Stuart J Russell. Learning agents for uncertain environments. In COLT, volume 98, pp. 101-103, 1998.
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel.
High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484, 2016.
Bradly C Stadie, Pieter Abbeel, and Ilya Sutskever. Third-person imitation learning. arXiv preprint arXiv:1703.01703, 2017.
Wen Sun, Anirudh Vemula, Byron Boots, and J Andrew Bagnell. Provably efficient imitation learning from observation alone. arXiv preprint arXiv:1905.10948, 2019.
Faraz Torabi, Garrett Warnell, and Peter Stone. Behavioral cloning from observation. arXiv preprint arXiv:1805.01954, 2018a.
Faraz Torabi, Garrett Warnell, and Peter Stone. Generative adversarial imitation from observation. arXiv preprint arXiv:1807.06158, 2018b.
Brian D Ziebart. Modeling purposeful adaptive behavior with the principle of maximum causal entropy. 2010.
Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, and Anind K Dey. Maximum entropy inverse reinforcement learning. In AAAI, volume 8, pp. 1433-1438. Chicago, IL, USA, 2008.

# 7 APPENDIX

# 7.1 PERFORMANCE WHEN $\mathcal{T}^{\mathrm{EXP}} = \mathcal{T}^{\mathrm{POL}}$

![](images/b5fc90350682cf91635792e736812b4fb1bd3254cb25719a705be9571c802045.jpg)
Figure 7: Training progress for I2L and GAIL-S when the imitator and expert MDPs are the same.

# 7.2 HYPER-PARAMETERS
| Hyper-parameter | Value |
| --- | --- |
| Wasserstein critic φ network | 3 layers, 64 hidden, tanh |
| Discriminator ω network | 3 layers, 64 hidden, tanh |
| Policy θ network | 3 layers, 64 hidden, tanh |
| Wasserstein critic φ optimizer, lr, gradient-steps | RMS-Prop, 5e-5, 20 |
| Discriminator ω optimizer, lr, gradient-steps | Adam, 3e-4, 5 |
| Policy θ algorithm, lr | PPO (clipped ratio), 1e-4 |
| Number of state-only expert demonstrations | 1 (1000 states) |
| Buffer $\mathcal{B}$ capacity | 5 trajectories |
| γ, λ (GAE) | 0.99, 0.95 |
# 7.3 FURTHER DETAILS ON BUFFER $\mathcal{B}$ AND THE UPDATE MECHANISM

The buffer $\mathcal{B}$ is a priority-queue structure, with a fixed capacity of $K$ trajectories. Each trajectory is a set of tuples $\tau \coloneqq \{s_i,a_i\}_{i = 0}^T$ . Denote the trajectories by $\{\tau_1,\dots ,\tau_K\}$ , and let $\{s\in \tau_i\}$ be the collection of states in trajectory $\tau_{i}$ . Buffer $\mathcal{B}$ characterizes the surrogate policy $\tilde{\pi}$ defined in Section 3. The state-visitation distribution of $\tilde{\pi}$ can then be written as:

$$
\tilde{\rho}(s) = \frac{1}{KT} \sum_{i} \sum_{s \in \tau_{i}} \delta(s) \tag{8}
$$

where $\delta$ denotes the delta measure. Following Equation 7, our objective for optimizing $\tilde{\rho}$ is:

$$
\min_{\tilde{\rho}} W_{1}\left(\rho^{*}(s), \tilde{\rho}(s)\right) = \min_{\tilde{\rho}} \sup_{\| g_{\phi} \|_{L} \leq 1} \mathbb{E}_{s \sim \rho^{*}}\left[ g_{\phi}(s) \right] - \mathbb{E}_{s \sim \tilde{\rho}}\left[ g_{\phi}(s) \right]
$$

The min-max objective is optimized using an iterative algorithm. The Wasserstein critic $g_{\phi}$ update is done with standard gradient descent using state samples from the expert demonstrations and the buffer $\mathcal{B}$ . The update for $\tilde{\rho}$ is more challenging since $\tilde{\rho}$ is only available as an empirical measure (Equation 8). For the current iterate $\phi$ , the objective for $\tilde{\rho}$ then becomes:

$$
\max_{\tilde{\rho}} \mathbb{E}_{s \sim \tilde{\rho}} [ g_{\phi}(s) ] = \max_{\tilde{\rho}} \frac{1}{KT} \sum_{i} \sum_{s \in \tau_{i}} g_{\phi}(s) \tag{9}
$$

Section 3 defines the quantity $\frac{1}{T}\sum_{s\in \tau_i}g_\phi (s)$ as the score of the trajectory $\tau_{i}$ . Therefore, the objective in Equation 9 is to update the buffer $\mathcal{B}$ such that the average score of the $K$ trajectories in it increases.

Priority-queue with priority $=$ score.
Buffer $\mathcal{B}$ is implemented as a priority-queue (PQ) based on a Min-Heap. Let the current PQ be $\{\tau_1, \dots, \tau_K\}$ , sorted based on score such that $score(\tau_i) \leq score(\tau_j)$ , $\forall i < j$ . Let $\{\Gamma_1, \Gamma_2, \dots\}$ be the new trajectories rolled out in the environment using the current learner policy $\pi_{\theta}$ (Line 5 in Algorithm 1). For each of these, PQ is updated using the standard protocol:

1 Update scores of $\{\tau_1,\dots ,\tau_K\}$ in PQ using the latest critic $\phi$
2 for each $\Gamma_{i}$ do
3 Calculate score( $\Gamma_{i}$ ) using the latest critic $\phi$
4 if score( $\Gamma_{i}$ ) > score( $\tau_{1}$ ) then
5 $\tau_{1}\gets \Gamma_{i}$ // replace the lowest-scoring buffer trajectory with the new trajectory
6 heapify // PQ-library call to maintain the heap-invariant: score( $\tau_{i}$ ) $\leq$ score( $\tau_{j}$ ), $\forall i < j$
7 end
8 end

It follows from the PQ-protocol that the average score of the $K$ trajectories in the buffer $\mathcal{B}$ increases (or remains the same) after the update, compared to the average score before. This aligns the update with the objective in Equation 9.

# 7.4 EMPIRICAL CONVERGENCE OF LOWER BOUND

In our main section, we derive the following lower bound which connects the expected value of a function $f_{\omega}$ under the expert's state-action visitation to the expected value under another surrogate distribution, and the 1-Wasserstein distance between the distributions:

$$
\mathbb{E}_{(s, a) \sim \rho^{*}} [ f_{\omega}(s, a) ] \geq \mathbb{E}_{(s, a) \sim \tilde{\rho}} [ f_{\omega}(s, a) ] - L W_{1}(\rho^{*}, \tilde{\rho})
$$

In this section, we provide empirical measurements on the gap between the original objective (LHS) and the lower bound (RHS). This gap depends on the specific choice of the surrogate distribution $\tilde{\rho}(s,a)$ .
In our algorithm, $\tilde{\rho}$ is characterized by trajectories in the priority-queue buffer $\mathcal{B}$ , and is updated during the course of training based on the protocol detailed in Appendix 7.3. Figure 8 plots the estimated value of the lower bound for these different $\tilde{\rho}$ , and shows that the gap generally reduces over time. To get estimates of LHS and RHS, we need the following: + +- $\tilde{\rho}(s, a)$ : We take snapshots of the buffer $\mathcal{B}$ at periodic intervals of the training to obtain the different $\tilde{\rho}$ distributions. +- $\rho^{*}(s,a)$ : This is the expert's state-action distribution. We train separate oracle experts in the imitator's (learner's) environment, and use state-action tuples from this expert policy. Note that these oracle experts are NOT used in I2L (Algorithm 1), and are only for the purpose of measurement. +- $W_{1}(\rho^{*}(s,a),\tilde{\rho} (s,a))$ : A separate Wasserstein critic is trained using $(s,a)$ tuples from the oracle experts described above and the trajectories in buffer $\mathcal{B}$ . This critic is NOT used in I2L since we don't have access to oracle experts, and is only for the purpose of measurement. +- $f_{\omega}$ : We select the AIRL discriminator parameters $\omega$ from a particular training iteration. The same parameters are then used to calculate the LHS, and the RHS for different $\tilde{\rho}$ distributions. +- $L$ : The Lipschitz constant is unknown and hard to estimate for a complex, non-linear $f_{\omega}$ . We plot the lower bound for a few values: $\{0.5, 1.0, 1.5\}$ . + +Figure 8 shows the gap between the original objective and the lower bound for all our experimental settings, $\mathcal{T}^{\mathrm{exp}} = \mathcal{T}^{\mathrm{pol}}$ (top row), and $\mathcal{T}^{\mathrm{exp}} \neq \mathcal{T}^{\mathrm{pol}}$ (next 3 rows). We observe that the gap generally reduces as $\tilde{\rho}$ is updated over the iterations of I2L (Algorithm 1). 
A better lower bound in turn leads to improved gradients for updating the AIRL discriminator $f_{\omega}$ , ultimately resulting in more effective policy gradients for the imitator. + +![](images/9149e09caac149f59b631147de1ec7986926ef52132b96f0044c1698b693e30a.jpg) +Figure 8: Gap between the original objective and the lower bound for all our experimental settings, $\mathcal{T}^{\mathrm{exp}} = \mathcal{T}^{\mathrm{pol}}$ (top row), and $\mathcal{T}^{\mathrm{exp}} \neq \mathcal{T}^{\mathrm{pol}}$ (next 3 rows). + +# 7.5 EMPIRICAL WASSERSTEIN DISTANCES + +In each iteration of I2L, we update the Wasserstein critic $\phi$ using the states from the state-only expert demonstration, and states in the buffer $\mathcal{B}$ (Line 6, Algorithm 1). The objective is to obtain the 1-Wasserstein distance: + +$$ +W _ {1} \left(\rho^ {*} (s), \tilde {\rho} (s)\right) = \sup _ {\| g _ {\phi} \| _ {L} \leq 1} \mathbb {E} _ {s \sim \rho^ {*}} \left[ g _ {\phi} (s) \right] - \mathbb {E} _ {s \sim \mathcal {B}} \left[ g _ {\phi} (s) \right] +$$ + +In Figure 9, we plot the empirical estimate $\hat{W}_1$ of this distance over the course of training. To get the estimate at any time, the current critic parameters $\phi$ and buffer trajectories are used to calculate $\hat{E}_{s\sim \rho^*}[g_\phi (s)] - \hat{E}_{s\sim \mathcal{B}}[g_\phi (s)]$ . We show the values for all our experimental settings, $\mathcal{T}^{\mathrm{exp}} = \mathcal{T}^{\mathrm{pol}}$ + +(top row), and $\mathcal{T}^{\mathrm{exp}}\neq \mathcal{T}^{\mathrm{pol}}$ (next 3 rows). It can be seen that $\hat{W}_1(\rho^* (s),\tilde{\rho} (s))$ generally decreases over time in all situations. This is because our objective for optimizing $\tilde{\rho}$ (or updating the buffer $\mathcal{B}$ ) is to minimize this Wasserstein estimate. Please see Appendix 7.3 for more details. 
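The per-iteration estimate described above is just a difference of two sample means under the current critic. A minimal sketch (function and variable names are ours; in I2L the critic $g_{\phi}$ is a neural network and the states come from the demonstration and $\mathcal{B}$):

```python
def w1_hat(critic, expert_states, buffer_states):
    # Empirical estimate of the W1 objective at the current critic:
    # mean of g_phi over expert states minus mean over buffer states.
    e_mean = sum(critic(s) for s in expert_states) / len(expert_states)
    b_mean = sum(critic(s) for s in buffer_states) / len(buffer_states)
    return e_mean - b_mean

# A 1-Lipschitz toy critic g_phi(s) = s recovers the shift in means.
assert abs(w1_hat(lambda s: s, [1.0, 2.0, 3.0], [0.0, 1.0, 2.0]) - 1.0) < 1e-9
```

As the buffer fills with higher-scoring trajectories, `b_mean` rises and the estimate falls, which is the trend plotted in Figure 9.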
The fact that the buffer state-distribution gets closer to the expert's, together with the availability of actions in the buffer which induce those states in the imitator MDP $(\mathcal{T}^{\mathrm{pol}})$ , enables us to successfully use AIRL for imitation under mismatched transition dynamics. + +![](images/e584fcea70d5f8d27ff15453a81ba8cc1cc8f30e779a1231b71ae05e9454f045.jpg) + +![](images/1d5809377ba8fb6bb1d7f5c9f5109c1dceb9c08618e2ba0bef40107a96cc3e1b.jpg) +W1[p*(s),p(s)] + +![](images/1b93e6657c875888be382c4a8028d8e6ddf8c743f8f0addb7d5ccd72ea49562a.jpg) + +![](images/11ceea92e2e3198759f2a3c02981b74f602f6677aa947ec857588fc4a37b4e35.jpg) + +![](images/01f3dbdcfe293a8876238f3385bd19de5a26b19a3cd352d71cb1d81613cb6426.jpg) + +![](images/8959ea0e86054263af5a021ff8f4f3c5886066ebda3398548bea896999a1ed60.jpg) +W1[p(s),p(s)] + +![](images/c76e5cd22581b36e30b60614cfa8f58ca126b9eb1c8955bba55019f3b0c39213.jpg) + +![](images/c623b8e5e25cf35b4f73bffb455172d23cfc814f18b2dd541dbff9f5022a56d1.jpg) + +![](images/c528f852b96182d4452c90ffdfbe893e2a21acec9796cf1fe23ebfeeb89a15f6.jpg) + +![](images/2f8e6d1de022a4a7808480cc84918024d4e15c4a3900696093ee4f8928d8ec3e.jpg) +W1[p'(s),p(s)] + +![](images/3640d2e39cf1b45510555c5985e76a4b349b18f936610114010db99d90e6a512.jpg) +W1[p(s),p(s)] + +![](images/886eb32667a6300cb67023a6c8c76afae20516de7acc0970033989d0c919368d.jpg) + +![](images/03747c3b7104103f2013759fd548666340bb0b7dae3021abff3e9eae6aed81a4.jpg) +Figure 9: Estimate of the Wasserstein distance between $\rho^{*}(s)$ and $\tilde{\rho} (s)$ , for all our experimental settings, $\mathcal{T}^{\mathrm{exp}} = \mathcal{T}^{\mathrm{pol}}$ (top row), and $\mathcal{T}^{\mathrm{exp}}\neq \mathcal{T}^{\mathrm{pol}}$ (next 3 rows). 
![](images/6eb348ba8df569ffc3c53529127fa7bf6857b53c29356da27a11c2ede2926a18.jpg)

![](images/a2a6b018a71bcf6c81e43fa9cb9395ea52ab9bce7c7833954d2f74177b73b716.jpg)

![](images/a40f5fc78d4ca0a6708eb4bb0c547dd40c50d0b0e28ec987903e9e06ed4640ca.jpg)

# 7.6 COMPARISON WITH BCO

Figure 10 compares I2L with BCO (Torabi et al., 2018a) when the expert and imitator dynamics are the same (top row) and under mismatched transition dynamics with the low gravity, high density and high friction settings (next 3 rows). BCO proceeds by first learning an inverse dynamics model in the imitator's environment, $p(a|s,s')$ , to predict actions from state-transitions. This model is learned via supervised learning on trajectories generated by an exploratory policy. The inverse model is then used to infer actions from the state-transitions in the state-only expert demonstrations. The imitator policy is trained with Behavioral Cloning (BC) using these inferred actions. We implement the BCO( $\alpha$ ) version from the paper since it is shown to be better than vanilla BCO. We observe that, barring two situations (Ant with no dynamics mismatch, and Ant with $2 \times$ density), BCO( $\alpha$ ) is unsuccessful in learning high-return policies. This is potentially due to the difficulties in learning a robust inverse dynamics model, and the compounding error problem inherent to BC. Similar performance for BCO( $\alpha$ ) is also reported by Torabi et al. (2018b).

![](images/96a7b1f3f5305a702b5a13af7aa359c530ed0a6059079bf97263216d44c9df1a.jpg)

![](images/228436ab437cd020e9889c11c849e8dbbc221ffb43587e9de611760d11313d36.jpg)

![](images/43f25aa5ca91edc1d624aa60c95c7846634eb2e6f0d525af2497fb4ec7228375.jpg)

![](images/8ec5b0df325877bd8b38083aecf1a2ee8acfd5309af010767227ca15bbb1932c.jpg)
Figure 10: Comparison between I2L and BCO.
+ +# 7.7 COMPARISON WITH GAIFO + +![](images/bbd445130645ae10c82fbbb4acbcc812009b9ad3ac2af2da94abf70daf7c7283.jpg) +Figure 11: Comparison between I2L and two baselines derived from GAIL. The final performance of an agent trained with PPO using real rewards in $\mathcal{T}^{\mathrm{pol}}$ is also shown. + +# 7.8 COMPARISON WITH GAIL-SA AND AIRL + +![](images/6574d13fc0bbf4d06962f222bd2dea3cd7dfa5539dc64a9e7ad99dba15cb2270.jpg) +Figure 12: Comparison between I2L and baselines using expert actions: GAIL-SA and AIRL \ No newline at end of file diff --git a/stateonlyimitationwithtransitiondynamicsmismatch/images.zip b/stateonlyimitationwithtransitiondynamicsmismatch/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..7bb73e53cbac212460bd4569801ec09112bdd5f1 --- /dev/null +++ b/stateonlyimitationwithtransitiondynamicsmismatch/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:25b6635ac33dda6c1e11a8840d7c781dab93fe91f075cf3272d9183c62e31cb0 +size 1365317 diff --git a/stateonlyimitationwithtransitiondynamicsmismatch/layout.json b/stateonlyimitationwithtransitiondynamicsmismatch/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..e32e0b29e5aafa5e0c03a5f3c9446afba0d98025 --- /dev/null +++ b/stateonlyimitationwithtransitiondynamicsmismatch/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:44b8a3bed75b3ef5c08fa5d213b77b77235fa86e1a5622196f3fc884582f68da +size 649383 diff --git a/stochasticaucmaximizationwithdeepneuralnetworks/35b10241-af37-42c5-bd6b-78dd76a7fe48_content_list.json b/stochasticaucmaximizationwithdeepneuralnetworks/35b10241-af37-42c5-bd6b-78dd76a7fe48_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..ee51a23b77ce6c5cc97bdcc9a7b9101109917c9b --- /dev/null +++ b/stochasticaucmaximizationwithdeepneuralnetworks/35b10241-af37-42c5-bd6b-78dd76a7fe48_content_list.json @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:6ec886b8eb305c03caee86135efdd00c847e6aff51c9826fe5b48a3e4019545b +size 202340 diff --git a/stochasticaucmaximizationwithdeepneuralnetworks/35b10241-af37-42c5-bd6b-78dd76a7fe48_model.json b/stochasticaucmaximizationwithdeepneuralnetworks/35b10241-af37-42c5-bd6b-78dd76a7fe48_model.json new file mode 100644 index 0000000000000000000000000000000000000000..e7a2dbb7f3f35d2942e22512e2bb65eb214eaeb2 --- /dev/null +++ b/stochasticaucmaximizationwithdeepneuralnetworks/35b10241-af37-42c5-bd6b-78dd76a7fe48_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0a1358cdb70da61fb7283e7d6e22338d1b800275c2395ad3013c305d8d27fe1a +size 226934 diff --git a/stochasticaucmaximizationwithdeepneuralnetworks/35b10241-af37-42c5-bd6b-78dd76a7fe48_origin.pdf b/stochasticaucmaximizationwithdeepneuralnetworks/35b10241-af37-42c5-bd6b-78dd76a7fe48_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..be78913ff212a095952c0c8a59fcf1b966e7356c --- /dev/null +++ b/stochasticaucmaximizationwithdeepneuralnetworks/35b10241-af37-42c5-bd6b-78dd76a7fe48_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1d29b61bbef062de90317ff695d014de3df5177444823ae2b4203101e0c93a1c +size 1493145 diff --git a/stochasticaucmaximizationwithdeepneuralnetworks/full.md b/stochasticaucmaximizationwithdeepneuralnetworks/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a9758d4a7ee533cff2397a883ce42309ac2be9a1 --- /dev/null +++ b/stochasticaucmaximizationwithdeepneuralnetworks/full.md @@ -0,0 +1,909 @@ +# STOCHASTIC AUC MAXIMIZATION WITH DEEP NEURAL NETWORKS + +# Mingrui Liu + +Department of Computer Science + +The University of Iowa + +Iowa City, IA, 52242, USA + +mingrui-liu@uiowa.edu + +# Zhuoning Yuan + +Department of Computer Science The University of Iowa + +Iowa City, IA, 52242, USA + +zhuoning-yuan@uiowa.edu + +# Yiming Ying + +Department of Mathematics and 
Statistics SUNY at Albany

Albany, NY, 12222, USA

yying@albany.edu

# Tianbao Yang

Department of Computer Science The University of Iowa

Iowa City, IA, 52242, USA

tianbao-yang@uiowa.edu

# ABSTRACT

Stochastic AUC maximization has garnered increasing interest due to its better fit for imbalanced data classification. However, existing works are limited to stochastic AUC maximization with a linear predictive model, which restricts the predictive power when dealing with extremely complex data. In this paper, we consider the stochastic AUC maximization problem with a deep neural network as the predictive model. Building on the saddle-point reformulation of a surrogate loss of AUC, the problem can be cast into a non-convex concave min-max problem. The main contribution of this paper is to make stochastic AUC maximization more practical for deep neural networks and big data, with theoretical insights as well. In particular, we propose to exploit the Polyak-Lojasiewicz (PL) condition, which has been proved and observed in deep learning, and which enables us to develop new stochastic algorithms with an even faster convergence rate and a more practical step size scheme. An AdaGrad-style algorithm is also analyzed under the PL condition with an adaptive convergence rate. Our experimental results demonstrate the effectiveness of the proposed algorithms.

# 1 INTRODUCTION

Deep learning has achieved tremendous success on various tasks, including computer vision (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2016; Ren et al., 2015), speech recognition (Hinton et al., 2012; Mohamed et al., 2012; Graves, 2013), natural language processing (Bahdanau et al., 2014; Sutskever et al., 2014; Devlin et al., 2018), etc.
From an optimization perspective, all of them solve an empirical risk minimization problem in which the objective function is a surrogate loss of the prediction error made by a deep neural network in comparison with the ground-truth label. For example, for the image classification task, the objective function is often chosen as the cross entropy between the probability distribution calculated by forward propagation of a convolutional neural network and the vector encoding the true label information (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2016), where the cross entropy is a surrogate loss of the misclassification rate. However, when the data is imbalanced, this formulation is problematic: the data from the minority class have little effect, and the model is almost entirely determined by the data from the majority class.

To address this issue, AUC maximization has been proposed as a new learning paradigm (Zhao et al., 2011). Statistically, AUC (short for Area Under the ROC curve) is defined as the probability that the prediction score of a positive example is higher than that of a negative example (Hanley & McNeil, 1982; 1983). Compared with the misclassification rate and its corresponding surrogate loss, AUC is more suitable for the imbalanced data setting (Elkan, 2001). Several online or stochastic algorithms for AUC maximization have been developed based on a convex surrogate loss (Zhao et al., 2011; Gao et al., 2013; Ying et al., 2016; Liu et al., 2018; Natole et al., 2018). However, all of these works only consider learning a linear predictive model. This naturally motivates the following question:

# How to design stochastic algorithms with provable guarantees to solve the AUC maximization problem with a deep neural network as the predictive model?

In this paper, we make some efforts to answer this question. We design two algorithms with state-of-the-art complexities for this problem.
Based on a surrogate loss of AUC and inspired by the min-max reformulation in (Ying et al., 2016), we cast the problem into a non-convex concave min-max stochastic optimization problem, which is non-convex in the primal variable and concave in the dual variable. This allows us to leverage the inexact proximal point algorithmic framework proposed in (Rafique et al., 2018) to solve stochastic AUC maximization with a deep neural network. However, their algorithms are ill-suited to stochastic AUC maximization with a deep neural network for three reasons. First, their algorithms are general and do not utilize the underlying favorable property of the objective function induced by an overparameterized deep neural network, which prevents them from designing better algorithms with faster convergence. Second, these algorithms use a polynomially decaying step size scheme instead of the geometrically decaying step size scheme widely used in deep neural network training. Third, the algorithm in (Rafique et al., 2018) with the best attainable complexity only applies to the finite-sum setting; it needs to go through all the data at the end of each stage and is not applicable to the pure stochastic setting.

To address these limitations, we propose to leverage the Polyak-Lojasiewicz (PL) condition of the objective function for AUC maximization with a deep neural network. The PL condition (or an equivalent condition) has been proved for a class of linear and non-linear neural networks (Hardt & Ma, 2016; Charles & Papailiopoulos, 2017; Zhou & Liang, 2017). It is the key to recent developments proving that (stochastic) gradient descent can find a global minimum of an overparameterized deep neural network (Allen-Zhu et al., 2018; Du et al., 2018b). It has also been observed in practice when learning deep neural networks (Li & Yuan, 2017; Kleinberg et al., 2018).
From an optimization perspective, the PL condition has been considered extensively for designing faster optimization algorithms in the literature (Karimi et al., 2016; Reddi et al., 2016; Lei et al., 2017). However, a big gap remains between existing algorithms, which focus on solving a minimization problem, and the considered min-max problem of AUC maximization. It is a non-trivial task to leverage the PL condition of a non-convex minimization objective for developing faster primal-dual stochastic algorithms that solve its equivalent non-convex concave min-max problem. The main theoretical contribution of this paper is to address this issue. Our contributions are:

- We propose a stochastic algorithm named Proximal Primal-Dual Stochastic Gradient (PPD-SG) for solving a min-max formulation of AUC maximization under the PL condition of the surrogate AUC objective with a deep neural network. We establish a convergence rate in the order of $O(1 / \epsilon)$ , which is faster than that achieved by simply applying the result in (Rafique et al., 2018) to the considered problem under the PL condition, i.e., $O(1 / \epsilon^3)$ and $O(n / \epsilon)$ with $n$ being the size of the training set.
- In addition, we propose an AdaGrad-style primal-dual algorithm named Proximal Primal-Dual AdaGrad (PPD-AdaGrad), and show that it enjoys better adaptive complexity when the growth of the cumulative stochastic gradient is slow. This is the first time an adaptive convergence rate of a stochastic AdaGrad-style algorithm is established for solving non-convex concave min-max problems.
- We evaluate the proposed algorithms on several large-scale benchmark datasets. The experimental results show that our algorithms achieve superior performance compared with other baselines.
To the best of our knowledge, this is the first work incorporating the PL condition into stochastic AUC maximization with a deep neural network as the predictive model, and more generally into solving a non-convex concave min-max problem. Our results achieve the state-of-the-art iteration complexity for non-convex concave min-max problems.

# 2 RELATED WORK

Stochastic AUC Maximization. Stochastic AUC maximization in the classical online setting is challenging due to its pairwise nature. Several studies try to update the model each time a new training example is sampled/received. Instead of storing all examples in memory, Zhao et al. (2011) employ a reservoir sampling technique to maintain representative samples in a buffer, based on which their algorithms update the model. To obtain an optimal regret bound, their buffer size needs to be $O(\sqrt{n})$ , where $n$ is the number of received training examples. Gao et al. (2013) design a new algorithm that is not buffer-based. Instead, their algorithm needs to maintain the first-order and second-order statistics of the received data to compute the stochastic gradient, which is prohibitive for high-dimensional data. Based on a novel saddle-point reformulation of a surrogate loss of AUC proposed by (Ying et al., 2016), several studies (Ying et al., 2016; Liu et al., 2018; Natole et al., 2018) design stochastic primal-dual algorithms. Ying et al. (2016) employ the classical primal-dual stochastic gradient (Nemirovski et al., 2009) and obtain an $\widetilde{O}(1/\sqrt{t})$ convergence rate. Natole et al. (2018) add a strongly convex regularizer, invoke composite mirror descent (Duchi et al., 2010) and achieve an $\widetilde{O}(1/t)$ convergence rate. Liu et al. (2018) leverage the structure of the formulation, design a multi-stage algorithm and achieve an $\widetilde{O}(1/t)$ convergence rate without strong convexity assumptions.
However, all of these works only consider learning a linear model, which results in a convex objective function.

Non-Convex Min-max Optimization. Stochastic optimization of non-convex min-max problems has received increasing interest recently (Rafique et al., 2018; Lin et al., 2018; Sanjabi et al., 2018; Lu et al., 2019; Jin et al., 2019). When the objective function is weakly convex in the primal variable and concave in the dual variable, Rafique et al. (2018) design a proximally guided algorithm in the spirit of the inexact proximal point method (Rockafellar, 1976), which solves a sequence of convex-concave subproblems constructed by adding a quadratic proximal term in the primal variable with a periodically updated reference point. Due to the potential non-smoothness of the objective function, they show convergence to a nearly-stationary point of the equivalent minimization problem. In the same vein as (Rafique et al., 2018), Lu et al. (2019) design an algorithm adopting a block alternating minimization/maximization strategy and show convergence in terms of the proximal gradient. When the objective is weakly convex and weakly concave, Lin et al. (2018) propose a proximal algorithm which solves a strongly monotone variational inequality in each epoch and establish its convergence to a stationary point. Sanjabi et al. (2018) consider non-convex non-concave min-max games where the inner maximization problem satisfies a PL condition, based on which they design a multi-step deterministic gradient descent ascent method with convergence to a stationary point. Our work differs in that (i) we explore the PL condition for the outer minimization problem instead of the inner maximization problem; and (ii) we focus on designing stochastic algorithms instead of deterministic algorithms.

Leveraging PL Condition for Minimization.
The PL condition was first introduced by Polyak (1963), who showed that gradient descent enjoys linear convergence to a global minimum under this condition. Karimi et al. (2016) show that stochastic gradient descent, randomized coordinate descent and greedy coordinate descent converge to a global minimum at faster rates under the PL condition. If the objective function has a finite-sum structure and satisfies the PL condition, several non-convex SVRG-style algorithms (Reddi et al., 2016; Lei et al., 2017; Nguyen et al., 2017; Zhou et al., 2018; Li & Li, 2018; Wang et al., 2018) are guaranteed to converge to a global minimum with a linear convergence rate. However, the stochastic algorithms in these works are developed for a minimization problem, and hence are not applicable to the min-max formulation of stochastic AUC maximization. To the best of our knowledge, Liu et al. (2018) is the only work that leverages a condition equivalent to the PL condition (namely, the quadratic growth condition) to develop a stochastic primal-dual algorithm for AUC maximization with a fast rate. However, as mentioned before, their algorithm and analysis rely on the convexity of the objective function, which does not hold for AUC maximization with a deep neural network.

Finally, we note that the PL condition is the key to many recent works in deep learning showing that there are no spurious local minima or establishing global convergence of gradient descent and stochastic gradient descent methods (Hardt & Ma, 2016; Li & Yuan, 2017; Arora et al., 2018; Allen-Zhu et al., 2018; Du et al., 2018b;a; Li & Liang, 2018; Zou et al., 2018; Zou & Gu, 2019). Using the square loss, it has also been proved that the PL condition holds globally or locally for deep linear residual networks (Hardt & Ma, 2016), deep linear networks, and one-hidden-layer neural networks with Leaky ReLU activation (Charles & Papailiopoulos, 2017; Zhou & Liang, 2017).
Several studies (Li & Yuan, 2017; Arora et al., 2018; Allen-Zhu et al., 2018; Du et al., 2018b; Li & Liang, 2018) consider the trajectory of (stochastic) gradient descent for learning neural networks, and their analyses imply the PL condition in a certain form. For example, Du et al. (2018b) show that when the width of a two-layer neural network is sufficiently large, a global optimum lies in a ball centered at the initial solution, in which the PL condition holds. Allen-Zhu et al. (2018) extend this insight to overparameterized deep neural networks with ReLU activation, and show that the PL condition holds for a global minimum around a random initial solution.

# 3 PRELIMINARIES AND NOTATIONS

Let $\| \cdot \|$ denote the Euclidean norm. A function $f(\mathbf{x})$ is $\rho$ -weakly convex if $f(\mathbf{x}) + \frac{\rho}{2}\| \mathbf{x} \|^2$ is convex, where $\rho$ is the so-called weak-convexity parameter. A function $f(\mathbf{x})$ satisfies the PL condition with parameter $\mu > 0$ if $f(\mathbf{x}) - f(\mathbf{x}_*) \leq \frac{1}{2\mu}\|\nabla f(\mathbf{x})\|^2$ , where $\mathbf{x}_*$ stands for the optimal solution of $f$ . Let $\mathbf{z} = (\mathbf{x}, y) \sim \mathbb{P}$ denote a random data point following an unknown distribution $\mathbb{P}$ , where $\mathbf{x} \in \mathcal{X}$ represents the feature vector and $y \in \mathcal{Y} = \{-1, +1\}$ represents the label. Denote by $\mathcal{Z} = \mathcal{X} \times \mathcal{Y}$ and by $p = \operatorname*{Pr}(y = 1) = \mathbb{E}_y[\mathbb{I}_{[y = 1]}]$ , where $\mathbb{I}_{[\cdot]}$ is the indicator function.

The area under the curve (AUC) on a population level for a scoring function $h: \mathcal{X} \to \mathbb{R}$ is defined as

$$
\operatorname {A U C} (h) = \Pr \left(h (\mathbf {x}) \geq h \left(\mathbf {x} ^ {\prime}\right) \mid y = 1, y ^ {\prime} = - 1\right),
$$

where $\mathbf{z} = (\mathbf{x},y)$ and $\mathbf{z}' = (\mathbf{x}',y')$ are drawn independently from $\mathbb{P}$ .
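To make the definition concrete, the population AUC can be estimated on a finite sample by the fraction of correctly ranked (positive, negative) pairs. Below is a minimal numpy sketch (illustrative only), together with the empirical version of the pairwise squared surrogate commonly used in this literature:

```python
import numpy as np

def empirical_auc(scores, labels):
    """Fraction of (positive, negative) pairs ranked correctly: the
    empirical analogue of AUC(h) = Pr(h(x) >= h(x') | y = 1, y' = -1)."""
    pos = scores[labels == 1]
    neg = scores[labels == -1]
    return np.mean(pos[:, None] >= neg[None, :])

def pairwise_squared_loss(scores, labels):
    """Empirical squared surrogate E[(1 - h(x) + h(x'))^2 | y = 1, y' = -1],
    averaged over all positive-negative pairs."""
    pos = scores[labels == 1]
    neg = scores[labels == -1]
    return np.mean((1.0 - pos[:, None] + neg[None, :]) ** 2)
```

A scorer that separates positives from negatives by a margin of at least 1 drives the surrogate toward 0 and the empirical AUC to 1.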
By employing the squared loss as a surrogate for the indicator function, which is a common choice in previous studies (Ying et al., 2016; Gao et al., 2013), the AUC maximization problem can be formulated as

$$
\min _ {h \in \mathcal {H}} \mathbb {E} _ {\mathbf {z}, \mathbf {z} ^ {\prime}} \left[ (1 - h (\mathbf {x}) + h (\mathbf {x} ^ {\prime})) ^ {2} \middle | y = 1, y ^ {\prime} = - 1 \right],
$$

where $\mathcal{H}$ denotes a hypothesis class. All previous works on AUC maximization assume $h(\mathbf{x}) = \mathbf{w}^{\top}\mathbf{x}$ for simplicity. Instead, we consider learning a general nonlinear model parameterized by $\mathbf{w}$ , i.e., $h(\mathbf{w};\mathbf{x})$ , which is not necessarily linear or convex in terms of $\mathbf{w}$ (e.g., $h(\mathbf{w};\mathbf{x})$ can be a score function defined by a neural network with weights denoted by $\mathbf{w}$ ). Hence, the corresponding optimization problem becomes

$$
\min _ {\mathbf {w} \in \mathbb {R} ^ {d}} P (\mathbf {w}) := \mathbb {E} _ {\mathbf {z}, \mathbf {z} ^ {\prime}} \left[ (1 - h (\mathbf {w}; \mathbf {x}) + h (\mathbf {w}; \mathbf {x} ^ {\prime})) ^ {2} | y = 1, y ^ {\prime} = - 1 \right] \tag {1}
$$

The following proposition converts the original optimization problem (1) into a saddle-point problem, similar to Theorem 1 in (Ying et al., 2016). For completeness, the proof is included in the supplement.

Proposition 1.
The optimization problem (1) is equivalent to

$$
\min _ {\mathbf {w} \in \mathbb {R} ^ {d}, (a, b) \in \mathbb {R} ^ {2}} \max _ {\alpha \in \mathbb {R}} f (\mathbf {w}, a, b, \alpha) := \mathbb {E} _ {\mathbf {z}} [ F (\mathbf {w}, a, b, \alpha ; \mathbf {z}) ], \tag {2}
$$

where $\mathbf{z} = (\mathbf{x},y)\sim \mathbb{P}$ , and

$$
\begin{array}{l} F (\mathbf {w}, a, b, \alpha ; \mathbf {z}) = (1 - p) \left(h (\mathbf {w}; \mathbf {x}) - a\right) ^ {2} \mathbb {I} _ {[ y = 1 ]} + p (h (\mathbf {w}; \mathbf {x}) - b) ^ {2} \mathbb {I} _ {[ y = - 1 ]} \\ + 2 \left(1 + \alpha\right) \left(p h (\mathbf {w}; \mathbf {x}) \mathbb {I} _ {[ y = - 1 ]} - (1 - p) h (\mathbf {w}; \mathbf {x}) \mathbb {I} _ {[ y = 1 ]}\right) - p (1 - p) \alpha^ {2} \\ \end{array}
$$

Remark: The min-max formulation (2) is more favorable than the original formulation (1) for developing a stochastic algorithm that updates the model parameters based on one example or a mini-batch of samples. For stochastic optimization of (1), one would have to carefully sample both positive and negative examples, which is not feasible in an online setting. Note that in the classical batch-learning setting, $p$ becomes the ratio of positive training examples and the expectation in (2) becomes an average over $n$ individual functions. Our algorithms are applicable to both the batch-learning and online learning settings.

Define $\mathbf{v} = (\mathbf{w}^{\top},a,b)^{\top}$ , $\phi (\mathbf{v}) = \max_{\alpha}f(\mathbf{v},\alpha)$ . It is clear that $\min_{\mathbf{w}}P(\mathbf{w}) = \min_{\mathbf{v}}\phi (\mathbf{v})$ and $P(\mathbf{w})\leq \phi (\mathbf{v})$ for any $\mathbf{v} = (\mathbf{w}^{\top},a,b)^{\top}$ . The following assumption is made throughout the paper.

Assumption 1. (1) $\mu (\phi (\mathbf{v}) - \phi (\mathbf{v}_{*}))\leq \frac{1}{2}\| \nabla \phi (\mathbf{v})\|^{2}$ , where $\mu >0$ and $\mathbf{v}_{*}$ is the optimal solution of $\phi$ .
(2) $h(\mathbf{w};\mathbf{x})$ is $\tilde{L}$ -Lipschitz continuous in terms of $\mathbf{w}$ for all $\mathbf{x}$ . (3) $\phi (\mathbf{v})$ is $L$ -smooth. (4) Var $[h(\mathbf{w};\mathbf{x})|y = -1]\leq \sigma^2$ , Var $[h(\mathbf{w};\mathbf{x})|y = 1]\leq \sigma^2$ . (5) $0\leq h(\mathbf{w};\mathbf{x})\leq 1$ . (6) Given an initial solution $\bar{\mathbf{v}}_0$ , there exists $\Delta_0 > 0$ such that $\phi (\bar{\mathbf{v}}_0) - \phi (\mathbf{v}_*)\leq \Delta_0$ , where $\mathbf{v}_{*}$ is the global minimum of $\phi$ .

Remark: The first condition is inspired by a PL condition on the objective function $P(\mathbf{w})$ for learning a deep neural network, and Lemma 1 below establishes the connection. The condition $h(\mathbf{w};\mathbf{x})\in [0,1]$ holds when $h$ is defined as the sigmoid function composed with the forward propagation function of a neural network.

Algorithm 1 Proximally Guided Algorithm (PGA) (Rafique et al., 2018)
1: Initialize $\bar{\mathbf{v}}_0 = \mathbf{0} \in \mathbb{R}^{d+2}$ , $\bar{\alpha}_0 = 0$ , the global index $j = 0$
2: for $k = 1, \ldots, K$ do
3: $\mathbf{v}_0^k = \bar{\mathbf{v}}_{k-1}, \alpha_0^k = \bar{\alpha}_{k-1}, \eta_k = \eta_0 / k, T_k = T_0 \cdot k^2$
4: for $t = 1, \ldots, T_k$ do
5: Receive $\mathbf{z}_j = (\mathbf{x}_j, y_j)$ from $\mathbb{P}$ , $\hat{\mathbf{g}}_{\mathbf{v}} = \nabla_{\mathbf{v}} F(\mathbf{v}_{t-1}^k, \alpha_{t-1}^k; \mathbf{z}_j)$ , $\hat{\mathbf{g}}_{\alpha} = \nabla_{\alpha} F(\mathbf{v}_{t-1}^k, \alpha_{t-1}^k; \mathbf{z}_j)$
6: $\mathbf{v}_t^k = \Pi_{\Omega_1} \left[ \mathbf{v}_{t-1}^k - \eta_k \left( \hat{\mathbf{g}}_{\mathbf{v}} + \frac{1}{\gamma} (\mathbf{v}_{t-1}^k - \mathbf{v}_0^k) \right) \right]$ , where $\Omega_1 = \{\mathbf{v} : \| \mathbf{v} \| \leq R_1\}$
7: $\alpha_t^k = \Pi_{\Omega_2} \left[ \alpha_{t-1}^k + \eta_k \hat{\mathbf{g}}_{\alpha} \right]$ , where $\Omega_2 = \{\alpha : |\alpha| \leq R_2\}$
8: end for
9: $\bar{\mathbf{v}}_k = \frac{1}{T_k} \sum_{t=1}^{T_k} \mathbf{v}_t^k$ , $\bar{\alpha}_k = \frac{1}{T_k} \sum_{t=1}^{T_k} \alpha_t^k$
10: end for
11: Sample $\tau$ uniformly at random from $\{1, \ldots, K\}$
12: return $\bar{\mathbf{v}}_\tau, \bar{\alpha}_\tau$

Lemma 1. Suppose $\| \nabla_{\mathbf{w}}h(\mathbf{w};\mathbf{x})\| \leq \tilde{L}$ for all $\mathbf{w}$ and $\mathbf{x}$ . If $P(\mathbf{w})$ satisfies the PL condition, i.e., there exists $\mu' > 0$ such that $\mu'(P(\mathbf{w}) - \min_{\mathbf{w}} P(\mathbf{w})) \leq \frac{1}{2} \| \nabla_{\mathbf{w}}P(\mathbf{w})\|^2$ , then we have $\mu(\phi(\mathbf{v}) - \phi(\mathbf{v}_*)) \leq \frac{1}{2} \| \nabla \phi(\mathbf{v})\|^2$ , where $\mu = \frac{1}{\max\left(\frac{1}{2\min(p,1-p)} + \frac{2\tilde{L}^2}{\mu'\min(p^2,(1-p)^2)}, \frac{2}{\mu'}\right)}$ .

Remark: The PL condition of $P(\mathbf{w})$ could be proved for learning a neural network similarly to existing studies; this is not the main focus of this paper. Nevertheless, in Appendix A.7 we provide an example of AUC maximization with a one-hidden-layer neural network.

Warmup. We first discuss the algorithms of (Rafique et al., 2018) and their convergence results when applied to the considered min-max problem. They have algorithms for both the batch-learning and online learning settings. Since the algorithms for the batch-learning setting have complexities scaling with $n$ , we concentrate on the algorithm for the online learning setting. The algorithm is presented in Algorithm 1, which is a direct application of Algorithm 2 of (Rafique et al., 2018) to an online setting. Since their analysis requires the domains of the primal and dual variables to be bounded, we add a ball constraint on both the primal and dual variables. As long as $R_{1}$ and $R_{2}$ are sufficiently large, these constraints should not affect the solution. The convergence result of Algorithm 1 is stated below.

Theorem 1.
(Rafique et al., 2018) Suppose $f(\mathbf{v},\alpha)$ is $\rho$ -weakly convex in $\mathbf{v}$ and concave in $\alpha$ . Let $\gamma = 1 / (2\rho)$ , and define $\hat{\mathbf{v}}_{\tau} = \arg \min_{\mathbf{v}}\phi (\mathbf{v}) + \frac{1}{2\gamma}\| \mathbf{v} - \bar{\mathbf{v}}_{\tau}\|^{2}$ . Algorithm 1 with $T_{k} = ck^{2}$ and $K = \widetilde{O} (\epsilon^{-2})$ ensures that $\mathbb{E}\left[\mathrm{dist}^2 (0,\partial \phi (\hat{\mathbf{v}}_\tau))\right]\leq \frac{1}{\gamma^2}\mathbb{E}\| \hat{\mathbf{v}}_\tau -\bar{\mathbf{v}}_\tau \| ^2\leq \epsilon^2$ . The total iteration complexity is $\widetilde{O} (\epsilon^{-6})$ .

Remark: Under the conditions that $\phi (\mathbf{v})$ is smooth and the returned solution lies within the added ball constraint, the above result implies $\mathbb{E}[\| \nabla \phi (\bar{\mathbf{v}}_{\tau})\|^{2}]\leq \epsilon$ with a complexity of $\widetilde{O}(1 / \epsilon^3)$ . It further implies that with a complexity of $\widetilde{O}(1 / (\mu^3\epsilon^3))$ we have $\mathbb{E}[\phi (\bar{\mathbf{v}}_{\tau}) - \min_{\mathbf{v}}\phi (\mathbf{v})]\leq \epsilon$ under the assumed PL condition.

We can see that this complexity result under the PL condition of $\phi (\mathbf{v})$ is worse than the typical complexity result of the stochastic gradient descent method under the PL condition (i.e., $O(1 / \epsilon)$ ) (Karimi et al., 2016). It remains an open problem how to design a stochastic primal-dual algorithm for solving $\min_{\mathbf{v}}\max_{\alpha}f(\mathbf{v},\alpha)$ that achieves a complexity of $O(1 / \epsilon)$ in terms of minimizing $\phi (\mathbf{v})$ . A naive idea is to solve the inner maximization problem over $\alpha$ first and then use SGD on the primal variable $\mathbf{v}$ . However, this is not viable since exact maximization over $\alpha$ is a non-trivial task.
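To make the primal-dual updates concrete, the sketch below implements the stochastic gradients of $F$ from Proposition 1 for a linear scorer $h(\mathbf{w};\mathbf{x}) = \mathbf{w}^{\top}\mathbf{x}$ and the proximally guided inner loop of Algorithm 1 (steps 5-7). This is an illustrative numpy sketch, not the authors' implementation; the ball projections are omitted for brevity and the function names are our own.

```python
import numpy as np

def F_value(w, a, b, alpha, x, y, p):
    """Per-sample objective F(w, a, b, alpha; z) from Proposition 1,
    with a linear scorer h(w; x) = w.x."""
    h = w @ x
    pos, neg = float(y == 1), float(y == -1)
    return ((1 - p) * (h - a) ** 2 * pos + p * (h - b) ** 2 * neg
            + 2 * (1 + alpha) * (p * h * neg - (1 - p) * h * pos)
            - p * (1 - p) * alpha ** 2)

def grads_F(w, a, b, alpha, x, y, p):
    """Stochastic gradients of F with respect to (w, a, b, alpha)."""
    h = w @ x
    pos, neg = float(y == 1), float(y == -1)
    dh = (2 * (1 - p) * (h - a) * pos + 2 * p * (h - b) * neg
          + 2 * (1 + alpha) * (p * neg - (1 - p) * pos))
    gw = dh * x                                  # chain rule through h = w.x
    ga = -2 * (1 - p) * (h - a) * pos
    gb = -2 * p * (h - b) * neg
    galpha = 2 * (p * h * neg - (1 - p) * h * pos) - 2 * p * (1 - p) * alpha
    return gw, ga, gb, galpha

def pga_inner_loop(data, p, w0, gamma=1.0, eta=0.05, T=2000, seed=0):
    """Steps 5-7 of Algorithm 1: primal descent with a proximal term
    anchoring w at the reference point w0, and dual ascent on alpha
    (projections onto the balls Omega_1, Omega_2 omitted)."""
    rng = np.random.default_rng(seed)
    X, Y = data
    w, a, b, alpha = w0.astype(float), 0.0, 0.0, 0.0
    for _ in range(T):
        i = rng.integers(len(Y))
        gw, ga, gb, galpha = grads_F(w, a, b, alpha, X[i], Y[i], p)
        w = w - eta * (gw + (w - w0) / gamma)    # proximally guided descent
        a -= eta * ga
        b -= eta * gb
        alpha += eta * galpha                    # ascent on the dual variable
    return w, a, b, alpha
```

The hand-written gradients can be sanity-checked against finite differences of `F_value`; with a differentiable nonlinear $h$, `gw` would simply pick up the Jacobian of $h$ in place of `x`.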
+ +# 4 ALGORITHMS AND THEORETICAL ANALYSIS + +In this section, we present two primal-dual algorithms for solving the min-max optimization problem (2) with corresponding theoretical convergence results. For simplicity, we first assume the positive ratio $p$ is known in advance, which is true in the batch-learning setting. Handling the unknown $p$ in an online learning setting is a simple extension, which will be discussed in Section 4.3. The proposed algorithms follow the same proximal point framework proposed in (Rafique et al., 2018), i.e., we + +Algorithm 2 Proximal Primal-Dual Stochastic Gradient (PPD-SG) +1: Initialize $\bar{\mathbf{v}}_0 = \mathbf{0} \in \mathbb{R}^{d+2}$ , $\bar{\alpha}_0 = 0$ , the global index $j = 0$ +2: for $k = 1, \ldots, K$ do +3: $\mathbf{v}_0^k = \bar{\mathbf{v}}_{k-1}, \alpha_0^k = \bar{\alpha}_{k-1}, \eta_k = \eta_0 \exp\left(-\left(k - 1\right)\frac{\mu / L}{5 + \mu / L}\right)$ +4: for $t = 1, \ldots, T_k - 1$ do +5: Receive $\mathbf{z}_j = (\mathbf{x}_j, y_j)$ from $\mathbb{P}$ , $\hat{\mathbf{g}}_{\mathbf{v}} = \nabla_{\mathbf{v}} F(\mathbf{v}_{t-1}^k, \alpha_{t-1}^k; \mathbf{z}_j)$ , $\hat{\mathbf{g}}_{\alpha} = \nabla_{\alpha} F(\mathbf{v}_{t-1}^k, \alpha_{t-1}^k; \mathbf{z}_j)$ +6: $\mathbf{v}_t^k = \mathbf{v}_{t-1}^k - \eta_k\left(\hat{\mathbf{g}}_{\mathbf{v}} + \frac{1}{\gamma}(\mathbf{v}_{t-1}^k - \mathbf{v}_0^k)\right)$ +7: $\alpha_t^k = \alpha_{t-1}^k + \eta_k\hat{\mathbf{g}}_{\alpha}$ +8: $j = j + 1$ +9: end for +10: $\bar{\mathbf{v}}_k = \frac{1}{T_k} \sum_{t=0}^{T_k-1} \mathbf{v}_t^k$ +11: Draw a minibatch $\{\mathbf{z}_j, \ldots, \mathbf{z}_{j+m_k-1}\}$ of size $m_k$ +12: $\bar{\alpha}_k = \frac{\sum_{i=j}^{j+m_k-1} h(\bar{\mathbf{w}}_k; \mathbf{x}_i) \mathbb{I}_{y_i=-1}}{\sum_{i=j}^{j+m_k-1} \mathbb{I}_{y_i=-1}} - \frac{\sum_{i=j}^{j+m_k-1} h(\bar{\mathbf{w}}_k; \mathbf{x}_i) \mathbb{I}_{y_i=1}}{\sum_{i=j}^{j+m_k-1} \mathbb{I}_{y_i=1}}$ +13: $j = j + m_k$ +14: end for +15: return $\bar{\mathbf{v}}_K, \bar{\alpha}_K$ 
solve the following convex-concave problems approximately and iteratively:

$$
\min _ {\mathbf {v}} \max _ {\alpha \in \mathbb {R}} \left\{f (\mathbf {v}, \alpha) + \frac {1}{2 \gamma} \| \mathbf {v} - \mathbf {v} _ {0} \| ^ {2} \right\} \tag {3}
$$

where $\gamma < 1 / L$ ensures that the new objective function becomes convex in $\mathbf{v}$ (it remains concave in $\alpha$ ), and $\mathbf{v}_0$ is periodically updated.

# 4.1 PROXIMAL PRIMAL-DUAL STOCHASTIC GRADIENT

Our first algorithm, named Proximal Primal-Dual Stochastic Gradient (PPD-SG), is presented in Algorithm 2. Similar to Algorithm 1, it has a nested loop, where the inner loop approximately solves the regularized min-max optimization problem (3) using a stochastic primal-dual gradient method, and the outer loop updates the reference point and the learning rate. One key difference is that PPD-SG uses a geometrically decaying step size scheme, while Algorithm 1 uses a polynomially decaying step size scheme. Another key difference is that at the end of the $k$ -th outer loop, we update the dual variable $\bar{\alpha}_k$ in Step 12, which is motivated by its closed-form solution given $\bar{\mathbf{v}}_k$ . In particular, given $\bar{\mathbf{v}}_k$ , the dual solution that optimizes the inner maximization problem is:

$$
\alpha = \frac {\mathbb {E} [ h (\bar {\mathbf {w}} _ {k} ; \mathbf {x}) \mathbb {I} _ {[ y = - 1 ]} ]}{1 - p} - \frac {\mathbb {E} [ h (\bar {\mathbf {w}} _ {k} ; \mathbf {x}) \mathbb {I} _ {[ y = 1 ]} ]}{p} = \mathbb {E} _ {\mathbf {x}} [ h (\bar {\mathbf {w}} _ {k}; \mathbf {x}) | y = - 1 ] - \mathbb {E} _ {\mathbf {x}} [ h (\bar {\mathbf {w}} _ {k}; \mathbf {x}) | y = 1 ].
$$

In the algorithm, we only use a small number of samples in Step 11 to compute an estimate of the optimal $\alpha$ given $\bar{\mathbf{v}}_k$ . These differences are important for achieving the lower iteration complexity of PPD-SG. Next, we present the convergence results of PPD-SG.

Lemma 2 (One Epoch Analysis of Algorithm 2).
Suppose Assumption 1 holds and there exists $G > 0$ such that $\| \hat{\mathbf{g}}_t^k\| _2\leq G$ , where $\hat{\mathbf{g}}_t^k = \left(\nabla_{\mathbf{v}}F(\mathbf{v}_t^k,\alpha_t^k;\mathbf{z})^\top +\frac{1}{\gamma}\left(\mathbf{v}_t^k -\mathbf{v}_0^k\right)^\top , - \nabla_\alpha F(\mathbf{v}_t^k,\alpha_t^k;\mathbf{z})^\top\right)^\top$ . Define $\phi_k(\mathbf{v}) = \phi (\mathbf{v}) + \frac{1}{2\gamma}\| \mathbf{v} - \bar{\mathbf{v}}_{k - 1}\| ^2,\mathbf{s}_k = \arg \min_{\mathbf{v}\in \mathbb{R}^{d + 2}}\phi_k(\mathbf{v}).$ Choosing $m_{k - 1}\geq \frac{2(\sigma^2 + C)}{p(1 - p)\eta_k^2G^2T_k}$ with $C = \frac{2}{\ln(\frac{1}{\max(p,1 - p)})}\max (p,1 - p)^{\frac{1}{\ln(1 / \max(p,1 - p))}}$ , we have

$$
\mathbb {E} _ {k - 1} \left[ \phi_ {k} (\bar {\mathbf {v}} _ {k}) - \min _ {\mathbf {v}} \phi_ {k} (\mathbf {v}) \right] \leq \frac {\| \bar {\mathbf {v}} _ {k - 1} - \mathbf {s} _ {k} \| ^ {2} + 16 \tilde {L} ^ {2} \mathbb {E} _ {k - 1} \| \bar {\mathbf {v}} _ {k - 1} - \bar {\mathbf {v}} _ {k} \| ^ {2}}{2 \eta_ {k} T _ {k}} + 4 \eta_ {k} G ^ {2},
$$

where $\mathbb{E}_{k - 1}$ stands for the conditional expectation conditioning on all the stochastic events until $\bar{\mathbf{v}}_{k - 1}$ is generated.

Theorem 2. Suppose the same conditions as in Lemma 2 hold. Set $\eta_{k} = \eta_{0}\exp \left(-\left(k - 1\right)\frac{\mu / L}{5 + \mu / L}\right)$ and $T_{k} = \frac{\max(2,16\tilde{L}^{2})}{L\eta_{0}}\exp \left((k - 1)\frac{\mu / L}{5 + \mu / L}\right),m_{k} = \frac{2(\sigma^{2} + C)L}{p(1 - p)G^{2}\eta_{0}\max(2,16\tilde{L}^{2})}\exp \left(k\frac{\mu / L}{5 + \mu / L}\right)$ with $C = \frac{2}{\ln(\frac{1}{\max(p,1 - p)})}\max (p,1 - p)^{\frac{1}{\ln(1 / \max(p,1 - p))}},\gamma = \frac{1}{2L}$ in Algorithm 2.
To return $\bar{\mathbf{v}}_K$ such that $\mathbb{E}\left[\phi (\bar{\mathbf{v}}_K) - \phi (\mathbf{v}_*)\right]\leq \epsilon$, it suffices to choose $K\geq \left(\frac{5L}{\mu} +1\right)\max \left(\log \frac{2\Delta_0}{\epsilon},\log K + \log \frac{48G^2\eta_0}{5\epsilon}\right)$. The number of iterations is at most $\widetilde{O}\left(\frac{LG^2}{\mu^2\epsilon}\right)$, and the required number of samples is at most $\widetilde{O}\left(\frac{L^3\sigma^2}{\mu^2\epsilon}\right)$, where $\widetilde{O}(\cdot)$ hides logarithmic factors of $L, \mu, \epsilon, G, \sigma$.

Algorithm 3 Inner Loop of Proximal Primal-Dual AdaGrad (PPD-AdaGrad)

1: for $t = 1, \dots, T_k - 1$ do
2: Receive $\mathbf{z}_j = (\mathbf{x}_j,y_j)$ from $\mathbb{P}$, $\hat{\mathbf{g}}_{\mathbf{v}} = \nabla_{\mathbf{v}}F(\mathbf{v}_{t}^{k},\alpha_{t}^{k};\mathbf{z}_{j})$, $\hat{\mathbf{g}}_{\alpha} = \nabla_{\alpha}F(\mathbf{v}_{t}^{k},\alpha_{t}^{k};\mathbf{z}_{j})$
3: $\hat{\mathbf{g}}_t^k = [\hat{\mathbf{g}}_{\mathbf{v}} + \frac{1}{\gamma} (\mathbf{v}_t^k -\mathbf{v}_0^k); - \hat{\mathbf{g}}_\alpha ]\in \mathbb{R}^{d + 3},\hat{\mathbf{g}}_{1:t}^k = [\hat{\mathbf{g}}_{1:t - 1}^k,\hat{\mathbf{g}}_t^k ],s_{t,i}^k = \left\| \hat{\mathbf{g}}_{1:t,i}^k\right\| _2$
4: $H_{t}^{k} = \delta I + \mathrm{diag}(s_{t}^{k}),\psi_{t}^{k}(\mathbf{u}) = \frac{1}{2}\langle \mathbf{u} - \mathbf{u}_{0}^{k},H_{t}^{k}(\mathbf{u} - \mathbf{u}_{0}^{k})\rangle$, where $\mathbf{u}_0^k = [\mathbf{v}_0^k;\bar{\alpha}_0^k ]\in \mathbb{R}^{d + 3}$
5: $\mathbf{u}_{t + 1}^k = \arg \min_{\mathbf{u}}\left\{\eta_k\langle \frac{1}{t}\sum_{\tau = 1}^t\hat{\mathbf{g}}_\tau^k,\mathbf{u}\rangle +\frac{1}{t}\psi_t^k (\mathbf{u})\right\}$
6: end for

Remark: The above complexity result is similar to that of (Karimi et al., 2016) for solving a nonconvex minimization problem under the PL condition up to a logarithmic factor. 
Compared with the complexity result of Algorithm 1 discussed earlier, i.e., $\widetilde{O}(1 / (\mu^3\epsilon^3))$, the above complexity of order $\widetilde{O}(1 / (\mu^2\epsilon))$ is much better: it not only improves the dependence on $\epsilon$ but also improves the dependence on $\mu$.

# 4.2 PROXIMAL PRIMAL-DUAL ADAGRAD

Our second algorithm, named Proximal Primal-Dual AdaGrad (PPD-AdaGrad), is an AdaGrad-style algorithm. Since it differs from PPD-SG only in the updates of the inner loop, we present only the inner loop in Algorithm 3. The updates in the inner loop are similar to the adaptive updates of traditional AdaGrad (Duchi et al., 2011). We aim to achieve adaptive convergence by using PPD-AdaGrad. The analysis of PPD-AdaGrad is inspired by the analysis of AdaGrad for non-convex minimization problems (Chen et al., 2019). The key difference is that we have to carefully deal with the primal-dual updates for the non-convex min-max problem. We summarize the convergence results of PPD-AdaGrad below.

Lemma 3 (One Epoch Analysis of Algorithm 3). Suppose Assumption 1 and $\| \hat{\mathbf{g}}_t^k\|_\infty \leq \delta$ hold. 
Define $\phi_k(\mathbf{v}) = \phi (\mathbf{v}) + \frac{1}{2\gamma}\| \mathbf{v} - \bar{\mathbf{v}}_{k - 1}\|^2$ and $\mathbf{s}_k = \arg \min_{\mathbf{v}\in \mathbb{R}^{d + 2}}\phi_k(\mathbf{v})$. Choosing $m_{k - 1}\geq \frac{2(\sigma^2 + C)}{p(1 - p)(d + 3)\eta_k^2}$ with $C = \frac{2}{\ln(\frac{1}{\max(p,1 - p)})}\max (p,1 - p)^{\frac{1}{\ln(1 / \max(p,1 - p))}}$, and

$$
T _ {k} = \inf \left\{\tau : \tau \geq M _ {k} \max \left(\frac {\left(\delta + \max _ {i} \| \hat {\mathbf {g}} _ {1 : \tau , i} ^ {k} \| _ {2}\right) \max \left(1 , 8 \tilde {L} ^ {2}\right)}{c}, 2 c \left(\sum_ {i = 1} ^ {d + 3} \| \hat {\mathbf {g}} _ {1: \tau , i} ^ {k} \| _ {2} + (d + 3) \left(\delta + \max _ {i} \| \hat {\mathbf {g}} _ {1: \tau , i} ^ {k} \| _ {2}\right)\right)\right) \right\}
$$

with $M_{k} > 0$ and $c > 0$, we then have

$$
\mathbb {E} _ {k - 1} \left[ \phi_ {k} (\bar {\mathbf {v}} _ {k}) - \min _ {\mathbf {v}} \phi_ {k} (\mathbf {v}) \right] \leq \frac {c \left(\| \bar {\mathbf {v}} _ {k - 1} - \mathbf {s} _ {k} \| _ {2} ^ {2} + \mathbb {E} _ {k - 1} \| \bar {\mathbf {v}} _ {k - 1} - \bar {\mathbf {v}} _ {k} \| _ {2} ^ {2}\right)}{\eta_ {k} M _ {k}} + \frac {\eta_ {k}}{c M _ {k}},
$$

where $\mathbb{E}_{k - 1}$ stands for the conditional expectation conditioning on all the stochastic events until $\bar{\mathbf{v}}_{k - 1}$ is generated.

Theorem 3. Suppose the same conditions as in Lemma 3 hold. Set $\eta_{k} = \eta_{0}\exp \left(-\frac{(k - 1)}{2}\frac{\mu / L}{5 + \mu / L}\right)$, $M_{k} = \frac{4c}{L\eta_{0}}\exp \left(\frac{(k - 1)}{2}\frac{\mu / L}{5 + \mu / L}\right)$, $m_{k} = \frac{2(\sigma^{2} + C)}{p(1 - p)\eta_{0}^{2}(d + 3)}\exp \left(k\frac{\mu / L}{5 + \mu / L}\right)$ with $C = \frac{2}{\ln(\frac{1}{\max(p,1 - p)})}\max (p,1 - p)^{\frac{1}{\ln(1 / \max(p,1 - p))}}$, $\gamma = \frac{1}{2L}$, and $T_{k}$ as in Lemma 3 with $c = \frac{1}{\sqrt{d + 3}}$. 
Suppose $\| \hat{\mathbf{g}}_{1:T_k,i}^k\| _2\leq \delta \cdot T_k^\alpha$ for all $k$, where $0\leq \alpha \leq \frac{1}{2}$. To return $\bar{\mathbf{v}}_K$ such that $\mathbb{E}\left[\phi (\bar{\mathbf{v}}_K) - \phi (\mathbf{v}_*)\right]\leq \epsilon$, it suffices to choose $K\geq \left(\frac{5L}{\mu} +1\right)\max \left(\log \frac{2\Delta_0}{\epsilon},\log K + \log \frac{\eta_0^2L}{5c^2\epsilon}\right)$. The number of iterations is at most $\widetilde{O}\left(\left(\frac{L\delta^2d}{\mu^2\epsilon}\right)^{\frac{1}{2(1 - \alpha)}}\right)$, and the required number of samples is at most $\widetilde{O}\left(\frac{L^3\sigma^2}{\mu^2\epsilon}\right)$, where $\widetilde{O} (\cdot)$ hides logarithmic factors of $L,\mu ,\epsilon ,\delta$.

Remark: When the cumulative growth of the stochastic gradients is slow, i.e., $\alpha < 1/2$, the number of iterations is smaller than that in Theorem 2, which exhibits an adaptive iteration complexity.

# 4.3 EXTENSIONS

Algorithm 4 Update $T_{+}, T_{-}, \widehat{p}, \widehat{p(1 - p)}, \bar{y}$ given data $\{\mathbf{z}_j, \dots, \mathbf{z}_{j + m - 1}\}$

1: Update $T_{-} = T_{-} + \sum_{i = j}^{j + m - 1}\mathbb{I}_{[y_i = -1]}$, $T_{+} = T_{+} + \sum_{i = j}^{j + m - 1}\mathbb{I}_{[y_i = 1]}$

$$
2: \widehat {p} = T _ {+} / \left(T _ {+} + T _ {-}\right), \bar {y} = \frac {(j + 2) \bar {y} + \sum_ {i = j} ^ {j + m - 1} \mathbb {I} _ {[ y _ {i} = 1 ]}}{j + m + 2}, \widehat {p (1 - p)} = \frac {(j + 1) \widehat {p (1 - p)} + \sum_ {i = j} ^ {j + m - 1} (\mathbb {I} _ {[ y _ {i} = 1 ]} - \bar {y}) ^ {2}}{j + m + 1}
$$

Setting $\eta_k, T_k, m_k$. It is notable that the settings of $\eta_k, T_k, m_k$ depend on parameters such as $\mu$ and $L$, which are typically unknown. One heuristic to address this issue is to decrease $\eta_k$ by a constant factor larger than 1 (e.g., 2, 5, or 10), and to similarly increase $T_k$ and $m_k$ by a constant factor. 
Another heuristic is to decrease the step size by a constant factor when the performance on validation data saturates (Krizhevsky et al., 2012).

Variants when $p$ is unknown. In the online learning setting where $p$ is unknown, the stochastic gradients of $f$ in both $\mathbf{v}$ and $\alpha$ are not directly available. To address this issue, we can keep unbiased estimators of both $p$ and $p(1 - p)$ that are independent of the newly arrived data, and update these estimators during the optimization procedure. All values depending on $p$ and $p(1 - p)$ (i.e., $F, \mathbf{g}_{\mathbf{v}}, \mathbf{g}_{\alpha}$) are estimated by substituting $\widehat{p}$ and $\widehat{p(1 - p)}$ for $p$ and $p(1 - p)$ (yielding $\hat{F}, \hat{\mathbf{g}}_{\mathbf{v}}, \hat{\mathbf{g}}_{\alpha}$), respectively. The approach for keeping the unbiased estimators $\widehat{p}$ and $\widehat{p(1 - p)}$ during the optimization is described in Algorithm 4, where $j$ is the global index and $m$ is the number of examples received.

Extensions to multi-class problems. In the previous analysis, we only consider the binary classification problem. We can extend it to the multi-class setting. To this end, we first introduce the definition of AUC in this setting according to (Hand & Till, 2001). Suppose there are $c$ classes; we have one scoring function for each class, namely $h(\mathbf{w}_1; \mathbf{x}), \ldots, h(\mathbf{w}_c; \mathbf{x})$. We assume that these scores are normalized such that $\sum_{k=1}^{c} h(\mathbf{w}_k; \mathbf{x}) = 1$. Note that if these functions are implemented by a deep neural network, they can share the lower layers and have individual last layers of connections. 
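To make the shared-trunk design concrete, the following minimal sketch (our illustration with hypothetical shapes and names, not the ResNet-20 used in the experiments) produces $c$ scores from one shared representation, with a softmax enforcing the normalization $\sum_{k=1}^{c} h(\mathbf{w}_k; \mathbf{x}) = 1$:

```python
import numpy as np

def multi_class_scores(x, W_trunk, W_heads):
    """Shared lower layers (trunk) followed by one scoring head per class.

    The softmax over the c head outputs guarantees non-negative scores
    that sum to one, matching the normalization assumed in the text.
    """
    z = np.maximum(W_trunk @ x, 0.0)             # shared representation (ReLU)
    logits = np.array([w @ z for w in W_heads])  # one linear head per class
    e = np.exp(logits - logits.max())            # numerically stable softmax
    return e / e.sum()                           # h(w_1; x), ..., h(w_c; x)

# Toy example with random weights (shapes are illustrative only).
rng = np.random.default_rng(0)
d, hdim, c = 8, 16, 5
x = rng.normal(size=d)
W_trunk = rng.normal(size=(hdim, d))
W_heads = [rng.normal(size=hdim) for _ in range(c)]
h = multi_class_scores(x, W_trunk, W_heads)
```

Any other normalization that keeps the scores non-negative and summing to one would serve the same purpose; the softmax is just the most common choice for a shared deep network.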
The AUC is defined as

$$
\operatorname {A U C} (h) = \frac {1}{c (c - 1)} \sum_ {i = 1} ^ {c} \sum_ {j \neq i} \Pr \left(h \left(\mathbf {w} _ {i}; \mathbf {x}\right) \geq h \left(\mathbf {w} _ {i}; \mathbf {x} ^ {\prime}\right) | y = i, y ^ {\prime} = j\right). \tag {4}
$$

Similar to Proposition 1, we can cast the problem into

$$
\min _ {\mathbf {w}, \mathbf {a}, \mathbf {b}} \max _ {\alpha} \frac {1}{c (c - 1)} \sum_ {i = 1} ^ {c} \sum_ {j \neq i} \mathbb {E} _ {\mathbf {z}} \left[ F _ {i j} \left(\mathbf {w} _ {i}, a _ {i j}, b _ {i j}, \alpha_ {i j}; \mathbf {z}\right) \right], \tag {5}
$$

where $\mathbf{w} = [\mathbf{w}_1,\dots ,\mathbf{w}_c]$, $\mathbf{a},\mathbf{b},\alpha \in \mathbb{R}^{c\times c}$, $i,j = 1,\ldots ,c$, $\mathbf{z} = (\mathbf{x},y)\sim \mathbb{P}$, $p_i = \operatorname *{Pr}(y = i)$, and

$$
\begin{array}{l} F _ {i j} (\mathbf {w} _ {i}, a _ {i j}, b _ {i j}, \alpha_ {i j}; \mathbf {z}) = p _ {j} (h (\mathbf {w} _ {i}; \mathbf {x}) - a _ {i j}) ^ {2} \mathbb {I} _ {[ y = i ]} + p _ {i} (h (\mathbf {w} _ {i}; \mathbf {x}) - b _ {i j}) ^ {2} \mathbb {I} _ {[ y = j ]} \\ + 2 \left(1 + \alpha_ {i j}\right) \left(p _ {i} h \left(\mathbf {w} _ {i}; \mathbf {x}\right) \mathbb {I} _ {[ y = j ]} - p _ {j} h \left(\mathbf {w} _ {i}; \mathbf {x}\right) \mathbb {I} _ {[ y = i ]}\right) - p _ {i} p _ {j} \alpha_ {i j} ^ {2}. \\ \end{array}
$$

Then we can modify our algorithms to accommodate the multiple class pairs. We can also add another level of sampling over class pairs when computing the stochastic gradients.

# 5 EXPERIMENTAL RESULTS

In this section, we present empirical results to verify the effectiveness of the proposed algorithms. 
We compare our algorithms (PPD-SG and PPD-AdaGrad) with three baseline methods: PGA (Algorithm 1); the online AUC method (OAUC) (Ying et al., 2016), which directly employs the standard primal-dual stochastic gradient method with a decreasing step size for solving the min-max formulation; and standard stochastic gradient descent (SGD) for minimizing the cross-entropy loss. Comparing with PGA and OAUC allows us to verify the effectiveness of the proposed algorithms for solving the same formulation, and comparing with SGD allows us to verify the effectiveness of maximizing AUC for imbalanced data. We use a residual network with 20 layers (ResNet-20) to implement the deep neural network for all algorithms.

We use the stagewise step size strategy as in (He et al., 2016) for SGD, i.e., the step size is divided by 10 at 40K and 60K iterations. For PPD-SG and PPD-AdaGrad, we set $T_{k} = T_{0}3^{k}$, $\eta_{k} = \eta_{0} / 3^{k}$, where $T_{0}$ and $\eta_{0}$ are tuned on validation data. The value of $\gamma$ is tuned for PGA and the same value is used for PPD-SG and PPD-AdaGrad. The initial step size is tuned in $\{0.1, 0.05, 0.01, 0.008, 0.005\}$ and $T_{0}$ is tuned in $[200, 2000]$ for each algorithm separately. The batch size is set to 128. For STL10, we use a smaller batch size of 32 due to the limited training data. 
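The stagewise schedule above ($T_k = T_0 3^k$, $\eta_k = \eta_0 / 3^k$) can be sketched as follows; this is a minimal illustration with hypothetical function and variable names, not code from the paper:

```python
def stagewise_schedule(eta0, T0, num_stages, factor=3):
    """Return one (step size, inner-iteration count) pair per outer stage k.

    Stage k uses eta_k = eta0 / factor**k and T_k = T0 * factor**k, so the
    step size decays geometrically while the inner-loop budget grows at the
    same rate (factor=3 matches the setting used in the experiments).
    """
    return [(eta0 / factor**k, T0 * factor**k) for k in range(num_stages)]

# Example: eta0 = 0.1, T0 = 200, three stages.
schedule = stagewise_schedule(0.1, 200, 3)
```

Each pair would drive one outer iteration of PPD-SG or PPD-AdaGrad: run the inner loop for $T_k$ steps with step size $\eta_k$, then update the reference point.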

![](images/8372b4fac4dfb413e44e0c4e7d3c559ed6e2567fd8c890a46ac6be7425877542.jpg)

![](images/d9d621484e5e56c3b70b85b1abef4b0a2d43b806bc97200171e315d6a0482319.jpg)

![](images/95e188da2c144d4a310230634456d2e9b257236cdc923308e159f28e7bf9ec6f.jpg)

![](images/e310b2689ecb3e1e09f301206a340faa4b31c742bfba871337e0c5fbc0485211.jpg)

![](images/7b2cf4b2d3486400da4ad0d2c1d5638a268615f14b1ac1712073a1292ea82c7e.jpg)

![](images/45e952757e5e62c75c9578dc7b9488ceeccd72b85d48cf3320f6269927e1fe6c.jpg)

![](images/c04529eafadc20c8e7dce8aca6543cb4160186a182d53dd16ccf53852981d8d1.jpg)

![](images/273a30a14c9999b7ff2e4c7f43529218ace93ea4b31d9b66ddb4a6fd1b94cbae.jpg)

![](images/ef139a6bc62a176f3e13b7c7f387755a4096db4911dd8edc86ee914b0fe50384.jpg)

![](images/3adf48146a3f5fbf700432d41888095eaac89492d14309d46e89ee3d53e292d5.jpg)

![](images/00038f3d8454b82160fdb8f991e8762929f032b134ea75476cd4e7c4f32f2def.jpg)

![](images/94122a7ecf9e9f8d493bf1029cb30f318fbdd59b4d7f034adb10446dd204e806.jpg)

![](images/afe0b79c7535baeea63e9e8f3cfd0a7c6dd4badcb4208d7d883ff77d6e683d03.jpg)

![](images/ee82a56e5548db80eda1be8b11e76f744b651a6517d5efd1697bef0eb55bcc4c.jpg)

![](images/7a2cd1d3575b7efe77d3167222a7c0d4d813f88484d5f6cce6efa77c2b04003c.jpg)

![](images/e4f64d2c0559039786838b7bfb0ab1092e0e0fa48b72cb4c91c1a727ee698fe6.jpg)

Figure 1: Comparison of testing AUC on Cat&Dog, CIFAR10, CIFAR100 and STL10.

We conduct the comparisons on four benchmark datasets: Cat&Dog (C2), CIFAR10 (C10), CIFAR100 (C100), and STL10. STL10 is an extension of CIFAR10, and its images are acquired from ImageNet. Cat&Dog is a Kaggle dataset containing 25,000 images of dogs and cats; we choose an 80:20 split to construct the training and testing sets. We use $19\mathrm{k} / 1\mathrm{k}$, $45\mathrm{k} / 5\mathrm{k}$, $45\mathrm{k} / 5\mathrm{k}$, and $4\mathrm{k} / 1\mathrm{k}$ training/validation splits on C2, C10, C100, and STL10, respectively. 
For each dataset, we construct multiple binary classification tasks with varying imbalance ratios of the number of negative examples to the number of positive examples. For details of the construction of the binary classification tasks, please refer to Appendix A.8.

We report the convergence of AUC on testing data in Figure 1, where each title shows the ratio of the majority class to the minority class. The results on the convergence of AUC versus time in seconds are also presented in Figure 3. From the results we can see that for the balanced settings with ratio equal to $50\%$, SGD performs consistently better than the other methods on C2 and CIFAR10. However, it is worse than the AUC-optimization-based methods on CIFAR100 and STL10. For imbalanced settings, the AUC-maximization-based methods are more advantageous than SGD in most cases. In addition, PPD-SG and PPD-AdaGrad are mostly better than the other baseline algorithms. In certain cases, PPD-AdaGrad can be faster than PPD-SG. Finally, we observe even better performance (reported in the Appendix) with a mixed strategy that pre-trains the model with SGD and then switches to PPD-SG.

# 6 CONCLUSION

In this paper, we consider the stochastic AUC maximization problem when the predictive model is a deep neural network. By building on the saddle point reformulation and exploiting the Polyak-Łojasiewicz condition in deep learning, we have proposed two algorithms with state-of-the-art complexities for the stochastic AUC maximization problem. We have also demonstrated the efficiency of the proposed algorithms on several benchmark datasets, and the experimental results indicate that our algorithms converge faster than the baselines. One may consider extending the analysis techniques to other problems with a min-max formulation.

# ACKNOWLEDGMENTS

The authors thank the anonymous reviewers for their helpful comments. M. Liu, Z. Yuan and T. Yang are partially supported by National Science Foundation CAREER Award 1844403. 
+ +# REFERENCES + +Zeyuan Allen-Zhu, Yanzhi Li, and Zhao Song. A convergence theory for deep learning via overparameterization. arXiv preprint arXiv:1811.03962, 2018. +Sanjeev Arora, Nadav Cohen, Noah Golowich, and Wei Hu. A convergence analysis of gradient descent for deep linear neural networks. arXiv preprint arXiv:1810.02281, 2018. +Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014. +Zachary Charles and Dimitris Papailiopoulos. Stability and generalization of learning algorithms that converge to global optima. arXiv preprint arXiv:1710.08402, 2017. +Zaiyi Chen, Zhuoning Yuan, Jinfeng Yi, Bowen Zhou, Enhong Chen, and Tianbao Yang. Universal stagewise learning for non-convex problems with convergence on averaged solutions. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Syx5V2CcFm. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. +Simon S Du, Jason D Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent finds global minima of deep neural networks. arXiv preprint arXiv:1811.03804, 2018a. +Simon S Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. arXiv preprint arXiv:1810.02054, 2018b. +John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011. +John C Duchi, Shai Shalev-Shwartz, Yoram Singer, and Ambuj Tewari. Composite objective mirror descent. In $COLT$ , pp. 14-26, 2010. +Charles Elkan. The foundations of cost-sensitive learning. In International joint conference on artificial intelligence, volume 17, pp. 973-978. Lawrence Erlbaum Associates Ltd, 2001. 
Wei Gao, Rong Jin, Shenghuo Zhu, and Zhi-Hua Zhou. One-pass auc optimization. In ICML (3), pp. 906-914, 2013.
Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
David J Hand and Robert J Till. A simple generalisation of the area under the roc curve for multiple class classification problems. Machine learning, 45(2):171-186, 2001.
James A Hanley and Barbara J McNeil. The meaning and use of the area under a receiver operating characteristic (roc) curve. Radiology, 143(1):29-36, 1982.
James A Hanley and Barbara J McNeil. A method of comparing the areas under receiver operating characteristic curves derived from the same cases. Radiology, 148(3):839-843, 1983.
Moritz Hardt and Tengyu Ma. Identity matters in deep learning. arXiv preprint arXiv:1611.04231, 2016.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
Geoffrey Hinton, Li Deng, Dong Yu, George Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Brian Kingsbury, et al. Deep neural networks for acoustic modeling in speech recognition. IEEE Signal processing magazine, 29, 2012.
Chi Jin, Praneeth Netrapalli, and Michael I Jordan. Minmax optimization: Stable limit points of gradient descent ascent are locally optimal. arXiv preprint arXiv:1902.00618, 2019.

Hamed Karimi, Julie Nutini, and Mark Schmidt. Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 795-811. Springer, 2016.
Robert Kleinberg, Yuanzhi Li, and Yang Yuan. An alternative view: When does sgd escape local minima? arXiv preprint arXiv:1802.06175, 2018.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 
Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097-1105, 2012. +Lihua Lei, Cheng Ju, Jianbo Chen, and Michael I Jordan. Non-convex finite-sum optimization via scsg methods. In Advances in Neural Information Processing Systems, pp. 2348-2358, 2017. +Yuanzhi Li and Yingyu Liang. Learning overparameterized neural networks via stochastic gradient descent on structured data. In Advances in Neural Information Processing Systems, pp. 8157-8166, 2018. +Yuanzhi Li and Yang Yuan. Convergence analysis of two-layer neural networks with relu activation. In Advances in Neural Information Processing Systems, pp. 597-607, 2017. +Zhize Li and Jian Li. A simple proximal stochastic gradient method for nonsmooth nonconvex optimization. In Advances in Neural Information Processing Systems, pp. 5564-5574, 2018. +Qihang Lin, Mingrui Liu, Hassan Rafique, and Tianbao Yang. Solving weakly-convex-weakly-concave saddle-point problems as weakly-monotone variational inequality. arXiv preprint arXiv:1810.10207, 2018. +Mingrui Liu, Xiaoxuan Zhang, Zaiyi Chen, Xiaoyu Wang, and Tianbao Yang. Fast stochastic auc maximization with o (1/n)-convergence rate. In International Conference on Machine Learning, pp. 3195-3203, 2018. +Songtao Lu, Ioannis Tsaknakis, Mingyi Hong, and Yongxin Chen. Hybrid block successive approximation for one-sided non-convex min-max problems: Algorithms and applications. arXiv preprint arXiv:1902.08294, 2019. +Abdel-rahman Mohamed, George E Dahl, and Geoffrey Hinton. Acoustic modeling using deep belief networks. IEEE Transactions on Audio, Speech, and Language Processing, 20(1):14-22, 2012. +Michael Natole, Yiming Ying, and Siwei Lyu. Stochastic proximal algorithms for auc maximization. In International Conference on Machine Learning, pp. 3707-3716, 2018. +Arkadi Nemirovski, Anatoli Juditsky, Guanghui Lan, and Alexander Shapiro. Robust stochastic approximation approach to stochastic programming. 
SIAM Journal on optimization, 19(4):1574-1609, 2009.
Yurii Nesterov. Introductory lectures on convex optimization: A basic course, volume 87. Springer Science & Business Media, 2013.
Lam M Nguyen, Jie Liu, Katya Scheinberg, and Martin Takac. Stochastic recursive gradient algorithm for nonconvex optimization. arXiv preprint arXiv:1705.07261, 2017.
Boris Teodorovich Polyak. Gradient methods for minimizing functionals. Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, 3(4):643-653, 1963.
Hassan Rafique, Mingrui Liu, Qihang Lin, and Tianbao Yang. Non-convex min-max optimization: Provable algorithms and applications in machine learning. arXiv preprint arXiv:1810.02060, 2018.
Sashank J Reddi, Ahmed Hefny, Suvrit Sra, Barnabas Poczos, and Alex Smola. Stochastic variance reduction for nonconvex optimization. In International conference on machine learning, pp. 314-323, 2016.
Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pp. 91-99, 2015.

R Tyrrell Rockafellar. Monotone operators and the proximal point algorithm. SIAM journal on control and optimization, 14(5):877-898, 1976.
Maziar Sanjabi, Meisam Razaviyayn, and Jason D Lee. Solving non-convex non-concave min-max games under Polyak-Łojasiewicz condition. arXiv preprint arXiv:1812.02878, 2018.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104-3112, 2014.
Zhe Wang, Kaiyi Ji, Yi Zhou, Yingbin Liang, and Vahid Tarokh. Spiderboost: A class of faster variance-reduced algorithms for nonconvex optimization. arXiv preprint arXiv:1810.10690, 2018.
Yiming Ying, Longyin Wen, and Siwei Lyu. 
Stochastic online auc maximization. In Advances in Neural Information Processing Systems, pp. 451-459, 2016. +Peilin Zhao, Rong Jin, Tianbao Yang, and Steven C Hoi. Online auc maximization. In Proceedings of the 28th international conference on machine learning (ICML-11), pp. 233-240, 2011. +Dongruo Zhou, Pan Xu, and Quanquan Gu. Stochastic nested variance reduced gradient descent for nonconvex optimization. In Advances in Neural Information Processing Systems, pp. 3921-3932, 2018. +Yi Zhou and Yingbin Liang. Characterization of gradient dominance and regularity conditions for neural networks. arXiv preprint arXiv:1710.06910, 2017. +Difan Zou and Quanquan Gu. An improved analysis of training over-parameterized deep neural networks. arXiv preprint arXiv:1906.04688, 2019. +Difan Zou, Yuan Cao, Dongruo Zhou, and Quanquan Gu. Stochastic gradient descent optimizes over-parameterized deep relu networks. arXiv preprint arXiv:1811.08888, 2018. + +# A APPENDIX + +# A.1 PROOF OF PROPOSITION 1 + +Proof. 
It suffices to prove that + +$$ +\mathbb {E} _ {\mathbf {z}, \mathbf {z} ^ {\prime}} \left[ (1 - h (\mathbf {w}; \mathbf {x}) + h (\mathbf {w}; \mathbf {x} ^ {\prime})) ^ {2} | y = 1, y ^ {\prime} = - 1 \right] = 1 + \frac {\min _ {(a , b) \in \mathbb {R} ^ {2}} \max _ {\alpha \in \mathbb {R}} \mathbb {E} _ {\mathbf {z}} \left[ F (\mathbf {w} , a , b , \alpha ; \mathbf {z}) \right]}{p (1 - p)} \tag {6} +$$ + +Note that + +$$ +\begin{array}{l} \mathrm {L H S} = 1 + \mathbb {E} \left[ h ^ {2} (\mathbf {w}; \mathbf {x}) | y = 1 \right] + \mathbb {E} \left[ h ^ {2} (\mathbf {w}; \mathbf {x} ^ {\prime}) | y ^ {\prime} = - 1 \right] - 2 \mathbb {E} \left[ h (\mathbf {w}; \mathbf {x}) | y = 1 \right] + 2 \mathbb {E} \left[ h (\mathbf {w}; \mathbf {x} ^ {\prime}) | y ^ {\prime} = - 1 \right] \\ - 2 \left(\mathbb {E} [ h (\mathbf {w}; \mathbf {x}) | y = 1 ]\right) \left(\mathbb {E} [ h (\mathbf {w}; \mathbf {x} ^ {\prime}) | y ^ {\prime} = - 1 ]\right) \\ = 1 + \mathbb {E} \left[ h ^ {2} (\mathbf {w}; \mathbf {x}) | y = 1 \right] - \left(\mathbb {E} \left[ h (\mathbf {w}; \mathbf {x}) | y = 1 \right]\right) ^ {2} + \mathbb {E} \left[ h ^ {2} (\mathbf {w}; \mathbf {x} ^ {\prime}) | y ^ {\prime} = - 1 \right] - \left(\mathbb {E} \left[ h (\mathbf {w}; \mathbf {x} ^ {\prime}) | y ^ {\prime} = - 1 \right]\right) ^ {2} \\ - 2 \mathbb {E} [ h (\mathbf {w}; \mathbf {x}) | y = 1 ] + 2 \mathbb {E} [ h (\mathbf {w}; \mathbf {x} ^ {\prime}) | y ^ {\prime} = - 1 ] + (\mathbb {E} [ h (\mathbf {w}; \mathbf {x}) | y = 1 ] - \mathbb {E} [ h (\mathbf {w}; \mathbf {x} ^ {\prime}) | y ^ {\prime} = - 1 ]) ^ {2} \\ = 1 + \min _ {(a, b) \in \mathbb {R} ^ {2}} \mathbb {E} \left[ (h (\mathbf {w}; \mathbf {x}) - a) ^ {2} | y = 1 \right] + \mathbb {E} \left[ (h (\mathbf {w}; \mathbf {x} ^ {\prime}) - b) ^ {2} | y ^ {\prime} = - 1 \right] - 2 \mathbb {E} \left[ h (\mathbf {w}; \mathbf {x}) | y = 1 \right] \\ + 2 \mathbb {E} [ h (\mathbf {w}; \mathbf {x} ^ {\prime}) | y ^ {\prime} = - 1 ] + \max _ 
{\alpha \in \mathbb {R}} \left[ 2 \alpha \left(\mathbb {E} [ h (\mathbf {w}; \mathbf {x} ^ {\prime}) | y ^ {\prime} = - 1 ] - \mathbb {E} [ h (\mathbf {w}; \mathbf {x}) | y = 1 ]\right) - \alpha^ {2} \right] \\ = 1 + \min _ {(a, b) \in \mathbb {R} ^ {2}} \max _ {\alpha \in \mathbb {R}} \mathbb {E} _ {\mathbf {z}} \left\{\frac {1}{p} (h (\mathbf {w}; \mathbf {x}) - a) ^ {2} \mathbb {I} _ {[ y = 1 ]} + \frac {1}{1 - p} (h (\mathbf {w}; \mathbf {x}) - b) ^ {2} \mathbb {I} _ {[ y = - 1 ]} \right. \\ \left. + 2 (1 + \alpha) \left(\frac {1}{1 - p} h (\mathbf {w}; \mathbf {x}) \mathbb {I} _ {[ y = - 1 ]} - \frac {1}{p} h (\mathbf {w}; \mathbf {x}) \mathbb {I} _ {[ y = 1 ]}\right) - \alpha^ {2} \right\} \\ = 1 + \frac {\min _ {(a , b) \in \mathbb {R} ^ {2}} \max _ {\alpha \in \mathbb {R}} \mathbb {E} _ {\mathbf {z}} [ F (\mathbf {w} , a , b , \alpha ; \mathbf {z}) ]}{p (1 - p)} = \mathrm {R H S}. \tag {7} \\ \end{array}
$$

Note that the optimal values of $a, b, \alpha$ are attained at $a^* = \mathbb{E}[h(\mathbf{w}; \mathbf{x}) | y = 1]$, $b^* = \mathbb{E}[h(\mathbf{w}; \mathbf{x}') | y' = -1]$, $\alpha^* = \mathbb{E}[h(\mathbf{w}; \mathbf{x}') | y' = -1] - \mathbb{E}[h(\mathbf{w}; \mathbf{x}) | y = 1]$.

# A.2 PROOF OF LEMMA 2

Proof. Define $\alpha_{*,k} = \arg \max_{\alpha} f(\bar{\mathbf{v}}_k, \alpha)$, $\mathbf{u} = (\mathbf{v}^\top, \alpha)^\top \in \mathbb{R}^{d+3}$, $\mathbf{u}_{*,k} = (\mathbf{v}_*^\top, \alpha_{*,k})^\top$, $\mathbf{u}_t^k = ((\mathbf{v}_t^k)^\top, \alpha_t^k)^\top$, $\mathbf{g}_t^k = \left(\nabla_{\mathbf{v}} f(\mathbf{v}_t^k, \alpha_t^k)^\top + \frac{1}{\gamma} (\mathbf{v}_t^k - \mathbf{v}_0^k)^\top, -\nabla_{\alpha} f(\mathbf{v}_t^k, \alpha_t^k)^\top\right)^\top$. 

$$
\begin{array}{l} \phi_ {k} (\bar {\mathbf {v}} _ {k}) - \min _ {\mathbf {v}} \phi_ {k} (\mathbf {v}) \stackrel {(a)} {=} \max _ {\alpha} \left[ f (\bar {\mathbf {v}} _ {k}, \alpha) + \frac {1}{2 \gamma} \| \bar {\mathbf {v}} _ {k} - \bar {\mathbf {v}} _ {k - 1} \| ^ {2} \right] - \min _ {\mathbf {v}} \max _ {\alpha} \left[ f (\mathbf {v}, \alpha) + \frac {1}{2 \gamma} \| \mathbf {v} - \bar {\mathbf {v}} _ {k - 1} \| ^ {2} \right] \\ \stackrel {(b)} {\leq} \left[ f (\bar {\mathbf {v}} _ {k}, \alpha_ {*, k}) + \frac {1}{2 \gamma} \| \bar {\mathbf {v}} _ {k} - \bar {\mathbf {v}} _ {k - 1} \| ^ {2} \right] - \left[ f (\mathbf {s} _ {k}, \bar {\alpha} _ {k}) + \frac {1}{2 \gamma} \| \mathbf {s} _ {k} - \bar {\mathbf {v}} _ {k - 1} \| ^ {2} \right] \\ \stackrel {(c)} {\leq} \frac {\| \bar {\mathbf {v}} _ {k - 1} - \mathbf {s} _ {k} \| ^ {2}}{2 \eta_ {k} T _ {k}} + \frac {\| \bar {\alpha} _ {k - 1} - \alpha_ {* , k} \| ^ {2}}{2 \eta_ {k} T _ {k}} + \eta_ {k} G ^ {2} + \frac {\sum_ {t = 0} ^ {T _ {k} - 1} (\mathbf {u} _ {t} ^ {k} - \mathbf {u} _ {* , k}) ^ {\top} (\mathbf {g} _ {t} ^ {k} - \hat {\mathbf {g}} _ {t} ^ {k})}{T _ {k}}, \\ \end{array}
$$

where (a) comes from the definition of $\phi_{k}$, (b) holds because $\min_{\mathbf{v}} \max_{\alpha} \left[f(\mathbf{v}, \alpha) + \frac{1}{2\gamma} \| \mathbf{v} - \bar{\mathbf{v}}_{k-1} \|^2\right] \geq f(\mathbf{s}_k, \bar{\alpha}_k) + \frac{1}{2\gamma} \| \mathbf{s}_k - \bar{\mathbf{v}}_{k-1} \|^2$, and (c) comes from the standard analysis of the primal-dual stochastic gradient method.

Let $\mathbb{E}_{k - 1}$ denote the conditional expectation conditioning on all the stochastic events until $\bar{\mathbf{v}}_{k - 1}$ is generated. 
Taking $\mathbb{E}_{k - 1}$ on both sides and noting that $\hat{\mathbf{g}}_t^k$ is an unbiased estimator of $\mathbf{g}_t^k$ for all $t,k$, we have

$$
\mathbb {E} _ {k - 1} \left[ \phi_ {k} (\bar {\mathbf {v}} _ {k}) - \underset {\mathbf {v}} {\min} \phi_ {k} (\mathbf {v}) \right] \leq \frac {\| \bar {\mathbf {v}} _ {k - 1} - \mathbf {s} _ {k} \| ^ {2}}{2 \eta_ {k} T _ {k}} + \frac {\mathbb {E} _ {k - 1} \| \bar {\alpha} _ {k - 1} - \alpha_ {*, k} \| ^ {2}}{2 \eta_ {k} T _ {k}} + \eta_ {k} G ^ {2} + \mathbf {I},
$$

where

$$
\mathbf {I} = \mathbb {E} _ {k - 1} \left[ \frac {\sum_ {t = 0} ^ {T _ {k} - 1} \left(\alpha_ {t} ^ {k} - \alpha_ {* , k}\right) \left(- \nabla_ {\alpha} f \left(\mathbf {v} _ {t} ^ {k} , \alpha_ {t} ^ {k}\right) - \left(- \nabla_ {\alpha} F \left(\mathbf {v} _ {t} ^ {k} , \alpha_ {t} ^ {k} ; \xi_ {t} ^ {k}\right)\right)\right)}{T _ {k}} \right].
$$

Define $\widetilde{\alpha}_0^k = \alpha_0^k$ and

$$
\widetilde {\alpha} _ {t + 1} ^ {k} = \arg \min _ {\alpha} \left(- \nabla_ {\alpha} f \left(\mathbf {v} _ {t} ^ {k}, \alpha_ {t} ^ {k}\right) - \left(- \nabla_ {\alpha} F \left(\mathbf {v} _ {t} ^ {k}, \alpha_ {t} ^ {k}; \xi_ {t} ^ {k}\right)\right)\right) \alpha + \frac {1}{2 \eta_ {k}} \left(\alpha - \widetilde {\alpha} _ {t} ^ {k}\right) ^ {2}. 

$$

By the first-order optimality condition, we have

$$
\begin{array}{l} \left(\widetilde {\alpha} _ {t} ^ {k} - \alpha_ {*, k}\right) \left(- \nabla_ {\alpha} f \left(\mathbf {v} _ {t} ^ {k}, \alpha_ {t} ^ {k}\right) - \left(- \nabla_ {\alpha} F \left(\mathbf {v} _ {t} ^ {k}, \alpha_ {t} ^ {k}; \xi_ {t} ^ {k}\right)\right)\right) \\ \leq \frac {(\widetilde {\alpha} _ {t} ^ {k} - \alpha_ {*, k}) ^ {2} - (\widetilde {\alpha} _ {t + 1} ^ {k} - \alpha_ {*, k}) ^ {2}}{2 \eta_ {k}} + \frac {\eta_ {k}}{2} \left(- \nabla_ {\alpha} f \left(\mathbf {v} _ {t} ^ {k}, \alpha_ {t} ^ {k}\right) - \left(- \nabla_ {\alpha} F \left(\mathbf {v} _ {t} ^ {k}, \alpha_ {t} ^ {k}; \xi_ {t} ^ {k}\right)\right)\right) ^ {2} \tag {8} \\ \end{array}
$$

Note that

$$
\begin{array}{l} \mathbf {I} = \mathbb {E} _ {k - 1} \left[ \frac {\sum_ {t = 0} ^ {T _ {k} - 1} \left(\alpha_ {t} ^ {k} - \widetilde {\alpha} _ {t} ^ {k} + \widetilde {\alpha} _ {t} ^ {k} - \alpha_ {* , k}\right) \left(- \nabla_ {\alpha} f \left(\mathbf {v} _ {t} ^ {k} , \alpha_ {t} ^ {k}\right) - \left(- \nabla_ {\alpha} F \left(\mathbf {v} _ {t} ^ {k} , \alpha_ {t} ^ {k} ; \xi_ {t} ^ {k}\right)\right)\right)}{T _ {k}} \right] \\ = \mathbb {E} _ {k - 1} \left[ \frac {\sum_ {t = 0} ^ {T _ {k} - 1} (\widetilde {\alpha} _ {t} ^ {k} - \alpha_ {*, k}) \left(- \nabla_ {\alpha} f (\mathbf {v} _ {t} ^ {k} , \alpha_ {t} ^ {k}) - (- \nabla_ {\alpha} F (\mathbf {v} _ {t} ^ {k} , \alpha_ {t} ^ {k} ; \xi_ {t} ^ {k}))\right)}{T _ {k}} \right] \\ \leq \frac {\mathbb {E} _ {k - 1} \left(\widetilde {\alpha} _ {0} ^ {k} - \alpha_ {*, k}\right) ^ {2}}{2 \eta_ {k} T _ {k}} + \eta_ {k} G ^ {2} = \frac {\mathbb {E} _ {k - 1} \left(\bar {\alpha} _ {k - 1} - \alpha_ {*, k}\right) ^ {2}}{2 \eta_ {k} T _ {k}} + \eta_ {k} G ^ {2} \\ \end{array}
$$

where the first inequality holds due to (8). 
Hence we have

$$
\mathbb{E}_{k-1}\left[\phi_k(\bar{\mathbf{v}}_k) - \underset{\mathbf{v}}{\min}\,\phi_k(\mathbf{v})\right] \leq \frac{\|\bar{\mathbf{v}}_{k-1} - \mathbf{s}_k\|^2}{2\eta_k T_k} + \frac{\mathbb{E}_{k-1}\|\bar{\alpha}_{k-1} - \alpha_{*,k}\|^2}{\eta_k T_k} + 2\eta_k G^2.
$$

Define $\mathbf{x}_{j:j+m_{k-1}-1} = (\mathbf{x}_j, \ldots, \mathbf{x}_{j+m_{k-1}-1})$, $y_{j:j+m_{k-1}-1} = (y_j, \ldots, y_{j+m_{k-1}-1})$, and $\tilde{f}(\mathbf{x}_{j:j+m_{k-1}-1}, y_{j:j+m_{k-1}-1}) = \frac{\sum_{i=j}^{j+m_{k-1}-1} h(\bar{\mathbf{w}}_{k-1}; \mathbf{x}_i) \mathbb{I}_{y_i=y}}{\sum_{i=j}^{j+m_{k-1}-1} \mathbb{I}_{y_i=y}} - \mathbb{E}_{\mathbf{x}}[h(\bar{\mathbf{w}}_{k-1}; \mathbf{x})|y]$. Note that $0 \leq h \leq 1$. Then we know that

$$
\begin{array}{l} \mathbb{E}_{\mathbf{x}_{j:j+m_{k-1}-1}}\big(\tilde{f}^2\big(\mathbf{x}_{j:j+m_{k-1}-1}, y_{j:j+m_{k-1}-1}\big) \big| y_{j:j+m_{k-1}-1}\big) \\ \leq \frac{\sigma^2}{\sum_{i=j}^{j+m_{k-1}-1} \mathbb{I}_{y_i=y}} \cdot \mathbb{I}_{\left(\sum_{i=j}^{j+m_{k-1}-1} \mathbb{I}_{y_i=y} > 0\right)} + 1 \cdot \mathbb{I}_{\left(\sum_{i=j}^{j+m_{k-1}-1} \mathbb{I}_{y_i=y} = 0\right)}. \\ \end{array}
$$

Hence

$$
\begin{array}{l} \mathbb{E}_{k-1}\left[\tilde{f}^2(\mathbf{x}_j, \dots, \mathbf{x}_{j+m_{k-1}-1}, y_j, \dots, y_{j+m_{k-1}-1})\right] \\ = \mathbb{E}_{y_{j:j+m_{k-1}-1}}\left[\mathbb{E}_{\mathbf{x}_{j:j+m_{k-1}-1}}\left(\tilde{f}^2\left(\mathbf{x}_{j:j+m_{k-1}-1}, y_{j:j+m_{k-1}-1}\right) \mid y_{j:j+m_{k-1}-1}\right)\right] \\ \leq \mathbb{E}_{y_{j:j+m_{k-1}-1}}\left[\frac{\sigma^2}{\sum_{i=j}^{j+m_{k-1}-1} \mathbb{I}_{y_i=y}} \cdot \mathbb{I}_{\left(\sum_{i=j}^{j+m_{k-1}-1} \mathbb{I}_{y_i=y} > 0\right)} + 1 \cdot \mathbb{I}_{\left(\sum_{i=j}^{j+m_{k-1}-1} \mathbb{I}_{y_i=y} = 0\right)}\right] \\ \leq \frac{\sigma^2}{m_{k-1} \Pr(y_i = y)} + (1 - \Pr(y_i = y))^{m_{k-1}}. \\ \end{array}
$$

Hence we have

$$
\begin{array}{l} \mathbb{E}_{k-1}\left\|\bar{\alpha}_{k-1} - \alpha_{*,k-1}\right\|^2 \\ = \mathbb{E}_{k-1}\left[\frac{\sum_{i=j}^{j+m_{k-1}-1} h(\bar{\mathbf{w}}_{k-1}; \mathbf{x}_i) \mathbb{I}_{y_i=-1}}{\sum_{i=j}^{j+m_{k-1}-1} \mathbb{I}_{y_i=-1}} - \mathbb{E}_{\mathbf{x}}[h(\bar{\mathbf{w}}_{k-1}; \mathbf{x})|y=-1] \right. \\ \left. + \mathbb{E}_{\mathbf{x}}[h(\bar{\mathbf{w}}_{k-1}; \mathbf{x})|y=1] - \frac{\sum_{i=j}^{j+m_{k-1}-1} h\left(\bar{\mathbf{w}}_{k-1}; \mathbf{x}_i\right) \mathbb{I}_{y_i=1}}{\sum_{i=j}^{j+m_{k-1}-1} \mathbb{I}_{y_i=1}}\right]^2 \\ \leq \frac{2\sigma^2}{m_{k-1} \Pr(y_i=-1)} + 2(1 - \Pr(y_i=-1))^{m_{k-1}} + \frac{2\sigma^2}{m_{k-1} \Pr(y_i=1)} + 2(1 - \Pr(y_i=1))^{m_{k-1}} \\ = \frac{2\sigma^2}{m_{k-1} p(1-p)} + 2p^{m_{k-1}} + 2(1-p)^{m_{k-1}} \leq 2\left(\frac{\sigma^2}{m_{k-1} p(1-p)} + 2(\max(p, 1-p))^{m_{k-1}}\right) \\ \stackrel{(a)}{\leq} 2\left(\frac{\sigma^2}{m_{k-1} p(1-p)} + \frac{C}{m_{k-1}}\right) \leq \frac{2(\sigma^2 + C)}{m_{k-1} p(1-p)}.
\\ \end{array}
$$

where $C = \frac{2}{\ln(\frac{1}{\max(p, 1 - p)})} \max(p, 1 - p)^{\frac{1}{\ln(1 / \max(p, 1 - p))}}$, and (a) holds since the function $x \max(p, 1 - p)^x$ achieves its maximum at $x = 1 / \ln(1 / \max(p, 1 - p))$.

By the update of $\bar{\alpha}_{k-1}$, the $2\tilde{L}$-Lipschitz continuity of $\mathbb{E}[h(\mathbf{w};\mathbf{x})|y = -1] - \mathbb{E}[h(\mathbf{w};\mathbf{x})|y = 1]$, and noting that $\alpha_{*,k} = \mathbb{E}[h(\bar{\mathbf{w}}_k;\mathbf{x})|y = -1] - \mathbb{E}[h(\bar{\mathbf{w}}_k;\mathbf{x})|y = 1]$, we have

$$
\begin{array}{l} \mathbb{E}_{k-1}\left\|\bar{\alpha}_{k-1} - \alpha_{*,k}\right\|^2 = \mathbb{E}_{k-1}\left\|\bar{\alpha}_{k-1} - \alpha_{*,k-1} + \alpha_{*,k-1} - \alpha_{*,k}\right\|^2 \\ \leq \mathbb{E}_{k-1}\left(2\|\bar{\alpha}_{k-1} - \alpha_{*,k-1}\|^2 + 2\|\alpha_{*,k-1} - \alpha_{*,k}\|^2\right) \leq \frac{4(\sigma^2 + C)}{m_{k-1} p(1-p)} + 8\tilde{L}^2 \mathbb{E}_{k-1}\|\bar{\mathbf{v}}_{k-1} - \bar{\mathbf{v}}_k\|^2. \\ \end{array}
$$

Taking $m_{k-1} \geq \frac{2(\sigma^2 + C)}{p(1-p)\eta_k^2 G^2 T_k}$, we have

$$
\mathbb{E}_{k-1}\left[\phi_k(\bar{\mathbf{v}}_k) - \min_{\mathbf{v}} \phi_k(\mathbf{v})\right] \leq \frac{\|\bar{\mathbf{v}}_{k-1} - \mathbf{s}_k\|^2 + 16\tilde{L}^2 \mathbb{E}_{k-1}\|\bar{\mathbf{v}}_{k-1} - \bar{\mathbf{v}}_k\|^2}{2\eta_k T_k} + 4\eta_k G^2.
$$

# A.3 PROOF OF THEOREM 2

Proof. Define $\phi_k(\mathbf{v}) = \phi(\mathbf{v}) + \frac{1}{2\gamma} \|\mathbf{v} - \bar{\mathbf{v}}_{k-1}\|^2$. We can see that $\phi_k(\mathbf{v})$ is a convex and smooth function since $\gamma \leq 1/L$.
The smoothness parameter of $\phi_k$ is $\hat{L} = L + \gamma^{-1}$. Define $\mathbf{s}_k = \arg\min_{\mathbf{v} \in \mathbb{R}^{d+2}} \phi_k(\mathbf{v})$. According to Theorem 2.1.5 of (Nesterov, 2013), we have

$$
\left\|\nabla\phi_k(\bar{\mathbf{v}}_k)\right\|^2 \leq 2\hat{L}\left(\phi_k(\bar{\mathbf{v}}_k) - \phi_k(\mathbf{s}_k)\right). \tag{9}
$$

Combining (9) with Lemma 2 yields

$$
\mathbb{E}_{k-1}\|\nabla\phi_k(\bar{\mathbf{v}}_k)\|^2 \leq 2\hat{L}\left(\frac{\|\bar{\mathbf{v}}_{k-1} - \mathbf{s}_k\|^2 + 16\tilde{L}^2 \mathbb{E}_{k-1}\|\bar{\mathbf{v}}_{k-1} - \bar{\mathbf{v}}_k\|^2}{2\eta_k T_k} + 4\eta_k G^2\right). \tag{10}
$$

Since $\phi_k(\bar{\mathbf{v}})$ is $(\gamma^{-1} - L)$-strongly convex and $\gamma = \frac{1}{2L}$, we have

$$
\phi_k\left(\bar{\mathbf{v}}_{k-1}\right) \geq \phi_k\left(\mathbf{s}_k\right) + \frac{L}{2}\left\|\bar{\mathbf{v}}_{k-1} - \mathbf{s}_k\right\|^2. \tag{11}
$$

Plugging $\mathbf{s}_k$ into Lemma 2 and combining (11) yield

$$
\begin{array}{l} \mathbb{E}_{k-1}\left[\phi(\bar{\mathbf{v}}_k) + L\|\bar{\mathbf{v}}_k - \bar{\mathbf{v}}_{k-1}\|^2\right] \\ \leq \phi_k(\bar{\mathbf{v}}_{k-1}) - \frac{L}{2}\|\bar{\mathbf{v}}_{k-1} - \mathbf{s}_k\|^2 + \frac{\|\bar{\mathbf{v}}_{k-1} - \mathbf{s}_k\|^2 + 16\tilde{L}^2 \mathbb{E}_{k-1}\|\bar{\mathbf{v}}_{k-1} - \bar{\mathbf{v}}_k\|^2}{2\eta_k T_k} + 4\eta_k G^2.
\\ \end{array} +$$ + +By using $\eta_k T_k L \geq \max(2, 16\tilde{L}^2)$ , rearranging the terms, and noting that $\phi_k(\bar{\mathbf{v}}_{k-1}) = \phi(\bar{\mathbf{v}}_{k-1})$ , we have + +$$ +\frac {\left\| \bar {\mathbf {v}} _ {k - 1} - \mathbf {s} _ {k} \right\| ^ {2} + 1 6 \tilde {L} ^ {2} \mathbb {E} _ {k - 1} \left\| \bar {\mathbf {v}} _ {k - 1} - \bar {\mathbf {v}} _ {k} \right\| ^ {2}}{2 \eta_ {k} T _ {k}} \leq \phi (\bar {\mathbf {v}} _ {k - 1}) - \mathbb {E} _ {k - 1} [ \phi (\bar {\mathbf {v}} _ {k}) ] + 4 \eta_ {k} G ^ {2}. \tag {12} +$$ + +Combining (12) and (10) yields + +$$ +\left. \mathbb {E} _ {k - 1} \| \nabla \phi_ {k} (\bar {\mathbf {v}} _ {k}) \| ^ {2} \leq 6 L \left(\phi (\bar {\mathbf {v}} _ {k - 1}) - \mathbb {E} _ {k - 1} [ \phi (\bar {\mathbf {v}} _ {k}) ] + 8 \eta_ {k} G ^ {2}\right). \right. \tag {13} +$$ + +Taking expectation on both sides over all randomness until $\bar{\mathbf{v}}_{k - 1}$ is generated and by the tower property, we have + +$$ +\mathbb {E} \| \nabla \phi_ {k} (\bar {\mathbf {v}} _ {k}) \| ^ {2} \leq 6 L \left(\mathbb {E} \left[ \phi (\bar {\mathbf {v}} _ {k - 1}) - \phi (\mathbf {v} _ {*}) \right] - \mathbb {E} \left[ \phi (\bar {\mathbf {v}} _ {k}) - \phi (\mathbf {v} _ {*}) \right] + 8 \eta_ {k} G ^ {2}\right). 
\tag {14}
$$

Note that $\phi(\mathbf{v})$ is $L$-smooth and hence is $L$-weakly convex, so we have

$$
\begin{array}{l} \phi(\bar{\mathbf{v}}_{k-1}) \geq \phi(\bar{\mathbf{v}}_k) + \langle\nabla\phi(\bar{\mathbf{v}}_k), \bar{\mathbf{v}}_{k-1} - \bar{\mathbf{v}}_k\rangle - \frac{L}{2}\|\bar{\mathbf{v}}_{k-1} - \bar{\mathbf{v}}_k\|^2 \\ = \phi(\bar{\mathbf{v}}_k) + \left\langle\nabla\phi(\bar{\mathbf{v}}_k) + 2L(\bar{\mathbf{v}}_k - \bar{\mathbf{v}}_{k-1}), \bar{\mathbf{v}}_{k-1} - \bar{\mathbf{v}}_k\right\rangle + \frac{3}{2}L\|\bar{\mathbf{v}}_{k-1} - \bar{\mathbf{v}}_k\|^2 \\ \stackrel{(a)}{=} \phi(\bar{\mathbf{v}}_k) + \left\langle\nabla\phi_k(\bar{\mathbf{v}}_k), \bar{\mathbf{v}}_{k-1} - \bar{\mathbf{v}}_k\right\rangle + \frac{3}{2}L\|\bar{\mathbf{v}}_{k-1} - \bar{\mathbf{v}}_k\|^2 \tag{15} \\ \stackrel{(b)}{=} \phi(\bar{\mathbf{v}}_k) - \frac{1}{2L}\left\langle\nabla\phi_k(\bar{\mathbf{v}}_k), \nabla\phi_k(\bar{\mathbf{v}}_k) - \nabla\phi(\bar{\mathbf{v}}_k)\right\rangle + \frac{3}{8L}\|\nabla\phi_k(\bar{\mathbf{v}}_k) - \nabla\phi(\bar{\mathbf{v}}_k)\|^2 \\ = \phi(\bar{\mathbf{v}}_k) - \frac{1}{8L}\|\nabla\phi_k(\bar{\mathbf{v}}_k)\|^2 - \frac{1}{4L}\left\langle\nabla\phi_k(\bar{\mathbf{v}}_k), \nabla\phi(\bar{\mathbf{v}}_k)\right\rangle + \frac{3}{8L}\|\nabla\phi(\bar{\mathbf{v}}_k)\|^2, \\ \end{array}
$$

where (a) and (b) hold by the definition of $\phi_k$.

Rearranging the terms in (15) yields

$$
\begin{array}{l} \phi(\bar{\mathbf{v}}_k) - \phi(\bar{\mathbf{v}}_{k-1}) \leq \frac{1}{8L}\|\nabla\phi_k(\bar{\mathbf{v}}_k)\|^2 + \frac{1}{4L}\left\langle\nabla\phi_k(\bar{\mathbf{v}}_k), \nabla\phi(\bar{\mathbf{v}}_k)\right\rangle - \frac{3}{8L}\|\nabla\phi(\bar{\mathbf{v}}_k)\|^2 \\ \stackrel{(a)}{\leq} \frac{1}{8L}\|\nabla\phi_k(\bar{\mathbf{v}}_k)\|^2 + \frac{1}{8L}\left(\|\nabla\phi_k(\bar{\mathbf{v}}_k)\|^2 + \|\nabla\phi(\bar{\mathbf{v}}_k)\|^2\right) - \frac{3}{8L}\|\nabla\phi(\bar{\mathbf{v}}_k)\|^2 \\ = \frac{1}{4L}\|\nabla\phi_k(\bar{\mathbf{v}}_k)\|^2 - \frac{1}{4L}\|\nabla\phi(\bar{\mathbf{v}}_k)\|^2 \\ \stackrel{(b)}{\leq} \frac{1}{4L}\|\nabla\phi_k(\bar{\mathbf{v}}_k)\|^2 - \frac{\mu}{2L}\left(\phi(\bar{\mathbf{v}}_k) - \phi(\mathbf{v}_*)\right), \tag{16} \\ \end{array}
$$

where (a) holds by using $\langle\mathbf{a},\mathbf{b}\rangle \leq \frac{1}{2}(\|\mathbf{a}\|^2 + \|\mathbf{b}\|^2)$, and (b) holds by the PL property of $\phi$.

Define $\Delta_k = \phi(\bar{\mathbf{v}}_k) - \phi(\mathbf{v}_*)$. Combining (14) and (16), we can see that

$$
\mathbb{E}[\Delta_k - \Delta_{k-1}] \leq \frac{3}{2}\left(\mathbb{E}[\Delta_{k-1} - \Delta_k] + 8\eta_k G^2\right) - \frac{\mu}{2L}\mathbb{E}[\Delta_k],
$$

which implies that

$$
\left(\frac{5}{2} + \frac{\mu}{2L}\right)\mathbb{E}[\Delta_k] \leq \frac{5}{2}\mathbb{E}[\Delta_{k-1}] + 12\eta_k G^2.
+$$ + +As a result, we have + +$$ +\begin{array}{l} \mathbb {E} \left[ \Delta_ {k} \right] \leq \frac {5}{5 + \mu / L} \mathbb {E} \left[ \Delta_ {k - 1} \right] + \frac {2 4 \eta_ {k} G ^ {2}}{5 + \mu / L} = \left(1 - \frac {\mu / L}{5 + \mu / L}\right) \left(\mathbb {E} \left[ \Delta_ {k - 1} \right] + \frac {2 4}{5} \eta_ {k} G ^ {2}\right) \\ \leq \left(1 - \frac {\mu / L}{5 + \mu / L}\right) ^ {k} \mathbb {E} [ \Delta_ {0} ] + \frac {2 4}{5} G ^ {2} \sum_ {j = 1} ^ {k} \eta_ {j} \left(1 - \frac {\mu / L}{5 + \mu / L}\right) ^ {k + 1 - j}. \\ \end{array} +$$ + +By setting $\eta_{k} = \eta_{0}\exp \left(-\left(k - 1\right)\frac{\mu / L}{5 + \mu / L}\right)$ we have + +$$ +\begin{array}{l} \mathbb {E} [ \Delta_ {k} ] \leq \left(1 - \frac {\mu / L}{5 + \mu / L}\right) ^ {k} \mathbb {E} [ \Delta_ {0} ] + \frac {2 4}{5} G ^ {2} \eta_ {0} \sum_ {j = 1} ^ {k} \exp \left(- k \frac {\mu / L}{5 + \mu / L}\right) \\ \leq \exp \left(- k \frac {\mu / L}{5 + \mu / L}\right) \Delta_ {0} + \frac {2 4}{5} G ^ {2} \eta_ {0} k \exp \left(- k \frac {\mu / L}{5 + \mu / L}\right). \\ \end{array} +$$ + +To achieve $\mathbb{E}[\Delta_k] \leq \epsilon$ , it suffices to let $K$ satisfy $\exp \left(-K \frac{\mu / L}{5 + \mu / L}\right) \leq \min \left(\frac{\epsilon}{2\Delta_0}, \frac{5\epsilon}{48KG^2\eta_0}\right)$ , i.e. $K \geq \left(\frac{5L}{\mu} + 1\right) \max \left(\log \frac{2\Delta_0}{\epsilon}, \log K + \log \frac{48G^2\eta_0}{5\epsilon}\right)$ . + +Since $\eta_k T_k L \geq \max(2, 16\tilde{L}^2)$ , by the setting of $\eta_k$ , we set $T_k = \frac{\max(2, 16\tilde{L}^2)}{L\eta_0} \exp\left((k - 1)\frac{\mu/L}{5 + \mu/L}\right)$ . 
Then the total iteration complexity is

$$
\sum_{k=1}^K T_k \leq \frac{\max(2, 16\tilde{L}^2)}{L\eta_0} \cdot \frac{\exp\left(K\frac{\mu/L}{5+\mu/L}\right) - 1}{\exp\left(\frac{\mu/L}{5+\mu/L}\right) - 1} = \widetilde{O}\left(\frac{KG^2}{\mu\epsilon}\right) = \widetilde{O}\left(\frac{LG^2}{\mu^2\epsilon}\right).
$$

The required number of samples is

$$
\sum_{k=1}^K m_k = \frac{2(\sigma^2 + C)L}{p(1-p)G^2\eta_0\max(2, 16\tilde{L}^2)} \cdot \frac{\exp\left(K\frac{\mu/L}{5+\mu/L}\right) - 1}{\exp\left(\frac{\mu/L}{5+\mu/L}\right) - 1} = \widetilde{O}\left(\frac{L^3\sigma^2}{\mu^2\epsilon}\right).
$$

# A.4 PROOF OF LEMMA 3

Proof. Define $\alpha_{*,k} = \arg\max_{\alpha} f(\bar{\mathbf{v}}_k, \alpha)$, $\mathbf{u} = (\mathbf{v}^\top, \alpha)^\top \in \mathbb{R}^{d+3}$, $\mathbf{u}_{*,k} = (\mathbf{v}_*^\top, \alpha_{*,k})^\top$, $\mathbf{u}_t^k = ((\mathbf{v}_t^k)^\top, \alpha_t^k)^\top$.
+ +$$ +\begin{array}{l} \phi_ {k} (\bar {\mathbf {v}} _ {k}) - \min _ {\mathbf {v}} \phi_ {k} (\mathbf {v}) \stackrel {(a)} {=} \max _ {\alpha} \left[ f (\bar {\mathbf {v}} _ {k}, \alpha) + \frac {1}{2 \gamma} \| \bar {\mathbf {v}} _ {k} - \bar {\mathbf {v}} _ {k - 1} \| ^ {2} \right] - \min _ {\mathbf {v}} \max _ {\alpha} \left[ f (\mathbf {v}, \alpha) + \frac {1}{2 \gamma} \| \mathbf {v} - \bar {\mathbf {v}} _ {k - 1} \| ^ {2} \right] \\ \stackrel {(b)} {\leq} \left[ f (\bar {\mathbf {v}} _ {k}, \alpha_ {*}, k) + \frac {1}{2 \gamma} \| \bar {\mathbf {v}} _ {k} - \bar {\mathbf {v}} _ {k - 1} \| ^ {2} \right] - \left[ f (\mathbf {s} _ {k}, \bar {\alpha} _ {k}) + \frac {1}{2 \gamma} \| \mathbf {s} _ {k} - \bar {\mathbf {v}} _ {k - 1} \| ^ {2} \right] \\ \stackrel {(c)} {\leq} \frac {1}{T _ {k}} \sum_ {t = 1} ^ {T _ {k}} \left[ f (\mathbf {v} _ {t} ^ {k}, \alpha_ {*}, k) + \frac {1}{2 \gamma} \| \mathbf {v} _ {t} ^ {k} - \bar {\mathbf {v}} _ {k - 1} \| ^ {2} - \left(f (\mathbf {s} _ {k}, \alpha_ {t} ^ {k}) + \frac {1}{2 \gamma} \| \mathbf {s} _ {k} - \bar {\mathbf {v}} _ {k - 1} \| ^ {2}\right) \right] \\ = \frac {1}{T _ {k}} \sum_ {t = 1} ^ {T _ {k}} \left[ \left(f \left(\mathbf {v} _ {t} ^ {k}, \alpha_ {* , k}\right) + \frac {1}{2 \gamma} \left\| \mathbf {v} _ {t} ^ {k} - \bar {\mathbf {v}} _ {k - 1} \right\| ^ {2}\right) - \left(f \left(\mathbf {v} _ {t} ^ {k}, \alpha_ {t} ^ {k}\right) + \frac {1}{2 \gamma} \left\| \mathbf {v} _ {t} ^ {k} - \bar {\mathbf {v}} _ {k - 1} \right\| ^ {2}\right) \right. \\ \left. 
+ \left(f \left(\mathbf {v} _ {t} ^ {k}, \alpha_ {t} ^ {k}\right) + \frac {1}{2 \gamma} \left\| \mathbf {v} _ {t} ^ {k} - \bar {\mathbf {v}} _ {k - 1} \right\| ^ {2}\right) - \left(f \left(\mathbf {s} _ {k}, \alpha_ {t} ^ {k}\right) + \frac {1}{2 \gamma} \left\| \mathbf {s} _ {k} - \bar {\mathbf {v}} _ {k - 1} \right\| ^ {2}\right) \right] \\ \leq \frac {1}{T _ {k}} \sum_ {t = 1} ^ {T _ {k}} \left\langle \nabla_ {\mathbf {v}} \left(f (\mathbf {v} _ {t} ^ {k}, \alpha_ {t} ^ {k}) + \frac {1}{2 \gamma} \| \mathbf {v} _ {t} ^ {k} - \mathbf {v} _ {t} ^ {0} \| ^ {2}\right), \mathbf {v} _ {t} ^ {k} - \mathbf {s} _ {k} \right\rangle + \left\langle - \nabla_ {\alpha} \left(f (\mathbf {v} _ {t} ^ {k}, \alpha_ {t} ^ {k}) + \frac {1}{2 \gamma} \| \mathbf {v} _ {t} ^ {k} - \mathbf {v} _ {t} ^ {0} \| ^ {2}\right), \alpha_ {t} ^ {k} - \alpha_ {* , k} \right\rangle \\ = \frac {\sum_ {t = 1} ^ {T _ {k}} \left\langle \mathbf {u} _ {t} ^ {k} - \mathbf {u} _ {*, k} , \hat {\mathbf {g}} _ {t} ^ {k} \right\rangle}{T _ {k}} + \frac {\sum_ {t = 1} ^ {T _ {k}} \left\langle \mathbf {u} _ {t} ^ {k} - \mathbf {u} _ {*, k} , \mathbf {g} _ {t} ^ {k} - \hat {\mathbf {g}} _ {t} ^ {k} \right\rangle}{T _ {k}} \\ = \mathbf {I} + \mathbf {I I} \tag {17} \\ \end{array} +$$ + +where (a) comes from the definition of $\phi_{k}$ , (b) holds because $\min_{\mathbf{v}}\max_{\alpha}\left[f(\mathbf{v},\alpha) + \frac{1}{2\gamma}\|\mathbf{v} - \bar{\mathbf{v}}_{k-1}\|^2\right] \geq f(\mathbf{s}_k,\bar{\alpha}_k) + \frac{1}{2\gamma}\|\mathbf{s}_k - \bar{\mathbf{v}}_{k-1}\|^2$ , (c) holds by Jensen's inequality. + +Now we bound $\mathbf{I}$ and $\mathbf{II}$ separately. Define $\| \mathbf{u}\| _H = \sqrt{\mathbf{u}^\top H\mathbf{u}}$ $\psi_0^k (\mathbf{u}) = 0$ $\psi_{T_k}^{k,*}$ to be the conjugate of $\frac{1}{\eta_k}\psi_{T_k}^k$ , which is $\psi_{T_k}^{k,*}(\mathbf{g}) = \sup_{\mathbf{u}}\left\{\langle \mathbf{g},\mathbf{u}\rangle -\frac{1}{\eta_k}\psi_{T_k}^k (\mathbf{u})\right\}$ . 
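For a quadratic regularizer $\psi(\mathbf{u}) = \frac{1}{2}\mathbf{u}^\top H\mathbf{u}$ with $H \succ 0$, this conjugate has the closed form $\frac{\eta}{2}\mathbf{g}^\top H^{-1}\mathbf{g}$, which is consistent with the $\|\cdot\|_{(H_t^k)^{-1}}$ norms used later in the proof. A standalone scalar sanity check (the values of $\eta$ and $h$ below are arbitrary positive placeholders, not quantities from the paper):

```python
def conjugate_closed_form(g, eta, h):
    # sup_u { g*u - (h*u^2/2)/eta } is attained at u = eta*g/h
    return eta * g * g / (2 * h)

def conjugate_grid(g, eta, h, lo=-10.0, hi=10.0, steps=200_001):
    # brute-force the supremum on a fine grid as an independent check
    return max(g * u - (h * u * u / 2) / eta
               for u in (lo + (hi - lo) * i / (steps - 1) for i in range(steps)))

for g in (-1.5, 0.0, 2.0):
    assert abs(conjugate_grid(g, eta=0.5, h=2.0)
               - conjugate_closed_form(g, eta=0.5, h=2.0)) < 1e-6
print("conjugate closed form matches grid search")
```

The grid search is deliberately naive; it only serves to confirm the closed form on a few points.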
Note that

$$
\begin{array}{l} T_k \cdot \mathbf{I} = \sum_{t=1}^{T_k}\left\langle\hat{\mathbf{g}}_t^k, \mathbf{u}_t^k\right\rangle - \sum_{t=1}^{T_k}\left\langle\hat{\mathbf{g}}_t^k, \mathbf{u}_{*,k}\right\rangle - \frac{1}{\eta_k}\psi_{T_k}^k(\mathbf{u}_{*,k}) + \frac{1}{\eta_k}\psi_{T_k}^k(\mathbf{u}_{*,k}) \\ \leq \frac{1}{\eta_k}\psi_{T_k}^k\left(\mathbf{u}_{*,k}\right) + \sum_{t=1}^{T_k}\left\langle\hat{\mathbf{g}}_t^k, \mathbf{u}_t^k\right\rangle + \sup_{\mathbf{u}}\left\{\left\langle-\sum_{t=1}^{T_k}\hat{\mathbf{g}}_t^k, \mathbf{u}\right\rangle - \frac{1}{\eta_k}\psi_{T_k}^k\left(\mathbf{u}\right)\right\} \tag{18} \\ = \frac{1}{\eta_k}\psi_{T_k}^k(\mathbf{u}_{*,k}) + \sum_{t=1}^{T_k}\left\langle\hat{\mathbf{g}}_t^k, \mathbf{u}_t^k\right\rangle + \psi_{T_k}^{k,*}\left(-\sum_{t=1}^{T_k}\hat{\mathbf{g}}_t^k\right), \\ \end{array}
$$

where the last equality holds by the definition of $\psi_{T_k}^{k,*}$.

In addition, note that

$$
\begin{array}{l} \psi_{T_k}^{k,*}\left(-\sum_{t=1}^{T_k}\hat{\mathbf{g}}_t^k\right) \stackrel{(a)}{=} \left\langle-\sum_{t=1}^{T_k}\hat{\mathbf{g}}_t^k, \mathbf{u}_{T_k+1}^k\right\rangle - \frac{1}{\eta_k}\psi_{T_k}^k\left(\mathbf{u}_{T_k+1}^k\right) \stackrel{(b)}{\leq} \left\langle-\sum_{t=1}^{T_k}\hat{\mathbf{g}}_t^k, \mathbf{u}_{T_k+1}^k\right\rangle - \frac{1}{\eta_k}\psi_{T_k-1}^k\left(\mathbf{u}_{T_k+1}^k\right) \\ \leq \sup_{\mathbf{u}}\left\{\left\langle-\sum_{t=1}^{T_k}\hat{\mathbf{g}}_t^k, \mathbf{u}\right\rangle - \frac{1}{\eta_k}\psi_{T_k-1}^k(\mathbf{u})\right\} = \psi_{T_k-1}^{k,*}\left(-\sum_{t=1}^{T_k}\hat{\mathbf{g}}_t^k\right) \\ \stackrel{(c)}{\leq} \psi_{T_k-1}^{k,*}\left(-\sum_{t=1}^{T_k-1}\hat{\mathbf{g}}_t^k\right) + \left\langle-\hat{\mathbf{g}}_{T_k}^k, \nabla\psi_{T_k-1}^{k,*}\left(-\sum_{t=1}^{T_k-1}\hat{\mathbf{g}}_t^k\right)\right\rangle + \frac{\eta_k}{2}\|\hat{\mathbf{g}}_{T_k}^k\|_{\psi_{T_k-1}^{k,*}}^2, \tag{19} \\ \end{array}
$$

where (a) holds due to the update of Algorithm 3, (b) holds since $\psi_{t+1}^k(\mathbf{u}) \geq \psi_t^k(\mathbf{u})$, and (c) holds by the $\eta_k$-smoothness of $\psi_t^{k,*}$ with respect to $\|\cdot\|_{\psi_t^{k,*}} = \|\cdot\|_{(H_t^k)^{-1}}$.

By (19) and noting that $\nabla\psi_{T_k-1}^{k,*}\left(-\sum_{t=1}^{T_k-1}\hat{\mathbf{g}}_t^k\right) = \mathbf{u}_{T_k}^k$, we have

$$
\sum_{t=1}^{T_k}\left\langle\hat{\mathbf{g}}_t^k, \mathbf{u}_t^k\right\rangle + \psi_{T_k}^{k,*}\left(-\sum_{t=1}^{T_k}\hat{\mathbf{g}}_t^k\right) \leq \sum_{t=1}^{T
_ {k} - 1} \left\langle \hat {\mathbf {g}} _ {t} ^ {k}, \mathbf {u} _ {t} ^ {k} \right\rangle + \psi_ {T _ {k} - 1} ^ {k, *} \left(- \sum_ {t = 1} ^ {T _ {k} - 1} \hat {\mathbf {g}} _ {t} ^ {k}\right) + \frac {\eta_ {k}}{2} \| \hat {\mathbf {g}} _ {T _ {k}} \| _ {\psi_ {T _ {k} - 1} ^ {k, *}} ^ {2} \tag {20} +$$ + +Using (20) recursively and noting that $\psi_0^k (\mathbf{u}) = 0$ , we know that + +$$ +\sum_ {t = 1} ^ {T _ {k}} \left\langle \hat {\mathbf {g}} _ {t} ^ {k}, \mathbf {u} _ {t} ^ {k} \right\rangle + \psi_ {T _ {k}} ^ {k, *} \left(- \sum_ {t = 1} ^ {T _ {k}} \hat {\mathbf {g}} _ {t} ^ {k}\right) \leq \frac {\eta_ {k}}{2} \sum_ {t = 1} ^ {T _ {k}} \| \hat {\mathbf {g}} _ {t} ^ {k} \| _ {\psi_ {t - 1} ^ {k, *}} ^ {2} \tag {21} +$$ + +Combining (18) and (21), we have + +$$ +\mathbf {I} \leq \frac {1}{\eta_ {k} T _ {k}} \psi_ {T _ {k}} ^ {k} \left(\mathbf {u} _ {* , k}\right) + \frac {\eta_ {k}}{2 T _ {k}} \sum_ {t = 1} ^ {T _ {k}} \| \hat {\mathbf {g}} _ {t} ^ {k} \| _ {\psi_ {t - 1} ^ {k, *}} ^ {2} \tag {22} +$$ + +By Lemma 4 of (Duchi et al., 2011) and setting $\delta \geq \max_{t} \| \hat{\mathbf{g}}_t^k\|_\infty$ , we know that $\sum_{t=1}^{T_k} \| \hat{\mathbf{g}}_t^k\|_{\psi_{t-1}^{k,*}}^2 \leq 2\sum_{i=1}^{d+3} \| \hat{\mathbf{g}}_{1:T_k}^k\|_2$ , and hence + +$$ +\begin{array}{l} \mathbf {I} \leq \frac {1}{\eta_ {k} T _ {k}} \psi_ {T _ {k}} ^ {k} (\mathbf {u} _ {*}, k) + \frac {\eta_ {k}}{T _ {k}} \sum_ {i = 1} ^ {d + 3} \| \hat {\mathbf {g}} _ {1: T _ {k}} ^ {k} \| _ {2} \\ = \frac {\delta \| \mathbf {u} _ {1} ^ {k} - \mathbf {u} _ {*, k} \| _ {2} ^ {2}}{2 \eta_ {k} T _ {k}} + \frac {\left\langle \mathbf {u} _ {1} ^ {k} - \mathbf {u} _ {*, k} , \operatorname {d i a g} \left(s _ {T _ {k}} ^ {k}\right) \left(\mathbf {u} _ {1} ^ {k} - \mathbf {u} _ {*, k}\right) \right\rangle}{2 \eta_ {k} T _ {k}} + \frac {\eta_ {k}}{T _ {k}} \sum_ {i = 1} ^ {d} \| \hat {\mathbf {g}} _ {1: T _ {k}} ^ {k} \| _ {2} \tag {23} \\ \leq \frac {\delta + \max _ {i} \| \hat 
{\mathbf {g}} _ {1 : T _ {k} , i} ^ {k} \| _ {2}}{2 \eta_ {k} T _ {k}} \| \mathbf {u} _ {1} ^ {k} - \mathbf {u} _ {* , k} \| _ {2} ^ {2} + \frac {\eta_ {k}}{T _ {k}} \sum_ {i = 1} ^ {d + 3} \| \hat {\mathbf {g}} _ {1: T _ {k}} ^ {k} \| _ {2} \\ \end{array} +$$ + +Denote $\mathbb{E}_{k - 1}$ by taking the conditional expectation conditioning on filtration $\mathcal{F}_{k - 1}$ , where $\mathcal{F}_{k - 1}$ is the $\sigma$ -algebra generated by all random variables until $\bar{\mathbf{v}}_{k - 1}$ is generated. Taking $\mathbb{E}_{k - 1}$ on both sides of (17), and employing (23) yields + +$$ +\begin{array}{l} \mathbb {E} _ {k - 1} \left[ \phi_ {k} (\bar {\mathbf {v}} _ {k}) - \min _ {\mathbf {v}} \phi_ {k} (\mathbf {v}) \right] \\ \leq \mathbb {E} _ {k - 1} \left[ \frac {\delta + \max _ {i} \| \hat {\mathbf {g}} _ {1 : T _ {k} , i} ^ {k} \| _ {2}}{2 \eta_ {k} T _ {k}} \left(\| \bar {\mathbf {v}} _ {k - 1} - \mathbf {s} _ {k} \| _ {2} ^ {2} + \| \bar {\alpha} _ {k - 1} - \alpha_ {* , k} \| _ {2} ^ {2}\right) + \frac {\eta_ {k}}{T _ {k}} \sum_ {i = 1} ^ {d + 3} \| \hat {\mathbf {g}} _ {1: T _ {k}} ^ {k} \| _ {2} \right] + \mathbb {E} _ {k - 1} (\mathbf {I I}) \\ = \left(\| \bar {\mathbf {v}} _ {k - 1} - \mathbf {s} _ {k} \| _ {2} ^ {2}\right) \mathbb {E} _ {k - 1} \left(\frac {\delta + \max _ {i} \| \hat {\mathbf {g}} _ {1 : T _ {k} , i} ^ {k} \| _ {2}}{2 \eta_ {k} T _ {k}}\right) + \mathbb {E} _ {k - 1} \left(\frac {\delta + \max _ {i} \| \hat {\mathbf {g}} _ {1 : T _ {k} , i} ^ {k} \| _ {2}}{2 \eta_ {k} T _ {k}} \| \bar {\alpha} _ {k - 1} - \alpha_ {* , k} \| _ {2} ^ {2}\right) \\ + \mathbb {E} _ {k - 1} \left(\frac {\eta_ {k}}{T _ {k}} \sum_ {i = 1} ^ {d + 3} \| \hat {\mathbf {g}} _ {1: T _ {k}} ^ {k} \| _ {2}\right) + \mathbb {E} _ {k - 1} (\mathbf {I I}) \tag {24} \\ \end{array} +$$ + +where the equality holds since $\bar{\mathbf{v}}_{k - 1} - \mathbf{s}_k$ is measurable with respect to $\mathcal{F}_{k - 1}$ + +Note that + +$$ +\begin{array}{l} \mathbb {E} _ {k - 
1} \left(\frac {\delta + \max _ {i} \| \hat {\mathbf {g}} _ {1 : T _ {k} , i} ^ {k} \| _ {2}}{2 \eta_ {k} T _ {k}} \| \bar {\alpha} _ {k - 1} - \alpha_ {*, k} \| _ {2} ^ {2}\right) = \mathbb {E} _ {k - 1} \left(\frac {\delta + \max _ {i} \| \hat {\mathbf {g}} _ {1 : T _ {k} , i} ^ {k} \| _ {2}}{2 \eta_ {k} T _ {k}} \| \bar {\alpha} _ {k - 1} - \alpha_ {*, k - 1} + \alpha_ {*, k - 1} - \alpha_ {*, k} \| ^ {2}\right) \\ \leq \mathbb {E} _ {k - 1} \left(\frac {\delta + \max _ {i} \| \hat {\mathbf {g}} _ {1 : T _ {k} , i} ^ {k} \| _ {2}}{2 \eta_ {k} T _ {k}} \left(2 \| \bar {\alpha} _ {k - 1} - \alpha_ {* , k - 1} \| ^ {2} + 2 \| \alpha_ {* , k - 1} - \alpha_ {* , k} \| ^ {2}\right)\right) \\ \stackrel {(a)} {=} \mathbb {E} _ {k - 1} \left(\frac {\delta + \max _ {i} \| \hat {\mathbf {g}} _ {1 : T _ {k} , i} ^ {k} \| _ {2}}{2 \eta_ {k} T _ {k}}\right) \mathbb {E} _ {k - 1} \left(2 \| \bar {\alpha} _ {k - 1} - \alpha_ {*, k - 1} \| ^ {2}\right) + \mathbb {E} _ {k - 1} \left(\frac {\delta + \max _ {i} \| \hat {\mathbf {g}} _ {1 : T _ {k} , i} ^ {k} \| _ {2}}{2 \eta_ {k} T _ {k}} \cdot 2 \| \alpha_ {*, k - 1} - \alpha_ {*, k} \| ^ {2}\right) \\ \stackrel {(b)} {\leq} \mathbb {E} _ {k - 1} \left(\frac {\delta + \max _ {i} \| \hat {\mathbf {g}} _ {1 : T _ {k} , i} ^ {k} \| _ {2}}{2 \eta_ {k} T _ {k}}\right) \cdot \frac {4 (\sigma^ {2} + C)}{m _ {k - 1} p (1 - p)} + \mathbb {E} _ {k - 1} \left(\frac {\delta + \max _ {i} \| \hat {\mathbf {g}} _ {1 : T _ {k} , i} ^ {k} \| _ {2}}{2 \eta_ {k} T _ {k}} \cdot 2 \| \alpha_ {*, k - 1} - \alpha_ {*, k} \| ^ {2}\right) \\ \stackrel {(c)} {\leq} \mathbb {E} _ {k - 1} \left(\frac {\delta + \max _ {i} \| \hat {\mathbf {g}} _ {1 : T _ {k} , i} ^ {k} \| _ {2}}{2 \eta_ {k} T _ {k}}\right) \cdot \frac {4 (\sigma^ {2} + C)}{m _ {k - 1} p (1 - p)} + \mathbb {E} _ {k - 1} \left(\frac {\delta + \max _ {i} \| \hat {\mathbf {g}} _ {1 : T _ {k} , i} ^ {k} \| _ {2}}{2 \eta_ {k} T _ {k}} \cdot 8 \tilde {L} ^ {2} \| \bar {\mathbf {v}} _ {k - 1} - \bar 
{\mathbf {v}} _ {k} \| ^ {2}\right) \\ \end{array} +$$ + +where (a) holds because $\bar{\alpha}_{k - 1} - \alpha_{*,k - 1}$ and $\frac{\delta + \max_i\|\hat{\mathbf{g}}_{1:T_k,i}^k\|_2}{2\eta_kT_k}$ are independent conditioning on $\mathcal{F}_{k - 1}$ , (b) holds because of the update of $\bar{\alpha}_{k - 1}$ and $\alpha_{*,k} = \mathbb{E}\left[h(\bar{\mathbf{w}}_k;\mathbf{x})|y = -1\right] - \mathbb{E}\left[h(\bar{\mathbf{w}}_k;\mathbf{x})|y = 1\right]$ , (c) holds due to the $2\tilde{L}$ -Lipschitz continuity of $\mathbb{E}\left[h(\mathbf{w};\mathbf{x})|y = -1\right] - \mathbb{E}\left[h(\mathbf{w};\mathbf{x})|y = 1\right]$ . + +Taking $m_{k - 1}\geq \frac{2(\sigma^2 + C)}{p(1 - p)(d + 3)\eta_k^2}$ , then we have + +$$ +\begin{array}{l} \mathbb {E} _ {k - 1} \left[ \phi_ {k} (\bar {\mathbf {v}} _ {k}) - \min _ {\mathbf {v}} \phi_ {k} (\mathbf {v}) \right] \\ \leq \mathbb {E} _ {k - 1} \left(\frac {\delta + \max _ {i} \| \hat {\mathbf {g}} _ {1 : T _ {k} , i} ^ {k} \| _ {2}}{2 \eta_ {k} T _ {k}}\right) \| \bar {\mathbf {v}} _ {k - 1} - \mathbf {s} _ {k} \| ^ {2} + \mathbb {E} _ {k - 1} \left(\frac {\eta_ {k}}{T _ {k}} \sum_ {i = 1} ^ {d + 3} \| \hat {\mathbf {g}} _ {1: T _ {k}, i} ^ {k} \| _ {2}\right) + \mathbb {E} _ {k - 1} (\mathbf {I I}) \\ + \mathbb {E} _ {k - 1} \left(\frac {\delta + \max _ {i} \| \hat {\mathbf {g}} _ {1 : T _ {k} , i} ^ {k} \| _ {2}}{2 T _ {k}}\right) \cdot 2 \eta_ {k} (d + 3) + \mathbb {E} _ {k - 1} \left(\frac {\delta + \max _ {i} \| \hat {\mathbf {g}} _ {1 : T _ {k} , i} ^ {k} \| _ {2}}{2 \eta_ {k} T _ {k}} \cdot 8 \tilde {L} ^ {2} \| \bar {\mathbf {v}} _ {k - 1} - \bar {\mathbf {v}} _ {k} \| ^ {2}\right) \\ = \mathbb {E} _ {k - 1} \left(\frac {\delta + \max _ {i} \| \hat {\mathbf {g}} _ {1 : T _ {k} , i} ^ {k} \| _ {2}}{2 \eta_ {k} T _ {k}}\right) \| \bar {\mathbf {v}} _ {k - 1} - \mathbf {s} _ {k} \| ^ {2} + \mathbb {E} _ {k - 1} \left[ \frac {\eta_ {k}}{T _ {k}} \left(\sum_ {i = 1} ^ {d + 3} \| \hat {\mathbf {g}} _ {1: T _ {k}, i} ^ 
{k} \| _ {2} + (d + 3) \left(\delta + \max _ {i} \| \hat {\mathbf {g}} _ {1: T _ {k}, i} ^ {k} \| _ {2}\right)\right) \right] \\ + \mathbb {E} _ {k - 1} (\mathbf {I I}) + \mathbb {E} _ {k - 1} \left(\frac {\delta + \max _ {i} \| \hat {\mathbf {g}} _ {1 : T _ {k} , i} ^ {k} \| _ {2}}{2 \eta_ {k} T _ {k}} \cdot 8 \tilde {L} ^ {2} \| \bar {\mathbf {v}} _ {k - 1} - \bar {\mathbf {v}} _ {k} \| ^ {2}\right) \\ \end{array} +$$ + +Define $\widetilde{\alpha}_0^k = \alpha_0^k$ and + +$$ +\widetilde {\alpha} _ {t + 1} ^ {k} = \arg \min _ {\alpha} \left[ \frac {\eta_ {k}}{t} \sum_ {\tau = 1} ^ {t} \left(- \nabla_ {\alpha} f (\mathbf {v} _ {t} ^ {k}, \alpha_ {t} ^ {k}) - (- \nabla_ {\alpha} F (\mathbf {v} _ {t} ^ {k}, \alpha_ {t} ^ {k}; \xi_ {t} ^ {k}))\right) \alpha + \frac {1}{t} \psi_ {t} ^ {k} (\alpha) \right], +$$ + +where $\psi_t^k (\alpha) = \psi_t^k (\mathbf{u})$ in which $\mathbf{u} = [0,\dots ,0,\alpha ]$ and $\mathbf{u}_0^k = [0,\dots ,0,\alpha_0^k ]$ . By setting + +$$ +T_{k} = \inf \left\{\tau :\tau \geq M_{k}\max \left(\frac{(\delta + \max_{i}\| \hat{\mathbf{g}}_{1:\tau,i}^{k}\|_{2})\max (1,8\tilde{L}^{2})}{c},2c\left(\sum_{i = 1}^{d + 3}\| \hat{\mathbf{g}}_{1:\tau ,i}^{k}\|_{2} + (d + 3)\left(\delta +\max_{i}\| \hat{\mathbf{g}}_{1:\tau ,i}^{k}\|_{2}\right)\right)\right)\right\} , +$$ + +then $T_{k}$ is a stopping time which is bounded almost surely. 
By a stopping-time argument, we have

$$
\mathbb{E}_{k-1}\left[\frac{\sum_{t=1}^{T_k}(\mathbf{v}_t^k - \mathbf{v}_*)^\top\left(\nabla_{\mathbf{v}} f(\mathbf{v}_t^k, \alpha_t^k) - \nabla_{\mathbf{v}} F(\mathbf{v}_t^k, \alpha_t^k; \xi_t^k)\right)}{T_k}\right] = 0
$$

$$
\mathbb{E}_{k-1}\left[\frac{\sum_{t=1}^{T_k}(\alpha_t^k - \widetilde{\alpha}_t^k)^\top\left(-\nabla_{\alpha} f(\mathbf{v}_t^k, \alpha_t^k) - \left(-\nabla_{\alpha} F(\mathbf{v}_t^k, \alpha_t^k; \xi_t^k)\right)\right)}{T_k}\right] = 0
$$

Hence we know that

$$
\mathbb{E}_{k-1}\left(\mathbf{II}\right) = \mathbb{E}_{k-1}\left[\frac{\sum_{t=1}^{T_k}\left(\widetilde{\alpha}_t^k - \alpha_{*,k}\right)\left(-\nabla_{\alpha} f\left(\mathbf{v}_t^k, \alpha_t^k\right) - \left(-\nabla_{\alpha} F\left(\mathbf{v}_t^k, \alpha_t^k; \xi_t^k\right)\right)\right)}{T_k}\right].
$$

Since the variance of the stochastic gradient is bounded by its second moment, we can follow an analysis similar to the one used to bound $\mathbf{I}$ and show that

$$
\mathbb{E}_{k-1}\left(\mathbf{II}\right) \leq \mathbb{E}_{k-1}\left[\frac{\delta + \max_i\|\hat{\mathbf{g}}_{1:T_k,i}^k\|_2}{2\eta_k T_k}\|\mathbf{u}_1^k - \mathbf{u}_{*,k}\|_2^2 + \frac{\eta_k}{T_k}\sum_{i=1}^{d+3}\|\hat{\mathbf{g}}_{1:T_k,i}^k\|_2\right].
$$

Following the same analysis used to bound the RHS of (23), we know that

$$
\mathbb {E} _ {k - 1} \left[ \phi_ {k} (\bar {\mathbf {v}} _ {k}) - \underset {\mathbf {v}} {\min} \phi_ {k} (\mathbf {v}) \right] \leq \frac {c \left(\| \bar {\mathbf {v}} _ {k - 1} - \mathbf {s} _ {k} \| _ {2} ^ {2} + \mathbb {E} _ {k - 1} \| \bar {\mathbf {v}} _ {k - 1} - \bar {\mathbf {v}} _ {k} \| _ {2} ^ {2}\right)}{\eta_ {k} M _ {k}} + \frac {\eta_ {k}}{c M _ {k}}.
$$

![](images/29471e04bc85d41b68d048226a84860849617b8cb08089153da4d8f5637e03ec.jpg)

# A.5 PROOF OF THEOREM 3

Proof. Define $\phi_k(\mathbf{v}) = \phi(\mathbf{v}) + \frac{1}{2\gamma} \| \mathbf{v} - \bar{\mathbf{v}}_{k-1} \|^2$ . Since $\gamma \leq 1 / L$ , $\phi_k(\mathbf{v})$ is a convex and smooth function, with smoothness parameter $\hat{L} = L + \gamma^{-1}$ . Define $\mathbf{s}_k = \arg \min_{\mathbf{v} \in \mathbb{R}^{d+2}} \phi_k(\mathbf{v})$ . According to Theorem 2.1.5 of (Nesterov, 2013), we have

$$
\left\| \nabla \phi_ {k} (\bar {\mathbf {v}} _ {k}) \right\| ^ {2} \leq 2 \hat {L} \left(\phi_ {k} (\bar {\mathbf {v}} _ {k}) - \phi_ {k} (\mathbf {s} _ {k})\right). \tag {25}
$$

Combining (25) with Lemma 3 yields

$$
\mathbb {E} _ {k - 1} \| \nabla \phi_ {k} (\bar {\mathbf {v}} _ {k}) \| ^ {2} \leq 2 \hat {L} \left(\frac {c \left(\| \bar {\mathbf {v}} _ {k - 1} - \mathbf {s} _ {k} \| _ {2} ^ {2} + \mathbb {E} _ {k - 1} \| \bar {\mathbf {v}} _ {k - 1} - \bar {\mathbf {v}} _ {k} \| _ {2} ^ {2}\right)}{\eta_ {k} M _ {k}} + \frac {\eta_ {k}}{c M _ {k}}\right). \tag {26}
$$

Since $\phi_k(\bar{\mathbf{v}})$ is $(\gamma^{-1} - L)$ -strongly convex and $\gamma = \frac{1}{2L}$ , we have

$$
\phi_ {k} \left(\bar {\mathbf {v}} _ {k - 1}\right) \geq \phi_ {k} \left(\mathbf {s} _ {k}\right) + \frac {L}{2} \left\| \bar {\mathbf {v}} _ {k - 1} - \mathbf {s} _ {k} \right\| ^ {2}.
\tag {27} +$$ + +Plugging in $\mathbf{s}_k$ into Lemma 3 and combining (27) yields + +$$ +\mathbb {E} _ {k - 1} \left[ \phi \left(\bar {\mathbf {v}} _ {k}\right) + L \| \bar {\mathbf {v}} _ {k} - \bar {\mathbf {v}} _ {k - 1} \| ^ {2} \right] \leq \phi_ {k} \left(\bar {\mathbf {v}} _ {k - 1}\right) - \frac {L}{2} \| \bar {\mathbf {v}} _ {k - 1} - \mathbf {s} _ {k} \| ^ {2} + \frac {c \left(\| \bar {\mathbf {v}} _ {k - 1} - \mathbf {s} _ {k} \| _ {2} ^ {2} + \mathbb {E} _ {k - 1} \| \bar {\mathbf {v}} _ {k - 1} - \bar {\mathbf {v}} _ {k} \| _ {2} ^ {2}\right)}{\eta_ {k} M _ {k}} + \frac {\eta_ {k}}{c M _ {k}} +$$ + +By taking $\eta_k M_k L \geq 4c$ , rearranging the terms, and noting that $\phi_k(\bar{\mathbf{v}}_{k-1}) = \phi(\bar{\mathbf{v}}_{k-1})$ , we have + +$$ +\frac {c \left(\left\| \bar {\mathbf {v}} _ {k - 1} - \mathbf {s} _ {k} \right\| _ {2} ^ {2} + \mathbb {E} _ {k - 1} \left\| \bar {\mathbf {v}} _ {k - 1} - \bar {\mathbf {v}} _ {k} \right\| _ {2} ^ {2}\right)}{\eta_ {k} M _ {k}} \leq \phi (\bar {\mathbf {v}} _ {k - 1}) - \mathbb {E} _ {k - 1} \left[ \phi (\bar {\mathbf {v}} _ {k}) \right] + \frac {\eta_ {k}}{c M _ {k}}. \tag {28} +$$ + +Combining (28) and (26) yields + +$$ +\mathbb {E} _ {k - 1} \| \nabla \phi_ {k} (\bar {\mathbf {v}} _ {k}) \| ^ {2} \leq 6 L \left(\phi (\bar {\mathbf {v}} _ {k - 1}) - \mathbb {E} _ {k - 1} [ \phi (\bar {\mathbf {v}} _ {k}) ] + 2 \frac {\eta_ {k}}{c M _ {k}}\right). \tag {29} +$$ + +Taking expectation on both sides over all randomness until $\bar{\mathbf{v}}_{k - 1}$ is generated and by the tower property, we have + +$$ +\mathbb {E} \| \nabla \phi_ {k} (\bar {\mathbf {v}} _ {k}) \| ^ {2} \leq 6 L \left(\mathbb {E} \left[ \phi (\bar {\mathbf {v}} _ {k - 1}) - \phi (\mathbf {v} _ {*}) \right] - \mathbb {E} \left[ \phi (\bar {\mathbf {v}} _ {k}) - \phi (\mathbf {v} _ {*}) \right] + \frac {2 \eta_ {k}}{c M _ {k}}\right). 
\tag {30}
$$

Note that $\phi (\mathbf{v})$ is $L$ -smooth and hence $L$ -weakly convex, so we have

$$
\begin{array}{l} \phi (\bar {\mathbf {v}} _ {k - 1}) \geq \phi (\bar {\mathbf {v}} _ {k}) + \langle \nabla \phi (\bar {\mathbf {v}} _ {k}), \bar {\mathbf {v}} _ {k - 1} - \bar {\mathbf {v}} _ {k} \rangle - \frac {L}{2} \| \bar {\mathbf {v}} _ {k - 1} - \bar {\mathbf {v}} _ {k} \| ^ {2} \\ = \phi (\bar {\mathbf {v}} _ {k}) + \left\langle \nabla \phi (\bar {\mathbf {v}} _ {k}) + 2 L (\bar {\mathbf {v}} _ {k} - \bar {\mathbf {v}} _ {k - 1}), \bar {\mathbf {v}} _ {k - 1} - \bar {\mathbf {v}} _ {k} \right\rangle + \frac {3}{2} L \| \bar {\mathbf {v}} _ {k - 1} - \bar {\mathbf {v}} _ {k} \| ^ {2} \\ \end{array}
$$

$$
\stackrel {(a)} {=} \phi (\bar {\mathbf {v}} _ {k}) + \left\langle \nabla \phi_ {k} (\bar {\mathbf {v}} _ {k}), \bar {\mathbf {v}} _ {k - 1} - \bar {\mathbf {v}} _ {k} \right\rangle + \frac {3}{2} L \| \bar {\mathbf {v}} _ {k - 1} - \bar {\mathbf {v}} _ {k} \| ^ {2} \tag {31}
$$

$$
\begin{array}{l} \stackrel {(b)} {=} \phi (\bar {\mathbf {v}} _ {k}) - \frac {1}{2 L} \left\langle \nabla \phi_ {k} (\bar {\mathbf {v}} _ {k}), \nabla \phi_ {k} (\bar {\mathbf {v}} _ {k}) - \nabla \phi (\bar {\mathbf {v}} _ {k}) \right\rangle + \frac {3}{8 L} \left\| \nabla \phi_ {k} (\bar {\mathbf {v}} _ {k}) - \nabla \phi (\bar {\mathbf {v}} _ {k}) \right\| ^ {2} \\ = \phi (\bar {\mathbf {v}} _ {k}) - \frac {1}{8 L} \| \nabla \phi_ {k} (\bar {\mathbf {v}} _ {k}) \| ^ {2} - \frac {1}{4 L} \left\langle \nabla \phi_ {k} (\bar {\mathbf {v}} _ {k}), \nabla \phi (\bar {\mathbf {v}} _ {k}) \right\rangle + \frac {3}{8 L} \| \nabla \phi (\bar {\mathbf {v}} _ {k}) \| ^ {2}, \\ \end{array}
$$

where (a) and (b) hold by the definition of $\phi_k$ .
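For completeness, (a) and (b) can be spelled out. Since $\phi_k(\mathbf{v}) = \phi(\mathbf{v}) + \frac{1}{2\gamma}\|\mathbf{v} - \bar{\mathbf{v}}_{k-1}\|^2$ with $\gamma = \frac{1}{2L}$,

$$
\nabla \phi_ {k} (\bar {\mathbf {v}} _ {k}) = \nabla \phi (\bar {\mathbf {v}} _ {k}) + 2 L (\bar {\mathbf {v}} _ {k} - \bar {\mathbf {v}} _ {k - 1}),
$$

which gives (a); rearranging the same identity as $\bar{\mathbf{v}}_{k-1} - \bar{\mathbf{v}}_{k} = -\frac{1}{2L}\left(\nabla \phi_k(\bar{\mathbf{v}}_k) - \nabla \phi(\bar{\mathbf{v}}_k)\right)$ and substituting it into the inner product and the squared norm gives (b).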
+ +Rearranging the terms in (31) yields + +$$ +\begin{array}{l} \phi (\bar {\mathbf {v}} _ {k}) - \phi (\bar {\mathbf {v}} _ {k - 1}) \leq \frac {1}{8 L} \| \nabla \phi_ {k} (\bar {\mathbf {v}} _ {k}) \| ^ {2} + \frac {1}{4 L} \left\langle \nabla \phi_ {k} (\bar {\mathbf {v}} _ {k}), \nabla \phi (\bar {\mathbf {v}} _ {k}) \right\rangle - \frac {3}{8 L} \| \nabla \phi (\bar {\mathbf {v}} _ {k}) \| ^ {2} \\ \stackrel {(a)} {\leq} \frac {1}{8 L} \| \nabla \phi_ {k} (\bar {\mathbf {v}} _ {k}) \| ^ {2} + \frac {1}{8 L} \left(\| \nabla \phi_ {k} (\bar {\mathbf {v}} _ {k}) \| ^ {2} + \| \nabla \phi (\bar {\mathbf {v}} _ {k}) \| ^ {2}\right) - \frac {3}{8 L} \| \nabla \phi (\bar {\mathbf {v}} _ {k}) \| ^ {2} \\ = \frac {1}{4 L} \| \nabla \phi_ {k} (\bar {\mathbf {v}} _ {k}) \| ^ {2} - \frac {1}{4 L} \| \nabla \phi (\bar {\mathbf {v}} _ {k}) \| ^ {2} \\ \stackrel {(b)} {\leq} \frac {1}{4 L} \| \nabla \phi_ {k} (\bar {\mathbf {v}} _ {k}) \| ^ {2} - \frac {\mu}{2 L} \left(\phi (\bar {\mathbf {v}} _ {k}) - \phi (\mathbf {v} _ {*})\right), \tag {32} \\ \end{array} +$$ + +where (a) holds by using $\langle \mathbf{a},\mathbf{b}\rangle \leq \frac{1}{2} (\| \mathbf{a}\| ^2 +\| \mathbf{b}\| ^2)$ , and (b) holds by the PL property of $\phi$ + +Define $\Delta_k = \phi(\bar{\mathbf{v}}_k) - \phi(\mathbf{v}_*)$ . Combining (30) and (32), we can see that + +$$ +\mathbb {E} \left[ \Delta_ {k} - \Delta_ {k - 1} \right] \leq \frac {3}{2} \left(\mathbb {E} \left[ \Delta_ {k - 1} - \Delta_ {k} \right] + \frac {2 \eta_ {k}}{c M _ {k}}\right) - \frac {\mu}{2 L} \mathbb {E} \left[ \Delta_ {k} \right], +$$ + +which implies that + +$$ +\left(\frac {5}{2} + \frac {\mu}{2 L}\right) \mathbb {E} [ \Delta_ {k} ] \leq \frac {5}{2} \mathbb {E} [ \Delta_ {k - 1} ] + \frac {3 \eta_ {k}}{c M _ {k}}. 
$$

As a result, we have

$$
\begin{array}{l} \mathbb {E} [ \Delta_ {k} ] \leq \frac {5}{5 + \mu / L} \mathbb {E} [ \Delta_ {k - 1} ] + \frac {6 (\eta_ {k} / c M _ {k})}{5 + \mu / L} = \left(1 - \frac {\mu / L}{5 + \mu / L}\right) \left(\mathbb {E} [ \Delta_ {k - 1} ] + \frac {6 \eta_ {k}}{5 c M _ {k}}\right) \\ \leq \left(1 - \frac {\mu / L}{5 + \mu / L}\right) ^ {k} \mathbb {E} [ \Delta_ {0} ] + \frac {6}{5 c} \sum_ {j = 1} ^ {k} \frac {\eta_ {j}}{M _ {j}} \left(1 - \frac {\mu / L}{5 + \mu / L}\right) ^ {k + 1 - j}. \\ \end{array}
$$

By setting $\eta_{k} = \eta_{0}\exp \left(-\frac{(k - 1)}{2}\frac{\mu / L}{5 + \mu / L}\right)$ and $M_{k} = \frac{4c}{L\eta_{0}}\exp \left(\frac{(k - 1)}{2}\frac{\mu / L}{5 + \mu / L}\right)$ at the $k$ -th stage, we have

$$
\begin{array}{l} \mathbb {E} \left[ \Delta_ {k} \right] \leq \left(1 - \frac {\mu / L}{5 + \mu / L}\right) ^ {k} \mathbb {E} \left[ \Delta_ {0} \right] + \frac {\eta_ {0} ^ {2} L}{10 c ^ {2}} \sum_ {j = 1} ^ {k} \exp \left(- k \frac {\mu / L}{5 + \mu / L}\right) \\ \leq \exp \left(- k \frac {\mu / L}{5 + \mu / L}\right) \Delta_ {0} + \frac {\eta_ {0} ^ {2} L}{10 c ^ {2}} k \exp \left(- k \frac {\mu / L}{5 + \mu / L}\right). \\ \end{array}
$$

To achieve $\mathbb{E}[\Delta_K] \leq \epsilon$ , it suffices to let $K$ satisfy $\exp \left(-K \frac{\mu / L}{5 + \mu / L}\right) \leq \min \left(\frac{\epsilon}{2\Delta_0}, \frac{5c^2\epsilon}{K\eta_0^2L}\right)$ , i.e., $K \geq \left(\frac{5L}{\mu} + 1\right) \max \left(\log \frac{2\Delta_0}{\epsilon}, \log K + \log \frac{\eta_0^2L}{5c^2\epsilon}\right)$ .

Take $c = \frac{1}{\sqrt{d + 3}}$.
If $\| \hat{\mathbf{g}}_{1:T_k,i}^k\|_2 \leq \delta \cdot T_k^\alpha$ for $\forall k$ , where $0 \leq \alpha \leq \frac{1}{2}$ , and note that when $\tau \geq 1$ , + +$$ +\begin{array}{l} \max \left(\frac {(\delta + \max _ {i} \| \hat {\mathbf {g}} _ {1 : \tau , i} ^ {k} \| _ {2}) \max (1 , 8 \tilde {L} ^ {2})}{2 c}, c \left(\sum_ {i = 1} ^ {d + 3} \| \hat {\mathbf {g}} _ {1: \tau , i} ^ {k} \| _ {2} + (d + 3) \left(\delta + \max _ {i} \| \hat {\mathbf {g}} _ {1: \tau , i} ^ {k} \| _ {2}\right)\right)\right) \\ \leq \left[ (4 + 8 \tilde {L} ^ {2}) \sqrt {d + 3} \right] \delta \cdot \tau^ {\alpha} \\ \end{array} +$$ + +so we have $T_{k} \leq \frac{4c}{L\eta_{0}} \exp \left(\frac{(k - 1)}{2}\frac{\mu / L}{5 + \mu / L}\right) \cdot \left[(4 + 8\tilde{L}^{2})\sqrt{d + 3}\right] \delta T_{k}^{\alpha}$ , and hence + +$$ +T _ {k} \leq \left(\frac {4 \delta c}{L \eta_ {0}} \exp \left(\frac {(k - 1)}{2} \frac {\mu / L}{5 + \mu / L}\right) \cdot \left[ (4 + 8 \tilde {L} ^ {2}) \sqrt {d + 3} \right]\right) ^ {\frac {1}{1 - \alpha}}. +$$ + +Noting that $c = \frac{1}{\sqrt{d + 3}}$ , we can see that the total iteration complexity is + +$$ +\sum_ {k = 1} ^ {K} T _ {k} \leq \left(\frac {4 \delta (4 + 8 \tilde {L} ^ {2})}{L \eta_ {0}}\right) ^ {\frac {1}{1 - \alpha}} \cdot \frac {\exp \left(K \frac {\mu / L}{(5 + \mu / L) (2 - 2 \alpha)}\right) - 1}{\exp \left(\frac {\mu / L}{(5 + \mu / L) (2 - 2 \alpha)}\right) - 1} = \widetilde {O} \left(\left(\frac {L \delta^ {2} d}{\mu^ {2} \epsilon}\right) ^ {\frac {1}{2 (1 - \alpha)}}\right). +$$ + +The required number of samples is + +$$ +\sum_ {k = 1} ^ {K} m _ {k} = \frac {2 (\sigma^ {2} + C)}{p (1 - p) \eta_ {0} ^ {2} (d + 3)} \cdot \frac {\exp \left(K \frac {\mu / L}{5 + \mu / L}\right) - 1}{\exp \left(\frac {\mu / L}{5 + \mu / L}\right) - 1} = \widetilde {O} \left(\frac {L ^ {3} \sigma^ {2}}{\mu^ {2} \epsilon}\right). +$$ + +# A.6 PROOF OF LEMMA 1 + +Proof. 
For any fixed $\mathbf{w}$ , define $(a_{\mathbf{w}}^{*}, b_{\mathbf{w}}^{*}) = \arg \min_{a, b} \phi(\mathbf{w}, a, b)$ ($\phi(\mathbf{w}, a, b)$ is strongly convex in terms of $(a, b)$ , so the argmin is well-defined and unique). Note that

$$
\phi (\mathbf{v}) - \phi (\mathbf{v}_{*}) = \phi (\mathbf{w},a,b) - \min_{\mathbf{w},a,b}\phi (\mathbf{w},a,b) = \phi (\mathbf{w},a,b) - \phi (\mathbf{w},a_{\mathbf{w}}^{*},b_{\mathbf{w}}^{*}) + \phi (\mathbf{w},a_{\mathbf{w}}^{*},b_{\mathbf{w}}^{*}) - \min_{\mathbf{w},a,b}\phi (\mathbf{w},a,b)
$$

We bound $\phi (\mathbf{w},a,b) - \phi (\mathbf{w},a_{\mathbf{w}}^{*},b_{\mathbf{w}}^{*})$ and $\phi (\mathbf{w},a_{\mathbf{w}}^{*},b_{\mathbf{w}}^{*}) - \min_{\mathbf{w},a,b}\phi (\mathbf{w},a,b)$ respectively:

- Note that $\phi(\mathbf{w}, a, b)$ is strongly convex in $(a, b)$ with modulus $2\min(p, 1 - p)$ , so the PL condition holds, which means that

$$
\phi (\mathbf {w}, a, b) - \phi (\mathbf {w}, a _ {\mathbf {w}} ^ {*}, b _ {\mathbf {w}} ^ {*}) \leq \frac {1}{4 \min (p , 1 - p)} \| \nabla_ {(a, b)} \phi (\mathbf {w}, a, b) \| ^ {2}
$$

- For the second term, we have

$$
\begin{array}{l} \phi (\mathbf {w}, a _ {\mathbf {w}} ^ {*}, b _ {\mathbf {w}} ^ {*}) - \min _ {\mathbf {w}, a, b} \phi (\mathbf {w}, a, b) = \min _ {a, b} \phi (\mathbf {w}, a, b) - \min _ {\mathbf {w}, a, b} \phi (\mathbf {w}, a, b) \leq \frac {1}{2 \mu} \left\| \nabla_ {\mathbf {w}} \min _ {a, b} \phi (\mathbf {w}, a, b) \right\| ^ {2} \\ = \frac {1}{2 \mu} \left\| \nabla_ {\mathbf {w}} \phi (\mathbf {w}, a, b) + \nabla_ {\mathbf {w}} \phi (\mathbf {w}, a _ {\mathbf {w}} ^ {*}, b _ {\mathbf {w}} ^ {*}) - \nabla_ {\mathbf {w}} \phi (\mathbf {w}, a, b) \right\| ^ {2} \\ \leq \frac {1}{2 \mu} \left(2 \| \nabla_ {\mathbf {w}} \phi (\mathbf {w}, a, b) - \nabla_ {\mathbf {w}} \phi \left(\mathbf {w}, a _ {\mathbf {w}} ^ {*}, b _ {\mathbf {w}} ^ {*}\right) \| ^ {2} + 2 \| \nabla_ {\mathbf {w}} \phi (\mathbf {w}, a, b) \| ^ {2}\right) \\ \leq \frac {1}{2 \mu} \left(8 \tilde {L} ^
{2} \| (a, b) - \left(a _ {\mathbf {w}} ^ {*}, b _ {\mathbf {w}} ^ {*}\right) \| ^ {2} + 2 \| \nabla_ {\mathbf {w}} \phi (\mathbf {w}, a, b) \| ^ {2}\right) \\ \leq \frac {1}{2 \mu} \left(\frac {8 \tilde {L} ^ {2}}{4 \min (p ^ {2} , (1 - p) ^ {2})} \left\| \nabla_ {(a, b)} \phi (\mathbf {w}, a, b) \right\| ^ {2} + 2 \| \nabla_ {\mathbf {w}} \phi (\mathbf {w}, a, b) \| ^ {2}\right), \\ \end{array}
$$

where the last inequality holds since $\phi (\mathbf{w},a,b)$ is strongly convex in $(a,b)$ with modulus $2\min (p,1 - p)$ .

Combining these two cases, we know that $\phi (\mathbf{v}) - \phi (\mathbf{v}_*)\leq \frac{1}{2\mu'}\| \nabla \phi (\mathbf{v})\| ^2$ , where $\mu^{\prime} = \frac{1}{\max\left(\frac{1}{2\min(p,1 - p)} + \frac{2\hat{G}^2}{\mu\min(p^2,(1 - p)^2)},\frac{2}{\mu}\right)}$.

# A.7 AN EXAMPLE THAT SATISFIES PL CONDITION

One Hidden Layer Neural Network. A one-hidden-layer neural network computes $h(\mathbf{w};\mathbf{x}) = \sigma (\mathbf{w}^{\top}\mathbf{x})$, where $\sigma$ is the activation function. We have the following theorem:

Theorem 4. Let $\sigma$ be the Leaky ReLU activation function such that $\sigma(z) = c_1z$ for $z > 0$ and $\sigma(z) = c_2z$ if $z \leq 0$ . If $\mathbb{E}[\mathbf{x}|y = 1] = \mathbb{E}[\mathbf{x}|y = -1] = 0$ and $\mathbb{E}\left[\mathbf{x}\mathbf{x}'^{\top} \mid y = 1, y' = -1\right] = \mathbf{0}_{d \times d}$ , then

$$
f (\mathbf {w}) := \mathbb {E} _ {\mathbf {z}, \mathbf {z} ^ {\prime}} \left[ (1 - \sigma \left(\mathbf {w} ^ {\top} \mathbf {x}\right) + \sigma \left(\mathbf {w} ^ {\top} \mathbf {x} ^ {\prime}\right)) ^ {2} | y = 1, y ^ {\prime} = - 1 \right]
$$

satisfies the PL condition with $\mu = 2\min (c_1^2,c_2^2)\left[\lambda_{min}\left(\mathbb{E}\left[\mathbf{x}\mathbf{x}^\top |y = 1\right]\right) + \lambda_{min}\left(\mathbb{E}\left[\mathbf{x}\mathbf{x}^\top |y = -1\right]\right)\right]$ , where $\lambda_{min}$ stands for the minimum eigenvalue.
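As a quick numerical sanity check of the claimed PL constant (our own illustration; the one-dimensional two-point distribution, the slope values, and all helper names below are assumptions, not from the paper), one can verify $f(w) - f_* \leq \frac{1}{2\mu}\|\nabla f(w)\|^2$ by exact enumeration:

```python
# Theorem 4 in one dimension: x | y=1 and x' | y'=-1 are uniform on {+1, -1},
# so E[x|y] = 0 and E[x x' | y=1, y'=-1] = 0 as the theorem requires.
c1, c2 = 1.0, 0.5                      # Leaky ReLU slopes (assumed values)

def sigma(z):                          # Leaky ReLU activation
    return c1 * z if z > 0 else c2 * z

def f(w):                              # exact expectation by enumerating (x, x')
    return sum((1 - sigma(w * x) + sigma(w * xp)) ** 2
               for x in (1.0, -1.0) for xp in (1.0, -1.0)) / 4.0

def grad(w, h=1e-6):                   # central finite difference
    return (f(w + h) - f(w - h)) / (2 * h)

# In 1-D, E[x x^T | y] = 1 for both classes, so the theorem's constant is
mu = 2 * min(c1 ** 2, c2 ** 2) * (1 + 1)

f_star = min(f(w / 1000.0) for w in range(-3000, 3001))  # grid search for f_*
for w in (-2.0, -0.3, 0.7, 1.5):
    # PL condition: f(w) - f_* <= ||f'(w)||^2 / (2 mu)
    assert f(w) - f_star <= grad(w) ** 2 / (2 * mu) + 1e-8
```

With these choices $f(w) = 1 + \frac{9}{8}w^2$, so the inequality holds with room to spare; the check is only a plausibility test of the constant, not a proof.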

Remark: Consider the case where $\mathbf{x}$ follows a zero-mean Gaussian distribution with a non-degenerate covariance matrix. Then $\mu > 0$, since the minimum eigenvalues appearing in the expression of $\mu$ are positive.

Proof. Define $g_{1}(x) = (1 - x)^{2}$ , $g_{2}(\mathbf{w}) = \sigma (\mathbf{w}^{\top}\mathbf{x}) - \sigma (\mathbf{w}^{\top}\mathbf{x}^{\prime})$ , and $F(\mathbf{w}) = (1 - \sigma (\mathbf{w}^{\top}\mathbf{x}) + \sigma (\mathbf{w}^{\top}\mathbf{x}^{\prime}))^{2}$ . We know that $f(\mathbf{w}) = \mathbb{E}_{\mathbf{z},\mathbf{z}'}[F(\mathbf{w})|y = 1, y' = -1]$ and $F(\mathbf{w}) = g_{1}(g_{2}(\mathbf{w}))$ . For fixed $\mathbf{x}$ and $\mathbf{x}'$ , we can write $\sigma (\mathbf{w}^{\top}\mathbf{x})$ and $\sigma (\mathbf{w}^{\top}\mathbf{x}^{\prime})$ as $a\mathbf{w}^{\top}\mathbf{x}$ and $b\mathbf{w}^{\top}\mathbf{x}^{\prime}$ respectively, where clearly $a^{2} \geq \min(c_{1}^{2}, c_{2}^{2})$ and $b^{2} \geq \min(c_{1}^{2}, c_{2}^{2})$ . Note that $g_{1}$ is 2-strongly convex.
Since the conditional expectation preserves strong convexity, for all $\mathbf{w}$, let $\mathbf{w}_p$ be the optimal point closest to $\mathbf{w}$, so that $f_{*} = f(\mathbf{w}_{p})$; then we have

$$
\begin{array}{l} f \left(\mathbf {w} _ {p}\right) - f (\mathbf {w}) = \mathbb {E} \left[ g _ {1} \left(g _ {2} \left(\mathbf {w} _ {p}\right)\right) | y = 1, y ^ {\prime} = - 1 \right] - \mathbb {E} \left[ g _ {1} \left(g _ {2} (\mathbf {w})\right) | y = 1, y ^ {\prime} = - 1 \right] \\ \geq \mathbb {E} \left[ \langle \nabla g _ {1} (g _ {2} (\mathbf {w})), g _ {2} (\mathbf {w} _ {p}) - g _ {2} (\mathbf {w}) \rangle | y = 1, y ^ {\prime} = - 1 \right] + \mathbb {E} \left[ (g _ {2} (\mathbf {w}) - g _ {2} (\mathbf {w} _ {p})) ^ {2} | y = 1, y ^ {\prime} = - 1 \right] \\ = \mathbb {E} \left[ \left\langle 2 \left(g _ {2} (\mathbf {w}) - 1\right), \left(g _ {2} \left(\mathbf {w} _ {p}\right) - g _ {2} (\mathbf {w})\right) \right\rangle | y = 1, y ^ {\prime} = - 1 \right] + \mathbb {E} \left[ \left(g _ {2} (\mathbf {w}) - g _ {2} \left(\mathbf {w} _ {p}\right)\right) ^ {2} | y = 1, y ^ {\prime} = - 1 \right] \\ = \mathbb {E} \left[ \left\langle - 2 \left(1 - a \mathbf {w} ^ {\top} \mathbf {x} + b \mathbf {w} ^ {\top} \mathbf {x} ^ {\prime}\right), \left(a \mathbf {x} ^ {\top} - b \mathbf {x} ^ {\prime \top}\right) \left(\mathbf {w} _ {p} - \mathbf {w}\right) \right\rangle \mid y = 1, y ^ {\prime} = - 1 \right] \\ + \mathbb {E} \left[ \left((a \mathbf {x} ^ {\top} - b \mathbf {x} ^ {\prime \top}) (\mathbf {w} _ {p} - \mathbf {w})\right) ^ {2} \mid y = 1, y ^ {\prime} = - 1 \right] \\ = \mathbb {E} \left[ \left\langle 2 \left(1 - a \mathbf {w} ^ {\top} \mathbf {x} + b \mathbf {w} ^ {\top} \mathbf {x} ^ {\prime}\right) \left(b \mathbf {x} ^ {\prime} - a \mathbf {x}\right), \mathbf {w} _ {p} - \mathbf {w} \right\rangle \mid y = 1, y ^ {\prime} = - 1 \right] \\ + \mathbb {E} \left[ \left((a \mathbf {x} ^ {\top} - b \mathbf {x} ^ {\prime \top}) (\mathbf {w} _ {p} - \mathbf
{w})\right) ^ {2} \Big | y = 1, y ^ {\prime} = - 1 \right] \\ = \left\langle \nabla f (\mathbf {w}), \mathbf {w} _ {p} - \mathbf {w} \right\rangle + \mathbb {E} \left[ \left(\mathbf {w} _ {p} - \mathbf {w}\right) ^ {\top} \left(a \mathbf {x} - b \mathbf {x} ^ {\prime}\right) \left(a \mathbf {x} ^ {\top} - b \mathbf {x} ^ {\prime \top}\right) \left(\mathbf {w} _ {p} - \mathbf {w}\right) \mid y = 1, y ^ {\prime} = - 1 \right] \\ = \left\langle \nabla f (\mathbf {w}), \mathbf {w} _ {p} - \mathbf {w} \right\rangle + \left(\mathbf {w} _ {p} - \mathbf {w}\right) ^ {\top} \mathbb {E} \left[ \left(a ^ {2} \mathbf {x x} ^ {\top} + b ^ {2} \mathbf {x} ^ {\prime} \mathbf {x} ^ {\prime \top}\right) | y = 1, y ^ {\prime} = - 1 \right] \left(\mathbf {w} _ {p} - \mathbf {w}\right) \\ \geq \left\langle \nabla f (\mathbf {w}), \mathbf {w} _ {p} - \mathbf {w} \right\rangle + \left(\mathbf {w} _ {p} - \mathbf {w}\right) ^ {\top} \lambda_ {\min } \left(\mathbb {E} \left[ \left(a ^ {2} \mathbf {x x} ^ {\top} + b ^ {2} \mathbf {x} ^ {\prime} \mathbf {x} ^ {\prime \top}\right) | y = 1, y ^ {\prime} = - 1 \right]\right) \left(\mathbf {w} _ {p} - \mathbf {w}\right) \\ \stackrel {(*)} {\geq} \langle \nabla f (\mathbf {w}), \mathbf {w} _ {p} - \mathbf {w} \rangle + \frac {2 \lambda_ {\min} \left(\mathbb {E} \left[ a ^ {2} \mathbf {x x} ^ {\top} | y = 1 \right]\right) + 2 \lambda_ {\min} \left(\mathbb {E} \left[ b ^ {2} \mathbf {x x} ^ {\top} | y = - 1 \right]\right)}{2} \| \mathbf {w} _ {p} - \mathbf {w} \| ^ {2} \\ \geq \langle \nabla f (\mathbf {w}), \mathbf {w} _ {p} - \mathbf {w} \rangle + \frac {2 \min \left(c _ {1} ^ {2} , c _ {2} ^ {2}\right) \left[ \lambda_ {\min } \left(\mathbb {E} \left[ \mathbf {x x} ^ {\top} | y = 1 \right]\right) + \lambda_ {\min } \left(\mathbb {E} \left[ \mathbf {x x} ^ {\top} | y = - 1 \right]\right) \right]}{2} \| \mathbf {w} _ {p} - \mathbf {w} \| ^ {2} \\ \geq \min _ {\mathbf {w} ^ {\prime}} \left[ \langle \nabla f (\mathbf {w}), \mathbf {w} ^ {\prime} - 
\mathbf {w} \rangle + \frac {2 \min \left(c _ {1} ^ {2} , c _ {2} ^ {2}\right) \left[ \lambda_ {\min } \left(\mathbb {E} \left[ \mathbf {x x} ^ {\top} | y = 1 \right]\right) + \lambda_ {\min } \left(\mathbb {E} \left[ \mathbf {x x} ^ {\top} | y = - 1 \right]\right) \right]}{2} \| \mathbf {w} ^ {\prime} - \mathbf {w} \| ^ {2} \right] \\ = - \frac {1}{4 \min \left(c _ {1} ^ {2} , c _ {2} ^ {2}\right) \left[ \lambda_ {\min } \left(\mathbb {E} \left[ \mathbf {x x} ^ {\top} \mid y = 1 \right]\right) + \lambda_ {\min } \left(\mathbb {E} \left[ \mathbf {x x} ^ {\top} \mid y = - 1 \right]\right) \right]} \| \nabla f (\mathbf {w}) \| ^ {2}, \\ \end{array}
$$

where $(\ast)$ holds since $\lambda_{\min}(A + B) \geq \lambda_{\min}(A) + \lambda_{\min}(B)$ , and the last inequality holds since $a^2 \geq \min(c_1^2, c_2^2)$ and $b^2 \geq \min(c_1^2, c_2^2)$ .

# A.8 DATASET PREPARATION

We construct the datasets as follows. For CIFAR10/STL10, we label the first 5 classes as the negative ("-") class and the last 5 classes as the positive ("+") class, which leads to a 50/50 class ratio. For CIFAR100, we label the first 50 classes as the negative ("-") class and the last 50 classes as the positive ("+") class. For the imbalanced cases, we randomly remove $90\%$ , $80\%$ , or $60\%$ of the negative samples from the training data, which leads to 91/9, 83/17, and 71/29 ratios, respectively. The test data are kept unchanged.

# A.9 MORE EXPERIMENTS

Model pretraining is effective in many deep learning tasks, and thus we further evaluate the performance of the proposed methods on pretrained models. We first train the model using SGD for up to 2000 iterations with an initial step size of 0.1, and then continue training using PPD-SG. We denote this method as PPD-SG+pretrain; the results are shown in Figure 2. The parameters are tuned in the same range as in Section 5.
We observe that pretraining helps the model converge and achieves better AUC performance in most cases.

# A.10 ADDITIONAL EXPERIMENTS WITH DIFFERENT LABELING ORDER

To investigate the effect of labeling order, we also randomly partition the classes into two equal groups. For the CIFAR10 and STL10 datasets, we randomly partition the 10 classes into two labels (i.e., we randomly select 5 classes as the positive label and the other 5 classes as the negative label). For the CIFAR100 dataset, we randomly partition the 100 classes into two labels (i.e., we randomly select 50 classes as the positive label and the other 50 classes as the negative label). After that, we randomly remove $95\%$ and $90\%$ of the negative samples from the training data, which leads to 20:1 and 10:1 ratios, respectively. The test data are kept unchanged. We also add AdaGrad for minimizing the cross-entropy loss as a new baseline. The corresponding experimental results are included in Figure 3. We can see that PPD-Adagrad and PPD-SG converge faster than the other baselines.
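The labeling and subsampling procedure described in A.8 and above can be sketched as follows (our own illustration; the function and variable names, the NumPy usage, and the toy labels are assumptions, not code released with the paper):

```python
import numpy as np

def make_binary_imbalanced(labels, n_classes, neg_keep, seed=0):
    """Randomly split the classes 50/50 into a binary task, then keep only a
    fraction `neg_keep` of the negative samples (e.g. 0.05 gives roughly 20:1)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_classes)
    pos_classes = perm[: n_classes // 2]
    binary = np.where(np.isin(labels, pos_classes), 1, -1)
    neg_idx = np.flatnonzero(binary == -1)
    keep_neg = rng.choice(neg_idx, size=int(len(neg_idx) * neg_keep), replace=False)
    keep = np.sort(np.concatenate([np.flatnonzero(binary == 1), keep_neg]))
    return keep, binary[keep]

# toy stand-in for CIFAR10 training labels (the test split is left untouched)
labels = np.random.default_rng(1).integers(0, 10, size=50000)
idx, y = make_binary_imbalanced(labels, n_classes=10, neg_keep=0.05)
ratio = (y == 1).sum() / (y == -1).sum()   # roughly 20:1
```

Applying the returned index array to the image tensor yields the imbalanced training set, while the test labels are only remapped to $\{+1, -1\}$ without any subsampling.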
+ +![](images/7e3ac08dec365b9009c6c7f7787a8fb0953eec92c6a18bc61c19074381908269.jpg) + +![](images/466000b78198ef68f52bfa1bf016f5346d50725acbdfe081e2da45c178b70942.jpg) + +![](images/9da5248341cea8b820a6bdc3d1433327bb27c5b39ca054fb17296d7af2f24d3d.jpg) + +![](images/37c429958cb12bb02aa0e610d1edf809512166079b1716d0ff169e22b1c9158b.jpg) + +![](images/6895108f46334c17f877ceb5b11aa177b10653948e063d597a39332dc77e9493.jpg) + +![](images/0f7b66545bb7bd89aaee52bc895a3cc024c1a415536af4f0da5db26e1d76a47d.jpg) + +![](images/55a9520463531b2763185fc7d014a51ac1375aa21b6dac2890c0a8ed40de7bd4.jpg) + +![](images/b5b7ac01ed03d962dba526dbf882ebe353aab855d6d23ab893cf729e1d75d886.jpg) + +![](images/52bc8f77903ee39dc8d868fd22827d330d2500343373080952c0b92f38f16abf.jpg) + +![](images/68c1fb764c554dac667087d76863c876c3c081c8bd7ca98d34fcb8edf90c83e5.jpg) + +![](images/e02d383756f69a8455d60347165aeb6d4f7dbd64973b498c34cae7bbea23c2a5.jpg) + +![](images/88cd45a93513596e9b97fe8acffbef095671d9d4ad4d7e1d65a27b8fc2376064.jpg) + +![](images/5cf97d78826c3ac75f8006188d31c25f48c88bf4120d6330b20bc528c93bf51d.jpg) +Figure 2: Comparison of testing AUC on Cat&Dog, CIFAR10, CIFAR100 and STL10. + +![](images/4517b2016d40988a36fb207e1f0903efb468b1d080d77394fde3e99282a2a037.jpg) + +![](images/b1436bb927ca2bdf7cbf237bfeebf5fc2fbff13fe0e076c71ce770f4ab920a6b.jpg) + +![](images/afcd1972d87c120b7c2c34d3e6f3a16d7a5b524f6daadfbc6c1c279c54b128b2.jpg) + +![](images/3524ac3619078043e9c4427601b81853f608b5b54011a7c901f39efd58972af9.jpg) + +![](images/1c23b356dcac48ba6d8bf78d3281b23b4312e4016f014ce9ebe4c21c02d792c2.jpg) + +![](images/a61bad5fbd656b5b241a2afd547e9bb1e7b1a55a1addd51ccc2b05ee41743c46.jpg) + +![](images/9ed145a78dd7114a1977eef8bedc8a75f86415cb827e2608b902b569ccd9ea12.jpg) + +![](images/8f420ae442a9be1fe4e0836c4172218be22fdf85ad5ceaef8c633324aac44a46.jpg) +Figure 3: Comparison of testing AUC on Cat&Dog, CIFAR10, CIFAR100 and STL10. 
![](images/d9f64c62013658f7ed87dec9a21e25966cdb59be9f790b7e6815a875f149ed18.jpg)

![](images/0879fb95e3743d9faedcb6a96b3b015e5e3f2a322fc499933595c0d32b2dd45d.jpg)

![](images/262dd83d69ea33393f499ae4505ae2f3fd6bf3c5fe60c3996ae0d037bf4eabf1.jpg)
\ No newline at end of file
diff --git a/stochasticaucmaximizationwithdeepneuralnetworks/images.zip b/stochasticaucmaximizationwithdeepneuralnetworks/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..bce1cb6ce607d9a533fc896628277a0bd8057684
--- /dev/null
+++ b/stochasticaucmaximizationwithdeepneuralnetworks/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a6239af29fbbc4017d5fcdebe22eb5ee9693352decc55fdaed9256f6d04fad16
+size 1973735
diff --git a/stochasticaucmaximizationwithdeepneuralnetworks/layout.json b/stochasticaucmaximizationwithdeepneuralnetworks/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..bf69379de3c5ec652075af976227a97396a9e27e
--- /dev/null
+++ b/stochasticaucmaximizationwithdeepneuralnetworks/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c689aaa62989fcceec99231f13c7a0542d1f4aba76c862ab7a0f54a2e5c52ca0
+size 1073895
diff --git a/svqnsequentialvariationalsoftqlearningnetworks/c2cf74f9-6333-40fc-b250-fdae53dc15f4_content_list.json b/svqnsequentialvariationalsoftqlearningnetworks/c2cf74f9-6333-40fc-b250-fdae53dc15f4_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..bfd2091529c1d81b12e640589e456f019f3f9db9
--- /dev/null
+++ 
b/svqnsequentialvariationalsoftqlearningnetworks/c2cf74f9-6333-40fc-b250-fdae53dc15f4_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:31bb1c3e35816cb3ca633137b949caec305ecd5656420fb38396df0de7d872cd +size 93903 diff --git a/svqnsequentialvariationalsoftqlearningnetworks/c2cf74f9-6333-40fc-b250-fdae53dc15f4_model.json b/svqnsequentialvariationalsoftqlearningnetworks/c2cf74f9-6333-40fc-b250-fdae53dc15f4_model.json new file mode 100644 index 0000000000000000000000000000000000000000..6061063cb86599914e6d02228e805cf67d01852b --- /dev/null +++ b/svqnsequentialvariationalsoftqlearningnetworks/c2cf74f9-6333-40fc-b250-fdae53dc15f4_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:72bea683654d732e8b3914542cd4a204cd1f25f98493ef55c865d9bc83b8c8c8 +size 109877 diff --git a/svqnsequentialvariationalsoftqlearningnetworks/c2cf74f9-6333-40fc-b250-fdae53dc15f4_origin.pdf b/svqnsequentialvariationalsoftqlearningnetworks/c2cf74f9-6333-40fc-b250-fdae53dc15f4_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..edfc0bfee57ef820b4420ac2e3cc38e913f85fd8 --- /dev/null +++ b/svqnsequentialvariationalsoftqlearningnetworks/c2cf74f9-6333-40fc-b250-fdae53dc15f4_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:28a1aff5c2c501c860ebd0cd535c83fc4b69931b22fb01bd9a052aedb54d913b +size 4115540 diff --git a/svqnsequentialvariationalsoftqlearningnetworks/full.md b/svqnsequentialvariationalsoftqlearningnetworks/full.md new file mode 100644 index 0000000000000000000000000000000000000000..c8ff5aac3f8a1dad1ed1dc0e76f8a024c540c954 --- /dev/null +++ b/svqnsequentialvariationalsoftqlearningnetworks/full.md @@ -0,0 +1,405 @@ +# SVQN:SEQUENTIAL VARIATIONAL SOFT Q-LEARNING NETWORKS + +Shiyu Huang, Hang Su, Jun Zhu*, Ting Chen* + +Dept. of Comp. Sci. 
& Tech., BNRist Center, Institute for AI, THBI Lab, Tsinghua University

hsy17@mails.tsinghua.edu.cn; {suhangss, dcszj, tingchen}@tsinghua.edu.cn

# ABSTRACT

Partially Observable Markov Decision Processes (POMDPs) are popular and flexible models for real-world decision-making applications that demand information from past observations to make optimal decisions. Standard reinforcement learning algorithms for solving Markov Decision Process (MDP) tasks are not applicable, as they cannot infer the unobserved states. In this paper, we propose a novel algorithm for POMDPs, named sequential variational soft Q-learning networks (SVQNs), which formalizes the inference of hidden states and maximum entropy reinforcement learning (MERL) under a unified graphical model and optimizes the two modules jointly. We further design a deep recurrent neural network to reduce the computational complexity of the algorithm. Experimental results show that SVQNs can utilize past information to help decision making for efficient inference, and outperform other baselines on several challenging tasks. Our ablation study shows that SVQNs generalize over time and are robust to disturbances of the observation.

# 1 INTRODUCTION

In recent years, substantial progress has been made in deep reinforcement learning for solving various challenging tasks, including the computer Go game (Silver et al., 2016), Atari games (Mnih et al., 2015), StarCraft (Zambaldi et al., 2018; Pang et al., 2018) and first-person shooting (FPS) games (Lample & Chaplot, 2017; Wu & Tian, 2016; Huang et al., 2019). However, in many real-world applications, decision-making problems are partially observable (Astrom, 1965), preventing such problems from being solved by standard reinforcement learning algorithms.
Formally, these kinds of problems are often defined as Partially Observable Markov Decision Processes (POMDPs) (Kaelbling et al., 1998), which demand information from past observations to help in the decision-making process (McCallum, 1993). + +Although numerous efforts (Hausknecht & Stone, 2015; Foerster et al., 2016; Igl et al., 2018; Zhu et al., 2018) have been paid to tackle this problem, there still exist various challenges. For example, Egorov (2015) tries to solve POMDPs by using the belief of the agent as the input of DQN (Mnih et al., 2015), but this algorithm needs access to the environment model. However, in many reinforcement learning tasks, it is not possible for the agent to acquire the underlying transition function, making such algorithms inapplicable. Some recent work (Karkus et al., 2017; McAllester & Singh, 2013; Babayan et al., 2018) tries to solve POMDPs under the model-free setting, i.e., the agent does not need to know and learn the transition function of the environment. For instance, Karkus et al. (2017) trained an agent to navigate in a partially observable grid world under the model-free setting, i.e., the agent can only observe a part of the grid world and does not learn the transition function. The agent uses its local observations to update its beliefs (McAllester & Singh, 2013; Babayan et al., 2018). In their experiments, the ground truth of the state is the full map plus the location of the agent, which means the representation of the state is explicit. However, in some complex tasks, it is impossible to acquire or design the state or beliefs. + +To solve the unknown representation problem of the state, Hausknecht & Stone (2015) and Zhu et al. (2018) try to represent the state as latent variables of neural networks. However, they only use + +a deep recurrent neural network to capture the historical information and fail to utilize the Markov property of the state in POMDPs. Igl et al. 
(2018) apply sequential Monte Carlo (SMC) (Le et al., 2017) to introduce inductive bias to the neural network, which can embody the Markov properties of the state. They can infer the state from the past observations online. However, they separate the planning algorithm from the inference of the state. + +To infer the hidden states and optimize the planning module jointly, we represent POMDPs as a unified probabilistic graphical model (PGM) and derive a single evidence lower bound (ELBO). We apply structured variational inference to optimize the ELBO. In our implementation, we design generative models to infer the hidden variables; however, the distribution of the latent variables is conditioned on previous hidden states. This is different from standard VAEs (Kingma & Welling, 2013), whose prior of the latent variables can be a standard Gaussian distribution. Hence, we apply an additional approximate function to tackle the conditional prior problem. The planning problem can also be solved under the PGM framework. Fortunately, maximum entropy reinforcement learning (MERL) (Levine, 2018) provides a tool to formalize planning as a probabilistic inference task. + +In this paper, we propose a novel end-to-end neural network called the sequential variational soft Q-learning network (SVQN), which integrates the learning of hidden states and the optimization of the planning within the same framework. A deep recurrent neural network (RNN) (Cho et al., 2014; Hochreiter & Schmidhuber, 1997) in SVQNs is used to reduce the computational complexity, because the feature extraction can share the same weights in the RNN. Experimental results show that the SVQN can utilize past information to help in decision making for efficient inference, and outperforms other baselines on several challenging tasks. Our ablation study shows that SVQNs have the generalization ability over time and are robust to the disturbance of the observation.
+ +Contributions: (1) We derive the variational lower bound for POMDPs, which allows us to integrate the optimization of the control problem and the learning of the hidden state under a unified graphical model. (2) We propose to tackle the difficulty of the inference of the hidden state and solve the problem of a conditional prior using generative models. (3) We design an end-to-end deep recurrent neural network, which can reduce the computational complexity and be trained efficiently. + +# 2 RELATED WORK + +We summarize some related work in POMDPs and inference methods for sequential data. + +Model-Based and Model-Free methods for POMDPs: When the environment model is accessible, POMDPs can be solved by model-based methods. Egorov (2015) used model-based methods to solve POMDPs, but their agents need to know the belief-update function and the transition function. When the environment model is unknown, model-free methods should be applied. Recently, some researchers (Hausknecht & Stone, 2015; Zhu et al., 2018) used recurrent neural networks to capture the historical information, but they failed to utilize the Markov property of the state in POMDPs. Our work proposes generative models for algorithm learning, which tackles the difficulty of the inference of hidden states and introduces inductive bias to the network structure. Igl et al. (2018) applied sequential Monte Carlo (SMC) to POMDPs. They can infer the hidden state from the past observations online. However, they separate the planning algorithm from the inference of the hidden state. Our algorithm is derived from a unified graphical model, which can train the inference model and the planning algorithm jointly. + +Explorations in POMDPs: In contrast to MDP methods, POMDP methods must explore to gather information that yields no immediate reward but can be exploited later to gain higher rewards. Pathak et al. (2017) and Choi et al.
(2018) designed deep reinforcement learning algorithms to aid exploration in their tasks. These tasks are actually POMDP tasks; however, they treat them as MDP tasks and intuitively add exploration tricks to standard reinforcement learning algorithms. Their exploration tricks include the reconstruction of observations and actions, which comes from intuitive concepts such as curiosity and attention. The generative models in our method also need to reconstruct the observations and actions, but our algorithm is derived from solid theoretical foundations. + +Inference for Sequential Data: The observation in POMDP tasks is sequential data, and we need to infer the hidden state from observations. Coquelin et al. (2009) used a particle filter to estimate the belief state given past observations. However, their method needs access to the + +![](images/a3d3733e5d1d6ea8cd1da3667b10b1117a643ccb48cfe3a14499e7573b6dbe06.jpg) +Figure 1: The graphical models for Markov decision processes (MDPs) (a) and partially observable Markov decision processes (POMDPs) (b). Grey nodes are observed, white nodes are hidden. In POMDPs, the state $s$ is not observable and must be inferred from past observations. $\mathcal{O}_t$ is a binary random variable, where $\mathcal{O}_t = 1$ denotes that the action is optimal at time $t$ , and $\mathcal{O}_t = 0$ denotes that the action is not optimal. + +environment model. Chung et al. (2015) derived the evidence lower bound of latent variables for sequential data and designed a novel deep neural network to infer recurrent latent variables. Our inference method bears some superficial resemblance to their method, but we derive the evidence lower bound from a different graphical model and tackle the difficulty of integrating the RL planning algorithm instead of just inferring the hidden variables.
+ +# 3 PRELIMINARIES + +We start by briefly reviewing the background and notations related to our method, including partially observable Markov decision processes and maximum entropy reinforcement learning, which solves optimal control problems using a probabilistic framework. + +# 3.1 PARTIALLY OBSERVABLE MARKOV DECISION PROCESSES + +In many control tasks, the complete state of the environment is unknown to the agent. Instead, the agent can only receive local observations, which are typically conditioned on the current state of the system. The agent needs to make decisions based on the historical information. Such problems can be formalized as partially observable Markov decision processes (POMDPs) (Smith & Simmons, 2004). Formally, a POMDP is represented as a tuple $(S, A, O, T, Z, r)$ , where $S$ , $A$ and $O$ are the state space, action space and observation space, respectively. The reward function $r(s, a)$ is the received reward when taking action $a$ in state $s$ . $T(s, a, s')$ is the state-transition function, which defines the probability of the succeeding state $s'$ after taking action $a$ in state $s$ . $Z(s, a, o)$ is the observation function, which defines the probability of the emitted observation $o$ after taking action $a$ in state $s$ . + +POMDPs use the belief $b_{t}(s)$ to maintain the distribution over the unknown state at time $t$ , and the agent updates its belief with a Bayesian filter when receiving a new observation as + +$$ +b _ {t} \left(s ^ {\prime}\right) = \eta Z \left(s ^ {\prime}, a _ {t}, o _ {t}\right) \sum_ {s \in S} T \left(s, a _ {t}, s ^ {\prime}\right) b _ {t - 1} (s), \tag {1} +$$ + +where $\eta$ is a normalizing constant and $a_{t}$ is the action at time $t$ .
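For a finite state space, the Bayesian filter in Eq. (1) reduces to a few array operations. The sketch below makes the normalization by $\eta$ explicit; the array layout and the toy numbers are our own illustration, not from the paper.

```python
import numpy as np

def belief_update(b, a, o, T, Z):
    """One step of the Bayesian filter in Eq. (1) for a discrete POMDP.

    b: current belief over states, shape (|S|,)
    T[s, a, s']: state-transition probabilities
    Z[s', a, o]: observation probabilities
    """
    predicted = b @ T[:, a, :]        # sum_s T(s, a, s') b_{t-1}(s)
    unnorm = Z[:, a, o] * predicted   # weight by Z(s', a_t, o_t)
    return unnorm / unnorm.sum()      # eta is the normalizing constant

# Toy 2-state, 1-action, 2-observation POMDP (illustrative numbers only).
T = np.array([[[0.9, 0.1]], [[0.2, 0.8]]])   # shape (|S|, |A|, |S|)
Z = np.array([[[0.8, 0.2]], [[0.3, 0.7]]])   # shape (|S|, |A|, |O|)
b1 = belief_update(np.array([0.5, 0.5]), a=0, o=1, T=T, Z=Z)
# b1 is again a probability distribution over the two states.
```

Observing $o=1$, which is more likely under the second state, shifts the belief mass toward that state.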
POMDP planning needs to find a policy $\pi$ to maximize the cumulative reward as + +$$ +V _ {\pi} \left(b _ {0}\right) = \mathbb {E} \left(\sum_ {t = 0} ^ {T} \gamma^ {t} r \left(s _ {t}, a _ {t}\right) \mid b _ {0}, \pi\right), \tag {2} +$$ + +where $s_t$ is the state at time $t$ , and $\gamma \in (0,1)$ is a discount factor. + +# 3.2 MAXIMUM ENTROPY REINFORCEMENT LEARNING + +POMDPs require solving two problems: the inference of state and optimal control. To integrate both of them under a unified framework, we represent POMDPs as a probabilistic graphical model. So the optimal control problem needs to be solved as a probabilistic inference task. Fortunately, the maximum entropy reinforcement learning (MERL) algorithm provides an effective tool to solve the optimal control problem under the PGM framework. The graphical model of Markov decision processes + +is shown in Fig. 1(a). We borrow the notation from Levine (2018) to illustrate the algorithm. Levine (2018) introduces a binary random variable $\mathcal{O}_t$ to the graphical model, where $\mathcal{O}_t = 1$ denotes that the action is optimal at time $t$ , and $\mathcal{O}_t = 0$ denotes that the action is not optimal. The probability distribution of $\mathcal{O}$ is $p(\mathcal{O}_t = 1|s_t,a_t) = \exp (r(s_t,a_t))$ , and the variational lower bound is given by: + +$$ +\log p \left(\mathcal {O} _ {1: T}\right) \geq \mathbb {E} _ {\left(s _ {1: T}, a _ {1: T}\right) \sim \pi \left(s _ {1: T}, a _ {1: T}\right)} \left[ \sum_ {t = 1} ^ {T} r \left(s _ {t}, a _ {t}\right) - \log \pi \left(a _ {t} \mid s _ {t}\right) \right], \tag {3} +$$ + +where $\pi (a|s)$ is the policy function. Standard reinforcement learning only needs to maximize the cumulative reward. However, MERL maximizes an extra term, which is the policy entropy at each visited state.
To get the optimal solution for MERL, two messages are introduced, i.e., $\beta_{t}(s_{t},a_{t}) = p(\mathcal{O}_{t:T}|s_{t},a_{t})$ and $\beta_{t}(s_{t}) = p(\mathcal{O}_{t:T}|s_{t})$ , and their relations are given by: + +$$ +\beta_ {t} (s _ {t}) = \int_ {A} \beta_ {t} (s _ {t}, a _ {t}) p (a _ {t} | s _ {t}) d a _ {t}, +$$ + +$$ +\beta_ {t} \left(s _ {t}, a _ {t}\right) = \int_ {S} \beta_ {t + 1} \left(s _ {t + 1}\right) p \left(s _ {t + 1} \mid s _ {t}, a _ {t}\right) p \left(\mathcal {O} _ {t} \mid s _ {t}, a _ {t}\right) d s _ {t + 1}, \tag {4} +$$ + +and the optimal policy is given by: + +$$ +\pi \left(a _ {t} \mid s _ {t}, \mathcal {O} _ {t: T}\right) = \frac {\beta_ {t} \left(s _ {t} , a _ {t}\right)}{\beta_ {t} \left(s _ {t}\right)}. \tag {5} +$$ + +We can define the Q-value function and V-value function as below: + +$$ +Q \left(s _ {t}, a _ {t}\right) = \log \beta_ {t} \left(s _ {t}, a _ {t}\right), \tag {6} +$$ + +$$ +V \left(s _ {t}\right) = \log \beta_ {t} \left(s _ {t}\right), \tag {7} +$$ + +and the update functions are: + +$$ +V (s _ {t}) = \log \int_ {A} p (a _ {t} | s _ {t}) \exp (Q (s _ {t}, a _ {t})) d a _ {t}, +$$ + +$$ +Q \left(s _ {t}, a _ {t}\right) = r \left(s _ {t}, a _ {t}\right) + \log \mathbb {E} _ {s _ {t + 1} \sim p \left(s _ {t + 1} \mid s _ {t}, a _ {t}\right)} \left[ \exp \left(V \left(s _ {t + 1}\right)\right) \right]. \tag {8} +$$ + +To maximize the variational lower bound in Eq. (3), we can use a parameterized Q-function $Q_{\theta}(s_t,a_t)$ , where $\theta$ is the function parameter, and it can be learned via gradient descent: + +$$ +\theta = \theta - \alpha \mathbb {E} \left[ \left(Q _ {\theta} \left(s _ {t}, a _ {t}\right) - \left(r \left(s _ {t}, a _ {t}\right) + \log \int_ {A} p \left(a _ {t + 1} \mid s _ {t + 1}\right) \exp \left(Q _ {\theta} \left(s _ {t + 1}, a _ {t + 1}\right)\right) d a _ {t + 1}\right)\right) \frac {d Q _ {\theta} \left(s _ {t}, a _ {t}\right)}{d \theta} \right], \tag {9} +$$ + +where $\alpha$ is the learning rate.
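To make the backup concrete, here is a tabular sketch of the updates in Eqs. (8)-(9) for a small discrete MDP, assuming a uniform action prior $p(a|s)$ so that the soft value reduces to a log-sum-exp (up to an additive constant); the sizes and numbers are our own illustration, not the paper's setup.

```python
import numpy as np

def soft_backup(Q, r, T):
    """One soft Bellman backup for tabular Q-values (discrete version of Eq. 8).

    Q: (|S|, |A|) current Q-values; r: (|S|, |A|) rewards;
    T: (|S|, |A|, |S|) transition probabilities.
    """
    V = np.log(np.exp(Q).sum(axis=1))   # V(s) = log sum_a exp(Q(s, a))
    return r + np.log(T @ np.exp(V))    # Q(s, a) = r + log E_{s'}[exp(V(s'))]

def td_step(Q, r, T, alpha=0.1):
    """Gradient step in the spirit of Eq. (9): for a tabular Q the gradient
    dQ/dtheta is an indicator, so the step moves Q toward the soft target."""
    return Q - alpha * (Q - soft_backup(Q, r, T))

# Two states, two actions, uniform transitions (illustrative numbers).
S, A = 2, 2
T = np.full((S, A, S), 0.5)
r = np.ones((S, A))
Q = np.zeros((S, A))
target = soft_backup(Q, r, T)   # every entry equals 1 + log(2) here
```

Iterating `td_step` drives the tabular Q toward the fixed point of the soft backup.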
This algorithm is called soft Q-learning, because the update function for the Q-value can be considered a soft-update version of the standard Bellman backup. For a better understanding of MERL, we refer readers to the tutorial by Levine (2018). + +# 4 SEQUENTIAL VARIATIONAL SOFT Q-LEARNING NETWORKS + +We now present our algorithm in detail. We first derive the variational lower bound for POMDPs, and then illustrate how to deal with the conditional prior. + +# 4.1 VARIATIONAL LOWER BOUND FOR POMDPS + +Different from MDPs, the state $s$ of POMDPs in the PGM (shown in Fig. 1(b)) is unobservable, and needs to be inferred from the action $a$ and the observation $o$ . We need to derive a different variational lower bound for POMDPs, which can be used to infer the hidden state and do planning jointly. + +Unlike the observation function $Z(s, a, o)$ in Section 3.1, we assume that the observation is emitted from the hidden state $s$ , which means the probability distribution of observations is only conditioned on states, i.e., $o_t \sim p(o_t | s_t)$ . This assumption holds in many tasks. For example, the observation in a partially observed maze is only determined by the full map and the agent's location. + +![](images/1fda6e9a7208c611c51116ceaece7a06fa947714d8a50eb02ab09f8b3c7dcc5c.jpg) +Figure 2: The structure of sequential variational soft Q-learning networks (SVQNs). Black solid lines represent forward paths of the neural network, gray dashed lines represent reconstruction paths and blue arrows stand for sampling latent variables using the re-parameterization trick. Double-arrows indicate that the algorithm needs to minimize the KL-divergence between two probability distributions. The model takes the observation $o_{t}$ , previous action $a_{t-1}$ and reward $r_{t-1}$ as inputs, and it uses a neural network $f_{\theta}(o_{t}, a_{t-1}, r_{t-1})$ to extract the low-dim hidden feature $w_{t}$ .
The recurrent unit $rnn_{\theta}(w_{t}, h_{t-1})$ is used to capture the historical information $h_{t}$ . $q_{\theta}(s_{t}|h_{t})$ and $p_{\theta}^{prior}(s_{t}|s_{t-1}, a_{t-1})$ are proposed distributions of hidden states. The hidden state $s_{t}$ and inner hidden state $\hat{s}_{t}$ are sampled from $q_{\theta}(s_{t}|h_{t})$ and $p_{\theta}^{prior}(s_{t}|s_{t-1}, a_{t-1})$ respectively. The Q-function $Q_{\theta}(s_{t}, a_{t})$ is learned via the temporal difference (TD) algorithm with soft update. + +We apply structured variational inference to optimize the evidence lower bound of POMDPs. In structured variational inference, different parts of the proposal distributions can be optimized separately, which means we can fix some approximate functions and optimize other approximate functions. In POMDPs, we will use two approximate functions $q_{\pi}(a_t|s_t)$ and $q_{\theta}(s_t|s_{t - 1},a_{t - 1},o_t)$ . $q_{\pi}(\cdot)$ approximates the optimal policy and $q_{\theta}(\cdot)$ approximates the function of inferring hidden states, where $\theta$ is the parameter of the approximate function. When $q_{\theta}(\cdot)$ is fixed, the learning procedure is the same as in MERL, so that $q_{\pi}(\cdot)$ can be learned via the soft Q-learning algorithm. Conversely, when $q_{\pi}(\cdot)$ is fixed as the optimal policy, we can learn the inference function $q_{\theta}(\cdot)$ for hidden states.
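As a minimal sketch of the forward path in Fig. 2 (the feature extractor $f_{\theta}$, the recurrent unit, a Gaussian head for $q_{\theta}(s_t|h_t)$, and the re-parameterization trick), with all layer sizes and the simplified recurrent cell being our own assumptions rather than the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: |o| = 6, |a| = 3 (one-hot), feature/hidden dim 8, |s| = 4.
W_f = 0.1 * rng.standard_normal((8, 10))
W_mu = 0.1 * rng.standard_normal((4, 8))
W_sig = 0.1 * rng.standard_normal((4, 8))

def f_theta(o, a_prev, r_prev):
    """f_theta(o_t, a_{t-1}, r_{t-1}) -> low-dim feature w_t."""
    return np.tanh(W_f @ np.concatenate([o, a_prev, [r_prev]]))

def rnn_theta(w, h):
    """Stand-in for the GRU/LSTM cell that carries past information."""
    return np.tanh(w + h)

def q_theta(h):
    """q_theta(s_t | h_t) as a diagonal Gaussian (mu, sigma)."""
    return W_mu @ h, np.exp(W_sig @ h)

h = np.zeros(8)
o, a_prev, r_prev = rng.standard_normal(6), np.eye(3)[0], 0.0
w = f_theta(o, a_prev, r_prev)             # w_t
h = rnn_theta(w, h)                        # h_t summarizes the history
mu, sigma = q_theta(h)
s = mu + sigma * rng.standard_normal(4)    # re-parameterization trick
```

In the full model the sampled $s_t$ would feed both the Q-function and the reconstruction path; only the inference path is sketched here.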
We can derive the evidence lower bound (ELBO) as below: + +$$ +\begin{array}{l} \log p \left(\mathcal {O} _ {0: T}, a _ {0: T}, o _ {1: T}\right) = \log \mathbb {E} _ {q _ {\theta} \left(s _ {1: T} \mid \mathcal {O} _ {0: T}, a _ {0: T}, o _ {1: T}\right)} \left[ \frac {p \left(s _ {1 : T} , \mathcal {O} _ {0 : T} , a _ {0 : T} , o _ {1 : T}\right)}{q _ {\theta} \left(s _ {1 : T} \mid \mathcal {O} _ {0 : T} , a _ {0 : T} , o _ {1 : T}\right)} \right] \tag {10} \\ \geq \mathbb {E} _ {q _ {\theta} (s _ {1: T} | \mathcal {O} _ {0: T}, a _ {0: T}, o _ {1: T})} \log \left[ \frac {p (s _ {1 : T} , \mathcal {O} _ {0 : T} , a _ {0 : T} , o _ {1 : T})}{q _ {\theta} (s _ {1: T} | \mathcal {O} _ {0 : T} , a _ {0 : T} , o _ {1 : T})} \right], \\ \end{array} +$$ + +and the ELBO can be written as: + +$$ +\begin{array}{l} \mathcal {L} \left(\mathcal {O} _ {0: T}, a _ {0: T}, o _ {1: T}\right) = \mathbb {E} _ {q _ {\theta} \left(s _ {1: T} \mid \mathcal {O} _ {0: T}, a _ {0: T}, o _ {1: T}\right)} \sum_ {t = 1} ^ {T} \Big\{ r \left(s _ {t}, a _ {t}\right) + \log \left[ p \left(a _ {t}\right) p \left(o _ {t} \mid s _ {t}\right) \right] \tag {11} \\ \quad - \mathcal {D} _ {K L} \left[ q _ {\theta} \left(s _ {t} \mid s _ {t - 1}, a _ {t - 1}, o _ {t}\right) \| p \left(s _ {t} \mid s _ {t - 1}, a _ {t - 1}\right) \right] \Big\}. \\ \end{array} +$$ + +To get the optimal action in POMDPs, we need to maximize the ELBO above. The first term in the ELBO is the cumulative reward $\sum_{t=1}^{T} r(s_t, a_t)$ , which can be optimized via the maximum entropy reinforcement learning algorithm. + +We apply generative models to optimize the remaining terms in the ELBO. The term $p(o_{t}|s_{t})$ means that the hidden state $s_t$ needs to have the ability to generate the current observation $o_t$ .
The Kullback-Leibler divergence term $\mathcal{D}_{KL}[q_{\theta}(s_t|s_{t-1},a_{t-1},o_t)||p(s_t|s_{t-1},a_{t-1})]$ indicates that we should minimize the gap between the approximate function $q_{\theta}(s_t|s_{t-1},a_{t-1},o_t)$ and the prior $p(s_t|s_{t-1},a_{t-1})$ . In standard VAEs, the prior $p(\cdot)$ is generally fixed and can be set as a standard Gaussian. However, in this generative model, the prior of the hidden state $s_t$ is conditioned on the previous state $s_{t-1}$ and action $a_{t-1}$ . We will show how we deal with the conditional prior in the next subsection. More details about the derivation of the ELBO can be found in Appendix A. + +# 4.2 VARIATIONAL AUTOENCODERS FOR THE CONDITIONAL PRIOR + +The KL-divergence term in Eq. (11) introduces a conditional prior for the hidden state $s_t$ , but the true conditional distribution $p(s_t | s_{t-1}, a_{t-1})$ is unknown, making inference intractable. Hence we propose a new parameterized function $p_\theta^{prior}(s_t | s_{t-1}, a_{t-1})$ to approximate the true distribution $p(s_t | s_{t-1}, a_{t-1})$ . We use a standard VAE to learn $p_\theta^{prior}(\cdot)$ : $(s_{t-1}, a_{t-1})$ is treated as the inner observed data in this VAE model and the inner hidden state $\hat{s}_t$ is inferred from $(s_{t-1}, a_{t-1})$ . We can write down the ELBO for the conditional prior directly by using the ELBO of standard VAEs (see Appendix B): + +$$ +\begin{array}{l} \mathcal {L} \left(s _ {t - 1}, a _ {t - 1}\right) = - \mathcal {D} _ {K L} \left(p _ {\theta} ^ {prior} \left(s _ {t} \mid s _ {t - 1}, a _ {t - 1}\right) \| p \left(s _ {t}\right)\right) \tag {12} \\ \quad + \mathbb {E} _ {p _ {\theta} ^ {prior} (s _ {t} | s _ {t - 1}, a _ {t - 1})} \left[ \log p (s _ {t - 1}, a _ {t - 1} | s _ {t}) \right], \\ \end{array} +$$ + +where $p(s_{t})$ can be set as a standard Gaussian. The $p_{\theta}^{prior}(\cdot)$ can be learned via a standard VAE paradigm.
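Since both the approximate posterior and the learned prior are parameterized as diagonal Gaussians, every KL term in the model can be evaluated in closed form. A sketch of that closed form (the function name and the toy values are ours):

```python
import numpy as np

def kl_diag_gaussians(mu_q, sigma_q, mu_p, sigma_p):
    """KL[N(mu_q, diag(sigma_q^2)) || N(mu_p, diag(sigma_p^2))] in closed form."""
    var_q, var_p = sigma_q ** 2, sigma_p ** 2
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

mu, sigma = np.array([0.5, -0.3]), np.array([1.2, 0.8])
self_kl = kl_diag_gaussians(mu, sigma, mu, sigma)                 # zero
kl_to_std = kl_diag_gaussians(mu, sigma, np.zeros(2), np.ones(2))  # positive
```

With `mu_p = 0, sigma_p = 1` this is the regularizer toward the standard Gaussian $p(s_t)$; with the outputs of $p_{\theta}^{prior}$ it gives the KL between the approximate posterior and the learned conditional prior.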
+ +The outputs of $p_{\theta}^{prior}(\cdot)$ are $\mu^{prior}$ and $\sigma^{prior}$ , so that the KL-divergence term in Eq. (11) can be written as: + +$$ +\begin{array}{l} \mathcal {D} _ {K L} \left[ q _ {\theta} \left(s _ {t} \mid s _ {t - 1}, a _ {t - 1}, o _ {t}\right) \| p \left(s _ {t} \mid s _ {t - 1}, a _ {t - 1}\right) \right] \\ \approx \mathcal {D} _ {K L} \left[ q _ {\theta} \left(s _ {t} \mid s _ {t - 1}, a _ {t - 1}, o _ {t}\right) \| p _ {\theta} ^ {prior} \left(s _ {t} \mid s _ {t - 1}, a _ {t - 1}\right) \right] \tag {13} \\ = \mathcal {D} _ {K L} \left[ q _ {\theta} (s _ {t} | s _ {t - 1}, a _ {t - 1}, o _ {t}) \| \mathcal {N} (\mu^ {prior}, \mathrm {diag} ((\sigma^ {prior}) ^ {2})) \right]. \\ \end{array} +$$ + +Unlike standard VAEs, the second distribution of the KL-divergence function in Eq. (13) is not a standard normal distribution. But we can still get the final loss function if we expand the KL-divergence function. We present the final formula of the loss function for Eq. (13) in Appendix C. + +# 4.3 SEQUENTIAL VARIATIONAL SOFT Q-LEARNING NETWORKS + +We design a deep recurrent neural network to improve the ELBO derived in Section 4.1 and Section 4.2. Fig. 2 shows the overall structure of our algorithm. + +In our implementation, there are two generative models, i.e., one generative model learns the conditional prior $p_{\theta}^{prior}(\cdot)$ and reconstructs the inner observed data $(s_{t-1}, a_{t-1})$ , and the other generative model learns $q_{\theta}(\cdot)$ in Eq. (13) and reconstructs the current observation $o_t$ . + +For the first generative model, there are two losses, i.e., $L_{KL}^{inner}$ and $L_{MSE}^{inner}$ .
$L_{KL}^{inner}$ is the negated KL-divergence between $p_{\theta}^{prior}(s_t|s_{t-1}, a_{t-1})$ and a standard Gaussian: + +$$ +L _ {K L} ^ {inner} = - \mathcal {D} _ {K L} \left[ p _ {\theta} ^ {prior} \left(s _ {t} \mid s _ {t - 1}, a _ {t - 1}\right) \| \mathcal {N} (\mathbf {0}, I) \right]. \tag {14} +$$ + +And $L_{MSE}^{inner}$ is the reconstruction loss of $(s_{t - 1},a_{t - 1})$ : + +$$ +L _ {M S E} ^ {inner} = M S E \left(\left(s _ {t - 1}, a _ {t - 1}\right), \varphi_ {\theta} ^ {inner} \left(\hat {s} _ {t}\right)\right), \tag {15} +$$ + +where $MSE(\cdot)$ is the mean-square error function, the inner hidden state $\hat{s}_t$ is sampled from $p_{\theta}^{prior}(s_t | s_{t-1}, a_{t-1})$ , and $\varphi_{\theta}^{inner}(\cdot)$ is a reconstruction function. + +For the second generative model, there are also two losses, i.e., $L_{KL}^{elbo}$ and $L_{MSE}^{elbo}$ . $L_{KL}^{elbo}$ is defined as: + +$$ +L _ {K L} ^ {elbo} = - \mathcal {D} _ {K L} \left[ q _ {\theta} \left(s _ {t} \mid s _ {t - 1}, a _ {t - 1}, o _ {t}\right) \| p _ {\theta} ^ {prior} \left(s _ {t} \mid s _ {t - 1}, a _ {t - 1}\right) \right]. \tag {16} +$$ + +And $L_{MSE}^{elbo}$ is the reconstruction loss of $o_t$ : + +$$ +L _ {M S E} ^ {elbo} = M S E \left(o _ {t}, \varphi_ {\theta} ^ {elbo} \left(s _ {t}\right)\right), \tag {17} +$$ + +where $MSE(\cdot)$ is the mean-square error function, the hidden state $s_t$ is sampled from $q_{\theta}(s_t | s_{t-1}, a_{t-1}, o_t)$ , and $\varphi_{\theta}^{elbo}(\cdot)$ is a reconstruction function. + +![](images/1ac4b382cd2e330393b682a48411b22f9783d0e8f83b236cd7f419a177d7134d.jpg) +Figure 3: Screen shots of Atari games and ViZDoom tasks. From left to right, they are Pong, ChopperCommand, DoubleDunk, Asteroids, Health Gather, Health Gather v2 and Defend Center.
When only given one frame as the input, these Atari games are POMDPs because we cannot obtain the velocity of the moving object from a single observation. Because the agent in the ViZDoom environment can only see in one direction, partial observability is naturally introduced into these tasks. + +![](images/09f8ea3805c7660cdbe1294ab20efe2de37d076e72ff0361c6d651292d30fddb.jpg) + +![](images/4ab8b053f619efb7b4d2a3fb14cc3f80e88503719bf76c55deabaec401d2d97f.jpg) + +![](images/d76bbfebf3ef9241627ae16128fadb6c36cd02afc9f6317ab793cd6d1fce4a00.jpg) + +![](images/275e3aafe079f5003cad6df07156fbf593ffd3125acb2796eb897ac27098ac21.jpg) + +![](images/b4850813568b202f47fdafedc613caddc47cc5d5d08394390b5d183a5e071f49.jpg) + +![](images/808b6fd1ddf110a6706b2542a44c42813857a185bf2213884876deda52af1537.jpg) + +Finally, we get two kinds of loss functions for the two generative models, i.e., the reconstruction loss $L_{MSE} = L_{MSE}^{inner} + L_{MSE}^{elbo}$ and the KL-divergence loss $L_{KL} = L_{KL}^{inner} + L_{KL}^{elbo}$ . + +And for the planning algorithm, we use the soft Q-learning algorithm (Levine, 2018). Its loss function is the temporal difference error $L_{TD}$ , which is updated via Eq. (9). All these losses can be jointly optimized via stochastic gradient descent algorithms. + +To reduce the computational complexity, we use a deep recurrent neural network to capture the historical information. We first use $f_{\theta}(o_t, a_{t-1}, r_{t-1})$ to extract a low-dim hidden feature $w_t$ from the current input, and then feed these features to a recurrent unit $rnn_{\theta}(\cdot)$ to get the recurrent output $h_t$ . Because $h_t$ contains the information of past observations, the loss function in Eq. (16) can be rewritten as: + +$$ +L _ {K L} ^ {elbo} = - \mathcal {D} _ {K L} \left[ q _ {\theta} \left(s _ {t} \mid h _ {t}\right) \| p _ {\theta} ^ {prior} \left(s _ {t} \mid s _ {t - 1}, a _ {t - 1}\right) \right].
\tag {18} +$$ + +In the training, we tried both the LSTM cell (Hochreiter & Schmidhuber, 1997) and the GRU cell (Cho et al., 2014) as basic recurrent units. Because of the generalization ability of the recurrent neural network (Lample & Chaplot, 2017), we can just sample a fixed length $H$ of sequential data for training instead of using the full-length data. In the experiment, we also studied how the training length $H$ influences the final performance. We use a parallel training strategy, i.e., the program hosts multiple games in parallel and they all send batched data to a central data memory. This is quite similar to the data collection method in ELF (Tian et al., 2017). More details about the training strategy can be found in Appendix E. + +# 5 EXPERIMENTS + +We evaluate our algorithm on flickering Atari (Hausknecht & Stone, 2015) and the ViZDoom platform (Kempka et al., 2016). Flickering Atari was previously used as the test environment in DRQN (Hausknecht & Stone, 2015), ADRQN (Zhu et al., 2018) and DVRL (Igl et al., 2018). The ViZDoom platform is a 3D FPS game for AI research. The agent in ViZDoom needs to navigate in the 3D environment to accomplish various tasks, such as gathering resources, shooting enemies and looking for the exit. Because the agent can only see in one direction at each time step, this naturally introduces partial observability to the task. + +# 5.1 EXPERIMENT SETUP + +We used some recent algorithms as baselines. Because the true state and transition function are unknown, only methods which can be trained under the model-free setting are used for comparison. The baselines are listed below: +Deep Q-Networks (DQN) (Mnih et al., 2015): A method which uses the standard Bellman backup and the temporal difference error as the objective function. +Deep Soft Q-Networks (DSQN) (Levine, 2018): A method which uses the soft Bellman backup and the temporal difference error as the objective function.
+Deep Recurrent Q-Networks (DRQN) (Hausknecht & Stone, 2015): A method which applies an LSTM on top of DQN. +Action-Specific Deep Recurrent Q-Networks (ADRQN) (Zhu et al., 2018): A method which extends DRQN, i.e., it uses both the observation and action as inputs. + +Deep Variational Reinforcement Learning (DVRL) (Igl et al., 2018): A method which combines sequential Monte Carlo and A2C (Dhariwal et al., 2017) to solve POMDPs. + +# 5.2 EVALUATION ON FLICKERING ATARI + +Atari environments (Bellemare et al., 2013) are widely used as benchmarks for deep reinforcement learning algorithms due to their high-dimensional observation spaces and numerous challenging tasks. Flickering Atari was introduced by Hausknecht & Stone (2015). In each running step, the observation may be obscured with a certain probability, i.e., the raw screen will be either fully observable or fully obscured with black pixels. The experimental settings are kept in line with DVRL (Igl et al., 2018), i.e., only one frame is used, the frameskip is set to four and each frame is obscured with a probability of 0.5. + +We choose four Atari games (i.e., Pong, ChopperCommand, DoubleDunk and Asteroids) for evaluation. Fig. 3 shows the screen shots of these games. When only given one frame as the input, these tasks are POMDPs because we cannot obtain the velocity of the moving object from a single observation. These tasks have high-dimensional and continuous observation spaces, but discrete action spaces. The basic network architecture and hyper-parameters are similar to DQN (Mnih et al., 2015). For the recurrent neural networks, we use a sequence length of 5 for training. All the algorithms train for 10,000,000 steps and run for 100 episodes during evaluation. The training details can be found in Appendix E. + +Table 1 shows the performance results of different algorithms on these Atari games.
We can see that our algorithms significantly outperform other baselines on three of the games and obtain a score close to DVRL's on ChopperCommand. Compared with DRQN and ADRQN, our method introduces inductive bias to the network, which helps state estimation and RL planning for POMDPs. Compared with DVRL, our method can achieve competitive performance with lower sampling complexity. + +
| | DQN | DSQN | DRQN | ADRQN | DVRL | SVQN(GRU) | SVQN(LSTM) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Pong | -4.9(±2.6) | -2.1(±2.3) | 1.6(±7.8) | 7(±4.6) | 18.2(±2.7) | 19.2(±1.3) | 18.6(±1.9) |
| Chopper | 1350(±731) | 1250(±522) | 1090(±409) | 1608(±707) | 6602(±449) | 6005(±258) | 5805(±312) |
| DDunk | -16.2(±3.1) | -16.9(±2.8) | -14.4(±3.2) | -12.9(±3.6) | -6.0(±1.3) | -5.8(±1.4) | -5.5(±1.2) |
| Asteroids | 935(±410) | 940(±320) | 871(±340) | 1040(±432) | 1539(±73) | 1645(±102) | 1585(±86) |
+ +Table 1: Evaluation results of different models on flickering Atari. The values are the final evaluation scores after training for different algorithms. Values in parentheses indicate the standard deviation. Evaluations use the Mann-Whitney rank test and bold numbers indicate statistical significance at the $5\%$ level. Our algorithms outperform other baselines on three of the games and obtain a score close to DVRL's on ChopperCommand. + +# 5.3 EVALUATION ON VIZDOOM TASKS + +We designed three tasks in ViZDoom as the evaluation tasks for our algorithm. + +Health Gather: The agent needs to gather health supplies in a flat map. The agent will lose health every time step. Accordingly, if it can't gather enough health supplies, it will die and the game is over. At each time step, the agent will receive a reward of 0.001, which encourages the agent to live a longer life. When it collects one health supply, it will receive a reward of 1. The observation for the agent is the grayscale image of its vision and its health value. The agent has three actions to choose from, i.e., TURN LEFT, TURN RIGHT and MOVE FORWARD. + +Health Gather v2: This is a more difficult task than the previous one, with a more complex map. There is some poison in the map. When the agent encounters poison, it will lose health. We use the same reward scheme, observations and actions as in Health Gather. + +Defend Center: In this task, the agent stands in the center of a flat map. Monsters will walk toward the agent from the edge of the map. When the monsters touch the agent, the agent will lose health. The agent holds a gun with limited ammo, and it can kill the monsters by pressing down the shooting button.
The agent gets a reward of 4 for killing a monster, a reward of -0.2 for using ammo and a reward of + +![](images/35ee8c6e51dbfebee3bdd6d452dfacfe977c123518387a59c1411565b1fdf256.jpg) +(a) + +![](images/e7712cb999e9e8ddc4cf288a18a4110234345da91f7d4075c8fe0c3e36334bd6.jpg) +(b) + +![](images/8210e9fc14d89626184af127b79d2c1cdf8985fddae27a40891867547cfae1b5.jpg) +(c) +Figure 4: (a) Training curves on Health Gather. Both SVQN models outperform other baselines. (b) Performances of the models with different training sequence lengths on Health Gather. When the training sequence length is too long, it is hard for the algorithm to gather useful information through a long gradient flow. (c) Evaluation results of different models under different observation probabilities on Health Gather; all models are trained with full observation. The results show that SVQN models are more robust to the disturbance of the observation than other algorithms.
| | DQN | DSQN | DRQN | ADRQN | SVQN(GRU) | SVQN(LSTM) |
| --- | --- | --- | --- | --- | --- | --- |
| Health Gather | 27.8(±1.4) | 23.3(±1.7) | 32.4(±2.0) | 29.9(±1.6) | 37.7(±0.9) | 42.3(±1.7) |
| Health Gather v2 | 9.8(±1.5) | 6.7(±2.1) | 13.2(±2.5) | 14.3(±1.2) | 15.8(±1.1) | 18.2(±0.9) |
| Defend Center | 35.4(±1.3) | 38.5(±1.5) | 46.0(±0.5) | 45.8(±2.2) | 50.3(±2.1) | 48(±1.4) |
+ +Table 2: Evaluation results of different models on ViZDoom. The values are the final evaluation scores after training for different algorithms. Values in parentheses indicate the standard deviation. Evaluations use the Mann-Whitney rank test and bold numbers indicate statistical significance at the $5\%$ level. The SVQN models achieve the best performance on these three tasks. + +-1 when losing health. The observation for the agent is the grayscale image of its vision, its health value and the remaining ammo. The agent has three actions to choose from, i.e., TURN LEFT, TURN RIGHT and ATTACK. + +Fig. 3 shows the screen shots of these three tasks. All the algorithms are trained with the same basic network architecture and use the same hyperparameters. All the models take only one observation as input at each time step and the vision inputs are resized to the resolution of $84 \times 84$ . For the recurrent neural networks, we use a sequence length of 5 for training. All the algorithms train for 300,000 steps and run for 20 episodes during evaluation. The discount factor $\gamma$ is set to 0.95, the learning rate is 0.0001 and the Adam optimizer (Kingma & Ba, 2014) is used for training. The network architecture for these tasks and training details are shown in Appendix D. + +Fig. 4(a) shows the training curves of different methods on Health Gather. Both SVQN models outperform other baselines. Table 2 reports the final scores of different models on these three tasks. The SVQN models achieve the best performance on these three tasks. Experiments on ViZDoom indicate that the generative models of SVQNs can improve the agent's exploration ability in complicated unknown environments. + +# 5.4 ABLATION STUDY + +Training sequence length: We study how the training sequence length $H$ impacts the algorithm's performance. We use the SVQN(LSTM) model and the environment of Health Gather for this study. Fig.
4(b) shows the training processes of models with different training sequence lengths ( $H = 2, 5, 10, 15$ ). We can see that, when the sequence length is too short ( $H = 2$ ), the model cannot gather enough historical information, which causes performance degradation. When the sequence length is too long ( $H = 15$ ), optimization becomes difficult for the network, which also + +leads to performance degradation. This experiment also shows that, with a limited training length, the algorithm can generalize to sequences of arbitrary length during testing. Thanks to this generalization ability, we can train the agent on data of a fixed length instead of the full-length data, which reduces the computational complexity. + +Observation probability: We also study whether the different models are robust to noise in the observation. We add a modification to the game Health Gather: at each time step, the observation of the screen is either fully revealed or fully obscured, with a fixed observation probability $p$ . Fig. 4(c) shows the evaluation results of the different models under different observation probabilities $(p = 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9)$ . All the models are trained in the standard environment with full observations. The results show that the SVQN models are relatively robust to disturbance of the observation compared to the other algorithms. + +# 6 CONCLUSIONS + +We propose a novel algorithm named Sequential Variational Soft Q-Learning Networks (SVQN) to solve POMDPs with discrete action spaces. SVQN is model-free and does not require knowledge of the true state representation. We apply generative models to handle the conditional prior over hidden states and use a recurrent neural network to reduce the computational complexity, i.e., trained on data of a small length, it can generalize to test data of arbitrary length.
Our designed deep neural network can be trained end-to-end, which optimizes the planning and the inference of hidden states jointly. Experimental results show that SVQN outperforms previous methods on challenging tasks and is robust to disturbances of the observation. SVQN is also flexible and can be integrated with other maximum entropy reinforcement learning algorithms, such as soft actor-critic (Haarnoja et al., 2018). In the future, we will try to develop algorithms for POMDP problems with continuous action spaces. + +# ACKNOWLEDGMENTS + +This work was supported by the National Key Research and Development Program of China (No. 2017YFA0700904), NSFC Projects (Nos. 61620106010, U19B2034, U1811461), Beijing NSF Project (No. L172037), Beijing Academy of Artificial Intelligence (BAAI), Tsinghua-Huawei Joint Research Program, a grant from Tsinghua Institute for Guo Qiang, Tiangong Institute for Intelligent Computing, the JP Morgan Faculty Research Program and the NVIDIA NVAIL Program with GPU/DGX Acceleration. + +# REFERENCES + +K. J. Aström. Optimal control of Markov decision processes with incomplete state estimation. *J. Math. Anal. Appl.*, 10:174-205, 1965. +Benedicte M Babayan, Naoshige Uchida, and Samuel J Gershman. Belief state representation in the dopamine system. Nature communications, 9(1):1891, 2018. +Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47: 253-279, 2013. +Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014. +Jongwook Choi, Yijie Guo, Marcin Moczulski, Junhyuk Oh, Neal Wu, Mohammad Norouzi, and Honglak Lee. Contingency-aware exploration in reinforcement learning.
arXiv preprint arXiv:1811.01483, 2018. +Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in neural information processing systems, pp. 2980-2988, 2015. + +Pierre-Arnaud Coquelin, Romain Deguest, and Rémi Munos. Particle filter-based policy gradient in pomdps. In Advances in Neural Information Processing Systems, pp. 337-344, 2009. +Prafulla Dhariwal, Christopher Hesse, Oleg Klimov, Alex Nichol, Matthias Plappert, Alec Radford, John Schulman, Szymon Sidor, Yuhuai Wu, and Peter Zhokhov. Openai baselines. https://github.com/openai/baselines, 2017. +Maxim Egorov. Deep reinforcement learning with pomdps. 2015. +Jakob N Foerster, Yannis M Assael, Nando de Freitas, and Shimon Whiteson. Learning to communicate to solve riddles with deep distributed recurrent q-networks. arXiv preprint arXiv:1602.02672, 2016. +Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290, 2018. +Matthew Hausknecht and Peter Stone. Deep recurrent q-learning for partially observable mdps. In 2015 AAAI Fall Symposium Series, 2015. +Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735-1780, 1997. +Shiyu Huang, Hang Su, Jun Zhu, and Ting Chen. Combo-action: Training agent for fps game with auxiliary tasks. AAAI, 2019. +Maximilian Igl, Luisa Zintgraf, Tuan Anh Le, Frank Wood, and Shimon Whiteson. Deep variational reinforcement learning for pomdps. arXiv preprint arXiv:1806.02426, 2018. +Leslie Pack Kaelbling, Michael L Littman, and Anthony R Cassandra. Planning and acting in partially observable stochastic domains. Artificial intelligence, 101(1-2):99-134, 1998. +Peter Karkus, David Hsu, and Wee Sun Lee. Qmdp-net: Deep learning for planning under partial observability.
In Advances in Neural Information Processing Systems, pp. 4694-4704, 2017. +Michał Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Jaśkowski. Vizdoom: A doom-based AI research platform for visual reinforcement learning. In 2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-8. IEEE, 2016. +Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. +Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. +Guillaume Lample and Devendra Singh Chaplot. Playing fps games with deep reinforcement learning. In Thirty-First AAAI Conference on Artificial Intelligence, 2017. +Tuan Anh Le, Maximilian Igl, Tom Rainforth, Tom Jin, and Frank Wood. Auto-encoding sequential monte carlo. arXiv preprint arXiv:1705.10306, 2017. +Sergey Levine. Reinforcement learning and control as probabilistic inference: Tutorial and review. arXiv preprint arXiv:1805.00909, 2018. +David A McAllester and Satinder Singh. Approximate planning for factored pomdps using belief state simplification. arXiv preprint arXiv:1301.6719, 2013. +R Andrew McCallum. Overcoming incomplete perception with utile distinction memory. In Proceedings of the Tenth International Conference on Machine Learning, pp. 190-196, 1993. +Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015. + +Zhen-Jia Pang, Ruo-Ze Liu, Zhou-Yu Meng, Yi Zhang, Yang Yu, and Tong Lu. On reinforcement learning for full-length game of starcraft. arXiv preprint arXiv:1809.09095, 2018. +Deepak Pathak, Pulkit Agrawal, Alexei A Efros, and Trevor Darrell. Curiosity-driven exploration by self-supervised prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp.
16-17, 2017. +David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016. +Trey Smith and Reid Simmons. Heuristic search value iteration for pomdps. In Proceedings of the 20th conference on Uncertainty in artificial intelligence, pp. 520-527. AUAI Press, 2004. +Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, and C Lawrence Zitnick. Elf: An extensive, lightweight and flexible research platform for real-time strategy games. In Advances in Neural Information Processing Systems, pp. 2659-2669, 2017. +Yuxin Wu and Yuandong Tian. Training agent for first-person shooter game with actor-critic curriculum learning. 2016. +Vinicius Zambaldi, David Raposo, Adam Santoro, Victor Bapst, Yujia Li, Igor Babuschkin, Karl Tuyls, David Reichert, Timothy Lillicrap, Edward Lockhart, et al. Relational deep reinforcement learning. arXiv preprint arXiv:1806.01830, 2018. +Pengfei Zhu, Xin Li, Pascal Poupart, and Guanghui Miao. On improving deep reinforcement learning for pomdps. arXiv preprint arXiv:1804.06309, 2018.
+ +# A DERIVATION OF THE VARIATIONAL LOWER BOUND + +$$
\begin{array}{l}
\log p\left(\mathcal{O}_{0:T}, a_{0:T}, o_{1:T}\right) \\
= \log \mathbb{E}_{q\left(s_{1:T} \mid \mathcal{O}_{0:T}, a_{0:T}, o_{1:T}\right)}\left[\frac{p\left(s_{1:T}, \mathcal{O}_{0:T}, a_{0:T}, o_{1:T}\right)}{q\left(s_{1:T} \mid \mathcal{O}_{0:T}, a_{0:T}, o_{1:T}\right)}\right] \\
\geq \mathbb{E}_{q\left(s_{1:T} \mid \mathcal{O}_{0:T}, a_{0:T}, o_{1:T}\right)} \log\left[\frac{p\left(s_{1:T}, \mathcal{O}_{0:T}, a_{0:T}, o_{1:T}\right)}{q\left(s_{1:T} \mid \mathcal{O}_{0:T}, a_{0:T}, o_{1:T}\right)}\right] \\
= \int q\left(s_{1:T} \mid \mathcal{O}_{0:T}, a_{0:T}, o_{1:T}\right) \log\left[\frac{p\left(s_{1:T}, \mathcal{O}_{0:T}, a_{0:T}, o_{1:T}\right)}{q\left(s_{1:T} \mid \mathcal{O}_{0:T}, a_{0:T}, o_{1:T}\right)}\right] ds_{1:T} \\
= \int \sum_{t=1}^{T} q\left(s_{1:T} \mid \mathcal{O}_{0:T}, a_{0:T}, o_{1:T}\right) \log\left[\frac{p\left(a_{t}\right) p\left(\mathcal{O}_{t} \mid s_{t}, a_{t}\right) p\left(s_{t} \mid s_{t-1}, a_{t-1}\right) p\left(o_{t} \mid s_{t}\right)}{q\left(s_{t} \mid s_{t-1}, a_{t-1}, o_{t}\right)}\right] ds_{1:T} \\
= \sum_{t=1}^{T} \int q\left(s_{1:t} \mid \mathcal{O}_{0:t}, a_{0:t}, o_{1:t}\right) \log\left[\frac{p\left(a_{t}\right) p\left(\mathcal{O}_{t} \mid s_{t}, a_{t}\right) p\left(s_{t} \mid s_{t-1}, a_{t-1}\right) p\left(o_{t} \mid s_{t}\right)}{q\left(s_{t} \mid s_{t-1}, a_{t-1}, o_{t}\right)}\right] ds_{1:t} \\
= \sum_{t=1}^{T} \Bigg\{ \int q\left(s_{1:t} \mid \mathcal{O}_{0:t}, a_{0:t}, o_{1:t}\right) \log\left[p\left(a_{t}\right) p\left(\mathcal{O}_{t} \mid s_{t}, a_{t}\right) p\left(o_{t} \mid s_{t}\right)\right] ds_{1:t} \\
\quad + \int q\left(s_{1:t} \mid \mathcal{O}_{0:t}, a_{0:t}, o_{1:t}\right) \log\left[\frac{p\left(s_{t} \mid s_{t-1}, a_{t-1}\right)}{q\left(s_{t} \mid s_{t-1}, a_{t-1}, o_{t}\right)}\right] ds_{1:t} \Bigg\} \\
= \sum_{t=1}^{T} \Bigg\{ \int q\left(s_{1:t} \mid \mathcal{O}_{0:t}, a_{0:t}, o_{1:t}\right) \log\left[p\left(a_{t}\right) p\left(\mathcal{O}_{t} \mid s_{t}, a_{t}\right) p\left(o_{t} \mid s_{t}\right)\right] ds_{1:t} \\
\quad - \int q\left(s_{1:t-1} \mid \mathcal{O}_{0:t-1}, a_{0:t-1}, o_{1:t-1}\right) \mathcal{D}_{KL}\left[q\left(s_{t} \mid s_{t-1}, a_{t-1}, o_{t}\right) \,\|\, p\left(s_{t} \mid s_{t-1}, a_{t-1}\right)\right] ds_{1:t-1} \Bigg\} \\
= \mathbb{E}_{q\left(s_{1:T} \mid \mathcal{O}_{0:T}, a_{0:T}, o_{1:T}\right)} \sum_{t=1}^{T} \Big\{ \log\left[p\left(a_{t}\right) p\left(\mathcal{O}_{t} \mid s_{t}, a_{t}\right) p\left(o_{t} \mid s_{t}\right)\right] \\
\quad - \mathcal{D}_{KL}\left[q\left(s_{t} \mid s_{t-1}, a_{t-1}, o_{t}\right) \,\|\, p\left(s_{t} \mid s_{t-1}, a_{t-1}\right)\right] \Big\} \\
\simeq \sum_{t=1}^{T} \Big\{ r\left(s_{t}, a_{t}\right) + \log\left[p\left(a_{t}\right) p\left(o_{t} \mid s_{t}\right)\right] - \mathcal{D}_{KL}\left[q\left(s_{t} \mid s_{t-1}, a_{t-1}, o_{t}\right) \,\|\, p\left(s_{t} \mid s_{t-1}, a_{t-1}\right)\right] \Big\}, \\
\quad \text{where } s_{1:T} \sim q\left(s_{1:T} \mid \mathcal{O}_{0:T}, a_{0:T}, o_{1:T}\right)
\end{array} \tag{19}
$$ + +# B VARIATIONAL AUTOENCODERS + +Variational auto-encoders (VAEs) (Kingma & Welling, 2013) are effective generative models that can recover complex distributions over the data space. In the VAE, there are observed data $x$ and the underlying causal factor $z$ .
Usually, exact inference of the posterior $p(z|x)$ is intractable. The VAE uses a variational distribution $q(z|x)$ to approximate the true posterior. The lower bound on the marginal distribution of the observed data is given by: + +$$
\log p (x) = \log \mathbb {E} _ {q (z | x)} \left[ \frac {p (x , z)}{q (z | x)} \right] \geq \mathbb {E} _ {q (z | x)} \left[ \log \frac {p (x , z)}{q (z | x)} \right], \tag {20}
$$ + +and the lower bound can be equivalently written as: + +$$
\mathcal {L} (x) = - \mathcal {D} _ {K L} (q (z | x) \,\|\, p (z)) + \mathbb {E} _ {q (z | x)} [ \log p (x | z) ], \tag {21}
$$ + +where $\mathcal{D}_{KL}(\cdot \,\|\, \cdot)$ is the Kullback-Leibler divergence between two distributions. The approximate posterior $q(z|x)$ is often set to a Gaussian $\mathcal{N}(\mu, \mathrm{diag}(\sigma^2))$ , where $\mu$ and $\sigma$ are the outputs of a non-linear function of $x$ . The generative model $p(x|z)$ and the inference model $q(z|x)$ can be trained jointly via standard backpropagation techniques. + +# C OPTIMIZE THE KLS IN EQ. (15) + +We give the final form of the loss function below: + +$$
\begin{array}{l}
\mathcal{D}_{KL}(p \,\|\, q) = \int p(x) \log \frac{p(x)}{q(x)}\, dx \\
= \int p(x) \log p(x)\, dx - \int p(x) \log q(x)\, dx \\
= \log \frac{\sigma_{q}}{\sigma_{p}} + \frac{\sigma_{p}^{2} + (\mu_{p} - \mu_{q})^{2}}{2 \sigma_{q}^{2}} - \frac{1}{2} \\
= \frac{1}{2}\left[2 \log \frac{\sigma_{q}}{\sigma_{p}} + \frac{\sigma_{p}^{2} + \left(\mu_{p} - \mu_{q}\right)^{2}}{\sigma_{q}^{2}} - 1\right] \\
= \frac{1}{2}\left[-2 \log \frac{\sigma_{p}}{\sigma_{q}} + \frac{\sigma_{p}^{2} + (\mu_{p} - \mu_{q})^{2}}{\sigma_{q}^{2}} - 1\right] \\
= -\frac{1}{2}\left[2\left(\log \sigma_{p} - \log \sigma_{q}\right) + 1 - \frac{\sigma_{p}^{2} + (\mu_{p} - \mu_{q})^{2}}{\sigma_{q}^{2}}\right]
\end{array} \tag{22}
$$

where $p$ and $q$ are two Gaussian distributions, and $\mu_p$ , $\mu_q$ , $\sigma_p$ and $\sigma_q$ are their means and standard deviations, respectively. + +# D NETWORK ARCHITECTURE + +# D.1 ATARI + +The network architecture for Atari is shown below: + +![](images/1cfc4d7370c7d4ac89cc122431ae857d3cc335c06b6e268e028303514b526e27.jpg) + +![](images/68f57ae29dae2edb7b956bc4bb00bcdb61d1bbbbaf5f5bd6fe2e6b1eedc2a0eb.jpg) + +![](images/26681b98fa6bc77943e1ec03518660758f949afc70e6a9192ea62dec53b2c1ae.jpg) + +# D.2 VIZDOOM + +The network architecture for ViZDoom is shown below: + +![](images/cfc466efb9b720d82fc1c36b3ce969cc0655661ecb5bb42d4ead56c6fa03f8ee.jpg) + +![](images/40060848b94ad0d85e1764b53938d668e1eddfe108eb92f17d1d1119c38e91f5.jpg) + +![](images/4b01346b124b83a7f7bfe61e20e707af0bab782808e511bab64230c1fb360dc7.jpg) + +# E TRAINING DETAILS + +Training architecture: + +![](images/c56cb8ce613b049f5960a28e4f33b3942f407a74ca9ca661a4d7e48c4c05eab9.jpg) + +The data-producing module, the training module and the evaluation module run in parallel. The framework is implemented in Python and TensorFlow. + +# E.1 ATARI + +All the models take only one observation as input at each time step, and the vision inputs are resized to a resolution of $84 \times 84$ . For the recurrent neural networks, we use a sequence length of 5 for training. All the algorithms train for 10,000,000 steps and run for 100 episodes during evaluation. + +# E.2 VIZDOOM + +All the algorithms are trained with the same basic network architectures and use the same hyperparameters. All the models take only one observation as input at each time step, and the vision inputs are resized to a resolution of $84 \times 84$ . For the recurrent neural networks, we use a sequence length of 5 for training. All the algorithms train for 300,000 steps and run for 20 episodes during evaluation. The discount factor $\gamma$ is set to 0.95, the learning rate is 0.0001, and the Adam optimizer is used for training.
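The closed-form Gaussian KL divergence derived in Eq. (22) of Appendix C can be sanity-checked numerically. The sketch below is illustrative only (the function names and the pure-Python integration are our own, not part of the paper's TensorFlow implementation): it compares the closed form against a direct numerical integration of $\int p(x)\log\frac{p(x)}{q(x)}\,dx$ for two univariate Gaussians.

```python
import math

def gaussian_kl(mu_p, sigma_p, mu_q, sigma_q):
    # Closed-form KL(p || q) for univariate Gaussians, matching Eq. (22):
    # log(sigma_q / sigma_p) + (sigma_p^2 + (mu_p - mu_q)^2) / (2 sigma_q^2) - 1/2
    return (math.log(sigma_q / sigma_p)
            + (sigma_p ** 2 + (mu_p - mu_q) ** 2) / (2.0 * sigma_q ** 2)
            - 0.5)

def gaussian_kl_numeric(mu_p, sigma_p, mu_q, sigma_q, lo=-50.0, hi=50.0, n=200_000):
    # Direct numerical integration of p(x) * log(p(x) / q(x)) over [lo, hi].
    def pdf(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        p = pdf(x, mu_p, sigma_p)
        if p > 1e-300:  # skip the tails where p(x) underflows to zero
            total += p * math.log(p / pdf(x, mu_q, sigma_q)) * h
    return total

closed_form = gaussian_kl(0.5, 1.0, 0.0, 2.0)
numeric = gaussian_kl_numeric(0.5, 1.0, 0.0, 2.0)
assert abs(closed_form - numeric) < 1e-4  # the two estimates agree
```

Because both the approximate posterior and the conditional prior are Gaussian, the KL term in Eq. (15) can be computed exactly in this closed form rather than estimated by sampling, which keeps the loss differentiable with respect to the network outputs $\mu$ and $\sigma$.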
\ No newline at end of file diff --git a/svqnsequentialvariationalsoftqlearningnetworks/images.zip b/svqnsequentialvariationalsoftqlearningnetworks/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..ddf72ef5c2acec728da70985f214712e8d1a8d93 --- /dev/null +++ b/svqnsequentialvariationalsoftqlearningnetworks/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c65dcd1bca9edc6906ef07272d1237cc2a9942d273665f1209f625fc169c8129 +size 735312 diff --git a/svqnsequentialvariationalsoftqlearningnetworks/layout.json b/svqnsequentialvariationalsoftqlearningnetworks/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..080542ad44fad3f11ffebe021b63a1216f9dd6d6 --- /dev/null +++ b/svqnsequentialvariationalsoftqlearningnetworks/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6509f6cdf7a408b541c05a96d4bec91ade18ebeef1b5198c1ab1d79cbd9c938a +size 494532