Learning the Dynamics of Physical Systems from Sparse Observations with Finite Element Networks
1 INTRODUCTION. The laws driving the physical world are often best described by partial differential equations (PDEs) that relate how a magnitude of interest changes in time with its change in space. They describe how the atmosphere and oceans circulate and interact, how structures deform under load, and how electromagnetic waves propagate (Courant & Hilbert, 2008). Knowledge of these equations lets us predict the weather (Coiffier, 2011), build sturdier structures, and communicate wirelessly. Yet, in many cases we only know the PDEs governing a system partially (Isakov, 2006) or not at all, or solving them is too computationally costly to be practical (Ames, 2014). Machine learning researchers try to fill in these gaps with models trained on collected data. For example, neural networks have been trained for weather forecasts (Shi et al., 2015) and fluid flow simulations (Belbute-Peres et al., 2020), both of which are traditionally outcomes of PDE solvers. Even the dynamics of discrete dynamical systems such as traffic (Li et al., 2018) and crowds (Zhang et al., 2017) have been learned from data. A challenge facing these models is the high cost of acquiring training data, so the data is usually only available sparsely distributed in space. Since graphs are a natural way to structure sparse data, models incorporating graph neural networks (GNNs) have been particularly successful for spatio-temporal forecasting (Yu et al., 2018; Wu et al., 2019). In the domain of physical processes we can reasonably assume that the observed system follows a PDE. There are mainly two ways to incorporate this assumption as a-priori knowledge into a model. First, we can encode a known PDE into a loss function that encourages the model to fulfill the equation (Raissi et al., 2019). Second, we can derive the model structure itself from known laws such as the convection-diffusion equation (de Bézenac et al., 2018). In this paper we follow the second approach. Consider a dynamical system on a bounded domain Ω ⊂ R^d that is governed by the PDE

∂_t u = F(t, x, u, ∂_x u, ∂_x^2 u, ...)    (1)

on functions u : [0, T] × Ω → R^m. If we have a dense measurement u_0 : Ω → R^m of the current state of the system and a solution u that satisfies Eq. (1) for all t ∈ [0, T] and also fulfills the initial condition u(0, x) = u_0(x) at all points x ∈ Ω, we can use u as a forecast for the state of the system until time T. From a spatio-temporal forecasting perspective, this means that we can forecast the evolution of the system if we have a continuous measurement of the state, know the dynamics F, and can find solutions of Eq. (1) efficiently. Unfortunately, in practice we only have a finite number of measurements at arbitrary points and only know the dynamics partially or not at all.

Contributions. An established numerical method for forecasts in systems with fully specified dynamics is the finite element method (FEM) (Brenner et al., 2008). In this paper, we introduce the first graph-based model for spatio-temporal forecasting that is derived from FEM in a principled way. Our derivation establishes a direct connection between the form of the unknown dynamics and the structure of the model. Through this connection our model can incorporate prior knowledge on the governing physical processes via assumptions on the form of the underlying dynamics. We employ this mechanism to derive a specialized model for transport problems from the convection equation. The way that the model structure arises from the underlying equation makes our models uniquely interpretable. We show that our transport model disentangles convection from the remainder of the learned dynamics, such as source/sink behavior, and that the activations of the model correspond to a learned flow field, which can be visualized and analyzed.
In experiments on multi-step forecasting of sea surface temperature and gas flow, our model improves upon baselines from recurrent, temporal-convolutional, and continuous-time model classes, with further improvement by the transport model.

2 BACKGROUND. 2.1 FINITE ELEMENT METHOD. In the following, we outline how to approximate a solution u to the dynamics in Eq. (1) from an initial value u_0 by discretizing u in space using finite elements. Let X be a set of points with a triangulation T of d-dimensional, non-overlapping simplices

X = {x^(i) ∈ R^d}_{i=1}^N,    T = {Δ^(j) | Δ^(j) ⊂ X, |Δ^(j)| = d + 1}_{j=1}^{N_T}    (2)

such that ∪_{Δ∈T} CH(Δ) equals the domain Ω, where CH(Δ) is the convex hull of simplex Δ. So we define a simplex Δ^(j) ∈ T representing the j-th mesh cell as the set of vertices of the cell and denote the domain volume covered by the cell by the convex hull CH(Δ^(j)) of the vertices. We will assume u to be a scalar field, i.e. u : [0, T] × Ω → R. If u is a vector field, we treat it as a system of m scalar fields instead. For a detailed introduction to FEM, we refer the reader to Igel (2017).

Basis Functions. A priori, we assume that the unknown solution u to our problem lies in an infinite-dimensional function space U. The first step in FEM to make the problem numerically feasible is to approximate U with a finite-dimensional linear subspace Ũ. This subspace can then be written in terms of linear combinations of basis functions, Ũ = span{φ^(1), ..., φ^(N)}. There are many possible bases, and the choice determines various qualities of the resulting procedure, such as continuity of the approximation and the sparsity pattern of the mass matrix in Eq. (7). In our case, we choose the so-called P1 basis of piecewise linear functions (hat functions), see Fig. 2a (Igel, 2017).
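For intuition about the P1 basis, the following minimal 1D sketch (a hypothetical helper, not from the paper) evaluates hat functions on an interval mesh and checks the properties described above:

```python
def hat(j, nodes, x):
    """P1 hat function phi_j on a 1D mesh: 1 at nodes[j], falling
    linearly to 0 at the neighbouring nodes, and 0 elsewhere."""
    xj = nodes[j]
    if j > 0 and nodes[j - 1] <= x <= xj:
        return (x - nodes[j - 1]) / (xj - nodes[j - 1])
    if j < len(nodes) - 1 and xj <= x <= nodes[j + 1]:
        return (nodes[j + 1] - x) / (nodes[j + 1] - xj)
    return 0.0

nodes = [0.0, 0.5, 1.0]
# phi_1 is 1 at its own node and 0 at the others (the constraint in Eq. 3),
# and the hat functions sum to 1 everywhere, so expansions in this basis
# interpolate linearly between nodal values.
print(hat(1, nodes, 0.5), hat(1, nodes, 0.0))   # 1.0 0.0
```

Linearity on each cell plus the Kronecker property at the nodes is exactly what makes the basis coefficients equal the nodal values.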
There are as many basis functions as there are points, and each is uniquely defined by being linear when restricted to a single cell Δ ∈ T together with the constraint

φ^(j)(x^(i)) = 1 if x^(i) = x^(j), 0 otherwise, ∀x^(i) ∈ X.    (3)

So the basis function φ^(j) is 1 at x^(j), falls linearly to 0 on mesh cells adjacent to x^(j), and is 0 everywhere else. The resulting finite-dimensional function space Ũ is the space of linear interpolators between values at the vertices, see Fig. 2b. An important property is that if we expand u ∈ Ũ in this basis, the value of u at the i-th node is just its i-th coefficient:

u(x^(i)) = Σ_{j=1}^N c_j φ^(j)(x^(i)) = c_i.    (4)

Galerkin Method. A piecewise linear approximation u ∈ Ũ is not differentiable everywhere and therefore cannot fulfill Eq. (1) exactly. So instead of requiring an exact solution, we ask that the residual R(u) = ∂_t u − F(t, x, u, ...) be orthogonal to the approximation space Ũ with respect to the inner product ⟨u, v⟩_Ω = ∫_Ω u(x) · v(x) dx at any fixed time t. In effect, we are looking for the best possible solution within Ũ. Because Ũ is generated by a finite basis, the orthogonality requirement decomposes into N equations, one for each basis function:

⟨R(u), v⟩_Ω = 0 ∀v ∈ Ũ  ⟺  ⟨R(u), φ^(i)⟩_Ω = 0 ∀i = 1, ..., N.    (5)

Plugging the residual back in and using the linearity of the inner product, we can reconstruct a system of equations that resembles the PDE we started with:

⟨∂_t u, φ^(i)⟩_Ω = ⟨F(t, x, u, ...), φ^(i)⟩_Ω ∀i = 1, ..., N.    (6)

At this point we can stack the system of N equations into a vector equation. If we plug the basis expansion Σ_{j=1}^N c_j φ^(j) for u into the left-hand side, we get a linear system

A ∂_t c = m    (7)

where A_ij = ⟨φ^(i), φ^(j)⟩_Ω is the so-called mass matrix, c is the vector of basis coefficients of u, and m_i = ⟨F(t, x, u, ...), φ^(i)⟩_Ω captures the effect of the dynamics F.
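As a concrete illustration of the mass matrix in Eq. (7), here is a minimal 1D sketch (a hypothetical helper, not the paper's implementation) that assembles A for the P1 basis on an interval mesh:

```python
import numpy as np

def p1_mass_matrix_1d(x):
    """Assemble the P1 mass matrix A_ij = <phi_i, phi_j> on a 1D mesh
    with nodes x.  Each element [x_k, x_{k+1}] of length h contributes
    the local matrix (h / 6) * [[2, 1], [1, 2]] to rows/cols k, k+1."""
    n = len(x)
    A = np.zeros((n, n))
    for k in range(n - 1):
        h = x[k + 1] - x[k]
        A[k:k + 2, k:k + 2] += (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
    return A

x = np.linspace(0.0, 1.0, 5)   # uniform mesh with spacing h = 0.25
A = p1_mass_matrix_1d(x)
# Row i sums to the integral of phi^(i): h for interior nodes, h/2 at
# the boundary, so the entries of A together integrate to the domain length.
```

A is sparse and tridiagonal here because only hat functions of adjacent nodes overlap, which is the sparsity pattern the basis choice determines.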
The left-hand side evaluates to A ∂_t c because the basis functions are constant with respect to time. The right-hand side cannot be simplified further without additional assumptions on F.

Method of Lines. If we can evaluate the right-hand side m, we can solve the linear system in Eq. (7) for the temporal derivatives of the coefficients of u at each point in time. In fact, we have converted the PDE into a system of ordinary differential equations (ODEs), which we can solve with an arbitrary ODE solver given an initial value c(0), as in Fig. 2c. This is known as the method of lines, because we solve for u along parallel lines in time. To find a vector field u : [0, T] × Ω → R^m instead of a scalar field, we treat the m dimensions of u as a system of m scalar fields. This results in m copies of Eq. (7), which we need to solve simultaneously. Because the mass matrix A is constant with respect to u, we can combine the system into a matrix equation

A ∂_t C = M    (8)

where C, M ∈ R^{N×m} are the stacked c and m vectors, respectively. In summary, the spatial discretization with finite elements allows us to turn the PDE (1) into the matrix ODE (8).

2.2 MESSAGE PASSING NEURAL NETWORKS. Message-passing neural networks (MPNNs) are a general framework for learning on graphs that encompasses many variants of graph neural networks (Gilmer et al., 2017). It prescribes that nodes in a graph iteratively exchange messages and update their state based on the received messages for P steps. For a graph G = (V, E) with nodes V and edges E, and initial node states h_v^(0) ∀v ∈ V, the p-th propagation step is

h_v^(p) = f_upd(h_v^(p−1), Σ_{{u,v}∈E} f_msg(h_u^(p−1), h_v^(p−1))),    (9)

where f_msg maps node states and edge attributes to messages and f_upd updates a node's state with the aggregated incoming messages.
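A minimal sketch of one propagation step of Eq. (9), with a hypothetical identity f_msg and additive f_upd standing in for the learned networks:

```python
import numpy as np

def mpnn_step(h, edges, f_msg, f_upd):
    """One propagation step of Eq. (9): aggregate messages over
    incident undirected edges, then update every node state."""
    agg = np.zeros_like(h)
    for u, v in edges:
        agg[v] += f_msg(h[u], h[v])  # message u -> v
        agg[u] += f_msg(h[v], h[u])  # message v -> u
    return f_upd(h, agg)

h = np.array([[1.0], [2.0], [3.0]])      # path graph 0 - 1 - 2
edges = [(0, 1), (1, 2)]
h1 = mpnn_step(h, edges, lambda hu, hv: hu, lambda h, m: h + m)
# Node 1 adds the states of both neighbours to its own: 2 + (1 + 3) = 6.
```

In an actual MPNN, f_msg and f_upd would be small neural networks; the toy choices here just make the aggregation pattern visible.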
The final node states h_v^(P) can then be interpreted as per-node predictions directly or passed as node embeddings to downstream systems. In this work, we employ a slight generalization of the above to undirected hypergraphs, i.e. graphs where the edges are sets of an arbitrary number of nodes instead of having a cardinality of exactly 2. For such a hypergraph G = (V, E) with nodes V and hyperedges ε = {u, v, w, ...} ∈ E, and initial node states h_v^(0) ∀v ∈ V, the p-th propagation step is

h_v^(p) = f_upd(h_v^(p−1), Σ_{ε∈E s.t. v∈ε} f_msg({h_u^(p−1) | u ∈ ε})_v).    (10)

Note that f_msg jointly computes a separate message for each node v participating in a hyperedge ε.
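The hypergraph step of Eq. (10) differs only in that f_msg sees all states on a hyperedge at once and emits one message per member node. A minimal sketch with a hypothetical mean-based f_msg:

```python
import numpy as np

def hyper_mpnn_step(h, hyperedges, f_msg, f_upd):
    """One step of Eq. (10): f_msg maps the set of states on a
    hyperedge to one message per participating node."""
    agg = np.zeros_like(h)
    for edge in hyperedges:              # edge: tuple of node indices
        msgs = f_msg([h[u] for u in edge])
        for u, m in zip(edge, msgs):
            agg[u] += m
    return f_upd(h, agg)

h = np.array([[1.0], [2.0], [3.0], [4.0]])
hyperedges = [(0, 1, 2), (2, 3)]         # a 3-node and a 2-node hyperedge
# Hypothetical f_msg: every member receives the mean state of its edge.
h1 = hyper_mpnn_step(
    h, hyperedges,
    lambda states: [np.mean(states, axis=0)] * len(states),
    lambda h, m: h + m,
)
# Node 2 sits on both hyperedges, so it aggregates two messages.
```

Setting every hyperedge to a 2-node set recovers the ordinary graph case of Eq. (9).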
This paper proposes a new model for learning partial differential equations from data. The PDE is first discretized in space and then solved as an ODE. The dynamics function is learned with message-passing neural networks, where it is split into a sum of physically informed terms. This splitting both improves model performance and makes the model more interpretable by disentangling the dynamics. The model is tested rigorously against multiple baselines, and the results show that the new model performs well.
Towards fast and effective single-step adversarial training
1 INTRODUCTION. Deep neural networks have achieved remarkable performance on a variety of tasks (He et al., 2015; Silver et al., 2016; Devlin et al., 2019). However, it is well known that they are vulnerable to small worst-case perturbations around the input data, commonly referred to as adversarial examples (Szegedy et al., 2014). The existence of such adversarial examples poses a security threat to deploying models in sensitive environments (Biggio & Roli, 2018). This has motivated a large body of work towards improving the adversarial robustness of neural networks (Goodfellow et al., 2015; Papernot et al., 2016; Tramèr et al., 2018). The most popular family of solutions to obtain robust neural networks is based on the concept of adversarial training (Goodfellow et al., 2015; Madry et al., 2018). In a nutshell, adversarial training can be posed as a min-max problem where, instead of minimizing some loss over a dataset of clean samples, we augment the inputs with worst-case perturbations that are generated online during training. However, obtaining such perturbations is NP-hard (Weng et al., 2018) and hence different approaches, commonly referred to as adversarial attacks, have been suggested to approximate them. In their seminal work, Goodfellow et al. (2015) proposed the Fast Gradient Sign Method (FGSM), which generates adversarial attacks by running one step of gradient ascent on the loss function. However, while FGSM-based adversarial training provides robustness against single-step FGSM adversaries, Madry et al. (2018) and Tramèr et al. (2018) showed that these models were still vulnerable to multi-step attacks, namely those allowed to perform multiple gradient ascent steps instead of a single one. Notably, Madry et al. (2018) introduced the multi-step Projected Gradient Descent (PGD) attack.
PGD-based attacks have now become the de facto standard for adversarial training; yet their cost increases linearly with the number of steps. As a result, several works have focused on reducing the cost of adversarial training by approximating the worst-case perturbations with single-step attacks (Wong et al., 2020; Shafahi et al., 2019; Vivek & Babu, 2020). In particular, Wong et al. (2020) studied FGSM adversarial training and discovered that it suffers from a characteristic failure mode, in which a model suddenly becomes vulnerable to multi-step attacks despite remaining robust to single-step attacks. This phenomenon is referred to as catastrophic overfitting. Moreover, they argued that adding a random perturbation prior to FGSM (RS-FGSM) seemed sufficient to prevent catastrophic overfitting and yield robust models. Recently, Andriushchenko & Flammarion (2020) observed that RS-FGSM still leads to catastrophic overfitting as the perturbation radius increases. They suggested a regularizer (GradAlign) that avoids catastrophic overfitting in all the settings they considered, but requires the computation of a double derivative, which significantly increases the computational cost compared to RS-FGSM. This has motivated other works that aim at achieving the same level of robustness with a lower computational overhead (Golgooni et al., 2021; Kim et al., 2021). In this paper, we revisit two key components that are common among previous works combining noise and FGSM (Tramèr et al., 2018; Wong et al., 2020): the role of noise, i.e. the random step, and the role of the clipping step. In Section 4.1, we study how these two components affect model robustness; our experiments suggest that adding noise with a large magnitude in the random step and removing the clipping step improve model robustness and prevent catastrophic overfitting, even against large perturbation radii.
We combine these observations and propose a new method called Noise-FGSM (N-FGSM), an illustration of which is presented in Figure 1 (left). N-FGSM matches, or even surpasses, the robust accuracy of the regularized FGSM introduced by Andriushchenko & Flammarion (2020), while providing a 3× speed-up. To corroborate the effectiveness of our solution, we present an experimental survey of recently proposed single-step attacks and empirically demonstrate that N-FGSM trades off robustness and computational cost better than other single-step approaches, evaluated over a large spectrum of perturbation radii (see Figure 1, middle and right panels), over several datasets (CIFAR-10, CIFAR-100, and SVHN) and architectures (PreActResNet18 and WideResNet28-10). We will release our code reproducing all experiments.

2 RELATED WORK. Since the discovery of adversarial examples, many defense mechanisms have been proposed. Preprocessing techniques try to modify the input image to neutralize adversarial attacks (Guo et al., 2018; Buckman et al., 2018; Song et al., 2018). Adversarial detection methods focus on detecting and rejecting adversarial attacks (Carlini & Wagner, 2017; Ma et al., 2018; Yang et al., 2020; Tian et al., 2021). Certifiable defenses provide theoretical guarantees for the lower-bound performance of networks subjected to worst-case adversarial attacks; however, they incur additional costs during inference and, empirically, they yield sub-optimal performance (Cohen et al., 2019; Wong & Kolter, 2018; Raghunathan et al., 2018; Balunovic & Vechev, 2020). Adversarial training methods are based on a special form of data augmentation designed to make the network robust to worst-case perturbations (Zhang et al., 2019; Athalye et al., 2018; Kurakin et al., 2017). However, computing a worst-case perturbation is an NP-hard problem that needs to be solved at every iteration.
To minimize the overhead of adversarial training, Goodfellow et al. (2015) proposed FGSM, which requires one additional gradient step per iteration. Tramèr et al. (2018) first proposed performing a random step before taking the adversarial step (R+FGSM), but they observed that neither method yields robust models against PGD attacks (Madry et al., 2018). Since then, augmenting the training with PGD attacks has been one of the most popular approaches for robustness, but its cost increases linearly with the number of steps, which presents a severe practical limitation. To reduce the cost of PGD, Shafahi et al. (2019) proposed Free Adversarial Training (Free-AT), which exploits a single back-propagation step to both update the network parameters and compute the attack. Wong et al. (2020) explored a variation of R+FGSM, namely RS-FGSM, and showed it can yield robust networks against multi-step attacks. Andriushchenko & Flammarion (2020) found that RS-FGSM only works for limited perturbation radii and introduced GradAlign, a regularizer to linearize the loss surface. However, optimizing GradAlign triples the computational cost. This motivated a new series of works that aim at matching the performance of GradAlign without the additional computational overhead (Golgooni et al., 2021; Kim et al., 2021). Other strategies that attempted to improve FGSM include introducing dropout in every layer (Vivek & Babu, 2020) and perturbing intermediate feature maps together with the input (Park & Lee, 2021). Li et al. (2020) suggested combining RS-FGSM and PGD attacks during training; however, the proposed strategy requires frequent monitoring of the PGD robust accuracy and, in the worst case, is computationally equivalent to PGD training. Gilmer et al. (2019) and Fawzi et al. (2018) suggested a strong link between robustness to adversarial attacks and robustness to random noise.
Motivated by this, we revisit the idea of combining noise and FGSM and propose N-FGSM. Our method is closely related to RS-FGSM; however, we find that using a larger amount of noise and removing the constraint that attacks must lie in the ε-ℓ∞ ball is key to obtaining robust models. We note that Kang & Moosavi-Dezfooli (2021) concurrently studied RS-FGSM without clipping; however, as opposed to our work, they did not investigate and provide insights on the impact of noise, and the learned models were not robust against large perturbations.

3 PRELIMINARIES ON SINGLE-STEP ADVERSARIAL TRAINING. Given a classifier f_θ : X → Y parameterized by θ and a perturbation set S, the classifier f_θ is said to be robust at x ∈ X under S if f_θ(x + δ) = f_θ(x) holds for all δ ∈ S. One of the most popular choices for S is the ε-ℓ∞ ball, i.e. S = {δ : ‖δ‖∞ ≤ ε}. This is known as the ℓ∞ threat model and is the setting we adopt throughout this work. To train networks that are robust against ℓ∞ threat models, adversarial training modifies the classical training procedure of minimizing a loss function over a dataset D = {(x_i, y_i)}_{i=1:N} of images x_i ∈ X and labels y_i ∈ Y. In particular, adversarial training instead minimizes the worst-case loss over the perturbation set S, i.e. it trains on adversarially perturbed samples {(x_i + δ_i, y_i)}_{i=1:N}. Under the ℓ∞ threat model, we can formalize adversarial training as solving the following problem:

min_θ Σ_{i=1}^N max_δ L(f_θ(x_i + δ), y_i), subject to ‖δ‖∞ ≤ ε,    (1)

where L is typically the cross-entropy loss for image-classification models. Due to the difficulty of finding the exact inner maximizer, the most common procedure for adversarial training is to approximate the worst-case perturbation through several PGD steps (Madry et al., 2018).
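To illustrate the inner maximization of Eq. (1) with PGD steps, here is a toy sketch using a quadratic stand-in loss with an analytic gradient (not the paper's classifier or training loop):

```python
import numpy as np

def pgd_delta(x, eps, alpha, steps):
    """Approximate argmax_{||delta||_inf <= eps} L(x + delta) with PGD,
    for the toy loss L(x) = ||x||^2 whose gradient is 2x."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        grad = 2.0 * (x + delta)                 # analytic grad of the toy loss
        # Ascent step on the sign of the gradient, then project back
        # onto the eps-l_inf ball.
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    return delta

x = np.array([0.3, -0.2, 0.05])
d = pgd_delta(x, eps=0.1, alpha=0.04, steps=5)
# For this convex toy loss, PGD reaches the maximizer eps * sign(x).
```

The linear cost in the number of steps discussed next is visible here: each step needs one gradient evaluation, which for a real network means one backward pass.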
While this has been shown to yield robust models, it comes at the cost of a linear increase in the computational overhead with the number of PGD steps. As a result, several works have focused on reducing the cost of adversarial training by approximating the inner maximization with single-step attacks. If we assume that the loss function is locally linear with respect to changes in the input, then the inner maximization of Equation (1) enjoys a closed-form solution. Goodfellow et al. (2015) leveraged this result to propose the FGSM method, which takes one step in the direction of the sign of the gradient. Tramèr et al. (2018) proposed adding a random initialization prior to FGSM. However, both methods were later shown to be vulnerable against multi-step attacks, such as PGD (Madry et al., 2018). Contrary to prior intuition, recent work from Wong et al. (2020) observed that combining a random step with FGSM can actually lead to promising robustness performance. In particular, we note that most recent single-step methods approximate the worst-case perturbation that results from solving the inner maximization problem in Equation (1) with the following general form:

δ = ψ(η + α · sign(∇_{x_i} L(f_θ(x_i + η), y_i))), where η ∼ Ω,    (2)

and Ω is the distribution from which we draw noise perturbations. For example, when ψ is the projection operator onto the ε-ℓ∞ ball and Ω is the uniform distribution on [−ε, ε], this recovers RS-FGSM with the following update:

δ_RS-FGSM = Proj_{‖δ‖∞ ≤ ε}(η + α · sign(∇_{x_i} L(f_θ(x_i + η), y_i))), where η ∼ U[−ε, ε]^d.    (3)

On the other hand, with a different noise setting where Ω = (ε − α) · sign(N(0, I)) and with the step size α chosen in [0, ε], we recover R+FGSM by Tramèr et al. (2018), who initially explored the idea of combining noise with FGSM but reported no improvement over adversarial training with FGSM.
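To make the general form of Eq. (2) concrete, here is a minimal numpy sketch using a toy linear loss so the gradient is known analytically; the N-FGSM noise magnitude of 2ε is an illustrative assumption, not a claim about the paper's exact hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(0)
eps, alpha = 8 / 255, 10 / 255
w = rng.standard_normal(16)    # toy linear loss L(x) = w.x, so grad_x L = w

def single_step_delta(psi, eta):
    """General form of Eq. (2): delta = psi(eta + alpha * sign(grad))."""
    return psi(eta + alpha * np.sign(w))

# RS-FGSM (Eq. 3): uniform noise in [-eps, eps], then project the
# perturbation back onto the eps-l_inf ball.
eta_rs = rng.uniform(-eps, eps, size=16)
d_rs = single_step_delta(lambda d: np.clip(d, -eps, eps), eta_rs)

# N-FGSM: stronger noise and psi = identity (no clipping), so the
# perturbation is allowed to leave the eps-ball during training.
eta_n = rng.uniform(-2 * eps, 2 * eps, size=16)
d_n = single_step_delta(lambda d: d, eta_n)
```

Choosing ψ and Ω differently recovers each method in the family, which is exactly the unification the text describes.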
If we consider Ω to be deterministically 0 and ψ to be the identity map, we recover FGSM. Finally, if we adjust the choice of the loss function L to include a gradient-alignment regularizer, this recovers the GradAlign algorithm by Andriushchenko & Flammarion (2020).
This paper methodically studies catastrophic overfitting in fast adversarial training (Fast-AT) and revisits the role of noise and the clipping operation in Fast-AT. Based on empirical findings, the paper shows that removing the clipping step and using stronger noise help avoid catastrophic overfitting. The authors further propose Noise-FGSM (N-FGSM), which applies a single FGSM step to noise-augmented samples to generate adversarial examples for training. Empirical studies show the superiority of N-FGSM both in terms of performance and speed.
Towards fast and effective single-step adversarial training
1 INTRODUCTION . Deep neural networks have achieved remarkable performance on a variety of tasks ( He et al. , 2015 ; Silver et al. , 2016 ; Devlin et al. , 2019 ) . However , it is well known that they are vulnerable to small worst-case perturbations around the input data – commonly referred to as adversarial examples ( Szegedy et al. , 2014 ) . The existence of such adversarial examples poses a security threat to deploying models in sensitive environments ( Biggio & Roli , 2018 ) . This has motivated a large body of work towards improving the adversarial robustness of neural networks ( Goodfellow et al. , 2015 ; Papernot et al. , 2016 ; Tramèr et al. , 2018 ) . The most popular family of solutions to obtain robust neural networks is based on the concept of adversarial training ( Goodfellow et al. , 2015 ; Madry et al. , 2018 ) . In a nutshell , adversarial training can be posed as a min-max problem where instead of minimizing some loss over a dataset of clean samples , we augment the inputs with worst-case perturbations that are generated online during training . However , obtaining such perturbations is NP-hard ( Weng et al. , 2018 ) and hence , different approaches have been suggested to approximate them . They are commonly referred to as adversarial attacks . In their seminal work , Goodfellow et al . ( 2015 ) proposed the Fast Gradient Sign Method ( FGSM ) , that generates adversarial attacks by running one step of gradient ascent on the loss function . However , while FGSM-based adversarial training provides robustness against single-step FGSM adversaries , Madry et al . ( 2018 ) ; Tramèr et al . ( 2018 ) showed that these models were still vulnerable to multi-step attacks , namely those allowed to perform multiple gradient ascent steps instead of a single one . Notably , Madry et al . ( 2018 ) introduced the multi-step Projected Gradient Descent ( PGD ) attack . 
PGD-based attacks have now become the de facto standard for adversarial training ; yet , their cost increases linearly with the number of steps . As a result , several works have focused on reducing the cost of adversarial training by approximating the worst-case perturbations with single-step attacks ( Wong et al. , 2020 ; Shafahi et al. , 2019 ; Vivek & Babu , 2020 ) . In particular , Wong et al . ( 2020 ) studied FGSM adversarial training and discovered that it suffers from a characteristic failure mode , in which a model suddenly becomes vulnerable to multi-step attacks despite remaining robust to single-step attacks . This phenomenon is referred to as catastrophic overfitting . Moreover , they argued that adding a random perturbation prior to FGSM ( RS-FGSM ) seemed sufficient to prevent catastrophic overfitting and yield robust models . Recently , Andriushchenko & Flammarion ( 2020 ) observed that RS-FGSM still leads to catastrophic overfitting as we increase the perturbation radii . They suggested a regularizer ( GradAlign ) that , on the one hand avoids catastrophic overfitting in all the settings they considered , but on the other hand requires the computation of a double derivative – which significantly increases the computational cost compared to RS-FGSM . This has motivated other works that aim at achieving the same level of robustness with a lower computational overhead ( Golgooni et al. , 2021 ; Kim et al. , 2021 ) . In this paper , we revisit two key components that are common among previous works combining noise and FGSM ( Tramèr et al. , 2018 ; Wong et al. , 2020 ) : the role of noise , i.e . the random step , and the role of the clipping step . In Section 4.1 , we study how these two components affect model robustness ; our experiments suggest that adding noise with a large magnitude in the random step and removing the clipping step improves model robustness and prevents catastrophic overfitting , even against large perturbation radii . 
We combine these observations and propose a new method called Noise-FGSM ( N-FGSM ) , an illustration of which is presented in Figure 1 ( left ) . N-FGSM allows to match , or even surpass , the robust accuracy of the regularized FGSM introduced by Andriushchenko & Flammarion ( 2020 ) , while providing a 3× speed-up . To corroborate the effectiveness of our solution , we present an experimental survey of recently proposed single-step attacks and empirically demonstrate that N-FGSM trades-off robustness and computational cost better than other single-step approaches , evaluated over a large spectrum of perturbation radii ( see Figure 1 , middle and right panels ) , over several datasets ( CIFAR-10 , CIFAR100 , and SVHN ) and architectures ( PreActResNet18 and WideResNet28-10 ) . We will release our code reproducing all experiments . 2 RELATED WORK . Since the discovery of adversarial examples , many defense mechanisms have been proposed . Preprocessing techniques try to modify the input image to neutralize adversarial attacks ( Guo et al. , 2018 ; Buckman et al. , 2018 ; Song et al. , 2018 ) . Adversarial detection methods focus on detecting and rejecting adversarial attacks ( Carlini & Wagner , 2017 ; Ma et al. , 2018 ; Yang et al. , 2020 ; Tian et al. , 2021 ) . Certifiable defenses provide theoretical guarantees for the lower bound performance of networks subjected to worst-case adversarial attacks , however , they incur additional costs during inference and , empirically , they yield sub-optimal performance ( Cohen et al. , 2019 ; Wong & Kolter , 2018 ; Raghunathan et al. , 2018 ; Balunovic & Vechev , 2020 ) . Adversarial training methods are based on a special form of data augmentation designed to make the network robust to worst-case perturbations ( Zhang et al. , 2019 ; Athalye et al. , 2018 ; Kurakin et al. , 2017 ) . However , computing a worst-case perturbation is an NP-hard problem that needs to be solved at every iteration . 
To minimize the overhead of adversarial training , Goodfellow et al . ( 2015 ) proposed FGSM which requires one additional gradient step per iteration . Tramèr et al . ( 2018 ) first proposed performing a random step before taking the adversarial step ( R+FGSM ) , but they observed that neither method yields robust models against PGD attacks ( Madry et al. , 2018 ) . Since then , augmenting the training with PGD attacks has been one of the most popular approaches for robustness , but its cost increases linearly with the number of steps , which presents a severe practical limitation . To reduce the cost of PGD , Shafahi et al . ( 2019 ) proposed Free Adversarial Training ( Free-AT ) , that exploits a single back-propagation step to both update the network parameters and compute the attack . Wong et al . ( 2020 ) explored a variation of R+FGSM , namely RS-FGSM , and showed it can yield robust networks against multi-step attacks . Andriushchenko & Flammarion ( 2020 ) found that RS-FGSM only works for limited perturbation radii and introduced GradAlign – a regularizer to linearize the loss surface . However , optimizing GradAlign triplicates the computational cost . This motivated a new series of works that aim at matching the performance of GradAlign without the additional computational overhead ( Golgooni et al. , 2021 ; Kim et al. , 2021 ) . Other strategies that attempted to improve FGSM included introducing dropout in every layer ( Vivek & Babu , 2020 ) and perturbing intermediate feature maps together with the input ( Park & Lee , 2021 ) . Li et al . ( 2020 ) suggested combining RS-FGSM and PGD attacks during training , however , the proposed strategy requires a frequent monitoring of the PGD robust accuracy and , in the worst-case , is computationally equivalent to PGD training . Gilmer et al . ( 2019 ) ; Fawzi et al . ( 2018 ) suggested a strong link between robustness to adversarial attacks and to random noise . 
Motivated by this , we revisit the idea of combining noise and FGSM and propose N-FGSM . Our method is closely related to RS-FGSM , however , we find that using a larger amount of noise and removing the constraint that attacks must lie in the − l∞ ball is key to obtaining robust models . We note that Kang & Moosavi-Dezfooli ( 2021 ) concurrently studied RS-FGSM without clipping , however , as opposed to our work , they did not investigate and provide insights on the impact of noise , and the learned models were not robust against large perturbations . 3 PRELIMINARIES ON SINGLE-STEP ADVERSARIAL TRAINING . Given a classifier fθ : X → Y parameterized by θ and a perturbation set S , the classifier fθ is said to be robust at x ∈ X under S if the following holds for all δ ∈ S : fθ ( x + δ ) = fθ ( x ) . One of the most popular definitions for S is the − ` ∞ ball , i.e . S = { δ : ‖δ‖∞ ≤ } . This is known as the l∞ threat model and is the setting we adopt throughout this work . To train networks that are robust against ` ∞ threat models , adversarial training modifies the classical training procedure of minimizing a loss function over a dataset D = { ( xi , yi ) } i=1 : N of images xi ∈ X and labels yi ∈ Y . In particular , adversarial training instead minimizes the worst-case loss over the perturbation set S , i.e . trains on adversarially perturbed samples { ( xi+δi , yi ) } i=1 : N . When using the l∞ threat model , we can formalize adversarial training as solving the following problem : min θ N∑ i=1 max δ L ( fθ ( xi + δ ) , yi ) , subject to ‖δ‖∞ ≤ , ( 1 ) where L is typically the cross-entropy loss for image-classification models . Due to the difficulty of finding the exact inner maximizer , the most common procedure for adversarial training is to approximate the worst-case perturbation through several PGD steps ( Madry et al. , 2018 ) . 
While this has been shown to yield robust models, it comes at the cost of a linear increase in computational overhead with the number of PGD steps. As a result, several works have focused on reducing the cost of adversarial training by approximating the inner maximization with single-step attacks. If we assume that the loss function is locally linear with respect to changes in the input, then the inner maximization of Equation (1) enjoys a closed-form solution. Goodfellow et al. (2015) leveraged this result to propose the FGSM method, which takes one step in the direction of the sign of the gradient. Tramèr et al. (2018) proposed adding a random initialization prior to FGSM. However, both methods were later shown to be vulnerable to multi-step attacks such as PGD (Madry et al., 2018). Contrary to prior intuition, recent work by Wong et al. (2020) observed that combining a random step with FGSM can actually lead to promising robustness performance. In particular, we note that most recent single-step methods approximate the worst-case perturbation resulting from the inner maximization problem in Equation (1) with the following general form:

δ = ψ(η + α · sign(∇_{x_i} L(f_θ(x_i + η), y_i))), where η ∼ Ω,  (2)

and Ω is the distribution from which we draw noise perturbations. For example, when ψ is the projection operator onto the ε-ℓ∞ ball and Ω is the uniform distribution on [−ε, ε]^d, this recovers RS-FGSM with the following update:

δ_RS-FGSM = Proj_{‖δ‖∞ ≤ ε}(η + α · sign(∇_{x_i} L(f(x_i + η), y_i))), where η ∼ U[−ε, ε]^d.  (3)

On the other hand, with a different noise setting where Ω = (ε − α) · sign(N(0, I)), and by choosing the step size α to be in [0, ε], we recover R+FGSM by Tramèr et al. (2018), who initially explored the idea of combining noise with FGSM but reported no improvement over adversarial training with FGSM.
If we consider Ω to be deterministically 0 and ψ to be the identity map, we recover plain FGSM. Finally, if we adjust the choice of the loss function L to include a gradient-alignment regularizer, this recovers the GradAlign algorithm of Andriushchenko & Flammarion (2020).
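To make this family concrete, here is a small NumPy sketch of the general form in Equation (2) and two of its instantiations (FGSM and RS-FGSM); the gradient oracle and all names are illustrative, not from any of the cited implementations:

```python
import numpy as np

rng = np.random.default_rng(0)
eps, alpha = 8 / 255, 10 / 255

def single_step_delta(grad_fn, x, alpha, noise_fn, psi):
    """General single-step form of Eq. (2): delta = psi(eta + alpha * sign(grad))."""
    eta = noise_fn(x.shape)
    return psi(eta + alpha * np.sign(grad_fn(x + eta)))

# Instantiations of (noise_fn, psi) discussed in the text:
zero_noise = lambda shape: np.zeros(shape)                     # FGSM: Omega = 0
unif_noise = lambda shape: rng.uniform(-eps, eps, size=shape)  # RS-FGSM: Omega = U[-eps, eps]^d
identity = lambda d: d                                         # FGSM: psi = identity
proj_linf = lambda d: np.clip(d, -eps, eps)                    # RS-FGSM: projection onto eps ball

grad_fn = lambda z: z          # illustrative gradient oracle (a real one backprops the loss)
x = np.array([0.3, -0.7, 0.0])

delta_fgsm = single_step_delta(grad_fn, x, eps, zero_noise, identity)   # FGSM with alpha = eps
delta_rs = single_step_delta(grad_fn, x, alpha, unif_noise, proj_linf)  # RS-FGSM, Eq. (3)
```

Note that `delta_fgsm` equals ε · sign(∇L) exactly, while `delta_rs` always lies inside the ε-ℓ∞ ball because of the final projection, which is precisely the constraint N-FGSM removes.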
This paper addresses a failure mode of traditional single-step adversarial training known as catastrophic overfitting. The authors show that, compared to the common practice for generating the adversarial perturbation, adopting a larger random initialization and avoiding clipping of the perturbation effectively mitigate catastrophic overfitting. The effects of these two techniques are analyzed empirically in detail, followed by a comprehensive comparison with other methods.
SP:9f8b2c2983b0825ed4867509162980586d12cde1
Planckian jitter: enhancing the color quality of self-supervised visual representations
1 INTRODUCTION

Self-supervised learning enables the learning of visual representations without the need for any labeled data (Doersch et al., 2015; Dosovitskiy et al., 2014). Several recent works learn representations that are invariant with respect to a set of data augmentations and have obtained spectacular results (Grill et al., 2020; Chen & He, 2021; Caron et al., 2020), significantly narrowing the gap with supervised representations. These works vary in their architecture, learning objective, and optimization strategy; however, they are similar in applying a common set of data augmentations to generate the various image views. While learning to map these different views to the same latent representation, the algorithms learn complex semantic representations of visual data. The set of transformations (data augmentations) that is considered induces a set of invariances that characterizes the learned visual representation. Before deep learning revolutionized the way visual representations were computed, separate features were hand-designed to represent their various properties, leading to research on shape (Lowe, 2004), texture (Manjunath & Ma, 1996), and color features (Finlayson & Schaefer, 2001; Geusebroek et al., 2001). Color features were typically designed to be invariant with respect to a set of scene-accidental events, such as shadows, shading, illuminant and viewpoint changes. With the rise of deep learning, feature representations that simultaneously exploit color, shape, and texture are learned implicitly, and the invariances are a byproduct of end-to-end training (Krizhevsky et al., 2009). As discussed above, current self-supervised learning methods explicitly define a set of invariances (related to the applied data augmentations) that are to be learned. In this work, we focus on the current de-facto choice for color augmentations.
We argue that it seriously cripples the color quality of these representations, and we propose an alternative color augmentation. Figure 1 (left) illustrates the currently applied color transformation for a sample image, depicted in the middle of the left-most grid. It is clear that the applied color transformation significantly alters the colors of the original image, both in terms of hue and saturation. One justification in the literature for such strong color augmentations is that without large color changes, mapping images to the same latent representation can be done purely based on color and no complex shape features are learned; accordingly, the best results when using only two transformations are obtained when combining image cropping with a color augmentation (Chen et al., 2020a). However, looking at the reported example, it is evident that a representation mapping these images to the same latent representation cannot rely on the object color, and as a result the quality of the color representation learned with such algorithms is expected to be inferior. Therefore, in this paper, we propose another set of color augmentations, shown on the right side of Figure 1. In addition to introducing more natural variations in the image chromaticity, the proposed color augmentation also affects neutral regions, such as the petals shown in the figure, which are left unvaried by the original color transformations. We draw on existing color imaging literature that aimed to design features invariant with respect to illuminant changes commonly encountered in real-world scenes (Finlayson & Schaefer, 2001). Our augmentation, called Planckian jitter, applies a physically realistic illuminant variation to the images. We consider the illuminants described by Planck's Law for black-body radiation, which are known to be similar to illuminants encountered in real life (Tominaga et al., 1999).
In the experimental section, we show that self-supervised features learned with Planckian jitter are superior, leading to gains of over 5% on several downstream color classification tasks. However, since our color augmentation is less extreme, the learned shape features are of lower quality than with the original color jitter. A simple combination of both feature representations leads to large performance gains of 10-15% over the default color jitter on several color downstream tasks. In addition, we show that our augmentation method can be applied to several state-of-the-art self-supervised learning methods. Finally, we analyze the color sensitivity of the learned color representations.

2 BACKGROUND AND RELATED WORKS

2.1 SELF-SUPERVISED LEARNING AND CONTRASTIVE LEARNING

Recent improvements in self-supervision learn a semantically rich feature representation without the need for any labels. In SimCLR (Chen et al., 2020a), similar samples are created by augmenting an input image, while dissimilar ones are chosen at random. To make contrastive training more efficient, the MoCo method (He et al., 2020) and its improved version (Chen et al., 2020b) use a memory bank of learned embeddings, which helps with efficient sampling. This memory is kept in sync with the rest of the network during training by using a momentum encoder. Several methods do not use any explicit contrastive pairs. BYOL (Grill et al., 2020) proposes an asymmetric network by introducing an additional MLP predictor between the outputs of the two branches. One of the branches is kept "offline", updated by a momentum encoder. SimSiam (Chen & He, 2021) goes even further and presents a simplified solution without a momentum encoder. Moreover, it obtains similarly high-quality results and does not require a large mini-batch size.
We will use the SimSiam method to verify our proposed color augmentation (we also apply our approach to SimCLR (Chen et al., 2020a) and Barlow Twins (Zbontar et al., 2021) in the experiments). The main part is a CNN-based encoder, learned end-to-end in an asymmetric siamese architecture, where one branch has an additional predictor (a Multi-Layer Perceptron, or MLP) whose output aims to be as close as possible to the other branch (see Figure 2). The second branch is not updated during backward propagation. A negative-cosine loss function is used, defined as:

L = D(p_1, z_2)/2 + D(p_2, z_1)/2,  (1)

D(p_A, z_B) = − (p_A / ‖p_A‖_2) · (z_B / ‖z_B‖_2),  (2)

where z_1, z_2 are the encoded values for two different augmented versions x_1 and x_2 of the same image x. Note that in Eq. 1 they are alternated between the two branches, but it is always only the first branch that uses the MLP predictor, producing either p_1 or p_2. Additionally, no contrastive term is present: only the similarity is enforced during learning.

2.2 DATA AUGMENTATION

Data augmentation plays an important part in the learning process of self-supervised methods. In the works by Chen et al. (2020a) and Zbontar et al. (2021), the authors discussed the importance of the different data augmentations. A set of well-defined transformations was proposed with the SimCLR method; this set is commonly used in later works in the self-supervision field. The augmentations include rotation, cut, flip, color jitter, blur, and gray scale. These operations are randomly applied to an image to generate different views (x_1, x_2). Applied to the same image, contrastive-like self-supervised methods learn representations invariant to such distortions. The creation of multiple views is a task-related procedure (Tian et al., 2020). However, keeping in mind the usefulness of learned representations for downstream tasks, color jittering is one of the most important ones (Chen et al.
, 2020a; Zbontar et al., 2021), operating on hue, saturation, brightness, and contrast. However, color jitter is expected to induce a certain level of color invariance (invariance to hue, saturation, brightness, and contrast), which is consequently transferred to the downstream task as well. As a consequence, we expect these learned features to underperform on downstream tasks for which color is crucial. The color imaging literature has long researched color features that are invariant with respect to scene-accidental events, such as shading, shadows, and illuminant changes (Geusebroek et al., 2001; Finlayson & Schaefer, 2001). Features invariant to these events were found to be beneficial for object recognition. Invariance with respect to hue and saturation changes (which is induced by the currently used color jitter operation), however, is detrimental for object recognition, especially for objects where these characteristics are fundamental. Therefore, in this work, we revisit early theory on illuminant invariance (Finlayson & Schaefer, 2001) to design an improved color augmentation function that induces invariances common in the real world and does not deteriorate the color quality of the learned features.

3 PLANCKIAN JITTER

The range of image transformations introduced by traditional color jitter creates variability in the training data that indiscriminately explores all hues at various levels of saturation. The resulting invariance can be useful for downstream tasks where chromatic variations are indeed irrelevant (such as car body color in vehicle recognition), but will be detrimental for downstream applications where color information is known to be critical (such as flower, bird, or vegetable classification).
On the other hand, completely removing color invariance risks producing a model with little generalization capability, unable to handle the common variations in illumination conditions due to various sources of indoor and outdoor lighting.

3.1 PHYSICS-BASED PLANCKIAN JITTER

Here we describe an alternative data augmentation procedure, called Planckian jitter, that exploits the physical description of a black-body radiator to re-illuminate the training images within a realistic distribution (Finlayson & Schaefer, 2001; Tominaga et al., 1999). The resulting augmentations are more realistic than those of the default color jitter (see Fig. 1). The resulting learned self-supervised feature representation is thus expected to be robust to illumination changes commonly observed in real-world images, while simultaneously maintaining the ability to discriminate image content based on color information. Given an input RGB training image I, our physics-based Planckian jitter procedure applies a chromatic adaptation transform that simulates realistic variations in the illumination conditions:

1. We sample a new illuminant spectrum σ_T(λ) from the distribution of a black-body radiator.
2. We transform the sampled spectrum σ_T(λ) into its sRGB representation ρ_T ∈ R^3.
3. We create a jittered image I′ by re-illuminating I with the sampled illuminant ρ_T.
4. We introduce brightness and contrast variation, producing a Planckian-jittered image I′′.

A radiating black body at temperature T can be synthesized using Planck's Law (Andrews, 2010):

σ_T(λ) = 2πhc² / (λ⁵ (e^{hc/(kTλ)} − 1))  W/m³,  (3)

where c = 2.99792458 × 10⁸ m/s is the speed of light, h = 6.626176 × 10⁻³⁴ Js is Planck's constant, and k = 1.380662 × 10⁻²³ J/K is Boltzmann's constant. For our experiments, we sampled T in the interval between 3000 K and 15000 K, which is known to result in a set of illuminants that can be encountered in real life (Tominaga et al., 1999).
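As an illustration, Eq. (3) can be evaluated directly. The following NumPy sketch (our code, not the authors') samples a temperature in the 3000-15000 K range and computes the spectrum on a 400-700 nm grid:

```python
import numpy as np

# Physical constants from Eq. (3).
C = 2.99792458e8   # speed of light, m/s
H = 6.626176e-34   # Planck's constant, J s
K = 1.380662e-23   # Boltzmann's constant, J/K

def planck_spectrum(T, lam):
    """Black-body spectrum sigma_T(lambda) of Eq. (3), in W/m^3."""
    return 2 * np.pi * H * C**2 / (lam**5 * (np.exp(H * C / (K * T * lam)) - 1))

# Wavelengths from 400 nm to 700 nm in 10 nm steps, as in the paper.
lam = np.linspace(400e-9, 700e-9, 31)
# Sample a temperature from the paper's 3000-15000 K interval.
T = float(np.random.default_rng(0).uniform(3000.0, 15000.0))
spectrum = planck_spectrum(T, lam)
```

Converting each such spectrum to a single sRGB illuminant ρ_T then follows the colorimetric pipeline described next.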
Then, we discretized the wavelength λ in 10 nm steps (Δλ) in the interval between 400 nm and 700 nm. The resulting spectra are visualized in Figure 4 (left) in Appendix A.1. The conversion from spectrum to sRGB is obtained through a series of intermediate steps (Wyszecki & Stiles, 1982):

1. We first map the spectrum into the corresponding XYZ stimuli, using the 1931 CIE standard observer color matching functions c_{X,Y,Z}(λ), in order to bring the illuminant into a standard color space that represents a person with average eyesight.
2. We normalize this tristimulus by its Y component, convert it into the CIE 1976 L*a*b* color space, and fix its L component to 50 on a 0-to-100 scale. This allows us to constrain the intensity of the represented illuminant in a controlled manner as a separate task.
3. We then convert the resulting values to sRGB, obtaining ρ_T = {R, G, B}.

The resulting distribution of illuminants is visualized with the Angle-Retaining Chromaticity diagram (Buzzelli et al., 2020) in Figure 4 (right) in Appendix A.1. All color space conversions assume a D65 reference white, which means that a neutral surface illuminated by average daylight conditions would appear achromatic. Once the new illuminant has been converted to sRGB, it is applied to the input image I via a Von-Kries-like transform (von Kries, 1902), given by the following channel-wise scalar multiplication:

I′_{R,G,B} = I_{R,G,B} · ρ_T{R,G,B} / (1, 1, 1),  (4)

where we assume the original scene illuminant to be white (1, 1, 1). Finally, brightness and contrast perturbations are introduced to simulate variations in the intensity of the scene illumination:

I′′ = c_B · c_C · I′ + (1 − c_C) · μ(c_B · I′),  (5)

where c_B = 0.8 and c_C = 0.8 represent, respectively, the brightness and contrast coefficients, and μ is a spatial average function.
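Equations (4) and (5) reduce to a channel-wise rescaling followed by a brightness/contrast perturbation. Below is a minimal NumPy sketch of our reading of these two steps; the illuminant value is hypothetical (not one of the paper's samples), and we interpret μ as a per-channel spatial mean:

```python
import numpy as np

def reilluminate(img, rho, c_B=0.8, c_C=0.8):
    """Apply Eq. (4), a Von-Kries-like channel-wise rescaling by illuminant rho
    (assuming the original illuminant is white (1, 1, 1)), followed by the
    brightness/contrast perturbation of Eq. (5).
    `img` is an H x W x 3 RGB array in [0, 1]."""
    img_p = img * rho[None, None, :]           # Eq. (4)
    mu = np.mean(c_B * img_p, axis=(0, 1))     # spatial average, per channel (our reading of mu)
    return c_B * c_C * img_p + (1 - c_C) * mu  # Eq. (5)

# A hypothetical warm illuminant in sRGB, applied to a constant gray image.
rho = np.array([1.0, 0.8, 0.6])
img = np.full((4, 4, 3), 0.5)
out = reilluminate(img, rho)
print(out[0, 0])  # ~ [0.4, 0.32, 0.24]; contrast has no effect on a constant image
```

Note that for a spatially constant image the contrast term cancels and the output is simply c_B times the re-illuminated image, which is a convenient sanity check of the formula.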
**Overview.** This paper proposes a novel type of image colour augmentation to be used during self-supervised learning (SSL). **Background.** In a typical SSL setting, similar samples are generated by randomly augmenting an image in a variety of different ways: random cropping, colour jittering, random rotations, etc. These similar images are then passed through a neural network, and the predicted features for similar samples are trained to be close to each other based on some similarity metric (ignoring the contrastive term description here as it is not directly related to the paper). **Motivation.** In this paper, the authors address the issue of using colour jittering as augmentation during SSL. Specifically, colour jittering pushes the network towards invariance to image colours, making it rely more on the shape and texture of objects when making predictions. The authors point out that despite the benefits of this for many general detection tasks, it is a detrimental property when dealing with more colour-dependent tasks. **Method.** Thus, they propose to use Planckian jittering instead of colour jittering. Planckian jittering is a physics-based augmentation method proposed by the authors (although the formulations for it come from existing literature) that re-illuminates the training images within a realistic distribution, which leads to more realistic and constrained colour augmentations than colour jittering. The paper claims that Planckian jittering still helps improve the network's dependence on the shape and texture of objects (although less than colour jittering), while limiting the network's invariance to image colours. **Experiments.** 6 SSL models were trained independently on CIFAR-100: 3 used different variants of Planckian jittering, 2 used different variants of colour jittering (w/ and w/o Random GrayScale), and 1 used no augmentations.
Linear classifiers for CIFAR-100 and FLOWERS-102 classification tasks were then trained on top of each of the SSL models' features, where FLOWERS-102 is the task that is claimed to be more heavily colour-dependent. Moreover, an extra linear classifier was trained on the concatenation of features from a Planckian jittering and a colour jittering model (called the "Latent space combination" model). Based on accuracy, "Latent space combination" outperforms all other models by a significant margin on both datasets. Planckian jitter seems to outperform other augmentations on FLOWERS-102 (Table 1), which supports the claim of the authors. On CIFAR-100, Planckian jitter performs slightly worse than colour jittering, which the authors attribute to the reduction of colour invariance in the features. A very similar experiment was also done with different datasets (SSL training on tiny-imagenet, linear classifiers trained on FLOWERS-102, CUB200, VEGFRU, T1K+), which produced similar results and conclusions. Moreover, in another similar experiment, the SSL models were trained with different SSL configurations (SimSiam, SimCLR, Barlow Twins) to indicate the generality of Planckian jitter across SSL configurations. In another experiment, the robustness of the different models on images augmented using Planckian jittering was evaluated. Lastly, the colour sensitivity was analyzed to inspect the impact of colour information on neuron activations for each model.
SP:fa7fa24dcbfa67ffc00471e14aea2ed451bb1bea
This paper first shows that the typical color jittering augmentation is harmful to feature representation learning. The authors then propose a physics-based color augmentation, called Planckian jitter, to improve performance. The proposed Planckian jitter performs better with recent contrastive self-supervised learning schemes.
SP:fa7fa24dcbfa67ffc00471e14aea2ed451bb1bea
miniF2F: a cross-system benchmark for formal Olympiad-level mathematics
1 INTRODUCTION. Shared benchmarks and datasets have historically played a crucial role in driving advances in large-scale applications of deep learning, e.g. in computer vision (Deng et al., 2009) and natural language processing (Wang et al., 2019; Rajpurkar et al., 2016; Paperno et al., 2016). Neural theorem proving is a rapidly developing area which aims to apply techniques from deep learning to interactive theorem proving. To date, most contributions in this area have focused on individual theorem proving systems, each with a separately-implemented mathematics library and with results reported on a dataset-specific test split; examples include the HOList (Bansal et al., 2019a), CoqGym (Yang & Deng, 2019) and LeanStep (Han et al., 2021) theorem proving environments and benchmarks. However, benchmarks from this paradigm are not ideal for measuring the mathematical reasoning ability of neural theorem provers for several reasons. Library-specific train/test splits are siloed by construction, dependent on how theorems and lemmas are split in these libraries, and as such are not directly comparable across systems. Moreover, formal mathematics libraries are closer to software repositories than informal mathematical exposition, and many lemmas are implementation-specific artifacts without precise informal mathematical or cross-system translations. To date, the neural theorem proving community has not organized its efforts around a cross-system benchmark. To address this need and to provide a common resource to research groups working on formal theorem proving, we present miniF2F, a unified cross-system benchmark of formal mathematics of progressively increasing difficulty, centering around Olympiad-level problem statements (AMC, AIME, IMO) as well as high-school and undergraduate maths classes. Both the content and name of miniF2F are inspired by the IMO Grand Challenge (Selsam et al.
, 2019): to build an AI that can win a gold medal in the International Mathematical Olympiad in a formal-to-formal (F2F) format. More precisely, the agent must receive IMO problems written in a formal mathematical format, and must produce a formal (i.e. machine-checkable) proof for that problem. We intend for miniF2F to serve as a stepping stone for different formal systems towards the IMO Grand Challenge (Selsam et al., 2019), as it is end-to-end verifiable, cross-platform and spans a wide range of difficulty. While we report baseline results on miniF2F using GPT-f, a language model based on GPT-3 which has been finetuned for theorem proving, language models are not a mandatory approach for Olympiad problems and this assumption is not reflected in miniF2F, preserving the generality and widespread applicability of the benchmark to systems similar to DeepHOL (Bansal et al., 2019a) or Holophrasm (Whalen, 2016). 2 BACKGROUND AND RELATED WORK. BENCHMARKS In the closely related field of (first-order) automated theorem proving (ATP), the TPTP (Sutcliffe, 2017) benchmark is a library of test problems in a unified format for ATP systems. In interactive theorem proving, the "Freek 100" (Wiedijk, 2008) tracks progress across various interactive theorem provers on a list of 100 mathematical theorems. Wu et al. (2021) built a simplified formal proof environment INT with an associated synthetic inequality benchmark. Competitions and communal challenges have also spurred development in formal theorem proving. The CADE ATP System Competition (CASC) (Sutcliffe, 2016) is a competition that evaluates the performance of first-order automated theorem proving systems. Proof Ground (Haslbeck et al., 2019), part of the ITP conference, is an interactive proving contest (for humans) that supports Coq, Isabelle, and Lean, and focuses on evaluating the effort of formalizing proofs of given problems within a limited time.
Finally, the IMO Grand Challenge (Selsam et al., 2019), a proposal from researchers working on the interactive proof assistant Lean, aims to build a system capable of solving IMO problems in the formal-to-formal format. Due to its convenient framing as a natural language processing task, the domain of informal mathematical reasoning has received more attention than the formal one. MATH (Hendrycks et al., 2021) is a mathematics benchmark comprising 12,500 statements in natural language where exercises are classified into 5 levels of difficulty across various domains. Each exercise is combined with a detailed step-by-step proof in natural language. Scaling state-of-the-art models shows little improvement on MATH, which requires advanced mathematical reasoning capabilities. miniF2F includes a number of formalized statements from MATH. NaturalProofs (Welleck et al., 2021) is another benchmark of natural proofs in mathematics, containing 32k theorem statements and proofs. It essentially contains the proofs in ProofWiki and other resources. While MATH is more oriented towards mathematics exercises, NaturalProofs focuses on proofs of general mathematics theorems. Saxton et al. (2019) built a mathematics dataset with 2 × 10⁶ training examples and 10⁴ test examples, presented in a question-answering format where each statement is paired with a question written in natural language and a direct answer without proof. NEURAL THEOREM PROVING HOList (Bansal et al., 2019a;b; Paliwal et al., 2020) provides an environment as well as a benchmark for HOL Light. They also propose various deep reinforcement learning approaches for theorem proving and report a pass rate of 59.91% on their benchmark. Yang & Deng (2019) built CoqGym, a large-scale dataset of 71k human-written proofs in the Coq proof assistant, which also comes with a learning environment. They report a 30.0% pass rate on the held-out test theorems in CoqGym.
Polu & Sutskever (2020) applied a decoder-only transformer similar to GPT-3 (Brown et al., 2020) to proof-step prediction in Metamath, combined with a log-probability based proof search. They also proposed a methodology to train a value function to further guide proof search, achieving a 56.22% pass rate on the held-out test set. Large language models were applied to Lean by Han et al. (2021). They created an environment around the Lean prover targeted at machine learning and propose a dataset extracted from low-level proof artifacts that is shown to boost performance when used as a self-supervised co-training objective. They report a 48.4% pass rate on held-out test statements from mathlib, Lean's mathematical library (mathlib Community, 2020). 3 MINIF2F BENCHMARK. miniF2F is a dataset of manually formalized statements of Olympiad-type problems, aligned in Lean, Metamath, and Isabelle (partial at the time of writing), providing a cross-platform benchmark for formal mathematical reasoning. Olympiad-type problems are of particular interest for comparing automated provers across different formal systems, as the theories required to solve them are well identified and they generally do not require the definition of new mathematical concepts (a capability that remains beyond the current neural theorem proving state of the art). The formalized statements in miniF2F are drawn from multiple sources, ranging from high-school and undergraduate-level exercises to Olympiad problems. miniF2F also covers different subfields of mathematics as well as proof strategies, focusing on the types of exercises whose statements are expressible in most formal systems. This leads to a systematic focus on algebra, number theory and inequalities because, for example, geometry and combinatorial problems are generally challenging to formalize due to only nascent efforts in these areas in most formal systems.
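For illustration, a statement of this flavor, written in Lean with mathlib conventions, might look as follows; this is a hypothetical example we constructed to show the format, not an actual benchmark entry:

```lean
-- Hypothetical miniF2F-style number theory statement (Lean 3 / mathlib):
-- "Show that for every natural number n, 3 divides n^3 + 2n."
-- Formal proofs are optionally attached in the benchmark; `sorry`
-- leaves the proof obligation open for an automated prover.
theorem example_three_dvd_cube_add_two_mul (n : ℕ) :
  3 ∣ n ^ 3 + 2 * n :=
sorry
```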
The statements in miniF2F are all manually formalized and selected to cover a variety of difficulty levels for both humans and machines. Formal proofs for these statements are optionally attached. miniF2F draws from AIME, AMC, and IMO problems as well as problems from the MATH (Hendrycks et al., 2021) informal dataset. Formalizing problems from the MATH dataset serves two purposes. First, problems in MATH are segmented by difficulty level (from 1 to 5); randomly selecting a subset from each of these difficulty levels allows miniF2F to cover a wider range of difficulty. Second, it provides the community an opportunity to compare the capabilities of formal automated provers to their informal counterparts, as discussed in later sections. miniF2F comprises a test set and a validation set, which are a stratified random split from the statements we formalized such that each set equally covers each problem type and difficulty (when available). Table 1 shows a detailed distribution of these statements. Versioning miniF2F is an evolving effort and new statements will continuously be added. Periodically, we will freeze versions of the benchmark. The current version of the benchmark is v1 and results in this paper are reported using this version. v1 comprises 244 test and 244 valid statements. The set of statements of each version is guaranteed to remain stable, only allowing fixes in case errors are later discovered. Rules of engagement and License miniF2F is meant to serve as a shared resource for research groups working on applying deep learning to formal theorem proving. There is no formal process to submit evaluation results and researchers are simply invited to cite miniF2F, indicating the version used in their evaluations. We also encourage them to contribute proofs found by their approaches back to the benchmark.
The parts of the benchmark associated with each theorem prover (Metamath, Lean, Isabelle; available at https://github.com/openai/miniF2F/tree/v1) are meant to be licensed in a way that is aligned with the licensing usage associated with the theorem prover's main library. As a result, the Metamath version of the benchmark is released under the MIT License, while the Lean and Isabelle versions are released under the Apache License. Formalization effort and challenges We found that, for trained practitioners (but not necessarily experts, including students recently introduced to formal systems), formalizing a statement takes about 15 minutes on average, and reviewing a formalized statement about half of that on average. Note that not all exercises are directly or naturally formalizable. In particular, multi-choice questions, word problems, and exercises that require making a witness or a set explicit as part of the answer present interesting challenges: multi-choice questions: these problems are generally straightforwardly formalizable by reformulating the statement using the right answer only, and could be made "fair" in a competitive setup by formalizing all possible choices and running automated provers on all of them, attributing points only if a proof of the correct answer is provided. word problems: where significant information is presented in natural language, these generally require non-trivial effort to be formalized. We generally formalized them by explicitly modeling the mathematical concepts and expressions presented in natural language, while attempting as much as possible to preserve the mathematical difficulty of the original problem. Sometimes the formalization work constitutes most of the difficulty associated with the original question; in such cases we would discard the problem entirely. problems that require making a set or witness explicit (e.g. find all ... such that ...) are not directly formalizable.
The best approximation we relied on for these was to formalize the statement with the witness or answer provided, turning such exercises into the generation of a proof that the answer is correct and, if needed, that it is the unique one, which is, at times, a much easier exercise. A non-negligible portion of IMO problems are of this kind, which we foresee could become a challenge in the future when fairly comparing humans to automated proving systems in a competitive setup. Porting effort In addition to Metamath, Lean, Isabelle (work in progress) and HOL Light (work in progress), we are eager to extend the coverage of miniF2F to Coq, and will welcome any effort in that direction or to extend miniF2F to further systems.
This paper presents miniF2F, a test suite of Olympiad-level theorem proving problems implemented in Metamath, Lean and Isabelle. miniF2F contains 488 individual theorem statements formalized from Olympiad math contests. GPT-f models trained on Metamath and Lean are evaluated on this test suite.
SP:b27e82ceb1636e24042a76b3749d729029ebb38c
The authors present miniF2F, a dataset of formalized mathematical problems drawn from diverse sources including IMO, AIME, AMC, undergraduate, and high-school problems. The focus is on algebra, inequalities, and number theory, as those problems are easier to formalize than, for example, geometry or combinatorial problems. The formalization is done in Metamath and Lean, with efforts for Isabelle ongoing. The authors run GPT-f on Metamath and Lean, and the tidy baseline (from the PACT paper), on the dataset and present results. They find that proving in Lean yields vastly better performance than in Metamath, which they conjecture is due to access to higher-level tactics in Lean compared to Metamath.
A New Perspective on Fluid Simulation: An Image-to-Image Translation Task via Neural Networks
1 INTRODUCTION. 1.1 TARGET ISSUE. Simulating fluids, streams, and flows is a task in many fields of science. In most cases, this is done by numerical approaches like the finite element method (FEM) (Quarteroni & Valli, 2008) or Lattice Boltzmann methods (LBM) (Mohamad, 2011). These methods provide great advantages like configurable accuracy and fine-grained adaptability to the specific given task. However, to get close-to-reality simulations, some issues have to be addressed as well. One of the most important issues emerges from the approximation property of these methods. This property limits the configuration of accuracy in both directions: if the accuracy becomes too low, the approximation will become too coarse; in other words, it will no longer reflect reality or fulfill the given task. In contrast, if it becomes too high, effects like the curse of dimensionality will set in, causing exploding problem sizes. As a result, these methods have only a minimally adjustable computation-time range, as the accuracy is strongly connected to the problem size and therefore also to the computation time, and may exceed any given maximum time span. To avoid this issue, a different approach might be more satisfying. As our focus is on problems with no need for high accuracy, we decided to follow a more approximate path by involving the human observer. This observer typically isn't interested in the numerical representation of the simulation, but in the easy-to-interpret image-based one. That leads directly to the idea of focusing on an image-to-image translation for each time step of the simulation.
The results of such an approach will be much more inaccurate than typical numerical results (the very limited resolution of the color spaces alone results in high errors compared to FEM or LBM results), but for a wide class of problems the results will be good enough, under the condition that the computations are very fast. Therefore, the main task for this approach is to translate an image representation of time step n into the image representation of step n+1 as fast and as accurately as possible. For the translation, we decided to use a neural network, based on good results for image-to-image translation (Isola et al., 2017). In total, this approach (starting with an image representation of the starting values and translating it recursively into the next time steps of the simulation) leads to the following questions: • Is it possible ... regarding computation time and accuracy: Can we get results fast enough to accept the additional approximation? Is the accuracy high enough to get useful results? Will this accuracy remain over many simulation time steps? • ... to get accurate simulations ... regarding the real world and the human observer: How much noise in the picture is too much? When and why does our approach begin to fail? • ... very fast ... regarding the computation time and deployed hardware: After training, is it possible to run a complex simulation on a laptop at home? Will it be fast enough to meet real-time conditions? • ... with this approach ... regarding the recursive usage of neural networks as well as unedited image input: Can we use unedited data, or do we need preprocessing steps like FFT? Is the recursive approach expedient? Are image processing steps like morphological filtering needed between each step? Is the cGAN approach with a U-Net architecture reasonable? Do we need additional LSTM units? • ... while using as few parameters as possible? regarding the input-output ratio.
How much data is needed to produce the same output as LBM or FEM? How generalizable is our approach? Do we need to train every explicit model, geometry, and structure, or is it possible to transfer results? Some of the emerging questions aren't trivial at all, and this paper can't answer all of them. With our work, we want to show the benefits of the mentioned approach and highlight opportunities and limits. 1.2 STATE OF THE ART. As our approach covers multiple topics, this state-of-the-art section will compare our work to numerical approaches, image translation methods, architectures of neural networks, comparable approaches for fluid dynamics, and previous work our approach is based on. For a better overview, we will headline the corresponding sections and summarize the latest developments. Numerical Approaches: Standard numerical methods for solving PDEs, like FEM (Quarteroni & Valli, 2008) or LBM (Mohamad, 2011), are widely known. Ideas like approximate preconditioning (Anzt et al., 2018) or multi-precision solvers of systems of equations (Gratton et al., 2019; Aliaga et al., 2020) have emerged over the last years. These approaches can have a great impact on a specific part of the numerical PDE solver, but this doesn't necessarily lead to faster run-times for the whole PDE solver. There are also some more global approaches to speed up a complete PDE solving method through mathematical optimization, like Gracie et al. (2006); Du & Wang (2015); Etzmuss et al. (2003), but these approaches are highly adapted to a specific problem. Based on these findings, a purely mathematical approach doesn't seem to be the right way. Image Translation: Our basic idea is based on image translation with a neural network. In the last years many approaches in this direction appeared, mainly (but not only) to manipulate images or movies in real time (Liu et al., 2017; Radford et al., 2015; Zhao et al., 2020).
Most of these approaches do not really match our task, which results in unsuitable network architectures or unrealizable constraints. One matching approach, however, is Isola et al. (2017). Our cGAN approach is inspired by the ideas and the excellent results given there and in additional work on cGAN structures such as Karras et al. (2017); Zhang et al. (2017).
Neural Network Architectures: In addition to the cGAN approach, we need to find the right architecture for our neural networks. Following Isola et al. (2017), we used a PatchGAN architecture (Li & Wand, 2016) for the discriminator. For the generator, we used the proposed U-Net architecture (Isola et al., 2017). Regarding the iterative structure of our translation task, we added long short-term memory modules (LSTM) (Hochreiter & Schmidhuber, 1997) to improve the U-Net structure.
Neural Fluid Dynamics and PDE Solvers: Combining numerical methods with machine-learning algorithms is not entirely new, even in the field of fluid dynamics. Two very recent examples are Li et al. (2020) and Pfaff et al. (2021). While the first focuses on the numerical operators and numerical errors, the second works with the numerical discretization. Both, like other approaches before them, start within the numerical solving pipeline. Our approach of working solely on the image representation clearly separates us from previous approaches and is a unique characteristic of this paper. Unfortunately, our work is not yet at the point of being fully comparable with these approaches, but we are aiming for that in the near future.
Additional Influences: Finally, there is one more idea we have to mention as part of the basis of our work. In Lehmann et al. (2020) the authors showed why and how a binary map is a good option to define a sharp separation of areas within an image for the image-translation task.
We adopted this option for our approach, as we need a sharp separation between the streaming area and the environment area in the image representation.
1.3 CONTENT ORGANIZATION. In detail, we will provide data for the following main findings in this paper:
• Using a pix2pix approach with a U-Net structure and cGAN training is a useful way to get approximated simulation results
• Results from the recursive application of neural networks remain useful over tens to hundreds of iterations, with only a moderately increasing additional approximation error
• The speed-up can be up to a factor of 9 (on GPU-based hardware) compared to an FEM-based simulation
• Different color spaces, boundary mappings, and input data sets are possible and may lead to data-driven approaches
We structured this paper as follows: we start with some basic knowledge about FEM and image comparison, followed by explanations of our neural-network architecture and our data generation in section 2. Section 3 covers our results for our test setting (see A.3) and looks beyond this specifically configured environment. We close this work with a conclusion and summary in section 4 as well as a view of our future directions. The statement of reproducibility can be found at the very end.
2 THEORETICAL BACKGROUND. 2.1 NUMERICAL BASIS AND MEASUREMENT. Looking into our approach, everything starts with a given PDE (partial differential equation), the mathematical model behind descriptions of physical phenomena. In our case, these are the Navier-Stokes equations for incompressible flows (Oymak & Selcuk, 1996). The starting conditions, boundary conditions, and physical parameters are chosen so as to obtain a Kármán vortex street within a canal with an obstacle (fig. 1) (Schäfer et al., 1996). For discretization, the method of lines (Oymak & Selcuk, 1996) is used, which leads to discrete time steps, each equipped with the same discrete space grid.
The values derived from the chosen solvers (Θ-step methods for the time direction (Berzins & Furzeland, 1992) and a variant of Newton's method (Nemec & Zingg, 2002) for the non-linear equations in space) are finally mapped to a chosen color space (in our case the [0, 255] grayscale). This very rough spotlight on the basic method shows where the main issue lies: there are many approximation errors in the reformulation, discretization, numerical solvers, and final mapping to the color space (Quarteroni & Valli, 2008). Additionally, the single errors may (or may not) accumulate, which is why it can be very difficult to swap out a single solver or transformation method in the solving pipeline. Especially in the case of neural networks, where no useful formal error-limit guarantees can be given, a global replacement approach seems more promising than changing a single pipeline step. As this means replacing the whole numerical method with a neural network and keeping only the starting values to create the first input data, we have to define a meaningful quality measurement for our results. For run time, a simple time difference, or equivalently a speed-up measurement, is good enough. For accuracy, comparing the numerically created image of a time step pixel by pixel with one created by a neural network may yield high errors even when the images are indistinguishable to a human observer. Conversely, the same measurement may yield only small errors even when one can find obviously false streaming data (with small changes in the colors). As we found in previous testing, this problem also occurs in our case for average pixel errors and correlation-based measurements. Therefore, the most promising quality measurement for us is based on the mean square error (MSE) and was originally developed to evaluate the quality of lossy compression algorithms.
It is called the peak signal-to-noise ratio (PSNR) (Korhonen & You, 2012):

ε_PSNR = 10 · log10(255² / ε_MSE) [dB].  (1)

In theory, a higher PSNR value means fewer detectable differences between the images, and values above 30 dB should result in differences that are undetectable for a human observer (Mehra, 2016). In practice, we noticed that in many cases values below this threshold may be acceptable as well. The reason is that the first observable errors are irrelevant ones in our setting, such as discolored vertical or horizontal pixel lines that obviously cannot be interpreted as streaming data. Therefore, we marked the theoretical limit in our charts and stopped our iteration some steps later.
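As a sketch (our own illustration, not the authors' code), Eq. (1) can be computed for two 8-bit grayscale frames as follows; the frame names are hypothetical:

```python
import numpy as np

def psnr(reference: np.ndarray, prediction: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB for 8-bit grayscale images (Eq. 1)."""
    mse = np.mean((reference.astype(np.float64) - prediction.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(255.0 ** 2 / mse)

# A uniform error of 1 gray level gives MSE = 1, i.e. PSNR ≈ 48.13 dB,
# well above the 30 dB visibility threshold mentioned in the text.
a = np.zeros((64, 64), dtype=np.uint8)
b = a + 1
print(round(psnr(a, b), 2))  # 48.13
```

Note that the 255² term ties the metric to the chosen color space; a different quantization range would change the numerator accordingly.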
The paper casts the problem of 2D fluid-flow simulation as an image-to-image translation task. A cGAN with standard architectures (U-Net generator, PatchGAN discriminator) is trained to advance the visualization of the simulation to the next time step, and an extension with an LSTM block is explored. The model is evaluated in an autoregressive setting on the problem of fluid flow around a rectangle, and the results are evaluated with image metrics (PSNR).
A New Perspective on Fluid Simulation: An Image-to-Image Translation Task via Neural Networks
1 INTRODUCTION. 1.1 TARGET ISSUE. Simulating fluids, streams, and flows is a task in many fields of science. In most cases, this is done with numerical approaches like the finite element method (FEM) (Quarteroni & Valli, 2008) or Lattice Boltzmann methods (LBM) (Mohamad, 2011). These methods provide great advantages like configurable accuracy and fine-grained adaptability to the specific task at hand. However, to get close-to-reality simulations, some issues have to be addressed as well. One of the most important issues emerges from the approximation property of these methods. This property limits the configuration of accuracy in both directions: if the accuracy becomes too low, the approximation becomes too coarse; in other words, it no longer reflects reality or fulfills the given task. In contrast, if it becomes too high, effects like the curse of dimensionality set in, causing exploding problem sizes. As a result, these methods have only a minimally adjustable computation-time range, as the accuracy is strongly tied to the problem size and therefore also to the computation time, and may exceed any given maximum time span. To avoid this issue, a different approach might be more satisfying. As our focus is on problems with no need for high accuracy, we decided to follow a more approximate path by involving the human observer. This observer typically isn't interested in the numerical representation of the simulation, but in the easy-to-interpret image-based one. That leads directly to the idea of focusing on an image-to-image translation for each time step of the simulation.
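The per-step translation idea amounts to a simple autoregressive rollout: render the starting values as an image, feed it to the translator, then feed each output back in. As a minimal sketch (our own illustration, not the paper's code; `translate_step` is a hypothetical stand-in for the trained generator):

```python
import numpy as np

def translate_step(frame: np.ndarray) -> np.ndarray:
    """Stand-in for the trained image-to-image generator.
    Here it merely shifts the pattern one pixel downstream."""
    return np.roll(frame, shift=1, axis=1)

def rollout(start_frame: np.ndarray, n_steps: int) -> list:
    """Recursively translate the image of time step n into step n+1."""
    frames = [start_frame]
    for _ in range(n_steps):
        frames.append(translate_step(frames[-1]))
    return frames

start = np.zeros((8, 8), dtype=np.uint8)
start[:, 0] = 255                  # "inflow" column of the starting values
frames = rollout(start, n_steps=3)
print(len(frames), int(frames[3][0, 3]))  # 4 255
```

The recursion is also where errors accumulate: any artifact the generator introduces at step n becomes part of the input at step n+1, which is why the questions about long rollouts above are central.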
The authors pose fluid simulation as an image-to-image translation task. From this perspective, approximating fluid flow using a cGAN can potentially improve speed (at the expense of accuracy) over FEM methods. This can be useful in situations where accuracy can be traded for speed (e.g. video games). The authors show a U-Net can produce good future predictions for Kármán vortex streets and is faster than an off-the-shelf multiphysics simulator. In addition, the authors ablate various architectural design choices.
Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm
Recently, large-scale Contrastive Language-Image Pre-training (CLIP) (Radford et al., 2021) has attracted unprecedented attention for its impressive zero-shot recognition ability and excellent transferability to downstream tasks. However, CLIP is quite data-hungry and requires 400M image-text pairs for pre-training, thereby restricting its adoption. This work proposes a novel training paradigm, Data-efficient CLIP (DeCLIP), to alleviate this limitation. We demonstrate that by carefully utilizing the widespread supervision among the image-text pairs, our DeCLIP can learn generic visual features more efficiently. Instead of using the single image-text contrastive supervision, we fully exploit the data's potential through the use of (1) self-supervision within each modality; (2) multi-view supervision across modalities; (3) nearest-neighbor supervision from other similar pairs. Benefiting from these intrinsic supervisions, our DeCLIP-ResNet50 can achieve 60.4% zero-shot top-1 accuracy on ImageNet, which is 0.8% above CLIP-ResNet50 while using 7.1× less data. Our DeCLIP-ResNet50 outperforms its counterpart on 8 out of 11 visual datasets when transferred to downstream tasks. Moreover, scaling up the model and compute also works well in our framework.
1 INTRODUCTION. Over the last few years, pre-trained models have greatly revolutionized computer vision (CV) and natural language processing (NLP). The first wave of exploring pre-trained models took place in the field of CV. Deep convolutional neural nets (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2016) are pre-trained on the well-labeled ImageNet (Deng et al., 2009) and then transferred to downstream CV tasks (Girshick et al., 2014; Long et al., 2015; Vinyals et al., 2015). Standardly, CV models are pre-trained to predict a fixed set of pre-defined object categories, e.g., 1000 classes in ImageNet.
However, this supervised pre-training is hard to scale, since arduous human labeling is needed to specify new visual concepts. When pre-training meets NLP, the intrinsic supervision within natural language makes the pre-training more scalable (Devlin et al., 2018; Radford et al., 2019; Brown et al., 2020). Witnessing the progress in NLP, researchers use natural-language supervision to learn visual features. Language-image pre-training can scale up to a very large size, benefiting from the abundant image-text pairs on the Internet. For instance, CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) adopt a contrastive loss to push the embeddings of matched image-text pairs together while pushing those of non-matched pairs apart. They achieve impressive performance by learning from an enormous dataset that contains 400M/1B image-text pairs. However, these methods also require huge storage and computing resources, which is not affordable for most laboratories and companies. We argue that these prior arts use only the single image-text contrastive supervision while overlooking the widespread supervision within the pairs, and are thus inefficient. Firstly, rich structural information underlies each modality itself (LeCun & Misra, 2021). We can tweak some words/pixels in a sentence/image while retaining a similar semantic meaning. This sort of self-supervision can be exploited to learn a more common-sense representation for each modality (Devlin et al., 2018; He et al., 2020; Chen et al., 2020a).
Moreover, inspired by contrasting multiple crops of an image (Caron et al., 2020), we further extend multi-view supervision¹ into our multi-modality setting. Specifically, each image is paired with multiple textual descriptions obtained via stochastic augmentations, and vice versa. The benefit is intuitive: this auxiliary multi-view supervision brings in more invariant and robust information. Besides these overlooked supervisions, we propose a novel nearest-neighbor (NN) supervision from other similar pairs. This NN supervision is mainly based on the intuition that one image is likely to have other similar text descriptions in the dataset. As shown in the figure on the right, the image with the text 'going to see a lot of vintage tractors this week' can also be described by 'vintage at tractors a gathering'. For this reason, we sample the NN in the embedding space and utilize it as an additional supervisory signal. Aggregating these supervisions leads to our novel training paradigm DeCLIP, which stands for Data-efficient Contrastive Language-Image Pre-training.
[Figure 2: Transfer of DeCLIP-ResNet50 (abbr. DeC) and CLIP-ResNet50 (abbr. C) to 11 downstream visual datasets (ImageNet1k, CIFAR100, FLOWERS, PETS, CIFAR10, SUN, FOOD101, CARS, CALTECH, AIRCRAFT, DTD) using linear-probe evaluation. DeCLIP achieves better results on 8 out of 11 datasets.]
Extensive experiments show the effectiveness and efficiency of our DeCLIP. As shown in Fig. 1, with a ResNet50 image encoder and a Transformer text encoder, our model can achieve 60.4% zero-shot top-1 accuracy on ImageNet, which is 0.8% above CLIP-ResNet50 while using 7.1× less data. Using only 88M image-text pairs, our best ResNet50/ViT-B32 models boost the zero-shot performance to 62.5% and 66.2%, nearly 3.0% higher than the best numbers reported for these two architectures. We further verify the transferability of our models on downstream tasks. As indicated in Fig. 2, our DeCLIP-ResNet50 outperforms its counterpart on 8 out of 11 visual datasets. Moreover, scaling up the model and compute also works well in our framework. Using 4.5× less data, our DeCLIP-RegNetY-64GF achieves 73.7% zero-shot ImageNet top-1 accuracy, which is on par with CLIP-R50×64. Pre-trained models, code, and datasets shall be released to the community. The contributions are summarized as follows:
• To the best of our knowledge, this is the first work to study self-supervision and cross-modal multi-view supervision in the million-scale image-text pre-training task. Our work opens a new direction: fully exploiting the intrinsic supervision within multi-modal data instead of scaling up data naively.
• We propose novel cross-modal Nearest-Neighbor Supervision (NNS) to harness information from other similar pairs. The NNS can also be regarded as a semantic-level augmentation.
¹View is originally a visual concept. For simplicity, we use the same term for language.
2 RELATED WORK. 2.1 PRE-TRAINED MODELS. The critical idea of pre-training is to first extract general knowledge implicitly from a massive amount of data and then transfer this knowledge to versatile downstream tasks (Han et al., 2021). With the transformer architecture (Vaswani et al., 2017), big pre-trained NLP models, such as the BERT series (Devlin et al., 2018; Liu et al., 2019) and GPT series (Radford et al., 2019; Brown et al.
, 2020), have dominated this area (Han et al., 2021). Besides the exquisite architecture, the great success comes from the tremendous language data available on the Internet and the labor-free supervision (Yang et al., 2019; Radford et al., 2019; Devlin et al., 2018) within the language itself. In the field of CV, supervised pre-training on ImageNet is still the standard practice. While achieving great success on downstream CV tasks (Girshick et al., 2014; Long et al., 2015; Vinyals et al., 2015), this supervised manner is hard to scale. To address this challenge, our DeCLIP learns directly from image-text pairs that are abundant across the Internet. More importantly, by exploiting the widespread supervision within the pairs, our DeCLIP is more data-efficient than the prior art.
2.2 SUPERVISION WITHIN DATA. Language supervision: Joulin et al. (2016); Gomez et al. (2017); Zhang et al. (2020); Sariyildiz et al. (2020); Desai & Johnson (2021) demonstrate the effectiveness of learning transferable visual features from language supervision. However, these works are limited to small datasets such as Flickr (Young et al., 2014) or COCO (Lin et al., 2014). The pioneering works CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) achieve impressive performance by learning from 400M/1B image-text pairs. We follow these two works and improve their data efficiency.
Visual self-supervision: Our work is also highly related to self-supervised learning (SSL) (LeCun & Misra, 2021). Contrastive learning, as a pretext task of SSL, has achieved remarkable success in visual representation learning (He et al., 2020; Chen et al., 2020a; Caron et al., 2020; Grill et al., 2020). Researchers have also extended contrastive learning to multi-modal settings (Yuan et al., 2021). However, that work is limited to the small COCO dataset (Yuan et al., 2021).
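As a rough sketch of the image-text contrastive objective that CLIP-style models build on (our own NumPy illustration, not the authors' code): matched pairs sit on the diagonal of a batch similarity matrix, and a symmetric cross-entropy is applied over its rows and columns:

```python
import numpy as np

def clip_contrastive_loss(img_emb: np.ndarray, txt_emb: np.ndarray,
                          tau: float = 0.07) -> float:
    """Symmetric InfoNCE over a batch: pair i of (image, text) is the
    positive; all other pairings in the batch act as negatives."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / tau                      # (N, N) cosine similarities
    log_sm_rows = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_sm_cols = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    loss_i2t = -np.mean(np.diag(log_sm_rows))       # image -> text direction
    loss_t2i = -np.mean(np.diag(log_sm_cols))       # text -> image direction
    return 0.5 * (loss_i2t + loss_t2i)

# Perfectly aligned pairs give a lower loss than mismatched ones.
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
aligned = clip_contrastive_loss(emb, emb)
shuffled = clip_contrastive_loss(emb, emb[::-1])
print(aligned < shuffled)  # True
```

DeCLIP keeps this term and adds the self-supervised, multi-view, and nearest-neighbor terms on top of it; the temperature value 0.07 here is an illustrative choice, not taken from the paper.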
Nearest-neighbor supervision: Recently, researchers have exploited nearest-neighbor supervision to learn visual features (Dwibedi et al., 2021; Van Gansbeke et al., 2021). They find that using nearest neighbors as positive samples in the contrastive loss improves performance on multiple downstream tasks. However, they mainly focus on single-modality visual pre-training on relatively small datasets such as ImageNet. We propose a novel nearest-neighbor supervision for multi-modal learning to harness information from other similar pairs.
2.3 MULTI-MODAL LEARNING. Multi-modal learning aims to build models that can process and relate information from multiple modalities (Baltrušaitis et al., 2018). Vision-language tasks, such as VQA (Antol et al., 2015; Gao et al., 2019) and image captioning (Xu et al., 2015), require an understanding of both textual and visual semantics. Most vision-language models use a stack of cross-modal transformers to fuse and align the information between text and image, such as LXMERT (Tan & Bansal, 2019), UNITER (Chen et al., 2020b), ViLBERT (Lu et al., 2019), VisualBERT (Li et al., 2019), OSCAR (Li et al., 2020), and Pixel-BERT (Huang et al., 2020). These methods either need an off-the-shelf object detector to extract region features or dedicated cross-modal transformer layers, significantly hindering their scalability. Our DeCLIP, by contrast, uses a simple yet effective two-tower framework with multi-modal interaction only at the top. Moreover, this series of models (Radford et al., 2021; Jia et al., 2021; Huo et al., 2021) can perform zero-shot recognition, adapting to new categories with no labeled data seen. Shen et al. (2021) also show that the pre-trained CLIP model can significantly benefit downstream VQA and image-captioning tasks. Our DeCLIP is expected to be compatible with more modalities, e.g., acoustic signals (Akbari et al., 2021).
With more modalities included, more correlated supervision is expected to be exploited. We hope our DeCLIP can inspire and advance the future study of multi-modal learning.
[Figure caption fragment: ... correct pairings of a batch of (image, text) training examples. (b) Our DeCLIP overview. ① Self-Supervision (SS): for image SS, we maximize the similarity between two augmented views of the same instance; for text SS, we leverage Masked Language Modeling (MLM) within a text sentence. ② Cross-modal Multi-View Supervision (MVS): we first take two augmented views of both image and text, then contrast the 2 × 2 image-text pairs. ③ Nearest-Neighbor Supervision (NNS): we sample text NNs in the embedding space to serve as additional supervision. The combination of the three supervisions leads to efficient multi-modal learning.]
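The nearest-neighbor supervision idea (picking the most similar other text embedding as an extra positive for an image) can be sketched as follows. This is our own illustration under assumed shapes, not DeCLIP's implementation, which samples neighbors from a maintained embedding queue:

```python
import numpy as np

def nearest_neighbor_texts(txt_emb: np.ndarray) -> np.ndarray:
    """For each text embedding, return the index of its nearest neighbor
    (by cosine similarity) among the other texts; that neighbor can then
    serve as an additional positive for the paired image."""
    t = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    sim = t @ t.T
    np.fill_diagonal(sim, -np.inf)   # exclude the text itself
    return sim.argmax(axis=1)

# Texts 0 and 1 describe similar content (nearly parallel embeddings),
# so they pick each other as nearest neighbors.
txt = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
print(nearest_neighbor_texts(txt))  # [1 0 1]
```

The returned indices plug into the same contrastive loss as the original pairs, which is why the paper also describes NNS as a semantic-level augmentation.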
The paper proposes DeCLIP to further exploit the data's potential by adding three training objectives to CLIP pre-training: 1) inspired by SimSiam and BERT, self-supervised objectives are added for both image and text; 2) different views are generated for both images and text, and contrastive objectives are applied across them; 3) neighboring texts are sampled as additional positive examples. DeCLIP improves data efficiency: with web-crawled data, it outperforms its CLIP counterparts using 4.5× less data. In addition, while the added objectives, and especially the extra views, increase per-batch compute time by 1.5×, the authors show that DeCLIP still outperforms CLIP under the same compute-time budget.
SP:81cd76230b5fb152f865202149938069ef659ae7
Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm
Recently, large-scale Contrastive Language-Image Pre-training (CLIP) (Radford et al., 2021) has attracted unprecedented attention for its impressive zero-shot recognition ability and excellent transferability to downstream tasks. However, CLIP is quite data-hungry and requires 400M image-text pairs for pre-training, thereby restricting its adoption. This work proposes a novel training paradigm, Data efficient CLIP (DeCLIP), to alleviate this limitation. We demonstrate that by carefully utilizing the widespread supervision among the image-text pairs, our DeCLIP can learn generic visual features more efficiently. Instead of relying on the single image-text contrastive supervision alone, we fully exploit the data's potential through the use of (1) self-supervision within each modality; (2) multi-view supervision across modalities; (3) nearest-neighbor supervision from other similar pairs. Benefiting from this intrinsic supervision, our DeCLIP-ResNet50 achieves 60.4% zero-shot top-1 accuracy on ImageNet, which is 0.8% above CLIP-ResNet50 while using 7.1× less data. Our DeCLIP-ResNet50 outperforms its counterpart on 8 out of 11 visual datasets when transferred to downstream tasks. Moreover, scaling up the model and compute also works well in our framework. 1 INTRODUCTION . Over the last few years, pre-trained models have greatly revolutionized computer vision (CV) and natural language processing (NLP). The first wave of pre-trained models took place in the field of CV. Deep convolutional neural nets (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2016) are pre-trained on the well-labeled ImageNet (Deng et al., 2009) and then transferred to downstream CV tasks (Girshick et al., 2014; Long et al., 2015; Vinyals et al., 2015). As standard practice, CV models are pre-trained to predict a fixed set of pre-defined object categories, e.g., the 1000 classes of ImageNet.
However, this supervised pre-training is hard to scale, since specifying new visual concepts requires arduous human labeling. When pre-training meets NLP, the intrinsic supervision within natural language makes pre-training more scalable (Devlin et al., 2018; Radford et al., 2019; Brown et al., 2020). Witnessing the progress in NLP, researchers have turned to natural language supervision to learn visual features. Language-image pre-training can scale up to a very large size, benefiting from the abundant image-text pairs on the Internet. For instance, CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) adopt a contrastive loss to push the embeddings of matched image-text pairs together while pushing those of non-matched pairs apart. They achieve impressive performance by learning from enormous datasets of 400M/1B image-text pairs. However, these methods also require huge storage and computing resources, which is not affordable for most laboratories and companies. We argue that these prior arts use only the single image-text contrastive supervision while overlooking the widespread supervision within the pairs, and are thus inefficient. Firstly, rich structural information underlies each modality itself (LeCun & Misra, 2021): we can tweak some words/pixels in a sentence/image while retaining a similar semantic meaning. This sort of self-supervision can be exploited to learn a more common-sense representation for each modality (Devlin et al., 2018; He et al., 2020; Chen et al., 2020a).
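The image-text contrastive loss that CLIP and ALIGN build on can be made concrete with a small sketch. This is an illustrative NumPy implementation of the symmetric InfoNCE objective, not the authors' code; the function name and the temperature value are our own assumptions.

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of matched (image, text) pairs.

    img_emb, txt_emb: (N, D) arrays; row i of each forms a matched pair.
    """
    # L2-normalize so dot products are cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (N, N) similarity matrix

    # Matched pairs lie on the diagonal; treat them as the positive class.
    labels = np.arange(len(logits))

    def xent(l):
        # Row-wise softmax cross-entropy against the diagonal labels.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(l)), labels].mean()

    # Average the image->text and text->image directions.
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this loss pulls each matched embedding pair together while pushing all non-matched pairs in the batch apart, as described above.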
[Figure 2; bar charts omitted. The figure compares CLIP-ResNet50 (abbr. C) and DeCLIP-ResNet50 (abbr. DeC) linear-probe accuracy on ImageNet1k, CIFAR100, FLOWERS, PETS, CIFAR10, SUN, FOOD101, CARS, CALTECH, AIRCRAFT, DTD, and their average. Caption: Transfer the DeCLIP-ResNet50 and CLIP-ResNet50 to 11 downstream visual datasets using linear probe verification. Our DeCLIP achieves better results in 8 out of 11 datasets.] Moreover, inspired by the contrasting of multi-crops within an image (Caron et al., 2020), we further extend multi-view supervision (Footnote 1) to our multi-modality setting. Specifically, each image is paired with multiple textual descriptions obtained via stochastic augmentations, and vice versa. The benefit is intuitive: this auxiliary multi-view supervision brings more invariant and robust information. Beyond these overlooked supervisions, we propose a novel nearest-neighbor (NN) supervision from other similar pairs. This NN supervision is based on the intuition that one image is likely to have other similar text descriptions within the dataset. As shown in the figure on the right, the image with the text 'going to see a lot of vintage tractors this week' can also be described by 'vintage at tractors a gathering'. For this reason, we sample the NN in the embedding space and utilize it as an additional supervisory signal. Aggregating these supervisions leads to our novel training paradigm DeCLIP, which stands for Data efficient Contrastive Language-Image Pre-training. Extensive experiments show the effectiveness and efficiency of our DeCLIP. As shown in Fig. 1, with a ResNet50 image encoder and a Transformer text encoder, our model achieves 60.4% zero-shot top-1 accuracy on ImageNet, which is 0.8% above CLIP-ResNet50 while using 7.1× less data. Using only 88M image-text pairs, our best ResNet50/ViT-B32 models boost the zero-shot performance to 62.5% and 66.2%, nearly 3.0% higher than the best numbers reported for these two architectures. We further verify the transferability of our models on downstream tasks. As indicated in Fig. 2, our DeCLIP-ResNet50 outperforms its counterpart on 8 out of 11 visual datasets. Moreover, scaling up the model and compute also works well in our framework. Using 4.5× less data, our DeCLIP-RegNetY-64GF achieves 73.7% zero-shot ImageNet top-1 accuracy, on par with CLIP-R50×64. Pre-trained models, code, and datasets shall be released to the community. The contributions are summarized as follows: • To the best of our knowledge, this is the first work to study self-supervision and cross-modal multi-view supervision in the million-scale image-text pre-training task. Our work opens a new direction: fully exploiting the intrinsic supervision within multi-modal data instead of naively scaling up data. • We propose novel cross-modal Nearest-Neighbor Supervision (NNS) to harness information from other similar pairs. The NNS can also be regarded as a semantic-level augmentation. Footnote 1: 'View' is originally a visual concept; for simplicity, we use the same term for language. 2 RELATED WORK . 2.1 PRE-TRAINED MODELS . The critical idea of pre-training is to first extract general knowledge implicitly from a massive amount of data and then transfer that knowledge to versatile downstream tasks (Han et al., 2021). With the transformer architecture (Vaswani et al., 2017), big pre-trained NLP models, such as the BERT-series (Devlin et al., 2018; Liu et al., 2019) and GPT-series (Radford et al., 2019; Brown et al.
, 2020), have dominated this area (Han et al., 2021). Besides the exquisite architectures, this great success comes from the tremendous amount of language data on the Internet and the labor-free supervision (Yang et al., 2019; Radford et al., 2019; Devlin et al., 2018) within the language itself. In the field of CV, supervised pre-training on ImageNet is still the standard practice. While achieving great success on downstream CV tasks (Girshick et al., 2014; Long et al., 2015; Vinyals et al., 2015), this supervised manner is hard to scale. To address this challenge, our DeCLIP learns directly from image-text pairs, which are abundant across the Internet. More importantly, by exploiting the widespread supervision within the pairs, our DeCLIP is more data-efficient than the prior art. 2.2 SUPERVISION WITHIN DATA . Language supervision Joulin et al. (2016); Gomez et al. (2017); Zhang et al. (2020); Sariyildiz et al. (2020); Desai & Johnson (2021) demonstrate the effectiveness of learning transferable visual features from language supervision. However, these works are limited to small datasets such as Flickr (Young et al., 2014) or COCO (Lin et al., 2014). The pioneering works CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) achieve impressive performance by learning from 400M/1B image-text pairs. We follow these two works and improve their data efficiency. Visual self-supervision Our work is also highly related to self-supervised learning (SSL) (LeCun & Misra, 2021). Contrastive learning, as a pretext task of SSL, has achieved remarkable success in visual representation learning (He et al., 2020; Chen et al., 2020a; Caron et al., 2020; Grill et al., 2020). Researchers have also extended contrastive learning to multi-modal settings (Yuan et al., 2021); however, that work is limited to the relatively small COCO dataset.
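The cross-modal Multi-View Supervision described earlier (contrasting the 2 × 2 combinations of two augmented image views and two augmented text views) can be written compactly. The following is an illustrative NumPy sketch under our own naming; the `contrastive` helper stands in for the usual symmetric InfoNCE term, and the equal weighting of the four pairs is an assumption.

```python
import numpy as np

def contrastive(a, b, t=0.07):
    """Diagonal-positive InfoNCE term between two (N, D) embedding batches."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / t
    logits = logits - logits.max(axis=1, keepdims=True)  # stability
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(logp).mean()

def mvs_loss(img_v1, img_v2, txt_v1, txt_v2):
    """Contrast every (image view, text view) combination: 2 x 2 pairs."""
    pairs = [(img_v1, txt_v1), (img_v1, txt_v2),
             (img_v2, txt_v1), (img_v2, txt_v2)]
    return float(np.mean([contrastive(i, t) for i, t in pairs]))
```

Because all four view combinations of a matched pair are treated as positives, the model is pushed toward representations that are invariant to the augmentations in both modalities.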
Nearest-neighbor supervision Recently, researchers have exploited nearest-neighbor supervision to learn visual features (Dwibedi et al., 2021; Van Gansbeke et al., 2021). They find that using nearest neighbors as positive samples in the contrastive loss improves performance on multiple downstream tasks. However, they mainly focus on single-modality visual pre-training on relatively small datasets such as ImageNet. We propose novel nearest-neighbor supervision for multi-modal learning to harness information from other similar pairs. 2.3 MULTI-MODAL LEARNING . Multi-modal learning aims to build models that can process and relate information from multiple modalities (Baltrušaitis et al., 2018). Vision-language tasks, such as VQA (Antol et al., 2015; Gao et al., 2019) and image captioning (Xu et al., 2015), require an understanding of both textual and visual semantics. Most vision-language models use a stack of cross-modal transformers to fuse and align the information between text and image, e.g., LXMERT (Tan & Bansal, 2019), UNITER (Chen et al., 2020b), ViLBERT (Lu et al., 2019), VisualBERT (Li et al., 2019), OSCAR (Li et al., 2020), and Pixel-BERT (Huang et al., 2020). These methods need either an off-the-shelf object detector to extract region features or dedicated cross-modal transformer layers, which significantly hinders their scalability. Our DeCLIP, by contrast, uses a simple yet effective two-tower framework with multi-modal interaction only at the top. Moreover, this series of models (Radford et al., 2021; Jia et al., 2021; Huo et al., 2021) can perform zero-shot recognition, adapting to new categories without any labeled data. Shen et al. (2021) also show that the pre-trained CLIP model can significantly benefit downstream VQA and image captioning tasks. Our DeCLIP is designed to be compatible with further modalities, e.g., acoustic signals (Akbari et al., 2021).
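The nearest-neighbor selection step behind NNS can be sketched in a few lines. This assumes, as is common in NN-based contrastive methods, that past text embeddings are cached in a queue; the function name, the cosine-similarity selection rule, and the queue mechanics are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def nearest_neighbor_positives(txt_emb, queue):
    """For each text embedding, return its nearest neighbor in `queue`.

    txt_emb: (N, D) current-batch text embeddings.
    queue:   (Q, D) cached text embeddings from earlier batches.
    The returned (N, D) rows serve as additional positives in the
    contrastive loss, alongside each image's own caption.
    """
    t = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    q = queue / np.linalg.norm(queue, axis=1, keepdims=True)
    sim = t @ q.T                      # cosine similarity to cached texts
    return queue[sim.argmax(axis=1)]   # nearest cached embedding per row
```

Using the retrieved neighbor as an extra positive acts as the semantic-level augmentation mentioned in the contributions: a caption from a different but similar pair supervises the image as well.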
As more modalities are included, more correlated supervision can be exploited. We hope our DeCLIP can inspire and advance future study of multi-modal learning. [Figure 1 caption, truncated] ...the correct pairings of a batch of (image, text) training examples. (b) Our DeCLIP overview. (1) denotes Self-Supervision (SS): for image SS, we maximize the similarity between two augmented views of the same instance; for text SS, we leverage Masked Language Modeling (MLM) within a text sentence. (2) denotes cross-modal Multi-View Supervision (MVS): we first obtain two augmented views of both image and text, then contrast the 2 × 2 image-text pairs. (3) denotes Nearest-Neighbor Supervision (NNS): we sample text NNs in the embedding space to serve as additional supervision. The combination of the three supervisions leads to efficient multi-modal learning.
In this paper, the authors mitigate the data hunger of the CLIP model. They propose three directions: single-modality self-supervision, multi-view multi-modality contrastive learning, and nearest-neighbor supervision. With the three proposed components, the authors achieve results better than or comparable to CLIP with more than 4× less data.
SP:81cd76230b5fb152f865202149938069ef659ae7
Causally Focused Convolutional Networks Through Minimal Human Guidance
1 INTRODUCTION . Convolutional Neural Networks (CNNs) are more popular than any other technique in image classification. The ability to automatically extract the required features is one key factor behind the phenomenal success of these models. With image classification being used in critical application areas such as medicine and surveillance, CNNs can have a huge impact in these domains. However, when deploying artificial-intelligence-based systems in such domains, attributing the success of the application to accuracy alone is not sufficient. These systems are expected to be justifiable, as the decisions they make may have a large impact and carry high risk. Recently, it has been observed that CNNs are very effective at finding correlations between features and labels and often extract features greedily following that principle (Shwartz-Ziv & Tishby, 2017; Tishby & Zaslavsky, 2015; Chaitin, 2015; Blier & Ollivier, 2018). In this process, these models may learn correlations (Shen et al., 2017) that are not justifiable from a human perspective. To eliminate the effect of non-causal correlations, CNNs need to be trained on huge datasets, which may not always be possible in domains like medicine. Learning the correct features efficiently from less data is thus an important problem in these areas. Let us illustrate this with an example. Suppose we have a dataset with images of cows on grasslands and aeroplanes in blue sky. It has been observed that grass is extracted as a feature for cow and sky as a feature for aeroplane (see Fig. 1). We have used Grad-CAM (Selvaraju et al., 2017) to generate heatmaps that visualize the features extracted by the CNNs. These heatmaps reveal that the model is using irrelevant features for classification.
Possible solutions to this issue would be to add more images to the dataset or to re-balance it in order to remove the data bias. In contrast, our objective is to utilize the available data efficiently and to make the models learn features that are causal from a human perspective. Evidently, CNNs are not guaranteed to extract such causal features. With this in mind, we propose to take guidance from humans on what they consider causal for a few samples in a class. We capture this guidance in the form of activation masks, which are binary matrices with 1s on the causal parts of the images (see Fig. 2). Once we have the user guidance, we tweak the learning process of the CNNs through this guidance to focus them on extracting the causal features. We achieve this by modifying the learning objective of the CNNs; the backpropagation algorithm then takes care of updating the model parameters accordingly. This simple modification of the training procedure helps avoid learning spurious correlations between features and labels and focuses on the causal ones alone. We have experimentally observed that this approach works well across a wide range of cases and has proved extremely useful on medical datasets. The main contributions of our work are summarized below: 1. We propose a technique to focus CNNs on learning causal features with the help of minimal human guidance. 2. We demonstrate that our method not only improves the learning of causal features but also helps learn efficiently with less data. Additionally, we show that the features learnt using our method are more robust to various types of image perturbations. 2 RELATED WORK . CNN and Interpretability Convolutional Neural Networks (LeCun et al., 1999) have boosted progress in the field of computer vision since their inception. Manually designed architectures like LeNet (LeCun et al.
, 1989), AlexNet (Krizhevsky et al., 2012), VGG16 (Simonyan & Zisserman, 2014), and many more have been proposed in the literature. To simplify CNN architectures while retaining the spatial structure throughout the network, Springenberg et al. (2014) proposed the all-convolutional nets, which eliminate the fully connected layers in these networks. To interpret the decisions of CNNs, tools like Grad-CAM (Selvaraju et al., 2017) are widely used in practice, as they provide a way to extract the class-discriminatory features learnt by the model. Correlation and Causality Research on the topics of correlation and causality has been gaining popularity among researchers in recent years. Work by Shen et al. (2018) has recently shed some light on the correlational behavior of CNNs in image classification. A few other works, such as Arjovsky et al. (2019), try to understand the causal relation between the input images and the corresponding labels, i.e., whether the relation is causal, anticausal, or agnostic in nature. In our work, we rely only on the fact that a few features in the images are the cause for the label, and we expect the model to correctly identify exactly those causal features. The importance of causality, specifically in the field of medicine, is studied by Castro et al. (2020) and Liu et al. (2019), highlighting the challenges for causality in computer-aided diagnosis. Learning Causal Features The very recent work by Xiao et al. (2020) studies the influence of the image background on object recognition. They show that non-trivial accuracy can be achieved by relying only on the background features in the images. A similar study was done for medical images by Maguolo & Nanni (2021), who showed that CNN models provided diagnoses for chest x-ray images even when the lung regions were removed from the input images.
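For reference, the Grad-CAM heatmaps used for the visualizations in this paper follow a simple recipe: channel weights are the spatial average of the gradient of the class score with respect to the last convolutional layer's feature maps, and the heatmap is the ReLU of the weighted sum of those maps. This is a minimal NumPy sketch assuming the feature maps and gradients have already been computed by the network; the function name is ours.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap for one image and one target class.

    feature_maps: (F, H, W) activations of the last conv layer.
    gradients:    (F, H, W) gradient of the class score w.r.t. those maps.
    """
    weights = gradients.mean(axis=(1, 2))              # (F,) channel weights
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum -> (H, W)
    return np.maximum(cam, 0.0)                        # keep positive evidence
```

The ReLU at the end keeps only regions whose activation would increase the class score, which is why the heatmaps highlight class-discriminatory regions such as the grass or sky in the cow/aeroplane example.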
Not much work has been done on improving the learning of causal features, especially in the case of small datasets. The closest work to our method is the self-supervised Guided Attention Inference Network (GAIN) (Li et al., 2019), which was proposed to improve the priors for the task of weakly supervised image segmentation. The authors present an extended version of this method, called GAINpext, which uses an additional parameter-sharing network with the GAIN architecture for pixel-level supervision; this is what makes it similar to our work. We use this model as a baseline in our experiments. 3 PROPOSED METHODOLOGY : CAUSALLY FOCUSED CONVOLUTIONAL NETWORKS ( CFCN ) . In this section we describe the proposed method, dubbed Causally Focused Convolutional Networks (CFCN). In CFCN, we force the model to break spurious correlations between the label and any feature and to focus only on the causal features. To achieve this, we resort to minimal human guidance through activation masks. 3.1 NOTATIONS . Consider an input image $X$ of size $m_1 \times m_2$, its ground-truth one-hot label $y$, and the corresponding input activation mask $A$. Let $C$ be the number of classes in the dataset. The image classification model outputs class probabilities $\hat{y}$ and the set of feature maps $\{\hat{A}_f\}_{f=1}^{F}$ generated by the last convolutional layer after application of the ReLU activation, where $F$ is the number of filters in this convolutional layer. The input activation masks and the feature-map outputs are resized to a common shape $n_1 \times n_2$. Let $c$ denote the index of the true class of the input image. $A \circ B$ denotes the element-wise product of two matrices $A$ and $B$ of the same size. All notation is summarized in Tab. 1. 3.2 ACTIVATION MASK FOR HUMAN GUIDANCE . Activation masks are binary images in which the causal regions are indicated with 1s and the context regions with 0s.
Some datasets, like the Brain MRI dataset (Cheng et al., 2015), readily provide binary masks that can be used directly for our purpose. In a few other cases, we may have pixel-level labels, which provide fine-grained annotations exactly covering the regions of the objects of interest, or bounding-box annotations, which provide relatively coarse regions that may also contain a few context features. Such annotations can be used to generate the activation masks as described in Appendix A. However, the method should not depend on the availability of such masks, so we devise a simple technique to generate masks semi-automatically. A typical step-by-step procedure for activation-mask generation is shown in Fig. 2. For a small subset of training images, the user roughly selects the area of the objects of interest, which is then converted into a binary mask as shown in the figure. This conversion can be automated using a python script. 3.3 CAUSAL FOCUS THROUGH ACTIVATION MASKS . In general, CNNs are composed of several convolutional and pooling layers that extract features. The features extracted at the last layer are then passed to a Feed-forward Neural Network (FNN) to assign labels; the better the quality of the features, the better the classifier's performance. The CNN layers and the FNN layers are trained end to end by optimizing the categorical cross-entropy loss. It has been observed that CNNs learn to extract features greedily and often end up optimizing the correlation between labels and features. This process does not ensure that these models will always extract features that are causal from a human point of view. To mitigate this issue, we propose a mechanism to guide CNNs to focus on causal (to the human eye) features through additional, minimal human input in the form of activation masks. During model training we provide input images, their labels, and activation masks.
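The rough-selection-to-mask step of Section 3.2 can be sketched as follows. This assumes the user's rough selection is a rectangle given in pixel coordinates; the function names, the row/column coordinate convention, and the all-ones dummy mask for unguided images are our own illustrative choices.

```python
import numpy as np

def box_to_mask(h, w, top, left, bottom, right):
    """Binary activation mask: 1s inside the user's rough box, 0s outside."""
    mask = np.zeros((h, w), dtype=np.uint8)
    mask[top:bottom, left:right] = 1
    return mask

def dummy_mask(h, w):
    """All-ones mask for images without human guidance (no region penalized)."""
    return np.ones((h, w), dtype=np.uint8)
```

A polygon selection instead of a box would work the same way: rasterize the user's outline into 1s and leave the context region as 0s, then resize the mask to the $n_1 \times n_2$ shape of the feature maps.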
For the subset of input images that do not have activation masks, we provide dummy masks with all values set to 1. The forward pass through the network for a single input image $X$, with true label $y$ and input activation mask $A$, generates the class probabilities $\hat{y}$ and the feature-map outputs $\{\hat{A}_f\}_{f=1}^{F}$ from the last convolutional layer, where $F$ is the number of filters in this layer. Using this notation, we propose to optimize the following loss to train the CNN:
$$
L = \underbrace{-\sum_{i=1}^{C} y_i \log(\hat{y}_i)}_{L_{cl}} + \alpha \underbrace{\left( 1 - \frac{1}{F} \sum_{f=1}^{F} \frac{\sum_{j=1}^{n_1} \sum_{k=1}^{n_2} (A \circ \hat{A}_f)_{j,k}}{\sum_{j=1}^{n_1} \sum_{k=1}^{n_2} (\hat{A}_f)_{j,k} + \epsilon} \right)}_{L_{cf}}, \qquad (1)
$$
where $\epsilon \ge 0$ is a small quantity to avoid an accidental division by zero, and $\alpha \ge 0$ is the trade-off parameter between the traditional categorical cross-entropy loss ($L_{cl}$) and the proposed causally-focus loss ($L_{cf}$). The greater the value of $\alpha$, the greater the weight on the causally-focus loss. Apart from traditional CNNs, we also applied our causal-feature-learning method to the all-convolutional nets proposed by Springenberg et al. (2014). The number of filters in the last convolutional layer of these nets is equal to the number of classes, with each feature-map output corresponding to one class and thus highlighting only that class's specific features. We then calculate the causally-focus loss only with respect to the feature map $\hat{A}_c$ corresponding to the true class of the image and the input activation mask $A$:
$$
L = \underbrace{-\sum_{i=1}^{C} y_i \log(\hat{y}_i)}_{L_{cl}} + \alpha \underbrace{\left( 1 - \frac{\sum_{j=1}^{n_1} \sum_{k=1}^{n_2} (A \circ \hat{A}_c)_{j,k}}{\sum_{j=1}^{n_1} \sum_{k=1}^{n_2} (\hat{A}_c)_{j,k} + \epsilon} \right)}_{L_{cf}}, \qquad (2)
$$
where $c$ is the index of the actual class of the image, i.e., $y_c = 1$. This formulation preserves the spatial structure of the data, which is otherwise not maintained by fully connected layers. Secondly, Eq. 1 calculates the causally-focus loss on all the feature-map outputs, which can be more time-consuming.
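The loss of Eq. 1 can be sketched directly in NumPy. This is an illustrative single-image implementation under the paper's notation, not the authors' code; the default values of alpha and eps are assumptions, and the small eps added inside the logarithm is a numerical hedge not present in the equation.

```python
import numpy as np

def cfcn_loss(y, y_hat, A, feat_maps, alpha=1.0, eps=1e-8):
    """CFCN training loss (Eq. 1) for a single image.

    y, y_hat:   (C,) one-hot label and predicted class probabilities.
    A:          (n1, n2) binary activation mask (1 = causal region).
    feat_maps:  (F, n1, n2) post-ReLU feature maps of the last conv layer.
    """
    # Categorical cross-entropy term L_cl (eps guards log(0) numerically).
    l_cl = -np.sum(y * np.log(y_hat + eps))
    # Causally-focus term L_cf: one minus the mean, over filters, of the
    # fraction of activation mass that falls inside the mask.
    inside = (A[None] * feat_maps).sum(axis=(1, 2))   # (F,) masked activation
    total = feat_maps.sum(axis=(1, 2)) + eps          # (F,) total activation
    l_cf = 1.0 - np.mean(inside / total)              # in [0, 1]
    return l_cl + alpha * l_cf
```

When all activation lies inside the mask, $L_{cf}$ vanishes and the loss reduces to plain cross-entropy; activation leaking into the context region is penalized in proportion to the fraction of mass outside the mask, which is exactly the pressure that focuses the filters on the causal region.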
A detailed analysis of the loss function is presented in Appendix B. The proposed approach, CFCN, is depicted in Fig. 3.
This paper proposes user-guided training for CNNs to focus more on causal features, as judged from a human perspective, termed Causally Focused Convolutional Networks. Human guidance is used to make a rough binary segmentation mask of foreground objects/pixels. For an input image, the mask and the label are both utilized to compute the final loss, which is composed of 1) an activation loss computed from the binary mask and the last convolutional layer's features, and 2) a cross-entropy loss calculated from the predicted probabilities and the class label of the input image. For images without a binary mask, a dummy all-foreground mask is used. Two types of networks, all-convolutional (CFCN-C) and with fully-connected classifiers (CFCN-F), are evaluated with the proposed methodology. Experiments are performed on a total of 4 datasets of natural and medical images. Accuracy and visual activations are compared across a baseline CNN, GAIN [1], and both CFCN-C and CFCN-F.
SP:f25f1a55b9c3945008f4c769ee1cc6414016da1a
Causally Focused Convolutional Networks Through Minimal Human Guidance
1 INTRODUCTION . Convolutional Neural Networks ( CNNs ) are more popular than any other techniques in image classification . The ability to automatically extract required features is one key factor behind the phenomenal success of these models . Image classification being used in critical application areas such as medicine , surveillance , and many others , CNNs could make a huge impact in these domains . However , when implementing artificial intelligence based systems in such domains , attributing the success of the application to accuracy alone is not sufficient . In such cases , these systems are expected to be justifiable as the decisions made by them may have huge impact on various factors with high risks . Recently , it has been observed that CNNs are very much efficient to find correlation between features and labels and often extract features greedily following that principle ( Shwartz-Ziv & Tishby , 2017 ; Tishby & Zaslavsky , 2015 ; Chaitin , 2015 ; Blier & Ollivier , 2018 ) . In this process , often it may happen that these models learn correlations ( Shen et al. , 2017 ) which may not be justifiable from human perspective . In order to eliminate the effect of non causal correlations , CNNs need to be trained on huge datasets which may not be always possible in various domains like medicine . Thus learning the correct features efficiently from less data becomes an important problem in these areas . Let us illustrate this using an example . Suppose we have a dataset with images of cows on grasslands and aeroplanes in blue-sky . It has been observed that grass is extracted as a feature for cow and sky is extracted as a feature for aeroplane ( see Fig . 1 ) . We have used Grad-Cam ( Selvaraju et al. , 2017 ) to generate the heatmaps to visualize the features extracted by the CNNs . These heatmaps reveal that the model is using irrelevant features for classification . 
Possible solutions to overcome this issue would be to add more images to the dataset or to re-balance it in order to remove the data bias . Contrarily , our objective is to utilize the available data efficiently and to make the models learn features which can be causal from human perspective . It is evident that , CNNs are not guaranteed to extract such causal features . With this point of view , we propose to take guidance from humans on what they think is causal for a few samples in a class . We capture this guidance in the form of activation masks which are basically binary matrices with 1s on the causal parts of the images ( see Fig . 2 ) . Once we have the user guidance , our plan is to tweak the learning process of the CNNs though these guidance to focus them on extracting the causal features . We achieve that by modifying the learning objective of the CNNs and the backpropagation algorithm then takes care of updating the model parameters accordingly . This simple modification in the training procedure helps avoid the learning of spurious correlations between features and labels and focus just on the causal ones . We have experimentally observed that , this concept is working quite well on a wide range of cases and has proved to be extremely useful in the case of medical datasets . The main contributions of our work are summarized below : 1 . We propose a technique to focus the CNNs in learning causal features with the help of minimal human guidance . 2 . We demonstrate that our method not just improves learning of causal features but also helps in learning efficiently with less data . Additionally , we also show that the features learnt using our method are more robust to various types of image perturbations . 2 RELATED WORK . CNN and Interpretability Convolutional Neural Networks ( LeCun et al. , 1999 ) have boosted the progress in the field of computer vision since their inception . Manually designed architectures like LeNet ( LeCun et al. 
, 1989 ) , AlexNet ( Krizhevsky et al. , 2012 ) , VGG16 ( Simonyan & Zisserman , 2014 ) and many more have been proposed in the literature . In order to simplify the CNN architectures by retaining the spatial structure throughout the network , Springenberg et al . ( 2014 ) proposed the all convolutional nets , which eliminate the fully connected layers in these networks . To interpret the decisions of the CNNs , tools like Grad-Cam ( Selvaraju et al. , 2017 ) are widely used in practice , as they provide a way to extract the class discriminatory features learnt by the model . Correlation and Causality Research on the topics of correlation and causality has been gaining popularity among the researches in the recent years . Work by Shen et al . ( 2018 ) has recently shed some light on the correlational behavior of CNNs in image classification . Few other works like Arjovsky et al . ( 2019 ) , try to understand the causal relation between the input images and the corresponding labels , i.e . studying whether the relation is causal , anticausal or agnostic in nature . In our work , we just rely on the fact that there exist few features in the images which are the cause for the label and we expect the model to correctly identify such causal features only . The importance of causality , specifically in field of medicine is studied by Castro et al . ( 2020 ) and Liu et al . ( 2019 ) , highlighting the challenges for causality in computer aided diagnosis . Learning Causal Features The very recent work by Xiao et al . ( 2020 ) , studies the influence of the image background on object recognition . They show that non-trivial accuracy can be achieved by relying just on the background features in the images . A similar study was done in the case of medical images by Maguolo & Nanni ( 2021 ) , where they showed that CNN models provided diagnosis for the chest x-ray images even when the lung regions were removed from the input images . 
Not much work has been done on improving the learning of causal features, especially for small datasets. The closest work we found to our method is the self-supervised Guided Attention Inference Network (GAIN) (Li et al., 2019), proposed to improve the priors for weakly supervised image segmentation. The authors present an extended version of this method, GAINext, which adds a parameter-sharing network to the GAIN architecture for pixel-level supervision; this variant is the most similar to our work, and we use it as a baseline in our experiments. 3 PROPOSED METHODOLOGY: CAUSALLY FOCUSED CONVOLUTIONAL NETWORKS (CFCN). In this section we describe the proposed method, dubbed Causally Focused Convolutional Networks (CFCN). In CFCN, we force the model to break spurious correlations between the label and any feature and to focus only on the causal features. To achieve this, we resort to minimal human guidance through activation masks. 3.1 NOTATION. Consider an input image X of size m1 × m2, its ground-truth one-hot label y and the corresponding input activation mask A. Let C be the number of classes in the dataset. The image classification model outputs class probabilities ŷ and the set of feature maps Âf, f = 1, …, F, generated by the last convolutional layer after application of the ReLU activation, where F is the number of filters in this convolutional layer. The input activation masks and the feature-map outputs are resized to a common shape n1 × n2. Let c denote the index of the true class of the input image. A ◦ B denotes the element-wise product of two matrices A and B of the same size. All notation is summarized in Tab. 1. 3.2 ACTIVATION MASKS FOR HUMAN GUIDANCE. Activation masks are binary images in which the causal regions are marked with 1s and the context regions with 0s.
Some datasets, like Brain MRI (Cheng et al., 2015), readily provide binary masks that can be used directly for our purpose. In a few other cases, we may have pixel-level labels, which provide fine-grained annotations covering exactly the regions of the objects of interest, or bounding-box annotations, which provide relatively coarse regions that may also contain a few context features. Such annotations can be used to generate the activation masks as described in Appendix A. However, the method should not depend on the availability of such masks, so we devise a simple technique to generate masks with minimal effort. A typical step-by-step procedure for activation-mask generation is shown in Fig. 2. For a small subset of training images, the user roughly selects the area of the objects of interest, which is then converted into a binary mask as shown in the figure. This can be automated using a Python script. 3.3 CAUSAL FOCUS THROUGH ACTIVATION MASKS. In general, CNNs are composed of several convolutional and pooling layers that extract features. The features extracted at the last layer are then passed to a feed-forward neural network (FNN) to assign labels. The better the quality of the features, the better the performance of the classifier. The CNN layers and the FNN layers are trained end to end by optimizing the categorical cross-entropy loss. It has been observed that CNNs learn to extract features greedily and often end up optimizing the correlation between labels and features. This process does not ensure that these models will always extract features that are causal from a human point of view. To mitigate this issue, we propose a mechanism that guides CNNs to focus on features that are causal (to a human eye) through additional minimal human input in the form of activation masks. During model training we provide input images, their labels and activation masks.
For the subset of input images that do not have activation masks, we provide dummy masks with all values set to 1. The forward pass through the network for a single input image X, with true label y and input activation mask A, generates the class probabilities ŷ and the feature-map outputs Âf, f = 1, …, F, from the last convolutional layer, where F is the number of filters in this layer. Using this notation, we propose to optimize the following loss to train the CNN:

L = \underbrace{-\sum_{i=1}^{C} y_i \log(\hat{y}_i)}_{L_{cl}} + \alpha \underbrace{\left( 1 - \frac{1}{F} \sum_{f=1}^{F} \frac{\sum_{j=1}^{n_1} \sum_{k=1}^{n_2} (A \circ \hat{A}_f)_{j,k}}{\sum_{j=1}^{n_1} \sum_{k=1}^{n_2} (\hat{A}_f)_{j,k} + \epsilon} \right)}_{L_{cf}}, (1)

where ε ≥ 0 is a small quantity to avoid an accidental divide-by-zero error, and α ≥ 0 is the trade-off parameter between the traditional categorical cross-entropy loss (Lcl) and the proposed causally-focus loss (Lcf). The greater the value of α, the greater the weight on the causally-focus loss. Apart from traditional CNNs, we also apply our causal feature learning method to the all-convolutional nets proposed by Springenberg et al. (2014). In these nets, the number of filters in the last convolutional layer equals the number of classes, with each feature-map output corresponding to one class and thus highlighting only that class's features. We then calculate the causally-focus loss only with respect to the feature map Âc corresponding to the true class of the image and the input activation mask A:

L = \underbrace{-\sum_{i=1}^{C} y_i \log(\hat{y}_i)}_{L_{cl}} + \alpha \underbrace{\left( 1 - \frac{\sum_{j=1}^{n_1} \sum_{k=1}^{n_2} (A \circ \hat{A}_c)_{j,k}}{\sum_{j=1}^{n_1} \sum_{k=1}^{n_2} (\hat{A}_c)_{j,k} + \epsilon} \right)}_{L_{cf}}, (2)

where c is the index of the actual class of the image, i.e., yc = 1. This formulation preserves the spatial structure of the data, which is otherwise not maintained by fully connected layers. Moreover, Eq. 1 computes the causally-focus loss over all the feature-map outputs, which is more time consuming.
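To make the loss concrete, the following is a minimal NumPy sketch of Eq. 1 for a single image. This is a hedged illustration, not the authors' implementation: in practice Lcl and Lcf would be computed on batches inside an autodiff framework, and here ε also guards the logarithm.

```python
import numpy as np

def cfcn_loss(y, y_hat, A, A_hat, alpha=1.0, eps=1e-8):
    """Eq. 1: categorical cross entropy (L_cl) plus the
    causally-focus term (L_cf) weighted by alpha.

    y      : one-hot label, shape (C,)
    y_hat  : predicted class probabilities, shape (C,)
    A      : binary activation mask, shape (n1, n2)
    A_hat  : post-ReLU feature maps of the last conv layer, (F, n1, n2)
    """
    L_cl = -np.sum(y * np.log(y_hat + eps))
    # Fraction of each feature map's total activation that falls
    # inside the causal region A; L_cf drives this fraction toward 1.
    inside = (A[None, :, :] * A_hat).sum(axis=(1, 2))
    total = A_hat.sum(axis=(1, 2)) + eps
    L_cf = 1.0 - np.mean(inside / total)
    return L_cl + alpha * L_cf
```

For the all-convolutional variant (Eq. 2), `A_hat` would hold only the single feature map Âc of the true class, so the mean over filters disappears.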
A detailed analysis of the loss function is presented in Appendix B. The proposed approach, CFCN, is depicted in Fig. 3.
The paper proposes to augment the loss of classification CNNs with an additional loss term that focuses the model's attention on the object present in the image rather than on the background. To do so, the model is provided with an additional input (a binary mask) that is used to guide its learning. The approach is validated on four datasets, showing some benefits in classification performance on small-scale datasets while decreasing classification performance on larger-scale datasets.
SP:f25f1a55b9c3945008f4c769ee1cc6414016da1a
Equal Experience in Recommender Systems
1 INTRODUCTION. Recommender systems are everywhere, playing a crucial role in supporting decision making and in deciding what we experience in our daily life. One recent challenge concerning fairness arises when the systems are built upon biased historical data. Biased data due to polarized preferences of particular groups for certain items may often yield limited recommendation service. For instance, if female students exhibit high ratings on literature subjects and less interest in math and science relative to males, a subject recommender system trained on such data may provide a narrow scope of recommended subjects to the female group, thereby yielding unequal experience. This unequal experience across groups may amplify the gender gap in science, technology, engineering, and mathematics (STEM) fields. Among various works on fair recommender systems (Yao & Huang, 2017; Li et al., 2021; Kamishima & Akaho, 2017; Xiao et al., 2017; Beutel et al., 2019; Burke, 2017), one recent and most relevant work is Yao & Huang (2017). They focus on a scenario in which unfairness occurs mainly due to distinct recommendation accuracies across different groups. They propose novel fairness measures that quantify the degree of such unfairness via the difference between recommendation accuracies, and also develop an optimization framework that trades the fairness measures off against average accuracy. However, ensuring fairness w.r.t. unequal experience remains a challenge. This is because similar accuracy across groups does not guarantee a variety of recommendations to an underrepresented group whose historical data bear low preferences and/or scarce ratings for certain items.
For instance, in subject recommendation, the fairness notion may not serve properly as long as female students exhibit low ratings (and/or a lack of ratings) on math and science subjects due to societal/cultural influences (and/or sampling biases). Furthermore, if the recommended items are selected only according to overall preference, the biased preference for a specific item group will further increase, and the exposure to the unpreferred item group will gradually decrease. Contribution: In an effort to address this challenge, we introduce a new fairness notion that we call equal experience. At a high level, the notion represents how equally various items are suggested, even for an underrepresented group affected by such biased historical data. Inspired by the information-theoretic notion of "mutual information" (Cover, 1999) and its key property, the "chain rule", we quantify our notion so as to control the level of independence between preference predictions and items for any group of users. Specifically, the notion encourages the prediction Ỹ (e.g., 1 if a user prefers an item; 0 otherwise) to be independent of the following two: (i) user group Zuser (e.g., 0 for male; 1 for female); and (ii) item group Zitem (e.g., 0 for mathematics; 1 for literature). In other words, it promotes Ỹ ⊥ (Zuser, Zitem), which in turn ensures all four types of independence that one can think of: Ỹ ⊥ Zitem, Ỹ ⊥ Zuser, Ỹ ⊥ Zitem | Zuser, and Ỹ ⊥ Zuser | Zitem. This is inspired by the fact that mutual information being zero is equivalent to independence of the associated random variables, together with the chain rule:

I(Ỹ; Zuser, Zitem) = I(Ỹ; Zitem) + I(Ỹ; Zuser | Zitem) = I(Ỹ; Zuser) + I(Ỹ; Zitem | Zuser). (1)

See Section 3.1 for details. The higher the independence, the more diverse the recommendation services offered to every group.
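Since equation 1 is an identity that holds for any joint distribution, it can be checked numerically. Below is a small, self-contained NumPy sketch (not from the paper) that computes the mutual-information terms for a random joint pmf p(Ỹ, Zuser, Zitem) and verifies the chain rule.

```python
import numpy as np

def H(p):
    """Shannon entropy (bits) of a pmf given as an array."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Any valid joint pmf p[y, z_user, z_item] works: the chain rule is an identity.
rng = np.random.default_rng(0)
p = rng.random((2, 2, 2))
p /= p.sum()

p_y = p.sum(axis=(1, 2))        # p(Y)
p_zu_zi = p.sum(axis=0)         # p(Zuser, Zitem)
p_y_zi = p.sum(axis=1)          # p(Y, Zitem)
p_zi = p.sum(axis=(0, 1))       # p(Zitem)

# I(Y; Zuser, Zitem) = H(Y) + H(Zuser, Zitem) - H(Y, Zuser, Zitem)
I_joint = H(p_y) + H(p_zu_zi) - H(p)
# I(Y; Zitem) = H(Y) + H(Zitem) - H(Y, Zitem)
I_item = H(p_y) + H(p_zi) - H(p_y_zi)
# I(Y; Zuser | Zitem) = H(Y, Zitem) + H(Zuser, Zitem) - H(Y, Zuser, Zitem) - H(Zitem)
I_user_given_item = H(p_y_zi) + H(p_zu_zi) - H(p) - H(p_zi)

assert abs(I_joint - (I_item + I_user_given_item)) < 1e-10
```

Driving I(Ỹ; Zuser, Zitem) to zero therefore forces every term on the right-hand side of equation 1 to zero as well, since mutual information is non-negative.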
We also develop an optimization framework that incorporates the quantified notion as a regularization term in a conventional optimization for collaborative filtering in recommender systems (e.g., the one based on matrix completion; Koren (2008); Koren et al. (2009)). One noticeable feature of our framework is that the fairness performances w.r.t. the above four types of independence conditions can be gracefully controlled via a single unified regularization term. This is in stark contrast to prior works (Yao & Huang, 2017; Li et al., 2021; Kamishima & Akaho, 2017; Mehrotra et al., 2018), each of which promotes only one independence condition, or two via two separate regularization terms. See Related works below for details. To enable an efficient implementation of the fairness constraint, we employ recent methodologies developed in the context of fair classifiers, such as those building upon kernel density estimation (Cho et al., 2020a), mutual information (Zhang et al., 2018; Kamishima et al., 2012; Cho et al., 2020b), or covariance (Zafar et al., 2017a;b). We also conduct extensive experiments on a synthetic dataset and on two benchmark real datasets: MovieLens 1M (Harper & Konstan, 2015) and Last FM 360K (Celma, 2010). As a result, we first identify two primary sources of bias that incur unequal experience: population imbalance and observation bias (Yao & Huang, 2017). In addition, we demonstrate that our fairness notion can help improve the fairness measure w.r.t. equal experience (to be defined in Section 3.1; see Definition 2) while exhibiting only a small degradation of recommendation accuracy. Related works: In addition to Yao & Huang (2017), numerous fairness notions and algorithms have been proposed for fair recommender systems (Xiao et al., 2017; Beutel et al., 2019; Singh & Joachims, 2018; Zehlike et al., 2017; Narasimhan et al., 2020; Biega et al., 2018; Li et al.
, 2021; Kamishima & Akaho, 2017; Mehrotra et al., 2018; Schnabel et al., 2016). Xiao et al. (2017) develop fairness notions that encourage similar recommendations for users within the same group. Beutel et al. (2019) consider metrics similar to those in Yao & Huang (2017), yet in the context of pairwise recommender systems, wherein pairwise preferences are given as training data. Li et al. (2021) propose a fairness measure that quantifies the irrelevancy of preference predictions to user groups, like demographic parity in the fairness literature (Feldman et al., 2015; Zafar et al., 2017a;b). Specifically, they consider the independence condition between the prediction Ỹ and the user group Zuser: Ỹ ⊥ Zuser. This was in fact also considered as another fairness measure in Yao & Huang (2017). Similarly, other works in a different direction consider the analogous notion concerning independence w.r.t. the item group Zitem: Ỹ ⊥ Zitem (Kamishima & Akaho, 2017; Singh & Joachims, 2018; Biega et al., 2018). Mehrotra et al. (2018) incorporate both measures to formulate a multi-objective optimization. In Section 2.2, we elaborate on why the above prior fairness notions cannot fully address the challenge w.r.t. unequal experience. There has been a proliferation of fairness notions in the context of fair classifiers: (i) group fairness (Feldman et al., 2015; Zafar et al., 2017b; Hardt et al., 2016; Woodworth et al., 2017); (ii) individual fairness (Dwork et al., 2012; Garg et al., 2018); (iii) causality-based fairness (Kusner et al., 2017; Nabi & Shpitser, 2018; Russell et al., 2017; Wu et al., 2019; Zhang & Bareinboim, 2018b;a). Among the prominent group fairness notions, demographic parity and equalized odds inspire our work in applying the chain rule, reflected in equation 1.
Concurrently, a multitude of fairness algorithms have been developed using covariance (Zafar et al., 2017a;b), mutual information (Zhang et al., 2018; Kamishima et al., 2012; Cho et al., 2020b), kernel density estimation (Cho et al., 2020a) or Rényi correlation (Mary et al., 2019), to name a few. In this work, we also demonstrate that our proposed framework (presented in Section 3) embraces many of these approaches; see Remark 1 for details. 2 PROBLEM FORMULATION. As a key technique for operating recommender systems, we consider collaborative filtering, which estimates user ratings on items. We first formulate an optimization problem for collaborative filtering building upon one prominent approach, matrix completion. We then introduce a couple of fairness measures proposed in recent prior works (Yao & Huang, 2017; Li et al., 2021; Kamishima & Akaho, 2017), and present an extended optimization framework that incorporates the fairness measures as regularization terms. 2.1 OPTIMIZATION BASED ON MATRIX COMPLETION. As a well-known approach for operating recommender systems, we consider matrix completion (Fazel, 2002; Koren et al., 2009; Candès & Recht, 2009). Let M ∈ R^{n×m} be the ground-truth rating matrix, where n and m denote the number of users and items respectively. Each entry, denoted by Mij, can be of any type: binary, a five-star rating, or any real number. Denote by Ω the set of observed entries of M. For simplicity, we assume noiseless observation. Denote by M̂ ∈ R^{n×m} an estimate of the rating matrix. Matrix completion can be done via rank minimization, which exploits the low-rank structure of the rating matrix. However, since that problem is NP-hard (Fazel, 2002), we consider a well-known relaxation that instead minimizes the squared error between M and M̂ on the observed entries:

\min_{\hat{M}} \sum_{(i,j) \in \Omega} (M_{ij} - \hat{M}_{ij})^2. (2)
There are two well-known approaches for solving the optimization in equation 2: (i) matrix factorization (Abadir & Magnus, 2005; Koren et al., 2009); and (ii) neural-net-based parameterization (Salakhutdinov et al., 2007; Sedhain et al., 2015; He et al., 2017). Matrix factorization assumes a certain structure on the rating matrix: M = LR, where L ∈ R^{n×r} and R ∈ R^{r×m}. One natural way to search for the optimal L* and R* is to apply gradient descent (Robbins & Monro, 1951) w.r.t. all of the Lij's and Rij's, although this does not ensure convergence to the optimal point due to non-convexity. The second approach is to parameterize M̂ via neural networks such as a restricted Boltzmann machine (Salakhutdinov et al., 2007) or an autoencoder (Sedhain et al., 2015; Lee et al., 2018). For instance, one may employ an autoencoder-type neural network that outputs a completed matrix M̂ when fed the partially-observed version of M. For a user-based autoencoder (Sedhain et al., 2015), an observed row vector of M is fed into the autoencoder, while an observed column vector serves as the input for an item-based autoencoder (Sedhain et al., 2015). In this work, we consider both approaches in our experiments: matrix factorization with gradient descent, and autoencoder-based parameterization. One common way to promote a fair recommender system is to incorporate a fairness measure, say Lfair (which we will relate to an estimated matrix M̂), as a regularization term in the base optimization in equation 2:

\min_{\hat{M}} (1 - \lambda) \sum_{(i,j) \in \Omega} (M_{ij} - \hat{M}_{ij})^2 + \lambda \cdot L_{fair}, (3)

where λ ∈ [0, 1] denotes a normalized regularization factor that balances prediction accuracy against the fairness constraint. For the fairness-regularization term Lfair, several fairness measures have been introduced.
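As a concrete sketch, the matrix-factorization route for equation 2 can be implemented in a few lines of NumPy. This is a hedged illustration, not the paper's code: random initialization and a fixed step size are assumptions, and the fairness term of equation 3 is only indicated by a comment (its gradient would be added to the same updates).

```python
import numpy as np

def complete_matrix(M, omega, r=2, lr=0.02, steps=5000, seed=0):
    """Gradient descent on the observed squared error of equation 2,
    using the factorization M_hat = L @ R with L: n x r, R: r x m."""
    rng = np.random.default_rng(seed)
    n, m = M.shape
    L = rng.normal(scale=0.1, size=(n, r))
    R = rng.normal(scale=0.1, size=(r, m))
    mask = np.zeros((n, m))
    for i, j in omega:
        mask[i, j] = 1.0
    for _ in range(steps):
        E = mask * (L @ R - M)  # residual on observed entries only
        # simultaneous update; a fairness regularizer L_fair (eq. 3)
        # would contribute an extra gradient term here
        L, R = L - lr * (E @ R.T), R - lr * (L.T @ E)
    return L @ R
```

On a small fully-observed rank-1 matrix such as M = uvᵀ with u = (1, 2, 3) and v = (1, 2), the observed residual is driven close to zero, illustrating the relaxation of equation 2.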
The study argues that a fair recommendation should be independent of both the user group and the item group. It therefore introduces a new fairness notion, equal experience, and incorporates this notion as a regularisation term in the matrix completion framework to construct a fair recommender system. The proposed method is evaluated on three datasets (one synthetic and two real).
SP:1d98df05bd885aff11b20cd016d822e970752dec
Equal Experience in Recommender Systems
The paper is concerned with fairness in recommendations. Specifically, there are groups of users and groups of items. Previous work has modelled fairness as the constraint that all user groups have the same accuracy, or that the prediction probability be independent of the item group or the user group. In this work, the notion is generalized so that the prediction is independent of both the item and the user group. Optimization algorithms are presented to solve this problem, along with experimental results.
SP:1d98df05bd885aff11b20cd016d822e970752dec
Superclass-Conditional Gaussian Mixture Model For Learning Fine-Grained Embeddings
1 INTRODUCTION. Training deep models with sufficient generalizability is of fundamental importance, and it demands immense training data with fine-grained annotations (Krizhevsky et al., 2012; Brown et al., 2020). In many fields, however, data labeling requires domain-specific knowledge, such as medicine (Sohoni et al., 2020), and is thus prohibitive and infeasible to make exhaustive. In this case, data for model training may only be "coarsely" labeled, while later the model is tested on a finer-grained classification task (Bukchin et al., 2021). For example, consider an event prediction task for dialysis patients (Inaguma et al., 2019). Hemodialysis is a major renal replacement therapy for patients with end-stage renal failure. These patients have to undergo hemodialysis three times a week, each session lasting 4-5 hours. During the treatment, unexpected events such as muscle cramp, perspiration, and dizziness may occur as a result of lowered blood pressure; these require intensive medical care and should always be avoided. It is therefore an important medical issue to predict such events before the initiation of hemodialysis. In this task, binary labels, which mark the incidence of an event, can be collected. In contrast, finer-grained labels that annotate different subtypes of events are seldom recorded. Since distinguishing different subtypes facilitates precise diagnoses and helps physicians assess the risk when deciding whether to perform a hemodialysis (with certain precautions), it is desirable that a model trained with coarse (binary) labels perform well on a finer-grained multi-class (subtype) task. To fill the gap in granularity between the training and testing scenarios, a practical way is to collect a few new records for a patient, with fine-grained annotations.
These data constitute a support set for fine-tuning a pre-trained model to the specific data distribution induced by the annotations of the target patient, for whom the adapted model is used for future predictions. Although massive fine-grained annotation is impractical, annotating a few-shot set is feasible. In this work, we are interested in such a Cross-Granularity Few-Shot (CGFS) learning problem, where a model pre-trained on a set of coarse classes (denoted as superclasses) needs to adapt to an unseen set of fine-grained target classes (denoted as subclasses). The target subclasses could be descendants of the superclasses (as in the aforementioned example), or could descend from other superclasses that are unobserved during pre-training. To be practical, the adaptation should only use a few samples from the subclasses. The CGFS problem is not limited to the above application. It occurs in a model's lifespan whenever an application requires separating some subclasses within the superclasses, yet these subclasses were unannotated when the training dataset was created. For example, it could occur in detecting rare pathologies or variants using medical images that were only coarsely described (Oakden-Rayner et al., 2020), or in personalizing a generic model trained on all historical users to a specific customer (Luo et al., 2020). Despite its significance, CGFS cannot be trivially solved by regularly training models with coarse labels, because typical losses for supervised learning aim to maximize inter-class boundaries but neglect intra-class variation. Thus, subclasses may spread arbitrarily and unevenly within every superclass. Recently, Bukchin et al. (2021) proposed to integrate coarse supervision and contrastive learning (within superclasses) for solving CGFS. However, their approach cannot be readily used for medical records, which typically contain static profiles and time series (Che et al., 2016).
This is due to the absence of a standard augmentation method for generating contrastive pairs on data other than images. Also, since contrastive learning does not model subclasses explicitly, their solution could be suboptimal (as evaluated in Sec. 4). Moreover, as their model was built upon MoCo (He et al., 2020) and maintains many dictionaries for data sampling, its computational cost is high (Sec. 4). In this work, we propose a novel Superclass-Conditional Gaussian Mixture model (SCGM) to learn fine-grained embeddings for the CGFS problem. Our contributions are summarized as follows. • SCGM is agnostic to the encoder and is thus flexible across applications. It models the generation of samples from hierarchical classes, and explicitly represents the unobserved subclasses by latent variables without assuming their identities. It dynamically computes a Gaussian mixture for every sample conditioned on its superclass, and the model forms a hierarchy of Gaussian mixtures. • SCGM only adds a small overhead to an encoder, for parameterizing its distributions, and is thus efficient. The model parameters are learned end-to-end by maximum likelihood estimation via a principled Expectation-Maximization (EM) algorithm. We also theoretically link our loss function to InfoNCE (Oord et al., 2018), explaining its effectiveness from a contrastive perspective. • In the experiments, we evaluated SCGM on both benchmark image datasets and a real-life medical dataset. Since SCGM is compatible with contrastive learning, we also tested it with a momentum encoder (He et al., 2020). The results demonstrate that SCGM on generic encoders already outperforms the state-of-the-art (SOTA) baselines at lower computational cost, and that it achieves further boosted performance in some cases when momentum contrast is integrated. 2 RELATED WORK.
To the best of our knowledge, this is the first work to develop an SCGM model underlying a framework that enables tackling CGFS across domains. The most relevant work (Bukchin et al., 2021) combines superclass-wise contrastive learning with coarse classification for preserving intra-class variation. Another work (Yang et al., 2021) used a three-step approach that pseudo-labels the embeddings (pre-trained by coarse classification and batch-wise contrastive learning) by clustering within every superclass. The pseudo fine labels were then used to re-train the encoder. A similar three-step method (Sohoni et al., 2020) used a different loss that maximizes the worst-case expected accuracy over the pseudo-labeled subclasses. As discussed before, the former two methods require non-trivial searches for a suitable data augmentation method, which may be unavailable for some non-image data. By splitting the training into steps, the latter two methods could produce suboptimal pseudo-labels, and misleading labels could confuse the downstream steps. In contrast, our model is end-to-end. It explicitly infers the posterior of the subclass during coarse training. Moreover, it can achieve better performance without using as many computational resources as the model of Bukchin et al. (2021) (as evaluated in Sec. 4). Learning with coarse supervision. Several works deal with coarse/fine labels from the perspective of weakly supervised learning (Zhou, 2018) rather than tackling the CGFS problem, including training methods that take advantage of a mixture of either balanced (Ristin et al., 2015; Guo et al., 2018; Taherkhani et al., 2019) or unbalanced (Hsieh et al., 2019; Liu et al., 2019; Robinson et al., 2020) coarse and fine labels. Among them, Liu et al. (2019) addressed a few-shot learning problem, but their model training assumes access to some fine labels and a graph of the class hierarchy, which are unavailable in our problem.
Thus it cannot be adapted to solve our problem. Few-shot learning. Meta-learning has become a popular idea for handling the few-shot learning problem, yielding metric-based (Vinyals et al., 2016; Snell et al., 2017) and optimization-based methods (Finn et al., 2017; Nichol et al., 2018). The idea has been extended to semi-supervised (Ren et al., 2018), unsupervised (Hsu et al., 2018), and semantics-augmented (Xing et al., 2019) scenarios where labels are scarce. However, none of them explores coarse labels for cross-granularity learning. Recently, many works observed that learning embeddings on all classes (without episodic training), followed by simple fine-tuning, is superior to SOTA meta-learning methods (Wang et al., 2019; Dhillon et al., 2020; Tian et al., 2020). In this work, similar to Bukchin et al. (2021), we focus on this paradigm to learn useful embeddings, and do not use meta-learning for pre-training. Embedding methods. The emerging self-supervised methods, such as contrastive methods (Oord et al., 2018; Chen et al., 2020a; He et al., 2020), are appealing in their ability to attain embeddings comparable to their supervised counterparts, and even to surpass them when transferring to other tasks (He et al., 2020; Tian et al., 2020). Beyond instance-wise contrast, recent methods have explored instance-cluster contrast to boost performance (Caron et al., 2020; Li et al., 2020). In unsupervised methods, joint embedding and clustering has been found beneficial (Caron et al., 2018; Asano et al., 2020). However, these methods never exploited coarse supervision; thus, their embeddings/clusters do not necessarily reflect intra-class variation, which is important for the CGFS task. The vanilla Gaussian mixture (GM) model is unfit for the CGFS scenario. A recent work (Manduchi et al.
, 2021) extended GM to constrained clustering by conditioning every sample on a prior clustering preference. It is neither supervised by coarse classes nor hierarchically structured. Conventional hierarchical GM is used for hierarchical clustering (Goldberger & Roweis, 2005; Olech & Paradowski, 2016; Athey et al., 2019) by applying GM agglomeratively or divisively. These unsupervised methods only infer clusters, but do not pre-train embedding models for task adaptation. 3 SUPERCLASS-CONDITIONAL GAUSSIAN MIXTURE MODEL. First, a word about notation. Let $\mathcal{D}_\text{train} = \{(x_i, y_i)\}_{i=1}^{n}$ be $n$ sample-label training pairs, where $y_i \in \mathcal{Y}_\text{super} = \{1, \ldots, c\}$ is a superclass label. Each $x_i$ is associated with a latent (unobserved) subclass label $\hat{y}_i \in \mathcal{Y}_\text{sub} = \{1, \ldots, s\}$. $\mathcal{Y}_\text{sub}$ relates to $\mathcal{Y}_\text{super}$ by a hierarchical structure, i.e., $\mathcal{Y}_\text{sub}$ can be partitioned into $c$ disjoint sets $\mathcal{Y}_{\text{sub-}1}, \ldots, \mathcal{Y}_{\text{sub-}c}$, such that if $\hat{y}_i \in \mathcal{Y}_{\text{sub-}j}$, then $y_i = j$ ($1 \le j \le c$). Let $f_\theta$ be an encoder (i.e., backbone network) that is trained on $\mathcal{D}_\text{train}$ (without knowing $\mathcal{Y}_\text{sub}$). It maps $x_i$ to a $d$-dimensional feature $f_\theta(x_i)$. At test time, given a $k$-shot support set for a subset $\mathcal{Y}^m_\text{sub} \subseteq \mathcal{Y}_\text{sub}$ of $m$ subclasses, i.e., $\mathcal{D}_\text{support} = \{(x_i, \hat{y}_i) \mid \hat{y}_i \in \mathcal{Y}^m_\text{sub}\}_{i=1}^{mk}$, the task is to train a classifier $C: \mathbb{R}^d \to \mathcal{Y}^m_\text{sub}$ with optimal accuracy on a test set of the $\mathcal{Y}^m_\text{sub}$ subclasses. In our experiments, we also explored the case where the subclasses belong to superclasses that are not used for pre-training. As discussed in Sec. 2, our focus is to train $f_\theta$ for good embeddings without modifying $f_\theta$ during adaptation, a paradigm with SOTA few-shot performance (Dhillon et al., 2020; Tian et al., 2020). Formally, letting $f_\theta(x_i) = v_i$, our goal is to find the model parameter $\theta$ that maximizes the likelihood of the posterior distribution $p_\theta(y_i \mid v_i)$ on the observed data in $\mathcal{D}_\text{train}$ for classification tasks (Grathwohl et al., 2019).
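As a concrete illustration of the adaptation step (our own sketch, not code from the paper), the classifier C can be as simple as a nearest-class-mean rule over the frozen embeddings of the k-shot support set, in the spirit of the metric-based few-shot methods cited above (e.g., Snell et al., 2017); the encoder f_theta itself is never modified. The function names and toy data below are hypothetical.

```python
import numpy as np

def fit_prototypes(support_emb, support_labels):
    """Nearest-class-mean classifier over frozen embeddings: the classifier C
    is just the per-subclass mean (prototype) of the k-shot support embeddings."""
    classes = np.unique(support_labels)
    protos = np.stack([support_emb[support_labels == c].mean(axis=0) for c in classes])
    return classes, protos

def predict(classes, protos, query_emb):
    # Assign each query embedding to the nearest prototype (Euclidean distance).
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return classes[d.argmin(axis=1)]

# Toy example: m = 2 subclasses, k = 3 shots, d = 2 embedding dims.
rng = np.random.default_rng(0)
support = np.concatenate([rng.normal(0, 0.1, (3, 2)), rng.normal(3, 0.1, (3, 2))])
labels = np.array([0, 0, 0, 1, 1, 1])
classes, protos = fit_prototypes(support, labels)
preds = predict(classes, protos, np.array([[0.05, 0.0], [2.9, 3.1]]))
```

The same frozen-embedding support set could equally be fed to a logistic-regression head; the prototype rule is simply the lightest-weight choice.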
To model the unobserved subclasses, we associate every $v_i$ with a latent variable $z_i$ indicating to which subclass $v_i$ belongs. Supposing there are $r$ possible subclasses, the log-likelihood to maximize can be rewritten by marginalizing out the latent variables:
$$\ell(\mathcal{D}_\text{train}; \theta) = \frac{1}{n} \sum_{i=1}^{n} \log\big[p_\theta(y_i \mid v_i)\big] = \frac{1}{n} \sum_{i=1}^{n} \log\Big[\sum_{z_i=1}^{r} p(y_i \mid z_i)\, p_\theta(z_i \mid v_i)\Big] \quad (1)$$
where the distribution $p_\theta(z_i \mid v_i)$ specifies the subclass membership of $v_i$, and $p(y_i \mid z_i)$ associates $z_i$ with a subclass partition $\mathcal{Y}_{\text{sub-}y_i}$. Unlike some previous works (Sohoni et al., 2020), which searched for the number of subclasses of every superclass using a quality metric, we only assume a total number of $r$ subclasses, and seek to infer their relationship with the superclasses, i.e., $p(y_i \mid z_i)$, without any prior on subclass partitions, so that the model is more generalizable.
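The marginalization in Eq. (1) can be sketched in a few lines of numpy. This is our own illustration: in the paper the assignment p(y|z) is inferred rather than fixed, whereas here we hard-code a known subclass-to-superclass assignment to make the computation concrete; all names are hypothetical.

```python
import numpy as np

def coarse_log_likelihood(subclass_post, subclass_to_super, y):
    """Marginal log-likelihood of Eq. (1).

    subclass_post:     (n, r) array; row i is p_theta(z_i | v_i).
    subclass_to_super: (r, c) array; row z is p(y | z), here a one-hot
                       assignment of each of r subclasses to one of c
                       superclasses (assumed known for this sketch).
    y:                 (n,) integer superclass labels.
    """
    # sum over z of p(y|z) p(z|v), for every superclass y at once: (n, c)
    p_y_given_v = subclass_post @ subclass_to_super
    # pick out the observed superclass of each sample and average the logs
    return np.log(p_y_given_v[np.arange(len(y)), y]).mean()

# Toy check: r = 4 subclasses, c = 2 superclasses
# (subclasses 0, 1 belong to superclass 0; subclasses 2, 3 to superclass 1).
post = np.array([[0.70, 0.20, 0.05, 0.05],
                 [0.10, 0.10, 0.40, 0.40]])
assign = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
ll = coarse_log_likelihood(post, assign, np.array([0, 1]))
```

In the toy check, the first sample puts mass 0.9 on superclass 0 and the second puts 0.8 on superclass 1, so the result is (log 0.9 + log 0.8) / 2.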
The paper presents SCGM, a new technique for solving the Cross-Granularity Few-Shot (CGFS) learning problem. CGFS is defined as the problem of adapting a classification model trained on coarse ("superclass") labels to perform well on fine-grained labels, which consist of multiple "subclass" labels within each superclass. The paper presents a new generative modeling approach that enables end-to-end classifier training in a manner that (a) incorporates information about superclass-subclass hierarchies, (b) does not rely on explicit subclass enumeration, (c) provides improved empirical performance on CGFS, and (d) provides systems benefits with respect to existing baselines. The authors provide a detailed description of their approach, characterize its relationship with existing methods, and present empirical results that support their methodological choices.
SP:6c3b9e6e95025f24bb371dfeb598f5ebc049bbc7
The paper introduces a setup with the goal of adapting a coarse pretrained model to unseen fine-grained labels. This problem is formulated as a superclass-subclass latent model and learned with maximum likelihood via expectation-maximization. The proposed approach, the superclass-conditional Gaussian mixture (SCGM) model, defines a hierarchical Gaussian distribution on the class hierarchy that models both the superclasses and subclasses. SCGM is evaluated on 6 ImageNet-related datasets and a real Dialysis-Event dataset collected by hospitals, and demonstrates competitive results under two evaluation setups: generalizing to seen and to unseen superclasses.
SP:6c3b9e6e95025f24bb371dfeb598f5ebc049bbc7
$p$-Laplacian Based Graph Neural Networks
1 INTRODUCTION. In this paper, we explore the usage of graph neural networks (GNNs) for semi-supervised node classification on graphs, especially when the graphs admit strong heterophily or noisy edges. Semi-supervised learning problems on graphs are ubiquitous in many real-world scenarios, such as user classification in social media (Kipf & Welling, 2017), protein classification in biology (Velickovic et al., 2018), molecular property prediction in chemistry (Duvenaud et al., 2015), and many others (Marcheggiani & Titov, 2017; Satorras & Estrach, 2018). Recently, GNNs have become the de facto choice for processing graph-structured data. They can exploit the node features and the graph topology by propagating and transforming the features over the topology in each layer, thereby learning refined node representations. A series of GNN architectures have been proposed, including graph convolutional networks (Bruna et al., 2014; Henaff et al., 2015; Defferrard et al., 2016; Kipf & Welling, 2017; Wu et al., 2019), graph attention networks (Velickovic et al., 2018; Thekumparampil et al., 2018), and other representatives (Hamilton et al., 2017; Xu et al., 2018; Pei et al., 2020). Most of the existing GNN architectures work under the homophily assumption, i.e., that the labels of nodes and their neighbors in a graph are the same or consistent, an assumption also commonly used in graph clustering (Bach & Jordan, 2004; von Luxburg, 2007; Liu & Han, 2013) and semi-supervised learning on graphs (Belkin et al., 2004; Hein, 2006; Nadler et al., 2009). However, recent studies (Zhu et al., 2020; 2021; Chien et al., 2021) show that, in contrast to their success on homophilic graphs, most GNNs fail to work well on heterophilic graphs, in which linked nodes are more likely to have distinct labels. Moreover, GNNs can even fail on graphs whose topology is not helpful for label prediction.
In these cases, propagating and transforming node features over the graph topology can lead to worse performance than simply applying multi-layer perceptrons (MLPs) to each of the nodes independently. Several recent works have been proposed to deal with the heterophily issues of GNNs. Zhu et al. (2020) find that heuristically combining ego-, neighbor-, and higher-order embeddings improves GNN performance on heterophilic graphs. Zhu et al. (2021) use a compatibility matrix to model the homophily or heterophily level of a graph. Chien et al. (2021) incorporate the generalized PageRank algorithm into graph convolutions so as to jointly optimize node feature and topological information extraction for both homophilic and heterophilic graphs. However, the problem of GNNs on graphs with non-informative topologies (or noisy edges) remains open. Unlike previous works, we tackle the above issues of GNNs by proposing a discrete p-Laplacian based message passing scheme, termed p-Laplacian message passing. It is derived from a discrete regularization framework and is theoretically verified as an approximation of a polynomial graph filter defined on the spectral domain of the p-Laplacian. Spectral analysis of p-Laplacian message passing shows that it works simultaneously as a low-pass and a high-pass filter¹ and is thus applicable to both homophilic and heterophilic graphs. Moreover, when p ≠ 2, our theoretical results indicate that it can adaptively learn aggregation weights according to the variation of node embeddings on edges (measured by the graph gradient (Amghibech, 2003; Zhou & Schölkopf, 2005; Luo et al., 2010)), and work as a low-pass or as both a low-pass and a high-pass filter on a node according to the local variation of node embeddings around that node (measured by the norm of the graph gradients).
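To ground the notion of a polynomial graph filter used above, the following sketch (our own illustration, using the ordinary p = 2 normalized Laplacian rather than the paper's p-Laplacian) applies a filter g(L) = Σ_k a_k L^k to a node signal. Since the eigenvalues of the normalized Laplacian lie in [0, 2], the polynomial g(λ) = 1 − λ/2 attenuates high frequencies (low-pass) while g(λ) = λ/2 attenuates low ones (high-pass). All names are hypothetical.

```python
import numpy as np

def poly_filter(W, x, coeffs):
    """Apply g(L) x = sum_k coeffs[k] * L^k x with L = I - D^{-1/2} W D^{-1/2}.

    An illustrative polynomial spectral filter on the ordinary (p = 2)
    normalized Laplacian; the paper's filter lives on the p-Laplacian spectrum.
    """
    d = W.sum(axis=1)
    L = np.eye(len(W)) - W / np.sqrt(np.outer(d, d))  # symmetric normalized Laplacian
    out, Lk = np.zeros_like(x), np.eye(len(W))
    for a in coeffs:
        out = out + a * (Lk @ x)  # add a_k * L^k x
        Lk = Lk @ L
    return out

# Toy graph: path 0 - 1 - 2 with unit edge weights.
W = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
x = np.array([1.0, 2.0, 3.0])
low = poly_filter(W, x, [1.0, -0.5])   # g(L) = I - L/2, keeps smooth components
high = poly_filter(W, x, [0.0, 0.5])   # g(L) = L/2, keeps oscillatory components
```

By construction the two responses sum back to the input signal, which is one quick sanity check that the low- and high-pass parts partition the spectrum.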
Based on p-Laplacian message passing, we propose a new GNN architecture, called pGNN, to enable GNNs to work with heterophilic graphs and graphs with non-informative topologies. Several existing GNN architectures, including SGC (Wu et al., 2019), APPNP (Klicpera et al., 2019), and GPRGNN (Chien et al., 2021), can be shown to be analogous to pGNN with p = 2. Our empirical studies on real-world benchmark datasets (homophilic and heterophilic datasets) and synthetic datasets (cSBM (Deshpande et al., 2018)) demonstrate that pGNNs obtain the best performance on heterophilic graphs and competitive performance on homophilic graphs against state-of-the-art GNNs. Moreover, experimental results on graphs with different levels of noisy edges show that pGNNs work much more robustly than the GNN baselines, performing as well as MLPs even on graphs with completely random edges. Additional experiments (reported in Appendix F.5) illustrate that integrating pGNNs with existing GNN architectures (i.e., GCN (Kipf & Welling, 2017), JKNet (Xu et al., 2018)) can significantly improve their performance on heterophilic graphs. In conclusion, our contributions can be summarized as follows: (1) New methodologies. We propose p-Laplacian message passing and pGNN to adapt GNNs to heterophilic graphs and graphs whose topology is non-informative for label prediction. (2) Superior performance. We empirically demonstrate that pGNNs are superior on heterophilic graphs and competitive on homophilic graphs against state-of-the-art GNNs. Moreover, pGNNs work robustly on graphs with noisy edges or non-informative topologies. (3) Theoretical justification. We theoretically demonstrate that p-Laplacian message passing works as both a low-pass and a high-pass filter, and that the message passing iteration is guaranteed to converge under proper settings. (4) New paradigm of designing GNN architectures.
We bridge the gap between the discrete regularization framework and GNNs, which could further inspire researchers to develop new graph convolutions or message passing schemes using other regularization techniques with explicit assumptions on graphs. Due to space limits, we defer the discussions of related work and future work, and all proofs, to the Appendix. 2 PRELIMINARIES AND BACKGROUND. Notation. Let $\mathcal{G} = (V, E, W)$ be an undirected graph, where $V = \{1, 2, \ldots, N\}$ is the set of nodes, $E \subseteq V \times V$ is the set of edges, and $W \in \mathbb{R}^{N \times N}$ is the adjacency matrix with $W_{i,j} = W_{j,i}$, $W_{i,j} > 0$ for $[i,j] \in E$, and $W_{i,j} = 0$ otherwise. $\mathcal{N}_i = \{j\}_{[i,j] \in E}$ denotes the set of neighbors of node $i$, and $D \in \mathbb{R}^{N \times N}$, $D = \mathrm{diag}(D_{1,1}, \ldots, D_{N,N})$, denotes the diagonal degree matrix with $D_{i,i} = \sum_{j=1}^{N} W_{i,j}$ for $i = 1, \ldots, N$. $f: V \to \mathbb{R}$ and $g: E \to \mathbb{R}$ are functions defined on the vertices and edges of $\mathcal{G}$, respectively. $\mathcal{F}_V$ denotes the Hilbert space of functions endowed with the inner product $\langle f, \tilde{f} \rangle_{\mathcal{F}_V} := \sum_{i \in V} f(i)\,\tilde{f}(i)$; $\mathcal{F}_E$ is defined similarly. We also denote $[K] = \{1, 2, \ldots, K\}$ for all $K \in \mathbb{N}$, and we use $\|x\| = \|x\|_2 = (\sum_{i=1}^{d} x_i^2)^{1/2}$ for $x \in \mathbb{R}^d$ to denote the Euclidean norm of a vector. Problem Formulation. Given a graph $\mathcal{G} = (V, E, W)$, each node $i \in V$ has a feature vector $X_{i,:}$, the $i$-th row of $X$, and a subset of the nodes in $\mathcal{G}$ have labels from a label set $\mathcal{L} = \{1, \ldots, L\}$. The goal of semi-supervised node classification on $\mathcal{G}$ is to learn a mapping $\mathcal{M}: V \to \mathcal{L}$ and predict the labels of the unlabeled nodes. ¹Note that if the low frequencies and high frequencies dominate the middle frequencies (the frequencies around the cutoff frequency), we say that the filter works as both a low-pass and a high-pass filter. Homophily and Heterophily. The homophily or heterophily of a graph describes the relation between the labels of linked nodes in the graph.
The level of homophily of a graph can be measured by $\mathcal{H}(G) = \mathbb{E}_{i \in V}\left[ \left|\{j\}_{j \in \mathcal{N}_i, y_i = y_j}\right| / |\mathcal{N}_i| \right]$ ( Pei et al. , 2020 ; Chien et al. , 2021 ) , where $\left|\{j\}_{j \in \mathcal{N}_i, y_i = y_j}\right|$ denotes the number of neighbors of $i \in V$ that share the same label as $i$ ; $\mathcal{H}(G) \to 1$ corresponds to strong homophily , while $\mathcal{H}(G) \to 0$ indicates strong heterophily . We say that a graph is a homophilic ( heterophilic ) graph if it has strong homophily ( heterophily ) . Graph Gradient . The graph gradient of an edge $[i,j]$ , $i, j \in V$ , is defined as a measurement of the variation of a function² $f : V \to \mathbb{R}$ on the edge $[i,j]$ . Definition 1 ( Graph Gradient ) . Given a graph $G = (V, E)$ and a function $f : V \to \mathbb{R}$ , the graph gradient is an operator $\nabla : \mathcal{F}_V \to \mathcal{F}_E$ defined by $$(\nabla f)([i,j]) := \sqrt{\frac{W_{i,j}}{D_{j,j}}} f(j) - \sqrt{\frac{W_{i,j}}{D_{i,i}}} f(i) , \quad \text{for all } [i,j] \in E . \quad (1)$$ For $[i,j] \notin E$ , $(\nabla f)([i,j]) := 0$ . The graph gradient of a function $f$ at vertex $i$ is defined as $\nabla f(i) := ((\nabla f)([i,1]), \ldots, (\nabla f)([i,N]))$ and its norm is given by $\|\nabla f(i)\|_2 := \left(\sum_{j=1}^{N} (\nabla f)^2([i,j])\right)^{1/2}$ , which measures the variation of $f$ around node $i$ . We measure the variation of $f$ over the whole graph $G$ by $S_p(f)$ , defined as $$S_p(f) := \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \|(\nabla f)([i,j])\|^p = \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \left\| \sqrt{\frac{W_{i,j}}{D_{j,j}}} f(j) - \sqrt{\frac{W_{i,j}}{D_{i,i}}} f(i) \right\|^p , \quad \text{for } p \geq 1 . \quad (2)$$ Note that the definition of $S_p$ here is different from the $p$-Dirichlet form in Zhou & Schölkopf ( 2005 ) . Graph Divergence . The graph divergence is defined as the adjoint of the graph gradient : Definition 2 ( Graph Divergence ) . Given a graph $G = (V, E)$ and functions $f : V \to \mathbb{R}$ , $g : E \to \mathbb{R}$ , the graph divergence is an operator $\mathrm{div} : \mathcal{F}_E \to \mathcal{F}_V$ which satisfies $$\langle \nabla f, g \rangle = \langle f, -\mathrm{div}\, g \rangle . \quad (3)$$ The graph divergence can be computed by $$(\mathrm{div}\, g)(i) = \sum_{j=1}^{N} \sqrt{\frac{W_{i,j}}{D_{i,i}}} \left( g([i,j]) - g([j,i]) \right) . \quad (4)$$ Fig .
4 in Appendix E.1 gives a small example illustrating the graph gradient and graph divergence . Graph p-Laplacian Operator . From the definitions of the graph gradient and graph divergence , we reach the definition of the graph p-Laplacian operator . Definition 3 ( Graph p-Laplacian³ ) . Given a graph $G = (V, E)$ and a function $f : V \to \mathbb{R}$ , the graph p-Laplacian is an operator $\Delta_p : \mathcal{F}_V \to \mathcal{F}_V$ defined by $$\Delta_p f := -\frac{1}{2} \mathrm{div}\left( \|\nabla f\|^{p-2} \nabla f \right) , \quad \text{for } p \geq 1 , \quad (5)$$ where $\|\cdot\|^{p-2}$ is element-wise , i.e. , $\|\nabla f(i)\|^{p-2} = \left( \|(\nabla f)([i,1])\|^{p-2}, \ldots, \|(\nabla f)([i,N])\|^{p-2} \right)$ . Substituting Eq . ( 1 ) and Eq . ( 4 ) into Eq . ( 5 ) , we obtain $$(\Delta_p f)(i) = \sum_{j=1}^{N} \sqrt{\frac{W_{i,j}}{D_{i,i}}} \|(\nabla f)([j,i])\|^{p-2} \left( \sqrt{\frac{W_{i,j}}{D_{i,i}}} f(i) - \sqrt{\frac{W_{i,j}}{D_{j,j}}} f(j) \right) . \quad (6)$$ The graph p-Laplacian is positive semi-definite : $\langle f, \Delta_p f \rangle = S_p(f) \geq 0$ , and we have $$\frac{\partial S_p(f)}{\partial f}\bigg|_i = p (\Delta_p f)(i) . \quad (7)$$ When $p = 2$ , $\Delta_2$ is referred to as the ordinary Laplacian operator , $\Delta_2 = I - D^{-1/2} W D^{-1/2}$ , and when $p = 1$ , $\Delta_1$ is referred to as the curvature operator , $\Delta_1 f := -\frac{1}{2} \mathrm{div}(\|\nabla f\|^{-1} \nabla f)$ . Note that the Laplacian $\Delta_2$ is a linear operator , while in general for $p \neq 2$ the p-Laplacian is nonlinear , since $\Delta_p(a f) \neq a \Delta_p(f)$ for $a \in \mathbb{R}$ . ²$f$ can be a vector-valued function $f : V \to \mathbb{R}^c$ for some $c \in \mathbb{N}$ ; here we use $f : V \to \mathbb{R}$ for better illustration . ³Note that the definition adopted here is slightly different from the one used in Zhou & Schölkopf ( 2005 ) , where $\|\cdot\|^{p-2}$ is not element-wise , and from the one used in some literature such as Amghibech ( 2003 ) ; Bühler & Hein ( 2009 ) , where $(\Delta_p f)(i) = \sum_{j=1}^{N} \frac{W_{i,j}}{D_{i,i}} |f(i) - f(j)|^{p-2} (f(i) - f(j))$ for $p > 1$ , and $p = 1$ is not allowed . 3 p-LAPLACIAN BASED GRAPH NEURAL NETWORKS In this section , we derive the p-Laplacian message passing scheme from a p-Laplacian regularization framework and present pGNN , a new GNN architecture developed upon this new message passing scheme .
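As a sanity check on the operators of Sec. 2, the adjointness in Eq. (3) and the identity $\langle f, \Delta_p f \rangle = S_p(f)$ can be verified numerically. A minimal NumPy sketch; the 4-node toy graph, the random function values, and $p = 2.5$ are arbitrary choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy undirected graph (assumption: any symmetric W with positive degrees works).
W = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 1.],
              [1., 1., 0., 1.],
              [0., 1., 1., 0.]])
D = W.sum(axis=1)                       # degrees D_ii
f = rng.normal(size=4)                  # vertex function f : V -> R
g = rng.normal(size=(4, 4)) * (W > 0)   # edge function g : E -> R (zero off-edge)

# Eq. (1): (grad f)([i,j]) = sqrt(W_ij/D_jj) f(j) - sqrt(W_ij/D_ii) f(i)
grad = np.sqrt(W / D[None, :]) * f[None, :] - np.sqrt(W / D[:, None]) * f[:, None]

# Eq. (4): (div g)(i) = sum_j sqrt(W_ij/D_ii) (g([i,j]) - g([j,i]))
div_g = (np.sqrt(W / D[:, None]) * (g - g.T)).sum(axis=1)

# Eq. (3): adjointness <grad f, g> = <f, -div g>
assert np.isclose((grad * g).sum(), -(f * div_g).sum())

# Eq. (6): (Delta_p f)(i); note (grad f)([j,i]) = -(grad f)([i,j]) for symmetric W
p = 2.5
lap_p = (np.sqrt(W / D[:, None]) * np.abs(grad) ** (p - 2) * (-grad)).sum(axis=1)

# Positive semi-definiteness identity: <f, Delta_p f> = S_p(f) >= 0
S_p = 0.5 * (np.abs(grad) ** p).sum()
assert np.isclose(f @ lap_p, S_p) and S_p >= 0
```

The antisymmetry of the gradient under edge reversal (for symmetric $W$) is what makes the divergence sum telescope into the adjointness identity.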
We theoretically characterize how p-Laplacian message passing adaptively learns aggregation weights and makes pGNN effective on both homophilic and heterophilic graphs . 3.1 p-LAPLACIAN REGULARIZATION FRAMEWORK Given an undirected graph $G = (V, E)$ and a signal function with $c$ ( $c \in \mathbb{N}$ ) channels $f : V \to \mathbb{R}^c$ , let $X = (X_{1,:}^\top, \ldots, X_{N,:}^\top)^\top \in \mathbb{R}^{N \times c}$ with $X_{i,:} \in \mathbb{R}^{1 \times c}$ , $i \in [N]$ , denote the node features of $G$ , and let $F = (F_{1,:}^\top, \ldots, F_{N,:}^\top)^\top \in \mathbb{R}^{N \times c}$ be a matrix whose $i$-th row vector $F_{i,:} \in \mathbb{R}^{1 \times c}$ , $i \in [N]$ , represents the value of $f$ at the $i$-th vertex of $G$ . We present a p-Laplacian regularization problem whose cost function is defined by $$F^* = \arg\min_F L(F) := \arg\min_F S_p(F) + \mu \sum_{i=1}^{N} \|F_{i,:} - X_{i,:}\|^2 , \quad (8)$$ where $\mu \in (0, \infty)$ . The first term on the right-hand side of Eq . ( 8 ) is a measurement of the variation of the signal over the graph based on the p-Laplacian . As we will discuss later , different choices of $p$ result in different smoothness constraints on the signals . The second term is the constraint that the optimal signals $F^*$ should not change too much from the input signal $X$ , and $\mu$ provides a trade-off between these two constraints . Regularization with p = 2 . When $p = 2$ , the solution of Eq . ( 8 ) satisfies $\Delta_2 F^* + \mu (F^* - X) = 0$ and we can obtain the closed form ( Zhou et al. , 2003 ; Zhou & Schölkopf , 2005 ) $$F^* = \mu (\Delta_2 + \mu I_N)^{-1} X . \quad (9)$$ We can then use the following iterative algorithm to approximate Eq . ( 9 ) : $$F^{(k+1)} = \alpha D^{-1/2} W D^{-1/2} F^{(k)} + \beta X , \quad (10)$$ where $k$ is the iteration index , $\alpha = \frac{1}{1+\mu}$ and $\beta = \frac{\mu}{1+\mu} = 1 - \alpha$ . The iteration converges to the closed-form solution as $k$ goes to infinity ( Zhou et al. , 2003 ; Zhou & Schölkopf , 2005 ) . We can relate this result to the personalized PageRank ( PPR ) ( Page et al. , 1999 ; Klicpera et al. , 2019 ) algorithm ( proof deferred to Appendix D.1 ) : Theorem 1 ( Relation to personalized PageRank ( Klicpera et al.
, 2019 ) ) . $\mu (\Delta_2 + \mu I_N)^{-1}$ in the closed-form solution of Eq . ( 9 ) is equivalent to the personalized PageRank matrix . Regularization with p > 1 . For $p > 1$ , the solution of Eq . ( 8 ) satisfies $p \Delta_p F^* + 2\mu (F^* - X) = 0$ . By Eq . ( 6 ) we have that , for all $i \in [N]$ , $$\sum_{j=1}^{N} \frac{W_{i,j}}{\sqrt{D_{i,i}}} \|(\nabla f^*)([j,i])\|^{p-2} \left( \frac{1}{\sqrt{D_{i,i}}} F^*_{i,:} - \frac{1}{\sqrt{D_{j,j}}} F^*_{j,:} \right) + \frac{2\mu}{p} (F^*_{i,:} - X_{i,:}) = 0 .$$ Based on this , we can construct a similar iterative algorithm to obtain a solution ( Zhou & Schölkopf , 2005 ) : $$F^{(k+1)}_{i,:} = \alpha^{(k)}_{i,i} \sum_{j=1}^{N} \frac{M^{(k)}_{i,j}}{\sqrt{D_{i,i} D_{j,j}}} F^{(k)}_{j,:} + \beta^{(k)}_{i,i} X_{i,:} , \quad \text{for all } i \in [N] , \quad (11)$$ with $M^{(k)} \in \mathbb{R}^{N \times N}$ , $\alpha^{(k)} = \mathrm{diag}(\alpha^{(k)}_{1,1}, \ldots, \alpha^{(k)}_{N,N})$ , $\beta^{(k)} = \mathrm{diag}(\beta^{(k)}_{1,1}, \ldots, \beta^{(k)}_{N,N})$ updated by $$M^{(k)}_{i,j} = W_{i,j} \left\| \sqrt{\frac{W_{i,j}}{D_{i,i}}} F^{(k)}_{i,:} - \sqrt{\frac{W_{i,j}}{D_{j,j}}} F^{(k)}_{j,:} \right\|^{p-2} , \quad \text{for all } i, j \in [N] , \quad (12)$$ $$\alpha^{(k)}_{i,i} = 1 \Big/ \left( \sum_{j=1}^{N} \frac{M^{(k)}_{i,j}}{D_{i,i}} + \frac{2\mu}{p} \right) , \quad \beta^{(k)}_{i,i} = \frac{2\mu}{p} \alpha^{(k)}_{i,i} , \quad \text{for all } i \in [N] . \quad (13)$$ Note that in Eq . ( 12 ) , when $\left\| \sqrt{W_{i,j}/D_{i,i}}\, F^{(k)}_{i,:} - \sqrt{W_{i,j}/D_{j,j}}\, F^{(k)}_{j,:} \right\| = 0$ , we set $M^{(k)}_{i,j} = 0$ . It is easy to see that Eq . ( 10 ) is the special case of Eq . ( 14 ) with $p = 2$ . Remark 1 ( Discussion on p = 1 ) . For $p = 1$ , when $f$ is a real-valued function ( $c = 1$ ) , $\Delta_1 f$ is a step function , which can make the stationary condition of the objective in Eq . ( 8 ) problematic . Additionally , $\Delta_1 f$ is not continuous at $\|(\nabla f)([i,j])\| = 0$ . Therefore , $p = 1$ is not allowed when $f$ is a real-valued function . On the other hand , note that there is a norm in $\Delta_p f$ . When $f$ is a vector-valued function ( $c > 1$ ) , the step function in $\Delta_1 f$ only arises on the axes . The stationary condition is fine if the node embeddings $F$ are not a matrix of vectors having only one non-zero element , which is true for many graphs ; $p = 1$ may work for these graphs .
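One step of the iteration in Eqs. (11)–(13) can be sketched in a few lines of NumPy. The toy graph, features, and hyperparameters below are arbitrary stand-ins; as a consistency check, for $p = 2$ the step should reduce to Eq. (10) with $\alpha = 1/(1+\mu)$ and $\beta = \mu/(1+\mu)$:

```python
import numpy as np

def pl_step(F, X, W, D, p, mu):
    """One step of the p-Laplacian iteration, Eqs. (11)-(13)."""
    # Eq. (12): M_ij = W_ij * || sqrt(W_ij/D_ii) F_i - sqrt(W_ij/D_jj) F_j ||^(p-2)
    diff = (np.sqrt(W / D[:, None])[..., None] * F[:, None, :]
            - np.sqrt(W / D[None, :])[..., None] * F[None, :, :])
    norm = np.linalg.norm(diff, axis=-1)
    M = np.zeros_like(W)
    mask = norm > 0                       # M_ij = 0 where the gradient norm vanishes
    M[mask] = W[mask] * norm[mask] ** (p - 2)
    # Eq. (13): per-node mixing coefficients
    alpha = 1.0 / (M.sum(axis=1) / D + 2 * mu / p)
    beta = (2 * mu / p) * alpha
    # Eq. (11): F_i <- alpha_ii sum_j M_ij / sqrt(D_ii D_jj) F_j + beta_ii X_i
    return alpha[:, None] * ((M / np.sqrt(np.outer(D, D))) @ F) + beta[:, None] * X

# Sanity check: for p = 2, M reduces to W and the step matches Eq. (10).
rng = np.random.default_rng(0)
N, c, mu = 5, 3, 0.3
W = rng.random((N, N)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
D = W.sum(axis=1)
X, F = rng.normal(size=(N, c)), rng.normal(size=(N, c))
out = pl_step(F, X, W, D, p=2.0, mu=mu)
Wn = W / np.sqrt(np.outer(D, D))
expected = (Wn @ F) / (1 + mu) + mu / (1 + mu) * X
```

At $p = 2$, the gradient-norm factor `norm ** (p - 2)` is identically 1 on edges, so the adaptive reweighting disappears and only the fixed normalized adjacency remains.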
Overall , we suggest using $p > 1$ in practice , but $p = 1$ may work for graphs with multi-channel signals as well . We conduct experiments for $p > 1$ ( e.g. , $p = 1.5, 2, 2.5$ ) and $p = 1$ in Sec . 5 . 3.2 p-LAPLACIAN MESSAGE PASSING AND pGNN ARCHITECTURE p-Laplacian Message Passing . Rewriting Eq . ( 11 ) in matrix form , we obtain $$F^{(k+1)} = \alpha^{(k)} D^{-1/2} M^{(k)} D^{-1/2} F^{(k)} + \beta^{(k)} X . \quad (14)$$ Eq . ( 14 ) provides a new message passing mechanism , named p-Laplacian message passing . Remark 2 . $\alpha D^{-1/2} M D^{-1/2}$ in Eq . ( 14 ) can be regarded as the learned aggregation weights at each message passing step . It suggests that p-Laplacian message passing can adaptively tune the aggregation weights during the course of learning , which will be demonstrated theoretically and empirically in the sequel of this paper . $\beta X$ in Eq . ( 14 ) can be regarded as a residual unit , which helps the model escape from the oversmoothing issue ( Chien et al. , 2021 ) . We present the following theorem to show the shrinking property of p-Laplacian message passing . Theorem 2 ( Shrinking Property of p-Laplacian Message Passing ) . Given a graph $G = (V, E, W)$ with node features $X$ , suppose $\beta^{(k)}, F^{(k)}, M^{(k)}, \alpha^{(k)}$ are updated according to Equations ( 11 ) to ( 13 ) for $k = 0, 1, \ldots, K$ with $F^{(0)} = X$ . Then for any $p > 1$ there exists some positive real value $\mu > 0$ , depending on $X$ , $G$ and $p$ , such that $L_p(F^{(k+1)}) \leq L_p(F^{(k)})$ . Proof see Appendix D.2 . Thm . 2 shows that with a proper positive real value $\mu$ and $p > 1$ , the loss of the objective function in Eq . ( 8 ) is guaranteed to decline after taking one step of p-Laplacian message passing . Thm . 2 also demonstrates that the iteration in Equations ( 11 ) to ( 13 ) is guaranteed to converge for $p > 1$ with some proper $\mu$ , chosen depending on the input graph and $p$ . pGNN Architecture . We design the architecture of pGNNs using p-Laplacian message passing .
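At $p = 2$, where the step is exactly Eq. (10), both the monotone decrease of the objective in the spirit of Thm. 2 and the convergence to the closed form of Eq. (9) can be checked directly. A small NumPy sketch; the random toy graph, sizes, and $\mu = 0.2$ are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, c, mu = 6, 2, 0.2

# Toy symmetric graph with positive degrees (assumption: any such W works).
W = rng.random((N, N)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
D = W.sum(axis=1)
Wn = W / np.sqrt(np.outer(D, D))            # D^{-1/2} W D^{-1/2}
L2 = np.eye(N) - Wn                          # Delta_2
X = rng.normal(size=(N, c))

def loss(F):
    """Objective of Eq. (8) at p = 2: S_2(F) + mu * ||F - X||_F^2."""
    return np.trace(F.T @ L2 @ F) + mu * ((F - X) ** 2).sum()

# Eq. (10) iteration started at F^(0) = X
alpha, beta = 1 / (1 + mu), mu / (1 + mu)
F = X.copy()
losses = [loss(F)]
for _ in range(200):
    F = alpha * (Wn @ F) + beta * X
    losses.append(loss(F))

assert all(b <= a + 1e-9 for a, b in zip(losses, losses[1:]))   # loss never increases
F_star = mu * np.linalg.solve(L2 + mu * np.eye(N), X)            # closed form, Eq. (9)
assert np.allclose(F, F_star, atol=1e-8)
```

The $p = 2$ iteration is a fixed-point scheme contracting toward the unique minimizer, which is why 200 steps land on the closed-form solution to numerical precision.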
Given node features $X \in \mathbb{R}^{N \times c}$ , the number of node labels $L$ , the number of hidden units $h$ , the maximum number of iterations $K$ , and $M$ , $\alpha$ , and $\beta$ updated by Equations ( 12 ) and ( 13 ) respectively , we give the pGNN architecture as follows : $$F^{(0)} = \mathrm{ReLU}(X \Theta^{(1)}) , \quad (15)$$ $$F^{(k+1)} = \alpha^{(k)} D^{-1/2} M^{(k)} D^{-1/2} F^{(k)} + \beta^{(k)} F^{(0)} , \quad k = 0, 1, \ldots, K-1 , \quad (16)$$ $$Z = \mathrm{softmax}(F^{(K)} \Theta^{(2)}) , \quad (17)$$ where $Z \in \mathbb{R}^{N \times L}$ is the output probability matrix , with $Z_{i,j}$ the estimated probability that the label of node $i \in [N]$ is $j \in [L]$ given the features $X$ and the graph $G$ , and $\Theta^{(1)} \in \mathbb{R}^{c \times h}$ and $\Theta^{(2)} \in \mathbb{R}^{h \times L}$ are the first- and second-layer parameters of the neural network , respectively . Remark 3 ( Connection to existing GNN variants ) . The message passing scheme of pGNNs is different from that of several GNN variants ( say , GCN , GAT , and GraphSAGE ) , which repeatedly stack message passing layers . In contrast , it is similar to SGC ( Wu et al. , 2019 ) , APPNP ( Klicpera et al. , 2019 ) , and GPRGNN ( Chien et al. , 2021 ) . SGC is an approximation to the closed form in Eq . ( 9 ) ( Fu et al. , 2020 ) . By Thm . 1 , it is easy to see that APPNP , which uses PPR to propagate the node embeddings , is analogous to pGNN with $p = 2$ , termed 2.0GNN . APPNP and 2.0GNN work analogously and effectively on homophilic graphs . 2.0GNN can also work effectively on heterophilic graphs by letting $\Theta^{(2)}$ be negative . However , APPNP fails on heterophilic graphs as its PPR weights are fixed ( Chien et al. , 2021 ) . Unlike APPNP , GPRGNN , which adaptively learns the generalized PageRank ( GPR ) weights , works similarly to 2.0GNN on both homophilic and heterophilic graphs . However , GPRGNN needs more supervised information in order to learn optimal GPR weights . In contrast , pGNNs need less supervised information to obtain similar results because $\Theta^{(2)}$ acts like a hyperplane for classification .
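A minimal NumPy sketch of the forward pass in Eqs. (15)–(17); this is an untrained illustration, not the authors' implementation, and the weight matrices, toy graph, and hyperparameters below are random stand-ins:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def pgnn_forward(X, W, Theta1, Theta2, p=1.5, mu=0.1, K=4):
    """pGNN forward pass, Eqs. (15)-(17)."""
    D = W.sum(axis=1)
    F0 = relu(X @ Theta1)                                  # Eq. (15)
    F = F0.copy()
    for _ in range(K):                                     # Eq. (16)
        diff = (np.sqrt(W / D[:, None])[..., None] * F[:, None, :]
                - np.sqrt(W / D[None, :])[..., None] * F[None, :, :])
        norm = np.linalg.norm(diff, axis=-1)
        M = np.zeros_like(W)
        mask = norm > 0
        M[mask] = W[mask] * norm[mask] ** (p - 2)          # Eq. (12)
        alpha = 1.0 / (M.sum(axis=1) / D + 2 * mu / p)     # Eq. (13)
        beta = (2 * mu / p) * alpha
        F = alpha[:, None] * ((M / np.sqrt(np.outer(D, D))) @ F) + beta[:, None] * F0
    return softmax(F @ Theta2)                             # Eq. (17)

rng = np.random.default_rng(0)
N, c, h, L = 6, 4, 8, 3
W = rng.random((N, N)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
Z = pgnn_forward(rng.normal(size=(N, c)), W,
                 rng.normal(size=(c, h)), rng.normal(size=(h, L)))
```

Note that, per Eq. (16), the residual term feeds back $F^{(0)}$ (the transformed features) rather than the raw features $X$ used in Eq. (14).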
pGNNs can thus work better under weak supervision . Our analysis is also verified by the experimental results in Sec . 5 . We also provide an upper bound on the risk of pGNNs ( Thm . 4 in Appendix C.1 ) to study the effect of the hyperparameter $\mu$ on the performance of pGNNs . Thm . 4 shows that the risk of pGNNs is upper-bounded by the sum of three terms : the risk of label prediction using only the original node features $X$ , the norm of the p-Laplacian diffusion on $X$ , and the magnitude of the noise in $X$ . $\mu$ controls the trade-off between these three terms : the smaller $\mu$ , the more weight on the p-Laplacian diffusion term and the noise term and the less weight on the remaining term , and vice versa . 4 SPECTRAL VIEWS OF p-LAPLACIAN MESSAGE PASSING In this section , we theoretically demonstrate that p-Laplacian message passing is an approximation of a polynomial graph filter defined on the spectral domain of the p-Laplacian . We show by spectral analysis that p-Laplacian message passing works simultaneously as low-pass and high-pass filters . p-Eigenvalues and p-Eigenvectors of the Graph p-Laplacian . We first introduce the definitions of p-eigenvalues and p-eigenvectors of the p-Laplacian . Let $\phi_p : \mathbb{R} \to \mathbb{R}$ be defined as $\phi_p(u) = \|u\|^{p-2} u$ for $u \in \mathbb{R}$ , $u \neq 0$ . Note that $\phi_2(u) = u$ . For notational simplicity , we denote $\phi_p(u) = (\phi_p(u_1), \ldots, \phi_p(u_N))^\top$ for $u \in \mathbb{R}^N$ and $\Phi_p(U) = (\phi_p(U_{:,1}), \ldots, \phi_p(U_{:,N}))$ for $U \in \mathbb{R}^{N \times N}$ , where $U_{:,i} \in \mathbb{R}^N$ is the $i$-th column vector of $U$ . Definition 4 ( p-Eigenvector and p-Eigenvalue ) . A vector $u \in \mathbb{R}^N$ is a p-eigenvector of $\Delta_p$ if it satisfies the equation $(\Delta_p u)_i = \lambda \phi_p(u_i)$ for all $i \in [N]$ , where $\lambda \in \mathbb{R}$ is a real value referred to as a p-eigenvalue of $\Delta_p$ associated with the p-eigenvector $u$ . Definition 5 ( p-Orthogonal ( Luo , Huang , Ding , and Nie , 2010 ) ) . Given two vectors $u, v \in \mathbb{R}^N$ with $u, v \neq 0$ , we say that $u$ and $v$ are p-orthogonal if $\phi_p(u)^\top \phi_p(v) = \sum_{i=1}^{N} \phi_p(u_i) \phi_p(v_i) = 0$ . Luo et al .
( 2010 ) demonstrated that the p-eigenvectors of $\Delta_p$ are p-orthogonal to each other ( see Thm . 5 in Appendix C.2 for details ) . Therefore , the space spanned by the p-eigenvectors of $\Delta_p$ is p-orthogonal . Additionally , we demonstrate that the p-eigen-decomposition of $\Delta_p$ is given by $\Delta_p = \Phi_p(U) \Lambda \Phi_p(U)^\top$ ( see Thm . 6 in Appendix C.3 for details ) , where $U$ is a matrix of p-eigenvectors of $\Delta_p$ and $\Lambda$ is a diagonal matrix whose diagonal contains the p-eigenvalues of $\Delta_p$ . Graph Convolutions based on the p-Laplacian . Based on Thm . 5 , the graph Fourier transform $\hat{f}$ of any function $f$ on the vertices of $G$ can be defined as the expansion of $f$ in terms of $\Phi_p(U)$ , where $U$ is the matrix of p-eigenvectors of $\Delta_p$ : $\hat{f} = \Phi_p(U)^\top f$ . Similarly , the inverse graph Fourier transform is given by $f = \Phi_p(U) \hat{f}$ . Therefore , a signal $X \in \mathbb{R}^{N \times c}$ filtered by a spectral filter $g_\theta$ can be expressed formally as $g_\theta \star X = \Phi_p(U) \hat{g}_\theta(\Lambda) \Phi_p(U)^\top X$ , where $\Lambda$ denotes a diagonal matrix whose diagonal contains the p-eigenvalues $\{\lambda_l\}_{l=0,\ldots,N-1}$ of $\Delta_p$ and $\hat{g}_\theta(\Lambda)$ denotes a diagonal matrix whose diagonal contains the spectral filter coefficients . Let $\hat{g}_\theta$ be a polynomial filter defined as $\hat{g}_\theta = \sum_{k=0}^{K-1} \theta_k \lambda_l^k$ , where the parameter $\theta = [\theta_0, \ldots, \theta_{K-1}]^\top \in \mathbb{R}^K$ is a vector of polynomial coefficients . By the p-eigen-decomposition of the p-Laplacian , we have $$g_\theta \star X \approx \sum_{k=0}^{K-1} \theta_k \Phi_p(U) \Lambda^k \Phi_p(U)^\top X = \sum_{k=0}^{K-1} \theta_k \Delta_p^k X . \quad (18)$$ Theorem 3 . The $K$-step p-Laplacian message passing is a $K$-order polynomial approximation to the graph filter given by Eq . ( 18 ) . Proof see Appendix D.3 . Thm . 3 indicates that the p-Laplacian message passing mechanism is implicitly a polynomial spectral filter defined on the spectral domain of the p-Laplacian . Spectral Analysis of p-Laplacian Message Passing . Here , we analyze the spectral properties of p-Laplacian message passing .
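For $p = 2$, $\phi_2$ is the identity, so $\Phi_2(U) = U$ is an ordinary orthonormal eigenbasis of the symmetric $\Delta_2$ and the spectral and spatial forms in Eq. (18) coincide exactly. A quick NumPy check of this equivalence on a random toy graph (graph, filter order $K$, and coefficients $\theta$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, c, K = 6, 2, 4

# Random symmetric toy graph and its ordinary Laplacian Delta_2
W = rng.random((N, N)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
D = W.sum(axis=1)
L2 = np.eye(N) - W / np.sqrt(np.outer(D, D))

# Phi_2(U) = U: orthonormal eigenbasis from the symmetric eigendecomposition
lam, U = np.linalg.eigh(L2)
X = rng.normal(size=(N, c))
theta = rng.normal(size=K)                      # polynomial filter coefficients

# Spectral form: U ghat(Lambda) U^T X, with ghat(lambda) = sum_k theta_k lambda^k
ghat = sum(t * lam ** k for k, t in enumerate(theta))
spectral = U @ (ghat[:, None] * (U.T @ X))

# Spatial form of Eq. (18): sum_k theta_k Delta_2^k X
spatial = sum(t * np.linalg.matrix_power(L2, k) @ X for k, t in enumerate(theta))

assert np.allclose(spectral, spatial)
```

The spatial form never needs the eigendecomposition, which is the practical point of Eq. (18): the filter is applied through repeated (p-)Laplacian products, i.e., message passing.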
We can approximately view p-Laplacian message passing as a linear combination of $K$ spectral filters $g(\Lambda)^{(0)}, g(\Lambda)^{(1)}, \ldots, g(\Lambda)^{(K-1)}$ , with each spectral filter defined as $g(\Lambda)^{(k)} := (\alpha D^{-1/2} M D^{-1/2})^k$ , where $M_{i,j} = W_{i,j} \left\| \sqrt{W_{i,j}/D_{i,i}}\, F_{i,:} - \sqrt{W_{i,j}/D_{j,j}}\, F_{j,:} \right\|^{p-2}$ for $i, j \in [N]$ and $F$ is the matrix of node embeddings . We can study the properties of p-Laplacian message passing by studying the spectral properties of $\alpha D^{-1/2} M D^{-1/2}$ , as given below . Proposition 1 . Given a connected graph $G = (V, E, W)$ with node embeddings $F$ and the p-Laplacian $\Delta_p$ with its p-eigenvectors $\{u^{(l)}\}_{l=0,1,\ldots,N-1}$ and p-eigenvalues $\{\lambda_l\}_{l=0,1,\ldots,N-1}$ , let $g_p(\lambda_{i-1}) := \alpha_{i,i} \sum_j D_{i,i}^{-1/2} M_{i,j} D_{j,j}^{-1/2}$ for $i \in [N]$ be the filters defined on the spectral domain of $\Delta_p$ , where $M_{i,j} = W_{i,j} \|\nabla f([i,j])\|^{p-2}$ , $(\nabla f)([i,j])$ is the graph gradient of the edge between nodes $i$ and $j$ , and $\|\nabla f(i)\|$ is the norm of the graph gradient at $i$ . $N_i$ denotes the number of edges connected to $i$ , $N_{\min} = \min\{N_j\}_{j \in [N]}$ , and $k = \arg\max_j \left( \{ |u_j^{(l)}| / \sqrt{D_{l,l}} \}_{j \in [N];\, l = 0, \ldots, N-1} \right)$ . Then : 1 . When $p = 2$ , $g_p(\lambda_{i-1})$ works as both low-pass and high-pass filters . 2 . When $p > 2$ : if $\|\nabla f(i)\| \leq 2^{(p-1)/(p-2)}$ , $g_p(\lambda_{i-1})$ works as both low-pass and high-pass filters on node $i$ , and $g_p(\lambda_{i-1})$ works as a low-pass filter on $i$ when $\|\nabla f(i)\| \geq 2^{(p-1)/(p-2)}$ . 3 . When $1 \leq p < 2$ : if $0 \leq \|\nabla f(i)\| \leq 2 (2\sqrt{N_k})^{1/(p-2)}$ , $g_p(\lambda_{i-1})$ works as a low-pass filter on node $i$ , and $g_p(\lambda_{i-1})$ works as both low-pass and high-pass filters on $i$ when $\|\nabla f(i)\| \geq 2 (2\sqrt{N_k})^{1/(p-2)}$ . Specifically , when $p = 1$ , $N_k$ can be replaced by $N_{\min}$ . Proof see Appendix D.7 . Proposition 1 shows that when $p \neq 2$ , p-Laplacian message passing adaptively works as a low-pass filter or as both low-pass and high-pass filters on node $i$ in terms of the degree of local node embedding variation around $i$ , i.e. ,
the norm of the graph gradient $\|\nabla f(i)\|$ at node $i$ . When $p = 2$ , p-Laplacian message passing works as both low-pass and high-pass filters on node $i$ regardless of the value of $\|\nabla f(i)\|$ . When $p > 2$ , p-Laplacian message passing works as a low-pass filter on node $i$ for large $\|\nabla f(i)\|$ and works as both low-pass and high-pass filters for small $\|\nabla f(i)\|$ . Therefore , pGNNs with $p > 2$ can work very effectively on graphs with strong homophily . When $1 \leq p < 2$ , p-Laplacian message passing works as a low-pass filter for small $\|\nabla f(i)\|$ and works as both low-pass and high-pass filters for large $\|\nabla f(i)\|$ . Thus , pGNNs with $1 \leq p < 2$ can work effectively on graphs with low homophily , i.e. , heterophilic graphs . The results here confirm our analysis of the aggregation weights of p-Laplacian message passing presented in Thm . 2 .
$p$-Laplacian Based Graph Neural Networks
1 INTRODUCTION . In this paper , we explore the use of graph neural networks ( GNNs ) for semi-supervised node classification on graphs , especially when the graphs admit strong heterophily or noisy edges . Semi-supervised learning problems on graphs are ubiquitous in many real-world scenarios , such as user classification in social media ( Kipf & Welling , 2017 ) , protein classification in biology ( Velickovic et al. , 2018 ) , molecular property prediction in chemistry ( Duvenaud et al. , 2015 ) , and many others ( Marcheggiani & Titov , 2017 ; Satorras & Estrach , 2018 ) . Recently , GNNs have become the de facto choice for processing graph-structured data . They can exploit the node features and the graph topology by propagating and transforming the features over the topology in each layer , thereby learning refined node representations . A series of GNN architectures have been proposed , including graph convolutional networks ( Bruna et al. , 2014 ; Henaff et al. , 2015 ; Defferrard et al. , 2016 ; Kipf & Welling , 2017 ; Wu et al. , 2019 ) , graph attention networks ( Velickovic et al. , 2018 ; Thekumparampil et al. , 2018 ) , and other representatives ( Hamilton et al. , 2017 ; Xu et al. , 2018 ; Pei et al. , 2020 ) . Most existing GNN architectures work under the homophily assumption , i.e. , the labels of nodes and their neighbors in a graph are the same or consistent , which is also commonly used in graph clustering ( Bach & Jordan , 2004 ; von Luxburg , 2007 ; Liu & Han , 2013 ) and semi-supervised learning on graphs ( Belkin et al. , 2004 ; Hein , 2006 ; Nadler et al. , 2009 ) . However , recent studies ( Zhu et al. , 2020 ; 2021 ; Chien et al. , 2021 ) show that , in contrast to their success on homophilic graphs , most GNNs fail to work well on heterophilic graphs , in which linked nodes are more likely to have distinct labels . Moreover , GNNs can even fail on graphs whose topology is not helpful for label prediction .
In these cases , propagating and transforming node features over the graph topology can lead to worse performance than simply applying multi-layer perceptrons ( MLPs ) to each of the nodes independently . Several recent works have been proposed to deal with the heterophily issues of GNNs . Zhu et al . ( 2020 ) find that heuristically combining ego- , neighbor- , and higher-order embeddings improves GNN performance on heterophilic graphs . Zhu et al . ( 2021 ) use a compatibility matrix to model the graph homophily or heterophily level . Chien et al . ( 2021 ) incorporate the generalized PageRank algorithm into graph convolutions so as to jointly optimize node feature and topological information extraction for both homophilic and heterophilic graphs . However , the problem of GNNs on graphs with non-informative topologies ( or noisy edges ) remains open . Unlike previous works , we tackle the above issues of GNNs by proposing the discrete p-Laplacian based message passing scheme , termed p-Laplacian message passing . It is derived from a discrete regularization framework and is theoretically verified as an approximation of a polynomial graph filter defined on the spectral domain of the p-Laplacian . Spectral analysis of p-Laplacian message passing shows that it works simultaneously as low-pass and high-pass filters¹ and thus is applicable to both homophilic and heterophilic graphs . Moreover , when $p \neq 2$ , our theoretical results indicate that it can adaptively learn aggregation weights in terms of the variation of node embeddings on edges ( measured by the graph gradient ( Amghibech , 2003 ; Zhou & Schölkopf , 2005 ; Luo et al. , 2010 ) ) , and work as a low-pass filter or as both low-pass and high-pass filters on a node according to the local variation of node embeddings around the node ( measured by the norm of graph gradients ) .
Based on p-Laplacian message passing , we propose a new GNN architecture , called pGNN , to enable GNNs to work with heterophilic graphs and graphs with non-informative topologies . Several existing GNN architectures , including SGC ( Wu et al. , 2019 ) , APPNP ( Klicpera et al. , 2019 ) and GPRGNN ( Chien et al. , 2021 ) , can be shown to be analogical to pGNN with p = 2 . Our empirical studies on real-world benchmark datasets ( homophilic and heterophilic datasets ) and synthetic datasets ( cSBM ( Deshpande et al. , 2018 ) ) demonstrate that pGNNs obtain the best performance on heterophilic graphs and competitive performance on homophilic graphs against state-of-the-art GNNs . Moreover , experimental results on graphs with different levels of noisy edges show that pGNNs work much more robustly than GNN baselines and even as well as MLPs on graphs with completely random edges . Additional experiments ( reported in Appendix F.5 ) illustrate that intergrating pGNNs with existing GNN architectures ( i.e . GCN ( Kipf & Welling , 2017 ) , JKNet ( Xu et al. , 2018 ) ) can significantly improve their performance on heterophilic graphs . In conclusion , our contributions can be summarized as below : ( 1 ) New methodologies . We propose p-Laplacian message passing and pGNN to adapt GNNs to heterophilic graphs and graphs where the topology is non-informative for label prediction . ( 2 ) Superior performance . We empirically demonstrate that pGNNs is superior on heterophilic graphs and competitive on homophilic graphs against state-of-the-art GNNs . Moreover , pGNNs work robustly on graphs with noisy edges or non-informative topologies . ( 3 ) Theoretical justification . We theoretically demonstrate that p-Laplacian message passing works as both low-pass and high-pass filters and the message passing iteration is guarantee to converge with proper settings . ( 4 ) New paradigm of designing GNN architectures . 
We bridge the gap between discrete regularization framework and GNNs , which could further inspire researchers to develop new graph convolutions or message passing schemes using other regularization techniques with explicit assumptions on graphs . Due to space limit , we defer the discussions on related work and future work and all proofs to the Appendix . 2 PRELIMINARIES AND BACKGROUND . Notation . Let G = ( V , E , W ) be an undirected graph , where V = { 1 , 2 , . . . , N } is the set of nodes , E ⊆ V × V is the set of edges , W ∈ RN×N is the adjacency matrix and Wi , j = Wj , i , Wi , j > 0 for [ i , j ] ∈ E , Wi , j = 0 , otherwise . Ni = { j } [ i , j ] ∈E denotes the set of neighbors of node i , D ∈ RN×N = diag ( D1,1 , . . . DN , N ) denotes the diagonal degree matrix with Di , i = ∑N j=1 Wi , j , for i = 1 , . . . , N . f : V → R and g : E → R are functions defined on the vertices and edges of G , respectively . FV denotes the Hilbert space of functions endowed with the inner product ⟨f , f̃⟩FV : =∑ i∈V f ( i ) f̃ ( i ) . Similarly define FE . We also denote by [ K ] = { 1 , 2 , . . . , K } , ∀K ∈ N and we use ∥x∥ = ∥x∥2 = ( ∑d i=1 x 2 i ) 1/2 , ∀x ∈ Rd to denote the Frobenius norm of a vector . Problem Formulation . Given a graph G = ( V , E , W ) , each node i ∈ V has a feature vector Xi , : which is the i-th row of X and a subset of nodes in G have labels from a label set L = { 1 , . . . , L } . The goal of semi-supervised node classification on G is to learn a mapping M : V → L and predict the labels of unlabeled nodes . 1Note that if the low frequencies and high frequencies dominate the middle frequencies ( the frequencies that are around the cutoff frequency ) , we say that the filter works both as low-pass and high-pass filters . Homophily and Heterophily . The homophily or heterophily of a graph is used to describe the relation of labels between linked nodes in the graphs . 
The level of homophily of a graph can be measured by H ( G ) = Ei∈V [ ∣∣ { j } j∈Ni , yi=yj ∣∣ /|Ni| ] ( Pei et al. , 2020 ; Chien et al. , 2021 ) , where∣∣ { j } j∈Ni , yi=yj ∣∣ denotes the number of neighbors of i ∈ V that share the same label as i and H ( G ) → 1 corresponds to strong homophily while H ( G ) → 0 indicates strong heterophily . We say that a graph is a homophilic ( heterophilic ) graph if it has strong homophily ( heterophily ) . Graph Gradient . The graph gradient of an edge [ i , j ] , i , j ∈ V is defined to be a measurement of the variation of a function f 2 : V → R on the edge [ i , j ] . Definition 1 ( Graph Gradient ) . Given a graph G = ( V , E ) and a function f : V → R , the graph gradient is an operator ∇ : FV → FE defined by ( ∇f ) ( [ i , j ] ) : = √ Wi , j Dj , j f ( j ) − √ Wi , j Di , i f ( i ) , for all [ i , j ] ∈ E . ( 1 ) For [ i , j ] /∈ E , ( ∇f ) ( [ i , j ] ) : = 0 . The graph gradient of a function f at vertex i is defined to be ∇f ( i ) : = ( ( ∇f ) ( [ i , 1 ] ) , . . . , ( ∇f ) ( [ i , N ] ) ) and its Frobenius norm is given by ∥∇f ( i ) ∥2 : = ( ∑N j=1 ( ∇f ) 2 ( [ i , j ] ) ) 1/2 , which measures the variation of f around node i . We measure the variation of f over the whole graph G by Sp ( f ) where it is defined to be Sp ( f ) : = 1 2 N∑ i=1 N∑ j=1 ∥ ( ∇f ) ( [ i , j ] ) ∥p = 1 2 N∑ i=1 N∑ j=1 ∥∥∥∥∥ √ Wi , j Dj , j f ( j ) − √ Wi , j Di , i f ( i ) ∥∥∥∥∥ p , for p ≥ 1 , ( 2 ) Note that the definition of Sp here is different with the p-Dirichlet form in Zhou & Schölkopf ( 2005 ) . Graph Divergence . The graph divergence is defined to be the adjoint of the graph gradient : Definition 2 ( Graph Divergence ) . Given a graph G = ( V , E ) , and functions f : V → R , g : E → R , the graph divergence is an operator div : FE → FV which satisfies ⟨∇f , g⟩ = ⟨f , −divg⟩ . ( 3 ) The graph divergence can be computed by ( divg ) ( i ) = N∑ j=1 √ Wi , j Di , i ( g ( [ i , j ] ) − g ( [ j , i ] ) ) . ( 4 ) Fig . 
4 in Appendix E.1 gives a tiny example of illustration of graph gradient and graph divergence . Graph p-Laplacian Operator . By the definitions of graph gradient and graph divergence , we reach the definition of graph p-Laplacian operator as below . Definition 3 ( Graph p-Laplacian3 ) . Given a graph G = ( V , E ) and a function f : V → R , the graph p-Laplacian is an operator ∆p : FV → FV defined by ∆pf : = − 1 2 div ( ∥∇f∥p−2∇f ) , for p ≥ 1 . ( 5 ) where ∥ · ∥p−2 is element-wise , i.e . ∥∇f ( i ) ∥p−2 = ( ∥ ( ∇f ) ( [ i , 1 ] ) ∥p−2 , . . . , ∥ ( ∇f ) ( [ i , N ] ) ∥p−2 ) . Substituting Eq . ( 1 ) and Eq . ( 4 ) into Eq . ( 5 ) , we obtain ( ∆pf ) ( i ) = N∑ j=1 √ Wi , j Di , i ∥ ( ∇f ) ( [ j , i ] ) ∥p−2 ( √ Wi , j Di , i f ( i ) − √ Wi , j Dj , j f ( j ) ) ( 6 ) The graph p-Laplacian is semi-definite : ⟨f , ∆pf⟩ = Sp ( f ) ≥ 0 and we have ∂Sp ( f ) ∂f ∣∣∣∣ i = p ( ∆pf ) ( i ) . ( 7 ) When p = 2 , ∆2 is refered as the ordinary Laplacian operator and ∆2 = I − D−1/2WD−1/2 and when p = 1 , ∆1 is refered as the Curvature operator and ∆1f : = − 12div ( ∥∇f∥ −1∇f ) . Note that Laplacian ∆2 is a linear operator , while in general for p ̸= 2 , p-Laplacian is nonlinear since ∆p ( af ) ̸= a∆p ( f ) for a ∈ R. 2f can be a vector function : f : V → Rc for some c ∈ N and here we use f : V → R for better illustration . 3Note that the definition adopted is slightly different with the one used in Zhou & Schölkopf ( 2005 ) where ∥ · ∥p−2 is not element-wise and the one used in some literature such as Amghibech ( 2003 ) ; Bühler & Hein ( 2009 ) , where ( ∆pf ) ( i ) = ∑N j=1 Wi , j Di , i |f ( i ) − f ( j ) |p−2 ( f ( i ) − f ( j ) ) for p > 1 and p = 1 is not allowed . 3 p-LAPLACIAN BASED GRAPH NEURAL NETWORKS In this section , we derive the p-Laplacian message passing scheme from a p-Laplacian regularization framework and present pGNN , a new GNN architecture developed upon the new message passing scheme . 
We theoretically characterize how p-Laplacian message passing adaptively learns aggregation weights and profits pGNN for being effective on both homophilic and heterophilic graphs . 3.1 p-LAPLACIAN REGULARIZATION FRAMEWORK Given an undirected graph G = ( V , E ) and a signal function with c ( c ∈ N ) channels f : V → Rc , let X = ( X⊤1 , : , . . . , X ⊤ N , : ) ⊤ ∈ RN×c with Xi , : ∈ R1×c , i ∈ [ N ] denoting the node features of G and F = ( F⊤1 , : , . . . , F ⊤ N , : ) ⊤ ∈ RN×c be a matrix whose ith row vector Fi , : ∈ R1×c , i ∈ [ N ] represents the function value of f at the i-th vertex in G. We present a p-Laplacian regularization problem whose cost function is defined to be F∗ = argmin F L ( F ) : = argmin F Sp ( F ) + µ N∑ i=1 ∥Fi , : −Xi , :∥2 , ( 8 ) where µ ∈ ( 0 , ∞ ) . The first term of the right-hand side in Eq . ( 8 ) is a measurement of variation of the signal over the graph based on p-Laplacian . As we will discuss later , different choices of p result in different smoothness constraint on the signals . The second term is the constraint that the optimal signals F∗ should not change too much from the input signal X , and µ provides a trade-off between these two constraints . Regularization with p = 2 . When p = 2 , the solution of Eq . ( 8 ) satisfies ∆2F∗ + µ ( F∗ −X ) = 0 and we can obtain the closed form ( Zhou et al. , 2003 ; Zhou & Schölkopf , 2005 ) F∗ = µ ( ∆2 + µIN ) −1X . ( 9 ) Then , we could use the following iteration algorithm to get an approximation of Eq . ( 9 ) : F ( k+1 ) = αD−1/2WD−1/2F ( k ) + βX , ( 10 ) where k represents the iteration index , α = 11+µ and β = µ 1+µ = 1− α . The iteration converges to a closed-form solution as k goes to infinity ( Zhou et al. , 2003 ; Zhou & Schölkopf , 2005 ) . We could relate the the result here with the personalized PageRank ( PPR ) ( Page et al. , 1999 ; Klicpera et al. , 2019 ) algorithm ( proof defered to Appendix D.1 ) : Theorem 1 ( Relation to personalized PageRank ( Klicpera et al. 
, 2019)). µ(∆_2 + µ I_N)^{−1} in the closed-form solution of Eq. (9) is equivalent to the personalized PageRank matrix.

Regularization with p > 1. For p > 1, the solution of Eq. (8) satisfies p ∆_p F* + 2µ(F* − X) = 0. By Eq. (6) we have, for all i ∈ [N],

Σ_{j=1}^N (W_{i,j}/√D_{i,i}) ‖(∇f*)([j,i])‖^{p−2} ( (1/√D_{i,i}) F*_{i,:} − (1/√D_{j,j}) F*_{j,:} ) + (2µ/p)(F*_{i,:} − X_{i,:}) = 0.

Based on this, we can construct a similar iterative algorithm to obtain a solution (Zhou & Schölkopf, 2005):

F^{(k+1)}_{i,:} = α^{(k)}_{i,i} Σ_{j=1}^N ( M^{(k)}_{i,j} / √(D_{i,i} D_{j,j}) ) F^{(k)}_{j,:} + β^{(k)}_{i,i} X_{i,:}, for all i ∈ [N],   (11)

with M^{(k)} ∈ R^{N×N}, α^{(k)} = diag(α^{(k)}_{1,1}, ..., α^{(k)}_{N,N}), β^{(k)} = diag(β^{(k)}_{1,1}, ..., β^{(k)}_{N,N}) updated by

M^{(k)}_{i,j} = W_{i,j} ‖ √(W_{i,j}/D_{i,i}) F^{(k)}_{i,:} − √(W_{i,j}/D_{j,j}) F^{(k)}_{j,:} ‖^{p−2}, for all i, j ∈ [N],   (12)

α^{(k)}_{i,i} = 1 / ( Σ_{j=1}^N M^{(k)}_{i,j}/D_{i,i} + 2µ/p ), β^{(k)}_{i,i} = (2µ/p) α^{(k)}_{i,i}, for all i ∈ [N].   (13)

Note that in Eq. (12), when ‖√(W_{i,j}/D_{i,i}) F^{(k)}_{i,:} − √(W_{i,j}/D_{j,j}) F^{(k)}_{j,:}‖ = 0, we set M^{(k)}_{i,j} = 0. It is easy to see that Eq. (10) is the special case of Eq. (14) with p = 2.

Remark 1 (Discussion on p = 1). For p = 1, when f is a real-valued function (c = 1), ∆_1 f is a step function, which can make the stationarity condition of the objective in Eq. (8) problematic. Additionally, ∆_1 f is not continuous at ‖(∇f)([i,j])‖ = 0. Therefore, p = 1 is not allowed when f is a real-valued function. On the other hand, note that there is a Frobenius norm in ∆_p f. When f is a vector-valued function (c > 1), the step function in ∆_1 f only appears on the axes. The stationarity condition is fine as long as the node embeddings F do not form a matrix of vectors with only one non-zero element, which holds for many graphs, so p = 1 may work for these graphs.
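The p = 2 case can be checked numerically: iterating Eq. (10) converges to the closed form of Eq. (9), the personalized-PageRank-style solution of Theorem 1. A small sketch on a 3-node path graph (all values arbitrary):

```python
import numpy as np

# p = 2: iterate Eq. (10) and compare with the closed form Eq. (9).
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[1.0], [0.0], [2.0]])
mu = 0.5
alpha, beta = 1 / (1 + mu), mu / (1 + mu)

D = W.sum(axis=1)
S = W / np.sqrt(np.outer(D, D))             # D^{-1/2} W D^{-1/2}

F = X.copy()
for _ in range(300):                        # Eq. (10)
    F = alpha * S @ F + beta * X

Delta2 = np.eye(3) - S                      # ordinary Laplacian
F_closed = mu * np.linalg.solve(Delta2 + mu * np.eye(3), X)   # Eq. (9)
assert np.allclose(F, F_closed)
```

The contraction factor is α < 1, so a few hundred iterations already match the closed form to machine precision.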
Overall, we suggest using p > 1 in practice, but p = 1 may also work for graphs with multi-channel signals. We conduct experiments for p > 1 (e.g., p = 1.5, 2, 2.5) and p = 1 in Sec. 5.

3.2 p-LAPLACIAN MESSAGE PASSING AND pGNN ARCHITECTURE

p-Laplacian Message Passing. Rewriting Eq. (11) in matrix form, we obtain

F^{(k+1)} = α^{(k)} D^{−1/2} M^{(k)} D^{−1/2} F^{(k)} + β^{(k)} X.   (14)

Eq. (14) provides a new message passing mechanism, named p-Laplacian message passing.

Remark 2. α D^{−1/2} M D^{−1/2} in Eq. (14) can be regarded as the learned aggregation weights at each message passing step. It suggests that p-Laplacian message passing can adaptively tune the aggregation weights during the course of learning, which will be demonstrated theoretically and empirically in the sequel of this paper. βX in Eq. (14) can be regarded as a residual unit, which helps the model escape the oversmoothing issue (Chien et al., 2021). We present the following theorem to show the shrinking property of p-Laplacian message passing.

Theorem 2 (Shrinking Property of p-Laplacian Message Passing). Given a graph G = (V, E, W) with node features X, let β^{(k)}, F^{(k)}, M^{(k)}, α^{(k)} be updated according to Equations (11) to (13) for k = 0, 1, ..., K with F^{(0)} = X. Then for any p > 1 there exists some positive real value µ > 0, depending on X, G, and p, such that L_p(F^{(k+1)}) ≤ L_p(F^{(k)}). Proof see Appendix D.2.

Thm. 2 shows that for a proper positive real value µ and p > 1, the objective in Eq. (8) is guaranteed to decrease after one step of p-Laplacian message passing. Thm. 2 also shows that the iteration in Equations (11) to (13) is guaranteed to converge for p > 1 with a proper µ, whose choice depends on the input graph and p.

pGNN Architecture. We design the architecture of pGNNs using p-Laplacian message passing.
Given node features X ∈ R^{N×c}, the number of node labels L, the number of hidden units h, the maximum number of iterations K, and M, α, and β updated by Equations (12) and (13) respectively, the pGNN architecture is as follows:

F^{(0)} = ReLU(X Θ^{(1)}),   (15)
F^{(k+1)} = α^{(k)} D^{−1/2} M^{(k)} D^{−1/2} F^{(k)} + β^{(k)} F^{(0)}, k = 0, 1, ..., K − 1,   (16)
Z = softmax(F^{(K)} Θ^{(2)}),   (17)

where Z ∈ R^{N×L} is the output probability matrix, with Z_{i,j} the estimated probability that the label of node i ∈ [N] is j ∈ [L] given the features X and the graph G, and Θ^{(1)} ∈ R^{c×h} and Θ^{(2)} ∈ R^{h×L} are the first- and second-layer parameters of the neural network, respectively.

Remark 3 (Connection to existing GNN variants). The message passing scheme of pGNNs differs from that of several GNN variants (e.g., GCN, GAT, and GraphSAGE), which repeatedly stack message passing layers. Instead, it is similar to SGC (Wu et al., 2019), APPNP (Klicpera et al., 2019), and GPRGNN (Chien et al., 2021). SGC is an approximation to the closed form in Eq. (9) (Fu et al., 2020). By Thm. 1, it is easy to see that APPNP, which uses PPR to propagate the node embeddings, is analogous to pGNN with p = 2, termed 2.0GNN. APPNP and 2.0GNN behave analogously and are effective on homophilic graphs. 2.0GNN can also work effectively on heterophilic graphs by letting Θ^{(2)} be negative, whereas APPNP fails on heterophilic graphs because its PPR weights are fixed (Chien et al., 2021). Unlike APPNP, GPRGNN, which adaptively learns generalized PageRank (GPR) weights, works similarly to 2.0GNN on both homophilic and heterophilic graphs. However, GPRGNN needs more supervision to learn optimal GPR weights, whereas pGNNs need less supervision to obtain similar results because Θ^{(2)} acts like a classification hyperplane.
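A minimal sketch of the full forward pass in Eqs. (15)-(17), with the M, α, β updates of Eqs. (12)-(13) written out explicitly; the graph, feature sizes, and random weights below are our own toy choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(Z):
    e = np.exp(Z - Z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def pgnn_forward(W, X, Theta1, Theta2, p=1.5, mu=0.1, K=4):
    """Sketch of the pGNN forward pass, Eqs. (15)-(17). Names are ours."""
    N = W.shape[0]
    D = W.sum(axis=1)
    Ds = 1.0 / np.sqrt(D)
    F0 = np.maximum(X @ Theta1, 0.0)          # Eq. (15): ReLU(X Theta1)
    F = F0.copy()
    for _ in range(K):
        M = np.zeros((N, N))
        for i in range(N):                    # Eq. (12)
            for j in range(N):
                if W[i, j] == 0:
                    continue
                grad = np.sqrt(W[i, j]) * (Ds[i] * F[i] - Ds[j] * F[j])
                n = np.linalg.norm(grad)
                M[i, j] = 0.0 if n == 0 else W[i, j] * n ** (p - 2)
        alpha = 1.0 / (M.sum(axis=1) / D + 2 * mu / p)      # Eq. (13)
        beta = (2 * mu / p) * alpha
        # Eq. (16): F <- alpha D^{-1/2} M D^{-1/2} F + beta F0
        F = alpha[:, None] * ((Ds[:, None] * M * Ds[None, :]) @ F) \
            + beta[:, None] * F0
    return softmax(F @ Theta2)                # Eq. (17)

# toy run: 4 nodes, 3 input features, 8 hidden units, 2 classes
W = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])
X = rng.standard_normal((4, 3))
Z = pgnn_forward(W, X, rng.standard_normal((3, 8)), rng.standard_normal((8, 2)))
assert Z.shape == (4, 2) and np.allclose(Z.sum(axis=1), 1.0)
```

Only Θ^{(1)} and Θ^{(2)} are trainable; M, α, β are recomputed from the current embeddings in every propagation step.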
pGNNs can thus work better under weak supervision. Our analysis is also verified by the experimental results in Sec. 5. We also provide an upper bound on the risk of pGNNs in Thm. 4 in Appendix C.1 to study the effect of the hyperparameter µ on the performance of pGNNs. Thm. 4 shows that the risk of pGNNs is upper-bounded by the sum of three terms: the risk of label prediction using only the original node features X, the norm of the p-Laplacian diffusion on X, and the magnitude of the noise in X. µ controls the trade-off between these three terms: the smaller µ, the larger the weight on the p-Laplacian diffusion term and the noise term and the smaller the weight on the other term, and vice versa.

4 SPECTRAL VIEWS OF p-LAPLACIAN MESSAGE PASSING

In this section, we theoretically demonstrate that p-Laplacian message passing approximates a polynomial graph filter defined on the spectral domain of the p-Laplacian. We show by spectral analysis that p-Laplacian message passing works simultaneously as a low-pass and a high-pass filter.

p-Eigenvalues and p-Eigenvectors of the Graph p-Laplacian. We first introduce the definitions of p-eigenvalues and p-eigenvectors of the p-Laplacian. Let ϕ_p : R → R be defined as ϕ_p(u) = ‖u‖^{p−2} u for u ∈ R, u ≠ 0. Note that ϕ_2(u) = u. For notational simplicity, we write ϕ_p(u) = (ϕ_p(u_1), ..., ϕ_p(u_N))^⊤ for u ∈ R^N and Φ_p(U) = (ϕ_p(U_{:,1}), ..., ϕ_p(U_{:,N})) for U ∈ R^{N×N}, where U_{:,i} ∈ R^N is the i-th column vector of U.

Definition 4 (p-Eigenvector and p-Eigenvalue). A vector u ∈ R^N is a p-eigenvector of ∆_p if it satisfies (∆_p u)_i = λ ϕ_p(u_i) for all i ∈ [N], where λ ∈ R is a real value referred to as a p-eigenvalue of ∆_p associated with the p-eigenvector u.

Definition 5 (p-Orthogonal (Luo, Huang, Ding, and Nie, 2010)). Given two vectors u, v ∈ R^N with u, v ≠ 0, we say that u and v are p-orthogonal if ϕ_p(u)^⊤ ϕ_p(v) = Σ_{i=1}^N ϕ_p(u_i) ϕ_p(v_i) = 0. Luo et al.
(2010) demonstrated that the p-eigenvectors of ∆_p are p-orthogonal to each other (see Thm. 5 in Appendix C.2 for details). Therefore, the space spanned by multiple p-eigenvectors of ∆_p is p-orthogonal. Additionally, we show that the p-eigen-decomposition of ∆_p is given by ∆_p = Φ_p(U) Λ Φ_p(U)^⊤ (see Thm. 6 in Appendix C.3 for details), where U is a matrix of p-eigenvectors of ∆_p and Λ is a diagonal matrix whose diagonal contains the p-eigenvalues of ∆_p.

Graph Convolutions based on the p-Laplacian. Based on Thm. 5, the graph Fourier transform f̂ of any function f on the vertices of G can be defined as the expansion of f in terms of Φ_p(U), where U is the matrix of p-eigenvectors of ∆_p: f̂ = Φ_p(U)^⊤ f. Similarly, the inverse graph Fourier transform is given by f = Φ_p(U) f̂. Therefore, a signal X ∈ R^{N×c} filtered by a spectral filter g_θ can be expressed as g_θ ⋆ X = Φ_p(U) ĝ_θ(Λ) Φ_p(U)^⊤ X, where Λ denotes a diagonal matrix whose diagonal contains the p-eigenvalues {λ_l}_{l=0,...,N−1} of ∆_p and ĝ_θ(Λ) denotes a diagonal matrix whose diagonal contains the spectral filter coefficients. Let ĝ_θ be a polynomial filter defined as ĝ_θ = Σ_{k=0}^{K−1} θ_k λ_l^k, where θ = [θ_0, ..., θ_{K−1}]^⊤ ∈ R^K is a vector of polynomial coefficients. By the p-eigen-decomposition of the p-Laplacian, we have

g_θ ⋆ X ≈ Σ_{k=0}^{K−1} θ_k Φ_p(U) Λ^k Φ_p(U)^⊤ X = Σ_{k=0}^{K−1} θ_k ∆_p^k X.   (18)

Theorem 3. The K-step p-Laplacian message passing is a K-order polynomial approximation to the graph filter given by Eq. (18). Proof see Appendix D.3.

Thm. 3 indicates that the p-Laplacian message passing mechanism is implicitly a polynomial spectral filter defined on the spectral domain of the p-Laplacian.

Spectral Analysis of p-Laplacian Message Passing. Here, we analyze the spectral properties of p-Laplacian message passing.
We can approximately view p-Laplacian message passing as a linear combination of K spectral filters g(Λ)^{(0)}, g(Λ)^{(1)}, ..., g(Λ)^{(K−1)}, with each spectral filter defined as g(Λ)^{(k)} := (α D^{−1/2} M D^{−1/2})^k, where M_{i,j} = W_{i,j} ‖√(W_{i,j}/D_{i,i}) F_{i,:} − √(W_{i,j}/D_{j,j}) F_{j,:}‖^{p−2} for i, j ∈ [N] and F is the matrix of node embeddings. We can therefore study the properties of p-Laplacian message passing through the spectral properties of α D^{−1/2} M D^{−1/2}, as given below.

Proposition 1. Given a connected graph G = (V, E, W) with node embeddings F and the p-Laplacian ∆_p with its p-eigenvectors {u^{(l)}}_{l=0,1,...,N−1} and p-eigenvalues {λ_l}_{l=0,1,...,N−1}, let g_p(λ_{i−1}) := α_{i,i} Σ_j D^{−1/2}_{i,i} M_{i,j} D^{−1/2}_{j,j} for i ∈ [N] be the filters defined on the spectral domain of ∆_p, where M_{i,j} = W_{i,j} ‖∇f([i,j])‖^{p−2}, (∇f)([i,j]) is the graph gradient of the edge between nodes i and j, and ‖∇f(i)‖ is the norm of the graph gradient at i. Let N_i denote the number of edges connected to i, N_min = min{N_j}_{j∈[N]}, and k = argmax_j({|u^{(l)}_j|/√D_{l,l}}_{j∈[N]; l=0,...,N−1}). Then:

1. When p = 2, g_p(λ_{i−1}) works as both a low-pass and a high-pass filter.
2. When p > 2, if ‖∇f(i)‖ ≤ 2(p−1)/(p−2), g_p(λ_{i−1}) works as both a low-pass and a high-pass filter on node i, and as a low-pass filter on i when ‖∇f(i)‖ ≥ 2(p−1)/(p−2).
3. When 1 ≤ p < 2, if 0 ≤ ‖∇f(i)‖ ≤ 2(2√N_k)^{1/(p−2)}, g_p(λ_{i−1}) works as a low-pass filter on node i, and as both a low-pass and a high-pass filter on i when ‖∇f(i)‖ ≥ 2(2√N_k)^{1/(p−2)}. Specifically, when p = 1, N_k can be replaced by N_min.

Proof see Appendix D.7. Proposition 1 shows that when p ≠ 2, p-Laplacian message passing adaptively works as a low-pass filter or as both a low-pass and a high-pass filter on node i depending on the degree of local node embedding variation around i, i.e.,
the norm of the graph gradient ‖∇f(i)‖ at node i. When p = 2, p-Laplacian message passing works as both a low-pass and a high-pass filter on node i regardless of the value of ‖∇f(i)‖. When p > 2, p-Laplacian message passing works as a low-pass filter on node i for large ‖∇f(i)‖ and as both a low-pass and a high-pass filter for small ‖∇f(i)‖; therefore, pGNNs with p > 2 can work very effectively on graphs with strong homophily. When 1 ≤ p < 2, p-Laplacian message passing works as a low-pass filter for small ‖∇f(i)‖ and as both a low-pass and a high-pass filter for large ‖∇f(i)‖; thus, pGNNs with 1 ≤ p < 2 can work effectively on graphs with low homophily, i.e., heterophilic graphs. These results confirm our analysis of the aggregation weights of p-Laplacian message passing presented in Thm. 2.
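For p = 2, where Φ_2(U) = U and ∆_2 is symmetric, the filter identity behind Eq. (18) holds exactly and is easy to verify numerically. A sketch with an arbitrary small graph and arbitrary coefficients θ_k:

```python
import numpy as np

# p = 2: the spectral filter and its spatial polynomial form in Eq. (18)
# coincide exactly, since Phi_2(U) = U for the symmetric Laplacian.
W = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
D = W.sum(axis=1)
S = W / np.sqrt(np.outer(D, D))             # D^{-1/2} W D^{-1/2}
Delta2 = np.eye(3) - S                      # ordinary Laplacian Delta_2

theta = np.array([0.5, -0.3, 0.1])          # polynomial coefficients theta_k
X = np.array([[1.0], [2.0], [-1.0]])

lam, U = np.linalg.eigh(Delta2)             # eigen-decomposition of Delta_2
g_hat = sum(t * lam ** k for k, t in enumerate(theta))
spectral = U @ np.diag(g_hat) @ U.T @ X     # U ghat(Lambda) U^T X

spatial = sum(t * np.linalg.matrix_power(Delta2, k) @ X
              for k, t in enumerate(theta)) # sum_k theta_k Delta_2^k X
assert np.allclose(spectral, spatial)
```

For p ≠ 2 the decomposition is only the approximation stated in Eq. (18), since Φ_p is nonlinear.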
The paper derives the p-Laplacian message passing formula from a p-Laplacian based regularization framework and further proposes the pGNN architecture. The authors also characterize the relation of p-Laplacian message passing to low- and high-pass filters and upper-bound the one-layer risk of pGNNs. Experiments show the superiority of pGNNs in both heterophilic and homophilic settings. However, we still have some concerns about the paper before further evaluation.
Learning Distributionally Robust Models at Scale via Composite Optimization
1 INTRODUCTION

Conventional machine learning aims at learning a model under the assumption that training data and test data come from the same distribution. However, this assumption may not hold in various practical learning problems where there is label shift (Zhang et al., 2020a), distribution shift (Sagawa et al., 2019), fairness constraints (Hashimoto et al., 2018), or adversarial examples (Sinha et al., 2017), to name a few. Distributionally robust optimization (DRO), which has recently attracted remarkable attention from the machine learning community, is a common approach to deal with the aforementioned uncertainties (Duchi & Namkoong, 2016; Chen et al., 2017; Rahimian & Mehrotra, 2019). Defining the empirical distribution of the training data of size m by P̂_m := (1/m) Σ_{i=1}^m δ_{ξ̂_i}, where δ is the Dirac delta function, the goal of the DRO formulation is to solve the following optimization problem:

inf_x [ Ψ(x) := sup_{Q ∈ U_m} E_Q[ℓ(x; ξ)] ],   (1)

where ξ is a data sample randomly drawn from distribution Q, ℓ(x; ξ) is the corresponding loss function, and E_Q[ℓ(x; ξ)] is the expected loss over the distribution Q, which belongs to the uncertainty set U_m. The uncertainty set U_m := {Q : d(Q, P̂_m) ≤ ε} is the ball of distributions with center P̂_m and radius ε, where d(P, Q) is a distance measure between probability distributions P and Q. We note that this uncertainty set captures distribution shift, hence Eq. (1) minimizes the loss under the worst-case data distribution. Prior studies (Ben-Tal et al., 2013; Bertsimas et al., 2018; Blanchet et al., 2019; Esfahani & Kuhn, 2018) considered different uncertainty sets (see Definition 3.1 in Esfahani & Kuhn (2018)) for which they proposed equivalent reformulations of Eq. (1) based on the specific choice of U_m.
To solve the above min-max optimization problems, the majority of prior studies rely heavily on either semidefinite programming (Esfahani & Kuhn, 2018) or stochastic primal-dual methods, both for convex (Nemirovski et al., 2009; Juditsky et al., 2011; Yan et al., 2019; 2020; Namkoong & Duchi, 2016) and non-convex (deep learning) objectives (Yan et al., 2020). While primal-dual methods can be used to solve min-max optimization problems, they suffer from a few downsides. First and foremost, they need to store a probability distribution of constraint violations of dimension m corresponding to the dual variables. Additionally, available primal-dual methods often demand data sampling according to the probability distribution over the m data samples, which introduces additional cost over uniform sampling. Finally, while the majority of prior studies are limited to DRO problems with convex objectives, tight convergence rates for penalized DRO problems with non-convex objectives are still lacking. To overcome these issues, in this paper we consider three different reformulations of Eq. (1), corresponding to three different choices of the uncertainty set U_m, namely (1) DRO with Wasserstein metrics, (2) DRO with χ² divergence metrics, and (3) DRO with regularized entropy metrics (also known as KL), and show in Section 2 that all of these DRO notions are in fact instances of a deterministic composite optimization problem and can be solved by reducing them to an instance of the following problem:

min_x [ Ψ(x) := r(x) + (1/m) Σ_{i=1}^m h_i(x) + f( (1/m) Σ_{i=1}^m g_i(x) ) ],   (2)

where we suppose r(x) is a convex and relatively simple function, f : R^p → R and h_i : R^d → R for 1 ≤ i ≤ m are scalar-valued functions, and g_i : R^d → R^p for 1 ≤ i ≤ m are vector-valued functions.
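The reduction target, Eq. (2), can be written down directly as code. In this sketch the concrete r, h_i, g_i, and f are toy placeholders of ours, only meant to show the interface that the later DRO instantiations plug into:

```python
import numpy as np

def composite_objective(x, r, h_funcs, g_funcs, f):
    """Psi(x) = r(x) + (1/m) sum_i h_i(x) + f((1/m) sum_i g_i(x)), Eq. (2)."""
    m = len(h_funcs)
    h_avg = sum(h(x) for h in h_funcs) / m
    g_avg = sum(g(x) for g in g_funcs) / m    # g_i : R^d -> R^p (vector-valued)
    return r(x) + h_avg + f(g_avg)

# toy instance (not from the paper): d = 2, p = 2, m = 2
x = np.array([0.5, -0.5])
r = lambda z: 0.1 * z @ z                         # simple convex regularizer
hs = [lambda z: z[0] ** 2, lambda z: z[1] ** 2]   # scalar-valued h_i
gs = [lambda z: z, lambda z: 2 * z]               # vector-valued g_i
f = lambda u: np.log(1.0 + u @ u)                 # outer nonlinearity
val = composite_objective(x, r, hs, gs, f)
# g_avg = 1.5 x, so f(g_avg) = log(1 + 2.25 * x @ x) = log(2.125)
assert np.isclose(val, 0.05 + 0.25 + np.log(2.125))
```

Each DRO variant in Section 2 corresponds to one concrete choice of (r, h_i, g_i, f) in this template.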
On the road to solving problem (2) at scale, we also develop a novel algorithm for heavily constrained optimization problems (Wang & Bertsekas, 2015; 2016; Narasimhan et al., 2020b) that, rather surprisingly, invokes a single projection through the course of optimization. This algorithm is of independent interest and addresses the scalability issues raised in applications such as fairness (Donini et al., 2018; Zafar et al., 2019). We summarize the main contributions of our paper below:

• We provide a large-scale analysis of DRO with Wasserstein distance and its heavily constrained reformulation when the objective function is strongly convex. Our result relies on a novel mini-batch constraint sampling scheme for handling heavily constrained optimization problems. As summarized in Table 1, our convergence analysis improves the state of the art both in terms of the dependence on the convergence error and in terms of the number of constraints m.
• We present a large-scale analysis of DRO with non-convex objectives and χ² or KL divergences.
• We verify our theoretical results through extensive experiments on different datasets. In particular, we show that our proposed method outperforms recent DRO methods for heavily constrained problems with a great reduction in time complexity.

The proofs of all theorems are provided in the appendix.

1.1 RELATED WORK

DRO and connections to heavily constrained optimization. As mentioned earlier, DRO has many different formulations, depending on the divergence metric used (e.g., Wasserstein, χ², or KL). While Namkoong & Duchi (2016); Shapiro (2017); Duchi & Namkoong (2021) consider constrained or penalized DRO formulations, Sinha et al. (2017); Levy et al. (2020) formulate the underlying optimization problem as unconstrained.
One of the contributions of our paper is to provide a unifying framework through the language of composite optimization and treat all these variants similarly. In particular, when the objective function is convex, Levy et al. (2020) recently proposed scalable algorithms for different variants of the DRO problem with, e.g., χ² or KL divergence metrics. Our unifying approach readily extends those results to the more challenging non-convex setting, for which we are unaware of any prior work with convergence guarantees (for instance, Hashimoto et al. (2018) studied DRO with χ²-divergence but did not provide any convergence guarantee). Similarly, Kuhn et al. (2019); Esfahani & Kuhn (2018) formulated DRO with Wasserstein distance as an instance of constrained optimization. Notably, they need to impose one constraint per training data point, and to solve such a constrained problem they proposed a semi-definite program. Even though the formulation is novel, it cannot scale. We, in contrast, treat such a heavily constrained optimization problem as an instance of composite optimization, for which we provide a scalable solution. What is rather surprising about our method is that it only checks a batch of constraints per iteration, inspired by Cotter et al. (2016), and performs a single projection at the final stage of the algorithm in order to provide an ε-optimal solution in the case of strongly convex objectives. Moreover, in contrast to Cotter et al. (2016), we do not keep a probability distribution over the set of constraints. We should also remark that our convergence guarantees achieve the known lower bounds in terms of the accuracy ε and the number of constraints m. Finally, we should highlight the difference between our algorithm and Frank-Wolfe (FW) (Frank et al., 1956; Jaggi, 2013; Zhang et al., 2020b).
While FW does not require a projection oracle, it solves a linear program over the set of constraints at each iteration. In contrast, our heavily constrained optimization solution performs a single projection without the overhead of running a linear program at each iteration.

Stochastic composite optimization. The general stochastic composite optimization problem min_x [Ψ(x) := r(x) + f(E_ξ[g_ξ(x)])] has recently received a lot of attention (Qi et al., 2020b;a; Wang et al., 2017). Our reformulation of the DRO variants is a finite-sum instance of this general problem. More concretely, Lian et al. (2017); Huo et al. (2018); Zhang & Xiao (2019a) aimed to solve the finite-sum problem min_x [Ψ(x) := r(x) + (1/n) Σ_{j=1}^n f_j((1/m) Σ_{i=1}^m g_i(x))] using SVRG or SAGA (Defazio et al., 2014). In contrast, our proposed algorithm is inspired by Zhang & Xiao (2019a) and generalizes their method to the case where the extra terms h_i(x) in Eq. (2) are non-zero. We should also note that Qi et al. (2020a) proposed a similar idea in the context of online learning for DRO problems with KL divergence. Our work, in contrast, provides guarantees for DRO with both constraints and penalty terms.

2 DRO VIA FINITE-SUM COMPOSITE OPTIMIZATION

In this section, we discuss in detail how the finite-sum composite optimization problem (2) can unify various notions of distributionally robust learning, some of which rely on heavily constrained optimization subroutines. While much research effort has been devoted to developing a specialized algorithm for each notion, our reduction paves the way to a scalable algorithm, discussed in Section 3.

DRO with Wasserstein distance. Kuhn et al. (2019); Esfahani & Kuhn (2018) provide an equivalent and tractable reformulation of Eq.
(1), which can be regarded as a heavily constrained optimization problem of the form

min_x r(x) := (1/m) Σ_{i=1}^m f_i(x) subject to g̃_i(x) ≤ 0, ∀i ∈ [m],   (3)

where the g̃_i(x) are functions related to the loss function as well as slack variables (please see Appendix A for more details). Naively solving the optimization problem (3) suffers from high computational complexity due to the large number of constraints m. To solve problem (3) efficiently, inspired by Mahdavi et al. (2012) and Cotter et al. (2016), we pursue a smoothed constrained reduction approach and introduce the augmented optimization problem (see Appendix B)

min_x Ψ(x) := r(x) + γ ln(g(x)), where g_i(x) := exp(α g̃_i(x)/γ) and g(x) = (1/(m+1)) [1 + Σ_{i=1}^m g_i(x)].

This optimization problem is a special case of Eq. (2) with r(x) = (1/m) Σ_{i=1}^m f_i(x), f((1/m) Σ_{i=1}^m g_i(x)) = γ ln g(x), and h(x) = 0. In contrast to Cotter et al. (2016), which requires extra storage for a probability distribution of dimension m and has a relatively poor convergence rate in terms of m and the accuracy ε, we propose an algorithm that simply checks a batch of constraints and achieves the optimal dependency on m and ε.

DRO with χ²-divergence. The second type of DRO problem we consider uses the χ²-divergence metric:

min_x max_{0 ≤ p_i ≤ 1, Σ_{i=1}^m p_i = 1} Σ_{i=1}^m p_i f_i(x) − γ D_{χ²}(p),   (4)

where the χ² divergence is defined as the distance between the uniform distribution and an arbitrary probability distribution p, i.e., D_{χ²}(p) := (m/2) Σ_{i=1}^m (p_i − 1/m)². Levy et al. (2020) studied this problem only for the case of convex objectives. In this paper, we allow the objective functions f_i for 1 ≤ i ≤ m to be either non-convex or strongly convex. The following claim derives the equivalent finite-sum composite optimization problem. Claim 2.1.
The optimization problem (4) is equivalent to the following problem:

min_x Ψ(x) := (1/m) Σ_{i=1}^m f_i(x) + (1/(2γm)) Σ_{i=1}^m [f_i(x)]² − (1/(2γ)) [ (1/m) Σ_{i=1}^m f_i(x) ]².   (5)

We note that the optimization problem (5) fits the finite-sum composite optimization formulation (2) by choosing r(x) = 0, h_i(x) = f_i(x) + f_i²(x)/(2γ), g_i(x) = f_i(x), and f(u) = −u²/(2γ).

DRO with KL divergence. Finally, for DRO with KL divergence, usually considered in online settings (Qi et al., 2020a), we consider solving the following optimization problem:

min_x max_{0 ≤ p_i ≤ 1, Σ_{i=1}^m p_i = 1} [ Σ_{i=1}^m p_i f_i(x) + γ H(p_1, ..., p_m) ],   (6)

where H(p_1, ..., p_m) = −Σ_{i=1}^m p_i log p_i is the entropy function. To solve the min-max problem (6), it is straightforward to convert it to an equivalent composite optimization problem of the form

min_x [ Ψ(x) := ln( (1/m) Σ_{i=1}^m exp(f_i(x)/γ) ) ].   (7)

As can be seen, the optimization problem (7) fits the composite optimization formulation (2) by choosing r(x) = h(x) = 0 and f(g(x)) = ln((1/m) Σ_{i=1}^m exp(f_i(x)/γ)).
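The KL case can be sanity-checked numerically: the inner maximization in Eq. (6) is solved by the softmax weights p_i ∝ exp(f_i/γ), which yields the log-sum-exp objective behind Eq. (7). A sketch with arbitrary loss values:

```python
import numpy as np

rng = np.random.default_rng(1)
f = np.array([0.3, -1.2, 2.0, 0.5])   # per-sample losses f_i(x), arbitrary
gamma, m = 0.7, f.size

def inner(p):
    """Inner objective of Eq. (6): sum_i p_i f_i + gamma * H(p)."""
    mask = p > 0
    return p @ f - gamma * np.sum(p[mask] * np.log(p[mask]))

p_star = np.exp(f / gamma)            # maximizer: p_i proportional to e^{f_i/gamma}
p_star /= p_star.sum()
best = inner(p_star)                  # = gamma * log(sum_i e^{f_i/gamma})
assert np.isclose(best, gamma * np.log(np.exp(f / gamma).sum()))

# no random point of the simplex does better (the inner problem is concave)
for _ in range(1000):
    p = rng.dirichlet(np.ones(m))
    assert inner(p) <= best + 1e-9
```

Since γ log Σ_i e^{f_i/γ} differs from the expression in Eq. (7) only by the monotone map u ↦ γu and the constant γ ln m, both forms share the same minimizer in x.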
The paper views the different variants of DRO as instances of a finite-sum composite optimization problem, from which efficient optimization algorithms are derived. Convergence analyses are established for strongly-convex and non-convex settings. The effectiveness of the proposed algorithm is well demonstrated in experiments.
Learning Distributionally Robust Models at Scale via Composite Optimization
1 INTRODUCTION . Conventional machine learning problem aims at learning a model based on the assumption that training data and test data come from same data distribution . However , this assumption may not hold in various practical learning problems where there is label shift ( Zhang et al. , 2020a ) , distribution shift ( Sagawa et al. , 2019 ) , fairness constraints ( Hashimoto et al. , 2018 ) , and adversarial examples ( Sinha et al. , 2017 ) , to name a few . Distributionally robust optimization ( DRO ) , which has recently attracted remarkable attention from the machine learning community , is a common approach to deal with the aforementioned uncertainties ( Duchi & Namkoong , 2016 ; Chen et al. , 2017 ; Rahimian & Mehrotra , 2019 ) . Defining the empirical distribution of the training data of size m by P̂m , 1m ∑m i=1 δξ̂i where δ is the Dirac delta function , the goal of DRO formulation is to solve the following optimization problem inf x [ Ψ ( x ) , sup ξ∈Q EQ [ ` ( x ; ξ ) ] ] , ( 1 ) where ξ is a data sample randomly drawn from distribution Q , ` ( x ; ξ ) is the corresponding loss function and EQ [ ` ( x , ξ ) ] is the expected loss over distributionQ which belongs to uncertainty set Um . The uncertainty set Um is defined as Um , { Q : d ( Q , P̂m ) ≤ } indicates the ball of a distribution with center P̂m and also d ( P , Q ) is a distance measure between probability distribution P and Q . We note this uncertainty set captures the distribution shift hence Eq . ( 1 ) minimizes the worse data distribution . Prior studies ( Ben-Tal et al. , 2013 ; Bertsimas et al. , 2018 ; Blanchet et al. , 2019 ; Esfahani & Kuhn , 2018 ) considered different uncertainty sets ( see Definition 3.1 in Esfahani & Kuhn ( 2018 ) ) for which they proposed equivalent reformulations of Eq . ( 1 ) based on the specific choice of Um . 
To solve the above min-max optimization problems , majority of prior studies heavily rely on either semidefinite programming ( Esfahani & Kuhn , 2018 ) or stochastic primal-dual methods both for convex ( Nemirovski et al. , 2009 ; Juditsky et al. , 2011 ; Yan et al. , 2019 ; 2020 ; Namkoong & Duchi , 2016 ) and non-convex ( deep learning ) objectives ( Yan et al. , 2020 ) . While primal-dual methods can be used as an approach to solve min-max optimization problems , it suffers from a few downsides . First and foremost , they need to store a probability distribution of constrained violation of dimension m corresponding to dual variables . Additionally , available primal-dual methods often demand data sampling that corresponds to the probability distribution over m data samples which introduces additional cost over uniform sampling . Finally , while majority of prior studies are limited to DRO problems with convex objectives , establishing tight convergence rate for DRO problems with penalty with non-convex objectives is still lacking . To overcome these issues , in this paper , we consider three different reformulations of Eq . ( 1 ) , corresponding to three different choices of uncertainty sets Um namely , ( 1 ) DRO with Wasserstein metrics , ( 2 ) DRO with χ2 divergence metrics , and ( 3 ) DRO with regularized entropy metrics ( also known as KL ) and show in Section 2 that all aforementioned DRO notions are indeed different instances of a deterministic composite optimization and can be solved by reducing to an instances of the following problem : min x [ Ψ ( x ) , r ( x ) + 1 m ∑m i=1 hi ( x ) + f ( 1 m ∑m i=1 gi ( x ) ) ] , ( 2 ) where we suppose r ( x ) is convex and a relatively simple function , f ( x ) : Rp → R and hi ( x ) : Rd → R for 1 ≤ i ≤ m are scalar-valued functions , and gi ( x ) : Rd → Rp for 1 ≤ i ≤ m are vector-valued functions . 
On the road to solve problem ( 2 ) at scale , we also develop a novel algorithm for heavily constrained optimization problems ( Wang & Bertsekas , 2015 ; 2016 ; Narasimhan et al. , 2020b ) that rather surprisingly invokes a single projection through the course of optimization . This algorithm is of independent interest and addresses the scalability issues raised in applications such as fairness ( Donini et al. , 2018 ; Zafar et al. , 2019 ) . We summarize the main contributions of our paper below : • We provide a large-scale analysis of DRO with Wasserstein distance and heavily constrained reformulation when the objective function is strongly-convex . Our result relies on a novel mini-batch constraint sampling for handling heavily-constrained optimization problems . As summarized in Table . 1 , our convergence analysis improves the state-of-the-art both in terms of the dependence on the convergence error as well as the number of constraints m. • We represent a large-scale analysis of DRO with non-convex objectives and χ2 or KL divergences . • We verify our theoretical results through various extensive experiments on different datasets . In particular , we show that our proposed method outperforms recent methods in DRO for heavily constrained problems with a great reduction in time complexity over them . The proofs of all the theorems are provided in the appendix . 1.1 RELATED WORK . DRO and connections to heavily constrained optimization . As mentioned earlier , DRO has many different formulations , depending on the divergence metrics used ( e.g. , Wasserstein , χ2 or KL ) . While Namkoong & Duchi ( 2016 ) ; Shapiro ( 2017 ) ; Duchi & Namkoong ( 2021 ) consider constrained or penalized DRO formulation , Sinha et al . ( 2017 ) ; Levy et al . ( 2020 ) formulate the underlying optimization problem as unconstrained . 
One of the contributions of our paper is to provide a unifying framework through the language of composite optimization and treat all these variants similarly . In particular , when the objective function is convex , Levy et al . ( 2020 ) recently proposed scalable algorithms for different variants of the DRO problems with , e.g. , χ2 or KL divergence metrics . Our unifying approach readily extends those results to the more challenging non-convex setting for which we are unaware of any prior work with convergence guarantees ( for instance , Hashimoto et al . ( 2018 ) studied DRO with χ2-divergence but did not provide any convergence guarantee ) . Similarly , Kuhn et al . ( 2019 ) ; Esfahani & Kuhn ( 2018 ) formulated DRO with Wasserstein distance as an instance of constrained optimization . Notably , they require ti impose one constraint per training data point and to solve such a constrained problem they proposed a semi-definite program . Even though the formulation is very novel , it can not scale . We , in contrast , consider such a heavily constrained optimization as an instance of a composite optimization for which we provide a scalable solution . What is rather surprising about our method is that it only checks a batch of constraints per iteration , inspired by Cotter et al . ( 2016 ) , and performs a single projection at the final stage of the algorithm in order to provide an -optimal solution in the case of strongly convex objectives . Moreover , in contrast to Cotter et al . ( 2016 ) , we do not keep a probability distribution over the set of constraints . We should also remark that our convergence guarantees achieve the known lower bounds in terms of accuracy and the number of constraints m. Finally , we should highlight the difference of our algorithm and Frank-Wolfe ( FW ) ( Frank et al. , 1956 ; Jaggi , 2013 ; Zhang et al. , 2020b ) . 
While FW does not require a projection oracle, it solves a linear program over the set of constraints at each iteration. In contrast, our heavily constrained optimization solution performs a single projection, without the overhead of running a linear program at each iteration.

Stochastic composite optimization. The general stochastic composite optimization problem min_x Ψ(x) := r(x) + f(E_ξ[g_ξ(x)]) has recently received a lot of attention (Qi et al., 2020b; a; Wang et al., 2017). Our reformulation of the DRO variants is a finite-sum instance of this general problem. More concretely, Lian et al. (2017); Huo et al. (2018); Zhang & Xiao (2019a) aimed to solve the finite-sum problem min_x Ψ(x) := r(x) + (1/n) Σ_{j=1}^n f_j((1/m) Σ_{i=1}^m g_i(x)) using SVRG or SAGA (Defazio et al., 2014). Our proposed algorithm is inspired by Zhang & Xiao (2019a) and generalizes their method to the case where the extra terms h_i(x) in Eq. (2) are non-zero. We should also note that Qi et al. (2020a) proposed a similar idea in the context of online learning for DRO problems with the KL divergence. Our work, in contrast, provides guarantees for DRO with both constraint and penalty terms.

2 DRO VIA FINITE-SUM COMPOSITE OPTIMIZATION

In this section, we discuss in detail how the finite-sum composite optimization problem (2) unifies various notions of distributionally robust learning, some of which rely on heavily constrained optimization subroutines. While much research effort has been devoted to developing a specialized algorithm for each notion, our reduction paves the way to a single scalable algorithm, discussed in Section 3.

DRO with Wasserstein distance. Kuhn et al. (2019); Esfahani & Kuhn (2018) provide an equivalent and tractable reformulation of Eq.
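To see why the inner expectation makes naive minibatching biased — the core difficulty that the compositional methods above address — here is a small self-contained numpy illustration. The quadratic outer function and the synthetic g-values are our own choices for exposition, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 1000
g_vals = rng.normal(1.0, 1.0, size=m)   # stand-ins for g_i(x) at a fixed x
f = lambda u: u ** 2                    # a simple convex outer function

# Exact compositional value f( (1/m) * sum_i g_i(x) ).
exact = f(g_vals.mean())

# Naive estimate: apply f to the mean of a small minibatch of g-values.
batch_means = [g_vals[rng.choice(m, 10)].mean() for _ in range(5000)]
naive = np.mean([f(u) for u in batch_means])
# For convex f, naive overshoots exact by roughly Var(batch mean) — a Jensen gap.
```

The gap between `naive` and `exact` is why compositional algorithms track a running estimate of the inner mean (1/m) Σ_i g_i(x) instead of plugging in a fresh minibatch at every step.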
(1), which can be regarded as a heavily constrained optimization problem:

min_x r(x) := (1/m) Σ_{i=1}^m f_i(x)  subject to  g̃_i(x) ≤ 0, ∀i ∈ [m],  (3)

where the g̃_i(x) are functions related to the loss function as well as to slack variables (please see Appendix A for more details). Naively solving optimization problem (3) is computationally prohibitive due to the large number of constraints m. To solve problem (3) efficiently, inspired by Mahdavi et al. (2012) and Cotter et al. (2016), we pursue a smoothed constrained reduction approach and introduce the augmented optimization problem (see Appendix B)

min_x Ψ(x) := r(x) + γ ln(g(x)),  where g_i(x) := exp(α g̃_i(x)/γ) and g(x) = (1/(m+1)) [1 + Σ_{i=1}^m g_i(x)].

We can see that this optimization problem is a special case of problem (2), with r(x) playing the same role, f(g(x)) = γ ln g(x), and h(x) = 0. In contrast to Cotter et al. (2016), which requires an extra storage cost for a probability distribution of dimension m and has a relatively poor convergence rate in terms of m and the accuracy ε, we propose an algorithm that simply checks a batch of constraints and achieves the optimal dependency on m and ε.

DRO with χ2-divergence. The second type of DRO problem we consider utilizes the χ2-divergence metric:

min_x max_{0 ≤ p_i ≤ 1, Σ_{i=1}^m p_i = 1} Σ_{i=1}^m p_i f_i(x) − γ D_χ2(p),  (4)

where the χ2 divergence is defined as the distance between the uniform distribution and an arbitrary probability distribution p, i.e., D_χ2(p) := (m/2) Σ_{i=1}^m (p_i − 1/m)^2. Levy et al. (2020) studied this problem only for the case of convex objectives. In this paper, we allow the objective functions f_i, 1 ≤ i ≤ m, to be either non-convex or strongly convex. The following claim derives the equivalent finite-sum composite optimization problem.

Claim 2.1.
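The smoothing above replaces the hard constraints with a soft-max-style penalty. A minimal numerical sketch of γ ln g(x): the function name and the constants α, γ below are our own choices, and the paper's exact scaling may differ:

```python
import numpy as np

def smoothed_penalty(g_tilde, alpha=10.0, gamma=0.1):
    """gamma * ln(g(x)) with g_i(x) = exp(alpha * g̃_i(x) / gamma) and
    g(x) = (1/(m+1)) * (1 + sum_i g_i(x)), via a numerically stable log-sum-exp."""
    z = np.concatenate(([0.0], alpha * np.asarray(g_tilde, dtype=float) / gamma))
    zmax = z.max()
    lse = zmax + np.log(np.exp(z - zmax).sum())   # ln(1 + sum_i exp(alpha*g̃_i/gamma))
    m = len(g_tilde)
    return gamma * (lse - np.log(m + 1.0))

# Strictly feasible point: all g̃_i <= 0, so the penalty is (slightly) negative.
feasible = smoothed_penalty([-1.0, -1.0, -1.0])
# One constraint violated by 0.5: penalty is close to alpha * violation = 5.
violated = smoothed_penalty([0.5, -1.0, -1.0])
```

For strongly violated constraints the penalty grows roughly like α times the largest violation, while at strictly feasible points it stays near zero, so minimizing r(x) + γ ln g(x) approximately enforces g̃_i(x) ≤ 0 without ever enumerating all m constraints exactly.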
The optimization problem (4) is equivalent to the following problem:

min_x Ψ(x) := (1/m) Σ_{i=1}^m f_i(x) − (1/(2γm)) Σ_{i=1}^m [f_i(x)]^2 + (1/(2γ)) [(1/m) Σ_{i=1}^m f_i(x)]^2.  (5)

We note that the optimization problem (5) fits the finite-sum composite formulation (2) by choosing r(x) = (1/m) Σ_{i=1}^m f_i(x), h(x) = (1/m) Σ_{i=1}^m −f_i(x)^2/(2γ), and f(g(x)) = (1/(2γ)) [(1/m) Σ_{i=1}^m f_i(x)]^2, i.e., h_i(x) = −f_i(x)^2/(2γ), g_i(x) = f_i(x), and f(u) = u^2/(2γ).

DRO with KL divergence. Finally, for DRO with the KL divergence, usually considered in online settings (Qi et al., 2020a), we consider solving the following optimization problem:

min_x max_{0 ≤ p_i ≤ 1, Σ_{i=1}^m p_i = 1} [Σ_{i=1}^m p_i f_i(x) + γ H(p_1, …, p_m)],  (6)

where H(p_1, …, p_m) = −Σ_{i=1}^m p_i log p_i is the entropy function. To solve the min-max problem (6), it is straightforward to convert it to the equivalent stochastic composite optimization problem

min_x Ψ(x) := ln((1/m) Σ_{i=1}^m exp(f_i(x)/γ)).  (7)

As can be seen, the optimization problem (7) fits the composite formulation (2) by choosing r(x) = h(x) = 0 and f(g(x)) = ln((1/m) Σ_{i=1}^m exp(f_i(x)/γ)).
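For the KL case, the link between (6) and (7) can be checked numerically: the inner maximizer over the simplex is the softmax distribution p_i ∝ exp(f_i/γ), and the attained value is γ·logsumexp(f/γ), which differs from the objective in (7) only by the monotone map v ↦ v/γ − ln m, so both share the same minimizers in x. A small sketch (the specific test values of f and γ are arbitrary):

```python
import numpy as np

def inner_max_kl(f, gamma):
    """Closed-form solution of max_p sum_i p_i f_i + gamma * H(p) over the simplex:
    the maximizer is p = softmax(f / gamma)."""
    z = np.asarray(f, dtype=float) / gamma
    p = np.exp(z - z.max())
    p /= p.sum()
    value = (p * f).sum() - gamma * (p * np.log(p)).sum()   # objective at p
    return p, value

f = np.array([0.3, 1.2, -0.5])
gamma = 0.7
p, value = inner_max_kl(f, gamma)

# Stable logsumexp(f / gamma); the attained inner-max value equals gamma * lse.
z = f / gamma
logsumexp = z.max() + np.log(np.exp(z - z.max()).sum())
```

Since γ·logsumexp(f/γ) = γ·[ln((1/m) Σ exp(f_i/γ)) + ln m], minimizing the inner-max value over x and minimizing Ψ in (7) pick out the same x.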
This paper targets distributionally robust optimization (DRO), which accounts for distribution shifts in the data. The authors show how different variants of DRO are simply instances of a finite-sum composite optimization problem, for which they provide scalable methods based on variance-reduction algorithms. They also provide empirical results demonstrating the effectiveness of the proposed algorithm relative to prior art for learning robust models from very large datasets.
IA-MARL: Imputation Assisted Multi-Agent Reinforcement Learning for Missing Training Data
1 INTRODUCTION

Reinforcement learning (RL) solves many challenging problems, including game playing (Mnih et al., 2015) and robot control (Levine et al., 2016), which focus on the single-agent RL setting modeled as a Markov decision process (Sutton and Barto, 2011). However, many real-world problems involve interaction among multiple agents, such as multi-robot control (Hüttenrauch et al., 2019) and multiplayer games (Silver et al., 2017; Bard et al., 2020). Hence, multi-agent reinforcement learning (MARL), which operates in the multi-agent domain, has been introduced and has become one of the most active and challenging RL research areas. In MARL, the decentralized approach has been used to train each agent based on its own trajectory (Tan, 1993). However, it often shows unstable training and low performance due to the non-stationary environment and partially observable information (Tan, 1993; Foerster et al., 2017) inherited from decentralization. Specifically, as the agents evolve their policies independently, the environment becomes non-stationary, which destabilizes training at each agent. In addition, an agent may not observe the information of other agents, which causes low performance in cooperative or competitive environments (Lowe et al., 2017). Recently, the centralized training with decentralized execution (CTDE) framework has been introduced for MARL (Oliehoek et al., 2008; Foerster et al., 2018). It alleviates the non-stationarity and partial observability problems (Lowe et al., 2017) and encourages coordination among agents (Foerster et al., 2018). During execution in CTDE, each agent takes an action based on its own observation, while training is performed at a centralized server after collecting the observations, actions, and rewards of all agents.
In existing works, the data from all agents are assumed to be available at the centralized server, which may not always hold in reality. The data from distributed agents can be unavailable for practical reasons including communication failures, hardware limits, and security attacks (Lakshminarayan et al., 1999; Twala, 2009). For instance, in wireless sensor applications of MARL such as vehicle tracking (Liang et al., 2020) and environmental monitoring (Li et al., 2020), sensors (i.e., agents) transmit their sensed information to a receiver (i.e., the centralized server) for training. The training data transmitted from sensors can be lost when the communication is unstable. In addition, even when the training data successfully arrives at the centralized server, certain data can be removed from the training dataset due to security attacks such as false data injection (Yan et al., 2016) and unauthorized data modification (Ferretti et al., 2014). As one can readily imagine, such missing training data can cause a serious problem in MARL, as training cannot be performed. One possible solution to this missing data problem is to use only the training samples that contain data from all agents. However, in this case, the amount of training data can decrease dramatically as the number of agents increases or as data are missed more often, which delays training. Another solution is to use imputation to replace the missing training data. However, the imputed data can differ from the original data, which potentially degrades performance. Therefore, as discussed above, the missing data problem should be carefully considered to bring MARL to the next level for a wider range of applications. Despite this, to the best of our knowledge, the missing data problem in MARL has not been addressed in existing works.
In this paper, we propose imputation assisted multi-agent reinforcement learning (IA-MARL) to address the missing training data problem, where the training data of each agent, consisting of its observation, action, and reward, can be randomly missing with a certain probability. The proposed IA-MARL consists of two steps: 1) imputation of the missing training data and 2) a mask-based update of the networks. Specifically, for the imputation of the missing training data, we use a generative adversarial imputation network (GAIN) to impute the data of all agents, from which we form the data for training the agents. We then perform the mask-based update, which trains the value function and the policy of each agent by selectively using the training data of the corresponding agent that is not missing over consecutive time steps. In the experimental results, we show that IA-MARL outperforms a decentralized approach and can approach the performance of MARL trained with complete data. We also evaluate IA-MARL for different missing probabilities, numbers of agents, and numbers of pre-training episodes for GAIN. From an ablation study, we verify the importance of both the mask-based update and the imputation accuracy in multi-agent environments where training data can be missing.

2 RELATED WORK

Independent Q-learning has been proposed as a decentralized approach in which each agent is trained using its own data independently (Tan, 1993). Independent Q-learning has been used in tabular environments (Littman, 1994), and deep learning-based approaches are presented in Tampuu et al. (2017); Gupta et al. (2017). Independent Q-learning, however, suffers from the non-stationary environment and partial observability problems. CTDE is one solution to these problems.
In CTDE, the data from all agents are used for centralized training, while execution at each agent requires only its own observation (Oliehoek et al., 2008). For instance, the centralized server trains the value function of each agent using the observations, actions, and rewards of all agents, while each agent takes an action based on its own observation (Lowe et al., 2017). Recently, MARL algorithms that adopt the CTDE framework have been presented. For instance, Lowe et al. (2017) propose the multi-agent deep deterministic policy gradient (MADDPG), which extends the deep deterministic policy gradient (DDPG) to the continuous control of multiple agents. For the credit assignment problem, the counterfactual baseline (Foerster et al., 2018) and value function factorization, which determines the contribution of each agent, are used (Sunehag et al., 2018; Rashid et al., 2018; Son et al., 2019). To improve the performance of MARL, a soft value function and multi-head attention are used in Iqbal and Sha (2019), and communication between agents that provides additional information to each agent is introduced in Foerster et al. (2016); Mordatch and Abbeel (2018), while a limited communication channel between agents during execution is considered in Kim et al. (2019) to address real-world communication constraints in MARL. However, none of the prior work considers the missing training data problem. Imputation is the research area concerned with replacing missing data, and it has been used in many applications including medical data analysis and image concealment (Rubin, 2004). For imputation, techniques such as multivariate imputation by chained equations (MICE) (Buuren and Groothuis-Oudshoorn, 2010), matrix completion (Mazumder et al., 2010), and MissForest (Stekhoven and Bühlmann, 2012) have been proposed. However, when imputation is applied to data with a large data space (e.g.
, the data obtained from agents in CTDE), techniques with insufficient expressive power might yield low performance. In such cases, imputation techniques with more expressiveness, obtained by adopting deep neural networks, can be more suitable, such as multiple imputation using denoising autoencoders (MIDA) (Gondara and Wang, 2018), bidirectional recurrent imputation for time series (BRITS) (Cao et al., 2018), and the generative adversarial imputation network (GAIN) (Yoon et al., 2018).

3 BACKGROUND

We consider a decentralized partially observable Markov decision process, defined by a tuple (S, A, P, r, Ω, O, γ, n), where S, A, and Ω are the sets of states, actions, and observations, respectively; r and γ are the reward and the discount factor, respectively; and n is the number of agents. We use s ∈ S, a ∈ A, and o ∈ Ω for a state, an action, and an observation, respectively. We use the subscript i for the corresponding agent and t for the time, e.g., o_{i,t} is the observation of agent i at time t. We use bold symbols to denote the observations, actions, and rewards of all agents, e.g., a_t = (a_{1,t}, · · ·, a_{n,t}). Here, P(s_{t+1}|s_t, a_t) and O(o_t|s_t) are the transition probability and the conditional observation probability, respectively.

3.1 DDPG AND MADDPG

The objective of an agent in the environment is to maximize the cumulative reward R_t = Σ_{t′=t}^T γ^{t′−t} r_{t′}. For this, we use the actor-critic method. The expected cumulative reward for a given state and action is Q(s_t, a_t) = E[R_t | s = s_t, a = a_t], which is called the action-value function or value function. Using the Bellman equation, the value function can be rewritten as Q(s_t, a_t) = E_{r_t, s_{t+1}, a_{t+1}}[r_t + γ Q(s_{t+1}, a_{t+1})].
When a parameter θ is used for value function approximation, the value function Q can be learned by minimizing the loss L(θ), given as

L(θ) = E[(Q_θ(s_t, a_t) − y)^2],  y = r_t + γ Q_θ(s_{t+1}, a_{t+1}).  (1)

DDPG is one of the widely used choices for the policy update. In DDPG, the policy parameterized by φ takes the state s as input and outputs a deterministic action a = μ_φ(s), and the gradient with respect to φ is given as

∇_φ J(φ) = E[∇_φ μ_φ(a_t|s_t) ∇_{a_t} Q_θ(s_t, a_t) |_{a_t = μ_φ(s_t)}].  (2)

MADDPG is an algorithm that uses DDPG in the CTDE framework (Lowe et al., 2017). The value function in MADDPG takes the observations and actions of all agents as input. Meanwhile, the policy of each agent takes only its own observation as input, since each agent can access only its own observation. In CTDE, the loss for the value function and the gradient for the policy are given as

L(θ_i) = E[(Q_{θ_i}(o_t, a_t) − y)^2],  y = r_{i,t} + γ Q_{θ_i}(o_{t+1}, a_{t+1}),  (3)

∇_{φ_i} J(φ_i) = E[∇_{φ_i} μ_{φ_i}(a_{i,t}|o_{i,t}) ∇_{a_{i,t}} Q_{θ_i}(o_t, a_t) |_{a_{i,t} = μ_{φ_i}(o_{i,t})}].  (4)

Here, each agent has its own value function and policy, parameterized by θ_i and φ_i, respectively.
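As a toy illustration, the TD loss in Eq. (3) can be combined with a mask-based selection of transitions — our reading of the mask-based update is that agent i's networks are updated only on steps where its own data is present at both t and t+1, with imputed values filling the other agents' entries of the joint critic input. All function names and array values below are our own:

```python
import numpy as np

def usable_steps(mask, agent):
    """Indices t where the agent's own data is present at both t and t+1."""
    present = mask[:, agent]
    return np.where(present[:-1] & present[1:])[0]

def critic_loss(q, q_next, r, gamma=0.95):
    """Eq. (3): mean squared TD error; y is treated as a fixed target."""
    y = r + gamma * q_next
    return np.mean((q - y) ** 2), y

# mask[t, i] = True iff agent i's (o, a, r) at time t reached the server.
mask = np.array([[1, 1], [0, 1], [1, 1], [1, 0]], dtype=bool)
idx = usable_steps(mask, 0)              # agent 0: only t = 2 qualifies

q      = np.array([1.0, 0.2, 1.5])       # Q(o_t, a_t) for t = 0, 1, 2
q_next = np.array([0.2, 1.5, 0.0])       # Q(o_{t+1}, a_{t+1})
r      = np.array([0.0, 0.0, 1.0])       # agent 0's rewards
loss, y = critic_loss(q[idx], q_next[idx], r[idx])
```

Note that agent 1 would use different steps (t = 0 and t = 1 in this example): each agent filters the shared, imputed buffer by its own mask.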
The submission proposes a cooperative MARL problem setting in which the observation-action-reward tuples generated during training are unavailable with some non-zero probability. The submission suggests addressing this setting by first imputing the missing training data and then applying what the authors call a mask-based update. Experiments are presented on multi-agent particle environments.
The authors present a novel and effective method leveraging a generative adversarial approach to a specific MARL problem with missing training data. The approach treats the missing data as targets of imputation, loosely similar to inpainting problems in computer vision, where missing pixels are "filled in". Results show that IA-MARL outperforms one baseline (MADDPG) that was not originally devised for missing training data.
Assisted Learning for Organizations with Limited Imbalanced Data
1 INTRODUCTION

Modern distributed learning frameworks such as federated learning (Shokri & Shmatikov, 2015; Konecny et al., 2016; McMahan et al., 2017) aim to improve learning performance for a large number of learners with limited data and computation/communication resources. These frameworks are well suited to cloud and IoT systems (Ray, 2016; Gomathi et al., 2018) that manage numerous smart devices through wireless communication. Over the past decade, many organizations, e.g., government agencies, hospitals, schools, and companies, have integrated machine learning models into their work pipelines to facilitate data analysis and decision making. For example, according to a recent survey (Financesonline, 2021), 49% of companies worldwide are exploring or planning to use machine learning, 51% of organizations claim to be early adopters of machine learning, and the estimated productivity improvement from the learning models is 40%. However, the performance of these machine learning models critically depends on the quality of the data, which typically comes from a limited population and is biased toward certain distributions. Unfortunately, the existing learning frameworks cannot help large organizations improve their learning performance, due to the following major restrictions.

• Unlike smart devices in conventional federated learning, organizational learners typically cooperate with a single external service provider under a rigorous contract. Moreover, the service provider is often presumed to have more data, of better quality, than the organization.

• Conventional distributed learners achieve their performance goal by frequently exchanging information with other learners. In comparison, each learning round is costly for organizational learners, as they need to pay the provider for assistance and exchange a large amount of information with the provider.
Hence, organizational learners desire a significant performance improvement within a limited number of assistance rounds. Therefore, there is an emerging need to develop a modern machine learning framework for organizational learners that can significantly improve model performance by purchasing limited assistance services from external providers without data sharing. This constitutes the goal of this work. In this work, we develop an assisted learning framework in the 'horizontal-splitting' setting, where the learner and the service provider possess different datasets that are utilized for training a common model. In our context, the learner's data is assumed to be limited and imbalanced, while the provider's data is assumed to be large and to complement the learner's data. Our learning framework suits the characteristics of organizational learners well: they have a very limited budget for purchasing external assistance services, yet they exchange a large amount of side information with the provider per assistance round to maximize the performance gain. This is the opposite of federated learning, where smart devices are equipped with only a limited communication budget but can learn endlessly through interacting with the cloud. We summarize our contributions as follows.

1.1 OUR CONTRIBUTIONS

We identify the need for developing an assisted learning framework to facilitate the deployment of general machine learning in large organizations. This learning framework addresses the unique challenges explained previously. We first develop an assisted deep learning framework for organizational learners with limited and imbalanced data, and propose a stochastic training algorithm named AssistSGD. Specifically, every assistance round of AssistSGD consists of two phases.
In the first phase, the learner performs local SGD training for multiple iterations and sends the generated trajectory of models, together with their corresponding local loss values, to the service provider. In the second phase, the provider utilizes the learner’s information to evaluate the global loss of the received models and uses the best model, the one with the smallest global loss, as an initialization. The provider then performs local SGD training for multiple iterations and sends the generated trajectory of models, together with their corresponding local loss values, to the learner. Finally, the learner utilizes the provider’s information to evaluate the global loss of the received models and outputs the best model with the smallest global loss. Under mild technical assumptions, we formally prove that AssistSGD with full-batch gradient updates is guaranteed to find a critical point of the global loss function in general nonconvex optimization. We further generalize the framework to enable assisted reinforcement learning and develop a policy gradient training algorithm named AssistPG, which follows the same training logic as AssistSGD. Through extensive experiments with deep learning and reinforcement learning, we demonstrate that the learner can achieve near-oracle performance with AssistSGD and AssistPG, as if all the data were centralized. In particular, as the imbalance of the learner’s data increases, AssistSGD helps the learner achieve a higher performance gain. Moreover, data are never exchanged in the assisted learning process by either participant. 1.2 RELATED WORK. Assisted learning. Earlier work on assisted learning (Xian et al., 2020) considers organizations that collect different features from the same cohort. This contrasts with our context, where organizations hold the same features but imbalanced data distributions or environments.
Also, our method applies to general deep learning and reinforcement learning tasks, which go beyond the previously studied regression task. Consequently, our algorithm designs and application scenarios are significantly different from the prior work. Distributed optimization. In conventional distributed optimization, the data is evenly distributed among workers, which collaboratively solve a large-scale problem by exchanging local information (gradients, models, etc.) via either decentralized networks (Xie et al., 2016; Lian et al., 2017; 2018) or centralized networks (Ho et al., 2013; Li et al., 2014; Richtarik & Takavc, 2016; Zhou et al., 2016; 2018). In comparison, our AssistSGD only requires a few transmission rounds between the learner and provider. This is particularly appealing for organizational learners, who can employ a sophisticated optimization process locally while restricting the rounds of assistance. Federated learning. Federated learning is an emerging distributed learning framework (Shokri & Shmatikov, 2015; Konecny et al., 2016; McMahan et al., 2017; Zhao et al., 2018; Li et al., 2020; Diao et al., 2021) that aims to learn a global model using the average of local models trained by numerous smart devices with heterogeneous data. The existing federated learning algorithms require frequent transmissions of local model parameters. This differs from our solution, which is designed for organizational learning scenarios where each learner is an organization that often has unconstrained communication and computation resources but is restricted in its interactions with external service providers. Our solution aims to help the learner improve learning performance within ten rounds, while federated learning needs many more rounds. 2 ASSISTED DEEP LEARNING. In this section, we introduce the assisted deep learning framework.
Throughout the paper, L denotes a learner who seeks assistance, and P denotes a service provider who provides assistance to L. 2.1 PROBLEM FORMULATION. We consider the case where the learner L aims to train a machine learning model θ that performs well on its own dataset D^(L) and generalizes well to unseen data. In general, L can train a machine learning model by solving the empirical risk minimization problem min_{θ∈Θ} f(θ; D^(L)), where f(·; D^(L)) is the loss on D^(L) and Θ is the parameter space. Standard statistical learning theories show that the obtained model can generalize well to unseen test samples under suitable constraints on model parsimoniousness (Ding et al., 2018). However, when the learner’s data D^(L) contains a limited number of samples that are highly imbalanced, the learned model will suffer from overfitting or deteriorated generalization to the unseen test data. To overcome this data deficiency, the learner L intends to connect with an external service provider P (e.g., a commercialized data company) who possesses data D^(P) that are sufficient for, or complementary to, the learner’s data D^(L). Ideally, the learner L would improve the model by solving the following data-augmented problem, where D^(L,P) := D^(L) ∪ D^(P) denotes the centralized data:

θ^(L,P) = argmin_{θ∈Θ} f(θ; D^(L,P)), where D^(L,P) = D^(L) ∪ D^(P). (1)

We note that f(θ; D^(L,P)) = f(θ; D^(L)) + f(θ; D^(P)). If D^(P) is generated from a distribution that is close to the underlying data distribution, then θ^(L,P) is expected to achieve significantly improved performance on unseen data. However, it is unrealistic to centralize the data, since the interactions between the learner L and the provider P are often restricted by various regulations. Some representative regulations that formally define the assisted learning framework are listed below.
Assisted Learning Protocols. 1. No data sharing: neither the learner L nor the provider P will share data with the other. 2. Limited assistance: the learner L has a limited budget for purchasing assistance service and desires to maximize the performance gain within only a few assistance rounds. 3. Unlimited communication bandwidth: in each assistance round, the learner and the provider can exchange unlimited information. For example, the learner (resp. provider) can send an employee (resp. technician) to deliver a large-capacity hard drive to the other. The above assisted learning framework differs from existing learning frameworks. For example, in federated learning, many devices collaboratively train a global model via a large number of learning rounds with limited communication bandwidth. In comparison, the organizational learner in assisted learning can query only a few rounds of assistance from the provider, but can exchange unlimited information with it. Hence, we need a training algorithm that can substantially improve the learner’s model quality using limited interactions with the provider. Next, we present an assisted stochastic gradient descent (AssistSGD) algorithm for this purpose. 2.2 ASSISTSGD FOR ASSISTED DEEP LEARNING. We propose AssistSGD in Algorithm 1 for assisted deep learning. The learning process consists of R rounds, each comprising the following interactions between the learner L and the provider P. (1) First, the learner L initiates a local learning process. It initializes a model θ^(L)_0 and applies SGD with learning rate η to update it for T iterations using the local dataset D^(L). Then, the learner evaluates the local loss f(·; D^(L)) on a subset T of the iterations t = 0, 1, ..., T − 1. Lastly, the learner sends this subset of models and their corresponding local losses to the provider P.
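The exchange in step (1) works because a sum-form empirical loss is additive across datasets, so the provider can reconstruct the global loss of each received model from the learner's scalar local losses plus its own. A minimal sketch, assuming a hypothetical squared-error loss summed over samples (function names and toy data are illustrative, not from the paper):

```python
import numpy as np

# For a loss defined as a *sum* over samples, the global loss decomposes as
#   f(theta; D_L ∪ D_P) = f(theta; D_L) + f(theta; D_P),
# so the provider never needs the learner's raw data, only its scalar losses.
def loss(theta, x, y):
    # Hypothetical per-sample squared error, summed (not averaged).
    return float(np.sum((x * theta - y) ** 2))

rng = np.random.default_rng(0)
x_L, y_L = rng.normal(size=5), rng.normal(size=5)   # learner's data D^(L)
x_P, y_P = rng.normal(size=8), rng.normal(size=8)   # provider's data D^(P)
x_U, y_U = np.concatenate([x_L, x_P]), np.concatenate([y_L, y_P])

theta = 0.7
global_loss = loss(theta, x_U, y_U)
assert np.isclose(global_loss, loss(theta, x_L, y_L) + loss(theta, x_P, y_P))
```

With a mean-form loss the same decomposition holds after reweighting by the dataset sizes, which is why exchanging (model, local loss) pairs suffices in either convention.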
Algorithm 1 AssistSGD
Input: initialization model θ_0, learning rate η, assistance rounds R, local iterations T.
for assistance round r = 1, ..., R do
  Learner L:
    ▸ Initialize θ^(L)_0 = θ_{r−1}.
    ▸ Run local SGD training to generate {θ^(L)_t}_{t=0}^{T−1}.
    ▸ Send {θ^(L)_t, f(θ^(L)_t; D^(L))}_{t∈T} to provider P.
  Provider P:
    ▸ Initialize θ^(P)_0 = argmin_{θ ∈ {θ^(L)_t}_{t∈T}} f(θ; D^(L,P)).
    ▸ Run local SGD training to generate {θ^(P)_t}_{t=0}^{T′−1}.
    ▸ Send {θ^(P)_t, f(θ^(P)_t; D^(P))}_{t∈T′} to learner L.
  Learner L:
    ▸ Output θ_r = argmin_{θ ∈ {θ^(P)_t}_{t∈T′}} f(θ; D^(L,P)).
end for
Output: the best model in {θ_r}_{r=1}^{R}.

(2) Upon receiving the information from the learner L, the provider P first evaluates the global loss f(·; D^(L,P)) of the received set of models {θ^(L)_t, t ∈ T} and picks the best one (denoted by θ^(P)_0) for initialization. Note that the global loss can be evaluated because the local losses {f(θ^(L)_t; D^(L)), t ∈ T} are provided by the learner L, so the provider P just needs to evaluate the local losses {f(θ^(L)_t; D^(P)), t ∈ T}. After that, the provider applies SGD with learning rate η to update the model for T′ iterations on the local dataset D^(P). Then, the provider evaluates the local loss f(·; D^(P)) on a subset T′ of the iterations t = 0, 1, ..., T′ − 1, and sends this subset of models and their corresponding local losses to the learner L. (3) Once the learner L receives the feedback from the provider P, it evaluates the global loss f(·; D^(L,P)) of the received set of models {θ^(P)_t, t ∈ T′} and picks the best model as the output model of this assistance round. Discussions. The above algorithm works for general deep learning tasks. It does not require data sharing between the learner and the provider.
Moreover, in each learning round, the learner and the provider exchange only a small number of their local training models. As we show later in the experimental studies, it suffices to sample the iterations in T, T′ at a low frequency. Such an assisted learning process is very different from, for example, the federated learning process. In particular, in each round of federated learning, all learners perform a small number of local SGD updates and send only their last output models to the cloud due to limited communication bandwidth. Consequently, the global model needs a large number of learning rounds to achieve a desirable performance.
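A minimal single-round sketch of this procedure on a least-squares problem, assuming full-batch gradients and a shared learning rate η; the helper names (`sgd_trajectory`, `assist_round`) and the toy data are illustrative, not from the paper:

```python
import numpy as np

def sgd_trajectory(theta, x, y, eta, T, sample_every):
    """Run T full-batch gradient steps; return the sub-sampled iterates
    (the set T, resp. T', of Algorithm 1)."""
    kept = []
    for t in range(T):
        if t % sample_every == 0:
            kept.append(theta.copy())
        theta = theta - eta * 2 * x.T @ (x @ theta - y)  # gradient of sum of squares
    return kept

def local_loss(theta, x, y):
    return float(np.sum((x @ theta - y) ** 2))

def assist_round(theta, dL, dP, eta=1e-2, T=50, sample_every=10):
    # Phase 1: learner trains locally and "sends" (models, local losses).
    traj_L = sgd_trajectory(theta, *dL, eta, T, sample_every)
    # Provider adds its own local loss to recover each global loss, and
    # warm-starts from the best received model.
    theta0_P = min(traj_L, key=lambda th: local_loss(th, *dL) + local_loss(th, *dP))
    # Phase 2: provider trains locally and sends its trajectory back.
    traj_P = sgd_trajectory(theta0_P, *dP, eta, T, sample_every)
    # Learner outputs the round's best model by global loss.
    return min(traj_P, key=lambda th: local_loss(th, *dL) + local_loss(th, *dP))

rng = np.random.default_rng(1)
true_theta = np.array([1.0, -2.0])
xL = rng.normal(size=(5, 2));  yL = xL @ true_theta   # small learner dataset
xP = rng.normal(size=(40, 2)); yP = xP @ true_theta   # large provider dataset
theta = np.zeros(2)
for _ in range(3):  # a few assistance rounds
    theta = assist_round(theta, (xL, yL), (xP, yP))
```

Only the sub-sampled iterates and scalar losses cross the learner–provider boundary here, mirroring the low-frequency sampling of T, T′ discussed above.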
This paper investigates a novel learning scenario where the learner has limited access to the global data distribution and can share learned model parameters with a so-called service provider through multiple (but limited) rounds of interaction. The motivation is interesting and seemingly useful for the scenarios described in the introduction. The authors propose an intuitive assisted learning framework applicable to both (deep) supervised learning and reinforcement learning, and experiments show that the proposed algorithms achieve comparable performance with learning from centralized data. However, there are a few concerns/questions about the problem setup and the proposed assisted learning protocol; please see the detailed review below.
Assisted Learning for Organizations with Limited Imbalanced Data
This paper studies a novel problem setup where a learner has unbalanced data, a service provider has complementary or sufficient data, and the learner needs to improve accuracy in as few rounds of communication as possible, where the communication in each round is unbounded. An algorithm, AssistSGD, is proposed and shown to converge to a stationary point. Experiments show that AssistSGD performs better than baselines and is close to centralized SGD on CIFAR-10 and on reinforcement learning tasks.
Surprise Minimizing Multi-Agent Learning with Energy-based Models
sites.google.com/view/surprise-web/ 1 INTRODUCTION. The rise of RL has led to an increasing interest in the study of multi-agent systems (Lowe et al., 2017; Vinyals et al., 2019), commonly known as Multi-Agent Reinforcement Learning (MARL). In partially observable settings, MARL enables the learning of policies with centralised training and decentralised control (Kraemer & Banerjee, 2016). This has proven useful for exploiting value-based methods that motivate collaboration across large numbers of agents. But how do agents behave in the presence of sudden environmental changes? Consider the problem of autonomous driving, wherein a driver (agent) autonomously operates a vehicle in real time. The driver learns to optimize the reward function by maintaining a constant speed and covering more distance in different traffic conditions. Whenever the vehicle approaches an obstacle, the driver acts to avoid it by using the brake and directional steering commands. However, due to the fast-paced dynamics of the environment, say fast-moving traffic, the agent may abruptly encounter an obstacle (a person running across the street), which may result in a collision. Irrespective of the optimal action (pushing the brakes) executed by the agent, the vehicle may fail to evade the collision as a result of the abrupt temporal change. The above arises as a consequence of surprise, which is defined as a statistical measure of uncertainty. Surprise minimization (Berseth et al., 2019) is a recent phenomenon observed in single-agent RL methods that deal with environments consisting of rapidly changing states. In model-based RL (Kaiser et al., 2019), surprise minimization is used as an effective planning tool in the agent’s model (Berseth et al., 2019), whereas in model-free RL, surprise minimization is treated as an intrinsic motivation (Achiam & Sastry, 2017; Macedo et al., 2004) or a generalization problem (Chen, 2020). On the other hand, MARL does not account for surprise across agents, as a result of which agents remain unaware of drastic changes in the environment (Macedo & Cardoso, 2005). Thus, surprise minimization in multi-agent settings requires attention from a critical standpoint. A potential pathway to treating surprising states may be realized in light of free-energy minimization. The free-energy principle depicts convergence to local niches and provides a general recipe for cognitive stability among agents. Through this lens, we unify surprise with free energy in the multi-agent setting. We construct a temporal EBM that represents an estimate of the surprise agents may face in the environment. All agents jointly minimize this estimate using temporal difference learning on their value functions and the EBM. Our formulation of free-energy minimization is theoretically akin to minimizing the entropy in conjugate gradient space. This insight provides a suitable convergence result towards the minimum-surprise states (or niches) of the agent state distributions. In an empirical study of multi-agent tasks that present significant collaboration bottlenecks and fast-paced dynamics, we validate our theoretical claims and motivate the practical usage of EBMs in MARL. 2 RELATED WORK. Surprise Minimization: Despite the recent success of value-based methods (Mnih et al., 2016; Hessel et al., 2017), RL agents suffer from spurious state spaces and encounter sudden changes in trajectories. Quantitatively, surprise has been studied as a measure of deviation (Berseth et al., 2019; Chen, 2020) among states encountered by the agent during its interaction with the environment. While exploring (Burda et al., 2019; Thrun, 1992) the environment, agents tend to have a higher deviation among states, which is gradually reduced by gaining a significant understanding of state-action transitions.
In model-based RL, agents can leverage spurious experiences (Berseth et al., 2019) and plan effectively for future steps. In model-free RL, on the other hand, surprise results in sample-inefficient learning (Achiam & Sastry, 2017). This is primarily addressed by using rigorous exploration strategies (Stadie et al., 2015; Lee et al., 2019). High-dimensional exploration further requires extrinsic feature engineering (Kulkarni et al., 2016) and meta models (Gupta et al., 2018). A suitable way to tackle high-dimensional dynamics is to use surprise as a penalty on the reward (Chen, 2020). This leads to improved generalization for single-agent interactions (Ren et al., 2005). Our proposed approach is orthogonal to the aforesaid methods. Energy-based Models: EBMs have been successfully implemented in single-agent RL methods (O’Donoghue et al., 2016; Haarnoja et al., 2017). These typically make use of Boltzmann distributions to approximate policies (Levine & Abbeel, 2014). Such a formulation results in the minimization of free energy within the agent. While policy approximation shows promise in the case of unknown dynamics, inference methods (Toussaint, 2009) play a key role in optimizing goal-oriented behavior. A second use of EBMs follows the maximization of entropy (Ziebart et al., 2008). The maximum entropy framework (Haarnoja et al., 2018b) highlighted in Soft Q-Learning (SQL) (Haarnoja et al., 2017) allows the agent to follow a policy that maximizes its reward and entropy concurrently. Maximization of the agent’s entropy results in diverse and adaptive behaviors (Ziebart, 2010) that may be difficult to accomplish using standard exploration techniques (Burda et al., 2019; Thrun, 1992). The maximum entropy framework is akin to approximate inference in the case of policy gradient methods (Schulman et al., 2017).
Such a connection between likelihood-ratio gradient techniques and energy-based formulations leads to diverse and robust policies (Haarnoja, 2018) and their hierarchical extensions (Haarnoja et al., 2018a), which preserve the lower levels of the hierarchies. In MARL, EBMs have seen limited applicability as a result of the increasing number of agents and the complexity within each agent (Buşoniu et al., 2010). While the probabilistic framework is readily transferable to opponent-aware multi-agent systems (Wen et al., 2019), cooperative settings involving coordination between agents require a formulation of energy that is scalable in the number of agents (Grau-Moya et al., 2018) and accounts for environments consisting of spurious states (Wei et al., 2018). Our theoretical formulation is motivated by these methods in the literature. 3 PRELIMINARIES. 3.1 MULTI-AGENT LEARNING. We review the cooperative MARL setup. The problem is modeled as a decentralized partially observable Markov decision process (Dec-POMDP) (Oliehoek & Amato, 2016) defined by the tuple (S, A, r, N, P, Z, O, γ), where the state space S and action space A are discrete; r : S × A → [r_min, r_max] is the reward observed by agents a ∈ N, with N the set of all agents; P : S × S × A → [0, ∞) is the unknown transition model giving the probability of transitioning to the next state s′ ∈ S from the current state s ∈ S under the joint action u ∈ A (a combination of each agent’s action u^a ∈ A^a) at time step t; and γ is the discount factor. We consider a partially observable setting in which each agent a draws individual observations z ∈ Z according to the observation function O(s, u) : S × A → Z. We consider a joint policy π_θ(u|s) as a function of model parameters θ. Standard RL defines the agent’s objective as maximizing the expected discounted reward E_{π_θ}[Σ_{t=0}^{T} γ^t r(s_t, u_t)] as a function of the parameters θ.
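As a small numerical check of the discounted objective above, the return Σ_t γ^t r_t can be computed either directly or by the backward recursion G_t = r_t + γ G_{t+1} that TD methods bootstrap on (the reward sequence here is hypothetical):

```python
gamma = 0.9
rewards = [1.0, 0.0, 2.0]  # hypothetical rewards r_0, r_1, r_2

# Direct sum: G = sum_t gamma^t * r_t
G_direct = sum(gamma**t * r for t, r in enumerate(rewards))

# Backward recursion G_t = r_t + gamma * G_{t+1}, the form TD targets exploit.
G = 0.0
for r in reversed(rewards):
    G = r + gamma * G

assert abs(G - G_direct) < 1e-12  # 1.0 + 0.9*0.0 + 0.81*2.0 = 2.62
```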
The joint action-value function for agents is represented as Q ( u , s ; θ ) = Eπθ [ ∑T t=1 γ tr ( s , u ) |s = st , u = ut ] which is the expected sum of payoffs obtained in state s upon performing action u by following the policy πθ . We denote the optimal policy πθ∗ ( shorthand π∗ ) such that Q ( u , s ; θ∗ ) ≥ Q ( u , s ; θ ) ∀s ∈ S , u ∈ A . In the case of multiple agents , the joint optimal policy can be expressed as the Nash Equilibrium ( Nash , 1950 ) of the Stochastic Markov Game as π∗ = ( π1 , ∗ , π2 , ∗ , ... πN , ∗ ) such that Q ( ua , s ; θ∗ ) ≥ Q ( ua , s ; θ ) ∀s ∈ S , u ∈ A , a ∈ N . Q-Learning is an off-policy , model-free algorithm suitable for continuous and episodic tasks . The algorithm uses semi-gradient descent to minimize the Temporal Difference ( TD ) error in Equation 1 . L ( θ ) = E s , u , s′∼R [ ( r + γmax u′∈A Q ( u′ , s′ ; θ− ) −Q ( u , s ; θ ) ) 2 ] ( 1 ) where y = r + γmax u′∈A Q ( u′ , s′ ; θ− ) is the TD target consisting of θ− as the target parameters andR denotes the replay buffer . 3.2 ENERGY-BASED MODELS . EBMs ( LeCun et al. , 2006 ; 2007 ) have been successfully applied in the field of machine learning ( Teh et al. , 2003 ) and probabilistic inference ( MacKay , 2002 ) . A typical EBM E formulates the equilibrium probabilities ( Sallans & Hinton , 2004 ) P ( v , h ) = exp ( −E ( v , h ) ) ∑ v̂ , ĥ [ exp ( −E ( v̂ , ĥ ) ) ] via a Boltzmann distribution ( Levine & Abbeel , 2014 ) where v and h are the values of the visible and hidden variables and v̂ and ĥ are all the possible configurations of the visible and hidden variables respectively . The probability distribution over all the visible variables can be obtained by summing over all possible configurations of the hidden variables . This is mathematically expressed in Equation 2 . 
P ( v ) = ∑ h exp ( −E ( v , h ) ) ∑ v̂ , ĥ exp ( −E ( v̂ , ĥ ) ) ( 2 ) Here , E ( v , h ) is called the equilibrium free energy which is the minimum of the variational free energy and ∑ v̂ , ĥ exp ( −E ( v̂ , ĥ ) ) is the partition function . 4 ENERGY-BASED SURPRISE MINIMIZATION . We begin by constructing surprise minimization as an energy-based problem in the temporal setting . The motivation behind an energy-based formulation stems from rapidly changing states as an undesired niche among agents in the case of partially-observed settings . To steer agents away from this niche , we further construct a method which incorporates the theoretical aspect of the study .
The authors present a method to regularise the learning of Q-values within Decentralised Partially Observable Markov Decision Processes, where the regulariser minimises surprise in some way across the population of agents in the environment. This, it is argued, allows the agents to avoid situations in which states are "rapidly changing", instead aiming to reach an equilibrium state where just enough surprise is experienced as part of a reward-maximising objective. The authors present a series of results in which their method outperforms a number of SoTA alternatives on a reasonable-looking set of benchmarks, as well as an ablation study showing the criticality of each proposed component.
SP:ff2c79dd5ef9325a3f48750082a994b3ad9be172
Surprise Minimizing Multi-Agent Learning with Energy-based Models
sites.google.com/view/surprise-web/

1 INTRODUCTION

The rise of RL has led to an increasing interest in the study of multi-agent systems (Lowe et al., 2017; Vinyals et al., 2019), commonly known as Multi-Agent Reinforcement Learning (MARL). In partially observable settings, MARL enables the learning of policies with centralised training and decentralised control (Kraemer & Banerjee, 2016). This has proven useful for exploiting value-based methods which motivate collaboration across large numbers of agents. But how do agents behave in the presence of sudden environmental changes? Consider the problem of autonomous driving, wherein a driver (agent) autonomously operates a vehicle in real time. The driver learns to optimize the reward function by maintaining a constant speed and covering more distance in different traffic conditions. Whenever the vehicle approaches an obstacle, the driver acts to avoid it by using the brake and directional steering commands. However, due to the fast-paced dynamics of the environment, say fast-moving traffic, the agent may abruptly encounter an obstacle (a person running across the street), which may result in a collision. Irrespective of the optimal action (applying the brakes) executed by the agent, the vehicle may fail to evade the collision as a result of the abrupt temporal change. The above arises as a consequence of surprise, which is defined as a statistical measure of uncertainty. Surprise minimization (Berseth et al., 2019) is a recent phenomenon observed in single-agent RL methods that deal with environments consisting of rapidly changing states. In the case of model-based RL (Kaiser et al., 2019), surprise minimization is used as an effective planning tool in the agent's model (Berseth et al., 2019), whereas in the case of model-free RL, surprise minimization is witnessed as an intrinsic motivation (Achiam & Sastry, 2017; Macedo et al., 2004) or as a generalization problem (Chen, 2020). On the other hand, MARL does not account for surprise across agents, as a result of which agents remain unaware of drastic changes in the environment (Macedo & Cardoso, 2005). Thus, surprise minimization in multi-agent settings requires critical attention. A potential pathway to treating surprising states may be realized in light of free-energy minimization. The free-energy principle depicts convergence to local niches and provides a general recipe for cognitive stability among agents. Through this lens, we unify surprise with free energy in the multi-agent setting. We construct a temporal EBM which represents an estimate of the surprise agents may face in the environment. All agents jointly minimize this estimate using temporal-difference learning on their value functions and the EBM. Our formulation of free-energy minimization is theoretically akin to minimizing the entropy in conjugate gradient space. This insight provides a convergence result towards the minimally surprising states (or niches) of the agent state distributions. In an empirical study of multi-agent tasks which present significant collaboration bottlenecks and fast-paced dynamics, we validate our theoretical claims and motivate the practical usage of EBMs in MARL.

2 RELATED WORK

Surprise Minimization: Despite the recent success of value-based methods (Mnih et al., 2016; Hessel et al., 2017), RL agents suffer from spurious state spaces and encounter sudden changes in trajectories. Quantitatively, surprise has been studied as a measure of deviation (Berseth et al., 2019; Chen, 2020) among states encountered by the agent during its interaction with the environment. While exploring (Burda et al., 2019; Thrun, 1992) the environment, agents tend to have higher deviation among states, which is gradually reduced by gaining a significant understanding of state-action transitions.
In the case of model-based RL , agents can leverage spurious experiences ( Berseth et al. , 2019 ) and plan effectively for future steps . On the other hand , in the case of model-free RL , surprise results in sample-inefficient learning ( Achiam & Sastry , 2017 ) . This is primarily addressed by making use of rigorous exploration strategies ( Stadie et al. , 2015 ; Lee et al. , 2019 ) . High-dimensional exploration further requires extrinsic feature engineering ( Kulkarni et al. , 2016 ) and meta models ( Gupta et al. , 2018 ) . A suitable way to tackle high-dimensional dynamics is by utilizing surprise as a penalty on the reward ( Chen , 2020 ) . This leads to improved generalization for single-agent interactions ( Ren et al. , 2005 ) . Our proposed approach is orthogonal to the aforesaid methods . Energy-based Models : EBMs have been successfully implemented in single-agent RL methods ( O ’ Donoghue et al. , 2016 ; Haarnoja et al. , 2017 ) . These typically make use of Boltzmann distributions to approximate policies ( Levine & Abbeel , 2014 ) . Such a formulation results in the minimization of free energy within the agent . While policy approximation depicts promise in the case of unknown dynamics , inference methods ( Toussaint , 2009 ) play a key role in optimizing goal-oriented behavior . A second type of usage of EBMs follows the maximization of entropy ( Ziebart et al. , 2008 ) . The maximum entropy framework ( Haarnoja et al. , 2018b ) highlighted in Soft Q-Learning ( SQL ) ( Haarnoja et al. , 2017 ) allows the agent to obey a policy which maximizes its reward and entropy concurrently . Maximization of agent ’ s entropy results in diverse and adaptive behaviors ( Ziebart , 2010 ) which may be difficult to accomplish using standard exploration techniques ( Burda et al. , 2019 ; Thrun , 1992 ) . The maximum entropy framework is akin to approximate inference in the case of policy gradient methods ( Schulman et al. , 2017 ) . 
Such a connection between likelihood-ratio gradient techniques and energy-based formulations leads to diverse and robust policies (Haarnoja, 2018) and their hierarchical extensions (Haarnoja et al., 2018a) which preserve the lower levels of hierarchies. In the case of MARL, EBMs have witnessed limited applicability as a result of the increasing number of agents and the complexity within each agent (Buşoniu et al., 2010). While the probabilistic framework is readily transferable to opponent-aware multi-agent systems (Wen et al., 2019), cooperative settings consisting of coordination between agents require a firm formulation of energy which is scalable in the number of agents (Grau-Moya et al., 2018) and accounts for environments consisting of spurious states (Wei et al., 2018). Our theoretical formulation is motivated by these methods in the literature.

3 PRELIMINARIES

3.1 MULTI-AGENT LEARNING

We review the cooperative MARL setup. The problem is modeled as a Decentralised Partially Observable Markov Decision Process (Dec-POMDP) (Oliehoek & Amato, 2016) defined by the tuple $(S, A, r, N, P, Z, O, \gamma)$, where the state space $S$ and action space $A$ are discrete; $r : S \times A \to [r_{\min}, r_{\max}]$ is the reward observed by agents $a \in N$, with $N$ the set of all agents; $P : S \times S \times A \to [0, \infty)$ is the unknown transition model giving the probability of the next state $s' \in S$ given the current state $s \in S$ and joint action $u \in A$ (a combination of each agent's action $u^a \in A^a$) at time step $t$; and $\gamma$ is the discount factor. We consider a partially observable setting in which each agent $a \in N$ draws individual observations $z \in Z$ according to the observation function $O(s, u) : S \times A \to Z$. We consider a joint policy $\pi_\theta(u \mid s)$ as a function of model parameters $\theta$. Standard RL defines the agent's objective as maximizing the expected discounted reward $\mathbb{E}_{\pi_\theta}\!\left[\sum_{t=0}^{T} \gamma^t r(s_t, u_t)\right]$ as a function of the parameters $\theta$.
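The discounted-return objective can be made concrete in a few lines. This is a minimal sketch, not from the paper; the episode rewards and $\gamma$ below are hypothetical.

```python
def discounted_return(rewards, gamma):
    """Return sum_{t=0}^{T} gamma^t * r_t for a finite episode."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# Hypothetical three-step episode with constant reward 1 and gamma = 0.5:
# 1 + 0.5 + 0.25 = 1.75.
g = discounted_return([1.0, 1.0, 1.0], gamma=0.5)
```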
The joint action-value function for the agents is $Q(u, s; \theta) = \mathbb{E}_{\pi_\theta}\!\left[\sum_{t=0}^{T} \gamma^t r(s_t, u_t) \mid s_0 = s, u_0 = u\right]$, the expected sum of payoffs obtained in state $s$ upon performing action $u$ and then following the policy $\pi_\theta$. We denote the optimal policy $\pi_{\theta^*}$ (shorthand $\pi^*$) such that $Q(u, s; \theta^*) \geq Q(u, s; \theta)\ \forall s \in S, u \in A$. In the case of multiple agents, the joint optimal policy can be expressed as the Nash equilibrium (Nash, 1950) of the stochastic Markov game, $\pi^* = (\pi^{1,*}, \pi^{2,*}, \ldots, \pi^{N,*})$, such that $Q(u^a, s; \theta^*) \geq Q(u^a, s; \theta)\ \forall s \in S, u \in A, a \in N$. Q-Learning is an off-policy, model-free algorithm suitable for continuing and episodic tasks. The algorithm uses semi-gradient descent to minimize the Temporal Difference (TD) error in Equation 1:

$$L(\theta) = \mathbb{E}_{s, u, s' \sim \mathcal{R}}\!\left[\left(r + \gamma \max_{u' \in A} Q(u', s'; \theta^-) - Q(u, s; \theta)\right)^2\right] \quad (1)$$

where $y = r + \gamma \max_{u' \in A} Q(u', s'; \theta^-)$ is the TD target, $\theta^-$ denotes the target parameters, and $\mathcal{R}$ denotes the replay buffer.

3.2 ENERGY-BASED MODELS

EBMs (LeCun et al., 2006; 2007) have been successfully applied in the field of machine learning (Teh et al., 2003) and probabilistic inference (MacKay, 2002). A typical EBM $E$ formulates the equilibrium probabilities (Sallans & Hinton, 2004)

$$P(v, h) = \frac{\exp(-E(v, h))}{\sum_{\hat{v}, \hat{h}} \exp(-E(\hat{v}, \hat{h}))}$$

via a Boltzmann distribution (Levine & Abbeel, 2014), where $v$ and $h$ are the values of the visible and hidden variables and $\hat{v}$ and $\hat{h}$ range over all possible configurations of the visible and hidden variables, respectively. The probability distribution over the visible variables can be obtained by summing over all possible configurations of the hidden variables. This is mathematically expressed in Equation 2.
$$P(v) = \frac{\sum_{h} \exp(-E(v, h))}{\sum_{\hat{v}, \hat{h}} \exp(-E(\hat{v}, \hat{h}))} \quad (2)$$

Here, $E(v, h)$ is called the equilibrium free energy, which is the minimum of the variational free energy, and $\sum_{\hat{v}, \hat{h}} \exp(-E(\hat{v}, \hat{h}))$ is the partition function.

4 ENERGY-BASED SURPRISE MINIMIZATION

We begin by constructing surprise minimization as an energy-based problem in the temporal setting. The motivation behind an energy-based formulation stems from rapidly changing states forming an undesired niche among agents in partially observed settings. To steer agents away from this niche, we further construct a method which incorporates the theoretical aspects of the study.
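Equation 2 can be checked by brute-force enumeration for a small model. The sketch below assumes binary visible and hidden units and a toy coupling energy of our own choosing (not the paper's surprise energy): it computes $P(v)$ by summing $\exp(-E)$ over hidden configurations and normalizing by the partition function.

```python
from itertools import product
from math import exp

def marginal_prob(v, energy, n_visible, n_hidden):
    """P(v) as in Equation 2: sum over hidden configurations, normalized
    by the partition function over all visible/hidden configurations."""
    num = sum(exp(-energy(v, h)) for h in product([0, 1], repeat=n_hidden))
    Z = sum(exp(-energy(vv, h))
            for vv in product([0, 1], repeat=n_visible)
            for h in product([0, 1], repeat=n_hidden))
    return num / Z

def energy(v, h):
    """Hypothetical coupling energy: lower (more probable) when many
    visible and hidden units are simultaneously on."""
    return -sum(vi * hj for vi in v for hj in h)

# Marginal over the four visible configurations (0,0), (0,1), (1,0), (1,1).
probs = [marginal_prob(v, energy, 2, 2) for v in product([0, 1], repeat=2)]
```

Because the energy rewards co-active units, the all-ones visible configuration carries the most probability mass, and the four marginals sum to one as a distribution must.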
This paper introduced a surprise term into the optimization of policies (action-value functions in Q-learning) to address the non-stationarity challenge caused by rapid changes in the environment in MARL scenarios. This work not only proposed the concept of the surprise value in the context of MARL, but also gave an operator (i.e. a contraction) that makes the sequence of surprise values converge to a fixed point. The general sketch of the proofs is correct. Also, the authors showed that the convex conjugate of the surprise-value operator is analogous to the minimization of the uncertainty among agents. By incorporating the surprise value into the Q-learning algorithm, the term led by the ratio between the target and behaviour surprise values (the surprise ratio) can also be interpreted as an intrinsic reward. Moreover, the authors compare the proposed surprise-minimization objective with the Soft Q-Learning objective.
Multimeasurement Generative Models
1 INTRODUCTION

Consider a collection of i.i.d. samples $\{x_i\}_{i=1}^n$, assumed to have been drawn from an unknown distribution with density $p_X$ in $\mathbb{R}^d$. An important problem in probabilistic modeling is the task of drawing independent samples from $p_X$, which has numerous potential applications. This problem is typically approached in two phases: approximating $p_X$, and drawing samples from the approximated density. In unnormalized models the first phase is approached by learning the energy function $f_X$ associated with the Gibbs distribution $p_X \propto \exp(-f_X)$, and for the second phase one must resort to Markov chain Monte Carlo methods, such as Langevin MCMC, which are typically very slow to mix in high dimensions. MCMC sampling is considered an "art" and we do not have black-box samplers that converge fast and are stable for complex (natural) distributions. The source of the problem is mainly attributed to the fact that the energy functions of interest are typically highly nonconvex. A broad sketch of our solution to this problem is to model a smoother density in an M-fold expanded space. The new density $p(\mathbf{y})$, called the M-density, is defined in $\mathbb{R}^{Md}$, where the bold $\mathbf{y}$ is shorthand for $(y_1, \ldots, y_M)$. The M-density is smoother in the sense that its marginals $p_m(y_m)$ are obtained by the convolution $p_m(y_m) = \int p_m(y_m \mid x)\, p(x)\, dx$ with a smoothing kernel $p_m(y_m \mid x)$, which for most of the paper we take to be the isotropic Gaussian: $Y_m = X + N(0, \sigma_m^2 I_d)$. Although we bypass learning $p(x)$, the new formalism allows for generating samples from $p(x)$, since $X$ can be estimated exactly given $\mathbf{Y} = \mathbf{y}$ (for large $M$). To give a physical picture, the approach here is based on "taking apart" the complex manifold where the random variable $X$ is concentrated in $\mathbb{R}^d$ and mapping it to a smoother manifold in $\mathbb{R}^{Md}$ where $\mathbf{Y} = (Y_1, \ldots, Y_M)$ is concentrated.
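The claim that the marginals $p_m(y_m)$ are smoothed versions of $p(x)$ can be illustrated numerically. The sketch below is not from the paper: it takes a hypothetical bimodal $p(x)$ in one dimension and approximates the convolution $\int N(y - x;\, 0, \sigma^2)\, p(x)\, dx$ on a grid, showing that the deep valley between the two modes fills in once the bandwidth $\sigma$ is large.

```python
from math import exp, pi, sqrt

def gauss(z, sigma):
    """Density of N(0, sigma^2) evaluated at z."""
    return exp(-z * z / (2 * sigma * sigma)) / (sigma * sqrt(2 * pi))

def smoothed_density(y, px, xs, dx, sigma):
    """Riemann-sum approximation of p_m(y) = \\int N(y - x; 0, s^2) p(x) dx."""
    return sum(gauss(y - x, sigma) * px(x) for x in xs) * dx

def px(x):
    """Hypothetical bimodal density: two narrow Gaussians at +/-2."""
    return 0.5 * gauss(x - 2.0, 0.1) + 0.5 * gauss(x + 2.0, 0.1)

xs = [-6 + 0.01 * i for i in range(1201)]  # grid over [-6, 6]
p_y_at_0 = smoothed_density(0.0, px, xs, 0.01, sigma=2.0)
```

The original density is essentially zero at the midpoint $y = 0$, while the smoothed marginal places appreciable mass there, which is exactly the sense in which the expanded-space density is "easier" for a sampler to traverse.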
Smoothing a density with a kernel is a technique in nonparametric density estimation that goes back to Parzen (1962). In kernel density estimation, the estimator of $p(x)$ is obtained by convolving the empirical measure with a kernel. In that methodology, the kernel bandwidth ($\sigma$, for Gaussian kernels) is adjusted to estimate $p(x)$ in $\mathbb{R}^d$ given a collection of independent samples $\{x_i\}_{i=1}^n$. This estimator, like most nonparametric estimators, suffers from a severe curse of dimensionality (Wainwright, 2019). But what if the kernel bandwidth is fixed: how much easier is the problem of estimating $p(y)$? This question is answered in (Goldfeld et al., 2020), where they obtained the rate of convergence $e^{O(d)} n^{-1/2}$ (measured using various distances), in remarkable contrast to the well-known $n^{-1/d}$ rate for estimating $p(x)$. This nonparametric estimation result is not directly relevant here, but it formalizes the intuition that learning $p(y) = \int p(y|x)\, p(x)\, dx$ is a lot simpler than learning $p(x)$. With this motivation, we start with an introduction to the problem of learning unnormalized $p(y)$, based on independent samples from $p(x)$. This problem was formulated by Vincent (2011) using score matching (Hyvärinen, 2005). It was approached recently with the more fundamental methodology of empirical Bayes (Saremi & Hyvärinen, 2019). The idea is to use the Bayes estimator of $X$ given $Y = y$, the study of which is at the root of the empirical Bayes approach to statistics (Robbins, 1956), in a least-squares objective. This machinery builds on the fact that the estimator $\hat{x}(y) = \mathbb{E}[X \mid Y = y]$ can be expressed in closed form in terms of the unnormalized $p(y)$ (Sec. 3.1). For Gaussian kernels, $\hat{x}(y)$ is expressed in terms of the score function $\nabla \log p(y)$ (Miyasawa, 1961).
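For Gaussian noise, the Miyasawa relation referenced above reads $\hat{x}(y) = y + \sigma^2 \nabla \log p(y)$. A case where everything is available in closed form is a standard Gaussian prior $X \sim N(0, 1)$: then $p(y) = N(0, 1 + \sigma^2)$, and the relation must reproduce the textbook posterior mean $y / (1 + \sigma^2)$. The check below uses these closed forms (a toy setup of ours, not the paper's experiments).

```python
def xhat_miyasawa(y, sigma):
    """x̂(y) = y + σ² ∇log p(y), with the exact score of p(y) = N(0, 1 + σ²)."""
    score = -y / (1.0 + sigma ** 2)
    return y + sigma ** 2 * score

def posterior_mean(y, sigma):
    """E[X | Y = y] for X ~ N(0, 1) and Y = X + N(0, σ²)."""
    return y / (1.0 + sigma ** 2)

# The two expressions agree identically, for any y and σ.
gap = abs(xhat_miyasawa(1.3, 0.7) - posterior_mean(1.3, 0.7))
```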
For such kernels, the learning objective arrived at in (Saremi & Hyvärinen, 2019) is identical to the denoising score matching formulation (Vincent, 2011), but with new insights rooted in empirical Bayes, which is the statistical framework for denoising. The main problem with the empirical Bayes methodology is that $p(x|y)$ remains unknown and cannot be sampled from. The estimator $\hat{x}(y) = \mathbb{E}[X \mid Y = y]$ can be computed, but the concentration of the posterior $p(x|y)$ around the mean is not in our control. Our solution to this problem starts with an observation that is very intuitive from a Bayesian perspective: one can sharpen the posterior by simply taking more independent noisy measurements. This scheme is formalized by replacing $p(y|x)$ with the factorial kernel

$$p(\mathbf{y}|x) = \prod_{m=1}^{M} p_m(y_m|x), \quad (1)$$

which we name the multimeasurement noise model (MNM). Now, the object of interest is a different density, which we call the M-density, obtained by convolving $p(x)$ with the factorial kernel:

$$p(\mathbf{y}) = \int p(\mathbf{y}|x)\, p(x)\, dx. \quad (2)$$

This formally maps the original problem of drawing samples from $p(x)$ to drawing samples from $p(\mathbf{y})$ for any fixed noise level, since the estimator of $X$ given $\mathbf{Y} = \mathbf{y}$ is asymptotically exact. We quantify this for Gaussian MNMs using the plug-in estimator (the empirical mean of the measurements). Smooth & Symmetric! Consider Gaussian MNMs with equal noise level $\sigma$ in the regime of large $\sigma$ and large $M$ such that $\sigma\sqrt{d/M}$ is "small".¹ In that regime, the complex manifold associated with the data distribution is mapped to a very smooth, symmetric manifold in a much higher dimensional space. The original manifold can be reconstructed in a single step by computing $\hat{x}(\mathbf{y})$. Due to the equal noise levels, the manifold associated with the M-density is symmetric under the permutation group:

$$p(y_1, \ldots, y_M) = p(y_{\pi(1)}, \ldots, y_{\pi(M)}), \quad (3)$$

where $\pi$ is a permutation of the indices (Fig. 1).
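The role of $\sigma\sqrt{d/M}$ can be seen directly in the plug-in estimator mentioned above (the empirical mean of the $M$ measurements). The Monte Carlo sketch below, for the scalar case $d = 1$ and hypothetical values of $x$ and $\sigma$, shows the root-mean-square error shrinking like $\sigma/\sqrt{M}$ as measurements are added.

```python
import random

random.seed(0)

def plugin_error(x, sigma, M, trials=2000):
    """RMSE of the plug-in estimator mean(y_1..y_M)
    under the Gaussian MNM Y_m = x + N(0, sigma^2), d = 1."""
    se = 0.0
    for _ in range(trials):
        ys = [x + random.gauss(0.0, sigma) for _ in range(M)]
        est = sum(ys) / M
        se += (est - x) ** 2
    return (se / trials) ** 0.5

# Quadrupling the number of measurements should roughly halve the RMSE.
e1 = plugin_error(0.5, 1.0, M=1)
e16 = plugin_error(0.5, 1.0, M=16)
```

With $M = 1$ the error is on the order of $\sigma$; with $M = 16$ it drops near $\sigma/4$, matching the $\sigma\sqrt{d/M}$ scaling that the paper's regime of interest keeps small.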
Although we develop a general methodology for studying M-densities, in the later part of the paper we focus on permutation-invariant Gaussian M-densities. The paper is organized as follows. In Sec. 2, we derive Bayes estimators for Poisson and Gaussian MNMs. In Sec. 3, we present the least-squares objective for learning Gaussian M-densities. We also give a weaker formulation of the learning objective based on score matching. Sec. 4 is devoted to the important topic of parametrization, where we introduce the multidenoising autoencoder (MDAE), in which we formally connect M-densities to the DAE literature. DAEs have never been studied for factorial kernels, and the emergence of the MDAE as a generative model should be of wide interest. In addition, we introduce the metaencoder, formulated as an unnormalized latent variable model, which is mainly left as a side contribution. In Sec. 5, we present the sampling algorithm used in the paper. In Sec. 6, we present our experiments on the MNIST, CIFAR-10, and FFHQ-256 datasets, which were focused on permutation-invariant M-densities. The experiments are mainly of a qualitative nature, demonstrating the effectiveness of this method in generating fast-mixing Markov chains in high dimensions. Related works are discussed in Sec. 7, and we finish with concluding remarks.

¹The regime $\sigma\sqrt{d/M} \ll 1$ is obtained in our analysis of the highly suboptimal plug-in estimator (Sec. 2.3).

Notation. The subscripts are dropped from densities and energy functions when it is clear from their arguments: $p(y) = p_Y(y)$, $p(y|x) = p_{Y|X=x}(y)$, $f(y) = f_Y(y)$, etc. Bold fonts are reserved for multimeasurement random variables: $\mathbf{Y} = (Y_1, \ldots, Y_M)$. The following are shorthand notations: $[M] = \{1, \ldots, M\}$ and $\nabla_m = \nabla_{y_m}$. Throughout, $\nabla$ is the gradient with respect to inputs (in $\mathbb{R}^{Md}$), not parameters. The following convention is used regarding parametric functions: $f_\theta(\cdot) = f(\cdot\,; \theta)$.
Different parametrization schemes come with different sets of parameters, the collection of which we denote by $\theta$. For all the datasets used in the paper, $X$ takes values in the hypercube $[0, 1]^d$.

2 FORMALISM: MULTIMEASUREMENT BAYES ESTIMATORS

This work is based on generalizing the empirical Bayes methodology to MNMs. It is well known that the least-squares estimator of $X$ given $\mathbf{Y} = \mathbf{y}$ (for any noise model) is the Bayes estimator:

$$\hat{x}(\mathbf{y}) = \frac{\int x\, p(\mathbf{y}|x)\, p(x)\, dx}{\int p(\mathbf{y}|x)\, p(x)\, dx}. \quad (4)$$

Next we study this estimator à la Robbins (1956) for Poisson (the Poisson kernel was the first example studied in 1956) and Gaussian MNMs. In both cases the estimator $\hat{x}(\mathbf{y})$ is derived to be a functional of the joint density $p(\mathbf{y})$. In addition, $\hat{x}(\mathbf{y})$ is invariant to scaling $p(\mathbf{y})$ by a constant; therefore one can ignore the partition function in this estimation problem. This is the main appeal of this formalism. Poisson MNMs are included as a warm-up and to demonstrate the generality of the new formalism, but we will not pursue them as a generative model in our experiments for technical reasons due to the challenges of sampling discrete distributions in high dimensions (see Remark 7).

2.1 POISSON MNM

Let $X$ be a random variable taking values in $\mathbb{R}_+$. The Poisson MNM is defined by:

$$p(\mathbf{y}|x) = e^{-Mx} \prod_{l=1}^{M} \frac{x^{y_l}}{y_l!}, \quad y_l \in \mathbb{N}.$$

The numerator on the r.h.s. of Eq. 4 is computed next. The measurement index $m$ below is an arbitrary index in $[M]$ used for absorbing $x$ such that $x\, p(\mathbf{y}|x)$ has the same functional form as $p(\mathbf{y}|x)$:

$$\int x\, p(\mathbf{y}|x)\, p(x)\, dx = \int e^{-Mx} (y_m + 1) \frac{x^{y_m + 1}}{(y_m + 1)!} \prod_{l \neq m} \frac{x^{y_l}}{y_l!}\, p(x)\, dx = (y_m + 1)\, p(\mathbf{y} + 1_m),$$

where $1_m$ is defined as the vector whose component $l$ is $\delta_{ml}$. Using Eq. 4, it immediately follows that

$$\hat{x}(\mathbf{y}) = (y_m + 1) \frac{p(\mathbf{y} + 1_m)}{p(\mathbf{y})}, \quad m \in [M]. \quad (5)$$
We emphasize that the dependency of $\hat{x}(\mathbf{y})$ on the noise channel (measurement index) $m$ that appears on the right-hand side of the expression above is only an artifact of the calculation (we observe this again for Gaussian MNMs). The result above holds for any measurement index $m \in [M]$; therefore $(y_m + 1)\, p(\mathbf{y} + 1_m) = (y_{m'} + 1)\, p(\mathbf{y} + 1_{m'})$ for all $m, m' \in [M]$.

Example. We can derive the estimator $\hat{x}(\mathbf{y})$ analytically for $p(x) = e^{-x}$. We first derive $p(\mathbf{y})$:²

$$p(\mathbf{y}) = \frac{\left(\sum_l y_l\right)!}{\prod_l y_l!}\, (M + 1)^{-1 - \sum_l y_l},$$

where the sums/products are over the measurement indices $l \in [M]$. Using Eq. 5, it follows that

$$\hat{x}(\mathbf{y}) = (y_m + 1) \frac{p(\mathbf{y} + 1_m)}{p(\mathbf{y})} = (y_m + 1) \cdot \frac{\sum_l y_l + 1}{y_m + 1} \cdot \frac{1}{M + 1} = \frac{\sum_l y_l + 1}{M + 1}.$$

As expected, one arrives at the same result by computing Eq. 5 for any measurement index $m$.
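The closed form above is easy to verify numerically. The sketch below implements $p(\mathbf{y})$ for the exponential prior and evaluates Eq. 5 for each measurement index $m$ of a hypothetical observation $\mathbf{y} = (3, 0, 2)$, confirming that every index gives the same answer, $(\sum_l y_l + 1)/(M + 1)$.

```python
from math import factorial

def p_y(y, M):
    """Closed-form M-density for the Poisson MNM with prior p(x) = exp(-x)."""
    s = sum(y)
    prod = 1
    for yl in y:
        prod *= factorial(yl)
    return factorial(s) / prod * (M + 1) ** (-1 - s)

def xhat(y, m, M):
    """Bayes estimator via Eq. 5, using measurement index m."""
    yp = list(y)
    yp[m] += 1  # y + 1_m: bump the m-th measurement by one
    return (y[m] + 1) * p_y(yp, M) / p_y(y, M)

y = (3, 0, 2)
# Every choice of m yields (3 + 0 + 2 + 1) / (3 + 1) = 1.5.
vals = [xhat(y, m, M=3) for m in range(3)]
```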
This paper introduced an alternative sampling method, with an application to generative models, obtained by convolving an unknown distribution $p_x$ with a factorial kernel called the multimeasurement noise model (MNM). The resulting M-density $p_y$ is smoother (easier to sample from) and is permutation invariant. Two factorial kernels, the Poisson and Gaussian MNMs, are introduced for the convolution, and can be connected to the Bayes estimator and to the learning of parametric energy and score functions. Two parameterization schemes are proposed for modeling the energy and score functions of the Gaussian M-density, respectively. Empirical results on the FFHQ-256 dataset are very impressive. The main contribution is the usage of factorial kernels, which is a direct extension of smoothing a density in nonparametric density estimation and has not been used for generative models before.
SP:82d842008ef479c32545afd952f9d7db15a0baf5
Multimeasurement Generative Models
1 INTRODUCTION . Consider a collection of i.i.d . samples { xi } ni=1 , assumed to have been drawn from an unknown distribution with density pX in Rd . An important problem in probabilistic modeling is the task of drawing independent samples from pX , which has numerous potential applications . This problem is typically approached in two phases : approximating pX , and drawing samples from the approximated density . In unnormalized models the first phase is approached by learning the energy function fX associated with the Gibbs distribution pX ∝ exp ( −fX ) , and for the second phase one must resort to Markov chain Monte Carlo methods , such as Langevin MCMC , which are typically very slow to mix in high dimensions . MCMC sampling is considered an “ art ” and we do not have black box samplers that converge fast and are stable for complex ( natural ) distributions . The source of the problem is mainly attributed to the fact that the energy functions of interest are typically highly nonconvex . A broad sketch of our solution to this problem is to model a smoother density in an M-fold expanded space . The new density p ( y ) , called M-density , is defined in RMd , where the bold y is a shorthand for ( y1 , . . . , yM ) . M-density is smoother in the sense that its marginals pm ( ym ) are obtained by the convolution pm ( ym ) = ∫ pm ( ym|x ) p ( x ) dx with a smoothing kernel pm ( ym|x ) which for most of the paper we take to be the isotropic Gaussian : Ym = X +N ( 0 , σ 2 mId ) . Although we bypass learning p ( x ) , the new formalism allows for generating samples from p ( x ) since X can be estimated exactly given Y = y ( for large M ) . To give a physical picture , the approach here is based on “ taking apart ” the complex manifold where the random variable X is concentrated in Rd and mapping it to a smoother manifold in RMd where Y = ( Y1 , . . . , YM ) is concentrated . 
Smoothing a density with a kernel is a technique in nonparametric density estimation that goes back to Parzen ( 1962 ) . In kernel density estimation , the estimator of p ( x ) is obtained by convolving the empirical measure with a kernel . In that methodology , the kernel bandwidth ( σ , for Gaussian kernels ) is adjusted to estimate p ( x ) in Rd given a collection of independent samples { xi } ni=1 . This estimator , like most nonparametric estimators , suffers from a severe curse of dimensionality ( Wainwright , 2019 ) . But what if the kernel bandwidth is fixed : how much easier is the problem of estimating p ( y ) ? This question is answered in ( Goldfeld et al. , 2020 ) , where they obtained the rate of convergence eO ( d ) n−1/2 ( measured using various distances ) in remarkable contrast to the well-known n−1/d rate for estimating p ( x ) . This nonparametric estimation result is not directly relevant here , but it formalizes the intuition that learning p ( y ) = ∫ p ( y|x ) p ( x ) dx is a lot simpler than learning p ( x ) . With this motivation , we start with an introduction to the problem of learning unnormalized p ( y ) , based on independent samples from p ( x ) . This problem was formulated by Vincent ( 2011 ) using score matching ( Hyvärinen , 2005 ) . It was approached recently with the more fundamental methodology of empirical Bayes ( Saremi & Hyvärinen , 2019 ) . The idea is to use the Bayes estimator of X given Y = y , the study of which is at the root of the empirical Bayes approach to statistics ( Robbins , 1956 ) , in a least-squares objective . This machinery builds on the fact that the estimator x̂ ( y ) = E [ X|Y = y ] can be expressed in closed form in terms of unnormalized p ( y ) ( Sec . 3.1 ) . For Gaussian kernels , x̂ ( y ) is expressed in terms of the score function ∇ log p ( y ) ( Miyasawa , 1961 ) . 
For such kernels , the learning objective arrived at in ( Saremi & Hyvärinen , 2019 ) is identical to the denoising score matching formulation ( Vincent , 2011 ) , but with new insights rooted in empirical Bayes which is the statistical framework for denoising . The main problem with the empirical Bayes methodology is that p ( x|y ) remains unknown and can not be sampled from . The estimator x̂ ( y ) = E [ X|Y = y ] can be computed , but the concentration of the posterior p ( x|y ) around the mean is not in our control . Our solution to this problem starts with an observation that is very intuitive from a Bayesian perspective : one can sharpen the posterior by simply taking more independent noisy measurements . This scheme is formalized by replacing p ( y|x ) with the factorial kernel p ( y|x ) = M∏ m=1 pm ( ym|x ) , ( 1 ) which we name multimeasurement noise model ( MNM ) . Now , the object of interest is a different density which we call M-density obtained by convolving p ( x ) with the factorial kernel : p ( y ) = ∫ p ( y|x ) p ( x ) dx . ( 2 ) This formally maps the original problem of drawing samples from p ( x ) to drawing samples from p ( y ) for any fixed noise level since the estimator of X given Y = y is asymptotically exact . We quantify this for Gaussian MNMs using the plug-in estimator ( the empirical mean of measurements ) . Smooth & Symmetric ! Consider Gaussian MNMs with equal noise level σ in the regime of large σ , large M such that σ √ d/M is “ small ” .1 In that regime , the complex manifold associated with the data distribution is mapped to a very smooth symmetric manifold in a much higher dimensional space . The original manifold can be reconstructed via a single step by computing x̂ ( y ) . Due to equal noise levels , the manifold associated with M-density is symmetric under the permutation group : p ( y1 , . . . , yM ) = p ( yπ ( 1 ) , . . . , yπ ( M ) ) , ( 3 ) where π is a permutation of indices ( Fig . 1 ) . 
Although we develop a general methodology for studying M-densities , in the later part of the paper we focus on permutation invariant Gaussian M-densities . The paper is organized as follows . In Sec . 2 , we derive Bayes estimators for Poisson and Gaussian MNMs . In Sec . 3 , we present the least-squares objective for learning Gaussian M-densities . We also give a weaker formulation of the learning objective based on score matching . Sec . 4 is devoted to the important topic of parametrization , where we introduce multidenoising autoencoder ( MDAE ) in which we formally connect M-densities to the DAE literature . DAEs have never been studied for factorial kernels and the emergence of MDAE as a generative model should be of wide interest . In addition , we introduce metaencoder formulated in an unnormalized latent variable model , which is mainly left as a side contribution . In Sec . 5 , we present the sampling algorithm used in the paper . In Sec . 6 , we present our experiments on MNIST , CIFAR-10 , and FFHQ-256 datasets which were focused on permutation invariant M-densities . The experiments are mainly of qualitative nature demonstrating the effectiveness of this method in generating fast mixing Markov chains in high dimensions . Related works are discussed in Sec . 7 , and we finish with concluding remarks . 1The regime σ √ d/M 1 is obtained in our analysis of the highly suboptimal plug-in estimator ( Sec . 2.3 ) . Notation . The subscripts are dropped from densities and energy functions when it is clear from their arguments : p ( y ) = pY ( y ) , p ( y|x ) = pY|X=x ( y ) , f ( y ) = fY ( y ) , etc . Bold fonts are reserved for multimeasurement random variables : Y = ( Y1 , . . . , YM ) . The following are shorthand notations : [ M ] = { 1 , . . . , M } and∇m = ∇ym . Throughout , ∇ is the gradient with respect to inputs ( in RMd ) , not parameters . The following convention is used regarding parametric functions : fθ ( · ) = f ( · ; θ ) . 
Different parametrization schemes come with a different set of parameters, the collection of which we denote by θ. For all the datasets used in the paper, X takes values in the hypercube [0, 1]^d.

2 FORMALISM: MULTIMEASUREMENT BAYES ESTIMATORS.

This work is based on generalizing the empirical Bayes methodology to MNMs. It is well known that the least-squares estimator of X given Y = y (for any noise model) is the Bayes estimator:

$$\hat{x}(\mathbf{y}) = \frac{\int x\, p(\mathbf{y}|x)\, p(x)\, dx}{\int p(\mathbf{y}|x)\, p(x)\, dx}. \qquad (4)$$

Next we study this estimator à la Robbins (1956) for Poisson (the Poisson kernel was the first example studied in 1956) and Gaussian MNMs. In both cases the estimator x̂(y) is derived to be a functional of the joint density p(y). In addition, x̂(y) is invariant to scaling p(y) by a constant, so one can ignore the partition function in this estimation problem. This is the main appeal of this formalism. Poisson MNMs are included as a warm-up and to demonstrate the generality of the new formalism, but we will not pursue them as a generative model in our experiments due to the technical challenges of sampling discrete distributions in high dimensions (see Remark 7).

2.1 POISSON MNM.

Let X be a random variable taking values in ℝ₊. The Poisson MNM is defined by

$$p(\mathbf{y}|x) = e^{-Mx} \prod_{l=1}^{M} \frac{x^{y_l}}{y_l!}, \qquad y_l \in \mathbb{N}.$$

The numerator on the r.h.s. of Eq. 4 is computed next. The measurement index m below is an arbitrary index in [M] used for absorbing x such that x p(y|x) has the same functional form as p(y|x):

$$\int x\, p(\mathbf{y}|x)\, p(x)\, dx = \int e^{-Mx}\, \frac{(y_m + 1)\, x^{y_m + 1}}{(y_m + 1)!} \prod_{l \neq m} \frac{x^{y_l}}{y_l!}\, p(x)\, dx = (y_m + 1)\, p(\mathbf{y} + 1_m),$$

where 1_m is defined as a vector whose component l is δ_{ml}. Using Eq. 4, it immediately follows that

$$\hat{x}(\mathbf{y}) = (y_m + 1)\, \frac{p(\mathbf{y} + 1_m)}{p(\mathbf{y})}, \qquad m \in [M].$$
(5)

We emphasize that the dependency of x̂(y) on the noise channel (measurement index) m that appears on the right-hand side of the expression above is only an artifact of the calculation (we observe this again for Gaussian MNMs). The result above holds for any measurement index m ∈ [M]; therefore (y_m + 1) p(y + 1_m) = (y_{m'} + 1) p(y + 1_{m'}) for all m, m' ∈ [M].

Example. We can derive the estimator x̂(y) analytically for p(x) = e^{-x}. We first derive p(y):²

$$p(\mathbf{y}) = \frac{\left(\sum_l y_l\right)!}{\prod_l y_l!}\, (M+1)^{-1-\sum_l y_l},$$

where the sums/products are over the measurement indices l ∈ [M]. Using Eq. 5, it follows that

$$\hat{x}(\mathbf{y}) = (y_m + 1)\, \frac{p(\mathbf{y} + 1_m)}{p(\mathbf{y})} = (y_m + 1)\, \frac{\sum_l y_l + 1}{y_m + 1}\, \frac{1}{M+1} = \frac{\sum_l y_l + 1}{M+1}.$$

As expected, one arrives at the same result by computing Eq. 5 for any measurement index m.
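The closed-form example above can be checked numerically. The sketch below implements p(y) for the exponential prior and verifies that the Robbins-style estimator of Eq. 5 gives (Σ_l y_l + 1)/(M + 1) for every measurement index m; the particular measurement vector is an arbitrary illustrative choice.

```python
import math

def p_y(y, M):
    # Closed-form M-density for the exponential prior p(x) = exp(-x):
    # p(y) = (sum_l y_l)! / (prod_l y_l!) * (M + 1)^(-1 - sum_l y_l)
    s = sum(y)
    prod_fact = 1
    for yl in y:
        prod_fact *= math.factorial(yl)
    return math.factorial(s) / prod_fact * (M + 1) ** (-1 - s)

def x_hat(y, m):
    # Robbins-style estimator of Eq. 5: (y_m + 1) p(y + 1_m) / p(y)
    M = len(y)
    y_plus = list(y)
    y_plus[m] += 1
    return (y[m] + 1) * p_y(y_plus, M) / p_y(y, M)

y = [3, 0, 5, 2]                            # M = 4 Poisson measurements (illustrative)
closed_form = (sum(y) + 1) / (len(y) + 1)   # (sum_l y_l + 1) / (M + 1) = 11/5
for m in range(len(y)):
    assert abs(x_hat(y, m) - closed_form) < 1e-12
```

The loop confirms the point made after Eq. 5: the apparent dependence on the measurement index m cancels, and every channel yields the same estimate.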
Given $n$ independent samples $x_i$ in a space of dimension $d$, drawn from an unknown distribution $p(x)$, this paper is interested in drawing new samples independent of the $x_i$ but coming from the same unknown distribution $p(x)$. The classical approach consists of learning an approximation $\tilde{p}(x)$ of $p(x)$ from the $x_i$ and sampling new points according to $\tilde{p}(x)$, which remains a difficult task given the sometimes highly non-convex character of $p(x)$. For these reasons, the authors propose to corrupt the data $x$ through $M$ noise channels with chosen noise levels, producing data denoted $y$. Since the noise level of each channel is known, a Bayesian estimator $\hat{x}(y)$ can be used to recover samples in the original space. The advantage of this approach is that the Bayesian estimator depends on the density $p(y)$; a chosen parametrization of $p(y)$ (which in turn parametrizes $\hat{x}(y)$) then allows finding the optimal parameters through a least-squares objective by minimizing $\|x - \hat{x}(y)\|^2$ on the training data. The Bayesian estimator thus allows generating new samples using the optimized density $p^{\star}(y)$ with the corresponding optimal parameters. The authors draw connections between the proposed method and several methods from the literature, providing new intuitions and different views of existing algorithms. The experiments conducted suggest the efficiency of the algorithm in generating diverse samples.
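To make the review's description concrete, here is a minimal sketch of the least-squares objective $\|x - \hat{x}(y)\|^2$, with a simple linear estimator standing in for the neural parametrization discussed in the paper. All sizes, the noise level, and the linear model itself are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, M, sigma = 2000, 8, 4, 0.5

# Toy data x in [0, 1]^d with M independent Gaussian measurement channels
X = rng.uniform(0.0, 1.0, size=(n, d))
Y = X[:, None, :] + sigma * rng.standard_normal((n, M, d))   # shape (n, M, d)
Yf = Y.reshape(n, M * d)                                     # concatenated measurements

# Least-squares objective ||x - x_hat(y)||^2 with a linear estimator
# x_hat(y) = W^T [y; 1]; fitted in closed form instead of by gradient descent
Yb = np.hstack([Yf, np.ones((n, 1))])
W, *_ = np.linalg.lstsq(Yb, X, rcond=None)
X_hat = Yb @ W

mse_denoised = np.mean((X - X_hat) ** 2)
mse_raw = np.mean((X - Y[:, 0, :]) ** 2)   # error of a single raw measurement
assert mse_denoised < mse_raw              # pooling M channels sharpens the estimate
```

Even this linear stand-in shows the mechanism the review describes: the estimator trained on $\|x - \hat{x}(y)\|^2$ pools the $M$ channels and beats any single noisy measurement.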
SP:82d842008ef479c32545afd952f9d7db15a0baf5
Robust Generalization of Quadratic Neural Networks via Function Identification
1 INTRODUCTION. Recent work has demonstrated that neural networks are not robust to shifts in the underlying data, including both distribution shifts (i.e., where the data comes from a new distribution independent of the neural network parameters) (Hendrycks & Dietterich, 2019; Taori et al., 2020) and adversarial shifts (i.e., where the shift can depend on the neural network parameters) (Szegedy et al., 2013). Accordingly, there has been a great deal of interest in better understanding why neural networks fail to be robust (Tsipras et al., 2018; Ilyas et al., 2019) and in improving robustness (Goodfellow et al., 2014; Raghunathan et al., 2018; Cohen et al., 2019). From the perspective of learning theory, there is little reason to expect neural networks to be robust, since generalization bounds typically assume that the test examples are from the same distribution as the training examples. PAC-Bayesian generalization bounds allow for a limited amount of robustness, but only if the support of the target distribution q is contained in that of the source distribution p, since they require that the KL divergence D_KL(q ‖ p) is finite. Yet, distribution shifts (Hendrycks & Dietterich, 2019) often move probability mass to inputs completely outside the source distribution. Instead, the reason we might expect neural networks to be robust to these shifts is that humans are robust to them; for instance, the small pixel-level shifts considered in adversarial examples are typically unnoticeable to humans, yet they can move an image completely off the distribution of natural images. This fact indicates a gap in our theoretical understanding of neural networks. In particular, the key question is understanding settings under which we may expect neural networks to be robust to distribution shifts that are "large" (e.g., in terms of KL divergence).
We study a strategy for closing this gap based on the statistical concept of identifiability (Hsu et al., 2012). At a high level, this concept assumes that the true model belongs to the model family; then, in the limit of infinite training data, the learning algorithm can exactly recover the parameters of the true model. For instance, in linear regression, the data is generated according to the model y = ⟨θ*, x⟩ + ξ, where ξ is σ-subgaussian noise. Then, under mild assumptions on the training data Z = (X, Y), the ordinary least squares (OLS) estimator θ̂(Z) recovers the true parameter, i.e., in the limit of infinite data, θ̂(Z) = θ*. With finite samples, OLS satisfies high-probability convergence rates of the form

$$\|\hat{\theta}(Z) - \theta^*\|_2 \le \epsilon. \qquad (1)$$

The connection to robustness is that if (1) holds, then for any input x such that ‖x‖₂ ≤ x_max, we have

$$|\langle \hat{\theta}(Z), x\rangle - \langle \theta^*, x\rangle| \le \|\hat{\theta}(Z) - \theta^*\|_2 \cdot \|x\|_2 \le \epsilon \cdot x_{\max}. \qquad (2)$$

Thus, for any distribution q(x) with support on B₂(0, x_max) = {x ∈ X | ‖x‖₂ ≤ x_max}, θ̂(Z) obtains bounded error, i.e., we have E_{q(x)}[(⟨θ̂(Z), x⟩ − ⟨θ*, x⟩)²] ≤ ε²x_max² with high probability. Thus, a natural question is whether we can obtain similar kinds of parameter identification bounds for neural networks. A key complication is that practical neural networks are often over-parameterized, possibly to facilitate optimization (Du & Lee, 2018; Jacot et al., 2018). In this setting, identification is impossible, since multiple parameters can yield the same model. Nevertheless, it may be possible to obtain bounds of the form (2); in particular, even if we do not recover the true parameters θ*, we can still recover the function f_{θ*}(x). We refer to this notion as function identification. Furthermore, we show that quadratic neural networks satisfy function identification bounds under mild conditions.
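The linear-regression argument in Eqs. (1)-(2) can be checked numerically. In the sketch below (the sample size, noise level, and radius are illustrative choices), OLS recovers θ* up to a small ε, and the Cauchy-Schwarz bound on the prediction error holds even at a covariate-shifted test point on the sphere of radius x_max.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, sigma, x_max = 5, 20000, 0.1, 10.0
theta_star = rng.standard_normal(d)

# Training data from the source distribution p: y = <theta*, x> + xi
X = rng.standard_normal((n, d))
y = X @ theta_star + sigma * rng.standard_normal(n)

# OLS recovers theta* up to a small epsilon, as in Eq. (1)
theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
eps = float(np.linalg.norm(theta_hat - theta_star))

# A covariate-shifted test point with ||x||_2 = x_max, far outside typical
# training inputs; Eq. (2) still bounds the prediction error by eps * x_max
x_shift = rng.standard_normal(d)
x_shift *= x_max / np.linalg.norm(x_shift)
pred_err = abs(float(x_shift @ theta_hat - x_shift @ theta_star))
assert pred_err <= eps * x_max + 1e-12
```

The bound in Eq. (2) is distribution-free over B₂(0, x_max), which is exactly why parameter identification yields robustness to large covariate shifts.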
To demonstrate its utility, we show how function identification can be leveraged to obtain regret guarantees for a bandit (Rusmevichientong & Tsitsiklis, 2010) where each arm is a quadratic neural network. Linear bandits fundamentally involve covariate shift, since their "covariates" are arms, which are adaptively chosen through the learning process as a function of past observations; thus, existing approaches have all operated in the setting where there is a unique and identifiable global minimizer. Similarly, we build on recent work proving bounds on transfer learning in the setting of bounded label shift and unbounded covariate shift (Bastani, 2020; Xu et al., 2021); again, we show that we can leverage function identification to easily transfer-learn quadratic neural networks. Our results suggest that one strategy for improving the robustness of neural networks is to design models that can represent the true data generating process. However, doing so can be challenging due to the complexity of most real-world data generating processes mapping covariates to labels, e.g., mapping natural language to semantic meaning or images to object detections and labels. As a consequence, we study how our results connect to neural module networks (Andreas et al., 2016), which are designed to break down complex tasks into smaller ones that can each be solved by a neural network. For instance, we may break down the task "count the number of red balls in image x" into (i) detecting balls, (ii) detecting red objects, (iii) intersecting the two sets, and (iv) summing the results.
Intuitively, neural modules can generalize more robustly since (i) it is more likely that an individual neural module designed to solve a simple task can be identified from training data, and (ii) even if module composition is not itself identifiable, shifts in the compositional structure tend to be smaller than shifts in the underlying data distribution. We study a simplified form of neural module networks, where modules are quadratic neural networks composed in sequence according to a given input; for simplicity, we assume they can be trained in a supervised way, ensuring robust generalization. Then, we show that under certain conditions, compositions of these models are also robust, including the case where there are shifts in the distribution over compositions. Related work. Prior work has connected model specification (i.e., whether the true model is in the model family) and robustness to covariate shift (Shimodaira, 2000; Wen et al., 2014); however, having a correctly specified model is insufficient if the true parameters are not identifiable. E.g., in linear regression, if the covariance matrix Σ = E_{p(x)}[xx^⊤] is singular, then θ is not identifiable; thus, the estimated model may not be robust. Quadratic neural networks cannot be identified even if the model is correctly specified, since the parameters have a continuous symmetry (i.e., orthogonal transformations). Recent work has studied learning under adversarial examples (Goodfellow et al., 2014; Raghunathan et al., 2018; Cohen et al., 2019) and corrupted training data (Steinhardt et al., 2017; Diakonikolas et al., 2019). In contrast, we are interested in robustness to covariate shift; there has been recent work empirically showing that neural networks are sensitive to distribution shift (Hendrycks & Dietterich, 2019; Taori et al., 2020; Ruis et al., 2020; Ribeiro et al., 2020; Koh et al., 2020).
Distributionally robust optimization enables training models robust to small shifts (Duchi & Namkoong, 2018), but we are interested in potentially large shifts. Unsupervised domain adaptation (Ben-David et al., 2007; Blitzer et al., 2008) learns a model on a covariate-shifted target distribution; however, these methods rely on unlabeled examples from the target domain, whereas we do not. There has been recent theory on robustness to adversarial perturbations, e.g., showing there may be a tradeoff between robustness and on-distribution generalization (Tsipras et al., 2018), and that non-robust algorithms tend to learn predictive but brittle representations compared to adversarially robust ones (Ilyas et al., 2019). In contrast, we show that these tradeoffs are mitigated when the true model function can be identified despite over-parameterization. Furthermore, adversarial shifts are typically bounded (e.g., small ℓ∞ norm), whereas the shifts we consider may be large. There has been a great deal of recent work on deep learning theory, including on quadratic neural networks; however, it has largely focused on optimization (Ge et al., 2017b; Jacot et al., 2018; Du et al., 2019; Gao et al., 2019; Soltanolkotabi et al., 2018; Li et al., 2018) and on-distribution generalization (Neyshabur et al., 2017; Du & Lee, 2018; Jacot et al., 2018; Arora et al., 2018; Long & Sedghi, 2019). In contrast, we are interested in out-of-distribution generalization. We discuss additional related work on matrix factorization and multi-armed bandits in Appendix A, along with a discussion of the novelty of our results.

2 PROBLEM FORMULATION.

We consider a model f_θ : X → Y, with covariates X ⊆ ℝ^d, labels Y ⊆ ℝ, and parameters θ ∈ Θ ⊆ ℝ^m.
A generalization bound from learning theory typically has the form

$$P_{p(Z)}\big[L_p(\hat{\theta}(Z)) \le \epsilon\big] \ge 1 - \delta, \quad \text{where} \quad L_p(\theta) = \mathbb{E}_{p(x)}\big[(f_\theta(x) - f_{\theta^*}(x))^2\big], \qquad (3)$$

where ε, δ ∈ ℝ_{>0}, Z = {(x₁, y₁), ..., (x_n, y_n)} ⊆ X × Y with y_i = f_{θ*}(x_i) + ξ_i is a training set of i.i.d. observations from a distribution p (i.e., p(Z) = p(x₁, y₁) · ... · p(x_n, y_n)), ξ_i is bounded random noise independent of x_i with |ξ_i| ≤ ξ_max, θ* ∈ Θ are the true parameters, and

$$\hat{\theta}(Z) = \arg\min_{\theta \in \Theta} \hat{L}(\theta; Z) \quad \text{where} \quad \hat{L}(\theta; Z) = \frac{1}{n}\sum_{i=1}^{n} (f_\theta(x_i) - y_i)^2$$

is an estimator based on the training data Z.¹ In particular, these bounds assume that the training inputs x_i ∼ p are i.i.d. samples from the same distribution as the test example x ∼ p.

Definition 2.1. The model f_θ and distribution p satisfy function identification if for any ε, δ ∈ ℝ_{>0}, we have P_{p(Z)}[∀x ∈ X. (f_{θ̂(Z)}(x) − f_{θ*}(x))² ≤ ε] ≥ 1 − δ for n = |Z| sufficiently large.

Function identification implies generalization bounds even when the test data comes from a different distribution q. In particular, we say f_θ robustly generalizes if for any q with support on X, we have

$$P_{p(Z)}\big[L_q(\hat{\theta}(Z)) \le \epsilon\big] \ge 1 - \delta, \qquad (4)$$

where the difference from (3) has been highlighted in red. It is easy to see that function identification implies (4). Note that the true model f_{θ*} does not change, so there is no label shift.
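The gap between parameter and function identification can be illustrated with a one-layer quadratic network f_W(x) = Σ_i ⟨w_i, x⟩², a common formalization of quadratic networks (the specific shapes and seed below are illustrative): an orthogonal mixing of the rows changes the parameters but not the function, so the parameters are not identifiable while the function still is.

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 4, 3
W = rng.standard_normal((k, d))

def f(W, x):
    # Quadratic network f_W(x) = sum_i <w_i, x>^2 = x^T (W^T W) x
    return float(np.sum((W @ x) ** 2))

# Mixing the rows by an orthogonal Q leaves the function unchanged:
# (Q W)^T (Q W) = W^T Q^T Q W = W^T W, a continuous symmetry of the parameters
Q, _ = np.linalg.qr(rng.standard_normal((k, k)))
W2 = Q @ W

x = rng.standard_normal(d)
assert not np.allclose(W, W2)          # different parameters...
assert abs(f(W, x) - f(W2, x)) < 1e-9  # ...identical function value
```

This is exactly the orthogonal-transformation symmetry mentioned in the related-work discussion: any learner can at best recover W up to left-multiplication by an orthogonal matrix, which is why bounds of the form (4) must target the function rather than the parameters.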
The authors propose the concept of function identification to address the limitation of parameter identification in the over-parameterized setting. Function identification states that the outputs of quadratic neural networks with the empirically minimized parameters and with the true parameters have a bounded difference with high probability. The reason to adopt function identification is that parameter identification is hard for neural networks, which are usually over-parameterized. The main theoretical result is Theorem 3.6, which is directly used in Corollary 3.7 to derive a robust generalization error bound, i.e., a bound in which the loss function is evaluated on an arbitrary distribution. The robust generalization error bound is then applied to three cases: (1) an upper bound on the regret of quadratic neural bandits, (2) bounds for transfer learning, and (3) generalization bounds for neural module networks.
SP:24a82292d86bf8f75e14cf09ac7c2e3af812df6e
The paper considers uncertainty estimation in overparameterized shallow neural networks with a quadratic activation function. In particular, the paper assumes a non-linear regression model where labels are generated by such a neural net f* and the noise is bounded. Uncertainty estimation then consists of giving a confidence interval around f*(x) for any given x. This differs from the usual uncertainty estimation in linear regression, where one is concerned with the deviation of the parameters. The bound is a combination of a uniform-convergence-type bound on the excess risk and an interesting observation about the strong convexity of the risk in the parameters of the network. The strong convexity, combined with the Lipschitzness of the predictor, allows one to "convert" the excess risk bound into a bound on the deviation between predictors. The uncertainty estimation developed here is later used to give an explore-then-commit type of bandit algorithm with $T^{2/3}$ regret.
SP:24a82292d86bf8f75e14cf09ac7c2e3af812df6e
On the One-sided Convergence of Adam-type Algorithms in Non-convex Non-concave Min-max Optimization
1 INTRODUCTION. As one of the most popular optimizers in supervised deep learning tasks like natural language processing (Chowdhury, 2003), as well as the main workhorse of generative adversarial network training (Goodfellow et al., 2014), Adam-type methods are widely used because of their minimal need for learning-rate tuning and their coordinate-wise adaptivity to local geometry. Starting from AdaGrad (Duchi et al., 2011), adaptive gradient methods have evolved into a variety of different Adam-type algorithms, such as Adam (Kingma & Ba, 2015), RMSprop, AMSGrad (Reddi et al., 2018), and AdaDelta (Zeiler, 2012). In supervised learning, adaptive gradient methods and Adam-type algorithms play important roles. Especially in the field of natural language processing (NLP), Adam-type algorithms are the go-to optimizer. Multiple NLP experiments show that sparse Adam outperforms non-adaptive algorithms like stochastic gradient descent (SGD) not only in the quality of the solution but also in the convergence rates of both the training and test error. It is worth mentioning that the most popular pre-trained language model, BERT (Devlin et al., 2018), also uses Adam as its optimizer, which shows the power of Adam-type algorithms. Adam-type algorithms are also very effective in min-max optimization. As a direct and widely used application of min-max optimization, generative adversarial networks (GANs) are notorious for their training difficulty. Training by SGD will easily diverge or converge to a limit cycle, both of which lead to an ill-performing solution, while the Adam optimizer, as the default optimizer for GANs (Hsieh et al., 2020), can obtain better performance. The reason why these two optimizers behave so differently in GAN training has long been an open problem.
Traditionally, the training performance of min-max optimization is measured by first-order convergence, i.e., the norm of the gradient; but is this really the right measure for GAN training? After training GANs on two relatively simple datasets, MNIST and Fashion-MNIST, we find that, in a practical GAN training process, the Adam optimizer does not fully converge, since the norm of the discriminator's gradient remains quite high throughout training. Instead, it exhibits only one-sided convergence, as the norm of the generator's gradient actually converges to 0. This paper thus aims to explain this phenomenon by bridging the gap between theory and practice. On one hand, we investigate under which conditions Adam-type optimization algorithms have provable convergence for min-max optimization. Towards this end, a recent work (Liu et al., 2020) designs two algorithms, Optimistic Stochastic Gradient (OSG) and Optimistic AdaGrad (OAdaGrad), for solving a class of non-convex non-concave min-max problems and gives theoretical guarantees on their convergence. (Liu et al., 2020) also poses an open problem on the convergence proof of Adam-type algorithms, which is solved by this paper. On the other hand, we find that the MVI condition needed for our convergence proof does not hold in practice for GANs. Instead, we propose the much milder one-sided MVI condition, which tends to hold in practice and under which we provide a theoretical guarantee of the one-sided convergence of Adam-type algorithms. Despite some theoretical guarantees on the convergence of Adam-type algorithms for convex-concave or non-convex-concave min-max optimization, in the most general non-convex non-concave setting there is no theoretical guarantee of convergence. Comparatively speaking, proving the convergence of Adam-type algorithms is much more difficult, since they use an empirical version of momentum.
Although momentum has been shown to perform well in practice, it is difficult to analyze theoretically. Even in the standard convex setting, proving the convergence of Adam-type algorithms (Reddi et al., 2018; Zou et al., 2021) is much harder than for other adaptive algorithms such as AdaGrad (Duchi et al., 2011). In fact, the original version of Adam is known not to converge even in convex settings. Therefore, to formally analyze the convergence of Adam-type algorithms in min-max optimization, we also consider a "theoretically correct" version of Adam, which is an analog of AMSGrad (Reddi et al., 2018). This paper makes three main contributions. (1) We analyze Extra Gradient AMSGrad, an Adam-type algorithm used for solving non-convex non-concave min-max optimization problems as well as for GAN training. We prove that, under the standard MVI condition, the Extra Gradient AMSGrad algorithm provably converges to an ε-stationary point with O(dε⁻²) complexity in the deterministic setting and O(dε⁻⁴) complexity in the stochastic setting. (2) Although the standard MVI condition is a much milder assumption than convexity, we empirically show that the MVI condition does not hold for GAN objective functions in reality. Instead, the one-sided MVI condition proposed by us tends to hold, which is the mildest assumption used in any convergence proof for min-max optimization to date. Under the one-sided MVI condition, we modify the algorithm above using dual rate decay and theoretically prove its convergence rate. (3) We conduct empirical experiments on GAN training with the Extra Gradient AMSGrad algorithm and with Extra Gradient AMSGrad with dual rate decay. We show that they achieve much better performance than the Stochastic Gradient Descent Ascent (SGDA) algorithm.
Also , we empirically verify that our new one-sided MVI condition is indeed satisfied during GAN ’ s training while the previously proposed standard MVI condition is not , which makes the one-sided MVI condition much closer to reality than the standard version . After achieving all these results , we are eventually able to understand the one-sided convergence of Adam-type algorithms in min-max optimization as well as in GAN ’ s training . 2 BACKGROUND AND RELATED WORKS . In this section , we will introduce the background knowledge as well as related works on the following three fields : adaptive gradient methods , min-max optimization , and the convergence properties of multiple algorithms for min-max optimization problems . 2.1 ADAPTIVE GRADIENT METHODS AND ADAM-TYPE METHODS . We consider the simplest 1-dimensional unconstrained minimization problem : min x∈D⊆R f ( x ) . where f : D → R is a continuously differentiable function . As one of the most dominant algorithms on the optimization problem above , Stochastic Gradient Descent ( SGD ) was originally proposed by ( Goodfellow et al. , 2016 ) , which has been both empirically and theoretically proved effective , especially when facing large datasets and complicated models . To further improve the performance of SGD , several adaptive variants of SGD have been proposed , such as RMSprop , Adam ( Kingma & Ba , 2015 ) , AdaGrad ( Duchi et al. , 2011 ) , AMSGrad ( Reddi et al. , 2018 ) and AdaDelta ( Zeiler , 2012 ) . Distinguished from the vanilla gradient descent or its stochastic version SGD , adaptive gradient methods use a coordinate-wise scaling of the updating direction and each iteration relies on the history information of past gradients . In AdaGrad , we use arithmetic average when adopting history gradient information of each iteration while in Adam , RMSprop etc. , we use exponential moving average instead because its believed that the more current gradient information is more important . 
Although adaptive gradient methods and momentum based methods are two different routes on optimization , they are combined perfectly in Adam . Now we introduce the family of adaptive gradient methods and Adam-type , and all of them have the following form : mt+1 = ht∇f ( xt ) + rt ·mt , vt+1 = pt ( ∇f ( xt ) ) 2 + qt · vt xt+1 = xt − λt · mt+1√ vt+1 + ε . [ Adaptive ] Here , f is the objective function to minimize . h , r , p , q are scalars depending on t , λt is the learning rate of the t-th iteration and ε > 0 is a small constant used to protect the denominator from being close to 0 . From the formula above , we see that the momentummt is the weighted sum of the past gradients and vt is the weighted sum of the past squared gradients . When h = 1 , r = 0 , mt+1 = ∇f ( xt ) is just the current gradient . We start with the original Adam . vt+1 = αtvt + ( 1− αt ) ( ∇f ( xt ) ) 2 , mt+1 = βtmt + ( 1− βt ) ∇f ( xt ) xt+1 = xt − λ · mt+1√ vt+1 + ε . [ Adam ] As we can see , Adam is a combination of adaptive gradient method and momentum method . Here , the momentum term is empirical , meaning that it does not coincide with acceleration techniques that are theoretically sound , which creates extra difficult for the analysis . In Adam , we have ht + rt = pt + qt = 1 . When the αt = α , βt = β remains constant , there is a bias correction step where vt+1 ← vt+11−αt and mt+1 ← mt+1 1−βt . However , we may practically ignore this bias correction step since 11−αt and 1 1−βt rapidly approach to 1 . As one of the variants of Adam , AMSGrad has the following formulation : v̂t+1 = αtvt + ( 1− αt ) ( ∇f ( xt ) ) 2 , vt+1 = max ( vt , v̂t+1 ) mt+1 = βtvt + ( 1− βt ) ∇f ( xt ) , xt+1 = xt − λ · mt+1√ vt+1 + ε . [ AMSGrad ] As we can see , their difference is that the velocity term vt keeps increasing in AMSGrad . 
After showing the details of these traditional adaptive gradient methods and Adam-type methods , we introduce their convergence properties as well as their further variants . Reddi et al . ( 2018 ) shows that Adam does not converge in some settings where large gradient information is rarely encountered and it will die out quickly because of the “ short memory ” property of the exponential moving average . However , under some conditions , the convergence proofs of adaptive gradient methods have been obtained . Basu et al . ( 2018 ) proved the convergence rate of RMSprop and Adam when using deterministic gradients instead of stochastic gradients . Li & Orabona ( 2018 ) analyzed the convergence rate of AdaGrad under both convex and non-convex settings . All the papers above provide theoretical guarantee for the convergence of different types of adaptive gradient descent . After that , Chen et al . ( 2019 ) extends Adam to a broader class of Adam-type algorithms and provides its convergence analysis for non-convex optimization problems . In order to combine the fast convergence of adaptive methods and better generalization with momentum based methods , a number of new algorithms are proposed , such as SC-AdaGrad / SC-RMSprop ( Mukkamala & Hein , 2017 ) , AdamW ( Loshchilov & Hutter , 2019 ) , AdaBound ( Luo et al. , 2019 ) etc .. 2.2 MIN-MAX OPTIMIZATION . In the min-max optimization problem ( or saddle point problem ) , we have to solve : min x∈X max y∈Y φ ( x , y ) , [ SP ] where X ⊆ Rn1 , Y ⊆ Rn2 , and φ : X × Y → R is the objective function . When φ is convex on x and concave on y , we call it a convex-concave min-max optimization . Otherwise , it ’ s a more general non-convex non-concave min-max optimization . For the brevity , we denote z = ( x , y ) and Z = X × Y ⊆ Rn1+n2 . We also introduce our gradient vector field : V ( z ) = ( −∇xφ ( x , y ) , ∇yφ ( x , y ) ) , which are the update directions on both sides . 
The goal of [ SP ] is to find a tuple z∗ = ( x∗ , y∗ ) such that φ ( x∗ , y ) 6 φ ( x∗ , y∗ ) 6 φ ( x , y∗ ) holds for ∀x ∈ X , y ∈ Y , which is called the solution of [ SP ] . If the inequality above only holds in the local neighbourhood of z∗ , then z∗ can only be called a local solution . Notice that the necessary condition of being a solution ( or even a local solution ) is to be a stationary point of φ , which means V ( z∗ ) = 0 . Furthermore , if V is C1 , any local solution of [ SP ] must be stable , which means ∇2xxφ ( x∗ , y∗ ) 0 and ∇2yyφ ( x∗ , y∗ ) 0 . Next , we will introduce several commonly-used algorithms which are designed to solve [ SP ] . Stochastic Gradient Descent Ascent ( SGDA ) This is a simple extension of Stochastic Gradient Descent ( SGD ) algorithm for minimization problems ( Johnsen , 1959 ) . In the t-th iteration : zt+1 = zt + γt · V ( zt ; ωt ) , [ SGDA ] where ω1 , ω2 , · · · are the independent and identically distributed sequence of noises . V ( z , ω ) can be treated as a query to the stochastic first-order oracle ( SFO ) . In each iteration of SGDA , we need to query SFO once . Notice that , we simultaneously update x , y in each iteration of SGDA . Therefore , if we alternate the updates of x and y , we obtain a variant of SGDA , which is named as the alternating stochastic gradient descent ascent ( AltSGDA ) algorithm . Different from original SGDA , we have to make two queries to SFO in each iteration . One for zt = ( xt , yt ) , and the other for the intermediate step ( xt+1 , yt ) . Since original SGDA is not going to work even in the convex-concave setting ( such as minx maxy f ( x , y ) = xy ) , so researchers propose the following “ theoretically correct modification ” . Stochastic Extra-gradient ( SEG ) This is a different algorithm with the above SGDA , and it is originally proposed for solving the convex-concave setting of min-max optimization problems by Korpelevich ( 1976 ) . 
Given zt as a base , we take a virtual gradient descent ascent step and obtain a z̃t , which can be treated as the shadow of zt . Then we use the gradient at z′t as the update direction of zt . This process can be described as : z′t = zt + γt · V ( zt ; ω ( 1 ) t ) zt+1 = zt + γt · V ( z′t ; ω ( 2 ) t ) . [ SEG ] In each iteration , we need to make two queries to the SFO . One for the base zt and the other for the shadow z′t . However , in the first step of [ SEG ] , we can use the gradient at the previous shadow z ′ t−1 so that we only have to make only one query in each iteration and remember the query ’ s result of the previous step . This algorithm is called Optimistic Gradient or Popov ’ s Extra-gradient ( Popov , 1980 ) which can be described as : z′t = zt + γt · V ( z′t−1 ; ωt−1 ) zt+1 = zt + γt · V ( z′t ; ωt ) . [ OG ] As a widely used algorithm , it has been applied in multiple works ( Daskalakis et al. , 2018 ; Mertikopoulos et al. , 2019 ) . Under some mild assumptions , convergence rates are proved by many theoretical works and we will summarize them in the next section .
This paper analyzes the performance of Adam-type algorithms (AMSGrad, to be specific) in non-convex non-concave min-max optimization. The authors prove that Adam-type algorithms converge to a stationary point under the standard MVI assumption and under an even weaker one-sided MVI assumption, and verify their claims with experiments.
On the One-sided Convergence of Adam-type Algorithms in Non-convex Non-concave Min-max Optimization
1 INTRODUCTION. As one of the most popular optimizers in supervised deep learning tasks such as natural language processing (Chowdhury, 2003), as well as the main workhorse of generative adversarial network training (Goodfellow et al., 2014), Adam-type methods are widely used because of their minimal need for learning-rate tuning and their coordinate-wise adaptivity to local geometry. Starting from AdaGrad (Duchi et al., 2011), adaptive gradient methods have evolved into a variety of Adam-type algorithms, such as Adam (Kingma & Ba, 2015), RMSprop, AMSGrad (Reddi et al., 2018) and AdaDelta (Zeiler, 2012). In supervised learning, adaptive gradient methods and Adam-type algorithms play important roles. Especially in natural language processing (NLP), Adam-type algorithms are the go-to optimizer. Multiple NLP experiments show that sparse Adam outperforms non-adaptive algorithms such as Stochastic Gradient Descent (SGD), not only in solution quality but also in the convergence rates of both training and test error. It is worth mentioning that BERT (Devlin et al., 2018), the most popular pre-trained language model, also uses Adam as its optimizer, which illustrates the power of Adam-type algorithms. Adam-type algorithms are also very effective in min-max optimization. As a direct and widely used application of min-max optimization, generative adversarial networks (GANs) are notorious for their training difficulty. Training with SGD easily diverges or converges to a limit cycle, both of which lead to an ill-performing solution, whereas the Adam optimizer, the default optimizer for GANs (Hsieh et al., 2020), obtains much better performance. Why these two optimizers behave so differently in GAN training has long been an open problem.
Traditionally, the training performance of min-max optimization is measured by first-order convergence, i.e., the norm of the gradient. But does this picture hold in GAN training? After training GANs on two relatively simple datasets, MNIST and Fashion-MNIST, we find that in a practical training run the Adam optimizer does not fully converge: the norm of the discriminator's gradient remains quite high throughout training. Instead, training exhibits only one-sided convergence, as the norm of the generator's gradient does converge to 0. This paper aims to explain this phenomenon by bridging the gap between theory and practice. On the one hand, we study under which conditions Adam-type optimization algorithms have provable convergence for min-max optimization. Towards this end, a recent work (Liu et al., 2020) designs two algorithms, Optimistic Stochastic Gradient (OSG) and Optimistic AdaGrad (OAdaGrad), for solving a class of non-convex non-concave min-max problems and gives theoretical guarantees on their convergence. Liu et al. (2020) also pose an open problem on the convergence proof of Adam-type algorithms, which this paper solves. On the other hand, we find that the MVI condition needed for our convergence proof does not hold in practice for GANs. Instead, we propose the much milder one-sided MVI condition, which tends to hold in practice and under which we provide a theoretical guarantee of the one-sided convergence of Adam-type algorithms. Despite existing theoretical guarantees on the convergence of Adam-type algorithms in convex-concave or non-convex concave min-max optimization, in the most general non-convex non-concave setting no such guarantee exists. Comparatively speaking, proving the convergence of Adam-type algorithms is much more difficult, since they use an empirical version of momentum.
Although this empirical momentum performs well in practice, it is difficult to analyze theoretically. Even in the standard convex setting, proving the convergence of Adam-type algorithms (Reddi et al., 2018; Zou et al., 2021) is much harder than for other adaptive algorithms such as AdaGrad (Duchi et al., 2011). In fact, the original version of Adam is known not to converge even in convex settings. Therefore, to formally analyze the convergence of Adam-type algorithms in min-max optimization, we consider a "theoretically correct" version of Adam, an analog of AMSGrad (Reddi et al., 2018). This paper makes three main contributions. (1) We analyze Extra Gradient AMSGrad, an Adam-type algorithm for solving non-convex non-concave min-max optimization problems, including GAN training. We prove that, under the standard MVI condition, Extra Gradient AMSGrad provably converges to an ε-stationary point with $O(d\varepsilon^{-2})$ complexity in the deterministic setting and $O(d\varepsilon^{-4})$ complexity in the stochastic setting. (2) Although the standard MVI condition is a much milder assumption than convexity, we empirically show that it does not hold for GAN objective functions in practice. Instead, the one-sided MVI condition that we propose tends to hold; it is the mildest assumption used in any convergence proof for min-max optimization to date. Under the one-sided MVI condition, we modify the algorithm above using dual rate decay and prove its convergence rate. (3) We conduct experiments on GAN training with Extra Gradient AMSGrad and with Extra Gradient AMSGrad with dual rate decay, and show that both perform much better than the Stochastic Gradient Descent Ascent (SGDA) algorithm.
We also empirically verify that our new one-sided MVI condition is indeed satisfied during GAN training while the previously proposed standard MVI condition is not, which makes the one-sided MVI condition much closer to reality than the standard version. With these results, we can finally understand the one-sided convergence of Adam-type algorithms in min-max optimization and in GAN training. 2 BACKGROUND AND RELATED WORKS. In this section, we introduce the background and related work in three areas: adaptive gradient methods, min-max optimization, and the convergence properties of algorithms for min-max optimization problems. 2.1 ADAPTIVE GRADIENT METHODS AND ADAM-TYPE METHODS. We consider the simplest one-dimensional unconstrained minimization problem

$$\min_{x \in D \subseteq \mathbb{R}} f(x),$$

where $f : D \to \mathbb{R}$ is a continuously differentiable function. As one of the most dominant algorithms for this problem, Stochastic Gradient Descent (SGD) has proven effective both empirically and theoretically, especially for large datasets and complicated models (Goodfellow et al., 2016). To further improve its performance, several adaptive variants of SGD have been proposed, such as RMSprop, Adam (Kingma & Ba, 2015), AdaGrad (Duchi et al., 2011), AMSGrad (Reddi et al., 2018) and AdaDelta (Zeiler, 2012). In contrast to vanilla gradient descent and its stochastic version SGD, adaptive gradient methods use a coordinate-wise scaling of the update direction, and each iteration relies on the history of past gradients. AdaGrad uses an arithmetic average of the past gradient information, whereas Adam, RMSprop, etc., use an exponential moving average, based on the belief that more recent gradient information is more important.
Although adaptive gradient methods and momentum-based methods are two different routes in optimization, Adam combines them. The family of adaptive gradient and Adam-type methods can be written in the general form

$$m_{t+1} = h_t \nabla f(x_t) + r_t m_t, \qquad v_{t+1} = p_t (\nabla f(x_t))^2 + q_t v_t, \qquad x_{t+1} = x_t - \lambda_t \frac{m_{t+1}}{\sqrt{v_{t+1}} + \varepsilon}. \quad [\text{Adaptive}]$$

Here, $f$ is the objective function to minimize, $h_t, r_t, p_t, q_t$ are scalars depending on $t$, $\lambda_t$ is the learning rate of the $t$-th iteration, and $\varepsilon > 0$ is a small constant that keeps the denominator away from 0. From the formula above, we see that the momentum $m_t$ is a weighted sum of the past gradients and $v_t$ is a weighted sum of the past squared gradients. When $h_t = 1$ and $r_t = 0$, $m_{t+1} = \nabla f(x_t)$ is just the current gradient. We start with the original Adam:

$$v_{t+1} = \alpha_t v_t + (1 - \alpha_t)(\nabla f(x_t))^2, \qquad m_{t+1} = \beta_t m_t + (1 - \beta_t)\nabla f(x_t), \qquad x_{t+1} = x_t - \lambda \frac{m_{t+1}}{\sqrt{v_{t+1}} + \varepsilon}. \quad [\text{Adam}]$$

As we can see, Adam combines the adaptive gradient method with the momentum method. Its momentum term is empirical, in the sense that it does not coincide with acceleration techniques that are theoretically sound, which creates extra difficulty for the analysis. In Adam, we have $h_t + r_t = p_t + q_t = 1$. When $\alpha_t = \alpha$ and $\beta_t = \beta$ are constant, there is a bias-correction step $v_{t+1} \leftarrow \frac{v_{t+1}}{1 - \alpha^t}$ and $m_{t+1} \leftarrow \frac{m_{t+1}}{1 - \beta^t}$. In practice, this step can often be ignored, since $\frac{1}{1-\alpha^t}$ and $\frac{1}{1-\beta^t}$ rapidly approach 1. AMSGrad, a variant of Adam, has the formulation

$$\hat v_{t+1} = \alpha_t v_t + (1 - \alpha_t)(\nabla f(x_t))^2, \qquad v_{t+1} = \max(v_t, \hat v_{t+1}), \qquad m_{t+1} = \beta_t m_t + (1 - \beta_t)\nabla f(x_t), \qquad x_{t+1} = x_t - \lambda \frac{m_{t+1}}{\sqrt{v_{t+1}} + \varepsilon}. \quad [\text{AMSGrad}]$$

The difference is that the velocity term $v_t$ is non-decreasing in AMSGrad.
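As a concrete illustration, the Adam and AMSGrad updates above can be sketched in a few lines of NumPy (a minimal scalar sketch; the function names and the toy objective $f(x) = x^2$ are ours, not the paper's):

```python
import numpy as np

def adam_step(x, m, v, g, t, lr=0.01, alpha=0.999, beta=0.9, eps=1e-8):
    # Notation follows the text: alpha averages squared gradients (v),
    # beta averages gradients (m); t is 1-based for bias correction.
    m = beta * m + (1 - beta) * g
    v = alpha * v + (1 - alpha) * g ** 2
    m_hat = m / (1 - beta ** t)   # bias correction, quickly -> m
    v_hat = v / (1 - alpha ** t)
    x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x, m, v

def amsgrad_step(x, m, v, g, lr=0.01, alpha=0.999, beta=0.9, eps=1e-8):
    # AMSGrad keeps the running maximum of the averaged squared gradients,
    # so the per-coordinate step size never increases.
    m = beta * m + (1 - beta) * g
    v = np.maximum(v, alpha * v + (1 - alpha) * g ** 2)
    x = x - lr * m / (np.sqrt(v) + eps)
    return x, m, v

# Toy run: minimize f(x) = x^2 (gradient 2x) with both optimizers.
x_adam = x_ams = 2.0
m_a = v_a = m_s = v_s = 0.0
for t in range(1, 5001):
    x_adam, m_a, v_a = adam_step(x_adam, m_a, v_a, 2 * x_adam, t)
    x_ams, m_s, v_s = amsgrad_step(x_ams, m_s, v_s, 2 * x_ams)
```

Both iterates end up near the minimizer 0, though Adam keeps oscillating at a scale set by the learning rate, while AMSGrad's non-decreasing velocity damps the late-stage steps.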
Having introduced these traditional adaptive gradient and Adam-type methods, we now turn to their convergence properties and further variants. Reddi et al. (2018) show that Adam does not converge in some settings where large gradients are rarely encountered, because that gradient information dies out quickly due to the "short memory" of the exponential moving average. Under some conditions, however, convergence proofs for adaptive gradient methods have been obtained. Basu et al. (2018) proved convergence rates for RMSprop and Adam when using deterministic instead of stochastic gradients. Li & Orabona (2018) analyzed the convergence rate of AdaGrad in both convex and non-convex settings. All of these works provide theoretical guarantees for the convergence of different adaptive gradient methods. After that, Chen et al. (2019) extended Adam to a broader class of Adam-type algorithms and provided a convergence analysis for non-convex optimization problems. To combine the fast convergence of adaptive methods with the better generalization of momentum-based methods, a number of new algorithms have been proposed, such as SC-AdaGrad / SC-RMSprop (Mukkamala & Hein, 2017), AdamW (Loshchilov & Hutter, 2019), and AdaBound (Luo et al., 2019). 2.2 MIN-MAX OPTIMIZATION. In the min-max optimization (or saddle point) problem, we have to solve

$$\min_{x \in X} \max_{y \in Y} \varphi(x, y), \quad [\text{SP}]$$

where $X \subseteq \mathbb{R}^{n_1}$, $Y \subseteq \mathbb{R}^{n_2}$, and $\varphi : X \times Y \to \mathbb{R}$ is the objective function. When $\varphi$ is convex in $x$ and concave in $y$, we call this convex-concave min-max optimization; otherwise, it is the more general non-convex non-concave min-max optimization. For brevity, we denote $z = (x, y)$ and $Z = X \times Y \subseteq \mathbb{R}^{n_1 + n_2}$. We also introduce the gradient vector field

$$V(z) = \big({-\nabla_x \varphi(x, y)},\ \nabla_y \varphi(x, y)\big),$$

which gives the update directions on both sides.
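To make the vector field V concrete, the sketch below evaluates it for the bilinear game $\varphi(x, y) = xy$ and numerically checks the inner product $\langle V(z), z - z^* \rangle$ that MVI-type conditions constrain. The sign convention $\langle V(z), z - z^* \rangle \le 0$ is our assumption for this descent-ascent V; the paper's formal MVI statement is not reproduced in this excerpt.

```python
import numpy as np

def V(z):
    # Descent-ascent field for phi(x, y) = x * y:
    # V(z) = (-d phi/dx, d phi/dy) = (-y, x), as defined in the text.
    x, y = z
    return np.array([-y, x])

# MVI-style check (sign convention assumed for this descent-ascent V):
# <V(z), z - z*> <= 0 for all z, with candidate solution z* = (0, 0).
z_star = np.zeros(2)
rng = np.random.default_rng(0)
vals = [float(V(z) @ (z - z_star)) for z in rng.normal(size=(1000, 2))]
# For phi = x*y the inner product is -y*x + x*y = 0 for every z,
# so the condition holds with equality at z* = (0, 0).
```

For this particular game the inequality is tight everywhere, which is why it serves as the standard example of a problem that is monotone yet hard for naive gradient methods.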
The goal of [SP] is to find a tuple $z^* = (x^*, y^*)$ such that

$$\varphi(x^*, y) \le \varphi(x^*, y^*) \le \varphi(x, y^*) \quad \text{for all } x \in X,\ y \in Y,$$

which is called a solution of [SP]. If the inequality only holds in a local neighbourhood of $z^*$, then $z^*$ is only a local solution. Note that a necessary condition for being a solution (or even a local solution) is being a stationary point of $\varphi$, i.e., $V(z^*) = 0$. Furthermore, if $V$ is $C^1$, any local solution of [SP] must be stable, meaning $\nabla^2_{xx} \varphi(x^*, y^*) \succeq 0$ and $\nabla^2_{yy} \varphi(x^*, y^*) \preceq 0$. Next, we introduce several commonly used algorithms for solving [SP]. Stochastic Gradient Descent Ascent (SGDA). This is a simple extension of the Stochastic Gradient Descent (SGD) algorithm for minimization problems (Johnsen, 1959). In the $t$-th iteration,

$$z_{t+1} = z_t + \gamma_t V(z_t; \omega_t), \quad [\text{SGDA}]$$

where $\omega_1, \omega_2, \dots$ is an i.i.d. sequence of noises. Evaluating $V(z; \omega)$ can be treated as a query to the stochastic first-order oracle (SFO); each iteration of SGDA requires one SFO query. Note that SGDA updates $x$ and $y$ simultaneously. If we instead alternate the updates of $x$ and $y$, we obtain a variant of SGDA named alternating stochastic gradient descent ascent (AltSGDA). Unlike the original SGDA, it makes two SFO queries per iteration: one at $z_t = (x_t, y_t)$ and one at the intermediate point $(x_{t+1}, y_t)$. Since the original SGDA fails even in the convex-concave setting (e.g., $\min_x \max_y f(x, y) = xy$), researchers have proposed the following "theoretically correct" modification.
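The failure of plain GDA on $\min_x \max_y xy$ mentioned above is easy to reproduce; in this deterministic sketch the iterates spiral away from the saddle point (0, 0):

```python
import numpy as np

def V(z):
    x, y = z
    return np.array([-y, x])   # descent-ascent field for phi(x, y) = x * y

# Deterministic simultaneous GDA: z_{t+1} = z_t + gamma * V(z_t).
z = np.array([1.0, 1.0])
for _ in range(100):
    z = z + 0.1 * V(z)
# Each step scales the norm by sqrt(1 + gamma^2) > 1, so the iterates
# spiral away from the saddle point (0, 0) instead of converging.
```

The update matrix $I + \gamma A$ with the rotation generator $A$ has spectral radius $\sqrt{1 + \gamma^2} > 1$ for any step size, so no tuning of $\gamma$ fixes this.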
Stochastic Extra-gradient (SEG). This algorithm differs from SGDA above and was originally proposed for the convex-concave setting of min-max optimization by Korpelevich (1976). Given the base point $z_t$, we take a virtual gradient descent ascent step to obtain $z'_t$, which can be viewed as the shadow of $z_t$; we then use the gradient at $z'_t$ as the update direction for $z_t$:

$$z'_t = z_t + \gamma_t V(z_t; \omega^{(1)}_t), \qquad z_{t+1} = z_t + \gamma_t V(z'_t; \omega^{(2)}_t). \quad [\text{SEG}]$$

Each iteration requires two SFO queries: one at the base $z_t$ and one at the shadow $z'_t$. However, in the first step of [SEG] we can reuse the gradient at the previous shadow $z'_{t-1}$, so that only one new query is needed per iteration, remembering the result of the previous query. This algorithm is called Optimistic Gradient or Popov's Extra-gradient (Popov, 1980):

$$z'_t = z_t + \gamma_t V(z'_{t-1}; \omega_{t-1}), \qquad z_{t+1} = z_t + \gamma_t V(z'_t; \omega_t). \quad [\text{OG}]$$

As a widely used algorithm, it has been applied in multiple works (Daskalakis et al., 2018; Mertikopoulos et al., 2019). Under some mild assumptions, convergence rates have been proved in many theoretical works, which we summarize in the next section.
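On the same bilinear game, the extra-gradient lookahead of [SEG] (here in its deterministic form) contracts toward the saddle point where plain GDA diverged:

```python
import numpy as np

def V(z):
    x, y = z
    return np.array([-y, x])   # descent-ascent field for phi(x, y) = x * y

# Deterministic extra-gradient: step to the "shadow" point first,
# then update the base point using the gradient taken at the shadow.
z = np.array([1.0, 1.0])
for _ in range(100):
    z_shadow = z + 0.1 * V(z)
    z = z + 0.1 * V(z_shadow)
# The lookahead makes the update map a contraction on this game, so
# the iterates approach the saddle point (0, 0).
```

On this linear field the combined update is $(1 - \gamma^2)I + \gamma A$, whose norm $\sqrt{1 - \gamma^2 + \gamma^4}$ is below 1 for $\gamma < 1$, which is exactly the contraction the lookahead buys.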
This manuscript develops several algorithms (e.g., AMSGrad-EG, AMSGrad-EG-DRD) for non-convex non-concave min-max optimization. The convergence of AMSGrad-EG-DRD is shown under the one-sided MVI condition, and polynomial-time complexity results are established. Some toy experiments are conducted for GANs on the MNIST and Fashion-MNIST datasets.
Ab-Initio Potential Energy Surfaces by Pairing GNNs with Neural Wave Functions
1 INTRODUCTION In recent years, machine learning has gained importance in computational quantum physics and chemistry for accelerating material discovery by approximating quantum mechanical (QM) calculations (Huang & von Lilienfeld, 2021). In particular, a lot of work has gone into building surrogate models that reproduce QM properties, e.g., energies. These models learn from datasets created using classical techniques such as density functional theory (DFT) (Ramakrishnan et al., 2014; Klicpera et al., 2019) or coupled clusters (CCSD) (Chmiela et al., 2018). While this approach has shown great success in recovering the baseline calculations, it suffers from several disadvantages. Firstly, due to the tremendous success of graph neural networks (GNNs) in this area, the quality of the regression target has become the limiting factor for accuracy (Klicpera et al., 2019; Qiao et al., 2021; Batzner et al., 2021), i.e., the network's prediction is closer to the data label than the data label is to the actual property. Secondly, these surrogate models are subject to the usual difficulties of neural networks, such as overconfidence outside the training domain (Pappu & Paige, 2020; Guo et al., 2017). In orthogonal research, neural networks have been used as wave function Ansätze to solve the stationary Schrödinger equation (Kessler et al., 2021; Han et al., 2019). These methods use the variational Monte Carlo (VMC) (McMillan, 1965) framework to iteratively optimize a neural wave function towards the ground-state electronic wave function of a given system. Chemists refer to such methods as ab-initio, whereas the machine learning community may view this as a form of self-generative learning, as no dataset is required: the data (electron positions) are sampled from the wave function itself, and the loss is derived from the Schrödinger equation (Ceperley et al., 1977).
This approach has shown great success, as multiple authors report results outperforming the traditional 'gold standard' CCSD on various systems (Pfau et al., 2020; Hermann et al., 2020). However, these techniques require expensive training for each geometry, resulting in high computational requirements and thus limiting their application to small sets of configurations. In this work, we accelerate VMC with neural wave functions by proposing an architecture that solves the Schrödinger equation for multiple systems simultaneously. The core idea is to predict a set of parameters such that a given wave function, e.g., FermiNet (Pfau et al., 2020), solves the Schrödinger equation for a specific geometry. Previously, these parameters were obtained by optimizing a separate wave function for each geometry. We improve on this procedure by generating the parameters with a GNN, as illustrated in Figure 1. This enables us to capture continuous subsets of the potential energy surface in one training pass, removing the need for costly retraining. Additionally, we take inspiration from supervised surrogate networks and enforce the invariance of the energy to physical symmetries such as translation, rotation, and reflection (Schütt et al., 2018). While these symmetries hold for observable quantities such as energies, the wave function itself may not share them. We solve this issue by defining a coordinate system that is equivariant to the symmetries of the energy. In our experiments, our Potential Energy Surface Network (PESNet) consistently matches or surpasses the results of the previous best neural wave functions while training in less than 1/40 of the time for high-resolution potential energy surface scans. 2 RELATED WORK. Molecular property prediction has seen a surge in publications in recent years, with the goal of predicting QM properties such as the energy of a system.
Classically, features were constructed by hand and fed into a machine learning model to predict target properties (Christensen et al., 2020; Behler, 2011; Bartók et al., 2013). Recently, GNNs have proven to be more accurate and have taken over the field (Yang et al., 2019; Klicpera et al., 2019; Schütt et al., 2018). As GNNs approach the accuracy limit, recent work focuses on improving generalization by integrating calculations from computational chemistry. For instance, QDF (Tsubaki & Mizoguchi, 2020) and EANN (Zhang et al., 2019) approximate the electron density, while OrbNet (Qiao et al., 2020) and UNiTE (Qiao et al., 2021) include features taken from QM calculations. Another promising direction is ∆-ML models, which only predict the delta between a high-accuracy QM calculation and a faster low-accuracy one (Wengert et al., 2021). Despite their success, surrogate models lack reliability: even if uncertainty estimates are available (Lamb & Paige, 2020; Hirschfeld et al., 2020), generalization outside of the training regime is unpredictable (Guo et al., 2017). While such supervised models are architecturally related, they pursue a fundamentally different objective than PESNet: where surrogate models approximate QM calculations from data, this work performs the exact QM calculations from scratch. Neural wave function Ansätze in combination with the VMC framework have recently been proposed as an alternative (Carleo & Troyer, 2017) to classical self-consistent field (SCF) methods such as Hartree-Fock, DFT, or CCSD for solving the Schrödinger equation (Szabo & Ostlund, 2012). However, early works were limited to small systems and low accuracy (Kessler et al., 2021; Han et al., 2019; Choo et al., 2020). Recently, FermiNet (Pfau et al., 2020) and PauliNet (Hermann et al., 2020) presented more scalable approaches with accuracy on par with the best traditional QM computations.
To further improve accuracy, Wilson et al. (2021) coupled FermiNet with diffusion Monte Carlo (DMC). But all these methods need to be trained for each configuration individually. To address this issue, weight-sharing has been proposed to reduce the time per training, though initially limited to non-fermionic systems (Yang et al., 2020). In concurrent work, Scherbela et al. (2021) extend this idea to electronic wave functions; however, their DeepErwin model still requires separate models for each geometry, does not account for symmetries, and achieves lower accuracy, as we show in Section 4. Other efforts have aimed at accelerating Ansätze by replacing costly determinant operations, but this comes at a significant loss of accuracy (Acevedo et al., 2020). 3 METHOD. To build a model that solves the Schrödinger equation for many geometries simultaneously and accounts for the symmetries of the energy, we use three key ingredients. Firstly, to solve the Schrödinger equation, we leverage the VMC framework, i.e., we iteratively update our wave function model (WFModel) until it converges to the ground-state electronic wave function. The WFModel $\psi_\theta(\vec{r}) : \mathbb{R}^{N \times 3} \to \mathbb{R}$ is a function parametrized by $\theta$ that maps electron configurations to amplitudes. It must obey Fermi-Dirac statistics, i.e., the sign of the output must flip under the exchange of two electrons of the same spin. As we cover in Section 3.4, the WFModel is essential for sampling electron configurations and computing energies. Secondly, we extend this to multiple geometries by introducing a GNN that reparametrizes the WFModel. In reference to meta-learning, we call this the MetaGNN. It takes the nuclei coordinates $\vec{R}_m$ and charges $Z_m$ and outputs subsets $\omega, \omega_m \subset \theta$, $m \in \{1, \dots, M\}$, of the WFModel's parameters. Thanks to message passing, the MetaGNN can capture the full 3D geometry of the nuclei graph.
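The reparametrization idea can be illustrated with a toy hypernetwork that maps a geometry descriptor to the full weight set of a small model. Everything here — the shapes, the single bond-length input R, the names `meta_apply`/`mlp_apply` — is a hypothetical stand-in for the actual MetaGNN/WFModel pair, not the PESNet architecture itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_apply(params, x):
    """Tiny one-hidden-layer MLP standing in for the wave function model."""
    W1, b1, W2, b2 = params
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Stand-in "meta network": maps a geometry descriptor (here a single
# bond length R) to the full parameter set of the model above.  In
# PESNet this role is played by a message-passing GNN over the nuclei.
meta_W = rng.normal(size=(1, 13))

def meta_apply(R):
    feats = np.tanh(np.array([[R]]) @ meta_W)[0]   # 13 raw outputs
    return (feats[:4].reshape(1, 4),    # W1
            feats[4:8],                 # b1
            feats[8:12].reshape(4, 1),  # W2
            feats[12:])                 # b2

# One meta forward pass per geometry yields a full set of model weights,
# so no per-geometry optimization loop is needed at evaluation time.
psi_a = mlp_apply(meta_apply(0.7), np.array([[0.5]]))
psi_b = mlp_apply(meta_apply(1.4), np.array([[0.5]]))
```

Two different geometries (R = 0.7 vs. 1.4) yield two different generated models, evaluated here on the same input: this is the sense in which a single trained meta network covers a continuous family of systems.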
Lastly, as we prove in Appendix A, to predict energies invariant to rotations and reflections the wave function needs to be equivariant. We accomplish this by constructing an equivariant coordinate system $E = [\vec{e}_1, \vec{e}_2, \vec{e}_3]$ based on principal component analysis (PCA). Together, these components form PESNet, whose architecture is shown in Figure 2. Since sampling and energy computations only need the WFModel, we only need a single forward pass of the MetaGNN for each geometry during evaluation. Furthermore, its end-to-end differentiability facilitates optimization (see Section 3.4), and we may benefit from better generalization thanks to our equivariant wave function (Elesedy & Zaidi, 2021; Kondor & Trivedi, 2018).

Notation. We use bold lower-case letters $h$ for vectors, bold upper-case letters $W$ for matrices, arrows to indicate vectors in 3D, $\vec{r}_i$ to denote electron coordinates, and $\vec{R}_m, Z_m$ for nuclei coordinates and charges, respectively. $[\circ, \circ]$ and $[\circ]_{i=1}^N$ denote vector concatenation.

3.1 WAVE FUNCTION MODEL

We use the FermiNet (Pfau et al., 2020) architecture and augment it with a new feature construction that is invariant to reindexing nuclei. In the original FermiNet, the inputs to the first layer are simply concatenations of the electron-nuclei distances. This causes the features to permute if the nuclei indexing changes. To circumvent this issue, we propose a new feature construction as follows:
$$h_i^1 = \sum_{m=1}^{M} \mathrm{MLP}\left(W\left[(\vec{r}_i - \vec{R}_m)E, \|\vec{r}_i - \vec{R}_m\|\right] + z_m\right), \qquad (1)$$
$$g_{ij}^1 = \left((\vec{r}_i - \vec{r}_j)E, \|\vec{r}_i - \vec{r}_j\|\right) \qquad (2)$$
where $z_m$ is an embedding of the $m$-th nucleus and $E \in \mathbb{R}^{3\times 3}$ is our equivariant coordinate system (see Section 3.3). By summing over all nuclei instead of concatenating we obtain the desired invariance. The features are then iteratively updated using the update rule from Wilson et al.
(2021):
$$h_i^{t+1} = \sigma\left(W_{\text{single}}^t\left[h_i^t, \sum_{j\in A^\uparrow} g_{ij}^t, \sum_{j\in A^\downarrow} g_{ij}^t\right] + b_{\text{single}}^t + W_{\text{global}}^t\left[\sum_{j\in A^\uparrow} h_j^t, \sum_{j\in A^\downarrow} h_j^t\right]\right), \qquad (3)$$
$$g_{ij}^{t+1} = \sigma\left(W_{\text{double}}^t g_{ij}^t + b_{\text{double}}^t\right) \qquad (4)$$
where $\sigma$ is an activation function and $A^\uparrow$ and $A^\downarrow$ are the index sets of the spin-up and spin-down electrons, respectively. We also add skip connections where possible. We chose $\sigma := \tanh$ since $\sigma$ must be at least twice differentiable to compute the energy (see Section 3.4). After $L_{\text{WF}}$ updates, we take the electron embeddings $h_i^{L_{\text{WF}}}$ and construct $K$ orbitals:
$$\phi_{ij}^{k\alpha} = \left(w_i^{k\alpha} h_j^{L_{\text{WF}}} + b_{\text{orbital},i}^{k\alpha}\right) \sum_{m}^{M} \pi_{im}^{k\alpha} \exp\left(-\sigma_{im}^{k\alpha}\|\vec{r}_j - \vec{R}_m\|\right), \quad \pi_{im}^{k\alpha} = \mathrm{Sigmoid}(p_{im}^{k\alpha}), \quad \sigma_{im}^{k\alpha} = \mathrm{Softplus}(s_{im}^{k\alpha}) \qquad (5)$$
where $k \in \{1, \dots, K\}$, $\alpha \in \{\uparrow, \downarrow\}$, $i, j \in A^\alpha$, and $p_{im}^{k\alpha}, s_{im}^{k\alpha}$ are free parameters. Here, we use the sigmoid and softplus functions to ensure that the wave function decays to 0 infinitely far away from any nucleus. To satisfy antisymmetry under the exchange of same-spin electrons, the output is a weighted sum of determinants (Hutter, 2020):
$$\psi(\vec{r}) = \sum_{k=1}^{K} w_k \det\phi^{k\uparrow} \det\phi^{k\downarrow}. \qquad (6)$$
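The nuclei-reindexing invariance of the feature construction in Eq. (1) can be verified numerically. The sketch below uses hypothetical shapes and a single tanh layer as a stand-in for the MLP; it is an illustration of the sum-over-nuclei idea, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
M, d = 3, 8                              # number of nuclei, feature width (illustrative)
W = rng.normal(size=(d, 4))              # maps [(r_i - R_m)E, |r_i - R_m|] to d features
A = rng.normal(size=(d, d))              # stand-in for the MLP (one tanh layer here)
E = np.eye(3)                            # equivariant frame; identity just for this sketch

def h1(r_i, R, z):
    """Eq. (1): h_i^1 = sum_m MLP(W [(r_i - R_m) E, |r_i - R_m|] + z_m)."""
    out = np.zeros(d)
    for R_m, z_m in zip(R, z):
        diff = (r_i - R_m) @ E
        feat = np.concatenate([diff, [np.linalg.norm(r_i - R_m)]])
        out += np.tanh(A @ (W @ feat + z_m))   # summing (not concatenating) over nuclei
    return out

r_i = rng.normal(size=3)                 # one electron position
R = rng.normal(size=(M, 3))              # nuclei positions
z = rng.normal(size=(M, d))              # per-nucleus embeddings z_m
perm = np.array([2, 0, 1])               # relabel the nuclei
# Reindexing nuclei permutes the summands but leaves the sum unchanged.
assert np.allclose(h1(r_i, R, z), h1(r_i, R[perm], z[perm]))
```

Note that relabeling a nucleus permutes its embedding $z_m$ along with its position, which is why the permutation is applied to both arrays; a concatenation of the per-nucleus terms would not survive this test.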
The paper develops a neural-network-based variational Ansatz for modeling wave functions. The authors build their model on top of the FermiNet architecture with a few modifications: they use a different feature embedding approach that is invariant with respect to basic spatial symmetries. In addition, the authors use a GNN "hypernetwork" that predicts the parameters of the variational wave function for a given configuration of the system. With these modifications, the authors show that the model can be optimized on a range of system configurations simultaneously. The optimization is performed in a standard VMC setting. The results are compared on commonly studied problems such as variations of the energy of the H4 molecule, the hydrogen chain system, the nitrogen molecule, and cyclobutadiene. The authors find that their approach generally achieves results similar to those of a FermiNet model while training on multiple system configurations simultaneously, reducing the time needed to obtain variational energies for multiple configurations.
SP:6b55a41e6cde3b4e740941d48c237127c982da27
Ab-Initio Potential Energy Surfaces by Pairing GNNs with Neural Wave Functions
1 INTRODUCTION

In recent years, machine learning has gained importance in computational quantum physics and chemistry to accelerate material discovery by approximating quantum mechanical (QM) calculations (Huang & von Lilienfeld, 2021). In particular, a lot of work has gone into building surrogate models to reproduce QM properties, e.g., energies. These models learn from datasets created using classical techniques such as density functional theory (DFT) (Ramakrishnan et al., 2014; Klicpera et al., 2019) or coupled clusters (CCSD) (Chmiela et al., 2018). While this approach has shown great success in recovering the baseline calculations, it suffers from several disadvantages. Firstly, due to the tremendous success of graph neural networks (GNNs) in this area, the quality of the regression target has become the limiting factor for accuracy (Klicpera et al., 2019; Qiao et al., 2021; Batzner et al., 2021), i.e., the network's prediction is closer to the data label than the data label is to the actual property. Secondly, these surrogate models are subject to the usual difficulties of neural networks, such as overconfidence outside the training domain (Pappu & Paige, 2020; Guo et al., 2017). In orthogonal research, neural networks have been used as wave function Ansätze to solve the stationary Schrödinger equation (Kessler et al., 2021; Han et al., 2019). These methods use the variational Monte Carlo (VMC) (McMillan, 1965) framework to iteratively optimize a neural wave function to obtain the ground-state electronic wave function of a given system. Chemists refer to such methods as ab-initio, whereas the machine learning community may refer to this as a form of self-generative learning, as no dataset is required. The data (electron positions) are sampled from the wave function itself, and the loss is derived from the Schrödinger equation (Ceperley et al., 1977).
This approach has shown great success, as multiple authors report results outperforming the traditional 'gold-standard' CCSD on various systems (Pfau et al., 2020; Hermann et al., 2020). However, these techniques require expensive training for each geometry, resulting in high computational requirements and, thus, limiting their application to small sets of configurations. In this work, we accelerate VMC with neural wave functions by proposing an architecture that solves the Schrödinger equation for multiple systems simultaneously. The core idea is to predict a set of parameters such that a given wave function, e.g., FermiNet (Pfau et al., 2020), solves the Schrödinger equation for a specific geometry. Previously, these parameters were obtained by optimizing a separate wave function for each geometry. We improve this procedure by generating the parameters with a GNN, as illustrated in Figure 1. This enables us to capture continuous subsets of the potential energy surface in one training pass, removing the need for costly retraining. Additionally, we take inspiration from supervised surrogate networks and enforce the invariance of the energy to physical symmetries such as translation, rotation, and reflection (Schütt et al., 2018). While these symmetries hold for observable metrics such as energies, the wave function itself may not have these symmetries. We solve this issue by defining a coordinate system that is equivariant to the symmetries of the energy. In our experiments, our Potential Energy Surface Network (PESNet) consistently matches or surpasses the results of the previous best neural wave functions while training in less than 1/40 of the time for high-resolution potential energy surface scans.

2 RELATED WORK

Molecular property prediction has seen a surge in publications in recent years with the goal of predicting QM properties such as the energy of a system.
Classically, features were constructed by hand and fed into a machine learning model to predict target properties (Christensen et al., 2020; Behler, 2011; Bartók et al., 2013). Recently, GNNs have proven to be more accurate and took over the field (Yang et al., 2019; Klicpera et al., 2019; Schütt et al., 2018). As GNNs approach the accuracy limit, recent work focuses on improving generalization by integrating calculations from computational chemistry. For instance, QDF (Tsubaki & Mizoguchi, 2020) and EANN (Zhang et al., 2019) approximate the electron density, while OrbNet (Qiao et al., 2020) and UNiTE (Qiao et al., 2021) include features taken from QM calculations. Another promising direction is ∆-ML models, which only predict the delta between a high-accuracy QM calculation and a faster low-accuracy one (Wengert et al., 2021). Despite their success, surrogate models lack reliability. Even if uncertainty estimates are available (Lamb & Paige, 2020; Hirschfeld et al., 2020), generalization outside of the training regime is unpredictable (Guo et al., 2017). While such supervised models are architecturally related, they pursue a fundamentally different objective than PESNet. Where surrogate models approximate QM calculations from data, this work focuses on performing the exact QM calculations from scratch. Neural wave function Ansätze in combination with the VMC framework have recently been proposed as an alternative (Carleo & Troyer, 2017) to classical self-consistent field (SCF) methods such as Hartree-Fock, DFT, or CCSD to solve the Schrödinger equation (Szabo & Ostlund, 2012). However, early works were limited to small systems and low accuracy (Kessler et al., 2021; Han et al., 2019; Choo et al., 2020). Recently, FermiNet (Pfau et al., 2020) and PauliNet (Hermann et al., 2020) presented more scalable approaches and accuracy on par with the best traditional QM computations.
To further improve accuracy, Wilson et al. (2021) coupled FermiNet with diffusion Monte Carlo (DMC). However, all these methods need to be trained for each configuration individually. To address this issue, weight-sharing has been proposed to reduce the time per training, but this was initially limited to non-fermionic systems (Yang et al., 2020). In a concurrent work, Scherbela et al. (2021) extend this idea to electronic wave functions. However, their DeepErwin model still requires separate models for each geometry, does not account for symmetries, and achieves lower accuracy, as we show in Section 4. Other efforts have aimed at accelerating Ansätze by replacing costly determinant operations, but this comes at a significant loss of accuracy (Acevedo et al., 2020).

3 METHOD

To build a model that solves the Schrödinger equation for many geometries simultaneously and accounts for the symmetries of the energy, we use three key ingredients. Firstly, to solve the Schrödinger equation, we leverage the VMC framework, i.e., we iteratively update our wave function model (WFModel) until it converges to the ground-state electronic wave function. The WFModel $\psi_\theta(\vec{r}): \mathbb{R}^{N\times 3} \to \mathbb{R}$ is a function parametrized by $\theta$ that maps electron configurations to amplitudes. It must obey Fermi-Dirac statistics, i.e., the sign of the output must flip under the exchange of two electrons of the same spin. As we cover in Section 3.4, the WFModel is essential for sampling electron configurations and computing energies. Secondly, we extend this to multiple geometries by introducing a GNN that reparametrizes the WFModel. In reference to meta-learning, we call this the MetaGNN. It takes the nuclei coordinates $\vec{R}_m$ and charges $Z_m$ and outputs subsets $\omega, \omega_m \subset \theta$, $m \in \{1, \dots, M\}$, of the WFModel's parameters. Thanks to message passing, the MetaGNN can capture the full 3D geometry of the nuclei graph.
Lastly, as we prove in Appendix A, to predict energies invariant to rotations and reflections the wave function needs to be equivariant. We accomplish this by constructing an equivariant coordinate system $E = [\vec{e}_1, \vec{e}_2, \vec{e}_3]$ based on principal component analysis (PCA). Together, these components form PESNet, whose architecture is shown in Figure 2. Since sampling and energy computations only need the WFModel, we only need a single forward pass of the MetaGNN for each geometry during evaluation. Furthermore, its end-to-end differentiability facilitates optimization (see Section 3.4), and we may benefit from better generalization thanks to our equivariant wave function (Elesedy & Zaidi, 2021; Kondor & Trivedi, 2018).

Notation. We use bold lower-case letters $h$ for vectors, bold upper-case letters $W$ for matrices, arrows to indicate vectors in 3D, $\vec{r}_i$ to denote electron coordinates, and $\vec{R}_m, Z_m$ for nuclei coordinates and charges, respectively. $[\circ, \circ]$ and $[\circ]_{i=1}^N$ denote vector concatenation.

3.1 WAVE FUNCTION MODEL

We use the FermiNet (Pfau et al., 2020) architecture and augment it with a new feature construction that is invariant to reindexing nuclei. In the original FermiNet, the inputs to the first layer are simply concatenations of the electron-nuclei distances. This causes the features to permute if the nuclei indexing changes. To circumvent this issue, we propose a new feature construction as follows:
$$h_i^1 = \sum_{m=1}^{M} \mathrm{MLP}\left(W\left[(\vec{r}_i - \vec{R}_m)E, \|\vec{r}_i - \vec{R}_m\|\right] + z_m\right), \qquad (1)$$
$$g_{ij}^1 = \left((\vec{r}_i - \vec{r}_j)E, \|\vec{r}_i - \vec{r}_j\|\right) \qquad (2)$$
where $z_m$ is an embedding of the $m$-th nucleus and $E \in \mathbb{R}^{3\times 3}$ is our equivariant coordinate system (see Section 3.3). By summing over all nuclei instead of concatenating we obtain the desired invariance. The features are then iteratively updated using the update rule from Wilson et al.
(2021):
$$h_i^{t+1} = \sigma\left(W_{\text{single}}^t\left[h_i^t, \sum_{j\in A^\uparrow} g_{ij}^t, \sum_{j\in A^\downarrow} g_{ij}^t\right] + b_{\text{single}}^t + W_{\text{global}}^t\left[\sum_{j\in A^\uparrow} h_j^t, \sum_{j\in A^\downarrow} h_j^t\right]\right), \qquad (3)$$
$$g_{ij}^{t+1} = \sigma\left(W_{\text{double}}^t g_{ij}^t + b_{\text{double}}^t\right) \qquad (4)$$
where $\sigma$ is an activation function and $A^\uparrow$ and $A^\downarrow$ are the index sets of the spin-up and spin-down electrons, respectively. We also add skip connections where possible. We chose $\sigma := \tanh$ since $\sigma$ must be at least twice differentiable to compute the energy (see Section 3.4). After $L_{\text{WF}}$ updates, we take the electron embeddings $h_i^{L_{\text{WF}}}$ and construct $K$ orbitals:
$$\phi_{ij}^{k\alpha} = \left(w_i^{k\alpha} h_j^{L_{\text{WF}}} + b_{\text{orbital},i}^{k\alpha}\right) \sum_{m}^{M} \pi_{im}^{k\alpha} \exp\left(-\sigma_{im}^{k\alpha}\|\vec{r}_j - \vec{R}_m\|\right), \quad \pi_{im}^{k\alpha} = \mathrm{Sigmoid}(p_{im}^{k\alpha}), \quad \sigma_{im}^{k\alpha} = \mathrm{Softplus}(s_{im}^{k\alpha}) \qquad (5)$$
where $k \in \{1, \dots, K\}$, $\alpha \in \{\uparrow, \downarrow\}$, $i, j \in A^\alpha$, and $p_{im}^{k\alpha}, s_{im}^{k\alpha}$ are free parameters. Here, we use the sigmoid and softplus functions to ensure that the wave function decays to 0 infinitely far away from any nucleus. To satisfy antisymmetry under the exchange of same-spin electrons, the output is a weighted sum of determinants (Hutter, 2020):
$$\psi(\vec{r}) = \sum_{k=1}^{K} w_k \det\phi^{k\uparrow} \det\phi^{k\downarrow}. \qquad (6)$$
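The determinant form in Eq. (6) is what enforces Fermi-Dirac antisymmetry: exchanging two same-spin electrons swaps two columns of the corresponding orbital matrix, which flips the sign of its determinant and hence of ψ. A toy numerical check, using simple single-particle orbitals as stand-ins for the full Eq. (5) construction (the real model's embeddings depend on all electrons, but the sign argument is the same):

```python
import numpy as np

rng = np.random.default_rng(1)
n_up = 3                                    # spin-up electrons (toy setting)
centers = rng.normal(size=(n_up, 3))        # fixed orbital centers (arbitrary)

def orbitals(r):
    """Toy orbital matrix phi[i, j] = phi_i(r_j); a stand-in for Eq. (5)."""
    return np.stack([np.exp(-np.linalg.norm(r - c, axis=1)) * (1 + r[:, k % 3])
                     for k, c in enumerate(centers)])

def psi(r_up, r_down):
    """Eq. (6) with a single determinant (K = 1): psi = det(phi_up) * det(phi_down)."""
    return np.linalg.det(orbitals(r_up)) * np.linalg.det(orbitals(r_down))

r_up = rng.normal(size=(n_up, 3))
r_down = rng.normal(size=(n_up, 3))
r_up_swapped = r_up[[1, 0, 2]]              # exchange two same-spin electrons
# Exchanging same-spin electrons swaps two columns of phi_up: the sign flips.
assert np.isclose(psi(r_up_swapped, r_down), -psi(r_up, r_down))
```

Exchanging a spin-up with a spin-down electron is not covered by this construction, which is why the wave function only needs antisymmetry within each spin sector.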
The paper presents a new meta-learning method for solving the Schrödinger equation using neural networks. The method combines an existing neural wave function model, FermiNet, with a GNN (MetaGNN) to solve the Schrödinger equation for multiple geometries simultaneously. The MetaGNN takes the atomic graph as input and outputs a set of parameters that capture the 3D geometry of the system, which are then input to the FermiNet. The resulting method is applicable to multiple geometries while being significantly faster to train.
SP:6b55a41e6cde3b4e740941d48c237127c982da27
Universal Approximation Under Constraints is Possible with Transformers
Keywords: Constrained Universal Approximation, Probabilistic Attention, Transformer Networks, Geometric Deep Learning, Measurable Maximum Theorem, Non-Affine Random Projections.

1 INTRODUCTION

In supervised learning, we select a parameterized model $\hat{f}: \mathbb{R}^n \to \mathbb{R}^m$ by optimizing a real-valued loss function¹ $L$ over training data from an input-output domain $X \times Y \subseteq \mathbb{R}^n \times \mathbb{R}^m$. A necessary property for a model class to produce asymptotically optimal results, for any continuous loss $L$, is the universal approximation property. However, often more structure (beyond vectorial $\mathbb{R}^m$) is present in a learning problem, and this structure must be encoded into the trained model $\hat{f}$ to obtain meaningful or feasible predictions. This additional structure is typically described by a constraint set $K \subseteq \mathbb{R}^m$ and the condition $\hat{f}(X) \subseteq K$. For example, in classification $K = \{y \in [0,1]^m : \sum_{i=1}^m y_i = 1\}$ (Shalev-Shwartz & Ben-David, 2014); in Stackelberg games (Holters et al., 2018; Jin et al., 2020; Li et al., 2021) $K$ is the set of utility-maximizing actions of an opponent; in integer programming $K$ is the integer lattice $\mathbb{Z}^m$ (Conforti et al., 2014); in financial risk management $K$ is a set of positions meeting the minimum solvency requirements imposed by international regulatory bodies (Basel Committee on Banking Supervision, 2015; 2019; McNeil et al., 2015); in covariance matrix prediction $K \subseteq \mathbb{R}^{m\times m}$ is the set of $m \times m$ matrices which are symmetric and positive semidefinite (Bonnabel et al., 2013; Bonnabel & Sepulchre, 2009; Baes et al., 2021); in geometric deep learning $K$ is typically a manifold (e.g., a pose manifold in computer vision and robotics (Ding & Fan, 2014) or a manifold of distance matrices (Dokmanic et al., 2015)), a graph, or an orbit of a group action (Bronstein et al., 2017; 2021; Kratsios & Bilokopytov, 2020). Therefore, we ask: Is exact constraint satisfaction possible with universal deep learning models?
*Corresponding authors. ¹For example, in a regression problem one can set $L(x,y) = \|f(x) - y\|$ for an unknown function $f$, or in classification problems one sets $L(x,y) = -\sum_{i=1}^m [C(x)]_i \log(y_i)$ for an unknown classifier $C$.

The answer to this question begins by examining the classical universal approximation theorems for deep feedforward networks. If $L$ and $K$ are mildly regular, the universal approximation theorems of Hornik et al. (1989); Cybenko (1989); Pinkus (1999); Gühring et al. (2020); Kidger & Lyons (2020); Park et al. (2021) guarantee that for any "good activation function" $\sigma$ and for every tolerance level $\epsilon > 0$, there is a deep feedforward network $\hat{f}$ with activation function $\sigma$ such that $\inf_{y\in K} L(x,y)$ and $L(x,\hat{f}(x))$ are uniformly at most $\epsilon$ apart. Written in terms of the optimality set,
$$\sup_{x\in X} \left\|\hat{f}(x) - \underset{y\in K}{\mathrm{argmin}}\, L(x,y)\right\| \le \epsilon, \qquad (1)$$
where the distance of a point $y \in \mathbb{R}^m$ to a set $A \subseteq \mathbb{R}^m$ is defined by $\|y - A\| := \inf_{a\in A} \|y - a\|$. Since $\mathrm{argmin}_{y\in K} L(x,y) \subseteq K$, (1) only implies that $\|\hat{f}(x) - K\| \le \epsilon$, and there is no reason to believe that the constraint $\hat{f}(x) \in K$ is exactly satisfied for every $x \in X$. This kind of approximate constraint satisfaction is not always appropriate. In the following examples, constraint violation causes either practical or theoretical concerns: (i) In post-financial-crisis risk management, international regulatory bodies mandate that any financial actor should maintain solvency proportional to the risk of their investments (Basel Committee on Banking Supervision, 2015; 2019). To prevent future financial crises, any violation of these risk constraints, no matter the size, incurs large and immediate fines. (ii) In geometric deep learning, we often need to encode complicated non-vectorial structure present in a dataset by viewing it as a $K$-valued function (Fletcher, 2013; Bonnabel & Sepulchre, 2009; Baes et al., 2021).
However, if $K$ is non-convex, then Motzkin (1935) confirms that there is no unique way to map predictions $\hat{f}(x) \notin K$ to a closest point in $K$. Thus, we are faced with a dilemma: either make an ad-hoc choice of a $k$ in $K$ with $k \approx \hat{f}(x)$ (e.g., an arbitrary choice scheme when $K = \mathbb{Z}^m$) or have meaningless predictions (e.g., non-integer values for integer programs, or symmetry breaking (Weinberg, 1976)²). Constrained learning was recognized as an effective framework for fairness and robustness by Chamon & Ribeiro (2020), who study empirical risk minimization under constraints. Many emerging topics in machine learning lead to constrained learning formulations. A case in point is model-based domain generalization (Robey et al., 2021). Despite the importance of (deep) learning with constraints, there are no related approximation-theoretic results to the best of our knowledge. In this paper, we bridge this theoretical gap by showing that universal approximation with exact constraint satisfaction is always possible for deep (probabilistic) transformer networks with a single attention mechanism as the output layer. Our contribution is three-fold: 1. We derive the first universal approximation theorem with exact constraint satisfaction; 2. Our transformer network's encoder and decoder adapt to the dimension of the constraint set and thus beat the curse of dimensionality for low-dimensional constraints; 3. Our models leverage a probabilistic attention mechanism that can encode non-convex constraints. This probabilistic approach is key to bypassing the topological obstructions to non-Euclidean universal approximation (Kratsios & Papon, 2021). Our analysis provides perspective on the empirical success of attention and adds to the recent line of work on approximation theory for transformer networks (Yun et al.
, 2020a;b), which roughly considers the unconstrained case (with $K$ in (1) replaced by $\mathbb{R}^m$) in the special case of $L(x,y) = \|f(x) - y\|$ for a suitable target function $f: \mathbb{R}^n \to \mathbb{R}^m$. Our probabilistic perspective on transformer networks fits with the representations of Vuckovic et al. (2021) and of Kratsios (2021). Our results can be regarded as an approximation-theoretic counterpart to the constrained statistical learning theory of Chamon & Ribeiro (2020). Further, they put forward a perspective on randomness in neural networks that is complementary to the work of Louart et al. (2018); Gonon et al. (2020a;b). We look at the same problem, focusing on constraint satisfaction instead of training efficiency. Finally, our proof methods are novel and build on contemporary tools from metric geometry (Ambrosio & Puglisi, 2020; Bruè et al., 2021). ²As discussed in Rosset et al. (2021), this is problematic since respecting symmetries can often massively reduce the computational burden of a learning task.

1.1 THE PROBABILISTIC ATTENTION MECHANISM

We now give a high-level explanation of our results; the detailed formulations are in Section 2. Introduced in (Bahdanau et al., 2015) and later used to define the transformer architecture (Vaswani et al., 2017) in the NLP context, attention maps a matrix of queries $Q$, a matrix of keys $K$, and a matrix of values $V$ to the quantity $\mathrm{Softmax}(QK^\top)V$, where the softmax function (defined below) is applied row-wise to $QK^\top$. Just as the authors of (Petersen & Voigtlaender, 2020; Zhou, 2020) focus on simplified versions of practically implementable ConvNets in the study of the approximation theory of deep ConvNets (e.g.,
omitting pooling layers), we find it sufficient to study the following simplified attention mechanism to obtain universal approximation results:
$$\mathrm{Attention}(w, Y) := \mathrm{Softmax}_N(w)^\top Y = \sum_{n=1}^{N} [\mathrm{Softmax}_N(w)]_n Y_n, \qquad (2)$$
where $w \in \mathbb{R}^N$, $\mathrm{Softmax}_N: \mathbb{R}^N \ni w \mapsto \left(\frac{e^{w_k}}{\sum_{j=1}^N e^{w_j}}\right)_{k=1}^N$, and $Y$ is an $N \times m$ matrix. The attention mechanism (2) can be interpreted as "paying attention" to a set of particles $Y_1, \dots, Y_N \in \mathbb{R}^m$ defined by $Y$'s rows. This simplified form of attention is sufficient to demonstrate that transformer networks can approximate a function while respecting a constraint set $K$, whether convex or non-convex.

Informal Theorem 1.1 (Deep Maximum Theorem for Transformers). If $K$ is convex and the quantities defining (1) are regular then, for any $\epsilon \in (0, 1]$, there is a feedforward network $\hat{f}$, an $X_\epsilon \subset \mathbb{R}^n$ of probability $1-\epsilon$, and a matrix $Y$ such that the transformer $\mathrm{Attention}(\hat{f}(x), Y)$ satisfies: (i) Exact Constraint Satisfaction: for each $x \in \mathbb{R}^n$, $\mathrm{Attention}(\hat{f}(x), Y) \in K$; (ii) Universal Approximation: $\sup_{x\in X_\epsilon} \|\mathrm{Attention}(\hat{f}(x), Y) - \mathrm{argmin}_{y^\star\in K} L(x, y^\star)\| \le \epsilon$.

Informal Theorem 1.1 guarantees that simple transformer networks can minimize any loss function while exactly satisfying the set of convex constraints. As illustrated by Figure 1 and Figure 2, $K$'s convexity is critical here, since without it the transformer's prediction may fail to lie in $K$. This is because any transformer network's output is a convex combination of the particles $Y_1, Y_2, Y_3$; thus, any transformer network's predictions must belong to these particles' convex hull. In Figures 1 and 2, $Y$'s rows, i.e., the particles $Y_1$, $Y_2$, and $Y_3$, are each illustrated by a • at the constraint set ($K$) vertices. The bubble around each $Y_i$ illustrates the predicted probability, for a given input, that $f(x)$ is nearest to that $Y_i$.
The × is the transformer's prediction, which is, by construction, a convex combination of the $Y_i$ weighted by the aforementioned probabilities; therefore it lies in $K$ if $K$ is convex (Figure 1) but not necessarily if $K$ is non-convex (Figure 2). Naturally, we arrive at the question: How can (i) and (ii) simultaneously hold when $K$ is non-convex? Returning to Vaswani et al. (2017) and using the introduced terminology, we note that the role of the $\mathrm{Softmax}_N$ layer is to rank the importance of the particles $\{Y_n\}_{n=1}^N$ when optimizing $L$ at any given input: the weights $[\mathrm{Softmax}_N(w)]_n$ in (2) can be interpreted as charging their respective point masses $\{\delta_{Y_n}\}_{n=1}^N$ with probabilities of being optimal for $L$ (relative to the other particles)³. This suggests the following probabilistic reinterpretation of attention (which we denote by P-attention):
$$\text{P-attention}(w, Y) := \sum_{n=1}^{N} [\mathrm{Softmax}_N(w)]_n \, \delta_{Y_n}. \qquad (3)$$
³Following Villani (2009), $\delta_{Y_n}$ is the Borel probability measure on $\mathbb{R}^m$ assigning full probability to any Borel subset of $\mathbb{R}^m$ containing the particle $Y_n$ and 0 otherwise.

Crudely put, $\text{P-attention}(\cdot, Y)$ "pays relative attention to the particles" $Y_1, \dots, Y_N \in \mathbb{R}^m$. A simple computation shows that the mean prediction of our probabilistic attention mechanism exactly implements the "classical" attention of Vaswani et al. (2017), as defined in (2):
$$\mathrm{Attention}(w, Y) = \mathbb{E}_{X \sim \text{P-attention}(w, Y)}[X], \qquad (4)$$
where $\mathbb{E}_{X \sim \text{P-attention}(w, Y)}[X]$ denotes the (vector-valued) expectation of a random vector $X$ distributed according to $\text{P-attention}(w, Y)$. Hence, (3) is no less general than (2). The advantage of (3) is that, if each particle $Y_n$ belongs to $K$ (even if $K$ is non-convex), then any sample drawn from the probability measure $\text{P-attention}(w, Y)$ necessarily belongs to $K$.
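The contrast between Eqs. (2)-(4) can be seen in a few lines of code (particle positions and logits chosen arbitrarily for illustration): classical attention returns a convex combination of the particles, while a sample from P-attention is always one of the particles themselves, hence a point of $K$ exactly, even when $K$ is non-convex.

```python
import numpy as np

rng = np.random.default_rng(2)
N, m = 3, 2
Y = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # particles = rows of Y, all in K
w = rng.normal(size=N)                              # logits produced by some network

def softmax(w):
    e = np.exp(w - w.max())                # numerically stable Softmax_N
    return e / e.sum()

p = softmax(w)
attention = p @ Y                          # Eq. (2): convex combination of the particles
assert np.all(p >= 0) and np.isclose(p.sum(), 1.0)

# Eq. (3): P-attention is the mixture sum_n p_n * delta_{Y_n}; a draw is always a particle.
idx = rng.choice(N, p=p)
sample = Y[idx]
assert any(np.allclose(sample, Y_n) for Y_n in Y)   # the sample lies in K exactly

# Eq. (4): the mean of P-attention recovers classical attention.
mean = sum(p_n * Y_n for p_n, Y_n in zip(p, Y))
assert np.allclose(mean, attention)
```

Note how the deterministic output `attention` is generally an interior point of the particles' convex hull, while `sample` is always a vertex: this is exactly the mechanism the paper uses to satisfy non-convex constraints.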
The paper under review studies universal approximation theory with constraints. For any convex or non-convex compact constraint set, a universal approximation result is proved for a probabilistic transformer. Furthermore, a chart-free universal approximation result is established on Riemannian manifolds with geodesically convex constraints.
SP:b3a424fda4f96b24f753105c1c0ca8b04ebb15e2
Universal Approximation Under Constraints is Possible with Transformers
Keywords: Constrained Universal Approximation, Probabilistic Attention, Transformer Networks, Geometric Deep Learning, Measurable Maximum Theorem, Non-Affine Random Projections.

1 INTRODUCTION

In supervised learning, we select a parameterized model $\hat{f}: \mathbb{R}^n \to \mathbb{R}^m$ by optimizing a real-valued loss function¹ $L$ over training data from an input-output domain $X \times Y \subseteq \mathbb{R}^n \times \mathbb{R}^m$. A necessary property for a model class to produce asymptotically optimal results, for any continuous loss $L$, is the universal approximation property. However, often more structure (beyond vectorial $\mathbb{R}^m$) is present in a learning problem, and this structure must be encoded into the trained model $\hat{f}$ to obtain meaningful or feasible predictions. This additional structure is typically described by a constraint set $K \subseteq \mathbb{R}^m$ and the condition $\hat{f}(X) \subseteq K$. For example, in classification $K = \{y \in [0,1]^m : \sum_{i=1}^m y_i = 1\}$ (Shalev-Shwartz & Ben-David, 2014); in Stackelberg games (Holters et al., 2018; Jin et al., 2020; Li et al., 2021) $K$ is the set of utility-maximizing actions of an opponent; in integer programming $K$ is the integer lattice $\mathbb{Z}^m$ (Conforti et al., 2014); in financial risk management $K$ is a set of positions meeting the minimum solvency requirements imposed by international regulatory bodies (Basel Committee on Banking Supervision, 2015; 2019; McNeil et al., 2015); in covariance matrix prediction $K \subseteq \mathbb{R}^{m\times m}$ is the set of $m \times m$ matrices which are symmetric and positive semidefinite (Bonnabel et al., 2013; Bonnabel & Sepulchre, 2009; Baes et al., 2021); in geometric deep learning $K$ is typically a manifold (e.g., a pose manifold in computer vision and robotics (Ding & Fan, 2014) or a manifold of distance matrices (Dokmanic et al., 2015)), a graph, or an orbit of a group action (Bronstein et al., 2017; 2021; Kratsios & Bilokopytov, 2020). Therefore, we ask: Is exact constraint satisfaction possible with universal deep learning models?
*Corresponding authors. ¹For example, in a regression problem one can set $L(x,y) = \|f(x) - y\|$ for an unknown function $f$, or in classification problems one sets $L(x,y) = -\sum_{i=1}^m [C(x)]_i \log(y_i)$ for an unknown classifier $C$.

The answer to this question begins by examining the classical universal approximation theorems for deep feedforward networks. If $L$ and $K$ are mildly regular, the universal approximation theorems of Hornik et al. (1989); Cybenko (1989); Pinkus (1999); Gühring et al. (2020); Kidger & Lyons (2020); Park et al. (2021) guarantee that for any "good activation function" $\sigma$ and for every tolerance level $\epsilon > 0$, there is a deep feedforward network $\hat{f}$ with activation function $\sigma$ such that $\inf_{y\in K} L(x,y)$ and $L(x,\hat{f}(x))$ are uniformly at most $\epsilon$ apart. Written in terms of the optimality set,
$$\sup_{x\in X} \left\|\hat{f}(x) - \underset{y\in K}{\mathrm{argmin}}\, L(x,y)\right\| \le \epsilon, \qquad (1)$$
where the distance of a point $y \in \mathbb{R}^m$ to a set $A \subseteq \mathbb{R}^m$ is defined by $\|y - A\| := \inf_{a\in A} \|y - a\|$. Since $\mathrm{argmin}_{y\in K} L(x,y) \subseteq K$, (1) only implies that $\|\hat{f}(x) - K\| \le \epsilon$, and there is no reason to believe that the constraint $\hat{f}(x) \in K$ is exactly satisfied for every $x \in X$. This kind of approximate constraint satisfaction is not always appropriate. In the following examples, constraint violation causes either practical or theoretical concerns: (i) In post-financial-crisis risk management, international regulatory bodies mandate that any financial actor should maintain solvency proportional to the risk of their investments (Basel Committee on Banking Supervision, 2015; 2019). To prevent future financial crises, any violation of these risk constraints, no matter the size, incurs large and immediate fines. (ii) In geometric deep learning, we often need to encode complicated non-vectorial structure present in a dataset by viewing it as a $K$-valued function (Fletcher, 2013; Bonnabel & Sepulchre, 2009; Baes et al., 2021).
However, if $K$ is non-convex, then Motzkin (1935) confirms that there is no unique way to map predictions $\hat{f}(x) \notin K$ to a closest point in $K$. Thus, we are faced with a dilemma: either make an ad-hoc choice of a $k$ in $K$ with $k \approx \hat{f}(x)$ (e.g., an arbitrary choice scheme when $K = \mathbb{Z}^m$) or have meaningless predictions (e.g., non-integer values for integer programs, or symmetry breaking (Weinberg, 1976)²). Constrained learning was recognized as an effective framework for fairness and robustness by Chamon & Ribeiro (2020), who study empirical risk minimization under constraints. Many emerging topics in machine learning lead to constrained learning formulations. A case in point is model-based domain generalization (Robey et al., 2021). Despite the importance of (deep) learning with constraints, there are no related approximation-theoretic results to the best of our knowledge. In this paper, we bridge this theoretical gap by showing that universal approximation with exact constraint satisfaction is always possible for deep (probabilistic) transformer networks with a single attention mechanism as the output layer. Our contribution is three-fold: 1. We derive the first universal approximation theorem with exact constraint satisfaction; 2. Our transformer network's encoder and decoder adapt to the dimension of the constraint set and thus beat the curse of dimensionality for low-dimensional constraints; 3. Our models leverage a probabilistic attention mechanism that can encode non-convex constraints. This probabilistic approach is key to bypassing the topological obstructions to non-Euclidean universal approximation (Kratsios & Papon, 2021). Our analysis provides perspective on the empirical success of attention and adds to the recent line of work on approximation theory for transformer networks (Yun et al.
, 2020a; b), which roughly considers the unconstrained case (with K in (1) replaced by ℝ^m) in the special case of L(x, y) = ‖f(x) − y‖ for a suitable target function f: ℝ^n → ℝ^m. Our probabilistic perspective on transformer networks fits with the representations of Vuckovic et al. (2021) and of Kratsios (2021). Our results can be regarded as an approximation-theoretic counterpart to the constrained statistical learning theory of Chamon & Ribeiro (2020). Further, they put forward a perspective on randomness in neural networks that is complementary to the work of Louart et al. (2018); Gonon et al. (2020a; b): we look at the same problem, focusing on constraint satisfaction instead of training efficiency. Finally, our proof methods are novel and build on contemporary tools from metric geometry (Ambrosio & Puglisi, 2020; Bruè et al., 2021). ²As discussed in Rosset et al. (2021), this is problematic since respecting symmetries can often massively reduce the computational burden of a learning task. 1.1 THE PROBABILISTIC ATTENTION MECHANISM. We now give a high-level explanation of our results; the detailed formulations are in Section 2. Introduced in (Bahdanau et al., 2015) and later used to define the transformer architecture (Vaswani et al., 2017), attention, in the NLP context, maps a matrix of queries Q, a matrix of keys K, and a matrix of values V to the quantity Softmax(QKᵀ)V, where the softmax function (defined below) is applied row-wise to QKᵀ. Just as the authors of (Petersen & Voigtlaender, 2020; Zhou, 2020) focus on simplified versions of practically implementable ConvNets in the study of approximation theory of deep ConvNets (e.g.
omitting pooling layers), we find it sufficient to study the following simplified attention mechanism to obtain universal approximation results: Attention(w, Y) ≜ Softmax_N(w)ᵀ Y = ∑_{n=1}^N [Softmax_N(w)]_n Y_n, (2) where w ∈ ℝ^N, Softmax_N: ℝ^N ∋ w ↦ (e^{w_k} / ∑_{j=1}^N e^{w_j})_{k=1}^N, and Y is an N × m matrix. The attention mechanism (2) can be interpreted as "paying attention" to a set of particles Y_1, ..., Y_N ∈ ℝ^m defined by Y's rows. This simplified form of attention is sufficient to demonstrate that transformer networks can approximate a function while respecting a constraint set K, whether convex or non-convex. Informal Theorem 1.1 (Deep Maximum Theorem for Transformers). If K is convex and the quantities defining (1) are regular then, for any ε ∈ (0, 1], there is a feedforward network f̂, an X_ε ⊂ ℝ^n of probability 1 − ε, and a matrix Y such that the transformer Attention(f̂(x), Y) satisfies: (i) Exact Constraint Satisfaction: for each x ∈ ℝ^n, Attention(f̂(x), Y) ∈ K; (ii) Universal Approximation: sup_{x∈X_ε} ‖Attention(f̂(x), Y) − argmin_{y*∈K} L(x, y*)‖ ≤ ε. Informal Theorem 1.1 guarantees that simple transformer networks can minimize any loss function while exactly satisfying a set of convex constraints. As illustrated by Figure 1 and Figure 2, K's convexity is critical here, since without it the transformer's prediction may fail to lie in K. This is because any transformer network's output is a convex combination of the particles Y_1, Y_2, Y_3; thus, any transformer network's predictions must belong to these particles' convex hull. In Figures 1 and 2, Y's rows, i.e. the particles Y_1, Y_2, and Y_3, are each illustrated by a • at the constraint set (K) vertices. The bubble around each Y_i illustrates the predicted probability, for a given input, that f(x) is nearest to that Y_i.
The × is the transformer's prediction, which is, by construction, a convex combination of the Y_i weighted by the aforementioned probabilities; it therefore lies in K if K is convex (Figure 1) but need not if K is non-convex (Figure 2). Naturally, we arrive at the question: how can (i) and (ii) simultaneously hold when K is non-convex? Returning to Vaswani et al. (2017) and using the introduced terminology, we note that the role of the Softmax_N layer is to rank the importance of the particles {Y_n}_{n=1}^N when optimizing L, at any given input: the weights [Softmax_N(w)]_n in (2) can be interpreted as charging their respective point masses {δ_{Y_n}}_{n=1}^N with probabilities of being optimal for L (relative to the other particles)³. This suggests the following probabilistic reinterpretation of attention (which we denote by P-attention): P-attention(w, Y) ≜ ∑_{n=1}^N [Softmax_N(w)]_n δ_{Y_n}. (3) ³Following Villani (2009), δ_{Y_n} is the Borel probability measure on ℝ^m assigning full probability to any Borel subset of ℝ^m containing the particle Y_n and 0 otherwise. Crudely put, P-attention(·, Y) "pays relative attention to the particles" Y_1, ..., Y_N ∈ ℝ^m. A simple computation shows that the mean prediction of our probabilistic attention mechanism exactly implements the "classical" attention of Vaswani et al. (2017), as defined in (2): Attention(w, Y) = E_{X∼P-attention(w, Y)}[X], (4) where E_{X∼P-attention(w, Y)}[X] denotes the (vector-valued) expectation of a random vector X distributed according to P-attention(w, Y). Hence, (3) is no less general than (2). The advantage of (3) is that, if each particle Y_n belongs to K (even if K is non-convex), then any sample drawn from the probability measure P-attention(w, Y) necessarily belongs to K.
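The contrast between (2) and (3) can be checked directly: classical attention averages the particles, so its output can leave a non-convex K, while a sample drawn from the P-attention measure is always one of the particles Y_n ∈ K. A minimal numpy sketch, where the particular particles and weights are illustrative assumptions:

```python
import numpy as np

def softmax(w):
    e = np.exp(w - np.max(w))  # shift for numerical stability
    return e / e.sum()

def attention(w, Y):
    """Classical attention (2): the convex combination Softmax_N(w)^T Y."""
    return softmax(w) @ Y

def sample_p_attention(w, Y, rng):
    """Probabilistic attention (3): draw particle Y_n with probability [Softmax_N(w)]_n."""
    n = rng.choice(len(Y), p=softmax(w))
    return Y[n]

# Non-convex constraint set: the two-point set K = {(-1, 0), (1, 0)}.
Y = np.array([[-1.0, 0.0], [1.0, 0.0]])
w = np.array([0.3, 0.7])

mean_pred = attention(w, Y)              # strictly between the particles: not in K
rng = np.random.default_rng(0)
sample = sample_p_attention(w, Y, rng)   # always a row of Y, hence in K
print(mean_pred, sample)
```

Averaging the samples recovers `mean_pred`, which is exactly relation (4): classical attention is the expectation of P-attention.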
This paper presents a family of constrained universal approximation results for probabilistic transformers. The authors provide substantial theoretical contributions for both convex and non-convex constraint sets. In my opinion, this represents a significant advance in our understanding of universality in ML.
SP:b3a424fda4f96b24f753105c1c0ca8b04ebb15e2
Exploring General Intelligence of Program Analysis for Multiple Tasks
1 INTRODUCTION. With the development of information technology, computer programs are used in an increasingly wide range of fields. As the number and variety of programs continue to expand, the task of analyzing programs becomes more significant and complex. Common program analysis tasks include program classification, duplicate code detection, making programming suggestions (syntax and algorithms), program vulnerability analysis, bug detection, cross-language program translation, code annotation, etc. Program analysis is not a simple problem, and many factors contribute to its complexity. First, program functions are not implemented in a unique way; apparently completely different codes may implement the same functions, and similar codes may implement quite different functions. Second, programs may be written in different languages and run on various platforms. The same code compiled under different compilation configurations may also differ markedly in terms of correctness and performance. All of these are major factors to consider during program analysis. In recent years, more and more researchers have introduced machine learning approaches to solve complex program analysis problems. Earlier approaches treat code as a shallow textual structure, e.g., a sequence of tokens (Allamanis et al., 2016a; Hindle et al., 2016; Allamanis et al., 2016b; Pradel & Sen, 2018), and analyze it with natural language processing models. These approaches ignore the rich and explicit structural information that exists in a program. In addition, since many variables are defined and used in non-adjacent or even far-apart locations, these "def-use" relations are difficult to capture by this kind of model. However, such long-distance dependencies can be reflected in the structural information of the program.
By abstracting programs as graphs and using graph neural networks (GNNs) for program analysis tasks, better use can be made of the structural information that is readily available in programs. Graphs such as the control flow graph, call graph, and data flow graph reflect the control flow, function calls, and data flow relations of a program. Previous approaches based on graph neural networks have used only one graph structure (Xu et al., 2017; Ben-Nun et al., 2018; Wang et al., 2020). However, different graph structures reflect different program features, so it is necessary to combine multiple graphs to obtain richer structural information. Existing methods explore program embedding through different inputs, e.g., source code, abstract syntax tree, assembly code, and so forth. Constructing embeddings via source code or AST makes good use of the semantic information in program variables. However, such embeddings lack compile-time information and cannot solve compilation- and architecture-related analysis tasks. In order to build embeddings that can solve both source-code-level tasks (program classification) and compilation-related tasks (program vulnerability analysis), we choose to build embeddings from the assembly code of a program. The embeddings we construct not only contain semantic and structural information of the program code, but also reflect differences in compilation configurations, so they can be applied to a wider range of tasks. In summary, this paper makes the following contributions:
• We abstract a program into multiple graphs, extract different program structure information, and perform multi-dimensional analysis of the program.
• We build program embeddings based on the assembly code, offering the possibility to analyze both source-code-related and compilation-related analysis tasks.
• We perform procedure-level and program-level analysis, capturing both local and global features.
• We evaluate on two program analysis tasks, program classification and binary similarity detection, and achieve accuracies of up to 82.58% and 83.25%, improvements of 8.6% and 84.4% over the state-of-the-art approaches. 2 CHALLENGES OF PROGRAM EMBEDDING. 2.1 MULTIPLE PROGRAM REPRESENTATIONS. From the perspective of language syntax, a program may be represented in many different formats, such as source code, abstract syntax tree (AST), compiler IR, assembly code, etc. Fig. 1 shows the four formats with one code example. As seen from Fig. 1, different formats share similar information but also have their unique characteristics and complexities. For example, only the source code and AST contain variable literals. Therefore, which level of code to embed deserves serious study. Program embedding from source code certainly has its own merits. A programmer uses English words with clear meaning for variable or function names when coding. Direct embedding over these names can capture the literal semantics for program analysis. Also, source code is small in size, often within a few kilobytes. However, the downside is having to maintain a vocabulary for all possible variable names, which can be an incredibly large number considering programmers can name variables as they want. Moreover, embedding over the source code lacks information with respect to program structure semantics. Another line of program embedding is over the AST, an abstract representation of program syntactic structure in the form of a tree. Unfortunately, embedding over the AST also suffers from the problem of oversized vocabulary lists and the shortage of program structure information. This work chooses to embed a program based on its assembly code for several reasons. Assembly code is compiled from source code. It is specific to a processor architecture, for example x86-64. It is typically less stylized and more tightly tied to program semantics.
For example, programs that are syntactically different but semantically equivalent tend to correspond to similar assembly code. Moreover, assembly code often carries more compilation-specific characteristics than the program intermediate representation or AST (I. Neamtiu & Hicks, 2005). This work aims to solve the task of binary similarity detection, so it is necessary to distinguish between different architectures and compilers. This compilation-related information is not available in the source code or AST. Therefore, embedding over the assembly code can solve both source-code-related problems such as program classification and compilation-related problems such as binary similarity detection. 2.2 CHALLENGES FROM THE NUMEROUS GRAPHS OF A PROGRAM. There are plenty of graph structures implicit in a program, including but not limited to the control flow graph (CFG), call graph (CG), and data flow graph (DFG). A CFG is the graphical representation of the control flow of a code, mostly used in static analysis. It shows all execution paths that can be traversed during a program execution. Edges in a CFG portray conditional branches and nodes represent basic blocks. It captures loops, branches, and other structures that are difficult to obtain directly from the source code of a program. A CG represents calling relationships between functions. Each node represents a function and each edge (a, b) indicates that function a calls function b. Thus, a cycle in the graph indicates recursive procedure calls. A DFG represents def-use chains in a program. A "def" is an assignment to a variable and a "use" is a reference to it. A program consists of a large number of ordered operations forming a partial-order graph; the data flow graph reflects the data dependencies between these operations. Different graphs represent quite different program structures and semantics.
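The cycle criterion for recursion mentioned above can be made concrete: represent the call graph as an adjacency map and look for a back edge with a depth-first search. A small sketch; the function names and edges are made up for illustration:

```python
def has_cycle(call_graph):
    """Detect a cycle in a directed call graph {caller: [callees]} via DFS colouring."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current call stack / finished
    color = {}

    def dfs(f):
        color[f] = GRAY
        for g in call_graph.get(f, []):
            c = color.get(g, WHITE)
            if c == GRAY:           # back edge: g is on the current DFS stack -> cycle
                return True
            if c == WHITE and dfs(g):
                return True
        color[f] = BLACK
        return False

    return any(color.get(f, WHITE) == WHITE and dfs(f) for f in call_graph)

# main -> parse -> eval -> apply -> eval: the eval/apply cycle signals recursion.
cg = {"main": ["parse"], "parse": ["eval"], "eval": ["apply"], "apply": ["eval"]}
print(has_cycle(cg))  # True
```

The same traversal, run on a CG extracted from assembly, flags (mutually) recursive procedures before any embedding is computed.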
Program embedding requires embedding as much semantic information as possible. How to embed these graphs together in an effective and efficient way is a challenging problem. Furthermore, these graphs are relatively large for graph-based deep learning models (e.g., graph neural networks) to run on efficiently. As tested, a DFG can have as many as 10847 nodes and 26910 edges. Such large-scale graphs incur tediously long training times with graph neural networks. 3 METHODOLOGY. In this section, we propose a program embedding methodology that embeds multiple graph structures of a program as a whole at the level of assembly code. We adopt graph neural networks (Scarselli et al., 2009) to represent the semantics of these three types of graphs. The proposed methodology has two standalone training processes consisting of two models, BERT+ and Gated GNN. We fine-tune the pretrained BERT model to derive the initial embedding of each node of a CFG or DFG. Since the BERT model is specifically fine-tuned for program understanding, we name it BERT+ to distinguish it. The second model we use is the Gated GNN (GGNN), which derives the whole embedding of the CFG, CG, and DFG, as the GGNN is good at memorizing long-term data dependencies between nodes. Depending on the specific task, program analysis may be at either the procedure or the program level. Procedure-level analysis treats a program in units of procedures and therefore serves local, function-level tasks; program-level analysis serves program-level tasks. In this work, we focus on two program-level analysis tasks: program classification and binary similarity detection. However, procedure-level analysis tasks such as function name prediction can also be solved by the proposed model, which we leave as future work.
For procedure-level analysis, we construct a control flow graph for each procedure and combine it with instruction semantics to complete the embedding of each procedure. The embedding of each procedure is then used for procedure-level analysis. For program-level analysis, we represent the calling relations between procedures by the call graph and the data dependencies between all variables in the program by data flow graphs. We embed the call graph and the data flow graph of each procedure and combine the embeddings of all procedures to obtain the embedding of the entire program. 3.1 GRAPH NEURAL NETWORKS. The objective of a graph neural network is to learn node representations and a graph representation for predicting node attributes or attributes of the entire graph. A Graph Neural Network (GNN) (Scarselli et al., 2009) operates on a structure G = (V, E) consisting of a set of nodes V and a set of edges E. Each node v ∈ V is annotated with an initial node embedding x ∈ ℝ^D and a hidden state vector h_v^t ∈ ℝ^D (h_v^0 often equals x). A node updates its hidden state by aggregating its neighbors' hidden states and its own state at the previous time step. In total, T steps of state propagation are applied to a GNN. In the t-th step, node v gathers its neighbors' states into an aggregation m_v^t, as shown in Eq. 1. The aggregated state is then combined with node v's previous state h_v^{t−1} through a neural network g, as shown in Eq. 2. Here f can be an arbitrary function, for example a linear layer, representing a model with parameters θ.

m_v^t = ∑_{(u,v)∈E} f(h_u^{t−1}; θ)   (1)
h_v^t = g(m_v^t, h_v^{t−1})   (2)
h_v^t = GRU(m_v^t, h_v^{t−1})   (3)

The Gated Graph Neural Network (GGNN) (Li et al., 2016) is an extension of the GNN that replaces g in Eq. 2 with the Gated Recurrent Unit (GRU) (Chung et al., 2014) function, as shown in Eq. 3.
The GRU function lets a node memorize long-term historical dependency information, as it is good at dealing with long sequences by propagating its internal hidden state additively instead of multiplicatively.
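One propagation step of Eqs. 1–3 can be sketched directly in numpy; here f is a single linear layer, the GRU gates are written out explicitly, and all sizes, edges, and random weights are illustrative assumptions rather than the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 8, 5                                        # hidden size, number of nodes
edges = [(0, 1), (1, 2), (2, 3), (3, 1), (1, 4)]   # directed (u, v) pairs
H = rng.normal(size=(N, D))                        # h_v^{t-1} for every node v

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
W_f = rng.normal(size=(D, D)) / np.sqrt(D)         # f: linear layer (Eq. 1)
Wz, Uz, Wr, Ur, Wh, Uh = (rng.normal(size=(D, D)) / np.sqrt(D) for _ in range(6))

# Eq. 1: m_v = sum over incoming edges (u, v) of f(h_u^{t-1})
M = np.zeros((N, D))
for u, v in edges:
    M[v] += H[u] @ W_f

# Eq. 3: h_v^t = GRU(m_v, h_v^{t-1}), with the standard GRU gates written out
Z = sigmoid(M @ Wz + H @ Uz)                 # update gate
R = sigmoid(M @ Wr + H @ Ur)                 # reset gate
H_tilde = np.tanh(M @ Wh + (R * H) @ Uh)     # candidate state
H_new = (1 - Z) * H + Z * H_tilde            # additive state propagation
print(H_new.shape)  # (5, 8)
```

Running this loop T times, with learned rather than random weights, is the GGNN propagation the method relies on.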
This paper presents a graph neural network-based approach to solve two binary analysis tasks (program classification and binary similarity detection). The key idea of the paper is to merge different forms of representation of binary code (compiler IR, assembly code, etc.).
SP:58544efe4373310d82b14a2822a0c4e34e810c25
This paper proposes a program analysis model based on graph neural networks that performs analysis on the assembly code of a program and uses the Control Flow Graph (CFG), Call Graph (CG), and Data Flow Graph (DFG) of the program as inputs. The goal is to design a generalized model that can solve both source-code-level tasks (e.g., program classification) and compilation-related tasks (e.g., vulnerability analysis). Program embedding based on the assembly code allows the model to learn compilation-specific features, and the use of multiple graphs reflects semantic and structural information.
SP:58544efe4373310d82b14a2822a0c4e34e810c25
Improving greedy core-set configurations for active learning with uncertainty-scaled distances
1 INTRODUCTION. Active learning aims to identify the most informative data to label and include in supervised training. Often, these algorithms focus on reducing model variance, representing distributional densities, maximizing expected model change, or minimizing expected generalization error (Kirsch et al., 2019; Sener & Savarese, 2018; Settles, 2009; Shen et al., 2018; Sinha et al., 2019). A unifying theme is efficient data collection, which is measured by the rate of improvement as more data are labelled. This is important when we want to identify only the most promising samples to be labelled, but also for tasks that require slow or expensive labelling (Ducoffe & Precioso, 2018; Ma et al., 2020; Settles, 2009). We describe active learning with the same notation as Sener & Savarese (2018). Suppose we wish to classify elements of a compact space X into labels Y = {1, ..., C}. We collect n data points {x_i, y_i}_{i∈[n]} ∼ P_{X×Y}, but only have access to the labels of m of these, denoted by their indices s = {s(i) ∈ [n]}_{i∈[m]}. We use the learning algorithm A_s on the labelled set s to return the optimized parameters of the classifier, and measure performance with the loss function l(·, ·; A_s): X × Y → ℝ. The goal of active learning is to produce a set of indices s⁺, whose cardinality is limited by the labelling budget b, such that the expected loss is minimized upon labelling and training on these elements: arg min_{s⁺: |s⁺|≤b} E_{x,y∼P_{X×Y}}[l(x, y; A_{s∪s⁺})] (Sener & Savarese, 2018). In practice, we use the test set {x_i, y_i}_{i∈[t]} ∼ P_{X×Y} to approximate the expectation. The typical way to assess the data-efficiency of any particular active learning algorithm is to compare its trend of test performance across increasing labels against random and other sampling baselines (Kirsch et al., 2019; Sener & Savarese, 2018; Settles, 2009; Shen et al., 2018; Sinha et al.
, 2019 ; Ducoffe & Precioso , 2018 ; Ma et al. , 2020 ) . Sener & Savarese ( 2018 ) suggested that we can improve data-efficiency by minimizing the core-set radius , δ , defined as the maximum distance of any unlabelled point from its nearest labelled point : δ = maxi∈ [ n ] minj∈s δ ( xi , xj ) . Given the generalization error ζn of all labelled and unlabelled data , and zero training error , expected error converges linearly with respect to δ ( Sener & Savarese , 2018 ) : E x , y∼PX×Y [ l ( x , y ; As ) ] ≤ ζn + 1 n ∑ i∈ [ n ] l ( xi , yi ) ≤ ζn +O ( Cδ ) +O ( √ 1 n ) ( 1 ) Sener & Savarese ( 2018 ) argued that generalization error for neural networks has well-defined bounds , so optimizing the rest of Equation 1 , referred to as core-set loss , is critical for active learning . Indeed , their algorithms for optimizing core-sets consistently improved over their baselines ( Sener & Savarese , 2018 ) . Uncertainty-based sampling is particularly valuable for identifying support vectors , leading to finer classification boundaries ( Kirsch et al. , 2019 ; Settles , 2009 ) . However , these methods may catastrophically concentrate their labelling budget on difficult , noisy regions between classes , as shown in Figure 1 . We present a two-part solution for incorporating uncertainty into core-sets : 1 . Scale distances between points by doubt ( I ) ( Settles , 2009 ; Shen et al. , 2018 ) before computing core-set radii : δ̂i = δi I ( xi ) , where I ( x ) = 1−max y P ( y|x ) ( 2 ) 2 . Apply beam search to greedily identify the core-set configuration among K-candidates with the lowest maximum log-confidence to reduce the variance of core-set trajectories . 2 BACKGROUND . Greedy versus optimal core-set for active learning . Core-set radius δ is the maximum of all distances between each data point in xu = { xi : ∀i ∈ [ n ] } and its closest labelled point in xl = { xi : ∀i ∈ s } ( Sener & Savarese , 2018 ) . 
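The doubt scaling of Equation 2 is a one-line element-wise operation; a minimal numpy sketch (function names are ours, not from the paper):

```python
import numpy as np

def doubt(probs):
    """Doubt I(x) = 1 - max_y P(y|x), per row of class probabilities."""
    return 1.0 - probs.max(axis=1)

def doubt_scaled_radii(dists, probs):
    """Scale each unlabelled point's nearest-labelled distance by its doubt.

    dists: (U,) distance of each unlabelled point to its nearest labelled point
    probs: (U, C) predicted class probabilities for the unlabelled points
    """
    return dists * doubt(probs)
```

A confidently classified point (max probability near 1) thus contributes almost nothing to the warped core-set radius, while an ambiguous point keeps nearly its full distance.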
The optimal core-set achieves linear convergence of the core-set loss with respect to δ by finding the acquisition set s⁺ with optimal core-set radius δ_OPT, shown in Equation 3 (Sener & Savarese, 2018). Sener and Savarese used the l2-norm between activations of the last layer of VGG16 as Δ.

δ_OPT = min_{s⁺} max_{i∈[n]} min_{j∈s⁺∪s} Δ(x_i, x_j)   (3)

δ_g = max_{i∈[n]} min_{j∈ŝ⁺∪s} Δ(x_i, x_j) ≤ 2 δ_OPT   (4)

Since this problem is NP-hard (Cook et al., 1998), Sener & Savarese (2018) proposed a greedy version, shown in Algorithm 1, with acquisitions ŝ⁺ bounded above by Equation 4. It returns a selection mask over the data pool to signal labelling requests for elements that greedily minimize the maximum distance between any point and its nearest labelled point.

Algorithm 1: Greedy core-set (Sener & Savarese, 2018)
1 def greedy core-set(x_u, x_l, budget):
2   selection = [0 : ∀i ∈ [|x_u|]]
3   for t = 0, ..., budget do
4     i = argmax_{i∈[|x_u|]} min_{j∈[|x_l|]} Δ(x_u^(i), x_l^(j))
5     selection(i) = 1
6   return selection

Figure 2 shows how δ varies compared to closest-K-means core-sets, where the core-set consists of the points closest to optimized K-means centroids. Related techniques for batched acquisition. Batch active learning by diverse gradient embeddings acquires batches in two steps. First, we compute loss gradients with respect to the parameters of the last layer of the classifier for each unlabelled point and its most probable label (Ash et al., 2020). Then, we sample from clusters of these gradients using, for instance, K-means++ to avoid catastrophic concentration (Ash et al., 2020). We share similar intuitions that classifier confidence and intra-batch diversity are important sources of information that may enhance active learning, but differ in that we do not optimize for model change, and we use core-sets for diversification because of their theoretical foundations.
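A minimal executable version of greedy core-set selection (a numpy sketch; as in common implementations of the greedy 2-approximation, it also updates the nearest-labelled distances as points are acquired):

```python
import numpy as np

def greedy_core_set(x_u, x_l, budget):
    """Greedily pick `budget` unlabelled points that shrink the core-set radius.

    x_u: (U, D) unlabelled features; x_l: (L, D) labelled features.
    Returns a boolean selection mask over x_u.
    """
    # Distance of every unlabelled point to its nearest labelled point.
    min_d = np.linalg.norm(x_u[:, None, :] - x_l[None, :, :], axis=-1).min(axis=1)
    selection = np.zeros(len(x_u), dtype=bool)
    for _ in range(budget):
        # Farthest point from the current labelled set (skip already selected).
        i = int(np.argmax(np.where(selection, -np.inf, min_d)))
        selection[i] = True
        # The new acquisition acts as a labelled point for the remaining pool.
        d_new = np.linalg.norm(x_u - x_u[i], axis=-1)
        min_d = np.minimum(min_d, d_new)
    return selection
```

On a 1-D pool [1, 10, 11] with a single labelled point at 0 and budget 2, the sketch first picks the farthest point (11) and then the point farthest from both labelled points (1).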
BatchBALD acquires batches that maximize the mutual information between the joint data and the model parameters, and was intended to overcome the redundant sampling of repeated BALD (Kirsch et al., 2019). We tried combining this with a probabilistic technique for estimating likely core-set locations (see Appendix A.2), but it appeared that core-sets mainly require δ minimization for core-set loss convergence.

3 METHODS. Algorithm 2, together with Algorithm 3 on which it depends, implements the doubt-weighted greedy core-set to run on GPU. To incorporate uncertainty information, we make two key changes to the original greedy core-set algorithm. First, we compute core-sets in a warped space where distances originating from any unlabelled point shrink to zero as classification confidence grows. Given inputs x_u and x_l, which represent the unlabelled and labelled data, Line 2 of Algorithm 2 calls Algorithm 3 to pre-compute the distance of each unlabelled datum to its nearest labelled datum. We scale these distances by the doubt of the classifier on the respective unlabelled data on Lines 3 and 11. Each acquisition is removed from the existing unlabelled pool, and Line 12 updates the core-set radii. Figure 3 illustrates how core-sets in these spaces preferentially cover regions of low confidence.

Algorithm 2: Doubt-weighted core-set
1 def doubted core-set(x_u, x_l, batch_size, budget):
2   min_δ = compute_min_δ(x_u, x_l, batch_size)
3   min_δ̂ = [min_δ(i) · I(x_u^(i)) : ∀i ∈ [|x_u|]]
4   x_u = memory-copy(x_u)
5   index = [i : ∀i ∈ [|x_u|]]
6   selection = [0 : ∀i ∈ [|x_u|]]
7   for t = 0, ..., budget do
8     i = argmax min_δ̂(i)
9     selection(index[i]) = 1
10    x = x_u^(i); splice out: index(i), x_u^(i), min_δ(i)
11    δ̂ = Δ(x, x_u) · I(x)
12    min_δ̂ = [min{min_δ̂(k), δ̂(k)} : ∀k ∈ [|x_u|]]
13  return selection

We choose the same Δ as Sener & Savarese (2018): the l2-norm between activations of the last layer of VGG16. Given unlabelled data of size U = |x_u|, labelled data of size L = |x_l|, feature size D = dim x_u^(i) = dim x_l^(j) for all i ∈ [U], j ∈ [L], batch size B ≪ min{U, L}, and labelling budget b, Algorithm 3 costs Θ(ULD) steps and Θ(B²D) memory, with the bottleneck on Line 5. Excluding Line 2, Algorithm 2 costs O(bUD) in both computation and memory, with the bottleneck on Line 11. Since both the original core-set algorithm and our modification require fine-tuning VGG16 per addition to the training set, and computing the class probabilities requires only a single linear transformation, the final computational complexity is the same as the original core-set search. For core-set sizes of 5k to 15k, compute time scales linearly from 25 s to 50 s on an NVIDIA Titan GPU.

Algorithm 3: Memory-efficient core-set radii
1 def compute_min_δ(x_u, x_l, b):
2   min_δ = [∞ : ∀k ∈ [|x_u|]]
3   for i = 0, ..., |x_u|/b do
4     for j = 0, ..., |x_l|/b do
5       δ̃_{i:i+b, j:j+b} = Δ(x_u^(i:i+b), x_l^(j:j+b))
6       min_δ̃_{i:i+b} = [min_{c∈[j,j+b]} δ̃_{i:i+b, j:j+b}^(k,c) : ∀k ∈ [i, i+b]]
7       min_δ^(i:i+b) = [min{min_δ(k), min_δ̃_{i:i+b}^(k)} : ∀k ∈ [i, i+b]]
8   return min_δ

Second, we use beam search to greedily prune and keep track of the top resulting core-set configurations with the lowest overall confidence.
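Algorithm 3's blockwise distance computation can be sketched on CPU with numpy (a minimal sketch of the same idea; the block size and helper name are ours). Only one (block × block) tile of the pairwise distance matrix is materialized at a time:

```python
import numpy as np

def compute_min_dist(x_u, x_l, block=256):
    """Nearest-labelled l2 distance per unlabelled point, computed tile by tile
    so memory stays O(block^2) instead of O(U * L)."""
    min_d = np.full(len(x_u), np.inf)
    for i in range(0, len(x_u), block):
        xu_blk = x_u[i:i + block]
        for j in range(0, len(x_l), block):
            xl_blk = x_l[j:j + block]
            # Pairwise l2 distances for the current tile (Line 5 of Algorithm 3).
            tile = np.linalg.norm(xu_blk[:, None, :] - xl_blk[None, :, :], axis=-1)
            # Running minimum over labelled blocks (Lines 6-7).
            min_d[i:i + block] = np.minimum(min_d[i:i + block], tile.min(axis=1))
    return min_d
```

The result matches the brute-force full distance matrix while never holding more than one tile in memory.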
Since there is no guarantee of optimality for greedy core-sets (Sener & Savarese, 2018), we seek a configuration with the most points near classification regions of high uncertainty, at the cost of increasing compute and memory complexity by a factor of the beam width. We modify maximum normalized log probability (Shen et al., 2018) to rank the overall classifier uncertainty U of a core-set s:

U(s) = −(1/|s|) Σ_{x∈s} log(1 − I(x))   (5)

Figure 4 shows a sample ranking of the configurations found during beam search with width K = 4. Active learning pipeline. For each active learning experiment, we start by randomly partitioning the full training set into an initial pool of labelled data and an unlabelled pool of features. We fine-tune the parameters of an ImageNet-pretrained VGG16 on this initial dataset. With a training batch size of 64, we optimize the cross-entropy loss using Adam (Kingma & Ba, 2015) under the default hyperparameters from PyTorch (Paszke et al., 2019), with a learning rate of 0.01 for CIFAR10/100 and 0.005 for SVHN. We then use either random acquisition, the original greedy core-set algorithm, or variations of Algorithm 2 with the trained model to produce a selection mask over the unlabelled data. We enforce that the number of selected elements equals the labelling budget per iteration. The selected features and their labels join the training set, the model retrains with a re-initialized optimizer, and the process repeats until the number of labelled data reaches the specified ceiling for the experiment. Note that we do not compare with the other baselines used in the original core-set experiments. Since the original core-set algorithm improved significantly over those baselines, we expect improvement over the original core-set algorithm to imply similar or greater improvement over them as well.
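Since 1 − I(x) = max_y P(y|x), the ranking score of Equation 5 is just the mean negative log-confidence of the core-set's points; a minimal sketch (function names are ours):

```python
import numpy as np

def doubt(probs):
    # I(x) = 1 - max_y P(y|x)
    return 1.0 - probs.max(axis=1)

def coreset_uncertainty(probs):
    """Equation 5: U(s) = -(1/|s|) sum_x log(1 - I(x)),
    given the (|s|, C) class probabilities of the core-set's points."""
    return -np.mean(np.log(1.0 - doubt(probs)))
```

Beam search keeps the K candidate configurations and, at the end, favors the one with the highest U(s), i.e., the lowest overall confidence.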
Table 1 shows the number of iterations we found necessary to roughly reach zero training error on the initial dataset ("First-pass") and on each addition to the dataset per active learning iteration ("Thereafter"). Note that validation error is not required to satisfy the convergence requirement of core-sets, so we ignore it in our experiments. We tune hyperparameters and conduct the ablation studies on CIFAR10 (Krizhevsky, 2009), using a budget of 400 labels per active learning iteration and an initial dataset size of 1000 samples. We use the same hyperparameters as the ablation studies in the main experiments on CIFAR10/100 (Krizhevsky, 2009) and SVHN (Netzer et al., 2011), which use a budget of 5000 labels per iteration and a starting dataset size of 5000 samples.
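The pipeline described above reduces to a generic pool-based loop; a schematic sketch (the `finetune` and `acquire` callables are hypothetical placeholders for model training and for random/core-set/Algorithm-2 acquisition, not the paper's code):

```python
import numpy as np

def active_learning_loop(x_u, y_u, x_init, y_init, finetune, acquire,
                         budget, ceiling):
    """Generic pool-based active learning loop (schematic).

    finetune(x, y) -> model; acquire(model, x_pool, budget) -> boolean mask.
    Repeats until the labelled set reaches `ceiling` points or the pool empties.
    """
    x_l, y_l = x_init.copy(), y_init.copy()
    model = finetune(x_l, y_l)
    while len(x_l) < ceiling and len(x_u) > 0:
        mask = acquire(model, x_u, budget)          # budget points per iteration
        x_l = np.concatenate([x_l, x_u[mask]])      # labels join the training set
        y_l = np.concatenate([y_l, y_u[mask]])
        x_u, y_u = x_u[~mask], y_u[~mask]           # shrink the unlabelled pool
        model = finetune(x_l, y_l)                  # retrain, fresh optimizer
    return model, x_l, y_l
```

Swapping the `acquire` callable is the only change needed to move between random acquisition, the original greedy core-set, and the doubt-weighted variants.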
In this paper, the authors propose to improve the vanilla greedy core-set active learning algorithm by (1) weighting distances with uncertainty (measured by doubt, $1-\max_y P(y|x)$) and (2) using beam search instead of greedy search, where the beams are selected by average uncertainty. They show with several toy examples that in this way the samples are concentrated closer to low-confidence regions. They further try to find theoretical grounding for the advantage of the proposed algorithm under several assumptions. Finally, they show that the proposed algorithm outperforms the vanilla core-set method on CIFAR and SVHN, and provide ablation studies to showcase the effects of beam search and uncertainty weighting.
This paper attempts to improve upon the greedy core-set method for active learning (Sener and Savarese) by employing distances scaled by uncertainty. The proposed method then leverages a beam search algorithm to identify the best core-set configuration among candidates with the lowest log-confidence, yielding further improvements. The method is empirically evaluated on CIFAR10/100 and SVHN.
Denoising Diffusion Gamma Models
1 INTRODUCTION. Deep generative neural networks have shown significant progress over the last years. The main architectures for generation are: (i) VAE-based (Kingma & Welling, 2013), for example NVAE (Vahdat & Kautz, 2020) and VQ-VAE (Razavi et al., 2019); (ii) GAN-based (Goodfellow et al., 2014), for example StyleGAN (Karras et al., 2020) for vision applications and WaveGAN (Donahue et al., 2018) for speech; (iii) flow-based, for example Glow (Kingma & Dhariwal, 2018); (iv) autoregressive, for example WaveNet for speech (Oord et al., 2016); and (v) Diffusion Probabilistic Models (Sohl-Dickstein et al., 2015), for example Denoising Diffusion Probabilistic Models (DDPM) (Ho et al., 2020) and their implicit version DDIM (Song et al., 2020a). Models from this last family have shown significant progress in generation capabilities in recent years, e.g., (Chen et al., 2020; Kong et al., 2020b), and have achieved results comparable to state-of-the-art generation architectures for both images and speech. A DDPM is a Markov chain of latent variables. Two processes are modeled: (i) a diffusion process and (ii) a denoising process. During training, the diffusion process learns to transform data samples into Gaussian noise. Denoising is the reverse process, and it is used during inference for generating data samples, starting from Gaussian noise. This second process can be conditioned on attributes to control the generated sample. To obtain high-quality synthesis, a large number of denoising steps is used (e.g., 1000 steps). A notable property of the diffusion process is a closed-form formulation of the noise that arises from accumulating diffusion steps. This allows sampling arbitrary states in the Markov chain of the diffusion process without calculating the previous steps.
In the Gaussian case, this property stems from the fact that the sum of Gaussian random variables is another Gaussian random variable. Other distributions have similar properties. For example, for the Gamma distribution, the sum of two random variables that share the scale parameter is a Gamma random variable with the same scale. The Poisson distribution has a similar property; however, its discrete nature makes it less suitable for DDPM. In DDPM, the mean of the Gaussian random variables is set to zero. The Gamma random variable, with its two parameters (shape and scale), is better suited to fit the data than a Gaussian random variable with one degree of freedom (scale). Furthermore, the Gamma random variable generalizes other distributions, and many other distributions can be derived from it (Leemis & McQueston, 2008). The added modeling capacity of the Gamma random variable can help speed up the convergence of the DDPM model. Consider, for example, a conventional DDPM model that was trained with Gaussian noise on the CelebA dataset (Liu et al., 2015). The noise distribution throughout the diffusion process can be visualized by computing the histogram of the estimated residual noise in the generation process, ε̂ = (√ᾱ_t x_0 − x_t)/√(1 − ᾱ_t), the scaled difference between x_0 and the image x_t after t DDPM steps. Figure 1 (computed with a pretrained Gaussian DDPM trained on CelebA 64x64) shows: (a) the fit of a Gaussian to the histogram of a typical image after t = 50 steps; (b) the fit of a Gamma distribution; (c) the fitting error to the Gaussian and Gamma distributions, measured as the MSE between the histogram and the fitted probability density function, where each point is the average over the generation of 100 images and vertical error bars denote the standard deviation.
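The additivity property mentioned above (Gamma variables with a shared scale add to a Gamma variable of the same scale and summed shapes) can be checked numerically on the first two moments; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
k1, k2, theta = 2.0, 3.0, 0.5   # two shapes, one shared scale

# Sum of Gamma(k1, theta) and Gamma(k2, theta) samples...
s = rng.gamma(k1, theta, 200_000) + rng.gamma(k2, theta, 200_000)

# ...should be distributed as Gamma(k1 + k2, theta),
# whose mean is k * theta and variance k * theta^2.
k = k1 + k2
empirical_mean, empirical_var = s.mean(), s.var()
```

With k = 5 and theta = 0.5, the empirical mean and variance land close to 2.5 and 1.25, matching Gamma(5, 0.5).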
Here ε̂ = (√ᾱ_t x_0 − x_t)/√(1 − ᾱ_t), where ᾱ_t is determined by the noise schedule, x_0 is the data point, and x_t is the estimated state at timestep t, as can be derived from Eq. 4 of Song et al. (2020a). Both a Gaussian distribution and a Gamma distribution can then be fitted to this histogram, as shown in Fig. 1(a, b). As can be seen, the Gamma distribution provides a better fit to the estimated residual noise ε̂. Moreover, Fig. 1(c) presents the mean fitting error between the histogram and the fitted probability density function. Evidently, the Gamma distribution is a better fit than the Gaussian distribution. While the model was trained to estimate Gaussian noise, at inference time it has to address a different distribution. In this paper, we investigate the non-Gaussian Gamma noise distribution. As noted, this distribution fits the histogram of the generation error better than the Gaussian distribution, and it also has favorable properties, such as its behavior under addition and scalar multiplication. The proposed models maintain the diffusion-process property of sampling arbitrary states without calculating the previous steps. Our results are demonstrated in two major domains: vision and audio. In the first domain, the proposed method is shown to provide a better FID score for generated images. For speech data, we show that the proposed method improves various measures, such as Perceptual Evaluation of Speech Quality (PESQ) and short-time objective intelligibility (STOI).

2 RELATED WORK. In their seminal work, Sohl-Dickstein et al. (2015) introduced the Diffusion Probabilistic Model. This model has been applied to various domains, such as time series and images. Its main drawback is that it needs up to thousands of iterative steps to generate a valid data sample.
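The fit comparison behind Figure 1 can be reproduced schematically with scipy (a sketch under assumptions: we substitute synthetic residuals for the model's estimated noise and compare the MSE between the empirical histogram and each fitted density):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in for the estimated residual noise of one generated image.
residual = rng.standard_normal(10_000)

# Empirical (normalized) histogram and bin centers.
hist, edges = np.histogram(residual, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Fit a Gaussian and a (shifted) Gamma to the same samples.
mu, sigma = stats.norm.fit(residual)
shape, loc, scale = stats.gamma.fit(residual)

# MSE between histogram and each fitted probability density function.
mse_norm = np.mean((hist - stats.norm.pdf(centers, mu, sigma)) ** 2)
mse_gamma = np.mean((hist - stats.gamma.pdf(centers, shape, loc, scale)) ** 2)
```

With the model's actual residuals in place of the synthetic ones, averaging these MSEs over many generated images yields the curves of Fig. 1(c).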
Song & Ermon (2019) proposed a diffusion generative model based on Langevin dynamics and the score matching method (Hyvärinen & Dayan, 2005). The model estimates the Stein score function (Liu et al., 2016), which is the gradient of the logarithm of the data density. Given the Stein score function, the model can generate data points. Denoising Diffusion Probabilistic Models (DDPM) (Ho et al., 2020) combine generative models based on score matching and neural Diffusion Probabilistic Models into a single model. Similarly, in Chen et al. (2020); Kong et al. (2020a), a generative neural diffusion process based on score matching was applied to speech generation. These models achieve state-of-the-art results for speech generation and show superior results over well-established methods such as WaveRNN (Kalchbrenner et al., 2018), WaveNet (Oord et al., 2016), and GAN-TTS (Bińkowski et al., 2019). Denoising Diffusion Implicit Models (DDIM) offer a way to accelerate the denoising process (Song et al., 2020a). The model employs a non-Markovian diffusion process to generate higher-quality samples and helps reduce the number of diffusion steps, e.g., from a thousand steps to a few hundred.

Algorithm 1: DDPM training procedure
1: Input: dataset d, diffusion process length T, noise schedule β_1, ..., β_T
2: repeat
3:   x_0 ∼ d(x_0)
4:   t ∼ U({1, ..., T})
5:   ε ∼ N(0, I)
6:   x_t = √ᾱ_t x_0 + √(1 − ᾱ_t) ε
7:   Take a gradient descent step on: ‖ε − ε_θ(x_t, t)‖_1
8: until converged

Algorithm 2: DDPM sampling algorithm
1: x_T ∼ N(0, I)
2: for t = T, ..., 1 do
3:   z ∼ N(0, I)
4:   ε̂ = ε_θ(x_t, t)
5:   x_{t−1} = (x_t − ((1 − α_t)/√(1 − ᾱ_t)) ε̂) / √α_t
6:   if t ≠ 1 then
7:     x_{t−1} = x_{t−1} + σ_t z
8:   end if
9: end for
10: return x_0

Dhariwal & Nichol (2021) find a better diffusion architecture through a series of exploratory experiments, leading to the Ablated Diffusion Model (ADM).
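Algorithms 1 and 2 above can be sketched end-to-end in numpy (a toy sketch on 1-D data; a zero-output placeholder stands in for the trained noise-prediction network ε_θ, and we use σ_t = √β_t):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50
betas = np.linspace(1e-4, 0.2, T)      # noise schedule β_1..β_T
alphas = 1.0 - betas                   # α_t = 1 - β_t
alpha_bar = np.cumprod(alphas)         # ᾱ_t = Π_{i<=t} α_i

def eps_theta(x_t, t):
    # Placeholder for the trained noise-prediction network ε_θ(x_t, t).
    return np.zeros_like(x_t)

# One training step (Algorithm 1): form x_t in closed form, measure noise loss.
x0 = rng.standard_normal(8)
t = int(rng.integers(1, T + 1))
eps = rng.standard_normal(8)
x_t = np.sqrt(alpha_bar[t - 1]) * x0 + np.sqrt(1 - alpha_bar[t - 1]) * eps
loss = np.abs(eps - eps_theta(x_t, t)).mean()   # would drive a gradient step

# Sampling (Algorithm 2): reverse the chain starting from pure noise.
x = rng.standard_normal(8)                       # x_T ~ N(0, I)
for t in range(T, 0, -1):
    eps_hat = eps_theta(x, t)
    x = (x - (1 - alphas[t - 1]) / np.sqrt(1 - alpha_bar[t - 1]) * eps_hat) \
        / np.sqrt(alphas[t - 1])
    if t != 1:
        x = x + np.sqrt(betas[t - 1]) * rng.standard_normal(8)
```

Replacing `eps_theta` with a trained network turns this into the actual generation loop.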
This model outperforms the state of the art in image synthesis, which was previously held by GAN-based models such as BigGAN-deep (Brock et al., 2018) and StyleGAN2 (Karras et al., 2020). ADM is further improved using a novel Cascaded Diffusion Model (CDM). Our contribution is fundamental and can be incorporated into the proposed ADM and CDM architectures. Watson et al. (2021) proposed an efficient method for sampling from diffusion probabilistic models via a dynamic programming algorithm that finds optimal discrete time schedules. Choi et al. (2021) introduce the Iterative Latent Variable Refinement (ILVR) method for guiding the generative process in DDPM. Moreover, Kong & Ping (2021) systematically investigate fast sampling methods for denoising diffusion models. Lam et al. (2021) propose bilateral denoising diffusion models (BDDM), which take significantly fewer steps to generate high-quality samples. Huang et al. (2021) derive a variational framework for estimating the marginal likelihood of continuous-time diffusion models. Moreover, Kingma et al. (2021) show an equivalence between various diffusion processes using a simplification of the variational lower bound (VLB). Song et al. (2020b) show that score-based generative models can be considered a solution to a stochastic differential equation. Gao et al. (2020) provide an alternative approach for training an energy-based generative model using a diffusion process. Another line of work in audio is that of neural vocoders based on a denoising diffusion process. WaveGrad (Chen et al., 2020) and DiffWave (Kong et al., 2020a) are conditioned on the mel-spectrogram and produce high-fidelity audio samples using as few as six steps of the diffusion process. These models outperform adversarial non-autoregressive baselines. Popov et al.
(2021) propose a diffusion-based text-to-speech model that allows generating speech with the flexibility of controlling the trade-off between sound quality and inference speed. Diffusion models were also applied to natural language processing tasks. Hoogeboom et al. (2021) proposed a multinomial diffusion process for categorical data and applied it to language modeling. Austin et al. (2021) generalize the multinomial diffusion process with Discrete Denoising Diffusion Probabilistic Models (D3PMs) and improve the generated results on the text8 and One Billion Word (LM1B) datasets.

3 DIFFUSION MODELS FOR THE GAMMA DISTRIBUTION

We start by recapitulating the Gaussian case, after which we derive diffusion models for the Gamma distribution.

3.1 BACKGROUND: GAUSSIAN DDPM

Diffusion networks learn the gradient of the data log density:

s(y) = ∇_y log p(y)   (1)

Using Langevin dynamics and this gradient, one can sample from the distribution via

ỹ_{i+1} = ỹ_i + (η/2) s(ỹ_i) + √η z_i,   (2)

where z_i ∼ N(0, I) and η > 0 is the step size. The diffusion process in DDPM (Ho et al., 2020) is a Markov chain that gradually adds Gaussian noise to the data according to a noise schedule:

q(x_{1:T} | x_0) = ∏_{t=1}^{T} q(x_t | x_{t−1}),   (3)

where T is the length of the diffusion process and x_T, ..., x_t, x_{t−1}, ..., x_0 is a sequence of latent variables with the same size as the clean sample x_0. The diffusion process is parameterized by a set of parameters called the noise schedule, (β_1, ..., β_T), which defines the variance of the noise added at each step:

q(x_t | x_{t−1}) := N(x_t; √(1 − β_t) x_{t−1}, β_t I).   (4)

Since Gaussian noise is added at each step, the diffusion process can be simulated for any number of steps with the closed-form formula

x_t = √(ᾱ_t) x_0 + √(1 − ᾱ_t) ε,   (5)

where α_i = 1 − β_i, ᾱ_t = ∏_{i=1}^{t} α_i, and ε ∼ N(0, I). Diffusion models are a class of generative neural networks of the form p_θ(x_0) = ∫ p_θ(x_{0:T}) dx_{1:T} that learn to reverse the diffusion process. One can write

p_θ(x_{0:T}) = p(x_T) ∏_{t=1}^{T} p_θ(x_{t−1} | x_t).   (6)

As described in Ho et al. (2020), one can learn to predict the noise present in the data with a network ε_θ and sample from p_θ(x_{t−1} | x_t) using the update

x_{t−1} = (x_t − ((1 − α_t)/√(1 − ᾱ_t)) ε_θ(x_t, t)) / √(α_t) + σ_t ε,   (7)

where ε is white noise and σ_t is the standard deviation of the added noise; Song et al. (2020a) use σ_t² = β_t. The training procedure of ε_θ is defined in Alg. 1: given the dataset d, the algorithm samples x_0 and t, computes the noisy latent state x_t, and takes a gradient descent step so that the network ε_θ estimates the noise ε. The objective for the diffusion model is a variational bound on the model's data log-likelihood. The complete inference procedure is presented in Alg. 2: starting from Gaussian noise, the diffusion process is reversed step by step by iteratively applying the update rule of Eq. 7. To perform generation with few denoising iterations, one can use the update equation introduced in Song et al. (2020a), which greatly improves the results of diffusion networks when sampling with few generative steps:

x_{n−1} = √(ᾱ_{n−1}) x̂_{0,n} + √(1 − ᾱ_{n−1} − σ̃_n²) ε_θ(x_n, ᾱ_n) + σ̃_n ε.   (8)

Intuitively, this equation changes the noise added in the generative steps: it uses a blend of the noise predicted from the previous state (ε_θ(x_n, ᾱ_n)) and random noise (ε).
One can write σ̃_n = η √(β_n (1 − ᾱ_{n−1}) / (1 − ᾱ_n)), so that a single parameter η controls the blend ratio. Besides improving results in the few-step sampling regime, this formulation yields deterministic generation when η = 0.
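The σ̃_n blend and the generative update of Eq. 8 can be sketched as follows; the linear schedule and the inputs x̂_{0,n}, ε̂ are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

T = 100
betas = np.linspace(1e-4, 0.02, T)   # illustrative linear schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)


def sigma_tilde(n, eta):
    """sigma_tilde_n = eta * sqrt(beta_n * (1 - abar_{n-1}) / (1 - abar_n))."""
    abar_prev = alpha_bars[n - 2] if n >= 2 else 1.0
    return eta * np.sqrt(betas[n - 1] * (1 - abar_prev) / (1 - alpha_bars[n - 1]))


def ddim_step(x0_hat, eps_hat, n, eta):
    """One generative step of Eq. 8, blending predicted and fresh noise."""
    s = sigma_tilde(n, eta)
    abar_prev = alpha_bars[n - 2] if n >= 2 else 1.0
    noise = rng.standard_normal(eps_hat.shape) if eta > 0 else 0.0
    return (np.sqrt(abar_prev) * x0_hat
            + np.sqrt(1.0 - abar_prev - s ** 2) * eps_hat
            + s * noise)


x0_hat = np.zeros((4, 4))
eps_hat = np.ones((4, 4))

# eta = 0 removes the random term entirely, so repeated calls agree exactly:
a = ddim_step(x0_hat, eps_hat, 50, eta=0.0)
b = ddim_step(x0_hat, eps_hat, 50, eta=0.0)
```

Note that β_n ≤ 1 − ᾱ_n for any schedule, so for η ≤ 1 the term under the second square root stays non-negative.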
## Summary
This paper explores the use of a non-Gaussian diffusion process for Diffusion Probabilistic Models. Unlike the original work by Ho et al., the authors replace the diffusion process with a Markov chain whose transition kernel is defined by a Gamma distribution. They show that the properties of the Gaussian distribution that make DPM training practical also hold for the Gamma distribution. The main motivation for using the Gamma distribution appears to be that it is more expressive than the Gaussian, thanks to its extra parameter. The authors experimentally verify the performance gains on a few datasets.
Denoising Diffusion Gamma Models
1 INTRODUCTION

Deep generative neural networks have shown significant progress over the last few years. The main architecture families for generation are: (i) VAE-based (Kingma & Welling, 2013), for example NVAE (Vahdat & Kautz, 2020) and VQ-VAE (Razavi et al., 2019); (ii) GAN-based (Goodfellow et al., 2014), for example StyleGAN (Karras et al., 2020) for vision applications and WaveGAN (Donahue et al., 2018) for speech; (iii) flow-based, for example Glow (Kingma & Dhariwal, 2018); (iv) autoregressive, for example WaveNet for speech (Oord et al., 2016); and (v) Diffusion Probabilistic Models (Sohl-Dickstein et al., 2015), for example Denoising Diffusion Probabilistic Models (DDPM) (Ho et al., 2020) and their implicit version DDIM (Song et al., 2020a). Models from this last family have shown significant progress in generation capabilities in recent years, e.g., (Chen et al., 2020; Kong et al., 2020b), and have achieved results comparable to state-of-the-art generation architectures for both images and speech. A DDPM is a Markov chain of latent variables. Two processes are modeled: (i) a diffusion process and (ii) a denoising process. The diffusion process gradually transforms data samples into Gaussian noise; the denoising process is its learned reverse and is used during inference to generate data samples starting from Gaussian noise. The denoising process can be conditioned on attributes to control the generated sample. To obtain high-quality synthesis, a large number of denoising steps (e.g., 1000) is used. A notable property of the diffusion process is a closed-form formulation of the noise that accumulates over diffusion steps. This allows sampling arbitrary states in the Markov chain of the diffusion process without calculating the previous steps.
In the Gaussian case, this property stems from the fact that the sum of Gaussian random variables is again a Gaussian random variable. Other distributions have similar closure properties. For example, the sum of two Gamma random variables that share the scale parameter is a Gamma random variable with the same scale. The Poisson distribution has a similar property; however, its discrete nature makes it less suitable for DDPM. In DDPM, the mean of the Gaussian random variables is set to zero. A Gamma random variable, with its two parameters (shape and scale), is better suited to fit the data than a zero-mean Gaussian random variable with a single degree of freedom (scale). Furthermore, the Gamma random variable generalizes other distributions, and many other distributions can be derived from it (Leemis & McQueston, 2008). The added modeling capacity of the Gamma random variable can help speed up the convergence of the DDPM model. Consider, for example, a conventional DDPM model trained with Gaussian noise on the CelebA dataset (Liu et al., 2015). The noise distribution throughout the diffusion process can be visualized by computing the histogram of the estimated residual noise in the generation process, ε̂ = (√(ᾱ_t) x_0 − x_t) / √(1 − ᾱ_t), i.e., the rescaled difference between x_0 and the image x_t after t DDPM steps. The model is a pretrained Gaussian DDPM CelebA (64x64) model. [Figure 1: (a) a Gaussian fitted to the histogram of a typical image after t = 50 steps; (b) a Gamma distribution fitted to the same histogram; (c) the fitting error of the Gaussian and Gamma distributions, measured as the MSE between the histogram and the fitted probability density function; each point is the average over the generation of 100 images, and the vertical error bars denote the standard deviation.]
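Both points above can be checked numerically on synthetic data: the closure of the Gamma family under addition (shared scale), and the better fit of a three-parameter Gamma to a skewed, zero-mean residual sample. The residual sample below is synthetic, and the Gamma is fitted by the method of moments, not by whatever procedure was used for Figure 1; both are assumptions for illustration:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Closure under addition: Gamma(k1, theta) + Gamma(k2, theta) ~ Gamma(k1 + k2, theta).
k1, k2, theta = 2.0, 3.0, 0.5
s = rng.gamma(k1, theta, 1_000_000) + rng.gamma(k2, theta, 1_000_000)
# Moment sanity check against Gamma(5, 0.5): mean k*theta = 2.5, variance k*theta^2 = 1.25.

# Fitting a Gaussian vs. a shifted Gamma to a skewed, zero-mean "residual" sample.
k = 1.5
eps_hat = rng.gamma(k, 1.0, 100_000) - k          # synthetic stand-in for the residual noise

hist, edges = np.histogram(eps_hat, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

mu, sd = eps_hat.mean(), eps_hat.std()
norm_pdf = np.exp(-0.5 * ((centers - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Three-parameter Gamma fit by the method of moments (skewness of Gamma = 2/sqrt(shape)).
skew = np.mean(((eps_hat - mu) / sd) ** 3)
shape = (2.0 / skew) ** 2
scale = sd / math.sqrt(shape)
loc = mu - shape * scale
z = (centers - loc) / scale
gamma_pdf = np.where(
    z > 0,
    np.exp((shape - 1) * np.log(np.maximum(z, 1e-300)) - z - math.lgamma(shape)) / scale,
    0.0,
)

mse_norm = np.mean((hist - norm_pdf) ** 2)
mse_gamma = np.mean((hist - gamma_pdf) ** 2)
print(mse_norm, mse_gamma)   # the Gamma fit has the smaller error on this skewed sample
```

The MSE comparison mirrors the quantity plotted in Fig. 1(c): the histogram of a skewed residual is matched much more closely by the Gamma density than by any Gaussian.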
ε̂ = (√(ᾱ_t) x_0 − x_t) / √(1 − ᾱ_t), where ᾱ_t is the noise schedule, x_0 is the data point, and x_t is the estimated state at timestep t, as can be derived from Eq. 4 of Song et al. (2020a). Both a Gaussian and a Gamma distribution can then be fitted to this histogram, as shown in Fig. 1(a, b). As can be seen, the Gamma distribution provides a better fit to the estimated residual noise ε̂. Moreover, Fig. 1(c) presents the mean fitting error between the histogram and the fitted probability density function. Evidently, the Gamma distribution is a better fit than the Gaussian: while the model was trained to estimate Gaussian noise, at inference time it has to address a differently distributed error. In this paper, we investigate the non-Gaussian Gamma noise distribution. As noted, this distribution fits the histogram of the generation error better than the Gaussian distribution, and it also has favorable properties, such as its behavior under addition and scalar multiplication. The proposed models maintain the ability to sample arbitrary states of the diffusion process without calculating the previous steps. Our results are demonstrated in two major domains: vision and audio. In the vision domain, the proposed method provides a better FID score for generated images. For speech data, we show that the proposed method improves various measures, such as Perceptual Evaluation of Speech Quality (PESQ) and Short-Time Objective Intelligibility (STOI).

2 RELATED WORK

In their seminal work, Sohl-Dickstein et al. (2015) introduced the Diffusion Probabilistic Model. This model has been applied to various domains, such as time series and images. Its main drawback is that it needs up to thousands of iterative steps to generate a valid data sample.
This paper formulates a denoising diffusion probabilistic model, but with Gamma-distributed noise instead of Gaussian noise. The claim is that the Gamma noise model shares many of the useful properties of the Gaussian model (e.g., a variational bound on the data log-likelihood, and the fact that repeated application of Gamma noise remains Gamma-distributed). The authors show empirically that the Gamma model produces superior results on image generation (e.g., on the CelebA and LSUN Church datasets) and on speech generation (e.g., on the LJ dataset).
How to measure deep uncertainty estimation performance and which models are naturally better at providing it
1 INTRODUCTION

Deep neural networks (DNNs) show great performance in a wide variety of application domains, including computer vision, natural language understanding, and audio processing. Successful deployment of these models, however, critically depends on providing effective uncertainty estimates for their predictions, in the form of some kind of selective prediction or a probabilistic confidence score. But how should we evaluate the performance of uncertainty estimation? Let us consider two classification models for the stock market that predict whether a stock's value is about to increase, decrease, or remain neutral (three-class classification). Suppose that model A has a 95% true accuracy and generates a confidence score of 0.95 on every prediction (even on misclassified instances); model B has a 40% true accuracy, but always gives a confidence score of 0.6 on correct predictions and 0.4 on incorrect ones. Model B can easily be utilized to generate perfect investment decisions: using selective prediction (Geifman & El-Yaniv, 2017), Model B rejects all investments in stocks whenever the confidence score is 0.4. While model A offers many more investment opportunities, each of its predictions carries a 5% risk of failure. Among the various metrics proposed for evaluating the performance of uncertainty estimation are: Area Under the Receiver Operating Characteristic (AUROC or AUC), Area Under the Risk-Coverage curve (AURC) (Geifman et al., 2018), selective risk or coverage for a selective accuracy constraint (SAC), Negative Log-Likelihood (NLL), Expected Calibration Error (ECE), which is often used for evaluating a model's calibration (see Section 2), and the Brier score (Brier, 1950). All these metrics are well known and often used for comparing the uncertainty estimation performance of models (Moon et al., 2020; Nado et al., 2021; Maddox et al., 2019; Lakshminarayanan et al., 2017). Somewhat surprisingly, NLL, Brier, AURC, and ECE all fail to reveal the uncertainty superiority of Model B in our investment example (see Appendix A for the calculations). Both AUROC and SAC, on the other hand, reveal the advantage of Model B perfectly (see Appendix A for details). It is not hard to construct counterexamples where these two metrics fail and others (e.g., ECE) succeed. The risk-coverage (RC) curve (El-Yaniv & Wiener, 2010) is perhaps one of the most informative and practical representations of the overall uncertainty profile of a given model. In general, though, two RC curves are not necessarily comparable if one does not fully dominate the other (see Figure 2). The advantage of scalar metrics such as the above is that they summarize the model's overall uncertainty estimation behavior by reducing it to a single number. When not carefully chosen, however, these reductions can lose vital information about the problem (for example, reducing an RC curve to an AURC does not show that Model B attains zero risk whenever the coverage is at most 0.4). Thus, the choice of the "correct" single scalar performance metric unfortunately must be task-specific. When comparing the uncertainty estimation performance of deep architectures that exhibit different accuracies, we find that AUROC and SAC can effectively "normalize" the accuracy differences that plague the usefulness of other metrics (see Section 2). This normalization is essential to our study, in which we compare the uncertainty performance of hundreds of models that can differ greatly in their accuracies. In applications where risk (or coverage) constraints are dictated (Geifman & El-Yaniv, 2017), the most straightforward and natural metric is the SAC (or selective risk), which directly measures the coverage (resp. risk) at the required level of risk (resp. coverage) constraint.
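AUROC can be computed directly from its rank-based (Mann-Whitney) formulation; the sketch below reproduces the Model A / Model B example from the introduction (the sample counts are illustrative):

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the Mann-Whitney formulation: P(score_pos > score_neg) + 0.5 * P(tie).
    labels: 1 = correct prediction, 0 = incorrect prediction."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

# Model A: 95% accuracy, confidence 0.95 on every prediction.
a = auroc(np.full(100, 0.95), np.array([1] * 95 + [0] * 5))
# Model B: 40% accuracy, confidence 0.6 on correct and 0.4 on incorrect predictions.
b = auroc(np.array([0.6] * 40 + [0.4] * 60), np.array([1] * 40 + [0] * 60))
print(a, b)   # 0.5 (uninformative ranking) vs 1.0 (perfect ranking)
```

Model A's constant confidence carries no ranking information (AUROC 0.5), while Model B's confidences separate correct from incorrect predictions perfectly (AUROC 1.0), matching the argument above.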
We demonstrate this in Appendix J, evaluating which models give the most coverage for a SAC of 99%. Sometimes, however, such constraints are unknown in advance, or even irrelevant, e.g., when the constructed model should serve a variety of risk-constraint use cases, or when the model may not be allowed to abstain from predicting at all. In this paper we conduct a comprehensive study of DNNs' ability to estimate uncertainty by evaluating 484 models pretrained on ImageNet (Deng et al., 2009), taken from the PyTorch and timm repositories (Paszke et al., 2019; Wightman, 2019). We identify the main factors contributing to or harming the confidence ranking of predictions ("ranking" for short), calibration, and selective prediction. Furthermore, we also consider the source of uncertainty as either internal (stemming from the aleatoric or epistemic uncertainty of the model (Kiureghian & Ditlevsen, 2009)) or external (originating from unseen or unknown class-out-of-distribution (C-OOD) data) and evaluate these models in multiple ways. After first evaluating models solely on in-distribution (ID) data, we then define and test two ways of evaluating C-OOD data, each of which also divides the data into groups by how difficult it is for the model to recognize instances as external. Our study led to quite a few new observations and conclusions: (1) Training regimes incorporating any kind of knowledge distillation (KD) (Hinton et al., 2015) lead to DNNs with improved uncertainty estimation performance by any metric, in both internal and external settings (i.e., also leading to better C-OOD detection), more than any other training trick (such as pretraining on a larger dataset, adversarial training, etc.). (2) Some architectures are naturally superb at all aspects of uncertainty estimation and in all settings, e.g., vision transformers (ViTs) (Dosovitskiy et al., 2020; Steiner et al., 2021), while other architectures tend to perform worse, e.g., EfficientNetV2 and GENet (Tan & Le, 2021; Lin et al., 2020). These results are visualized in Figure 1. (3) The superiority of ViTs remains even when the comparison accounts for model size, meaning that at any size, ViTs outperform the competition in uncertainty estimation performance, as visualized in Figures 9 and 10 in Appendix B. (4) The simple post-training calibration method of temperature scaling (Guo et al., 2017), which is known to improve ECE, for the most part also improves ranking (AUROC) and selective prediction; not only does it calibrate the probabilistic estimate for each individual instance, it also improves the partial order over all instances induced by those estimates, pushing instances more likely to be correct toward higher confidence than instances less likely to be correct (see Section 3). (5) Contrary to previous work by Guo et al. (2017), we observe that while there is a strong correlation between accuracy (or number of parameters) and ECE or AUROC within each specific family of models sharing an architecture, the correlation flips between strongly negative and strongly positive depending on the type of architecture being observed. For example, as ViT architectures increase in size and accuracy, their ECE deteriorates while their AUROC improves; the exact opposite can be observed in XCiTs (El-Nouby et al., 2021), whose ECE improves with size while their AUROC deteriorates (see Appendix G). (6) The best model in terms of AUROC or SAC is not always the best in terms of calibration, as illustrated in Figure 1, and the trade-off should be considered when choosing a model based on its application.
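Observation (4) concerns temperature scaling. A minimal sketch of the post-hoc procedure follows, using a grid search over T in place of the L-BFGS optimization of Guo et al. (2017), and synthetic overconfident logits as a stand-in for a real validation set (both are simplifying assumptions):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    """Negative log-likelihood of softmax(logits / T)."""
    p = softmax(logits / T)
    return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Pick the single scalar T minimizing validation NLL."""
    return grid[np.argmin([nll(logits, labels, T) for T in grid])]

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, 2000)
# Synthetic overconfident model: a large margin on the true class ~70% of the time.
logits = rng.standard_normal((2000, 10))
mask = rng.random(2000) < 0.7
logits[np.arange(2000), labels] += np.where(mask, 8.0, 0.0)

T = fit_temperature(logits, labels)
print(T)   # T > 1: dividing the logits by T softens the overconfident softmax
```

Since T is a single scalar that rescales all logits equally, it cannot change the argmax prediction; it can, however, change the confidence values, which is why it affects calibration (and, per observation (4), often ranking as well).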
Due to lack of space, a number of additional interesting observations are briefly mentioned in the paper without supporting empirical evidence (which is provided in the appendix).

2 HOW TO EVALUATE DEEP UNCERTAINTY ESTIMATION PERFORMANCE

Let X be the input space and Y the label space, and let P(X, Y) be an unknown distribution over X × Y. A model f is a prediction function f : X → Y, and its predicted label for an image x is denoted ŷ_f(x). The model's true risk w.r.t. P is R(f|P) = E_{P(X,Y)}[ℓ(f(x), y)], where ℓ : Y × Y → R⁺ is a given loss function, for example, the 0/1 loss for classification. Given a labeled set S_m = {(x_i, y_i)}_{i=1}^{m} ⊆ (X × Y)^m sampled i.i.d. from P(X, Y), the empirical risk of model f is r̂(f|S_m) ≜ (1/m) Σ_{i=1}^{m} ℓ(f(x_i), y_i). Following Geifman et al. (2018), for a given model f we define a confidence score function κ(x, ŷ|f), where x ∈ X and ŷ ∈ Y is the model's prediction for x, as follows. The function κ should quantify confidence in the prediction ŷ for the input x, based on signals from model f. It should induce a partial order over instances in X, and is not required to distinguish between points with the same score. The most common and well-known κ function for a classification model f (with softmax at its last layer) is its softmax response value: κ(x, ŷ|f) ≜ f(x)_ŷ (Cordella et al., 1995; De Stefano et al., 2000). While this is the main κ we evaluate, we also test the popular uncertainty estimation technique of Monte-Carlo dropout (MC-Dropout) (Gal & Ghahramani, 2016), which is motivated by Bayesian reasoning. Although these methods use the direct output of f, κ could also be a different model, unrelated to f and unable to affect f's predictions.
Note that to enable a probabilistic interpretation, κ can only be calibrated if its values reside in [0, 1], whereas for ranking and selective prediction any value in R can be used. A selective model f (El-Yaniv & Wiener, 2010; Chow, 1957) uses a selection function g : X → {0, 1} as a binary selector for f, enabling it to abstain from giving predictions for certain inputs. g can be defined by a threshold θ on the values of a κ function such that g_θ(x|κ, f) = 1[κ(x, ŷ_f(x)|f) > θ]. The performance of a selective model is measured using coverage and risk, where coverage, defined as φ(f, g) = E_P[g(x)], is the probability mass of the non-rejected instances in X. The selective risk of the selective model (f, g) is defined as R(f, g) ≜ E_P[ℓ(f(x), y) g(x)] / φ(f, g). These quantities can be evaluated empirically over a finite labeled set S_m, with the empirical coverage defined as φ̂(f, g|S_m) = (1/m) Σ_{i=1}^{m} g(x_i), and the empirical selective risk defined as r̂(f, g|S_m) ≜ ((1/m) Σ_{i=1}^{m} ℓ(f(x_i), y_i) g(x_i)) / φ̂(f, g|S_m). Similarly, SAC is defined as the largest coverage available for a specific accuracy constraint. The behavior of a κ function for selective prediction can be inspected visually using an RC curve, which shows the selective risk as a function of coverage, measured on some chosen test set; see Figure 2 for an example. The AURC and E-AURC metrics were defined by Geifman et al. (2018) to quantify the selective quality of κ functions via a single number, with AURC defined as the area under the RC curve. AURC, however, is very sensitive to the model's accuracy, and E-AURC was suggested in an attempt to mitigate this. The latter also suffers from sensitivity to accuracy, as we demonstrate in Appendix C.
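The empirical coverage and selective risk defined above take only a few lines to compute; the sketch below uses the 0/1 loss and the toy Model B scores from the introduction:

```python
import numpy as np

def selective_metrics(kappa, correct, theta):
    """Empirical coverage phi_hat and selective risk r_hat for threshold theta,
    with 0/1 loss and selection g(x) = 1[kappa(x) > theta]."""
    kappa = np.asarray(kappa, dtype=float)
    correct = np.asarray(correct)
    g = kappa > theta
    coverage = g.mean()
    if coverage == 0:
        return 0.0, 0.0                       # nothing accepted: define risk as 0
    risk = ((1 - correct) * g).mean() / coverage
    return coverage, risk

# Model B from the introduction: kappa = 0.6 on correct, 0.4 on incorrect predictions.
kappa = np.array([0.6] * 40 + [0.4] * 60)
correct = np.array([1] * 40 + [0] * 60)

print(selective_metrics(kappa, correct, theta=0.0))   # full coverage: (1.0, 0.6)
print(selective_metrics(kappa, correct, theta=0.5))   # (0.4, 0.0): zero selective risk
```

Sweeping θ over the observed κ values and plotting risk against coverage yields exactly the RC curve described above.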
Let us consider the two models in Figure 2 for risk-sensitive deployment: EfficientNetV2-XL (Tan & Le, 2021) and ViT-B/32-SAM (Chen et al., 2021a). While the former has better overall accuracy and AURC (metrics that could lead us to believe it is best for our needs), it cannot guarantee a Top-1 ImageNet selective accuracy above 95% at any coverage. ViT-B/32-SAM, on the other hand, provides accuracies above 95% for all coverages below 50%. When specific coverages are required, the most direct metric is the matching selective risk, by which we can select the model offering the best performance for our task. If instead a specific range of coverages is specified, we could measure the area under the RC curve over those coverages, AURC_C(κ, f|S_m) = (1/|C|) Σ_{c∈C} r̂(f, g_c|S_m), with C being the required coverages. Lastly, if a certain accuracy constraint is specified, the chosen model should be the one providing the largest coverage under that constraint (the largest coverage for a certain SAC). Often these requirements are not known in advance, or can change as a result of changing circumstances or individual needs. Moreover, using metrics sensitive to accuracy, such as AURC, makes designing architectures and methods to improve κ very hard, since an improvement in such metrics could be attributed either to an increase in overall accuracy (if one occurred) or to a real improvement in the model's "metacognition". Finally, some tasks might not allow the model to abstain from predicting at all, but instead require interpretable and well-calibrated probabilities of correctness, which can be measured using ECE.
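ECE, mentioned above as the calibration metric for models that may not abstain, can be sketched with equal-width confidence bins (one common binning scheme among several):

```python
import numpy as np

def ece(confidences, correct, n_bins=10):
    """Expected Calibration Error with equal-width bins:
    sum over bins b of (|B_b| / N) * |accuracy(B_b) - mean confidence(B_b)|."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            total += in_bin.mean() * gap
    return total

# Model A from Section 1: 95% accuracy, confidence 0.95 everywhere.
conf_a = np.full(100, 0.95)
corr_a = np.array([1] * 95 + [0] * 5)
print(ece(conf_a, corr_a))   # ≈ 0: Model A is perfectly calibrated
```

This makes the introduction's point concrete: Model A has an ECE of essentially zero even though its confidences carry no ranking information, which is exactly why ECE alone cannot reveal Model B's selective-prediction advantage.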
The paper provides an empirical comparison of uncertainty estimates obtained from 484 deep neural networks (DNN), trained for image classification tasks on the ImageNet dataset. They compared uncertainty estimation performance of different architectures and training strategies (knowledge distillation) on many quantitative metrics such as AUC-ROC on classification tasks, Area under the risk coverage curve (AURC), etc. The authors finally summarize their findings on what architectures are best at providing uncertainty estimations and how to compare different methods on OoD detection and uncertainty in in-distribution samples.
How to measure deep uncertainty estimation performance and which models are naturally better at providing it
1 INTRODUCTION . Deep neural networks ( DNNs ) show great performance in a wide variety of application domains , including computer vision , natural language understanding and audio processing . Successful deployment of these models , however , is critically dependent on providing effective uncertainty estimates for their predictions , in the form of some kind of selective prediction or a probabilistic confidence score . But how should we evaluate the performance of uncertainty estimation ? Let us consider two classification models for the stock market that predict whether a stock ’ s value is about to increase , decrease or remain neutral ( three-class classification ) . Suppose that model A has a 95 % true accuracy , and generates a confidence score of 0.95 on every prediction ( even on misclassified instances ) ; model B has a 40 % true accuracy , but always gives a confidence score of 0.6 on correct predictions , and 0.4 on incorrect ones . Model B can easily be utilized to generate perfect investment decisions . Using selective prediction ( Geifman & El-Yaniv , 2017 ) , Model B will reject all investments in stocks whenever the confidence score is 0.4 . While model A offers many more investment opportunities , each of its predictions carries a 5 % risk of failure . Among the various metrics proposed for evaluating the performance of uncertainty estimation are : Area Under the Receiver Operating Characteristic ( AUROC or AUC ) , Area Under the Risk-Coverage curve ( AURC ) ( Geifman et al. , 2018 ) , selective risk or coverage for a selective accuracy constraint ( SAC ) , Negative Log-likelihood ( NLL ) , Expected Calibration Error ( ECE ) , which is often used for evaluating a model ’ s calibration ( see Section 2 ) , and Brier score ( Brier , 1950 ) . All these metrics are well known and are often used for comparing the uncertainty estimation performance of models ( Moon et al. , 2020 ; Nado et al. , 2021 ; Maddox et al.
, 2019 ; Lakshminarayanan et al. , 2017 ) . Somewhat surprisingly , NLL , Brier , AURC , and ECE all fail to reveal the uncertainty superiority of Model B in our investment example ( see Appendix A for the calculations ) . Both AUROC and SAC , on the other hand , reveal the advantage of Model B perfectly ( see Appendix A for details ) . It is not hard to construct counterexamples where these two metrics fail and others ( e.g. , ECE ) succeed . The risk-coverage ( RC ) curve ( El-Yaniv & Wiener , 2010 ) is perhaps one of the most informative and practical representations of the overall uncertainty profile of a given model . In general , though , two RC curves are not necessarily comparable if one does not fully dominate the other ( see Figure 2 ) . The advantage of scalar metrics such as the above is that they summarize the model ’ s overall uncertainty estimation behavior by reducing it to a single number . When not carefully chosen , however , these reductions can result in a loss of vital information about the problem ( for example , reducing an RC curve to an AURC does not show that Model B has an optimal risk of 0 whenever the coverage is at most 0.4 ) . Thus , the choice of the “ correct ” single scalar performance metric unfortunately must be task-specific . When comparing the uncertainty estimation performance of deep architectures that exhibit different accuracies , we find that AUROC and SAC can effectively “ normalize ” the accuracy differences that plague the usefulness of other metrics ( see Section 2 ) . This normalization is essential to our study , in which we compare the uncertainty performance of hundreds of models that can greatly differ in their accuracies . In applications where risk ( or coverage ) constraints are dictated ( Geifman & El-Yaniv , 2017 ) , the most straightforward and natural metric is the SAC ( or selective risk ) , which directly measures the coverage ( resp. , risk ) at the required level of risk ( resp. , coverage ) constraint .
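The claim that AUROC separates the two toy models is easy to verify numerically . A small sketch ( the sampling setup and function names are ours , for illustration ) estimates AUROC as the probability that a correct prediction receives a higher confidence than an incorrect one , counting ties as 1/2 :

```python
import numpy as np

def auroc(confidence, correct):
    """AUROC of correctness vs confidence: probability that a correct
    prediction outranks an incorrect one (ties counted as 1/2)."""
    conf = np.asarray(confidence, dtype=float)
    corr = np.asarray(correct, dtype=bool)
    pos, neg = conf[corr], conf[~corr]
    # exhaustive pairwise comparison; fine for toy sample sizes
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

rng = np.random.default_rng(0)
n = 1000
# Model A: 95% accurate, confidence 0.95 on every prediction
corr_a = rng.random(n) < 0.95
conf_a = np.full(n, 0.95)
# Model B: 40% accurate, confidence 0.6 on correct and 0.4 on incorrect predictions
corr_b = rng.random(n) < 0.40
conf_b = np.where(corr_b, 0.6, 0.4)

print(auroc(conf_a, corr_a))  # 0.5 (constant confidence carries no ranking information)
print(auroc(conf_b, corr_b))  # 1.0 (perfect ranking)
```

Model A ’ s constant confidence makes every correct/incorrect pair a tie , giving AUROC 0.5 , while Model B ’ s scores separate the two groups perfectly .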
We demonstrate this in Appendix J , evaluating which models give the most coverage for a SAC of 99 % . Sometimes , however , such constraints are unknown in advance , or even irrelevant , e.g. , the constructed model should serve a variety of risk constraint use cases , or the model may not be allowed to abstain from predicting at all . In this paper we conduct a comprehensive study of DNNs ’ ability to estimate uncertainty by evaluating 484 models pretrained on ImageNet ( Deng et al. , 2009 ) , taken from the PyTorch and timm repositories ( Paszke et al. , 2019 ; Wightman , 2019 ) . We identify the main factors contributing to or harming the confidence ranking of predictions ( “ ranking ” for short ) , calibration and selective prediction . Furthermore , we also consider the source of uncertainty as either internal ( stemming from either the aleatoric or epistemic uncertainty of the model ( Kiureghian & Ditlevsen , 2009 ) ) or external ( originating from unseen or unknown class-out-of-distribution ( C-OOD ) data ) and evaluate these models in multiple ways . After first evaluating models solely on in-distribution ( ID ) data , we then define and test two ways of evaluating C-OOD data , each of which also divides the data into different groups by how difficult it is for the model to distinguish instances as external . Our study led to quite a few new observations and conclusions : ( 1 ) Training regimes incorporating any kind of knowledge distillation ( KD ) ( Hinton et al. , 2015 ) lead to DNNs with improved uncertainty estimation performance by any metric , in both internal and external settings ( i.e. , leading also to better C-OOD detection ) , more so than any other training trick ( such as pretraining on a larger dataset , adversarial training , etc . ) . ( 2 ) Some architectures are naturally superb at all aspects of uncertainty estimation and in all settings , e.g. , vision transformers ( ViTs ) ( Dosovitskiy et al.
, 2020 ; Steiner et al. , 2021 ) , while other architectures tend to perform worse , e.g. , EfficientNetV2 and GENet ( Tan & Le , 2021 ; Lin et al. , 2020 ) . These results are visualized in Figure 1 . ( 3 ) The superiority of ViTs remains even when the comparison considers the models ’ sizes , meaning that for any size , ViTs outperform the competition in uncertainty estimation performance , as visualized in Appendix B in Figures 9 and 10 . ( 4 ) The simple post-training calibration method of temperature scaling ( Guo et al. , 2017 ) , which is known to improve ECE , for the most part also improves ranking ( AUROC ) and selective prediction , meaning that not only does it calibrate the probabilistic estimate for each individual instance , but it also improves the partial order over all instances induced by those estimates , pushing instances more likely to be correct towards higher confidence than instances less likely to be correct ( see Section 3 ) . ( 5 ) Contrary to previous work by Guo et al . ( 2017 ) , we observe that while there is a strong correlation between accuracy / number of parameters and ECE or AUROC within each specific family of models of the same architecture , the correlation flips between strongly negative and strongly positive depending on the type of architecture being observed . For example , as ViT architectures increase in size and accuracy , their ECE deteriorates while their AUROC improves . The exact opposite , however , can be observed in XCiTs ( El-Nouby et al. , 2021 ) , whose ECE improves with size while their AUROC deteriorates ( see Appendix G ) . ( 6 ) The best model in terms of AUROC or SAC is not always the best in terms of calibration , as illustrated in Figure 1 , and the trade-off should be considered when choosing a model based on its application .
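Observation ( 4 ) , that temperature scaling can change not only the calibration but also the confidence ranking across instances , is easy to see with three or more classes , since the softmax response depends on all logit gaps rather than only the largest one . A small sketch ( the logits are chosen by us purely for illustration ) :

```python
import numpy as np

def softmax_response(logits, T=1.0):
    """Max softmax probability after temperature-scaling the logits by T."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                      # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return p.max()

a = [10.0, 9.5, 0.0]   # runner-up class close, third class far away
b = [10.0, 9.4, 9.4]   # both competing classes close

# At T = 1 instance a looks more confident than instance b ...
print(softmax_response(a, T=1.0) > softmax_response(b, T=1.0))  # True
# ... but a sharper temperature flips the ranking of the two instances.
print(softmax_response(a, T=0.1) < softmax_response(b, T=0.1))  # True
```

Because the induced ordering over instances can change , temperature scaling can affect AUROC and selective prediction , not just ECE , which is exactly the effect the observation describes .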
Due to lack of space , a number of additional interesting observations are briefly mentioned in the paper without supporting empirical evidence ( which is provided in the appendix ) . 2 HOW TO EVALUATE DEEP UNCERTAINTY ESTIMATION PERFORMANCE . Let X be the input space and Y be the label space . Let P ( X , Y ) be an unknown distribution over X × Y . A model f is a prediction function f : X → Y , and its predicted label for an image x is denoted by ŷ_f ( x ) . The model ’ s true risk w.r.t . P is R ( f | P ) = E_{P ( X , Y )} [ ℓ ( f ( x ) , y ) ] , where ℓ : Y × Y → R+ is a given loss function , for example , 0/1 loss for classification . Given a labeled set S_m = { ( x_i , y_i ) }_{i=1}^{m} ⊆ ( X × Y ) , sampled i.i.d . from P ( X , Y ) , the empirical risk of model f is r̂ ( f | S_m ) ≜ ( 1 / m ) ∑_{i=1}^{m} ℓ ( f ( x_i ) , y_i ) . Following Geifman et al . ( 2018 ) , for a given model f we define a confidence score function κ ( x , ŷ | f ) , where x ∈ X , and ŷ ∈ Y is the model ’ s prediction for x , as follows . The function κ should quantify confidence in the prediction of ŷ for the input x , based on signals from model f . This function should induce a partial order over instances in X , and is not required to distinguish between points with the same score . The most common and well-known κ function for a classification model f ( with softmax at its last layer ) is its softmax response value : κ ( x , ŷ | f ) ≜ f ( x )_ŷ ( Cordella et al. , 1995 ; De Stefano et al. , 2000 ) . While this is the main κ we evaluate , we also test the popular uncertainty estimation technique of Monte-Carlo dropout ( MC-Dropout ) ( Gal & Ghahramani , 2016 ) , which is motivated by Bayesian reasoning . Although these methods use the direct output from f , κ could be a different model unrelated to f and unable to affect f ’ s predictions .
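The softmax-response κ is straightforward to compute from a classifier ’ s logits ; a minimal sketch ( function and variable names are ours ) :

```python
import numpy as np

def softmax_response(logits):
    """Softmax-response confidence: the predicted class and its softmax
    probability, i.e. kappa(x, yhat | f) = f(x)_yhat."""
    z = logits - logits.max(axis=-1, keepdims=True)      # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    yhat = p.argmax(axis=-1)
    kappa = p[np.arange(len(p)), yhat]
    return yhat, kappa

logits = np.array([[2.0, 1.0, 0.1],    # peaked: confidently class 0
                   [0.2, 0.1, 0.15]])  # near-uniform: low confidence
yhat, kappa = softmax_response(logits)
print(yhat)   # [0 0]
print(kappa)  # ~[0.66 0.35] -- higher for the peaked distribution
```

Note that κ only needs to induce a partial order over instances , so any monotone transformation of these scores ranks instances identically .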
Note that to enable a probabilistic interpretation , κ can only be calibrated if its values reside in [ 0 , 1 ] , whereas for ranking and selective prediction any value in R can be used . A selective model f ( El-Yaniv & Wiener , 2010 ; Chow , 1957 ) uses a selection function g : X → { 0 , 1 } to serve as a binary selector for f , enabling it to abstain from giving predictions for certain inputs . g can be defined by a threshold θ on the values of a κ function such that g_θ ( x | κ , f ) = 1 [ κ ( x , ŷ_f ( x ) | f ) > θ ] . The performance of a selective model is measured using coverage and risk , where coverage , defined as φ ( f , g ) = E_P [ g ( x ) ] , is the probability mass of the non-rejected instances in X . The selective risk of the selective model ( f , g ) is defined as R ( f , g ) ≜ E_P [ ℓ ( f ( x ) , y ) g ( x ) ] / φ ( f , g ) . These quantities can be evaluated empirically over a finite labeled set S_m , with the empirical coverage defined as φ̂ ( f , g | S_m ) = ( 1 / m ) ∑_{i=1}^{m} g ( x_i ) , and the empirical selective risk defined as r̂ ( f , g | S_m ) ≜ [ ( 1 / m ) ∑_{i=1}^{m} ℓ ( f ( x_i ) , y_i ) g ( x_i ) ] / φ̂ ( f , g | S_m ) . Similarly , SAC is defined as the largest coverage available for a specific accuracy constraint . A way to visually inspect the behavior of a κ function for selective prediction is the RC curve , a curve showing the selective risk as a function of coverage , measured on some chosen test set ; see Figure 2 for an example . The AURC and E-AURC metrics were defined by Geifman et al . ( 2018 ) for quantifying the selective quality of κ functions via a single number , with AURC being defined as the area under the RC curve . AURC , however , is very sensitive to the model ’ s accuracy , and in an attempt to mitigate this , E-AURC was suggested . The latter also suffers from sensitivity to accuracy , as we demonstrate in Appendix C .
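The empirical RC curve and AURC under 0/1 loss can be computed with a single sort ; a minimal sketch ( implementation choices are ours : the threshold θ is swept over all m sample points ) :

```python
import numpy as np

def rc_curve(confidence, correct):
    """Empirical risk-coverage curve under 0/1 loss: sort by decreasing
    confidence and evaluate the selective risk at every coverage level."""
    order = np.argsort(-np.asarray(confidence))
    errors = 1.0 - np.asarray(correct, dtype=float)[order]
    n = len(errors)
    coverages = np.arange(1, n + 1) / n
    risks = np.cumsum(errors) / np.arange(1, n + 1)   # selective risk per coverage
    return coverages, risks

def aurc(confidence, correct):
    """Area under the RC curve, averaged over the n coverage levels."""
    _, risks = rc_curve(confidence, correct)
    return risks.mean()

conf = [0.9, 0.8, 0.7, 0.6]
corr = [1, 1, 0, 1]           # the third-most-confident prediction is wrong
cov, risk = rc_curve(conf, corr)
print(risk)                    # [0. 0. 0.333... 0.25]
print(aurc(conf, corr))        # ~0.146
```

The toy output shows the accuracy sensitivity discussed above : the same ranking quality produces a different AURC whenever the overall error rate changes .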
The paper presents an evaluation of different models in their capacity to reflect epistemic and aleatoric uncertainty and reviews different methods for measuring uncertainty estimation performance . The models are 484 classifiers trained on ImageNet . The analysis considers both in-distribution and out-of-distribution data . The conclusions are that models trained with knowledge distillation have improved uncertainty estimation performance , and that the vision transformer architecture has the best uncertainty estimation performance .
PEARL: Data Synthesis via Private Embeddings and Adversarial Reconstruction Learning
1 INTRODUCTION . Synthesizing data under differential privacy ( DP ) ( Dwork ( 2006 ; 2011 ) ; Dwork & Roth ( 2014 ) ) enables us to share the synthetic data and generative model with rigorous privacy guarantees . Particularly , DP approaches to data synthesis involving the use of deep generative models have received attention lately ( Takagi et al . ( 2020 ) ; Xie et al . ( 2018 ) ; Torkzadehmahani et al . ( 2019 ) ; Frigerio et al . ( 2019 ) ; Yoon et al . ( 2019 ) ; Chen et al . ( 2020 ) ; Harder et al . ( 2021 ) ) . Typically , the training of such models utilizes gradient sanitization techniques ( Abadi et al . ( 2016 ) ) that add noise to the gradient updates to preserve privacy . While such methods are conducive to deep learning , due to composability , each access to the data leads to degradation in privacy guarantees , and as a result , the number of training iterations is limited by the privacy budget . Recently , Harder et al . ( 2021 ) proposed DP-MERF , which first represents the sensitive data as random features in a DP manner and then learns a generator by minimizing the discrepancy between the ( fixed ) representation and generated data points . DP-MERF can iterate the learning process of the generator without further consuming the privacy budget ; however , it is limited in its learning and generalization capabilities due to the fixed representation . In this work , we seek a strategy for training deep generative models privately that resolves the aforementioned shortcomings and is practical in terms of privacy ( e.g. , usable image data at ε ≈ 1 ) . We propose a private learning framework called PEARL ( Private Embeddings and Adversarial Reconstruction Learning ) . In this framework , we have i ) no limitation on the number of learning iterations , and ii ) good reconstruction capability . Towards those preferable properties , our framework first obtains ( 1 ) an informative embedding of the sensitive data and ( 2 ) auxiliary information ( e.g.
, hyperparameters ) useful for training , both in a differentially private manner ; then ( 3 ) the generative model is trained implicitly like GANs via the private embedding and auxiliary information , where the learning is based on a stochastic procedure that generates data , and ( 4 ) a critic distinguishing between the real and generated data . The overview of PEARL is illustrated in Fig . 1 . As a concrete realization of PEARL , we first identify that the characteristic function ( CF ) representation of data can be sanitized as the private embedding of PEARL . Consequently , it is possible to train deep generative models using an appropriately defined metric measuring the discrepancy between the real ( but sanitized ) and generated data distributions based on the CF , without re-using the original data . As will be explained in detail in later Sections , the generative modelling approach using CFs also involves sampling “ frequencies ” from an ad hoc distribution to project the data to the embedding . It is desirable to optimize the sampling distribution to better represent the data as an embedding , but the naive way of optimizing it would require re-accessing the data via sampling , coming at a cost to the privacy budget . Hence , we also propose to incorporate a privacy-preserving critic to optimize the sampling strategy , which , through re-weighting , chooses the best representation from a fixed sample of frequencies without extra privacy costs . To this end , we propose the following minimax optimization training objective : inf_{θ ∈ Θ} sup_{ω ∈ Ω} ∑_{i=1}^{k} [ ω ( t_i ) / ω_0 ( t_i ) ] | Φ̃_{P_r} ( t_i ) − Φ̂_{Q_θ} ( t_i ) |² . ( 1 ) See later parts for notations and details . Theoretically , we show that our proposed objective has properties similar to those that are suited to training GANs , i.e. , continuity and differentiability in the generator ’ s parameters , and continuity in weak topology .
We also prove the consistency of our privacy-preserving sampling strategy in the asymptotic limit of infinite sampling . Empirical evaluations show that PEARL is able to generate high-quality synthetic data at reasonable privacy levels . Related works . Traditional methods of synthesizing data are mainly concerned with discrete data or data preprocessed to a discrete form ( Zhang et al . ( 2017 ) ; Qardaji et al . ( 2014 ) ; He et al . ( 2015 ) ; Chen et al . ( 2015 ) ; Cai et al . ( 2021 ) ; Zhang et al . ( 2021 ) ) , whereas we are interested in more general methods involving continuous data . Deep generative models under the DP setting are suitable for this type of task ( Takagi et al . ( 2020 ) ; Xie et al . ( 2018 ) ; Torkzadehmahani et al . ( 2019 ) ; Frigerio et al . ( 2019 ) ; Yoon et al . ( 2019 ) ; Chen et al . ( 2020 ) ; Harder et al . ( 2021 ) ) . The private training of deep generative models is usually performed using gradient sanitization methods . An exception is DP-MERF ( Harder et al . ( 2021 ) ) , which is closest to our work . There , random features used to approximate the maximum mean discrepancy ( MMD ) objective are privatized and utilized for training a generator . PEARL , which , as a realization , uses CFs , may be viewed as a generalization of DP-MERF . Additionally , PEARL has several distinctive features which are lacking in DP-MERF . The first lies in the introduction of a privacy-preserving critic , which leads to an improvement in performance . The second is the private selection of the parameter of the sampling distribution , which is also shown to be vital . Moreover , DP-MERF uses non-characteristic kernels when treating tabular data , in contrast to ours , which is characteristic and has convergence guarantees . We finally note that generative models using CFs , albeit only non-privately , have been explored before ( Ansari et al . ( 2020 ) ; Li et al . ( 2020 ) ) . Contributions .
Our contribution in this paper is three-fold : ( i ) We propose a general framework called PEARL , where , unlike gradient sanitization methods , the generator training process and iterations are unconstrained ; reliance on ad-hoc ( non-private ) hyperparameter tuning is reduced by extracting hyperparameters ( auxiliary information ) privately . ( ii ) We demonstrate a realization of our framework by making use of the characteristic function and an adversarial re-weighting objective . ( iii ) Our proposal has theoretical guarantees of performance , and empirical evaluations show that our approach outperforms competitors at reasonable levels of privacy ( ε ≈ 1 ) . 2 PRELIMINARIES . This Section gives a brief review of essential preliminaries about differential privacy , characteristic functions and the related notations . 2.1 DIFFERENTIAL PRIVACY . Definition 1 ( ( ε , δ ) -Differential Privacy ) . Given privacy parameters ε ≥ 0 and δ ≥ 0 , a randomized mechanism M : D → R with domain D and range R satisfies ( ε , δ ) -differential privacy ( DP ) if for any two adjacent inputs d , d′ ∈ D and for any subset of outputs S ⊆ R , the following holds : Pr [ M ( d ) ∈ S ] ≤ e^ε · Pr [ M ( d′ ) ∈ S ] + δ . ( 2 ) We next consider concrete ways of sanitizing certain outputs with DP . A typical paradigm of DP is applying the randomized mechanism M to a certain deterministic function f : D → R such that the output of f is DP . The noise magnitude added by M is determined by the sensitivity of f , defined as ∆f = sup_{d , d′ ∈ D} ‖ f ( d ) − f ( d′ ) ‖ , where ‖ · ‖ is a norm function defined on f ’ s output domain , and d and d′ are any adjacent pair of datasets . Laplacian and Gaussian mechanisms are the standard randomized mechanisms . We primarily utilize the Gaussian mechanism in this paper ( Dwork & Roth ( 2014 ) ) : Definition 2 ( Gaussian Mechanism ) . Let f : X → R be an arbitrary function with sensitivity ∆f .
The Gaussian Mechanism M_σ , parameterized by σ , adds noise to the output of f as follows : M_σ ( x ) = f ( x ) + N ( 0 , σ² I ) . ( 3 ) One of the most important properties of DP relevant to our work is the post-processing theorem ( Dwork & Roth ( 2014 ) ) : Theorem 1 ( Post-processing Theorem ) . Let M : D → R be ( ε , δ ) -DP and let f : R → R′ be an arbitrary randomized function . Then , f ◦ M : D → R′ is ( ε , δ ) -DP . It ensures that the DP-sanitized data can be re-used without further consuming the privacy budget . 2.2 CHARACTERISTIC FUNCTIONS . The characteristic function ( CF ) is widely utilized in statistics and probability theory , and is perhaps best known for its use in proving the central limit theorem ( Williams ( 1991 ) ) . The definition is as follows . Definition 3 ( Characteristic Function ) . Given a random variable X ∈ R^d with associated probability measure P , and t ∈ R^d , the corresponding characteristic function ( CF ) is given by Φ_P ( t ) = E_{x∼P} [ e^{i t·x} ] = ∫_{R^d} e^{i t·x} dP . ( 4 ) Here , i is the imaginary unit . From the signal processing point of view , this mathematical operation is equivalent to the Fourier transform , and Φ_P ( t ) is the Fourier transform at frequency t . Note that in practice we deal with the discrete approximation of CFs . That is , given a dataset with n i.i.d . samples { x_j }_{j=1}^{n} from P , the empirical CF is written as Φ̂_P ( t ) = ( 1 / n ) ∑_{j=1}^{n} e^{i t·x_j} . We next introduce the characteristic function distance ( CFD ) ( Heathcote ( 1972 ) ; Chwialkowski et al . ( 2015 ) ) : Definition 4 ( Characteristic Function Distance ) . Given two distributions P and Q of random variables residing in R^d , and a sampling distribution ω on t ∈ R^d , the squared characteristic function distance ( CFD ) between P and Q is computed as : C² ( P , Q ) = E_{t∼ω ( t )} [ | Φ_P ( t ) − Φ_Q ( t ) |² ] = ∫_{R^d} | Φ_P ( t ) − Φ_Q ( t ) |² ω ( t ) dt . ( 5 ) Notations . Let us make a short note on the notations before continuing .
Let k be the number of frequencies t drawn from ω , and let P be the probability measure of a random variable . We group the CFs associated with P at the different frequencies , ( Φ̂_P ( t_1 ) , … , Φ̂_P ( t_k ) )^⊤ , more compactly as φ̂_P ( x ) . To make the dependence of φ̂_P ( x ) on the sampled data explicit , we also use the following notation : φ̂_P ( x ) = ( 1 / n ) ∑_{j=1}^{n} φ̂_P ( x_j ) . We notice that ‖ φ̂_P ( x ) ‖_2 ≡ √( ∑_{m=1}^{k} | Φ̂_P ( t_m ) |² ) = √( ∑_{m=1}^{k} | ( 1 / n ) ∑_{l=1}^{n} e^{i t_m·x_l} |² ) ≤ √( ∑_{m=1}^{k} | ( 1 / n ) ∑_{l=1}^{n} 1 |² ) = √k , where the norm is taken over the ( complex ) frequency space . With a slight abuse of notation , we abbreviate φ̂_P as φ̂ when there is no ambiguity in the underlying probability measure associated with the CF .
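The empirical CF , a Monte-Carlo estimate of the squared CFD over the sampled frequencies , and the ‖ φ̂ ‖_2 ≤ √k bound can all be checked numerically ; a minimal sketch ( the sampling setup is ours , with ω taken to be a standard Gaussian ) :

```python
import numpy as np

def empirical_cf(x, ts):
    """Empirical characteristic function Phi_hat_P(t) = (1/n) sum_j exp(i t . x_j),
    evaluated at each of the k frequencies in ts (shape (k, d))."""
    return np.exp(1j * x @ ts.T).mean(axis=0)   # complex vector of length k

rng = np.random.default_rng(0)
d, k, n = 2, 64, 5000
ts = rng.normal(size=(k, d))          # frequencies t_1..t_k drawn from omega = N(0, I)
x = rng.normal(size=(n, d))           # n samples from P = N(0, I)
y = 2.0 + rng.normal(size=(n, d))     # n samples from a shifted distribution Q

phi_p = empirical_cf(x, ts)
phi_q = empirical_cf(y, ts)

# Monte-Carlo estimate of the squared CFD C^2(P, Q) over the k frequencies
cfd2 = np.mean(np.abs(phi_p - phi_q) ** 2)
print(cfd2)

# the norm bound from the text: each |Phi_hat(t_m)| <= 1, so ||phi_hat||_2 <= sqrt(k)
assert np.linalg.norm(phi_p) <= np.sqrt(k)
```

The estimate replaces the expectation over ω in Eq . ( 5 ) with an average over the k sampled frequencies , which is exactly the finite-sample form that the sanitized embedding works with .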
This paper studies an important topic in the field of data synthesis : how to train a private deep generative model without re-using the original data . The authors propose a new framework that uses deep generative models to synthesize data in a differentially private way . Unlike popular gradient sanitization methods , the framework proposed in this paper does not incur additional privacy costs per training iteration or impose constraints on the model . In addition , to avoid the problem of the privacy guarantee degrading as training iterations increase , the paper uses characteristic functions and an adversarial re-weighting objective . Both theoretical analysis and extensive experiments verify the performance of the proposed framework .
PEARL: Data Synthesis via Private Embeddings and Adversarial Reconstruction Learning
1 INTRODUCTION . Synthesizing data under differential privacy ( DP ) ( Dwork ( 2006 ; 2011 ) ; Dwork & Roth ( 2014 ) ) enables us to share the synthetic data and generative model with rigorous privacy guarantees . Particularly , DP approaches of data synthesis involving the use of deep generative models have received attention lately ( Takagi et al . ( 2020 ) ; Xie et al . ( 2018 ) ; Torkzadehmahani et al . ( 2019 ) ; Frigerio et al . ( 2019 ) ; Yoon et al . ( 2019 ) ; Chen et al . ( 2020 ) ; Harder et al . ( 2021 ) ) . Typically , the training of such models utilizes gradient sanitization techniques ( Abadi et al . ( 2016 ) ) that add noises to the gradient updates to preserve privacy . While such methods are conducive to deep learning , due to composability , each access to data leads to degradation in privacy guarantees , and as a result , the training iteration is limited by the privacy budget . Recently , Harder et al . ( 2021 ) has proposed DP-MERF , which first represents the sensitive data as random features in a DP manner and then learns a generator by minimizing the discrepancy between the ( fixed ) representation and generated data points . DP-MERF can iterate the learning process of the generator without further consuming the privacy budget ; however , it is limited in the learning and generalization capabilities due to its fixed representation . In this work , we seek a strategy of training deep generative models privately that is able to resolve the aforementioned shortcomings , and is practical in terms of privacy ( e.g. , usable image data at ' 1 . ) We propose a private learning framework called PEARL ( Private Embeddings and Adversarial Reconstruction Learning ) . In this framework , we have i ) no limitation in learning iterations , and ii ) well-reconstruction capability . Towards those preferable properties , our framework first obtains ( 1 ) informative embedding of sensitive data and ( 2 ) auxiliary information ( e.g. 
, hyperparameters ) useful for training , both in a differentially private manner , then ( 3 ) the generative model is trained implicitly like GANs via the private embedding and auiliary information , where the learning is based on a stochastic procedure that generates data , and ( 4 ) a critic distinguishing between the real and generated data . The overview of PEARL is illustrated in Fig . 1 . As a concrete realization of PEARL , We first identify that the characteristic function ( CF ) representation of data can be sanitized as the private embedding of PEARL . Consequently , it is possible to train deep generative models using an appropriately defined metric measuring the discrepancy between the real ( but sanitized ) and generated data distribution based on the CF without re-using the original data . As will be explained in detail in later Sections , the generative modelling approach using CFs also involves sampling “ frequencies ” from an ad hoc distribution , to project the data to the embedding . It is desirable to optimize the sampling distribution to better represent the data as an embedding , but the naive way of optimizing it would require re-accessing the data via sampling , coming at a cost of privacy budget . Henceforth , we also propose to incorporate a privacy-preserving critic to optimize the sampling strategy , which , through re-weighting , chooses the best representation from a fixed samples of frequencies without extra privacy costs . To this end , we propose the following minimax optimization training objective : inf θ∈Θ sup ω∈Ω k∑ i=1 ω ( ti ) ω0 ( ti ) ∣∣Φ̃Pr ( ti ) − Φ̂Qθ ( ti ) ∣∣2 . ( 1 ) See later parts for notations and details . Theoretically , we show that our proposed objective has properties similar to those that are suited to training GANs , i.e. , continuity and differentiability of the generator ’ s parameters , and continuity in weak topology . 
We also prove the consistency of our privacy-preserving sampling strategy at the asymptotic limit of infinite sampling . Empirical evaluations show that PEARL is able to high-quality synthetic data at reasonable privacy levels . Related works . Traditional methods of synthesizing data are mainly concerned with discrete data or data preprocessed to the discrete form ( Zhang et al . ( 2017 ) ; Qardaji et al . ( 2014 ) ; He et al . ( 2015 ) ; Chen et al . ( 2015 ) ; Cai et al . ( 2021 ) ; Zhang et al . ( 2021 ) ) , whereas we are interested in more general methods involving continuous data . Deep generative models under the DP setting are suitable for this type of tasks ( Takagi et al . ( 2020 ) ; Xie et al . ( 2018 ) ; Torkzadehmahani et al . ( 2019 ) ; Frigerio et al . ( 2019 ) ; Yoon et al . ( 2019 ) ; Chen et al . ( 2020 ) ; Harder et al . ( 2021 ) ) . The private training of deep generative models is usually performed using gradient sanitization methods . An exception is DP-MERF ( Harder et al . ( 2021 ) ) , which is closest to our work . There , random features used to approximate the maximum mean discrepancy ( MMD ) objective are privatized and utilized for training a generator . PEARL , which , as a realization , uses CFs , may be viewed as a generalization of DP-MERF . Additionally , PEARL has several distinctive features which are lacking in DP-MERF . The first lies in the introduction of a privacy-preserving critic , which leads to an improvement of performance . The second is the private selection of the parameter of the sampling distribution , which is also shown to be vital . Moreover , DP-MERF uses non-characteristic kernels when treating tabular data , in contrast to ours , which is characteristic and has guarantees in convergence . We finally note that generative models using CFs but only non-privately have been explored before ( Ansari et al . ( 2020 ) ; Li et al . ( 2020 ) ) . Contributions . 
Our contribution in this paper is three-fold : ( i ) We propose a general framework called PEARL , where , unlike gradient sanitization methods , the generator training process and iteration are unconstrained ; reliance on ad-hoc ( non-private ) hyperparameter tuning is reduced by extracting hyperparameters ( auxiliary information ) privately . ( ii ) We demonstrate a realization of our framework by making use of the characteristic function and an adversarial re-weighting objective . ( iii ) Our proposal has theoretical guarantees of performance , and empirical evaluations show that our approach outperforms competitors at reasonable levels of privacy ( ' 1 ) . 2 PRELIMINARIES . This Section gives a brief review of essential preliminaries about differential privacy , characteristic function and the related notations . 2.1 DIFFERENTIAL PRIVACY . Definition 1 ( ( , δ ) -Differential Privacy ) . Given privacy parameters ≥ 0 and δ ≥ 0 , a randomized mechanism , M : D → R with domain D and rangeR satisfies ( , δ ) -differential privacy ( DP ) if for any two adjacent inputs d , d′ ∈ D and for any subset of outputs S ⊆ R , the following holds : Pr [ M ( d ) ∈ S ] ≤ e · Pr [ M ( d′ ) ∈ S ] + δ . ( 2 ) We next consider concrete ways of sanitizing certain outputs with DP . A typical paradigm of DP is applying the randomized mechanism , M , to a certain deterministic function f : D → R such that the output of f is DP . The noise magnitude added byM is determined by the sensitivity of f , defined as ∆f = supd , d′∈D ‖f ( d ) − f ( d′ ) ‖ , where || · || is a norm function defined on f ’ s output domain . d and d′ are any adjacent pairs of dataset . Laplacian and Gaussian mechanisms are the standard randomized mechanisms . We primarily utilize the Gaussian mechanism in this paper ( Dwork & Roth ( 2014 ) ) : Definition 2 ( Gaussian Mechanism ) . Let f : X → R be an arbitrary function with sensitivity ∆f . 
The Gaussian mechanism Mσ, parameterized by σ, adds noise to the output of f as follows: Mσ(x) = f(x) + N(0, σ²I). (3) One of the most important properties of DP relevant to our work is the post-processing theorem (Dwork & Roth (2014)): Theorem 1 (Post-processing Theorem). Let M : D → R be (ε, δ)-DP and let f : R → R′ be an arbitrary randomized function. Then f ◦ M : D → R′ is (ε, δ)-DP. It ensures that DP-sanitized data can be re-used without further consuming the privacy budget. 2.2 CHARACTERISTIC FUNCTIONS. The characteristic function (CF) is widely utilized in statistics and probability theory, and is perhaps best known for its use in proving the central limit theorem (Williams (1991)). The definition is as follows. Definition 3 (Characteristic Function). Given a random variable X ⊆ R^d, P the probability measure associated with it, and t ∈ R^d, the corresponding characteristic function (CF) is given by Φ_P(t) = E_{x∼P}[e^{it·x}] = ∫_{R^d} e^{it·x} dP. (4) Here, i is the imaginary unit. From the signal-processing point of view, this mathematical operation is equivalent to the Fourier transform, and Φ_P(t) is the Fourier transform at frequency t. Note that in practice we deal with the discrete approximation of CFs. That is, given a dataset with n i.i.d. samples {x_j}_{j=1}^n from P, the empirical CF is written as Φ̂_P(t) = (1/n) Σ_{j=1}^n e^{it·x_j}. We next introduce the characteristic function distance (CFD) (Heathcote (1972); Chwialkowski et al. (2015)): Definition 4 (Characteristic Function Distance). Given two distributions P and Q of random variables residing in R^d, and a sampling distribution ω on t ∈ R^d, the squared characteristic function distance (CFD) between P and Q is computed as: C²(P, Q) = E_{t∼ω}[|Φ_P(t) − Φ_Q(t)|²] = ∫_{R^d} |Φ_P(t) − Φ_Q(t)|² ω(t) dt. (5) Notations. Let us make a short note on the notation before continuing.
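To make Definition 2 concrete, here is a minimal NumPy sketch (our own illustration; the function names and the toy mean query are ours, not from the paper) of applying the Gaussian mechanism to a query with bounded L2 sensitivity:

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, sigma, rng=None):
    """Gaussian mechanism sketch: add N(0, (sensitivity * sigma)^2 I) noise
    to the output of a query with L2 sensitivity `sensitivity`.
    Here sigma acts as a noise multiplier; the resulting (eps, delta)
    guarantee follows from standard Gaussian-mechanism accounting."""
    rng = np.random.default_rng(rng)
    value = np.asarray(value, dtype=float)
    return value + rng.normal(0.0, sensitivity * sigma, size=value.shape)

# Toy query: the mean of n records, each clipped to L2 norm <= 1, so
# replacing one record moves the mean by at most 2/n (replace-one adjacency).
rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 2))
data /= np.maximum(np.linalg.norm(data, axis=1, keepdims=True), 1.0)
true_mean = data.mean(axis=0)
noisy_mean = gaussian_mechanism(true_mean, sensitivity=2.0 / len(data), sigma=1.0, rng=1)
```

By the post-processing theorem, anything computed downstream from `noisy_mean` (e.g., training a generator against it) consumes no additional privacy budget.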
Let k be the number of frequencies t drawn from ω and P be the probability measure of a random variable. We group the CFs associated with P at the different frequencies, (Φ̂_P(t_1), ..., Φ̂_P(t_k))^⊤, more compactly as φ̂_P(x). To make the dependence of φ̂_P(x) on the sampled data explicit, we also use the following notation: φ̂_P(x) = (1/n) Σ_{j=1}^n φ̂_P(x_j). We notice that ‖φ̂_P(x)‖₂ ≡ √(Σ_{m=1}^k |Φ̂_P(t_m)|²) = √(Σ_{m=1}^k |(1/n) Σ_{l=1}^n e^{i t_m·x_l}|²) ≤ √(Σ_{m=1}^k |(1/n) Σ_{l=1}^n 1|²) = √k, where the norm is taken over the (complex) frequency space. With a slight abuse of notation, we abbreviate φ̂_P as φ̂ when there is no ambiguity in the underlying probability measure associated with the CF.
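The empirical CF and a Monte-Carlo estimate of the squared CFD in Eq. (5) can be sketched in a few lines of NumPy (our own illustration; names and toy data are ours):

```python
import numpy as np

def empirical_cf(x, freqs):
    """Empirical CF: Phi_hat_P(t) = (1/n) * sum_j exp(i t . x_j).
    x: (n, d) samples; freqs: (k, d) frequencies t_1, ..., t_k.
    Returns the length-k complex vector (phi_hat in the paper's notation)."""
    return np.exp(1j * x @ freqs.T).mean(axis=0)

def squared_cfd(x, y, freqs):
    """Monte-Carlo estimate of C^2(P, Q) in Eq. (5): the expectation over
    omega is replaced by an average over the k sampled frequencies."""
    diff = empirical_cf(x, freqs) - empirical_cf(y, freqs)
    return float(np.mean(np.abs(diff) ** 2))

rng = np.random.default_rng(0)
freqs = rng.normal(size=(64, 2))                 # t ~ omega, here a Gaussian
x = rng.normal(size=(500, 2))
same = squared_cfd(x, rng.normal(size=(500, 2)), freqs)      # matching laws
far = squared_cfd(x, 3.0 + rng.normal(size=(500, 2)), freqs)  # shifted law
# Matching distributions yield a small estimate; the shifted one a larger value.
```

The bound ‖φ̂‖₂ ≤ √k derived above is what makes the sensitivity of the privatized embedding easy to control.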
In this paper, the authors propose a novel differentially private approach to generating both continuous and discrete-valued synthetic data. The authors use a one-shot approach to providing privacy, first generating a privatized embedding of the sensitive dataset. The privatized embeddings are then iteratively compared against the synthetic samples produced by the generator module using the characteristic function distance. The approach is similar to, and can be considered a generalization of, another recent approach, DP-MERF, and seems to compare favorably in empirical experiments against DP-MERF and DP-GAN (two popular alternative approaches).
Hinge Policy Optimization: Rethinking Policy Improvement and Reinterpreting PPO
1 INTRODUCTION . Reinforcement learning ( RL ) has served as a powerful framework for achieving optimal sequential decision making by directly interacting with the environment and learning from the underlying random process . Policy optimization , as a fundamental design principle of RL algorithms , iteratively searches for an optimal policy by alternating between a policy evaluation step and a policy improvement subroutine . Most of the popular policy optimization approaches , including the policy gradient methods ( Sutton et al. , 1999 ; Mnih et al. , 2016 ; Silver et al. , 2014 ; Lillicrap et al. , 2016 ) , Trust Region Policy Optimization ( Schulman et al. , 2015 ) , and Proximal Policy Optimization ( PPO ) ( Schulman et al. , 2017 ) , aim to achieve policy improvement in terms of the expected total reward via gradient-based parameter updates . Despite the empirical success of the above approaches , these policy improvement schemes inherently lead to the following fundamental issues : ( i ) Global convergence is usually difficult to establish due to the non-convexity of the underlying objective function ( e.g. , total discounted reward ) and the use of gradient methods , even for the tabular cases ; ( ii ) The policy update direction for improvement is tightly coupled with the true discounted state visitation distribution , which is usually intractable to obtain and needs to be addressed via sampling in practice . To address the above issues , we propose to rethink policy improvement in RL from the perspective of state-wise policy improvement , which aims for improvement directly in the policy space through a partial ordering of the policies , as adopted by policy iteration for the optimal control of Markov decision processes ( MDPs ) . We start by identifying two useful sufficient conditions for state-wise policy improvement and thereafter pinpoint that improvement can be achieved based solely on the sign of the advantage function . 
Based on this insight, we propose Hinge Policy Optimization (HPO) by connecting state-wise policy improvement to solving a large-margin classification problem, where we regard the process of policy improvement as training a binary classifier with hinge loss via empirical risk minimization. As policy improvement in HPO is enforced not on the expected total reward but in a state-wise manner, the policy update of HPO is completely decoupled from the state visitation distribution, which is by contrast required by many existing popular policy optimization methods. By leveraging various types of classifiers, the proposed HPO framework opens up a whole new family of policy-based RL algorithms. Interestingly, the popular PPO algorithm with a surrogate clipped objective (PPO-clip) can be shown to be a special case of HPO with a specific type of classifier. Given the policy improvement property of the proposed classification-based scheme, we are able to establish global convergence to an optimal policy for the HPO algorithms. To the best of our knowledge, our analysis provides the first global convergence guarantee for a variant of PPO-clip. This paper is meant to provide a clear picture of the RL algorithms in the new HPO family with global convergence guarantees and thereby introduce a new perspective for reinterpreting the popular PPO-clip algorithm. The main contributions of this paper can be summarized as follows: • We propose HPO, a new policy optimization framework where the policy update is built on state-wise policy improvement via hinge loss. The members of the HPO family share a generic loss function, and their differences lie in the choice of the margin and the classifier. We also show that the widely used PPO-clip algorithm can be viewed as a special case in this family. • This paper is the first to prove state-wise policy improvement and global convergence for a variant of the PPO-clip algorithm.1
Specifically, we first present a variant of the PPO algorithm with an adaptive clipped objective, which can be viewed as an HPO algorithm with adaptive margin, called HPO-AM. We prove convergence to an optimal policy for HPO-AM, and proceed to generalize the global convergence result to other HPO-AM-like algorithms equipped with other classifiers. We also empirically validate the proposed theoretical framework through experiments and thereby corroborate the performance of various HPO algorithms with different classifiers and margins (with the experimental results provided in Appendices E and F). 2 MAIN RESULTS. In this section, we first review the theoretical background on RL and view the PPO-clip update as a classification problem. Then, we establish global convergence for the proposed HPO-AM. 2.1 BACKGROUND. A discounted Markov Decision Process (MDP) is defined by the tuple (S, A, P, r, γ), where S is a finite state space, A is a finite action space, P : S × A × S → [0, 1] is the state transition probability matrix, r : S × A → R is the reward function, and γ ∈ (0, 1) is the discount factor. A stochastic policy π : S → ∆(A) specifies the action distribution based on the current state, where ∆(A) is the probability simplex over A, i.e., ∆(A) = {x ∈ R^|A| : x_0 + ... + x_{|A|−1} = 1, x_i ≥ 0, ∀i = 0, ..., |A| − 1}. Given any policy π, the state value function V^π, the state-action value function Q^π, and the advantage function A^π are defined as follows: V^π(s) := E_{a_t∼π(·|s_t), s_{t+1}∼P(·|s_t, a_t)} [ Σ_{t=0}^∞ γ^t r(s_t, a_t) | s_0 = s ], (1) Q^π(s, a) := E_{a_t∼π(·|s_t), s_{t+1}∼P(·|s_t, a_t)} [ Σ_{t=0}^∞ γ^t r(s_t, a_t) | s_0 = s, a_0 = a ], (2) A^π(s, a) := Q^π(s, a) − V^π(s).
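In the tabular case, the quantities in Eqs. (1)-(3) can be computed exactly by solving the Bellman linear system; a small NumPy sketch with toy numbers of our own:

```python
import numpy as np

gamma = 0.9
P = np.array([[[0.8, 0.2], [0.1, 0.9]],   # P[s, a, s'] = P(s' | s, a)
              [[0.5, 0.5], [0.3, 0.7]]])
r = np.array([[1.0, 0.0],                 # r[s, a]
              [0.0, 2.0]])
pi = np.array([[0.5, 0.5],                # pi[s, a] = pi(a | s)
               [0.5, 0.5]])

# Eq. (1): V^pi solves the linear system V = r_pi + gamma * P_pi V.
r_pi = (pi * r).sum(axis=1)
P_pi = np.einsum('sa,sat->st', pi, P)
V = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)

# Eq. (2): Q^pi(s, a) = r(s, a) + gamma * sum_{s'} P(s' | s, a) V(s').
Q = r + gamma * np.einsum('sat,t->sa', P, V)
# Eq. (3): advantage A^pi = Q^pi - V^pi.
A = Q - V[:, None]
```

In this example, Σ_a π(a|s) A^π(s, a) = 0 holds for every state, the identity invoked below Proposition 2.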
(3) Define d^π_s(s′) := (1 − γ) Σ_{t=0}^∞ γ^t Pr(s_t = s′ | s_0 = s, π) as the normalized discounted state visitation frequency, which represents the probability of visiting state s′ in a trajectory of π, given that s_0 = s. With the above definition, Kakade & Langford (2002) quantified the difference in performance between two policies as follows. Given policies π1 and π2, V^{π1}(s) − V^{π2}(s) = (1/(1 − γ)) Σ_{s′∈S} d^{π1}_s(s′) Σ_{a∈A} π1(a|s′) A^{π2}(s′, a). (4) (Footnote 1: Regarding the convergence of PPO, Liu et al. (2019) prove global convergence in expected total reward for a neural variant of PPO with an adaptive Kullback-Leibler penalty (PPO-KL). Given the salient algorithmic difference between PPO-KL and PPO-clip, to the best of our knowledge, there remains no proof of global convergence to an optimal policy for PPO-clip.) The TRPO algorithm proceeds to maximize the expected value of a surrogate of (4) over the initial state distribution under a KL-divergence constraint. State-wise policy improvement is formalized based on the following partial ordering relation. Definition 1 (Partial ordering over policies). Let π1 and π2 be two policies. Then π1 ≥ π2, read "π1 improves upon π2", if and only if V^{π1}(s) ≥ V^{π2}(s), ∀s ∈ S. Moreover, we say π1 > π2, read "π1 strictly improves upon π2", if and only if π1 ≥ π2 and there exists at least one state s such that V^{π1}(s) > V^{π2}(s). Definition 2 (An optimal policy). A policy π* is said to be an optimal policy if π* ≥ π′ for every policy π′. Moreover, we let V*(s) denote the optimal value of each state s, i.e., V*(s) = V^{π*}(s). We first introduce two sufficient conditions for state-wise policy improvement in Propositions 1-2. These two conditions have been identified by Hu et al. (2020, Section 3.1). Proposition 1.
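The performance-difference identity in Eq. (4) can be checked numerically on a random tabular MDP, using d^{π1}_s(s′) = (1 − γ)[(I − γ P_{π1})^{-1}]_{s,s′} (a sketch of ours, not code from the paper):

```python
import numpy as np

gamma, nS, nA = 0.9, 3, 2
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a, :] = P(. | s, a)
r = rng.normal(size=(nS, nA))
pi1 = rng.dirichlet(np.ones(nA), size=nS)
pi2 = rng.dirichlet(np.ones(nA), size=nS)

def evaluate(pi):
    """Return V^pi, A^pi and the policy transition matrix P_pi."""
    P_pi = np.einsum('sa,sat->st', pi, P)
    V = np.linalg.solve(np.eye(nS) - gamma * P_pi, (pi * r).sum(axis=1))
    Q = r + gamma * np.einsum('sat,t->sa', P, V)
    return V, Q - V[:, None], P_pi

V1, _, P_pi1 = evaluate(pi1)
V2, A2, _ = evaluate(pi2)

# Rows of d are d^{pi1}_s(.), one per start state s.
d = (1 - gamma) * np.linalg.inv(np.eye(nS) - gamma * P_pi1)
rhs = d @ (pi1 * A2).sum(axis=1) / (1 - gamma)   # right-hand side of Eq. (4)
# rhs coincides with V1 - V2 for every start state.
```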
Given policies π1 and π2, π1 improves upon π2 if the following condition holds: Σ_{a∈A} π1(a|s) A^{π2}(s, a) ≥ 0, ∀s ∈ S. (5) Proposition 2. Given policies π1 and π2, π1 improves upon π2 if the following condition holds: (π1(a|s) − π2(a|s)) A^{π2}(s, a) ≥ 0, ∀(s, a) ∈ S × A. (6) Proposition 1 holds for the following reason: since all d^{π1}_s(s′) are non-negative, all summands in (4) are non-negative, and hence π1 improves upon π2. Proposition 2 can be derived directly from Proposition 1 and the fact that Σ_{a∈A} π2(a|s) A^{π2}(s, a) = 0, ∀s ∈ S. Notably, Proposition 2 offers the useful insight that state-wise policy improvement can be achieved by determining the sign of the advantage of each state-action pair (regardless of its magnitude) and adjusting the action probabilities accordingly. In this way, no additional constraints, such as the KL divergence constraint used in TRPO (Schulman et al., 2015), are needed to ensure policy improvement. This also naturally motivates the design of using the signs of the advantage function as labels in determining the direction of the policy update. More specifically, we can draw an analogy between (6) in Proposition 2 and the training of a linear classifier: (i) the state-action pair serves as the feature vector of a training sample; (ii) the sign of A^{π2}(s, a) plays the role of a binary label; (iii) π1(a|s) − π2(a|s) resembles the prediction of a linear classifier. In the next section, we substantiate this insight and present the proposed HPO framework. In the rest of this paper, our analysis relies on the following assumptions: Assumption 1 (Bounded reward). To avoid trivial cases, we assume not all rewards are zero. Since both the state and action spaces are finite, it holds naturally that there exists a positive constant R = sup_{(s,a)∈S×A} |r(s, a)| > 0. Assumption 2 (Tabular policies).
Policies are parameterized by π(a|s) = θ_{s,a}, where θ_s ∈ ∆(A) refers to the vector θ_{s,·} for a fixed state s, and θ ∈ ∆(A)^{|S|}, i.e., θ is subject to θ_{s,a} ≥ 0 and Σ_{a∈A} θ_{s,a} = 1, ∀s ∈ S, ∀a ∈ A. Notations. Throughout this paper, we let ⟨a, b⟩ and a ◦ b denote the inner product and the Hadamard product of two real vectors a, b, respectively.
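To preview the classification analogy, the state-wise condition (6) admits a hinge surrogate of the following form (a minimal sketch of ours; the margin value and function names are illustrative, not the paper's exact HPO objective):

```python
import numpy as np

def hinge_loss_per_pair(pi_new, pi_old, adv, margin=0.05):
    """Hinge loss for the analogy below Proposition 2:
    label = sign(A^{pi_old}(s, a)), score = pi_new(a|s) - pi_old(a|s).
    Zero loss on a pair means condition (6) holds with the given margin."""
    label = np.sign(adv)
    score = pi_new - pi_old
    return np.maximum(0.0, margin - label * score)

# One state with two actions; action 0 is advantageous under pi_old.
pi_old = np.array([0.5, 0.5])
adv = np.array([1.0, -1.0])
pi_new = np.array([0.6, 0.4])        # mass moved toward the advantageous action
improving = hinge_loss_per_pair(pi_new, pi_old, adv)   # zero loss everywhere
stationary = hinge_loss_per_pair(pi_old, pi_old, adv)  # pays the full margin
```

Minimizing this loss over `pi_new` (subject to the simplex constraint of Assumption 2) pushes probability mass toward positive-advantage actions, which is exactly the direction required by (6).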
This paper reinterprets the theory of PPO-clip based on hinge policy optimization. The authors prove the global convergence of PPO by introducing some assumptions. In addition, they generalize the algorithm to a new family of policy-based algorithms by regarding the policy as a generalized classifier.
The paper proposes hinge policy optimization, a new theoretical framework for interpreting policy gradient algorithms as classification problems to be solved with a hinge loss. In this perspective, the sign of the advantage function becomes the label, and the difference in action probabilities between the policy after and before an update becomes the classifier's output. The paper shows the equivalence between such a formulation and the popular PPO-clip objective and provides global convergence guarantees on the blueprint of (Agarwal, 2020). It also proposes a range of policy optimization algorithms, depending on the details of the classification algorithm that is used, and empirically evaluates some of them.
Estimating Instance-dependent Label-noise Transition Matrix using DNNs
In label-noise learning, estimating the transition matrix is a hot topic as the matrix plays an important role in building statistically consistent classifiers. Traditionally, the transition from clean labels to noisy labels (i.e., the clean label transition matrix) has been widely exploited to learn a clean label classifier by employing the noisy data. Motivated by the fact that classifiers mostly output Bayes optimal labels for prediction, in this paper we study directly modeling the transition from Bayes optimal labels to noisy labels (i.e., the Bayes label transition matrix) and learning a classifier to predict Bayes optimal labels. Note that given only noisy data, it is ill-posed to estimate either the clean label transition matrix or the Bayes label transition matrix. But favorably, Bayes optimal labels have less uncertainty compared with clean labels, i.e., the class posteriors of Bayes optimal labels are one-hot vectors while those of clean labels are not. This enables two advantages in estimating the Bayes label transition matrix: (a) we can theoretically recover a set of noisy data with Bayes optimal labels under mild conditions; (b) the feasible solution space is much smaller. By exploiting these advantages, we estimate the Bayes label transition matrix by employing a deep neural network in a parameterized way, leading to better generalization and superior classification performance. 1 INTRODUCTION. The study of classification in the presence of noisy labels has been of interest for three decades (Angluin & Laird, 1988), but has become more and more important in weakly supervised learning (Thekumparampil et al., 2018; Li et al., 2020b; Guo et al., 2018; Xiao et al., 2015; Zhang et al., 2017a; Yang et al., 2021b;a). The main reason behind this is that datasets are becoming bigger and bigger. To improve annotation efficiency, these large-scale datasets are often collected from crowdsourcing platforms (Yan et al.
, 2014), online queries (Blum et al., 2003), and image engines (Li et al., 2017), which suffer from unavoidable label noise (Yao et al., 2020a). Recent research shows that label noise significantly degenerates the performance of deep neural networks, since deep models easily memorize the noisy labels (Zhang et al., 2017a; Yao et al., 2020a). Generally, the algorithms for combating noisy labels can be categorized into statistically inconsistent algorithms and statistically consistent algorithms. The statistically inconsistent algorithms are heuristic, such as selecting possible clean examples to train the classifier (Han et al., 2020; Yao et al., 2020a; Yu et al., 2019; Han et al., 2018b; Malach & Shalev-Shwartz, 2017; Ren et al., 2018; Jiang et al., 2018), re-weighting examples to reduce the effect of noisy labels (Ren et al., 2018), correcting labels (Ma et al., 2018; Kremer et al., 2018; Tanaka et al., 2018; Reed et al., 2015), or adding regularization (Han et al., 2018a; Guo et al., 2018; Veit et al., 2017; Vahdat, 2017; Li et al., 2017; 2020b; Wu et al., 2020). These approaches empirically work well, but there is no theoretical guarantee that the learned classifiers can converge to the optimal ones learned from clean data. To address this limitation, algorithms in the second category aim to design classifier-consistent algorithms (Yu et al., 2017; Zhang & Sabuncu, 2018; Kremer et al., 2018; Liu & Tao, 2016; Northcutt et al., 2017; Scott, 2015; Natarajan et al., 2013; Goldberger & Ben-Reuven, 2017; Patrini et al., 2017; Thekumparampil et al., 2018; Yu et al., 2018; Liu & Guo, 2020; Xu et al., 2019; Xia et al., 2020b), where classifiers learned on noisy data will asymptotically converge to the optimal classifiers defined on the clean domain. The label transition matrix T(x) plays an important role in building statistically consistent algorithms.
Traditionally, the transition matrix T(x) is defined to relate the clean distribution and the noisy distribution, where T(x) = P(Ỹ | Y, X = x), with X denoting the random variable of instances/features, Ỹ the variable for the noisy label, and Y the variable for the clean label. The above matrix is denoted as the clean label transition matrix, which is widely used to learn a clean label classifier by employing the noisy data. The learned clean label classifier is expected to predict a probability distribution over a set of pre-defined classes given an input, i.e., the clean class posterior probability P(Y | X). The clean class posterior probability is the distribution from which clean labels are sampled. However, Bayes optimal labels Y*, i.e., the class labels that maximize the clean class posteriors, Y* | X := argmax_Y P(Y | X), are mostly used as the predicted labels and for computing classification accuracy. Motivated by this, in this paper, we propose to directly model the transition matrix T*(x) that relates the Bayes optimal distribution and the noisy distribution, i.e., T*(x) = P(Ỹ | Y*, X = x), where Y* denotes the variable for the Bayes optimal label. The Bayes optimal label classifier can be learned by exploiting the Bayes label transition matrix directly. Studying the transition between the Bayes optimal distribution and the noisy distribution is advantageous compared with studying the transition between the clean distribution and the noisy distribution. The main reason is that the class posteriors of Bayes optimal labels are one-hot vectors while those of clean labels are not. Two advantages follow from this for better estimating the instance-dependent transition matrix: (a) We can collect a set of examples with theoretically guaranteed Bayes optimal labels out of noisy data.
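To make the distinction concrete, here is a toy numerical illustration of ours: the Bayes optimal label is the argmax of the clean class posterior, and under bounded noise the argmax of the noisy posterior can coincide with it, even though individual clean labels sampled from the posterior remain random:

```python
import numpy as np

clean_posterior = np.array([0.7, 0.2, 0.1])   # P(Y | X = x), toy numbers
bayes_label = int(np.argmax(clean_posterior)) # Y* = argmax_Y P(Y | X = x)

# Bounded instance-dependent noise: each class keeps most of its mass
# (T_x[i, j] plays the role of P(noisy = j | label = i, x) at this x).
T_x = np.array([[0.8, 0.1, 0.1],
                [0.1, 0.8, 0.1],
                [0.1, 0.1, 0.8]])
noisy_posterior = clean_posterior @ T_x       # P(noisy label | X = x)
# With this bounded noise, the noisy argmax recovers the Bayes label,
# which is why Bayes labels can be inferred from noisy data under
# mild conditions while sampled clean labels cannot.
```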
The intrinsic reason that Bayes optimal labels can be inferred from the noisy data while clean labels cannot is that Bayes optimal labels are the labels that maximize the clean class posteriors, while clean labels are sampled from the clean class posteriors. In the presence of label noise, the labels that maximize the noisy class posteriors can be identical to those that maximize the clean class posteriors (the Bayes optimal labels) under mild conditions (Cheng et al., 2020). Therefore, some instances' Bayes optimal labels can be inferred from their noisy class posteriors, while their clean labels are impossible to infer since the clean class posteriors are unobservable, as shown in Figure 1. (b) The feasible solution space of the Bayes label transition matrix is much smaller than that of the clean label transition matrix. This is because Bayes optimal labels have less uncertainty compared with clean labels. The transition matrix defined by Bayes optimal labels and the noisy labels is therefore sparse and can be estimated more efficiently with the same amount of training data. These two advantages naturally motivate us to collect a set of examples with their theoretically guaranteed Bayes optimal labels out of the noisy data to learn to approximate the Bayes label transition matrix T*(x). Due to the high complexity of the instance-dependent matrix T*(x), we simplify its estimation by parameterizing it using a deep neural network. The collected examples, inferred Bayes optimal labels, and their noisy labels serve as data points to optimize the deep neural network to approximate T*(x). Compared with the previous method (Xia et al.
, 2020a), which made assumptions and leveraged hand-crafted priors to approximate the instance-dependent transition matrices, we train a deep neural network to estimate the instance-dependent label transition matrix with a reduced feasible solution space, which achieves lower approximation error, better generalization, and superior classification performance. 2 RELATED WORK. Noise model. Currently, there are several typical label noise models. Specifically, the random classification noise (RCN) model assumes that clean labels flip randomly with a constant rate (Biggio et al., 2011; Manwani & Sastry, 2013; Natarajan et al., 2013). The class-conditional label noise (CCN) model assumes that the flip rate depends on the latent clean class (Patrini et al., 2017; Xia et al., 2019; Ma et al., 2018). The instance-dependent label noise (IDN) model considers the most general case of label noise, where the flip rate depends on the instance/features (Cheng et al., 2020; Xia et al., 2020a; Zhu et al., 2020). Obviously, the IDN model is more realistic and applicable. For example, in real-world datasets, an instance whose features contain less information or are of poor quality may be more prone to mislabeling. Bounded instance-dependent label noise (BIDN) (Cheng et al., 2020) is a reasonable extension of IDN, where the flip rates depend on instances but are upper bounded by a value smaller than 1. However, with only noisy data, it is a non-trivial task to model such realistic noise without any assumption (Xia et al., 2020a). This paper focuses on the challenging BIDN problem setting. Learning clean distributions. It is important to reduce the side effects of noisy labels by inferring clean distributions statistically. The label transition matrix, which denotes the probabilities that clean labels flip into noisy labels, plays an important role in such an inference process.
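Advantage (a) above, collecting examples whose Bayes optimal labels are identifiable from the noisy data, can be sketched as follows. The confidence-threshold criterion and its value are illustrative simplifications of the condition in Cheng et al. (2020), not the paper's exact procedure:

```python
import numpy as np

def collect_distilled_examples(noisy_posteriors, threshold=0.9):
    """Keep instances whose (estimated) noisy class posterior is confident
    enough that its argmax matches the Bayes optimal label under mild
    conditions; that argmax is then used as the inferred Bayes label.
    `threshold` is an illustrative hyper-parameter, not the paper's value."""
    confident = noisy_posteriors.max(axis=1) >= threshold
    inferred = noisy_posteriors.argmax(axis=1)
    return np.flatnonzero(confident), inferred[confident]

# Toy noisy posteriors for 4 instances over 3 classes.
P = np.array([
    [0.95, 0.03, 0.02],  # confident -> collected, inferred Bayes label 0
    [0.40, 0.35, 0.25],  # ambiguous -> discarded
    [0.05, 0.92, 0.03],  # confident -> collected, inferred Bayes label 1
    [0.30, 0.30, 0.40],  # ambiguous -> discarded
])
idx, bayes_labels = collect_distilled_examples(P)
```

The collected triples (instance, inferred Bayes label, observed noisy label) are then the data points on which the transition network is trained.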
We first review prior efforts under the class-dependent condition (Patrini et al., 2017). By exploiting the class-dependent transition matrix T, the training loss on noisy data can be corrected. The transition matrix T can be estimated in many ways, e.g., by introducing the anchor point assumption (Liu & Tao, 2016), by exploiting clustering (Zhu et al., 2021), by minimizing the volume of T (Li et al., 2021), and by using extra clean data (Hendrycks et al., 2018; Shu et al., 2020). To make the estimation more accurate, a slack variable (Xia et al., 2019) or a multiplicative dual T (Yao et al., 2020b) can be introduced to revise the transition matrix. As for efforts on the instance-dependent transition matrix, existing methods rely on various assumptions, e.g., that the noise rate is bounded (Cheng et al., 2020), that the noise only depends on parts of the instance (Xia et al., 2020a), or that additional valuable information is available (Berthon et al., 2020). Although the above advanced methods achieve superior performance empirically, the introduction of strong assumptions limits their applicability in practice. In this paper, we propose to infer the Bayes optimal distribution instead of the clean distribution, as the Bayes optimal distribution is less uncertain and easier to infer under mild conditions. Other approaches. Other methods exist with more sophisticated training frameworks or pipelines, including but not limited to robust loss functions (Zhang & Sabuncu, 2018; Xu et al., 2019; Liu & Guo, 2020), sample selection (Han et al., 2018b; Wang et al., 2019; Lyu & Tsang, 2020), label correction (Tanaka et al., 2018; Zhang et al., 2021; Zheng et al., 2020), (implicit) regularization (Xia et al., 2021; Zhang et al., 2017b; Liu et al., 2020), and semi-supervised learning (Li et al., 2020a; Nguyen et al., 2020).
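For reference, the loss correction enabled by a class-dependent transition matrix (Patrini et al., 2017) can be sketched as a "forward" correction; the toy matrix and posterior below are assumptions for illustration:

```python
import numpy as np

def forward_corrected_ce(clean_posterior, noisy_label, T):
    """Forward loss correction (in the spirit of Patrini et al., 2017):
    push the model's clean posterior through T to get a predicted noisy
    posterior, then take cross-entropy against the observed noisy label."""
    noisy_posterior = clean_posterior @ T  # P(noisy = j | x)
    return -np.log(noisy_posterior[noisy_label])

# Toy class-dependent matrix and one model output (illustrative numbers).
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
f_x = np.array([0.7, 0.3])  # model's estimated clean posterior
loss = forward_corrected_ce(f_x, noisy_label=0, T=T)
```

Minimizing this corrected loss on noisy data yields a classifier that is statistically consistent with the one learned from clean data, which is why accurate estimation of T matters.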
The paper proposes to estimate an instance-dependent noise (IDN) label transition matrix. Instead of modelling the clean label transition as typically done in the previous literature, the authors propose to estimate the Bayes label transition using a DNN, motivated by several advantages including theoretically guaranteed Bayes label collection and a smaller feasible solution space, hence empirically easier to model. Controlled experiments show consistent improvement over other SOTA methods for learning with noisy labels.
SP:54c86a69dc233f3a53e816f66089ab72a7997bac
Estimating Instance-dependent Label-noise Transition Matrix using DNNs
In label-noise learning, estimating the transition matrix is a hot topic, as the matrix plays an important role in building statistically consistent classifiers. Traditionally, the transition from clean labels to noisy labels (i.e., the clean label transition matrix) has been widely exploited to learn a clean label classifier from noisy data. Motivated by the fact that classifiers mostly output Bayes optimal labels for prediction, in this paper, we directly model the transition from Bayes optimal labels to noisy labels (i.e., the Bayes label transition matrix) and learn a classifier to predict Bayes optimal labels. Note that given only noisy data, it is ill-posed to estimate either the clean label transition matrix or the Bayes label transition matrix. But favorably, Bayes optimal labels have less uncertainty than clean labels, i.e., the class posteriors of Bayes optimal labels are one-hot vectors while those of clean labels are not. This enables two advantages for estimating the Bayes label transition matrix: (a) we can theoretically recover a set of noisy data with Bayes optimal labels under mild conditions; (b) the feasible solution space is much smaller. By exploiting these advantages, we estimate the Bayes label transition matrix with a deep neural network in a parameterized way, leading to better generalization and superior classification performance. 1 INTRODUCTION. The study of classification in the presence of noisy labels has been of interest for three decades (Angluin & Laird, 1988), but has become increasingly important in weakly supervised learning (Thekumparampil et al., 2018; Li et al., 2020b; Guo et al., 2018; Xiao et al., 2015; Zhang et al., 2017a; Yang et al., 2021b;a). The main reason behind this is that datasets are becoming ever larger. To improve annotation efficiency, these large-scale datasets are often collected from crowdsourcing platforms (Yan et al.
, 2014), online queries (Blum et al., 2003), and image engines (Li et al., 2017), which suffer from unavoidable label noise (Yao et al., 2020a). Recent research shows that label noise significantly degrades the performance of deep neural networks, since deep models easily memorize the noisy labels (Zhang et al., 2017a; Yao et al., 2020a). Generally, the algorithms for combating noisy labels can be categorized into statistically inconsistent algorithms and statistically consistent algorithms. The statistically inconsistent algorithms are heuristic, such as selecting possible clean examples to train the classifier (Han et al., 2020; Yao et al., 2020a; Yu et al., 2019; Han et al., 2018b; Malach & Shalev-Shwartz, 2017; Ren et al., 2018; Jiang et al., 2018), re-weighting examples to reduce the effect of noisy labels (Ren et al., 2018), correcting labels (Ma et al., 2018; Kremer et al., 2018; Tanaka et al., 2018; Reed et al., 2015), or adding regularization (Han et al., 2018a; Guo et al., 2018; Veit et al., 2017; Vahdat, 2017; Li et al., 2017; 2020b; Wu et al., 2020). These approaches empirically work well, but there is no theoretical guarantee that the learned classifiers converge to the optimal ones learned from clean data. To address this limitation, algorithms in the second category aim to design classifier-consistent algorithms (Yu et al., 2017; Zhang & Sabuncu, 2018; Kremer et al., 2018; Liu & Tao, 2016; Northcutt et al., 2017; Scott, 2015; Natarajan et al., 2013; Goldberger & Ben-Reuven, 2017; Patrini et al., 2017; Thekumparampil et al., 2018; Yu et al., 2018; Liu & Guo, 2020; Xu et al., 2019; Xia et al., 2020b), where classifiers learned on noisy data asymptotically converge to the optimal classifiers defined on the clean domain. The label transition matrix T(x) plays an important role in building statistically consistent algorithms.
The transition matrix plays a vital role in modeling label noise. Current methods focus on modeling the transition from clean labels to noisy labels, while this paper instead models the transition from Bayes optimal labels to noisy labels, since the Bayes optimal labels are what is usually used for prediction. This change does not affect practical use but makes estimating the matrix much easier. Specifically, the paper designs a DNN to estimate the transition matrix. During training, the DNN can be optimized together with the classifier in an end-to-end manner. Extensive experiments are conducted to support the proposed method.
FedMorph: Communication Efficient Federated Learning via Morphing Neural Network
1 INTRODUCTION. Federated Learning (FL) (Li et al., 2021; Bonawitz et al., 2019; Kairouz et al., 2019; Li et al., 2019) ensures data privacy by decoupling training on the local clients' datasets from model aggregation on the global server. The iterative process of local updates optimized on each client and aggregation of those updates by the global server assures the convergence of the global model and its generalization on test datasets. (A). Despite the benefits gained from FL's local-train and global-aggregate paradigm, the server-client communication burden and limited local computing resources hinder the wide-scale deployment of state-of-the-art (SOTA) large neural networks. Recently, lossy model compression techniques such as gradient sparsification (Wangni et al., 2017; Zhou et al., 2021) and quantization (Courbariaux et al., 2016; Hubara et al., 2017) have been studied to relieve the communication issues. Practical as they are, their limitations are also apparent. Gradient sparsification reduces the communication burden by selectively updating local gradients; hence, it only works for client-to-server communication and offers no compression benefit in the downstream direction. Meanwhile, quantization compresses both communication and computation by replacing floating-point operations with low-precision operations, and it works in both directions. However, its compression ratio is at most 32×, obtained by reducing 32-bit floating-point operations to 1-bit operations, not to mention the performance degradation. Absorbing the benefits of the former lossy compression techniques, FedDropout (Caldas et al., 2018) is another attempt to reduce downstream communication. It builds upon the basic idea of dropout (Srivastava et al.
, 2014) by randomly dropping some neurons of the global model at each round, resulting in a smaller sub-network with smaller weight matrices before broadcasting it to local clients. The sub-networks updated by local clients are then mapped back to the global network. Distilling the ‘knowledge’ of an extensive neural network (Hinton et al., 2015) is another branch of work addressing local devices' communication and computation burdens. The main idea is to train a small model whose output stays similar to that of the large one on a sizeable prepared dataset. It has shown significant performance in recent developments (Sanh et al., 2019; Jiao et al., 2019). However, it requires a well pre-trained large model and a vast dataset for training the smaller network, which is not always available in the context of FL. (B). Besides the communication and computation budget, broadcasting a raw neural network may also cause a severe generalization problem. While it is almost common sense in the deep learning community that a well-designed deep neural network outperforms a shallow one, this statement only holds when the training dataset is sufficient to avoid overfitting (Pitt & Myung, 2002; Hinton et al., 2012). In FL settings, each local client holds a dataset that is a small fraction of the whole, which is common when the number of clients is enormous. Hence, a hand-designed deep neural network that works well on the whole dataset can easily overfit the local datasets. The overfitting limits the number of local iterations, a crucial hyper-parameter in FL settings (McMahan et al., 2017), resulting in deteriorated model performance. Kairouz et al. (2019) make a similar statement in their Network Architecture Search (NAS) section: a network architecture well designed for centralized settings may not work well in federated learning settings.
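For concreteness, the gradient sparsification discussed in (A) can be sketched as a top-k selection; k and the toy gradient below are illustrative:

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep only the k largest-magnitude gradient entries; the client
    uploads just (indices, values) instead of the dense vector.  This
    compresses the client-to-server direction only -- the server still
    broadcasts a dense model downstream."""
    idx = np.argsort(np.abs(grad))[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return idx, grad[idx], sparse

g = np.array([0.05, -1.2, 0.3, 0.01, 0.9])
idx, vals, g_sparse = topk_sparsify(g, k=2)
```

The asymmetry is visible here: only the upload shrinks to k entries, which is exactly the limitation FedMorph aims to overcome by shrinking the model itself in both directions.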
This paper addresses the above issues by decoupling the network architecture on the local clients from that on the global server. In detail, the global server maintains a large neural network suitable for a centralized setting and keeps its architecture untouched throughout the entire process. During each communication round, the global server morphs a new sub-network from the maintained architecture and broadcasts it to selected clients for local optimization. The maintained network optimizes its weights during the morphing process by learning from the average-aggregated network updated by the local clients. Meanwhile, the newly morphed sub-network is constrained to keep its knowledge similar to the average-aggregated one while minimizing its number of parameters. By morphing the shared neural network into a smaller size across all communication rounds, we (1) decrease the communication in both upload and download directions; (2) relieve the computation overload of local clients; (3) reduce the generalization error caused by a large network overfitting the local datasets. 2 RELATED WORKS. Neural Architecture Search. Our work is closely related to NAS (Zoph & Le, 2016), whose purpose is to automatically design neural network architectures, such as the number of layers and the number of neurons or filters in each layer. The search strategies include random search, evolutionary methods, Bayesian optimization, and gradient-based methods (Elsken et al., 2019). Since NAS involves a vast search space, recent works focus on improving speed and efficiency via attentive sampling (Wang et al., 2021), untrained schemes (Mellor et al., 2021), and block-wise search with knowledge distillation (Li et al., 2020). Besides, network architecture search in the federated learning scenario was also explored in (Zhu et al., 2021), aiming to reduce both computation and communication.
Model Compression. To reduce the complexity of deep neural networks, model compression was first proposed in (Buciluǎ et al., 2006), followed by tremendous attention in both academia and industry. One of the most straightforward methods to reduce model size is parameter pruning and sharing, removing redundant parameters that are not critical to model performance (Han et al., 2015; Blakeney et al., 2020). Similarly, the informative parameters can also be measured and selected by low-rank factorization (Sainath et al., 2013; Denton et al., 2014). To simultaneously reduce the computation and storage of a deep model, approaches based on transferred or compact convolutional filters were further proposed by designing special structural kernels (Cohen & Welling, 2016; Wu et al., 2016) and achieved benefits in domains with human priors. Besides, other works focus on transferring the learned knowledge of a large teacher network to a small and lightweight student network, which yields the concept of knowledge distillation (Hinton et al., 2015; Mirzadeh et al., 2020; Chen et al., 2021; Gao et al., 2021), suitable for small- or medium-size datasets (Cheng et al., 2018). 3 PROBLEM DEFINITION. 3.1 PRELIMINARY. In this work, we consider the following federated learning optimization problem:

$$\min_{w} \Big\{ F(w) \triangleq \sum_{k=1}^{K} p_k F_k(w) \Big\}, \qquad (1)$$

where $K$ is the number of clients and $p_k$ is the weight of the $k$-th client such that $p_k \geq 0$ and $\sum_{k=1}^{K} p_k = 1$. Suppose the $k$-th client holds $n_k$ training data points $x_{k,1}, x_{k,2}, \cdots, x_{k,n_k}$. The local objective $F_k(\cdot)$ is defined by

$$F_k(w) \triangleq \frac{1}{n_k} \sum_{j=1}^{n_k} \ell(w; x_{k,j}), \qquad (2)$$

where $\ell(\cdot\,;\cdot)$ is a user-specified loss function. Specifically, we consider a $C$-class classification problem defined over a compact space $\mathcal{X}$ and a label space $\mathcal{Y} = [C]$, where $[C] = \{1, \ldots, C\}$. A data point $\{x, y\}$ is a random sample over $\mathcal{X} \times \mathcal{Y}$.
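The weighted objective in Eq. (1), with $p_k$ proportional to local dataset sizes, corresponds to the familiar weighted aggregation step; a minimal sketch with toy 1-D "models":

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Aggregate client models with p_k = n_k / n, matching the weights
    in the federated objective F(w) = sum_k p_k F_k(w)."""
    sizes = np.asarray(client_sizes, dtype=float)
    p = sizes / sizes.sum()
    return sum(pk * w for pk, w in zip(p, client_weights))

# Three toy clients with 2-parameter "models" and different dataset sizes.
ws = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
w_global = fedavg_aggregate(ws, client_sizes=[10, 30, 60])
```

FedMorph keeps this aggregation step but, instead of broadcasting the aggregated model directly, distills it into a smaller morphed sub-network first.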
A function $f_\theta : \mathcal{X} \to S$ maps $x$ to the probability simplex $S$, where $S = \{ z \mid \sum_{c=1}^{C} z_c = 1,\ z_c \geq 0,\ \forall c \in [C] \}$ and $z$ is a $C$-dimensional column vector. $f_\theta$ is parameterized by the hypothesis class $w$, i.e., the weights of the neural network $\theta$. We define the loss function $\ell(w, x)$ with the widely used cross-entropy loss as $\ell(w, x) = \mathbb{E}_y[\log f_\theta(w, x)]$. 3.2 ALGORITHM DESCRIPTION. Definition 1 (Morphing Set). Given a neural network $\theta$ with parameter weights $w$, its Morphing Set $(\Theta, W)$ is defined as the set that contains every pair of a sub-network of $\theta$ and the corresponding weights. We define the concept of a Morphing Set for convenience of explanation. Before describing the proposed algorithm, we define $\theta_o$ and $w_o$ as the server-maintained neural network and its parameter weights, and denote $(\Theta_o, W_o)$ as the Morphing Set of $\theta_o$. Here, we describe one round (say the $t$-th) of the proposed algorithm. First, the central server broadcasts the latest model $(\theta_t, w_t)$ to the selected clients (say $K$ of them). Second, every client (say the $k$-th) begins with $w^k_{t,e=0} = w_t$ and then performs $E \geq 1$ local updates with randomly selected batch samples $\xi^k_{t,e}$ as follows:

$$w^k_{t,e+1} \leftarrow w^k_{t,e} - \eta_{t,e} \nabla F_k(w^k_{t,e}, \xi^k_{t,e}). \qquad (3)$$

Third, unlike Federated Averaging (FedAvg) (McMahan et al., 2017) or other conventional algorithms that let $\tilde{w}_{t+1} = \frac{1}{|K|}\sum_{k \in K} w^k_{t,E}$ be the model weights for the next round, our method, in order to compress the communication, morphs a small neural network from $\theta_o$ by minimizing the performance divergence of the morphed model $w_{t+1}$ from $\tilde{w}_{t+1}$:

$$(\theta_{t+1}, w_{t+1}) = \arg\min_{(\theta, w) \in (\Theta_o, W_o)} L_t(w; \tilde{w}_{t+1}) = \arg\min_{(\theta, w) \in (\Theta_o, W_o)} \sum_{x \in X_v} \mathrm{JS}\big(f_\theta(w, x)\,\|\,f_{\theta_t}(\tilde{w}_{t+1}, x)\big), \qquad (4)$$

where $\tilde{w}_{t+1} = \frac{1}{|K|}\sum_{k \in K} w^k_{t,E}$ follows the FedAvg algorithm as the average-aggregated value of the local clients' updated parameter weights.
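The JS divergence used in Eq. (4) is the standard Jensen-Shannon divergence; a minimal self-contained implementation (not the paper's code, and with an assumed smoothing constant `eps`):

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two categorical distributions:
    JS(p || q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), with m = (p + q) / 2.
    Symmetric, non-negative, and zero iff p == q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.9, 0.1])
same = js_divergence(p, p)                  # identical outputs -> 0
diff = js_divergence(p, np.array([0.1, 0.9]))
```

Its symmetry is why it suits Eq. (4): neither the morphed sub-network nor the aggregated network is privileged as the "reference" distribution.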
$(\theta, w) \in (\Theta_o, W_o)$ denotes the newly morphed network and its weights, a sub-network of the server-maintained one, and $f_\theta$ is the corresponding mapping function defined on the morphed neural network $\theta$. The Jensen-Shannon (JS) divergence is a well-known metric for symmetrically measuring the distance between two probability distributions. We apply the JS divergence as the loss function to minimize the divergence between the outputs of the morphed sub-network and the average-aggregated network on the validation compact space. Here, we use the finite sum over the validation dataset $X_v$ (without label information) to evaluate the performance divergence. Finally, the newly morphed sub-network with updated weights $(\theta_{t+1}, w_{t+1})$ is broadcast to the selected clients for the next round of local optimization. The optimization objective is to retain as much knowledge as possible from the aggregated model of the previous round while keeping the morphed network as small as possible. Therefore, we treat the network architecture as a regularizer and reformulate Equation (4) as:

$$(\theta_{t+1}, w_{t+1}) = \arg\min_{(\theta, w) \in (\Theta_o, W_o)} \big\{ L_t(w; \tilde{w}_{t+1}) + \lambda L_c(\theta) \big\}, \qquad (5)$$

where $L_c(\theta)$ is a regularizer measuring constraints on the number of neural network parameters (PARAMs), the floating-point operations (FLOPs), or any other constraint as required, and $\lambda$ is an adjustable, non-trained variable balancing the two losses.
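The trade-off in Eq. (5) can be illustrated by scoring candidate sub-networks with $L_t + \lambda L_c$; the candidate set, divergence values, normalization, and $\lambda$ below are hypothetical, not the paper's search procedure:

```python
def select_subnetwork(candidates, lam):
    """Pick the argmin of L = L_t + lam * L_c over candidate sub-networks,
    where L_c is taken here as a parameter count normalized by the largest
    candidate.  `candidates` maps name -> (js_to_aggregated_model, n_params)."""
    max_params = max(n for _, n in candidates.values())

    def score(item):
        js, n_params = item
        return js + lam * n_params / max_params

    return min(candidates, key=lambda name: score(candidates[name]))

candidates = {
    "full": (0.00, 1_000_000),  # perfect knowledge retention, no compression
    "half": (0.02, 500_000),    # slight divergence, half the parameters
    "tiny": (0.30, 50_000),     # too much knowledge lost
}
best = select_subnetwork(candidates, lam=0.1)
```

A larger $\lambda$ pushes the selection toward smaller sub-networks at the cost of higher divergence from the aggregated model, which is exactly the balance Eq. (5) expresses.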
The paper proposes FedMorph to address the communication and computation burdens in federated learning. At every round, small morphed sub-networks are sent to clients for local training. The server updates the global model by distilling from the aggregated morphed sub-networks.
SP:47cd92b480c1b66cd5c559da3909188d43e0db87
FedMorph: Communication Efficient Federated Learning via Morphing Neural Network
1 INTRODUCTION . Federated Learning ( FL ) ( Li et al. , 2021 ; Bonawitz et al. , 2019 ; Kairouz et al. , 2019 ; Li et al. , 2019 ) ensures data privacy by decoupling the training on the dataset in local clients and model aggregation in the global server . The iterative process of the local updates optimized on each client and aggregating the updates by the global server assures the convergence of the global model and the excellent generalization on test datasets . ( A ) . Despite the benefits gained by the paradigm of local-train and global-aggregation of FL , the server-clients communication burden and limited local computing resources suspend the deployment of the State-of-The-Art ( SOTA ) large neural networks on a wide scale . Recently the lossy model compression techniques , such as gradient sparsification ( Wangni et al. , 2017 ; Zhou et al. , 2021 ) and quantization ( Courbariaux et al. , 2016 ; Hubara et al. , 2017 ) , have been studied to relieve the communication issues . Practical as they announced , but the limitations are also apparent . The gradient sparsification technique reduces the communication burden by selectively updating local gradients . Hence , they only work for client-to-server communication and have no compression benefits in the downstream direction . Meanwhile , the quantization technique compresses the communication and computation by changing the floating operations into low precision operations , and it works in both directions . However , its compression ability is at most 32× by reducing 32-bits floating operations to the 1-bit operations , not to mention the performance degradation . Absorbing the benefits from the former lossy compression techniques , FedDropout ( Caldas et al. , 2018 ) is another trial to reduce the downstream direction communication . It builds upon the basic idea of dropout ( Srivastava et al. 
, 2014 ) by randomly dropping some neurons of the global model at each round , resulting in a smaller sub-network with smaller weight matrices before broadcasting it to local clients . The updated sub-networks by local clients will map back to the global network . Distillation of the ‘ knowledge ’ from an extensive neural network ( Hinton et al. , 2015 ) is another branch to solve local devices ’ communication and computation burdens . The main idea is to train a small model to keep the output similar to the large one on a sizeable prepared dataset . It has significant performance on the recent developments ( Sanh et al. , 2019 ) ( Jiao et al. , 2019 ) . However , it requires a well pre-trained large model and a vast dataset for training the smaller network , which is not always satisfactory in the context of FL . ( B ) . Besides considering communication and computation budget , broadcasting a raw neural network may also cause a severe generalization problem . While it is almost common sense in the deep learning community that training a well-designed deep neural network performs better than a shallow one , the statement only establishes when the training dataset is sufficient to avoid overfitting ( Pitt & Myung , 2002 ; Hinton et al. , 2012 ) . In FL settings , each local client holds a dataset in a small ratio of the whole , which is universal when the number of clients is enormous . Hence , human labor ’ s well-designed deep neural network works well on the whole dataset , nevertheless easily overfit the local datasets . The overfitting will impede the number of local iterations , which is a crucial hyper-parameter in FL settings ( McMahan et al. , 2017 ) , resulting in a deteriorated model performance . Kairouz et al . ( 2019 ) has a similar statement in their Network Architecture Search ( NAS ) section that a well-designed network architecture in centralized settings may not work well in federated learning settings . 
This paper addresses the above issues by decoupling the network architectures on the local clients from that on the global server. In detail, the global server maintains a large neural network suitable for a centralized setting and keeps this architecture untouched throughout the entire process. During each communication round, the global server morphs a new sub-network from the maintained architecture and broadcasts it to the selected clients for local optimization. In the morphing process, the maintained network optimizes its weights by learning from the average of the networks updated by the local clients. Meanwhile, a newly morphed sub-network is constrained to keep its knowledge close to that of the average aggregated one while minimizing its number of parameters. By morphing the shared neural network into a smaller one at every communication round, we (1) decrease the communication in both the upload and download directions; (2) relieve the computation overload of local clients; and (3) reduce the generalization error caused by overfitting a large network on small local datasets. 2 RELATED WORKS . Neural Architecture Search Our work is closely related to NAS (Zoph & Le, 2016), whose purpose is to automatically design neural network architectures, e.g., the number of layers and the number of neurons or filters in each layer. Search strategies include random search, evolutionary methods, Bayesian optimization, and gradient-based methods (Elsken et al., 2019). Since NAS involves a vast search space, recent works focus on improving speed and efficiency via attentive sampling (Wang et al., 2021), untrained schemes (Mellor et al., 2021), and block-wise search with knowledge distillation (Li et al., 2020). Besides, network architecture search in the federated learning scenario was also explored in (Zhu et al., 2021), aiming to reduce both computation and communication.
Model Compression To reduce the complexity of deep neural networks, model compression was first proposed in (Buciluǎ et al., 2006) and has since attracted tremendous attention in both academia and industry. One of the most straightforward methods to reduce model size is parameter pruning and sharing, which removes redundant parameters that are not critical to model performance (Han et al., 2015; Blakeney et al., 2020). Similarly, the informative parameters can be identified and selected by low-rank factorization (Sainath et al., 2013; Denton et al., 2014). To simultaneously reduce the computation and storage of a deep model, approaches based on transferred or compact convolutional filters were further proposed by designing special structural kernels (Cohen & Welling, 2016; Wu et al., 2016), achieving benefits in domains with human priors. Besides, other works focus on transferring the learned knowledge of a large teacher network to a small and lightweight student network, yielding the concept of knowledge distillation (Hinton et al., 2015; Mirzadeh et al., 2020; Chen et al., 2021; Gao et al., 2021), which is suitable for small- or medium-size datasets (Cheng et al., 2018). 3 PROBLEM DEFINITION . 3.1 PRELIMINARY . In this work, we consider the following federated learning optimization problem:

$$\min_w \Big\{ F(w) \triangleq \sum_{k=1}^{K} p_k F_k(w) \Big\}, \qquad (1)$$

where $K$ is the number of clients and $p_k$ is the weight of the $k$-th client, with $p_k \ge 0$ and $\sum_{k=1}^{K} p_k = 1$. Suppose the $k$-th client holds $n_k$ training samples $x_{k,1}, x_{k,2}, \dots, x_{k,n_k}$. The local objective $F_k(\cdot)$ is defined by

$$F_k(w) \triangleq \frac{1}{n_k} \sum_{j=1}^{n_k} \ell(w; x_{k,j}), \qquad (2)$$

where $\ell(\cdot\,;\cdot)$ is a user-specified loss function. Specifically, we consider a $C$-class classification problem defined over a compact space $\mathcal{X}$ and a label space $\mathcal{Y} = [C]$, where $[C] = \{1, \dots, C\}$. A data point $\{x, y\}$ is a random sample over $\mathcal{X} \times \mathcal{Y}$.
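As a concrete reading of Eqs. (1)-(2), the global objective is a data-size-weighted average of per-client mean losses. A minimal sketch with a scalar toy model and an illustrative squared loss (not the paper's cross-entropy):

```python
def global_objective(w, client_data, loss):
    """Evaluate F(w) = sum_k p_k F_k(w) of Eq. (1), with p_k = n_k / n
    and F_k(w) the mean loss over client k's n_k samples, Eq. (2)."""
    n_total = sum(len(xs) for xs in client_data)
    value = 0.0
    for xs in client_data:
        p_k = len(xs) / n_total          # client weight proportional to data size
        f_k = sum(loss(w, x) for x in xs) / len(xs)  # local objective F_k(w)
        value += p_k * f_k
    return value

# Toy check: two clients with n_1 = 2 and n_2 = 1 samples.
squared = lambda w, x: (w - x) ** 2
clients = [[0.0, 2.0], [4.0]]
```

With `w = 1.0` this gives `(2/3) * 1 + (1/3) * 9 = 11/3`, showing how the larger client dominates the objective.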
A function $f_\theta : \mathcal{X} \to S$ maps $x$ to the probability simplex $S = \{ z \mid \sum_{c=1}^{C} z_c = 1,\ z_c \ge 0,\ \forall c \in [C] \}$, where $z$ is a $C$-dimensional column vector. $f_\theta$ is parameterized by $w$, the weights of the neural network $\theta$. We instantiate the loss $\ell(w, x)$ with the widely used cross-entropy loss, $\ell(w, x) = -\mathbb{E}_y[\log f_\theta(w, x)]$. 3.2 ALGORITHM DESCRIPTION . Definition 1. Morphing Set: Given a neural network $\theta$ with parameter weights $w$, its Morphing Set $(\Theta, W)$ is defined as the set containing every pair of a sub-network of $\theta$ and the corresponding weights. We define the Morphing Set for convenience of exposition. Before describing the proposed algorithm, we denote by $\theta_o$ and $w_o$ the neural network maintained by the server and its parameter weights, and by $(\Theta_o, W_o)$ the Morphing Set of $\theta_o$. We now describe one round (say the $t$-th) of the proposed algorithm. First, the central server broadcasts the latest model $(\theta_t, w_t)$ to the selected clients (a set $\mathcal{K}$). Second, every selected client $k$ starts from $w^k_{t,0} = w_t$ and performs $E \ge 1$ local updates on randomly selected batches $\xi^k_{t,e}$:

$$w^k_{t,e+1} \leftarrow w^k_{t,e} - \eta_{t,e} \nabla F_k(w^k_{t,e}, \xi^k_{t,e}). \qquad (3)$$

Third, unlike Federated Averaging (FedAvg) (McMahan et al., 2017) and other conventional algorithms that take $\tilde{w}_{t+1} = \frac{1}{|\mathcal{K}|} \sum_{k \in \mathcal{K}} w^k_{t,E}$ as the model weights for the next round, our method, in order to compress the communication, morphs a small neural network from $\theta_o$ by minimizing the performance divergence between the morphed model $w_{t+1}$ and $\tilde{w}_{t+1}$:

$$(\theta_{t+1}, w_{t+1}) = \arg\min_{(\theta, w) \in (\Theta_o, W_o)} L_t(w; \tilde{w}_{t+1}) = \arg\min_{(\theta, w) \in (\Theta_o, W_o)} \sum_{x \in X_v} \mathrm{JS}\big( f_\theta(w, x) \,\|\, f_{\theta_t}(\tilde{w}_{t+1}, x) \big), \qquad (4)$$

where $\tilde{w}_{t+1} = \frac{1}{|\mathcal{K}|} \sum_{k \in \mathcal{K}} w^k_{t,E}$ follows the FedAvg algorithm as the average aggregated value of the clients' updated parameter weights.
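The two ingredients of this round — the FedAvg aggregate $\tilde{w}_{t+1}$ and the JS divergence used in Eq. (4) — can be sketched on plain Python lists (a toy sketch, not the paper's implementation):

```python
from math import log

def js_divergence(p, q):
    """Symmetric Jensen-Shannon divergence between two probability vectors:
    JS(p||q) = 0.5*KL(p||m) + 0.5*KL(q||m) with m the midpoint distribution."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    def kl(u, v):
        return sum(ui * log(ui / vi) for ui, vi in zip(u, v) if ui > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def fedavg(client_weights):
    """The aggregate w~_{t+1}: coordinate-wise average of the selected
    clients' locally updated weight vectors."""
    k = len(client_weights)
    return [sum(ws) / k for ws in zip(*client_weights)]
```

JS is zero exactly when the two output distributions coincide and is bounded by log 2 (natural log), which makes it a convenient matching loss between the morphed sub-network and the aggregated model.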
$(\theta, w) \in (\Theta_o, W_o)$ denotes the newly morphed network and its weights, a sub-network of the server-maintained one, and $f_\theta$ is the mapping function defined by the morphed network $\theta$. The Jensen-Shannon (JS) divergence is a well-known symmetric measure of the distance between two probability distributions. We use it as the loss for matching the outputs of the morphed sub-network and the average aggregated network over the validation space; in practice we evaluate the divergence as a finite sum over an unlabeled validation dataset $X_v$. Finally, the newly morphed sub-network with updated weights $(\theta_{t+1}, w_{t+1})$ is broadcast to the selected clients for the next round of local optimization. The objective is to preserve as much knowledge as possible from the aggregated model of the previous round while keeping the morphed network as small as possible. We therefore treat the network architecture as a regularizer and reformulate equation (4) as

$$(\theta_{t+1}, w_{t+1}) = \arg\min_{(\theta, w) \in (\Theta_o, W_o)} \big\{ L_t(w; \tilde{w}_{t+1}) + \lambda L_c(\theta) \big\}, \qquad (5)$$

where $L_c(\theta)$ is a regularizer measuring constraints on the number of network parameters (PARAMs), the number of floating-point operations (FLOPs), or any other constraint as required, and $\lambda$ is a non-trained hyper-parameter balancing the two losses.
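A minimal sketch of the selection in Eq. (5), with the morphing set abstracted as a list of candidates carrying precomputed outputs on $X_v$ and a raw parameter count as $L_c$ (the `SubNet` container and helper names are our illustrative stand-ins, not the paper's API):

```python
from math import log

def js(p, q):
    """Jensen-Shannon divergence between two probability vectors."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    kl = lambda u, v: sum(ui * log(ui / vi) for ui, vi in zip(u, v) if ui > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

class SubNet:
    """Stand-in for a morphed candidate: its softmax outputs on the
    unlabeled validation set X_v, plus its parameter count."""
    def __init__(self, outputs, num_params):
        self.outputs = outputs
        self.num_params = num_params

def morph(candidates, aggregated_outputs, lam):
    """Select argmin of L_t(w; w~_{t+1}) + lam * L_c(theta) over the
    candidate morphing set, with L_c taken here as the parameter count."""
    def objective(net):
        divergence = sum(js(p, q)
                         for p, q in zip(net.outputs, aggregated_outputs))
        return divergence + lam * net.num_params
    return min(candidates, key=objective)
```

With $\lambda = 0$ the selection reduces to pure knowledge matching (Eq. 4); increasing $\lambda$ trades fidelity to the aggregated model for a smaller broadcast network.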
The nature of FL workloads poses three evident systems and machine-learning challenges: communication overhead from broadcasting models to clients and sending locally-updated models back to the server; compute overhead, since clients may be constrained or battery-powered devices; and overfitting, due to the asymmetric distribution of data across clients in terms of labels and number of training examples. The proposed work (FedMorph) addresses these three challenges by morphing/extracting sub-models from the global model and dispatching them to the clients for local training. The morphed sub-networks are then aggregated into the global model via distillation. An FL setup using FedMorph demonstrates much reduced overfitting.
SP:47cd92b480c1b66cd5c559da3909188d43e0db87
Generalization to Out-of-Distribution transformations
1 INTRODUCTION . Humans have a unique ability to generalize beyond the scope of prior experience ( Chollet , 2019 ; Lake et al. , 2017 ; Marcus , 2001 ) , while artificial agents struggle to apply knowledge to distributions outside the convex hull of their training data ( Santoro et al. , 2018 ; Lake & Baroni , 2018 ) . One way humans seem to achieve such generalization is by learning a set of primitive abstract structures , like the 1D ordinal scale ( Summerfield et al. , 2020 ) and grid-like representations ( Hafting et al. , 2005 ) . These structures can also be thought of as symmetry functions : transformations that are in some way invariant to the specific value of their arguments . As a concrete example , we can imagine moving any object around in space , regardless of its shape , color or size . A fundamental question is how can such symmetry functions be learned ? We hypothesize that during development , humans learn a set of canonical transformations - e.g . the translation , rotation and change in size of objects - that are grounded in the sensorimotor system ( Barsalou , 2008 ) , and learned as a consequence of predicting the sensory results of primitive actions ( Battaglia et al. , 2013 ) . The abilities to translate , rotate or scale arbitrary objects then become our first abstract affordances ( Gibson , 2014 ) . Indeed , infants that spend more time playing with blocks are better at abstract mental rotation tasks ( Schwarzer et al. , 2013 ) . Our aim is to model part of this process by presenting artificial neural networks ( LeCun et al. , 1995 ) with 2-dimensional shapes , and training them to predict the effect of translation , rotation or scaling of that shape in pixel space . This is analogous to predicting tactile or visual signals resulting from a simple movement or saccade ( Rao & Ballard , 1999 ; Wolpert et al. , 1995 ) . We do not explicitly model motor actions , but rather transform the image offline , and feed the result back to the model . 
We then test the extent to which such predictions generalize OOD along dimensions such as shape, size, location and time. Evidence of OOD generalization would suggest that the network has learned a symmetry function. We assume that, in order to learn symmetry functions, we must introduce principled inductive biases. The first is convolution. Second, to constrain the network to learn a primitive function that can apply to any shape, we assume it requires exposure to a sufficiently diverse set of examples. We therefore operationalize and vary 'diversity' as the number of distinct shapes present in the training set. Since the translational invariance built into convolution is well aligned with the task of translation, a fully convolutional autoencoder with high training-set diversity unsurprisingly achieves near-perfect OOD generalization for translation to unseen shapes in larger grids and new locations (Figure 1). However, these two components were not sufficient for OOD generalization of rotation (Figure 2) and scaling. To remedy this we introduce two more principled components. The first is the effect of 'iteration' during training (output fed back as input iteratively), based on the idea that sequential applications of the same transformation should maintain the identity of an object (i.e., object permanence (Piaget, 2006)). Second, motivated by the strong OOD generalization of translation depicted in Figure 1, and by the fact that translation in log-polar space is equivalent to rotation and scaling in cartesian space (Figure 1 of (Esteves et al., 2017; Tootell et al., 1982)), we hypothesize that performing convolutions in log-polar space would help OOD generalization of rotation and scaling. We summarize the following contributions: • We build a flexible data generator that can produce and transform a large variety of simple but structured images - irregular polygons with variable size, location, and complexity.
• We show that iteration can conserve the shape far past the time horizon seen during training and, together with convolution and high diversity, is necessary for better OOD generalization of rotation and scaling transformations. • We find an interesting tradeoff between diversity and iteration, where more of one can partially compensate for less of the other to produce better OOD generalization. • We propose POLARAE, a fully convolutional autoencoder built using concepts from polar transformer networks (Esteves et al., 2017), which exploits all four components and demonstrates better OOD generalization than a standard autoencoder and a variational autoencoder in cartesian space. 2 RELATED WORK . 2.1 PSYCHOLOGY . Humans can perform mental transformations OOD (i.e., on unseen shapes). For example, humans can mentally translate or rotate shapes at a steady rate (Shepard & Metzler, 1971) or scale abstract distances (Trope & Liberman, 2010). There are also known mechanisms that might make these transformations general. For example, there is a known impact of diversity on the generalization of properties in psychology, namely the diversity effect (Osherson et al., 1990). Iteration is also a plausible mechanism in humans - psychological data on mental rotation suggest that we transform objects iteratively, since larger angles of rotation elicit longer reaction times (Shepard & Metzler, 1971). There is also evidence that discrete temporal context updating by recurrent thalamocortical loops serves predictive learning in the brain (O'Reilly et al., 2014). Basically, iterated operations achieve generality since they are time-invariant. Finally, humans have built-in architectural transforms, such as the transformation of retinal images into a log-polar coordinate system (Maiello et al., 2020). 2.2 MACHINE LEARNING .
Several works demonstrate that high diversity is necessary for OOD generalization. Xu et al. (2020) provided theoretical guarantees that ReLU MLP networks extrapolate well with linear functions and sufficient diversity in their training set. Sufficient diversity of input is also required for systematic generalization of neural-network-based reinforcement learning agents (Hill et al., 2019). Madan et al. (2021) showed that data diversity improves OOD category-viewpoint classification but did not focus on the generative aspect using CNNs. There have also been works collecting large-scale controlled synthetic datasets (Borji et al., 2016; Gross et al., 2010; Qiu et al., 2017) to facilitate learning invariance to different types of transformations in deep neural networks. Following a similar motivation, we want CNNs to be able to learn such transformations and to generate them on OOD examples. The idea of iterative training has also been explored in this domain. Using an iterative training technique similar to ours, generative adversarial networks learned 3D rotations (but not scaling) (Galama & Mensink, 2019). However, there was limited assessment of how different amounts of iteration during training impacted extrapolation in time. Generally, predicting past the number of iterations seen during training has required building in a conservation law of some sort (Cranmer et al., 2020; Greydanus et al., 2019). Kumar et al. (2020) showed that gradual self-training at each iteration helps in domain adaptation. In a different context, Kuchaiev & Ginsburg (2017) proposed iterative output re-feeding to train deep autoencoders for improved collaborative filtering. Recently, the idea that iteration can achieve OOD generalization has been put forth in the context of problem solving (Schwarzschild et al., 2021). Kim et al.
(2020) explored the effectiveness of log-polar space in achieving rotation invariance but was limited to classification and did not study other types of transformations such as scaling. 3 METHOD . 3.1 DATA GENERATOR . All training stimuli were shapes contained within a 64x64 pixel grid. We constructed irregular N-sided polygonal shapes by first sampling N angular positions between 0 and 2π, and then sampling, at each of these angles, a radial distance from a centroid (x_center, y_center) uniformly between 0 and a scale parameter r. This produced a set of vertices; pixels within the convexity of the vertices were set to 1, and pixels outside were set to 0. There was also the option to produce 'hollow' shapes, in which an interior cut-out of the shape was set to 0, in order to produce a test set of shapes with a different distribution from the training set. This data generator was therefore capable of producing a combinatorially large set of possible shapes. Our procedure for shape generation is most similar to Attneave forms (Attneave & Arnoult, 1956) and also bears relation to the method of Fourier descriptors (Zhang & Lu, 2005), but was selected for its computational speed and interpretable manipulation of shape parameters. Each shape was used as the input to a neural network and was transformed in one of the following ways to generate the target for training: for translation, shift 2 pixels to the right; for rotation, rotate π/25 radians clockwise; for scaling, increase the radial length of the vertices by 0.1 (Figure 3). These transformations were hard-coded but meant to represent a set of innate primitive actions. 3.2 MODEL ARCHITECTURE . We first describe the three baseline models, followed by the proposed model. The first model was a fully convolutional autoencoder.
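The shape-generation procedure of Section 3.1 can be sketched as follows. Note this simplified sketch rasterizes the star-shaped polygon given by the angle-sorted vertices via ray casting, whereas the paper fills the convex hull of the vertices; all names are illustrative:

```python
import random
from math import sin, cos, pi

def make_shape(n_sides, r, center=(32.0, 32.0), rng=random):
    """Sample an irregular polygon: n angular positions in [0, 2*pi),
    sorted, each paired with a radial distance drawn uniformly in [0, r)."""
    angles = sorted(rng.uniform(0, 2 * pi) for _ in range(n_sides))
    return [(center[0] + rng.uniform(0, r) * cos(a),
             center[1] + rng.uniform(0, r) * sin(a)) for a in angles]

def point_in_polygon(px, py, verts):
    """Standard ray-casting inside test, used to set interior pixels to 1."""
    inside, j = False, len(verts) - 1
    for i in range(len(verts)):
        (xi, yi), (xj, yj) = verts[i], verts[j]
        if (yi > py) != (yj > py) and \
           px < xi + (py - yi) * (xj - xi) / (yj - yi):
            inside = not inside
        j = i
    return inside

def rasterize(verts, size=64):
    """Binary size-by-size grid: 1 inside the polygon, 0 outside."""
    return [[1 if point_in_polygon(x, y, verts) else 0
             for x in range(size)] for y in range(size)]
```

The hard-coded targets then amount to applying a fixed transform (shift, π/25 rotation, or radial scaling) to the vertices before re-rasterizing.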
The encoder consisted of 3 convolutional layers, the first with 16 3x3 kernels (stride=2, padding=1), the second with 32 3x3 kernels (stride=2, padding=1), and the last with 64 7x7 kernels (stride=1). The decoder had 3 layers (padding=1, output padding=1) that inverted these operations with transposed convolution layers that were mirror images of the encoder layers. This produced an output with the same dimensions as the input image. All layers were followed by rectified linear (ReLU) activations. Being fully convolutional, the model could accept any input grid size. All weights were initialized using Xavier uniform initialization. We use AE to denote this model. The second model was a fully convolutional variational autoencoder (Kingma & Welling, 2013). The encoder and decoder architectures were similar to AE. We use VAE to denote this model. The third model was a β-VAE (Higgins et al., 2016). We used β = 4 since that value learned a good disentangled representation of the data generative factors in the original work. The proposed model was a combination of polar transformer networks and the fully convolutional autoencoder described above. More specifically, given an input image, the polar origin predictor computed a single-channel feature map using a fully convolutional network, and the centroid of the heatmap was used as the polar origin. The polar transformer module converted the image to log-polar coordinates using the predicted polar origin. The transformed image was then fed to the AE. Since rotation of the input image results in a vertical shift in log-polar space that wraps at the boundary, we used wrap-around padding in the vertical dimension and zero padding in the horizontal dimension before applying convolutions in the first two layers of the encoder. An inverse polar transformer module was used to convert the output of the AE from log-polar space back to the original image space.
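As a quick sanity check of the encoder shapes described above, the standard convolution output-size formula out = ⌊(n + 2·pad − k)/stride⌋ + 1 can be applied layer by layer:

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Spatial output size of a convolution: floor((n + 2p - k) / s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

def encoder_sizes(size):
    """Trace the spatial size through the stated encoder:
    3x3 s2 p1, then 3x3 s2 p1, then 7x7 s1 p0."""
    sizes = []
    for kernel, stride, padding in [(3, 2, 1), (3, 2, 1), (7, 1, 0)]:
        size = conv_out(size, kernel, stride, padding)
        sizes.append(size)
    return sizes
```

For a 64x64 input this gives feature maps of size 32, 16, and 10, so the bottleneck is a 10x10 map with 64 channels; the mirrored transposed convolutions invert these sizes back to 64x64.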
To convert the image back to cartesian coordinates, the inverse polar transformer module reused the sample points of the original input image that were used to compute the log-polar transform, together with a mean approximation, since a single point in the original image contributes to multiple points in log-polar space. Since this is effectively an autoencoder operating in log-polar space, we use POLARAE to denote this model. Figure 4 shows the architecture.
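The motivation for the log-polar representation can be checked pointwise: rotating a point about the polar origin shifts only its angular coordinate, while scaling shifts only its log-radial coordinate. A minimal sketch:

```python
from math import atan2, cos, hypot, log, sin

def to_log_polar(x, y, origin=(0.0, 0.0)):
    """Map a cartesian point to (rho, theta) = (log radius, angle)
    about the given polar origin."""
    dx, dy = x - origin[0], y - origin[1]
    return log(hypot(dx, dy)), atan2(dy, dx)

def rotate(x, y, alpha):
    """Rotate a point about the origin by alpha radians."""
    return (x * cos(alpha) - y * sin(alpha),
            x * sin(alpha) + y * cos(alpha))
```

For example, rotating (3, 4) by 0.3 rad leaves ρ unchanged and adds 0.3 to θ, while doubling the point adds log 2 to ρ. This is why convolutions in (ρ, θ) are equivariant to rotation and scaling in pixel space, up to the angular wrap-around that the vertical wrap padding handles.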
The authors trained autoencoders and variational autoencoders in cartesian space, as well as autoencoders in log-polar space, with generated data representing a range of interesting transformations. They then tested the ability of the models to extrapolate beyond the learned transformations in pixel space. The authors find that iterative training, data diversity, convolutions, and transformation to log-polar space all improve the generalization performance.
SP:00f68b4f5ceeddc2e6b93cfdf1a75599bffd2acb
Generalization to Out-of-Distribution transformations
1 INTRODUCTION . Humans have a unique ability to generalize beyond the scope of prior experience ( Chollet , 2019 ; Lake et al. , 2017 ; Marcus , 2001 ) , while artificial agents struggle to apply knowledge to distributions outside the convex hull of their training data ( Santoro et al. , 2018 ; Lake & Baroni , 2018 ) . One way humans seem to achieve such generalization is by learning a set of primitive abstract structures , like the 1D ordinal scale ( Summerfield et al. , 2020 ) and grid-like representations ( Hafting et al. , 2005 ) . These structures can also be thought of as symmetry functions : transformations that are in some way invariant to the specific value of their arguments . As a concrete example , we can imagine moving any object around in space , regardless of its shape , color or size . A fundamental question is how can such symmetry functions be learned ? We hypothesize that during development , humans learn a set of canonical transformations - e.g . the translation , rotation and change in size of objects - that are grounded in the sensorimotor system ( Barsalou , 2008 ) , and learned as a consequence of predicting the sensory results of primitive actions ( Battaglia et al. , 2013 ) . The abilities to translate , rotate or scale arbitrary objects then become our first abstract affordances ( Gibson , 2014 ) . Indeed , infants that spend more time playing with blocks are better at abstract mental rotation tasks ( Schwarzer et al. , 2013 ) . Our aim is to model part of this process by presenting artificial neural networks ( LeCun et al. , 1995 ) with 2-dimensional shapes , and training them to predict the effect of translation , rotation or scaling of that shape in pixel space . This is analogous to predicting tactile or visual signals resulting from a simple movement or saccade ( Rao & Ballard , 1999 ; Wolpert et al. , 1995 ) . We do not explicitly model motor actions , but rather transform the image offline , and feed the result back to the model . 
We then test the extent to which such predictions generalize OOD along dimensions such as shape , size , location and time . Evidence of OOD generalization would suggest that the network has learned a symmetry function . We assume that , in order to learn symmetry functions , we must introduce principled inductive biases . The first is convolution . Second , to constrain the network to learn a primitive function that can apply to any shape , we assume it requires exposure to a sufficiently diverse set of examples . We therefore operationalize and vary ’ diversity ’ as the number of distinct shapes present in the training set . As the translational invariance built into convolution is well aligned with the task of translation unsurprisingly , a fully convolutional autoencoder with high training set diversity acheives near perfect OOD generalization for translation to unseen shapes in larger grids and new locations ( Figure 1 ) . However the two components weren ’ t sufficient for OOD generalization of rotation ( Figure 2 ) and scaling . To remedy this we introduce two more principle components . First is the effect of ’ iteration ’ during training ( output fed back as input iteratively ) , based on the idea that sequential applications of the same transformation should maintain the identity of an object ( i.e . object permanence ( Piaget , 2006 ) ) . Secondly , motivated by the strong OOD generalization of translation as depicted in Figure 1 , and the fact that translation in log-polar space is equivalent to rotation and scaling in cartesian space ( Figure 1 of ( Esteves et al. , 2017 ; Tootell et al. , 1982 ) ) , we hypothesize that performing convolutions in log polar space would help in the OOD generalization of rotation and scaling . We summarize the following contributions : • We build a flexible data generator that can produce and transform a large variety of simple but structured images - irregular polygons with variable size , location , and complexity . 
• We show that iteration has the ability to conserve the shape far past the time horizon seen during training and is necessary for better OOD generalization of rotation and scaling transformations along with convolution and high diversity . • We find an interesting tradeoff between diversity and iteration where each could partially make up for less of the other to produce better OOD generalization capabilities . • We novelly propose POLARAE , a fully convolutional autoencoder build using concepts from polar transformer networks ( Esteves et al. , 2017 ) which exploits all the four components demonstrating better OOD generalization performance than a standard autoencoder and a variational autoencoder in cartesian space . 2 RELATED WORK . 2.1 PSYCHOLOGY . Humans can perform mental transformations OOD ( i.e . on unseen shapes ) . For example , humans can mentally translate or rotate shapes at a steady rate ( Shepard & Metzler , 1971 ) or scale abstract distances ( Trope & Liberman , 2010 ) . There are also known mechanisms that might make these transformations general . For example , there is a known impact of diversity on the generalization of properties in psychology , namely , the diversity effect ( Osherson et al. , 1990 ) . Iteration is also a plausible mechanism in humans - psychological data for mental rotation suggest that we transform objects iteratively , since larger angles of rotation elicit longer reaction times ( Shepard & Metzler , 1971 ) . There is also evidence that discrete temporal context updating by recurrent thalamocortical loops serves predictive learning in the brain ( O ’ Reilly et al. , 2014 ) . Basically , iterated operations achieve generality since they are time-invariant . Finally , humans have built-in architectural transforms , such as the transformation of retinal images into a log-polar coordinate system ( Maiello et al. , 2020 ) . 2.2 MACHINE LEARNING . 
There have been several works that demonstrate that high diversity is necessary for OOD generalization . Xu et al . ( 2020 ) provided theoretical guarantees that ReLU MLP networks extrapolate well with linear functions and sufficient diversity in their training set . Sufficient diversity of input is also required for systematic generalization of neural network based reinforcement learning agents ( Hill et al. , 2019 ) . Madan et al . ( 2021 ) showed that data diversity improves OOD category-viewpoint classification but didn ’ t focus on the generative aspect of it using CNNs . There have also been works collecting large scale controlled synthetic datasets ( Borji et al. , 2016 ; Gross et al. , 2010 ; Qiu et al. , 2017 ) to facilitate learning invariance to different types of transformations in deep neural networks . Following a similar motivation , we want CNNs to be able to learn such transformations and to be able to generate them on OOD examples . The idea of iterative training has also been explored in this domain . Using an iterative training technique similar to ours generative adversarial networks learned 3D rotations ( but not scaling ) ( Galama & Mensink , 2019 ) . However there were limited assessment of how different amounts of iteration during training impacted extrapolation in time . Generally , predicting past the number of iterations seen during training has required building in a conservation law of some sort ( Cranmer et al. , 2020 ; Greydanus et al. , 2019 ) . Kumar et al . ( 2020 ) showed that gradual self training at each iteration helps in domain adaptation . In a different context , Kuchaiev & Ginsburg ( 2017 ) proposed iterative output re-feeding to train deep autoencoders for improved collaborative filtering . Recently , the idea that iteration can achieve OOD generalization has been put forth in the context of problem solving ( Schwarzschild et al. , 2021 ) . Kim et al . 
( 2020 ) explored the effectiveness of log-polar space in achieving rotation invariance but was limited to classification and didn ’ t study other types of transformations like scaling . 3 METHOD . 3.1 DATA GENERATOR . All training stimuli were shapes contained within a 64x64 pixel grid space . We constructed irregular N-sided polygonal shapes by first sampling N angular distances between 0 and 2π , and then sampling a radial distance from a centroid ( xcenter , ycenter ) at each of these angles uniformly between 0 and a scale parameter r. This produced a set of vertices ; pixels within the convexity of the vertices were set to 1 , and pixels outside were set to 0 . There was also the option to produce ’ hollow ’ shapes such that an interior cut-out of the shape was set to 0 , in order to produce a test-set of shapes with different distribution from the training set . This data generator was therefore capable of producing a combinatorically large set of possible shapes . Our procedure for shape generation is most similar to Attneave forms ( Attneave & Arnoult , 1956 ) and also bears relation to the method of Fourier descriptors ( Zhang & Lu , 2005 ) , but was selected due to its computational speed and interpretable manipulations of shape parameters . Each shape was used as the input to a neural network , and was transformed in one of the following ways to generate the target for training : for translation , shift 2 pixels to the right ; for rotation , rotate π25 radians clockwise ; for scaling , increase radial length of the vertices by 0.1 ( Figure 3 ) . These transformations were hard-coded but meant to represent a set of innate primitive actions . 3.2 MODEL ARCHITECTURE . We first describe the three baseline models followed by the proposed model : The first model was a fully convolutional autoencoder . 
The encoder consisted of 3 convolutional layers , the first with 16 3x3 kernels , ( stride=2 , padding=1 ) , the second with 32 3x3 kernels ( stride=2 , padding=1 ) , and the last with 64 7x7 kernels ( stride=1 ) . The decoder had 3 layers ( padding=1 , output padding=1 ) that inverted these operations with transposed convolution layers that were mirror images of the encoder layers . This produced an output with the same dimensions as the input image . All layers were followed by rectified linear ( ReLU ) activations . Being fully convolutional , it could accept any input grid size . All weights were initialized using Xavier uniform initialization . We use AE to denote this model . The second model was a fully convolutional variational autoencoder ( Kingma & Welling , 2013 ) . The encoder and decoder architectures were similar to AE . We use VAE to denote this model . The third model was a β-VAE ( Higgins et al. , 2016 ) . We used β = 4 since that learnt a good disentangled representation of the data generative factors in the original work . The proposed model was a combination of polar transformer networks and the fully convolutional autoencoder described above . More specifically , given an input image the polar origin predictor computed a single channel feature map using a fully convolutional network and the centroid of the heatmap was used as the polar origin . The polar transformer module converted the image to log polar coordinates using the predicted polar origin . The transformed image was then fed to AE . Since the rotation of the input image resulted in vertical shift in log polar space wrapping at the boundary , we used wrap around padding on the vertical dimension and zero padding in the horizontal dimension before applying convolutions for the first two layers of the encoder . An inverse polar transformer module was used to convert the output of the AE from log polar space to the original image space . 
To convert the image back to Cartesian coordinates, the inverse polar transformer module reused the sample points of the original input image that were used to compute the log-polar transform; since a single point in the original image contributes to multiple points in log-polar space, these contributions were averaged (a mean approximation). Since this is effectively an autoencoder operating in log-polar space, we use POLARAE to denote this model. Figure 4 shows the architecture.
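The log-polar resampling at the core of POLARAE can be illustrated with a minimal nearest-neighbour sketch (our own simplification; the paper's module presumably uses differentiable sampling and the learned polar origin):

```python
import numpy as np

def to_log_polar(img, origin, out_h=64, out_w=64):
    """Nearest-neighbour log-polar resampling around `origin`.

    Rows index angle and columns index log-radius, so a rotation of
    the input about `origin` becomes a vertical shift that wraps at
    the top/bottom boundary of the log-polar image.
    """
    h, w = img.shape
    oy, ox = origin
    max_r = np.hypot(h, w) / 2
    thetas = np.linspace(0, 2 * np.pi, out_h, endpoint=False)
    rhos = np.exp(np.linspace(0, np.log(max_r), out_w))  # log-spaced radii
    rr = np.round(rhos[None, :] * np.sin(thetas)[:, None] + oy)
    cc = np.round(rhos[None, :] * np.cos(thetas)[:, None] + ox)
    rr = np.clip(rr, 0, h - 1).astype(int)
    cc = np.clip(cc, 0, w - 1).astype(int)
    return img[rr, cc]

img = np.zeros((64, 64))
img[20:28, 30:34] = 1.0            # a small off-centre blob
lp = to_log_polar(img, origin=(32, 32))
# A rotation of `img` about the origin corresponds (up to resampling
# error) to np.roll(lp, k, axis=0) for the matching angular shift k.
```

This is why the encoder above uses wrap-around padding on the vertical (angular) dimension only: rotations wrap there, while the horizontal (log-radius) dimension does not wrap.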
At its heart, the paper explores which inductive biases (and image representations) enable CNNs to generalize better across 2D transformations, including rotations, translations and scale variations. The paper starts by proposing a methodology for generating a controlled dataset of binary masks consisting of random polygons. All training and testing is conducted on this dataset. After showing that convolutions and high data diversity enable better generalization across these transformations, the paper proposes two additional ways to improve performance. First, their approach transforms input images into log-polar space: object rotations in the original space are equivalent to translations in this space, so convolutions in log-polar space are equivariant to rotations in the original pixel space. Second, the paper proposes an iterative approach to modeling transformations, where the output mask generated by their encoder-decoder model is fed back into the network as an input. The motivation behind this iterative approach is that these transformations form a group, so the output after k iterations is still a member of the set that can be reached via a single transformation; such training should therefore force the network to preserve the shape being transformed across iterations. The paper shows that networks trained with these inductive biases and training methodologies do indeed generalize better across these 2D transformations.
Conditional Expectation based Value Decomposition for Scalable On-Demand Ride Pooling
1 INTRODUCTION. Taxi/car on Demand (ToD) services (e.g., UberX, Lyft, Grab) not only provide a comfortable means of transport for customers, but are also good for the environment by enabling sharing of vehicles over time (while being used to serve one request at any one point in time). A further improvement of ToD is on-demand ride pooling (e.g., UberPool, LyftLine, GrabShare, etc.), where vehicles are shared not only over time but also in space (within the same taxi/car). On-demand ride pooling reduces the number of vehicles required, thereby reducing emissions and traffic congestion compared to Taxi/car on-Demand (ToD) services. This is achieved while providing benefits to all the stakeholders involved: (a) individual passengers have reduced costs due to sharing of space; (b) drivers make more money per trip as multiple passengers (or passenger groups) are present; (c) for the aggregation company, more customer requests can be satisfied with the same number of vehicles. In this paper, we focus on this on-demand ride pooling problem at city scale, referred to as the Ride-Pool Matching Problem (RMP) [Alonso-Mora et al. (2017); Bei & Zhang (2018); Lowalekar et al. (2019)]. The goal in an RMP is to assign combinations of user requests to vehicles (of arbitrary capacity) online such that quality constraints (e.g., the delay in reaching the destination due to sharing is not more than 10 minutes) and matching constraints (one request can be assigned to at most one vehicle, and one vehicle can be assigned at most one request combination) are satisfied while maximizing an overall objective (e.g., number of requests, revenue). Unlike the ToD problem, which requires solving a bipartite matching problem between vehicles and customers, the RMP requires effective matching on a tripartite graph of requests, trips (combinations of requests) and vehicles.
This matching on a tripartite graph significantly increases the complexity of solving the RMP online, especially at city scale, where there are hundreds or thousands of vehicles, hundreds of requests arrive every minute, and request combinations have to be computed for each vehicle. Due to this complexity and the need to make decisions online, most existing work related to solving the RMP has focused on computing the best greedy assignments [Ma et al. (2013); Tong et al. (2018); Huang et al. (2014); Lowalekar et al. (2019); Alonso-Mora et al. (2017)]. While these scale well, they are myopic and, as a result, do not consider the impact of a given assignment on future assignments. The closest works of relevance to this paper are by Shah et al. [Shah et al. (2020)] and Lowalekar et al. [Lowalekar et al. (2021)]. We specifically focus on the work by Shah et al., as it has the best performance while being scalable. That work considers the future impact of the current assignment from an individual agent's perspective without sacrificing scalability (to city scale). However, a key limitation of that work is that it does not consider the impact of other agents' (vehicles') actions on an agent's (vehicle's) future impact, which, as we demonstrate in our experiments, can have a major effect (primarily because vehicles are competing for the common demand). To that end, we develop a conditional expectation based value decomposition approach that considers not only the future impact of current assignments but also other agents' states and actions, through the use of conditional probabilities and tighter estimates of individual impact. Due to these conditional probability based tighter estimates of individual value functions, we can scale the work by Guestrin et al. [Guestrin & Parr (2002)] and Li et al. [Li & Kochenderfer (2021)] to solve problems with no explicit coordination graphs and hundreds/thousands of homogeneous agents.
Unlike value decomposition approaches [Rashid & Whiteson (2018); Sunehag & Graepel (2018)] developed for solving cooperative Multi-Agent Reinforcement Learning (MARL) with tens of agents under a centralized-training, decentralized-execution setup, we focus on problems with hundreds or thousands of agents with centralized training and centralized execution (e.g., Uber, Lyft, Grab). In this application domain of taxi-on-demand services, where improving by 0.5%-1% is a major achievement [Lin et al. (2018)], we demonstrate that our approach easily outperforms the existing best approach, NeurADP [Shah et al. (2020)], by at least 3.8% and up to 9.76% on a wide variety of settings for the benchmark real-world taxi dataset [NYYellowTaxi (2016)]. 2 BACKGROUND. In this section, we formally describe the RMP and also provide details of an existing approach for on-demand ride pooling called NeurADP, which we improve over. Ride-Pool Matching Problem (RMP): We consider a fleet of vehicles/resources R with random initial locations, travelling on a predefined road network G with intersections L as nodes, road segments E as edges, and weights on edges indicating the travel time on the road segment. Passengers that want to travel from one location to another send requests to a central entity that collects these requests over a time window called the decision epoch ∆. The goal of the RMP is to match these collected requests U_t to empty or partially filled vehicles that can serve them such that an objective J is maximised subject to constraints on the delay D. We upper-bound D and consider the objective J to be the number of requests served. Thus, the RMP is defined using the tuple [G, U, R, D, ∆, J]¹. Please refer to Appendix A.1 for a detailed description. Delay constraints: D considers two delays, {τ, 2τ}.
τ denotes the maximum allowed pick-up delay, which is the difference between the arrival time of a request and the time at which a vehicle picks the user up. 2τ denotes the maximum allowed detour delay, which is the difference between the time at which the user arrives at their destination in a shared cab and the time at which they would have arrived had they taken a single-passenger cab. Neural Approximate Dynamic Programming (NeurADP) for Solving RMP: Figure 1 provides the overall approach. In this paper, there are two NeurADP [Shah et al. (2020)] contributions of relevance: FV: to estimate the Future Value of current actions, a method for solving the underlying Approximate Dynamic Program (ADP) [Powell (2007)] by considering neural network representations of value functions. DJV: to ensure scalability, Decomposing the Joint Value function into individual vehicle value functions by extending the work of Russell et al. [Russell & Zimdars (2003)]. (¹ Everywhere in the paper, [ , ] is used as the concatenation operator.) Future Value (FV): ADP is similar to a Markov Decision Problem (MDP), with the key difference that the transition uncertainty is extrinsic to the system and not dependent on the action. The ADP problem for the RMP is formulated using the tuple ⟨S, A, ξ, T, J⟩, where: S: the state of the system is represented as s_t = (r_t, u_t), where r_t is the state of all vehicles and u_t contains all the requests waiting to be served. The state is obtained in Step A of Figure 1. A: at each time step there are a large number of requests arriving at the taxi service provider; however, for an individual vehicle only a small number of such requests are reachable.
The feasible set of request combinations for each vehicle i at time t, F_t^i, is computed in Step B of Figure 1:

F_t^i = \{ f^i \mid f^i \in \cup_{c'=1}^{c^i} [U]^{c'},\ \mathrm{PickUpDelay}(f^i, i) \le \tau,\ \mathrm{DetourDelay}(f^i, i) \le 2\tau \} \quad (1)

a_t^{i,f} is the decision variable that indicates whether vehicle i takes action f (a combination of requests) at decision epoch t. Joint actions across vehicles have to satisfy matching constraints: (i) each vehicle i can only be assigned at most one request combination f; (ii) at most one vehicle i can be assigned to a request j; and (iii) a vehicle i can be either assigned or not assigned to a request combination:

\sum_{f \in F_t^i} a_t^{i,f} = 1 \quad \forall i \in R; \qquad \sum_{i \in R} \sum_{f \in F_t^i : j \in f} a_t^{i,f} \le 1 \quad \forall j \in U_t; \qquad a_t^{i,f} \in \{0, 1\} \quad \forall i, f \quad (2)

ξ: denotes the exogenous information – the source of randomness in the system. This corresponds to the user requests or demand; ξ_t denotes the exogenous information at time t. T: denotes the transitions of the system state. In an ADP, the system evolution happens as (s_0, a_0, s_0^a, ξ_1, s_1, a_1, s_1^a, ..., s_t, a_t, s_t^a, ...), where s_t denotes the pre-decision state at decision epoch t and s_t^a denotes the post-decision state [Powell (2007)]. The transition from state s_t to s_{t+1} depends on the action vector a_t and the exogenous information ξ_{t+1}. Therefore,

s_{t+1} = T(s_t, a_t, ξ_{t+1}); \quad s_t^a = T^a(s_t, a_t); \quad s_{t+1} = T^ξ(s_t^a, ξ_{t+1})

It should be noted that T^a(·,·) is deterministic, as the uncertainty is extrinsic to the system. J: denotes the reward function; in the RMP, this is the revenue from a trip. Let V(s_t) denote the value of being in state s_t at decision epoch t; then, using the Bellman equation:

V(s_t) = \max_{a_t \in A_t} \left( J(s_t, a_t) + γ \, E[ V(s_{t+1}) \mid s_t, a_t, ξ_{t+1} ] \right) \quad (3)

where γ is the discount factor.
Using the post-decision state, this expression breaks down nicely:

V(s_t) = \max_{a_t \in A_t} \left( J(s_t, a_t) + γ \, V^a(s_t^a) \right); \qquad V^a(s_t^a) = E[ V(s_{t+1}) \mid s_t^a, ξ_{t+1} ] \quad (4)

The advantage of this two-step value estimation is that the maximization problem in Equation 4 can be solved using an Integer Linear Program (ILP) with the matching constraints indicated in expression (2). Step D of Figure 1 provides this aspect of the overall algorithm. The value function approximation around the post-decision state, V^a(s_t^a), is a neural network and is updated (Step E of Figure 1) by stepping forward through time using sample realizations of the exogenous information (i.e., the demand observed in data). However, as we describe next, maintaining a joint value function is not scalable, and hence we decompose and maintain individual value functions. Decomposing the Joint Value (DJV): Non-linear value functions, unlike their linear counterparts, cannot be directly integrated into the ILP mentioned above. One way to incorporate them is to evaluate the value function for all possible post-decision states and then add these values as constants. However, the number of post-decision states is exponential in the number of resources/vehicles. [Shah et al. (2020)] introduced a two-step decomposition of the joint value function that converts it into a linear combination over individual value functions associated with each vehicle. In the first step, following [Russell & Zimdars (2003)], the joint value function is written as the sum over individual value functions: V(s_t^a) = \sum_i V^i(s_t^a). In the second step, the individual vehicles' value functions are approximated. They assumed that the long-term expected reward of a given vehicle is not significantly affected by the specific actions another vehicle takes in the current decision epoch, and thereby completely neglect the impact of the actions taken by other vehicles at the current time step.
Thus, they model the value function using the pre-decision, rather than post-decision, state of the other vehicles, which gives: V^i(s_t^a) = V^i([s_t^{i,a}, s_t^{-i,a}]) ≈ V^i([s_t^{i,a}, s_t^{-i}]), where -i refers to all vehicles except vehicle i. This allows NeurADP to get around the combinatorial explosion of the post-decision state of all vehicles. NeurADP thus has the joint value function V(s_t^a) = \sum_i V^i([s_t^{i,a}, s_t^{-i}]). They then evaluate these individual V^i values (Step C of Figure 1) for all possible s_t^{i,a} (from the individual value neural network) and then integrate the overall value function into the ILP as a linear function over these individual values. This reduces the number of evaluations of the non-linear value function from exponential to linear in the number of vehicles.
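For intuition, the matching constraints in (2) can be enforced by brute force on a toy instance (our own example with made-up values; the paper solves this as an ILP at scale):

```python
from itertools import product

# Toy instance: two vehicles, three requests; values are made up.
feasible = {  # vehicle -> feasible request combinations F_t^i
    "v1": [frozenset(), frozenset({"r1"}), frozenset({"r1", "r2"})],
    "v2": [frozenset(), frozenset({"r2"}), frozenset({"r3"})],
}
value = {  # value of assigning a combination f to a vehicle i
    ("v1", frozenset()): 0.0,
    ("v1", frozenset({"r1"})): 1.0,
    ("v1", frozenset({"r1", "r2"})): 1.8,
    ("v2", frozenset()): 0.0,
    ("v2", frozenset({"r2"})): 1.2,
    ("v2", frozenset({"r3"})): 0.9,
}

def best_assignment(feasible, value):
    vehicles = list(feasible)
    best, best_val = None, float("-inf")
    # Enumerate joint actions (exponential in |R| -- the ILP avoids this).
    for choice in product(*(feasible[v] for v in vehicles)):
        served = [r for combo in choice for r in combo]
        if len(served) != len(set(served)):  # a request served twice
            continue
        total = sum(value[v, c] for v, c in zip(vehicles, choice))
        if total > best_val:
            best, best_val = dict(zip(vehicles, choice)), total
    return best, best_val

assignment, total = best_assignment(feasible, value)
# Here v1 serves {r1, r2} and v2 serves {r3}, for a total value of 2.7.
```

Enumerating joint choices is exponential in the number of vehicles (here only 3² = 9 joint actions), which is exactly why NeurADP evaluates the decomposed V^i once per (vehicle, action) pair — linear in the number of vehicles — and leaves the combinatorial matching to the ILP.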
This paper considers the ride-sharing matching problem and builds its solution upon NeurADP. The main contribution over NeurADP is that the action value of each agent (vehicle) takes into account the impact of its action on the neighboring agents within the same cluster, which is obtained through clustering of the intersections of the road network. The impact of an agent's action on its neighbors is measured by the neighboring agents' independent values weighted by their action probabilities conditioned on the agent's action. Benchmarking was performed on the NYC taxi dataset against NeurADP. Results for different values of the delay tolerance, capacity, and number of vehicles are reported, and a significant improvement is demonstrated in all cases.
This paper studies the RMP (ride-pool matching problem) for on-demand transportation services. This problem has recently been studied in various papers, but it is an online decision-making problem in which choosing a good matching via plain bipartite graph matching is hard because of future demands. A recent breakthrough, NeurADP by [Shah et al. (2020)], has shown good performance, but the approach proposed in this paper, CEVD, achieves a much larger performance gain (reported as 3.8%-9.76%), which has a significant impact on ToD services. An essential technique of the proposed CEVD is considering the effect of other agents (i.e., other vehicles) when estimating the value of actions.
Learning Dynamics Models for Model Predictive Agents
1 INTRODUCTION. Recently, reinforcement learning (RL) (Sutton & Barto, 2018), in particular actor-critic approaches (Lillicrap et al., 2015), was shown to successfully solve a variety of continuous control problems (Schulman et al., 2017; Lillicrap et al., 2015; Fujimoto et al., 2018; Haarnoja et al., 2018). The simplicity of this approach has led to an explosion of research demonstrating the effectiveness of these methods (Vecerik et al., 2019; Kalashnikov et al., 2018). However, model-free RL suffers from two key disadvantages (Dulac-Arnold et al., 2021). First, model-free RL is sample-inefficient, often requiring millions or billions of environment interactions. Second, the learned policies are tied to a specific task, making transfer of learned knowledge in multi-task settings or across related tasks difficult (Finn et al., 2017; Andrychowicz et al., 2017). Model-based RL holds the promise of overcoming these drawbacks while maintaining the benefits of model-free methods. Model-based RL involves learning a dynamics model from data and then using this model to optimise behaviour, most often with an online planner. Since the dynamics model is ideally independent of the reward and the state-action distribution, multi-task settings are simple: the reward function can be changed at will (Rajeswaran et al., 2020; Argenson & Dulac-Arnold, 2021; Belkhale et al., 2021). For the same reason, offline learning with off-policy data is possible (Yu et al., 2020; Kidambi et al., 2020; Argenson & Dulac-Arnold, 2021). Moreover, model-based RL can achieve better sample efficiency (Janner et al., 2019; Nagabandi et al., 2018; 2020; Byravan et al., 2020; Buckman et al., 2018; Hafner et al., 2019b; Argenson & Dulac-Arnold, 2021). In this paper we investigate the impact of different model-learning design choices on planner performance.
For this evaluation we focus on learning models using feed-forward neural networks, as these are the dominant models in the recent model-based RL literature. A large portion of these papers mainly proposes different methods to train the networks and variations on the planner. Unfortunately, each new paper proposes a series of evolutions, and so individual design choices are rarely ablated fully. This is the goal of our work. We look specifically at the effects of various model-learning design choices on planning performance. We consider 4 different design choices: deterministic vs. stochastic models, multi-step vs. 1-step losses, use of ensembles, and input noise. These design choices have been proposed to obtain better long-term prediction (Abbeel et al., 2005; Chua et al., 2018; Hafner et al., 2019b) and planning performance (Venkatraman et al., 2015; Sanchez-Gonzalez et al., 2020; Pfaff et al., 2020). To derive best practices for each design choice, we perform ablation studies evaluating the impact of the different choices. In addition, we investigate the qualitative performance of the learned models by evaluating their ability to describe complex trajectories with reasonable fidelity and to be consistent. The contributions of this paper are:
• We perform a systematic comparison evaluating the planning performance of different design choices for model learning, i.e., stochastic models, multi-step loss, network ensembles and input noise, on five DeepMind Control Suite environments.
• We observe that mean squared error (1-step or multi-step) is not a good predictor of planning performance when using a learned model.
• We find that multiple model-learning design choices can obtain high reward and consistent long-term predictions for the evaluated systems, excluding the humanoid. The differences in reward between the different combinations are not significant.
• We characterise best practices for learning models using feed-forward networks. In our experiments, deterministic models need to be trained using the multi-step loss and combined with ensembles, while stochastic models require more ensemble components and input noise.
The paper is structured as follows. First, we cover related work on model-based RL for continuous control in Sec. 2. Afterwards, we review the naive approach to learning dynamics models (Sec. 3). Section 4 introduces the design choices of model learning and evaluates the learned models using model-predictive control. The subsequent discussion (Sec. 5) summarizes the insights and results and highlights general challenges of model learning for planning. 2 RELATED WORK. Learning dynamics models has a long tradition in robotics. The seminal work of Atkeson et al. (1986) proposed to learn the dynamics parameters of a rigid-body kinematic chain from data. To remove the limitations of analytic rigid-body models (Atkeson et al., 1986; Lutter et al., 2020; 2021), black-box function approximation was leveraged (Schaal et al., 2002; Choi et al., 2007; Calinon et al., 2010; Nguyen-Tuong et al., 2009; Nguyen-Tuong & Peters, 2010; Lutter et al., 2019; Lutter & Peters, 2019) to learn system models able to capture the peculiarities of real-world robotic systems. The majority of model-based RL focuses on learning dynamics models which predict the next step using the current state and action. The model-learning-for-control literature refers to these models as forward models (Nguyen-Tuong & Peters, 2011). Model-based RL algorithms use the model as a simulator to generate additional data (Sutton, 1991; Janner et al., 2019; Morgan et al., 2021), to evaluate the reward of an action sequence (Chua et al., 2018; Nagabandi et al., 2018; Lambert et al., 2019; Nagabandi et al., 2020), to improve the value function estimate (Feinberg et al., 2018; Buckman et al.
, 2018), or to backpropagate the policy gradients through time (Miller et al., 1995; Deisenroth & Rasmussen, 2011; Heess et al., 2015; Byravan et al., 2020; Amos et al., 2021). Current model-based RL methods use 1-step predictions to compute the optimal trajectory. Most current model-based RL approaches use different deep network architectures to approximate the model, e.g., deterministic networks (Nagabandi et al., 2018; Byravan et al., 2020; Feinberg et al., 2018; Kurutach et al., 2018), stochastic networks (Heess et al., 2015; Lambert et al., 2019), recurrent networks (Hafner et al., 2019a;b; 2020; Ha & Schmidhuber, 2018), stochastic ensembles (Chua et al., 2018; Nagabandi et al., 2020; Janner et al., 2019; Kidambi et al., 2020; Yu et al., 2020; Buckman et al., 2018; Rajeswaran et al., 2020; Lambert et al., 2020b) and graph neural networks (Sanchez-Gonzalez et al., 2018). In this work we focus on feed-forward neural networks for learning dynamics models with continuous states and actions. The learned models are used with model predictive control (MPC) (Garcia et al., 1989; Allgöwer & Zheng, 2012; Pinneri et al., 2020) to solve continuous control tasks. Simple feed-forward neural networks are the standard in most model-based RL and already offer many variations that affect planning performance. For the considered environments, MPC enables the direct evaluation of the learned models without requiring a policy or value function approximation. Therefore, the model performance for planning can be measured directly and compared to planning with the ground-truth model.
3 LEARNING DYNAMICS MODELS. We concentrate on 1-step forward models and review this setup in more detail. Dynamics models predict the next observation $x_{i+1}$ using the current observation $x_i$ and action $u_i$. These models can be used as a simulator for planning without interacting with the real system.
For example, some sample-based planning algorithms sample action sequences close to the current plan, evaluate the reward of each action sequence by simulating the trajectories with the dynamics model, and update the plan using the action sequences with the highest reward. The simplest approach to learn such a forward model is to use a deterministic model that minimizes the prediction error. This objective can be achieved by minimising the 1-step mean squared error between the predicted and observed next state. This optimization loss is described by
$$\theta^* = \arg\min_\theta \frac{1}{N_D} \sum_{(x_i, u_i) \in D} (x_{i+1} - \hat{x}_{i+1})^T (x_{i+1} - \hat{x}_{i+1}) \quad \text{with} \quad \hat{x}_{i+1} = f(x_i, u_i; \theta),$$
with the model parameters $\theta$ and the dataset $D$ containing $N_D$ samples.
Dataset: Current model-based RL algorithms re-use the data collected by the agent to learn the model. Therefore, the data is acquired in the vicinity of the optimal policy. This exploration leads to a model specialized for the considered task, as only regions of high reward are explored. Hence, the learned models are not independent of the state-action distribution, and the reward function cannot be interchanged at will. In contrast to model-based RL, system identification focuses purely on obtaining optimal data for model learning that covers most of the state space.
Integrator: Instead of predicting the next observation, one can predict the change of the observation. The system dynamics are then modelled as an integrator described by $\hat{x}_{i+1} = x_i + \Delta t \, f(x_i, u_i; \theta)$ with the time step $\Delta t$. This technique has proven to yield more stable long-term predictions than predicting the next state directly and has been widely adopted (Chua et al., 2018; Janner et al., 2019; Yu et al., 2020; Amos et al., 2021; Lambert et al., 2020a).
Normalization: Optimizing the MSE can be problematic if the scales of the different observation dimensions differ. In this case, the model overfits to the dimensions with larger scales.
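To make the 1-step objective and the integrator parameterization concrete, the sketch below fits a linear dynamics model to transitions from an assumed toy linear system. The system matrices, the linear model class, and the use of closed-form least squares (which is the exact minimizer of the 1-step MSE for a linear model) are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.05
A_true = np.array([[0.0, 1.0], [-1.0, -0.1]])  # hypothetical ground-truth dynamics
B_true = np.array([[0.0], [1.0]])

# Collect a dataset D of (x_i, u_i, x_{i+1}) transitions with random actions.
X, U, Xn = [], [], []
x = np.zeros(2)
for _ in range(1000):
    u = rng.uniform(-1.0, 1.0, size=1)
    x_next = x + dt * (A_true @ x + B_true @ u)  # the toy system is itself an integrator
    X.append(x); U.append(u); Xn.append(x_next)
    x = x_next
X, U, Xn = np.array(X), np.array(U), np.array(Xn)

# Integrator model x̂_{i+1} = x_i + Δt f(x_i, u_i; θ) with a linear f.
# For this model class, minimizing the 1-step MSE is ordinary least squares
# on the finite-difference targets y_i = (x_{i+1} - x_i) / Δt.
Y = (Xn - X) / dt
Z = np.hstack([X, U])                          # regressors [x_i, u_i]
Theta, *_ = np.linalg.lstsq(Z, Y, rcond=None)  # θ* = argmin 1-step MSE
A_hat, B_hat = Theta[:2].T, Theta[2:].T

pred = X + dt * (Z @ Theta)                    # 1-step predictions x̂_{i+1}
mse = float(np.mean(np.sum((pred - Xn) ** 2, axis=1)))
```

Because the data is generated exactly by the toy system, the fit recovers the true dynamics; with a neural network f the same objective would instead be minimized by stochastic gradient descent.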
For forward models this results in overfitting to the velocities, as the positions are usually an order of magnitude smaller. To mitigate this, the MSE can be normalized using the diagonal variance of the dataset $\Sigma_D$. The normalized MSE is described by
$$\theta^* = \arg\min_\theta \frac{1}{N_D} \sum_{(x_i, u_i) \in D} (x_{i+1} - \hat{x}_{i+1})^T \Sigma_D^{-1} (x_{i+1} - \hat{x}_{i+1}).$$
The normalized MSE is comparable to standardization of the model inputs and outputs. In our experiments we did not observe any significant benefit from using standardization instead of the normalized MSE.
4 RESULTS. In the following we present our quantitative and qualitative evaluation of the model learning design choices. We evaluate the planning performance for four design choices, i.e., stochastic models (Sec. 4.2), multi-step loss (Sec. 4.3), network ensembles (Sec. 4.4) and input noise (Sec. 4.5). These design choices have been previously proposed in the literature to improve the model for planning (Abbeel et al., 2005; Chua et al., 2018; Hafner et al., 2019b) and to generate better long-term predictions (Venkatraman et al., 2015; Sanchez-Gonzalez et al., 2020; Pfaff et al., 2020). For each of these choices we first introduce the approach, summarize the results and conclude the best practices. The best hyperparameters for model learning were identified first using random search, and the ablation studies compare against these optimal hyperparameters. The humanoid is only used for the initial ablation study, as no model-learning approach learns a model that could be used for planning. A more detailed experimental setup, the hyperparameters and additional ablation studies are contained in the appendix. The links within the results point to the referenced sections at https://sites.google.com/view/learning-better-models/.
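Returning to the normalized objective of Sec. 3, the toy sketch below shows why the plain MSE overweights large-scale dimensions and how dividing by the diagonal dataset variance $\Sigma_D$ rebalances them. The scales and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data: dimension 0 is "position"-like (small scale), dimension 1 is
# "velocity"-like (10x larger scale). Scales are illustrative assumptions.
scales = np.array([0.1, 1.0])
targets = rng.normal(size=(500, 2)) * scales
preds = targets + rng.normal(size=(500, 2)) * (0.1 * scales)  # ~10% error per dim
err = targets - preds

# Plain MSE: the large-scale dimension dominates the objective.
per_dim_plain = np.mean(err ** 2, axis=0)

# Normalized MSE: weight by the inverse diagonal dataset variance Σ_D^{-1},
# so both dimensions contribute comparably despite their different scales.
var = np.var(targets, axis=0)                  # diagonal of Σ_D
per_dim_norm = np.mean(err ** 2 / var, axis=0)
```

Under the plain MSE, the velocity-like dimension contributes roughly 100x more to the loss even though the relative error is the same in both dimensions; after normalization the contributions are comparable.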
This paper investigates different design choices for a planning model that predicts the next state in observation space. The paper compares (1) prediction with deterministic or stochastic models, (2) using a 1-step or a multi-step forward prediction loss to train the model, (3) prediction with a single network or a network ensemble, and (4) using perfect observations or adding Gaussian noise to the observations. Through these experiments, the paper suggests that deterministic models should be trained with the multi-step loss, while stochastic models work better when the 1-step log-likelihood loss is used and noise is added to the observations. In both cases, ensembles tend to show better performance.
SP:0fdb3d5169a5c900e3bc0c496d656c5d3395c710
Learning Dynamics Models for Model Predictive Agents
1 INTRODUCTION. Recently, reinforcement learning (RL) (Sutton & Barto, 2018), in particular actor-critic approaches (Lillicrap et al., 2015), was shown to successfully solve a variety of continuous control problems (Schulman et al., 2017; Lillicrap et al., 2015; Fujimoto et al., 2018; Haarnoja et al., 2018). The simplicity of this approach has led to an explosion of research demonstrating the effectiveness of these methods (Vecerik et al., 2019; Kalashnikov et al., 2018). However, model-free RL suffers from two key disadvantages (Dulac-Arnold et al., 2021). First, model-free RL is sample inefficient, often requiring millions or billions of environment interactions. Second, the learned policies are tied to a specific task, making transfer of learned knowledge in multi-task settings or across related tasks difficult (Finn et al., 2017; Andrychowicz et al., 2017). Model-based RL holds the promise of overcoming these drawbacks while maintaining the benefits of model-free methods. Model-based RL involves learning a dynamics model from data and then using this model to optimise behaviour, most often with an online planner. Since the dynamics model is ideally independent of the reward and the state-action distribution, multi-task settings are simple: the reward function can be changed at will (Rajeswaran et al., 2020; Argenson & Dulac-Arnold, 2021; Belkhale et al., 2021). For the same reason, offline learning with off-policy data is possible (Yu et al., 2020; Kidambi et al., 2020; Argenson & Dulac-Arnold, 2021). Moreover, model-based RL can achieve better sample efficiency (Janner et al., 2019; Nagabandi et al., 2018; 2020; Byravan et al., 2020; Buckman et al., 2018; Hafner et al., 2019b; Argenson & Dulac-Arnold, 2021). In this paper we investigate the impact of the different model learning design choices on planning performance.
This work ablates some of the design choices that go into learning a dynamics model for continuous control environments. They ablate four choices: use of deterministic vs. stochastic models, multi-step losses, network ensembles, and input noise. The authors study these on a few of the DeepMind Control Suite environments.
SP:0fdb3d5169a5c900e3bc0c496d656c5d3395c710
Neural Combinatorial Optimization with Reinforcement Learning: Solving the Vehicle Routing Problem with Time Windows
1 INTRODUCTION. The vehicle routing problem with time windows (VRPTW) can be defined as an extension of the well-known vehicle routing problem (VRP) in which the objective is to design a network of routes that satisfies customer demands with minimal total cost Gan et al. (2012). Each route starts from and ends at the depot, and its total demand is strictly under the vehicle capacity. Except for the depot, all clients are visited once within the corresponding time window. When this constraint is violated, a penalty cost is applied. Plenty of research has focused on studying and solving this problem, which is NP-hard Lenstra & Kan (1981). Introducing time windows increases its computational complexity, so the VRPTW requires more advanced techniques to obtain reliable solutions. The literature is full of hand-engineered heuristics that provide near-optimal solutions within practical runtime Bräysy & Gendreau (2005). In general, these classical approaches fulfill the trade-off between optimality and complexity. However, the challenge becomes greater when new problem instances are defined or new features are introduced. In this case, manual adaptation and business knowledge are required to maintain the heuristic's efficiency. Considering this tedious maintenance process and the huge advances in learning methods, some works have focused on using RNNs and RL to learn independent heuristics for solving the VRP. In this paper, we extend those works by adding the time-windows constraint. Precisely, we develop an end-to-end model able to provide near-optimal solutions for every problem instance. In other words, as long as the trained model receives data coming roughly from the same generating process, it yields a reliable solution without the need to build another model for this specific data.
The framework we propose is suitable for introducing more flexibility regarding input variance because of the reinforcement learning approach used. The learning process can be formulated as a Markov decision process in a well-defined environment. Therefore, the optimal solution is designed from a dynamic perspective, where the policy is parameterized by an attention model and trained to maximize the reward, which corresponds to the negative tour length. Additionally, the state can be defined as the data relating to each customer (Cartesian coordinates, demand, service time, allowed time window). This state combines static and dynamic parameters. Its dynamic dimension directly reflects the demand change over the learning steps, in the sense that once a client is visited its demand turns into zero. Finally, the environment actions can be seen as the set of customers to include in the solution at each stage. Our approach is an extension of existing works, namely Bello et al. (2016), Nazari et al. (2018), and Kool et al. (2018). Our added value lies in generalizing those works to solve the VRPTW, one of the most common combinatorial optimization problems. This paper also aims to strengthen the use of machine learning for solving hard combinatorial problems Bengio et al. (2021). Including the time-windows constraint increases the VRP's complexity and changes the learning and optimization strategy. Consequently, it requires customization of the data generating process and a new architecture of the reinforcement learning space, especially the environment policy and the transition function. Concretely, the complete model is made up of a neural network that receives the embedding of dynamic and static inputs. The outputs of this sub-model are processed through an attention mechanism within the reinforcement learning space to deliver the near-optimal sequence.
2 APPROACH BACKGROUND.
Before we dive deep into the model architecture and present its main components, we should briefly highlight some problem-related concepts. In addition to the commonly known assumptions of the VRP, the treated problem has some specific ones. The halting condition is reached when all node demands are satisfied. Furthermore, the vehicle of capacity D returns to the depot to refill when its load runs out, without resetting the time. Each customer has a service time $s_i$, strictly shorter than the corresponding time window $[T^{(i)}_{\min}, T^{(i)}_{\max}]$, to unload its demand. We assume in our case that the time spent to go from customer i to customer j is proportional to the distance $d_{ij}$. In addition, we define T to be the time needed to serve all clients. In short, the VRPTW can be formulated on a graph G = (X, E), where $X = \{x_0, x_1, \ldots, x_n\}$ is the set of vertices, $x_0$ stands for the depot, and $E = \{e_{ij} = (x_i, x_j) \mid x_i, x_j \in X\}$ is the set of edges. Each $x_i$ is associated with a tuple of features $(c_i, d_i, s_i, TW_i)$ and each $e_{ij}$ is associated with a cost $d_{ij}$, where:
$c_i$: the two-dimensional coordinates of node i;
$d_i$: the demand of node i;
$s_i$: the service time of node i;
$TW_i$: the time window to serve node i;
$d_{ij}$: the distance between node i and node j.
The ultimate goal of solving the VRPTW is to find the path $\pi = (\pi_1, \ldots, \pi_N)$ that minimizes the cost within the space $\Pi = \{(\pi_1, \ldots, \pi_N), \pi_i \in \{x_0, x_1, \ldots, x_n\}\}$.
Remark: As part of the problem setting, it is important to mention that split deliveries are not allowed, and a single vehicle drives to serve all clients. Much research in the literature has tackled the above-described problem with hand-crafted approaches Fisher (1994); Cordeau & Groupe d'études et de recherche en analyse des décisions (Montréal) (2000).
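As a concrete reading of this formulation, the sketch below builds a tiny VRPTW instance with the node features $(c_i, d_i, s_i, TW_i)$ and evaluates the cost of a candidate tour: travel time is proportional to distance, the single vehicle returns to the depot to refill when its load runs out without resetting the time, early arrivals wait for the window to open, and late arrivals incur a penalty. The instance values, the penalty weight and the helper names are all illustrative assumptions.

```python
import math

# Tiny illustrative instance: node 0 is the depot.
coords  = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]    # c_i
demand  = [0.0, 3.0, 4.0, 2.0]                                # d_i
service = [0.0, 0.2, 0.2, 0.2]                                # s_i
windows = [(0.0, 100.0), (0.0, 5.0), (1.0, 6.0), (2.0, 8.0)]  # TW_i
D = 5.0          # vehicle capacity
PENALTY = 10.0   # cost per unit of time-window violation (assumed weight)

def dist(i, j):
    (xi, yi), (xj, yj) = coords[i], coords[j]
    return math.hypot(xi - xj, yi - yj)

def tour_cost(pi):
    """Cost of a tour pi over customers; one vehicle, depot refills allowed."""
    t, load, pos, cost = 0.0, D, 0, 0.0
    for i in pi:
        if demand[i] > load:          # return to depot to refill; time keeps running
            cost += dist(pos, 0)
            t += dist(pos, 0)
            load, pos = D, 0
        cost += dist(pos, i)
        t += dist(pos, i)             # travel time proportional to distance
        lo, hi = windows[i]
        t = max(t, lo)                # wait if arriving before the window opens
        if t > hi:
            cost += PENALTY * (t - hi)  # soft time-window violation
        t += service[i]
        load -= demand[i]
        pos = i
    cost += dist(pos, 0)              # final return to the depot
    return cost

cost = tour_cost([1, 2, 3])
```

For this instance the tour [1, 2, 3] meets every window (total cost 4 + 2√2), while visiting the customers in reverse arrives late at customer 1 and pays the penalty.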
However, in recent years a new dedicated branch called neural combinatorial optimization has dawned. Vinyals et al. (2015) were the first to present its main foundations, through a sequence-to-sequence model called the pointer network. The latter consists of two coupled RNNs: the first encodes the inputs into a specific representation, and the other decodes the processed output and renders it as a sequence. This work follows a supervised learning approach, training the model on labeled data for the traveling salesman problem (TSP). Even though this study delivered promising insights about using neural networks for solving combinatorial problems, its results are strongly linked to the quality of the labels used. Furthermore, it is hard to find enough labeled TSP data for training. To overcome these limitations, a reinforcement learning environment to train the RNN models was proposed by Bello et al. (2016). Their approach is fully supported by the fact that roughly all combinatorial problems are evaluated through a specific reward policy. They provide significant results for both the TSP and the knapsack problem in terms of solution quality and computational time. Other attempts in the literature adopted the same perspective for solving the VRP, namely Nazari et al. (2018) and Peng et al. (2019). Unlike the TSP, the VRP includes some dynamic parameters, especially node features such as the demand, which changes after visiting a particular customer. As a consequence, the major changes they proposed concern the attention mechanism, the transition function in the decoding steps, and the embedding of both dynamic and static inputs. Taking into account the above-described evolution of neural combinatorial optimization, we now present the model architecture to solve the VRPTW.
3 THE MODEL ARCHITECTURE.
The encoder-decoder configuration has proved very efficient for many problems, including the VRP Sutskever et al. (2014). As shown in Figure 1, the encoder receives raw data X as described in Section 2 and converts it to a convenient representation through several layers. These features constructed by the encoder are used by the decoder to build the near-optimal sequence progressively. Concretely, the decoder gradually picks one node to include in the sequence according to a distribution computed for each node. Thus, one can give the joint probability of a solution π using the chain rule as follows:
$$p_\theta(\pi \mid X) = \prod_{i=1}^{N} p_\theta(\pi_i \mid X, \pi_{1:i-1}) \quad (1)$$
where θ are the estimated distribution parameters and $\pi_{1:i-1} = (\pi_1, \pi_2, \ldots, \pi_{i-1})$.
3.1 ENCODER. More precisely, the encoder incrementally gets the input sequence and transforms it into a set of embeddings. For each node $x_i$ with features of dimension d (d = 5 in the case of the VRPTW), the initial embedding $h_i^{(0)}$ of dimension d' is computed via a linear transformation:
$$h_i^{(0)} = W x_i + b \;\text{ if } i \neq 0, \qquad h_i^{(0)} = W_0 x_i + b_0 \;\text{ if } i = 0 \quad (2)$$
where $W \in \mathbb{R}^{d' \times d}$ and $b \in \mathbb{R}^{d'}$ are learnable parameters, while $W_0$ and $b_0$ are the ones used for the depot. As shown in Figure 1, this set of embeddings crosses a network of 3 layers for further updates. Each layer comprises two sublayers: a self-attention sublayer followed by a feed-forward sublayer.
• Self-attention sublayer: also called multi-head attention, it brings specific updates to the primary embeddings Vaswani et al. (2017). Let $h_i^{(l)}$ denote the embedding of node i in layer l ∈ {1, 2, 3}. Taking into account the recursive relation between layers, one can compute the multi-head attention vector $\mathrm{MHA}_i^{(l)}$ following these equations Peng et al.
(2019):
$$q_{im}^{(l)} = W_m^Q h_i^{(l-1)}, \quad k_{im}^{(l)} = W_m^K h_i^{(l-1)}, \quad v_{im}^{(l)} = W_m^V h_i^{(l-1)} \quad (3)$$
$$u_{ijm}^{(l)} = q_{im}^{(l)} \cdot k_{jm}^{(l)} \quad (4)$$
$$a_{ijm}^{(l)} = \frac{e^{u_{ijm}^{(l)}}}{\sum_{y=0}^{n} e^{u_{iym}^{(l)}}} \quad (5)$$
$$h_{im}^{\prime(l)} = \sum_{j=0}^{n} a_{ijm}^{(l)} v_{jm}^{(l)} \quad (6)$$
$$\mathrm{MHA}_i^{(l)}(h_0^{(l-1)}, h_1^{(l-1)}, \ldots, h_n^{(l-1)}) = \sum_{m=1}^{M} W_m^O h_{im}^{\prime(l)} \quad (7)$$
where $W_m^Q, W_m^K, W_m^V \in \mathbb{R}^{d \times d'}$ are learnable parameters, M is the number of heads, $q_{im}^{(l)} \in \mathbb{R}^d$ is the query vector, $k_{im}^{(l)} \in \mathbb{R}^d$ is the key vector, and $v_{im}^{(l)} \in \mathbb{R}^d$ is the value vector.
• Feed-forward sublayer: Using the multi-head attention vector, the update made by this sublayer for each node i is computed as follows Peng et al. (2019):
$$\hat{h}_i^{(l)} = \tanh\left(h_i^{(l-1)} + \mathrm{MHA}_i^{(l)}(h_0^{(l-1)}, h_1^{(l-1)}, \ldots, h_n^{(l-1)})\right) \quad (8)$$
$$\mathrm{FF}(\hat{h}_i^{(l)}) = W_1^F \,\mathrm{ReLU}(W_0^F \hat{h}_i^{(l)} + b_0^F) \quad (9)$$
$$h_i^{(l)} = \tanh\left(\hat{h}_i^{(l)} + \mathrm{FF}(\hat{h}_i^{(l)})\right) \quad (10)$$
where $h_i^{(l-1)}$ is the embedding of node i at layer l − 1, $W_0^F \in \mathbb{R}^{d \times d'}$, and $W_1^F \in \mathbb{R}^{d' \times d}$. This computation process is replicated at each layer; consequently the final embedding vector $(h_0^{(N)}, h_1^{(N)}, \ldots, h_n^{(N)})$ is the one obtained after the last layer N. This output vector provides the main structural features for the decoder component.
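Equations (3)-(10) describe one multi-head self-attention sublayer followed by a feed-forward sublayer, both wrapped in tanh residual updates. The sketch below implements a single such layer in NumPy with randomly initialized parameters; the dimensions, the initialization scale and the stack of 3 layers are illustrative assumptions, not trained values.

```python
import numpy as np

rng = np.random.default_rng(2)
n_nodes, d_p, d, M, d_ff = 6, 16, 8, 4, 32  # illustrative sizes; d_p plays the role of d'

def init_params():
    g = lambda *s: rng.normal(scale=0.1, size=s)
    return dict(WQ=g(M, d, d_p), WK=g(M, d, d_p), WV=g(M, d, d_p),
                WO=g(M, d_p, d), WF0=g(d_ff, d_p), bF0=g(d_ff), WF1=g(d_p, d_ff))

def encoder_layer(H, p):
    """One layer of Eqs. (3)-(10): multi-head self-attention + feed-forward."""
    MHA = np.zeros_like(H)
    for m in range(M):
        Q, K, V = H @ p["WQ"][m].T, H @ p["WK"][m].T, H @ p["WV"][m].T  # Eq. (3)
        U = Q @ K.T                                    # Eq. (4): q_im · k_jm
        A = np.exp(U - U.max(axis=1, keepdims=True))
        A /= A.sum(axis=1, keepdims=True)              # Eq. (5): softmax over j
        MHA += (A @ V) @ p["WO"][m].T                  # Eqs. (6)-(7)
    H_hat = np.tanh(H + MHA)                           # Eq. (8)
    FF = np.maximum(H_hat @ p["WF0"].T + p["bF0"], 0.0) @ p["WF1"].T  # Eq. (9)
    return np.tanh(H_hat + FF)                         # Eq. (10)

H = rng.normal(size=(n_nodes, d_p))  # initial embeddings h_i^{(0)} from Eq. (2)
for _ in range(3):                   # 3 stacked layers, each with its own parameters
    H = encoder_layer(H, init_params())
```

The final H corresponds to $(h_0^{(N)}, \ldots, h_n^{(N)})$ handed to the decoder; note the bounded range of the output, since every layer ends in a tanh.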
The paper proposes to solve the vehicle routing problem with time windows using a neural network and a reinforcement learning framework. An attention-based encoder-decoder model is used to predict the distribution over solutions while satisfying the problem constraints, and an RL framework is trained to optimize the model parameters. Experimental results on a synthetic dataset demonstrate that the proposed framework performs on par with traditional combinatorial solvers like Google OR-Tools and the LKH heuristic.
SP:2d246377df0afb9665f4ea9f2c510e9f48cc6d8c
Neural Combinatorial Optimization with Reinforcement Learning : Solving theVehicle Routing Problem with Time Windows
1 INTRODUCTION . Vehicle routing problem with time windows ( VRPTW ) can be defined as an extension of the wellknown vehicle routing problem ( VRP ) in which the objective is to design a network of routes to satisfy customers demands with minimal total costsGan et al . ( 2012 ) . Each route starts from and ends at the depot , and for which the total demand is strictly under the vehicle capacity . Except for the depot , all clients are visited once within the corresponding time window . Moreover , when this previous constraint is violated , a penalty cost will be applied . Plenty of research have focused on studying and solving this problem commonly referred to as NP-hard Lenstra & Kan ( 1981 ) . Introducing time windows increases its computational complexity , therefore VRPTW requires more advanced techniques to get reliable solutions . The literature is full of hand-engineered heuristics that provide near-optimal solutions within practical runtime Bräysy & Gendreau ( 2005 ) . In general , these classical approaches fulfill the trade-off between optimality and complexity . However , the challenge becomes greater when new problem instances are defined or new features are inserted . In this case , a manual adaptation and business knowledge are required to maintain the heuristic ’ s efficiency . Considering this tedious maintenance process and the huge advancement in learning methods , some works have been focusing on using RNN and RL to learn independent heuristics for solving the VRP . In this paper , we are extending those works by adding the time windows constraint . Precisely , we develop an end-to-end model able to provide near-optimal solutions for every problem instance . In other words , as long as the trained model receives data coming roughly from the same generating process , it yields a reliable solution without the need to build another model for this specific data . 
The framework we propose accommodates input variation thanks to the reinforcement learning approach used. The learning process is formulated as a Markov decision process in a well-defined environment: the solution is built dynamically, with a policy parameterized by an attention model and trained to maximize the reward, which corresponds to the negative tour length. The state is the data describing each customer (Cartesian coordinates, demand, service time, allowed time window), combining static and dynamic parameters; its dynamic part reflects how demands change over the decoding steps, in that once a client is visited its demand becomes zero. Finally, the environment's actions are the set of customers that can be added to the solution at each stage. Our approach extends existing works, namely Bello et al. (2016), Nazari et al. (2018), and Kool et al. (2018); our contribution lies in generalizing them to the VRPTW, one of the most common combinatorial optimization problems. This paper also aims to strengthen the case for machine learning in hard combinatorial optimization (Bengio et al., 2021). Including the time-window constraint increases the VRP's complexity and changes the learning and optimization strategy; consequently, it requires customization of the data-generating process and a new design of the reinforcement learning components, especially the environment policy and the transition function. Concretely, the complete model consists of a neural network that receives embeddings of the dynamic and static inputs; its outputs are processed through an attention mechanism to deliver, in the end, a near-optimal sequence. 2 APPROACH BACKGROUND .
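The reward signal described above can be made concrete with a short sketch. This is an illustrative implementation, not the paper's code: the instance fields, the fixed penalty weight, and the unit vehicle speed are all our own assumptions.

```python
import math

def tour_reward(coords, route, windows, service, penalty=10.0):
    """Reward = negative tour cost: total Euclidean length plus a fixed
    penalty per late arrival (the penalty form is an assumption).
    Travel time is taken proportional to distance, as in the paper."""
    t, cost = 0.0, 0.0
    for prev, node in zip(route, route[1:]):
        d = math.dist(coords[prev], coords[node])
        cost += d
        t += d                        # unit speed: travel time ~ distance
        t_min, t_max = windows[node]
        if t < t_min:
            t = t_min                 # wait until the window opens
        elif t > t_max:
            cost += penalty           # time-window violation
        t += service[node]            # serve the customer
    return -cost
```

For a route depot → 1 → 2 → depot this returns minus the tour length whenever every window is respected, and a strictly lower reward for each late arrival.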
Before we dive into the model architecture and present its main components, we briefly highlight some problem-specific concepts. In addition to the commonly known VRP assumptions, the treated problem makes some specific ones. The halting condition is reached when the demands of all nodes are satisfied. The vehicle, of capacity D, returns to the depot to refill whenever its load runs out, without resetting the time. Each customer has a service time $s_i$, strictly shorter than the corresponding time window $[T^{(i)}_{\min}, T^{(i)}_{\max}]$, to unload its demand. We assume that the time spent travelling from customer $i$ to customer $j$ is proportional to the distance $d_{ij}$, and we define $T$ as the time needed to serve all clients. In short, the VRPTW can be formulated on a graph $G = (V, E)$, where $V = \{x_0, x_1, \ldots, x_n\}$ is the set of vertices, $x_0$ standing for the depot, and $E = \{e_{ij} = (x_i, x_j) \mid (x_i, x_j) \in V^2\}$ is the set of edges. Each $x_i$ is associated with a tuple of features $(c_i, d_i, s_i, TW_i)$ and each $e_{ij}$ with a cost $d_{ij}$, where $c_i$ is the two-dimensional coordinates of node $i$, $d_i$ its demand, $s_i$ its service time, $TW_i$ the time window in which it must be served, and $d_{ij}$ the distance between nodes $i$ and $j$. The ultimate goal of solving the VRPTW is to find the path $\pi = (\pi_1, \ldots, \pi_N)$ that minimizes the cost within the space $\Pi = \{(\pi_1, \ldots, \pi_N), \pi_i \in \{x_0, x_1, \ldots, x_n\}\}$. Remark: as part of the problem setting, split deliveries are not allowed, and a single vehicle drives to serve all clients. Much research in the literature has tackled the above problem with hand-crafted approaches (Fisher, 1994; Cordeau, 2000).
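As a concrete illustration, a problem instance carrying the node features $(c_i, d_i, s_i, TW_i)$ above can be represented as plain data. The field names, value ranges, and seeding below are our own illustrative choices, not the paper's generating process:

```python
import random

def random_vrptw_instance(n, capacity=30, horizon=10.0, seed=0):
    """Sample a VRPTW instance: node 0 is the depot; every other node i
    carries coordinates c_i, demand d_i, service time s_i and a time
    window TW_i = (t_open, t_close). All ranges are illustrative."""
    rng = random.Random(seed)
    nodes = [{"c": (0.5, 0.5), "d": 0, "s": 0.0, "tw": (0.0, horizon)}]
    for _ in range(n):
        t_open = rng.uniform(0.0, 0.6 * horizon)
        nodes.append({
            "c": (rng.random(), rng.random()),   # coordinates in unit square
            "d": rng.randint(1, 9),              # demand
            "s": rng.uniform(0.1, 0.5),          # service time
            "tw": (t_open, t_open + rng.uniform(1.0, 3.0)),
        })
    return {"capacity": capacity, "nodes": nodes}
```

Instances drawn this way share one generating process, which is exactly the setting in which the trained model is expected to generalize.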
However, in recent years a new dedicated branch called neural combinatorial optimization has emerged. Vinyals et al. (2015) laid its foundations with a sequence-to-sequence model called the pointer network, which consists of two coupled RNNs: one encodes the inputs into a specific representation, and the other decodes the processed output and renders it as a sequence. That work followed a supervised learning approach, training the model on labeled data for the traveling salesman problem (TSP). Although the study delivered promising insights about using neural networks to solve combinatorial problems, its results are strongly tied to the quality of the labels, and it is hard to obtain enough labeled TSP data for training. To overcome these limitations, Bello et al. (2016) proposed a reinforcement learning environment in which to train the RNN models; their approach is supported by the fact that essentially all combinatorial problems are evaluated through a specific reward policy, and it delivered significant results for both TSP and knapsack problems in terms of solution quality and computational time. Other attempts in the literature took the same perspective to solve the VRP, namely Nazari et al. (2018) and Peng et al. (2019). Unlike TSP, the VRP includes dynamic parameters, notably node features such as the demand, which changes after a customer is visited. Consequently, the main changes those works proposed concern the attention mechanism, the transition function in the decoding steps, and the embedding of both dynamic and static inputs. Building on this evolution of neural combinatorial optimization, we now present our model architecture for the VRPTW. 3 THE MODEL ARCHITECTURE .
The encoder-decoder configuration has proved highly effective for many problems, including the VRP (Sutskever et al., 2014). As shown in Figure 1, the encoder receives the raw data X described in Section 2 and converts it into a convenient representation through several layers. These features are then consumed by the decoder, which progressively builds the near-optimal sequence: at each step it picks one node to append, according to a distribution computed over the nodes. The joint probability of a solution $\pi$ therefore factorizes by the chain rule as $p_\theta(\pi \mid X) = \prod_{i=1}^{N} p_\theta(\pi_i \mid X, \pi_{1:i-1})$ (1), where $\theta$ are the learned distribution parameters and $\pi_{1:i-1} = (\pi_1, \pi_2, \ldots, \pi_{i-1})$. 3.1 ENCODER. The encoder receives the input sequence and transforms it into a set of embeddings. For each node $x_i$ with features of dimension $d$ ($d = 5$ for the VRPTW), the initial embedding $h_i^{(0)}$ of dimension $d'$ is computed via a linear transformation: $h_i^{(0)} = W x_i + b$ if $i \neq 0$, and $h_i^{(0)} = W_0 x_i + b_0$ if $i = 0$ (2), where $W \in \mathbb{R}^{d' \times d}$ and $b \in \mathbb{R}^{d'}$ are learnable parameters, and $W_0$, $b_0$ are their counterparts for the depot. As shown in Figure 1, this set of embeddings passes through a stack of 3 layers for further updates. Each layer comprises two sublayers: a self-attention sublayer followed by a feed-forward sublayer. • Self-attention sublayer: also called multi-head attention, it updates the primary embeddings (Vaswani et al., 2017). Let $h_i^{(l)}$ denote the embedding of node $i$ in layer $l \in \{1, 2, 3\}$. Using the recursive relation between layers, one can compute the multi-head attention vector $\mathrm{MHA}_i^{(l)}$ with the following equations from Peng et al.
(2019): $q_{im}^{(l)} = W_m^Q h_i^{(l-1)}$, $k_{im}^{(l)} = W_m^K h_i^{(l-1)}$, $v_{im}^{(l)} = W_m^V h_i^{(l-1)}$ (3); $u_{ijm}^{(l)} = q_{im}^{(l)} \cdot k_{jm}^{(l)}$ (4); $a_{ijm}^{(l)} = e^{u_{ijm}^{(l)}} / \sum_{y=0}^{n} e^{u_{iym}^{(l)}}$ (5); $h_{im}'^{(l)} = \sum_{j=0}^{n} a_{ijm}^{(l)} v_{jm}^{(l)}$ (6); $\mathrm{MHA}_i^{(l)}(h_0^{(l-1)}, h_1^{(l-1)}, \ldots, h_n^{(l-1)}) = \sum_{m=1}^{M} W_m^O h_{im}'^{(l)}$ (7). Here $W_m^Q, W_m^K, W_m^V \in \mathbb{R}^{d \times d'}$ are learnable parameters, $M$ is the number of heads, and $q_{im}^{(l)}, k_{im}^{(l)}, v_{im}^{(l)} \in \mathbb{R}^d$ are the query, key, and value vectors. • Feed-forward sublayer: using the multi-head attention vector, the update made by this sublayer for each node $i$ is computed as follows (Peng et al., 2019): $\hat{h}_i^{(l)} = \tanh\big(h_i^{(l-1)} + \mathrm{MHA}_i^{(l)}(h_0^{(l-1)}, h_1^{(l-1)}, \ldots, h_n^{(l-1)})\big)$ (8); $\mathrm{FF}(\hat{h}_i^{(l)}) = W_1^F \, \mathrm{ReLU}(W_0^F \hat{h}_i^{(l)} + b_0^F)$ (9); $h_i^{(l)} = \tanh\big(\hat{h}_i^{(l)} + \mathrm{FF}(\hat{h}_i^{(l)})\big)$ (10), where $h_i^{(l-1)}$ is the embedding of node $i$ at layer $l-1$, $W_0^F \in \mathbb{R}^{d \times d'}$, and $W_1^F \in \mathbb{R}^{d' \times d}$. This computation is replicated at each layer, so the final embedding vector $(h_0^{(N)}, h_1^{(N)}, \ldots, h_n^{(N)})$ is the one obtained after the last layer $N$. This output vector provides the main structural features for the decoder component.
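Equations (3)-(10) can be sketched in NumPy. For readability this uses a single head ($M = 1$), a simplification of the multi-head case; shapes follow the stated convention $W_m^Q, W_m^K, W_m^V \in \mathbb{R}^{d \times d'}$:

```python
import numpy as np

def encoder_layer(H, Wq, Wk, Wv, Wo, W0, b0, W1):
    """One encoder layer with a single attention head.
    H: (n+1, d') node embeddings h_i^{(l-1)}, depot included."""
    Q, K, V = H @ Wq.T, H @ Wk.T, H @ Wv.T           # eq. (3)
    U = Q @ K.T                                      # compatibilities, eq. (4)
    A = np.exp(U) / np.exp(U).sum(1, keepdims=True)  # softmax weights, eq. (5)
    MHA = (A @ V) @ Wo.T                             # eqs. (6)-(7), M = 1
    H_hat = np.tanh(H + MHA)                         # residual + tanh, eq. (8)
    FF = (W1 @ np.maximum(W0 @ H_hat.T + b0[:, None], 0.0)).T  # eq. (9)
    return np.tanh(H_hat + FF)                       # residual + tanh, eq. (10)
```

With Wq, Wk, Wv of shape (d, d'), Wo and W1 of shape (d', d), W0 of shape (d, d') and b0 of shape (d,), the layer maps (n+1, d') embeddings to (n+1, d') embeddings, so it can be stacked as in the 3-layer encoder above.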
This paper proposes a Neural Combinatorial Optimization approach to solving the Vehicle Routing Problem with Time Windows. It uses a policy gradient method to optimize an attention model, paired with a masking scheme that prevents unwanted actions during the policy rollout. Performance is compared with OR-Tools and LKH-3 solvers.
SP:2d246377df0afb9665f4ea9f2c510e9f48cc6d8c
Dense Gaussian Processes for Few-Shot Segmentation
1 INTRODUCTION. Image few-shot segmentation (FSS) of semantic classes (Shaban et al., 2017) has received increased attention in recent years. The aim is to segment novel query images based on only a handful of annotated training samples, usually referred to as the support set. The FSS method thus needs to extract information from the support set in order to accurately segment a given query image. The problem is highly challenging, since the query image may present radically different views, contexts, scenes, and objects than what is represented in the support set. The core component in any FSS framework is the mechanism that extracts information from the support set to guide the segmentation of the query image. However, the design of this module presents several challenges. First, it needs to aggregate detailed yet generalizable information from the support set, which requires a flexible representation. Second, the FSS method should effectively leverage larger support sets, achieving scalable segmentation performance as their size increases. While perhaps trivial at first glance, this has proved to be a major obstacle for many state-of-the-art methods, as visualized in Fig. 1. Third, the method is bound to be queried with appearances not included in the support set. To achieve robust predictions even in such common cases, the method needs to assess the relevance of the information in the support images in order to gracefully revert to, e.g., learned segmentation priors when necessary. We address the aforementioned challenges by densely aggregating information in the support set using Gaussian Processes (GPs). Specifically, we use a GP to learn a mapping between dense local deep feature vectors and their corresponding mask values. The mask values are assumed to have a jointly Gaussian distribution whose covariance is based on the similarity between the corresponding feature vectors.
This permits us to extract detailed relations from the support set , with the capability of modeling complex , non-linear mappings . As a non-parametric model , the GP further effectively benefits from additional support samples , since all given data is retained . As shown in Fig . 1 , the segmentation accuracy of our approach improves consistently with the number of support samples . Lastly , the predictive covariance from the GP provides a principled measure of the uncertainty based on the similarity with local features in the support set . Our FSS approach is learned end-to-end through episodic training , treating the GP as a layer in a neural network . This further enables us to learn the output space of the GP . To this end , we encode the given support masks with a neural network in order to achieve a multi-dimensional output representation . In order to generate the final masks , our decoder module employs the predicted mean query encodings , together with the covariance information . Our decoder is thus capable of reasoning about the uncertainty when fusing the predicted mask encodings with learned segmentation priors . Lastly , we further improve our FSS method by integrating dense GPs at multiple scales . We perform comprehensive experiments on two benchmarks : PASCAL-5i ( Shaban et al. , 2017 ) and COCO-20i ( Nguyen & Todorovic , 2019 ) . Our proposed DGPNet outperforms existing methods for 1-shot and 5-shot by a large margin , setting a new state-of-the-art on both benchmarks . When using the ResNet101 backbone , our DGPNet achieves an absolute gain of 14.9 for 5-shot segmentation on the challenging COCO-20i benchmark , compared to the best reported results in the literature . We further demonstrate the cross-dataset transfer capabilities of our DGPNet approach from COCO-20i to PASCAL and perform detailed ablative studies to probe the effectiveness of our contributions . 2 RELATED WORK . 
Few-Shot Segmentation The earliest work in few-shot segmentation ( FSS ) , by Shaban et al . ( 2017 ) , proposed a method for predicting the weights of a linear classifier based on the support set , which was further built upon in later works ( Siam et al. , 2019 ; Liu et al. , 2020a ; Boudiaf et al. , 2021 ) . Instead of learning the classifier directly , Rakelly et al . ( 2018 ) proposed to construct a global conditioning prototype from the support set and concatenate it to the query representation , with several subsequent works ( Dong & Xing , 2019 ; Zhang et al. , 2020 ; Wang et al. , 2019 ; Nguyen & Todorovic , 2019 ; Zhang et al. , 2019b ; Liu et al. , 2020c ; Azad et al. , 2021 ; Liu et al. , 2020b ; Xie et al. , 2021 ; Wang et al. , 2021 ) . A major limitation of these methods is the unimodality assumption . To alleviate this problem , Zhang et al . ( 2021 ) construct additional prototypes by a self-guided module , while Yang et al . ( 2020a ) ; Liu et al . ( 2020c ) ; Li et al . ( 2021 ) instead cluster multiple prototypes to create a richer representation . However , clustering introduces extra hyperparameters , such as the number of clusters , as well as optimization difficulties . In contrast , our method is not restricted in any such sense , and only requires us to choose an appropriate kernel . Some recent works consider pointwise correspondences between the support and query set . These works have mostly focused on attention or attention-like mechanisms ( Zhang et al. , 2019a ; Yang et al. , 2020b ; Hu et al. , 2019 ; Tian et al. , 2020 ; Wang et al. , 2020 ) . In contrast with these methods , we construct a principled posterior over functions , which greatly aids the decoder . Combining GPs and Neural Networks While early work focused on combining GPs and neural networks in the standard supervised classification setting ( Salakhutdinov & Hinton , 2009 ; Wilson et al. , 2016 ; Calandra et al. 
, 2016), there has recently been increased interest in utilizing Gaussian processes in the context of few-shot classification (Patacchiola et al., 2020; Snell & Zemel, 2021). Previous works employ the GP in the classification setting, as the final output layer of the network, and directly optimize proxies of either the predictive or the marginal log-likelihood. Note that the classification likelihood is non-Gaussian, and hence computing the exact posterior and marginal likelihood of the model becomes intractable. Here, we go beyond this limitation and propose an internal dense GP model of the support features, where the posterior predictive distribution is used as input to a CNN decoder. Moreover, this allows us to learn the output space of the GP to further increase its expressive power. To the best of our knowledge, we are the first to introduce a dense GP approach for the challenging dense prediction task of few-shot segmentation. 3 METHOD. 3.1 FEW-SHOT SEGMENTATION. Few-shot segmentation is a dense few-shot learning task (Shaban et al., 2017). The aim is to learn to segment objects from novel classes, given only a small set of annotated images. A single instance of this problem, referred to as an episode, comprises a small set of annotated samples, called the support set, and a set of samples on which prediction is to be made, the query set. Formally, we denote the support set as $\{(I_k^S, M_k^S)\}_{k=1}^K$, comprising $K$ image-mask pairs $I_k^S \in \mathbb{R}^{H_0 \times W_0 \times 3}$ and $M_k^S \in \{0,1\}^{H_0 \times W_0}$. A query image is denoted $I^Q \in \mathbb{R}^{H_0 \times W_0 \times 3}$, and the aim is to predict its corresponding segmentation mask $M^Q \in \{0,1\}^{H_0 \times W_0}$. To develop our approach, we first provide a general formulation for addressing the FSS problem, which applies to several recent methods, including prototype-based (Li et al., 2021; Liu et al., 2020a) and correlation-based (Tian et al., 2020; Wang et al., 2020) ones.
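The regression at the core of the dense GP is standard Gaussian-process regression from support features to mask encodings. A minimal sketch, with an RBF kernel as an assumed choice (the paper only requires picking an appropriate kernel) and a small noise term for numerical stability:

```python
import numpy as np

def gp_predict(Xs, Ys, Xq, length_scale=1.0, noise=1e-2):
    """Posterior mean and covariance of query outputs given support
    pairs (Xs, Ys) under a GP prior with an (assumed) RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length_scale**2)
    Kss = k(Xs, Xs) + noise * np.eye(len(Xs))
    Kqs, Kqq = k(Xq, Xs), k(Xq, Xq)
    mean = Kqs @ np.linalg.solve(Kss, Ys)                # posterior mean
    cov = Kqq - Kqs @ np.linalg.solve(Kss, Kqs.T)        # posterior covariance
    return mean, cov
```

The posterior mean plays the role of the predicted query mask encoding, and the posterior covariance is the similarity-based uncertainty that can be passed on to the decoder: queries far from all support features revert to the prior mean with high variance.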
Our formulation proceeds in three steps: feature extraction, few-shot learning, and prediction. In the first step, deep features are extracted from the given images, $x = F(I) \in \mathbb{R}^{H \times W \times D}$ (1). These features provide a more disentangled and invariant representation, which greatly aids learning from a limited number of samples. The main challenge in FSS, namely how to most effectively leverage the annotated support samples, is encapsulated in the second step. As a general approach, we consider a learner module $\Lambda$ that employs the support set in order to find a function $f$, which associates each query feature $x^Q \in \mathbb{R}^D$ with an output $y^Q \in \mathbb{R}^E$. The goal is to achieve an output $y^Q$ that is strongly correlated with the ground-truth query mask, allowing it to act as a strong cue in the final prediction. Formally, we express this general formulation as $f = \Lambda(\{x_k^S, M_k^S\}_k)$, $y^Q = f(x^Q)$ (2). The learner $\Lambda$ aggregates information in the support set $\{x_k^S, M_k^S\}_k$ in order to predict the function $f$, which is then applied to the query features in equation (2). In the final step of our formulation, the output of $f$ on the query set is decoded by a separate network as $\hat{M}^Q = U(y^Q, x^Q)$ to predict the segmentation $\hat{M}^Q$. The general formulation in equation (2) encapsulates several recent approaches to few-shot segmentation. In particular, prototype-based methods, for instance PANet (Wang et al., 2019), are recovered by letting $\Lambda$ represent a mask-pooling operation; the function $f$ then computes the cosine similarity between the pooled feature vector and the input query features. In general, the design of the learner $\Lambda$ represents the central problem in few-shot segmentation, since it is the module that extracts information from the support set. Next, we distinguish three key desirable properties of this module. 3.2 MOTIVATION .
As discussed above, the core component in few-shot segmentation is the few-shot learner $\Lambda$. Much research effort has therefore been devoted to its design (Nguyen & Todorovic, 2019; Liu et al., 2020a; Wang et al., 2020; Liu et al., 2020c; Yang et al., 2020a; Li et al., 2021). To motivate our approach, we first identify three important properties that the learner should possess. Flexibility of $f$: the intent in few-shot segmentation is to segment a wide range of classes unseen during training. The image feature distributions of different unseen classes are not necessarily linearly separable (Allen et al., 2019); prototypical few-shot learners, which are essentially linear classifiers, would fail in such scenarios. Instead, we need a mechanism that can learn and represent more complex functions $f$. Scalability in support set size $K$: an FSS method should effectively leverage additional support samples and thereby achieve substantially better accuracy and robustness for larger support sets. However, many prior works show little to no benefit in the 5-shot setting compared to 1-shot. As shown by Li et al. (2021) and Boudiaf et al. (2021), it is crucial that new information is incorporated into the model without averaging out useful cues. Uncertainty modeling: since only a small number of support samples are available in FSS, the network must regularly handle unseen appearances in the query images; for instance, the query may include novel backgrounds, scenes, and objects. Since the function $f$ predicted by the learner is not expected to generalize to such unseen scenarios, the network should instead utilize neighboring predictions or learned priors. However, this can only be achieved if $f$ models and communicates the uncertainty of its prediction to the decoder.
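For contrast, the prototype-based learner whose flexibility is questioned above ($\Lambda$ as mask pooling and $f$ as cosine similarity, in the spirit of PANet) reduces to a few lines; this sketch assumes support features given as (H, W, D) arrays with matching (H, W) masks:

```python
import numpy as np

def prototype_learner(support_feats, support_masks):
    """Lambda as mask pooling: average the masked support features into a
    single prototype; the returned f scores each query feature by cosine
    similarity, i.e. a linear classifier on normalized features."""
    feats = np.concatenate([x.reshape(-1, x.shape[-1]) for x in support_feats])
    masks = np.concatenate([m.reshape(-1) for m in support_masks])
    proto = (feats * masks[:, None]).sum(0) / masks.sum()   # mask pooling
    proto /= np.linalg.norm(proto)

    def f(xq):  # xq: (P, D) query features -> cosine similarity per location
        xq = xq / np.linalg.norm(xq, axis=-1, keepdims=True)
        return xq @ proto
    return f
```

Because $f$ is linear in the normalized query feature, it cannot separate classes whose feature distributions are not linearly separable, which is precisely the flexibility limitation this section points out.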
This paper proposes a special Gaussian process (GP), named dense GP, to model a mapping between dense local deep features and their corresponding mask values. Based on this dense GP, a few-shot segmentation method named DGPNet is proposed. The authors claim that DGPNet is novel in that it can be applied to situations where unseen classes are not linearly separable, and can also produce the uncertainty of its predictions. To support this, they conduct a series of experiments on PASCAL-5^i and COCO-20^i.
SP:951b5c2a6eba45d57baecfde6cbfbc732e1347ba
Dense Gaussian Processes for Few-Shot Segmentation
1 INTRODUCTION. Image few-shot segmentation (FSS) of semantic classes (Shaban et al., 2017) has received increased attention in recent years. The aim is to segment novel query images based on only a handful of annotated training samples, usually referred to as the support set. The FSS method thus needs to extract information from the support set in order to accurately segment a given query image. The problem is highly challenging, since the query image may present radically different views, contexts, scenes, and objects than what is represented in the support set. The core component in any FSS framework is the mechanism that extracts information from the support set to guide the segmentation of the query image. However, the design of this module presents several challenges. First, it needs to aggregate detailed yet generalizable information from the support set, which requires a flexible representation. Second, the FSS method should effectively leverage larger support sets, achieving scalable segmentation performance as their size increases. While perhaps trivial at first glance, this has proved to be a major obstacle for many state-of-the-art methods, as visualized in Fig. 1. Third, the method is bound to be queried with appearances not included in the support set. To achieve robust predictions even in such common cases, the method needs to assess the relevance of the information in the support images in order to gracefully revert to, e.g., learned segmentation priors when necessary. We address the aforementioned challenges by densely aggregating information in the support set using Gaussian Processes (GPs). Specifically, we use a GP to learn a mapping between dense local deep feature vectors and their corresponding mask values. The mask values are assumed to have a jointly Gaussian distribution whose covariance is based on the similarity between the corresponding feature vectors.
This permits us to extract detailed relations from the support set , with the capability of modeling complex , non-linear mappings . As a non-parametric model , the GP further effectively benefits from additional support samples , since all given data is retained . As shown in Fig . 1 , the segmentation accuracy of our approach improves consistently with the number of support samples . Lastly , the predictive covariance from the GP provides a principled measure of the uncertainty based on the similarity with local features in the support set . Our FSS approach is learned end-to-end through episodic training , treating the GP as a layer in a neural network . This further enables us to learn the output space of the GP . To this end , we encode the given support masks with a neural network in order to achieve a multi-dimensional output representation . In order to generate the final masks , our decoder module employs the predicted mean query encodings , together with the covariance information . Our decoder is thus capable of reasoning about the uncertainty when fusing the predicted mask encodings with learned segmentation priors . Lastly , we further improve our FSS method by integrating dense GPs at multiple scales . We perform comprehensive experiments on two benchmarks : PASCAL-5i ( Shaban et al. , 2017 ) and COCO-20i ( Nguyen & Todorovic , 2019 ) . Our proposed DGPNet outperforms existing methods for 1-shot and 5-shot by a large margin , setting a new state-of-the-art on both benchmarks . When using the ResNet101 backbone , our DGPNet achieves an absolute gain of 14.9 for 5-shot segmentation on the challenging COCO-20i benchmark , compared to the best reported results in the literature . We further demonstrate the cross-dataset transfer capabilities of our DGPNet approach from COCO-20i to PASCAL and perform detailed ablative studies to probe the effectiveness of our contributions . 2 RELATED WORK . 
Few-Shot Segmentation The earliest work in few-shot segmentation ( FSS ) , by Shaban et al . ( 2017 ) , proposed a method for predicting the weights of a linear classifier based on the support set , which was further built upon in later works ( Siam et al. , 2019 ; Liu et al. , 2020a ; Boudiaf et al. , 2021 ) . Instead of learning the classifier directly , Rakelly et al . ( 2018 ) proposed to construct a global conditioning prototype from the support set and concatenate it to the query representation , with several subsequent works ( Dong & Xing , 2019 ; Zhang et al. , 2020 ; Wang et al. , 2019 ; Nguyen & Todorovic , 2019 ; Zhang et al. , 2019b ; Liu et al. , 2020c ; Azad et al. , 2021 ; Liu et al. , 2020b ; Xie et al. , 2021 ; Wang et al. , 2021 ) . A major limitation of these methods is the unimodality assumption . To alleviate this problem , Zhang et al . ( 2021 ) construct additional prototypes by a self-guided module , while Yang et al . ( 2020a ) ; Liu et al . ( 2020c ) ; Li et al . ( 2021 ) instead cluster multiple prototypes to create a richer representation . However , clustering introduces extra hyperparameters , such as the number of clusters , as well as optimization difficulties . In contrast , our method is not restricted in any such sense , and only requires us to choose an appropriate kernel . Some recent works consider pointwise correspondences between the support and query set . These works have mostly focused on attention or attention-like mechanisms ( Zhang et al. , 2019a ; Yang et al. , 2020b ; Hu et al. , 2019 ; Tian et al. , 2020 ; Wang et al. , 2020 ) . In contrast with these methods , we construct a principled posterior over functions , which greatly aids the decoder . Combining GPs and Neural Networks While early work focused on combining GPs and neural networks in the standard supervised classification setting ( Salakhutdinov & Hinton , 2009 ; Wilson et al. , 2016 ; Calandra et al. 
, 2016), there has recently been increased interest in utilizing Gaussian processes in the context of few-shot classification (Patacchiola et al., 2020; Snell & Zemel, 2021). Previous works employ the GP in the classification setting, as the final output layer of the network, and directly optimize proxies of either the predictive or the marginal log-likelihood. Note that the classification likelihood is non-Gaussian, and hence computing the exact posterior and marginal likelihood of the model becomes intractable. Here, we go beyond this limitation and propose an internal dense GP model of the support features, where the posterior predictive distribution is used as input to a CNN decoder. Moreover, this allows us to learn the output space of the GP to further increase its expressive power. To the best of our knowledge, we are the first to introduce a dense GP approach for the challenging dense prediction task of few-shot segmentation. 3 METHOD. 3.1 FEW-SHOT SEGMENTATION. Few-shot segmentation is a dense few-shot learning task (Shaban et al., 2017). The aim is to learn to segment objects from novel classes, given only a small set of annotated images. A single instance of this problem, referred to as an episode, comprises a small set of annotated samples, called the support set, and a set of samples on which prediction is to be made, the query set. Formally, we denote the support set as $\{(I_k^S, M_k^S)\}_{k=1}^K$, comprising $K$ image-mask pairs $I_k^S \in \mathbb{R}^{H_0 \times W_0 \times 3}$ and $M_k^S \in \{0,1\}^{H_0 \times W_0}$. A query image is denoted $I^Q \in \mathbb{R}^{H_0 \times W_0 \times 3}$, and the aim is to predict its corresponding segmentation mask $M^Q \in \{0,1\}^{H_0 \times W_0}$. To develop our approach, we first provide a general formulation for addressing the FSS problem, which applies to several recent methods, including prototype-based (Li et al., 2021; Liu et al., 2020a) and correlation-based (Tian et al., 2020; Wang et al., 2020) ones.
Our formulation proceeds in three steps: feature extraction, few-shot learning, and prediction. In the first step, deep features are extracted from the given images, $x = F(I) \in \mathbb{R}^{H \times W \times D}$ (1). These features provide a more disentangled and invariant representation, which greatly aids learning from a limited number of samples. The main challenge in FSS, namely how to most effectively leverage the annotated support samples, is encapsulated in the second step. As a general approach, we consider a learner module $\Lambda$ that employs the support set in order to find a function $f$, which associates each query feature $x^Q \in \mathbb{R}^D$ with an output $y^Q \in \mathbb{R}^E$. The goal is to achieve an output $y^Q$ that is strongly correlated with the ground-truth query mask, allowing it to act as a strong cue in the final prediction. Formally, we express this general formulation as $f = \Lambda(\{x_k^S, M_k^S\}_k)$, $y^Q = f(x^Q)$ (2). The learner $\Lambda$ aggregates information in the support set $\{x_k^S, M_k^S\}_k$ in order to predict the function $f$, which is then applied to the query features in equation (2). In the final step of our formulation, the output of $f$ on the query set is decoded by a separate network as $\hat{M}^Q = U(y^Q, x^Q)$ to predict the segmentation $\hat{M}^Q$. The general formulation in equation (2) encapsulates several recent approaches to few-shot segmentation. In particular, prototype-based methods, for instance PANet (Wang et al., 2019), are recovered by letting $\Lambda$ represent a mask-pooling operation; the function $f$ then computes the cosine similarity between the pooled feature vector and the input query features. In general, the design of the learner $\Lambda$ represents the central problem in few-shot segmentation, since it is the module that extracts information from the support set. Next, we distinguish three key desirable properties of this module. 3.2 MOTIVATION .
As discussed above, the core component in few-shot segmentation is the few-shot learner $\Lambda$. Much research effort has therefore been devoted to its design (Nguyen & Todorovic, 2019; Liu et al., 2020a; Wang et al., 2020; Liu et al., 2020c; Yang et al., 2020a; Li et al., 2021). To motivate our approach, we first identify three important properties that the learner should possess. Flexibility of $f$: the intent in few-shot segmentation is to segment a wide range of classes unseen during training. The image feature distributions of different unseen classes are not necessarily linearly separable (Allen et al., 2019); prototypical few-shot learners, which are essentially linear classifiers, would fail in such scenarios. Instead, we need a mechanism that can learn and represent more complex functions $f$. Scalability in support set size $K$: an FSS method should effectively leverage additional support samples and thereby achieve substantially better accuracy and robustness for larger support sets. However, many prior works show little to no benefit in the 5-shot setting compared to 1-shot. As shown by Li et al. (2021) and Boudiaf et al. (2021), it is crucial that new information is incorporated into the model without averaging out useful cues. Uncertainty modeling: since only a small number of support samples are available in FSS, the network must regularly handle unseen appearances in the query images; for instance, the query may include novel backgrounds, scenes, and objects. Since the function $f$ predicted by the learner is not expected to generalize to such unseen scenarios, the network should instead utilize neighboring predictions or learned priors. However, this can only be achieved if $f$ models and communicates the uncertainty of its prediction to the decoder.
The authors propose a novel few-shot segmentation method that adopts dense Gaussian process (GP) regression to capture complex appearance distributions. To boost performance, they account for the uncertainty of the GP in the final segmentation. They exploit the end-to-end learning capabilities of the proposed method to learn a high-dimensional output space for the GP, and report state-of-the-art results on two public few-shot segmentation benchmarks.
SP:951b5c2a6eba45d57baecfde6cbfbc732e1347ba
Only tails matter: Average-Case Universality and Robustness in the Convex Regime
1 Introduction . The analysis of the average complexity of algorithms has a long history in computer science. Average-case complexity, for instance, drives many of the decisions made in cryptography (Bogdanov & Trevisan, 2006). Despite their relevance, average-case analyses are difficult to extend to other algorithms, partly because of the intrinsic issue of defining a typical distribution over problem instances. Recently, though, Pedregosa & Scieur (2020) derived a framework to systematically evaluate the complexity of first-order methods applied to distributions of quadratic minimization problems. This is done by relating the average-case convergence rate to the expected spectral distribution (e.s.d.) of the objective function's Hessian, a well-studied object in random matrix theory. Having access to this object in practice is a much stronger hypothesis than the worst-case analysis, which relies only on the edges of this distribution. Paquette et al. (2020) extended the average-case framework by introducing a noisy generative model for the problems. They further derived the average complexity of Nesterov's accelerated method (Nesterov, 2003) on a particular distribution, and showed strong concentration of the metrics around a limiting value as the dimension goes to infinity. Scieur & Pedregosa (2020) showed that for a strongly convex problem with eigenvalues supported on a contiguous interval, the optimal average-case complexity converges asymptotically to the worst-case rate of the Polyak heavy-ball method (Polyak, 1964). 1.1 Current limitations of the average-case analysis . When analyzing the state of the art of average-case methods on quadratic problems, we observe significant limitations that we address in this paper. First, little is known about the convergence rate on convex problems.
Moreover, optimal average-case algorithms require an exact estimate of the e.s.d. to guarantee an optimal convergence rate, and their behavior under an inexact e.s.d. is not known. Finally, the non-smooth case is also discussed in (Pedregosa & Scieur, 2020), but only briefly. Convex problems. The minimization of non-strongly convex problems is drastically slower than that of their strongly convex counterparts: gradient descent has a worst-case convergence rate of Θ(1/t) and Nesterov's method of Θ(1/t²). In the strongly convex case, the worst-case and average-case rates are asymptotically equal. However, little is known about optimal average-case rates for convex problems, or about the average-case complexity of classical methods such as gradient descent or Nesterov's method; see (Paquette et al., 2020). Exact estimation of the e.s.d. In (Pedregosa & Scieur, 2020), the theoretical study of optimal algorithms in the average case requires an exact estimate of the e.s.d. of the problem class. Such an estimate may be hard, or even impossible, to obtain in practical scenarios. Despite good empirical performance when the e.s.d. is estimated from empirical quantities, there are no theoretical guarantees on the performance of the method when the e.s.d. is poorly estimated. There is therefore a need to analyze the algorithm's performance under different notions of uncertainty on the spectrum. This allows a practitioner to choose the best algorithm for a practical problem, even with imperfect a priori information. Non-smooth. Pedregosa & Scieur (2020) briefly introduce average-case optimal rates on non-smooth problems when the e.s.d. is the Laguerre distribution e^{−λ}. In this paper, we extend the analysis to the generalized Laguerre distribution λ^α e^{−λ}, α > −1. 1.2 Contributions .
Our main contribution is a fine-grained analysis of the average-case complexity of convex quadratic problems: we show that a problem's complexity depends on the concentration of the eigenvalues of the e.s.d. around the edges of their support. From this perspective, we propose a family of algorithms that are optimal in the average case, analyze their robustness, and finally exhibit a universality result for Nesterov's method. More precisely, • (Optimal algorithms). In Section 3, we propose the Generalized Chebyshev Method (GCM, Algorithm 1), a family of algorithms whose parameters depend on the concentration of the e.s.d. around the edges of its support. If the parameters of GCM are set properly, the algorithm converges at an optimal average-case rate (Theorem 3 for smooth problems, Theorem 6 for non-smooth problems), a rate that we show is faster than that of worst-case optimal methods such as Nesterov acceleration. We show these rates to be representative of the practical performance of the algorithms in Fig. 6, and recover the classical worst-case rates as limits of the average-case ones (see Table 1). • (Robustness). Developing an optimal algorithm requires knowledge of the exact e.s.d. However, in practical scenarios we only have access to an approximation of it. In Theorem 2 in Section 4, we analyze the rate of GCM in the presence of such a mismatch. We also analyze the optimal average-case rates of distributions representing the smooth convex, non-smooth convex, and strongly convex settings and compare them with the worst-case rates (Table 1). • (Universality). Finally, in Theorem 4, we analyze the asymptotic average-case convergence rate of Nesterov's method. We show that its convergence rate is nearly optimal (up to a logarithmic factor) under some natural assumptions on the data, namely a concentration of eigenvalues around 0 similar to that of the Marchenko-Pastur measure.
This contributes to the theoretical understanding of the numerical efficiency of Nesterov's acceleration. 2 Average-Case Analysis . In this section, we recall the average-case analysis framework for random quadratic problems. The main result is Theorem 1, which relates the expected error to the expected spectral distribution and the residual polynomial.

Table 1: Comparison between function-value worst-case and average-case convergence. κ is the condition number in the smooth strongly convex case. In the smooth convex case, ξ > −1 is the concentration of eigenvalues around 0 (see Assumption 1), and in the non-smooth case we consider the distribution dµ ∝ λ^α e^{−λ}.

Regime          | Worst-case        | Average-case
Strongly convex | (1 − Θ(1/√κ))^t   | (1 − Θ(1/√κ))^t
Smooth convex   | 1/t²              | 1/t^{2ξ+4}
Convex          | 1/√t              | 1/t^{α+2}

The one-to-one correspondence between residual polynomials and first-order methods applied to quadratics will allow us to pose the problem of finding an optimal method as a best-approximation problem in the space of polynomials. We define a random quadratic problem: Problem 1. Let H ∈ R^{d×d} be a random symmetric positive-definite matrix independent of x⋆ ∈ R^d, a random vector that is the solution to the problem. We define the random quadratic minimization problem as min_{x∈R^d} { f(x) := ½ (x − x⋆)ᵀ H (x − x⋆) }. (OPT) We are interested in minimizing the expected errors: E[f(x_t) − f(x⋆)], the expected function-value gap, and E[‖∇f(x_t)‖²], the expected gradient norm, where x_t is the t-th iterate of a first-order method starting from x_0, and E is the expectation over the random variables H, x_0 and x⋆. The expectation we consider is over the problem and not over any randomness of the algorithm. In this paper, we consider the class of first-order methods (F.O.M.s) to minimize (OPT). Methods in this class construct the iterates x_t as x_t ∈ x_0 + span{∇f(x_0), . . . , ∇f(x_{t−1})}. (1) That is, x_t belongs to the span of previous gradients.
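The span condition (1) can be checked numerically for a concrete method. The sketch below (our own toy example, all names ours) runs a heavy-ball iteration on a random instance of (OPT) and verifies that x_t − x_0 lies in the span of the observed gradients:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
A = rng.standard_normal((d, d))
H = A @ A.T + np.eye(d)                 # random symmetric positive-definite Hessian
x_star = rng.standard_normal(d)
grad = lambda x: H @ (x - x_star)       # gradient of f in (OPT)

x0 = rng.standard_normal(d)
h, m = 0.02, 0.5                        # heavy-ball step size and momentum
xs, gs = [x0, x0 - h * grad(x0)], [grad(x0)]
for t in range(1, 5):
    gs.append(grad(xs[t]))
    xs.append(xs[t] - h * gs[t] + m * (xs[t] - xs[t - 1]))

# Check equation (1): x_t - x_0 lies in span{grad f(x_0), ..., grad f(x_{t-1})}
G = np.stack(gs, axis=1)                # d x t matrix of observed gradients
coef, *_ = np.linalg.lstsq(G, xs[-1] - x0, rcond=None)
residual = np.linalg.norm(G @ coef - (xs[-1] - x0))
```

The least-squares residual is numerically zero, confirming that the iterate difference lies in the gradient span; a quasi-Newton preconditioner would break this property.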
This class of algorithms includes, for instance, gradient descent and momentum, but not quasi-Newton methods, since the preconditioner could allow the iterates to leave the span. Furthermore, we will only consider oblivious methods, that is, methods in which the coefficients of the update are known in advance and do not depend on previous iterates. This leaves out methods such as conjugate gradient or methods with line search. From First-Order Methods to Polynomials. There is an intimate link between first-order methods and polynomials that simplifies the analysis of quadratic objectives. The next proposition shows that, through this link, we can assign to each optimization method a polynomial that determines its convergence. Following Fischer (1996), we say a polynomial P_t is residual if P_t(0) = 1. Proposition 1 (Hestenes et al., 1952). Let x_t be generated by a first-order method. Then there exists a residual polynomial P_t of degree t that verifies x_t − x⋆ = P_t(H)(x_0 − x⋆). (2) Remark 1. If the first-order method is furthermore a momentum method, i.e., x_{t+1} = x_t + h_t ∇f(x_t) + m_t (x_t − x_{t−1}), we can determine its polynomials by the recurrence P_0 = 1 and P_{t+1}(λ) = P_t(λ) + h_t λ P_t(λ) + m_t (P_t(λ) − P_{t−1}(λ)). We note that while most popular F.O.M.s can be posed as momentum methods, Nesterov's method cannot. A convenient way to collect statistics on the spectrum of a matrix is through its empirical spectral distribution. Definition 1 (Expected spectral distribution (e.s.d.)). Let H be a random matrix with eigenvalues {λ_1, . . . , λ_d}. The empirical spectral distribution of H, denoted µ_H, is the probability measure µ_H := (1/d) Σ_{i=1}^d δ_{λ_i}, (3) where δ_{λ_i} is the Dirac delta, a distribution equal to zero everywhere except at λ_i and whose integral over the entire real line is equal to one.
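Proposition 1 is easy to verify for gradient descent, whose residual polynomial is P_t(λ) = (1 − hλ)^t. A minimal sketch on a random instance (our own example, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))
H = A @ A.T + np.eye(d)           # random symmetric positive-definite Hessian
x_star = rng.standard_normal(d)

h, t = 0.01, 10                   # step size and number of iterations
x0 = rng.standard_normal(d)
x = x0.copy()
for _ in range(t):
    x = x - h * H @ (x - x_star)  # gradient step on f(x) = 1/2 (x - x*)^T H (x - x*)

# Evaluate P_t(H)(x0 - x*) with P_t(lambda) = (1 - h*lambda)^t via the eigendecomposition
eigvals, Q = np.linalg.eigh(H)
Pt_H = Q @ np.diag((1 - h * eigvals) ** t) @ Q.T
err = np.linalg.norm((x - x_star) - Pt_H @ (x0 - x_star))
```

The error is at machine precision, and P_t(0) = 1 holds by construction, so the polynomial is indeed residual.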
Since H is random, the empirical spectral distribution µ_H is a random variable in the space of measures. Its expectation over H is called the expected spectral distribution, and we denote it µ := E_H[µ_H]. (4) We can link the e.s.d. of H to the convergence of a first-order method on the distribution of H. In the following, we consider x_0 − x⋆ and H to be independent, with x_0 − x⋆ sampled isotropically. Theorem 1. Let x_t be generated by a first-order method associated with the polynomial P_t, let µ be the e.s.d. of H, and let E[(x_0 − x⋆)(x_0 − x⋆)ᵀ] = R² I for some constant R. Then the convergence metrics at time step t can be written as E[‖x_t − x⋆‖²] = R² ∫ P_t²(λ) dµ(λ), E[f(x_t) − f(x⋆)] = R² ∫ P_t²(λ) λ dµ(λ), and E[‖∇f(x_t)‖²] = R² ∫ P_t²(λ) λ² dµ(λ). (5) This shows that polynomials are a powerful abstraction, as they allow us to write all of our convergence metrics within the same framework. For simplicity, we set R² = 1 and refer directly to the polynomials associated with a given method. We refer to objective l as the one associated with the added λ^l term, i.e., the function value is objective l = 1. This framework is linked to the field of orthogonal polynomials by the next proposition: we construct a method that is optimal with respect to a given distribution through the family of orthogonal polynomials associated with it. Proposition 2 (Pedregosa & Scieur, 2020). Let P_t^l be defined as P_t^l := arg min_{P_t(0)=1} ∫ P_t²(λ) λ^l dν(λ). (6) Then (P_t^l) is the family of residual orthogonal polynomials w.r.t. λ^{l+1} dν. This proposition further implies that the optimal first-order method is a momentum method, as Favard's theorem (Marcellán & Álvarez-Nodarse, 2001) tells us that the orthogonal polynomials w.r.t. a given distribution satisfy a three-term recurrence, P_{t+1}(λ) = a_t P_t(λ) + b_t λ P_t(λ) + (1 − a_t) P_{t−1}(λ).
(7) Following Remark 1, the optimal method is derived from this recurrence as x_{t+1} = x_t + (a_t − 1)(x_t − x_{t−1}) + b_t ∇f(x_t). (8)
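The correspondence between the three-term recurrence (7) and the momentum update (8) can be checked numerically. In the sketch below (our own toy check, with arbitrary coefficients a_t, b_t and a hand-picked residual P_1), the iterates of (8) match P_t(H)(x_0 − x⋆) with P_t evaluated through (7):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
A = rng.standard_normal((d, d))
H = A @ A.T + np.eye(d)
x_star = rng.standard_normal(d)
grad = lambda x: H @ (x - x_star)
eigvals, Q = np.linalg.eigh(H)

T = 6
a = rng.uniform(0.5, 1.5, T)            # arbitrary recurrence coefficients a_t
b = rng.uniform(-0.1, 0.1, T)           # arbitrary recurrence coefficients b_t

c = -0.05                               # P_1(lambda) = 1 + c*lambda (residual)
P_prev, P_cur = np.ones(d), 1 + c * eigvals     # P_0, P_1 at the eigenvalues of H
x0 = rng.standard_normal(d)
x_prev, x_cur = x0, x0 + c * grad(x0)           # matching iterates x_0, x_1
for t in range(1, T):
    # recurrence (7) evaluated at the spectrum of H
    P_next = a[t] * P_cur + b[t] * eigvals * P_cur + (1 - a[t]) * P_prev
    # momentum update (8)
    x_next = x_cur + (a[t] - 1) * (x_cur - x_prev) + b[t] * grad(x_cur)
    P_prev, P_cur = P_cur, P_next
    x_prev, x_cur = x_cur, x_next

Pt_H = Q @ np.diag(P_cur) @ Q.T         # P_t(H)
err = np.linalg.norm((x_cur - x_star) - Pt_H @ (x0 - x_star))
```

Subtracting x⋆ from (8) shows e_{t+1} = a_t e_t + b_t H e_t + (1 − a_t) e_{t−1} for e_t = x_t − x⋆, which is exactly (7) applied to e_0, so the error above is at machine precision.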
-- EDIT: I have updated my scores in response to clarifications -- The problem of optimizing a convex quadratic function via first-order methods is considered. This is a well-understood problem from the worst-case point of view, and its complexity depends on the largest and smallest eigenvalues of the associated Hessian matrix. However, a nice recent average-case analysis by Pedregosa and Scieur gives the following result: if the spectral density of the Hessian converges to a "nice" probability measure (such as the Marchenko-Pastur law), then first-order methods that are tailored to this density may converge faster. In fact, such methods can be obtained from the three-term recurrence for the orthogonal polynomials of the measure. The present manuscript follows up on Pedregosa/Scieur, but departs from it in two specific ways. Firstly, it considers the case where the limiting spectral law is a beta distribution, and derives the corresponding iteration. Secondly, the authors analyze the performance of this iteration under "misspecification": that is, they allow for more general spectral measures than the beta, and only specify how these measures behave near the extremes. In spite of this, they are able to derive bounds of the same order as in the beta case. A few other theoretical results are presented: worst-case rates for the method derived from the beta distribution; an analysis of Nesterov's method under conditions on the tail of the spectral measure; and a result on Laguerre spectral measures. These results generalize other theorems obtained in the Pedregosa/Scieur paper. The theory is complemented by a small numerical study.
SP:43329ddc4ce5ca94bde0ed3df97a040d090a4b41
The paper considers the average convergence rate of first-order methods on a given ensemble of quadratic problems. The authors propose the Generalized Chebyshev Method (GCM) and show that it is optimal when the e.s.d. is a beta distribution. They also show that, as long as the behavior of the e.s.d. near the edges of its support is known, GCM still achieves the optimal rate. The authors finally consider Nesterov's method and derive its asymptotic average-case convergence rate.
Detecting Adversarial Examples Is (Nearly) As Hard As Classifying Them
1 INTRODUCTION . Building models that are robust to adversarial examples (Szegedy et al., 2014; Biggio et al., 2013) is a major challenge and open problem in machine learning. Due to the inherent difficulty of building robust classifiers, researchers have attempted to build techniques to at least detect adversarial examples, a weaker task that is largely considered easier than robust classification (Xu et al., 2018; Pang et al., 2021; Sheikholeslami et al., 2021). Yet, evaluating the robustness of empirical detector defenses is challenging. This is in part due to a lack of strong evaluation guidelines and benchmarks akin to those developed for robust classifiers (Carlini et al., 2019; Croce et al., 2020), as well as to a lack of long-standing comparative baselines such as adversarial training (Madry et al., 2018). To illustrate, consider the following (fictitious) claims about two defenses against adversarial examples on CIFAR-10: • defense A is a classifier that achieves robust accuracy of 90% under ℓ∞-perturbations bounded by ε = 4/255; • defense B also has a "rejection" option, and achieves robust accuracy of 90% under ℓ∞-perturbations bounded by ε = 8/255 (we say that defense B is robust on some example if it classifies that example correctly, and either rejects/detects or correctly classifies all perturbed examples at distance ε). Which of these two (empirical) claims are you more likely to believe to be correct? Defense A claims much higher robustness than the current best result achieved with adversarial training (Madry et al., 2018; Rebuffi et al., 2021), the only empirical defense against adversarial examples that has stood the test of time. Indeed, the state-of-the-art ℓ∞ robustness for ε = 4/255 on CIFAR-10 (without external data) is ≈ 79% (Rebuffi et al., 2021).
Thus, the claim of defense A would likely be met with some initial skepticism and heightened scrutiny, as could be expected for such a claimed breakthrough result. The claim of defense B is harder to assess, due to a lack of long-standing baselines for robust detectors (many detection defenses have been shown to be broken (Carlini & Wagner, 2017; Tramèr et al., 2020)). On one hand, detection of adversarial examples has largely been considered an easier task than classification (Xu et al., 2018; Pang et al., 2021; Sheikholeslami et al., 2021). On the other hand, defense B claims robustness to perturbations that are twice as large as defense A's (ε = 8/255 vs. ε = 4/255). In this paper, we show that the claims of defenses A and B are, in fact, equivalent (up to computational efficiency)! We prove a general hardness reduction between classification and detection of adversarial examples. Given a detector defense that achieves robust risk α for attacks at distance ε (under any metric), we show how to build an explicit but inefficient classifier that achieves robust risk α for classifying attacks at distance ε/2. The reverse implication also holds: a classifier robust at distance ε/2 implies an explicit but inefficient robust detector at distance ε. To the authors' knowledge, there is no known way of leveraging computational inefficiency to build more robust models. We should thus be as "surprised" by the claim made by defense B as by the claim made by defense A. Our reduction provides a way of assessing the plausibility of new robust detection claims by contrasting them with results from the more mature literature on robust classification. To illustrate, we revisit 14 published detection defenses across three datasets, and show that in 12/14 cases the defense's robust detection claims would imply an inefficient classifier with robustness far superior to the current state of the art.
Yet , none of these detection papers make the claim that their techniques should imply such a breakthrough in robust classification . Using our reduction , it is obvious that many detection defenses are claiming much stronger robustness than we believe feasible with current techniques . And indeed , many of these defenses were later shown to have overestimated their robustness ( Carlini & Wagner , 2017 ; Tramèr et al. , 2020 ) . Remarkably , we find that for certified defenses , the state-of-the-art results for provable robust classification and detection perfectly match the results implied by our reduction . For example , Sheikholeslami et al . ( 2021 ) recently proposed a certified detector on CIFAR-10 with provable robust error that is within 3 % of the provable error of the inefficient detector obtained by combining our result with the state-of-the-art robust classifier of Zhang et al . ( 2020a ) . In summary , we prove that giving classifiers access to a detection option does not help robustness ( or at least , not much ) . Our work provides , to our knowledge , the first example of a hardness reduction between different approaches for robust machine learning . As in the case of computational complexity , we believe that such reductions can be useful for identifying research questions or areas that are unlikely to bear fruit ( bar a significant breakthrough ) —so that the majority of the community ’ s efforts can be redirected elsewhere . On a technical level , our reduction exposes a natural connection between robustness and error correcting codes , which may be of independent interest . 2 HARDNESS REDUCTIONS BETWEEN ROBUST CLASSIFIERS AND DETECTORS . In this section , we prove our main result : a reduction between robust detectors and robust classifiers , and vice-versa . We first introduce some useful notation and define the ( robust ) risk of classifiers with and without a detection option . 2.1 PRELIMINARIES . 
We consider a classification task with a distribution D over examples x ∈ R^d with labels y ∈ [C]. A classifier is a function f : R^d → [C]. A detector is a classifier with an extra "rejection" or "detection" option ⊥ that indicates the absence of a classification. We assume for simplicity that classifiers and detectors are deterministic; our results can easily be extended to randomized functions as well. The binary indicator function 1{A} is 1 if and only if the predicate A is true. We first define a classifier's risk, i.e., its classification error on unperturbed samples.

Definition 1 (Risk). Let f : R^d → [C] ∪ {⊥} be a classifier (optionally with a detection output ⊥). The risk of f is the expected rate at which f fails to correctly classify a sample:

R(f) := E_{(x,y)∼D} [ 1{f(x) ≠ y} ]    (1)

Note that for a detector, rejecting an unperturbed example sampled from the distribution D is counted as an error. For classifiers without a rejection option, we define the robust risk as the risk on worst-case adversarial examples (Madry et al., 2018). Given an input x sampled from D, an adversarial example x̂ is constrained to lie within distance d(x, x̂) ≤ ε of x, where d is some distance measure.

Definition 2 (Robust risk). Let f : R^d → [C] be a classifier. The robust risk at distance ε is:

R^ε_adv(f) := E_{(x,y)∼D} [ max_{d(x,x̂)≤ε} 1{f(x̂) ≠ y} ]    (2)

Thus, a sample (x, y) is robustly classified if and only if every point within distance ε of x (including x itself) is correctly classified as y. For a detector (a classifier with an extra detection/rejection output), we analogously define the robust risk with detection; the classifier is now allowed to reject adversarial examples.

Definition 3 (Robust risk with detection). Let f : R^d → [C] ∪ {⊥} be a classifier with an extra detection output ⊥.
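As a concrete illustration of Definitions 1 and 2 (our own toy sketch, not the paper's code), the following estimates both risks for a one-dimensional threshold classifier, brute-forcing the inner max over the ε-ball on a grid:

```python
import numpy as np

# Toy 1-D task: the true label is y = 1 iff x >= 0, and the classifier
# is the matching threshold rule f(x) = int(x >= 0).
def f(x):
    return int(x >= 0)

def risk(f, samples):
    # Definition 1: the fraction of clean samples that f misclassifies.
    return float(np.mean([f(x) != y for x, y in samples]))

def robust_risk(f, samples, eps, grid=201):
    # Definition 2: a sample counts as an error if ANY point within
    # distance eps of x (here |x - x_hat| <= eps) is misclassified;
    # the inner max is brute-forced on a grid.
    errors = [any(f(xh) != y for xh in np.linspace(x - eps, x + eps, grid))
              for x, y in samples]
    return float(np.mean(errors))

samples = [(-1.0, 0), (-0.1, 0), (0.1, 1), (1.0, 1)]
print(risk(f, samples))               # 0.0: every clean point is correct
print(robust_risk(f, samples, 0.25))  # 0.5: the two points near the boundary are attackable
```

Note how the robust risk only exceeds the clean risk on points that sit within ε of the decision boundary, matching the intuition behind Definition 2.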
The robust risk with detection at distance ε is:

R^ε_adv-det(f) := E_{(x,y)∼D} [ max_{d(x,x̂)≤ε} 1{ f(x) ≠ y ∨ f(x̂) ∉ {y, ⊥} } ]    (3)

That is, a detector defense f is robust on a natural input x if and only if the defense classifies the natural input x correctly, and the defense either rejects or correctly classifies every perturbed input x̂ within distance ε of x. The requirement that the defense correctly classify natural examples eliminates pathological defenses that reject all inputs.

2.2 ROBUST DETECTION IMPLIES INEFFICIENT ROBUST CLASSIFICATION

We are now ready to introduce our main result, a reduction from a robust detector for adversarial examples at distance ε to an inefficient robust classifier at distance ε/2. We later prove that this reduction also holds in the reverse direction, thereby demonstrating the equivalence between robust detection and classification, up to computational hardness.

Theorem 4 (ε-robust detection implies inefficient ε/2-robust classification). Let d(·, ·) be an arbitrary metric. Let f be a detector that achieves risk R(f) = α and robust risk with detection R^ε_adv-det(f) = β. Then we can construct an explicit (but inefficient) classifier g that achieves risk R(g) ≤ α and robust risk R^{ε/2}_adv(g) ≤ β. The classifier g is constructed as follows on input x:

• Run the detector model y ← f(x). If the input is not rejected, i.e., y ≠ ⊥, then output the label y that was predicted by the detector.
• Otherwise, find an input x′ within distance ε/2 of x that is not rejected, i.e., d(x, x′) ≤ ε/2 and f(x′) ≠ ⊥. If such an input x′ exists, output the label y ← f(x′). Else, output a uniformly random label y ∈ [C].

An intuitive illustration of our construction, and of the proof of the theorem (see below), is given in Figure 1. Our construction can be viewed as an analog of minimum distance decoding in coding theory.
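The two-step construction of g from f can be sketched as follows for a toy one-dimensional detector that rejects inputs near its decision boundary. The detector, the ball discretization, and all names here are our own illustration under those assumptions, not the paper's implementation:

```python
import numpy as np

REJECT = None  # stands in for the detector's ⊥ output

def detector(x, margin=0.5):
    # Toy 1-D detector: reject anything within `margin` of the decision
    # boundary at 0, otherwise classify by sign.
    if abs(x) < margin:
        return REJECT
    return int(x >= 0)

def g(x, eps, grid=401):
    # The classifier built from the detector in Theorem 4.
    y = detector(x)
    if y is not REJECT:          # step 1: keep any non-rejected prediction
        return y
    # step 2: search the eps/2-ball around x for a non-rejected input
    for xp in np.linspace(x - eps / 2, x + eps / 2, grid):
        yp = detector(xp)
        if yp is not REJECT:
            return yp
    # step 3: everything in the ball is rejected, so guess uniformly
    return int(np.random.default_rng().integers(2))

print(g(2.0, eps=1.0))  # 1: not rejected, so classified directly
print(g(0.3, eps=2.0))  # 0: rejected, so g returns the label of the first
                        #    non-rejected point found in the eps/2-ball
```

The second step is exactly the part that makes g inefficient in general: for a neural-network detector, searching the ball is a non-convex optimization problem rather than a grid scan.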
We can view a clean data point sampled from D as a codeword, and an adversarial example x̂ as a noisy message with a certain number of errors (where the error magnitude is measured using an arbitrary metric on R^d rather than the Hamming distance that is typically used for error-correcting codes). A standard result in coding theory states that if a code can detect α errors, then it can correct α/2 errors. This result follows from a "ball-packing" argument: if α errors can be detected, then any two valid codewords must be at distance at least α from each other, and therefore α/2 errors can be corrected via minimum distance decoding.

Proof of Theorem 4. First, note that the natural accuracy of our constructed classifier g is at least as high as that of the detector f, since g always mimics the output of f whenever f does not reject an input sampled from D. Thus, R(g) ≤ R(f) = α. Now, for the sake of contradiction, consider an input (x, y) ∼ D for which the constructed classifier g is not robust at distance ε/2. By construction, this means that there exists some input x̂ within distance ε/2 of x that is misclassified, i.e., g(x̂) = ŷ ≠ y. We will show that the detector f is not robust with detection for x either (for attacks at distance up to ε). By definition of the classifier g, if g(x̂) = ŷ ≠ y then either:

• The detector f also misclassifies x̂, i.e., f(x̂) = ŷ. So f is not robust with detection for x at distance ε.
• There exists an input x′ within distance ε/2 of x̂ such that the detector f misclassifies x′, i.e., f(x′) = ŷ. Note that by the triangle inequality, d(x, x′) ≤ d(x, x̂) + d(x̂, x′) ≤ ε/2 + ε/2 = ε, and thus f is not robust with detection for x at distance ε.
• The detector f rejects all inputs x′ within distance ε/2 of x̂ (and thus g has output ŷ by sampling a label at random). Since d(x, x̂) ≤ ε/2, this implies that the detector also rejects the clean input x, i.e.
, f(x) = ⊥, and thus f is not robust with detection for x. In summary, whenever the constructed classifier g fails to robustly classify an input x up to distance ε/2, the detector f also fails to robustly classify x with detection up to distance ε. Taking expectations over the entire distribution D concludes the proof.

Note that the classifier g constructed in Theorem 4 is computationally inefficient. Indeed, the second step of the defense consists in finding a non-rejected input within some metric ball. If the original detector f is a non-convex function (e.g., a deep neural network), then this step amounts to solving an intractable non-convex optimization problem. Our reduction is thus typically not suitable for building a practical robust classifier. Instead, it demonstrates the existence of an inefficient but explicit robust classifier. We discuss the implications of this result more thoroughly in Section 3. A corollary of our reduction is that many "information-theoretic" results about robust classifiers can be directly extended to robust detectors. For example, Tsipras et al. (2019) prove that there exists a formal tradeoff between a classifier's clean accuracy and robust accuracy for certain natural tasks. Since their result applies to any classifier (including inefficient ones), combining it with our reduction implies that a similar accuracy-robustness tradeoff exists for detectors. More precisely, Tsipras et al. (2019) show that for certain classification tasks and suitable choices of parameters α, β, ε, any classifier g which achieves risk R(g) ≤ α must have robust risk at least R^ε_adv(g) ≥ β against ℓ∞-perturbations bounded by ε. By our reduction, this implies that any detector f with risk at most R(f) ≤ α must also have robust risk with detection at least R^{2ε}_adv-det(f) ≥ β against ℓ∞-perturbations bounded by 2ε.
Similar arguments can be applied to show, for instance, that the increased data complexity of robust generalization from Schmidt et al. (2018), or the tradeoff between robustness to multiple perturbation types from Tramèr & Boneh (2019), also apply to robust detectors. Our reduction does not apply to "computational" hardness results that have been shown for robust classification. For example, Garg et al. (2020) and Bubeck et al. (2018) exhibit ("unnatural") distributions where learning a robust classifier is computationally hard, under standard cryptographic assumptions. We cannot use Theorem 4 to conclude that learning a robust detector is hard for these distributions, since the existence of such a detector would only imply an inefficient robust classifier, which does not contradict the results of Garg et al. (2020) or Bubeck et al. (2018).
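The coding-theory fact invoked in the proof of Theorem 4 (a code that can detect α errors can correct α/2 of them via minimum distance decoding) can be checked concretely with a 3-bit repetition code; this small sketch and its names are our own:

```python
from itertools import combinations

CODEWORDS = {(0, 0, 0): 0, (1, 1, 1): 1}  # 3-bit repetition code, minimum distance 3

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def detect(word):
    # A code with minimum distance 3 detects up to 2 bit flips:
    # anything that is not a valid codeword is rejected (None plays ⊥).
    return CODEWORDS.get(word)

def correct(word):
    # ...but it only corrects up to 1 flip, via minimum-distance decoding.
    nearest = min(CODEWORDS, key=lambda c: hamming(c, word))
    return CODEWORDS[nearest]

def flips(word, k):
    # All corruptions of `word` with exactly k bit flips.
    for idx in combinations(range(len(word)), k):
        w = list(word)
        for i in idx:
            w[i] ^= 1
        yield tuple(w)

# Every 1- or 2-flip corruption of 000 is detected (rejected)...
assert all(detect(w) is None for k in (1, 2) for w in flips((0, 0, 0), k))
# ...but only 1-flip corruptions are guaranteed to decode back to 0;
# 2-flip corruptions land closer to 111 and decode incorrectly.
assert all(correct(w) == 0 for w in flips((0, 0, 0), 1))
assert all(correct(w) == 1 for w in flips((0, 0, 0), 2))
```

This mirrors the paper's ε vs. ε/2 gap: the detection radius is exactly twice the correction radius.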
This paper considers one important question: how to fairly compare adversarial robustness between detection-based defenses and classification-based defenses. From the theoretical perspective, the authors show that one can always (if inefficiently) construct a robust classifier from a robust detector, and vice versa, with equivalent robustness up to a factor of two in perturbation size. Based on this construction, they are able to transfer robustness claims between robust detectors and robust classifiers. Finally, they find that most existing detection defenses claim suspiciously high robust performance compared with state-of-the-art robust classifiers once this "transfer" criterion is applied.
Detecting Adversarial Examples Is (Nearly) As Hard As Classifying Them
1 INTRODUCTION

Building models that are robust to adversarial examples (Szegedy et al., 2014; Biggio et al., 2013) is a major challenge and open problem in machine learning. Due to the inherent difficulty of building robust classifiers, researchers have attempted to build techniques to at least detect adversarial examples, a weaker task that is largely considered easier than robust classification (Xu et al., 2018; Pang et al., 2021; Sheikholeslami et al., 2021). Yet, evaluating the robustness of empirical detector defenses is challenging. This is in part due to a lack of strong evaluation guidelines and benchmarks, akin to those developed for robust classifiers (Carlini et al., 2019; Croce et al., 2020), as well as to a lack of long-standing comparative baselines such as adversarial training (Madry et al., 2018). To illustrate, consider the following (fictitious) claims about two defenses against adversarial examples on CIFAR-10:

• defense A is a classifier that achieves robust accuracy of 90% under ℓ∞-perturbations bounded by ε = 4/255;
• defense B also has a "rejection" option, and achieves robust accuracy of 90% under ℓ∞-perturbations bounded by ε = 8/255 (we say that defense B is robust for some example if it classifies that example correctly, and either rejects/detects or correctly classifies all perturbed examples at distance ε).

Which of these two (empirical) claims are you more likely to believe to be correct? Defense A claims much higher robustness than the current best result achieved with adversarial training (Madry et al., 2018; Rebuffi et al., 2021), the only empirical defense against adversarial examples that has stood the test of time. Indeed, the state-of-the-art ℓ∞ robustness for ε = 4/255 on CIFAR-10 (without external data) is ≈ 79% (Rebuffi et al., 2021).
Adversarial examples are test-time attacks in which the input is modified by up to distance ε (under some metric), and the goal of adversarially robust learning is to achieve high (generalization) accuracy even under such attacks. One way to make predictions is to always output a label. Another way is to "abstain/detect" when the learner thinks the input is not clean and has been perturbed. Performance in the detection model is evaluated by counting detected perturbed inputs as "correctly classified". The paper asks a very natural question: is it easier to learn when detection/abstention is allowed or not? The main result of the paper is very clean: for any metric d and any ε, the existence of a learner that achieves accuracy c in the detection model under 2ε perturbations is (information-theoretically) equivalent to the existence of a classifier in the no-detection model with accuracy c under ε perturbations. The proof is "constructive" but not "efficiently constructive": given a classifier in either of the two settings above, the paper shows a rather simple (but smart) way of constructing another classifier (with the parameters stated above) in the other model. The paper then uses this connection to revisit the results of quite a few papers from the literature that have claimed defenses using detection as their key idea. The paper observes that the bounds that (many of) those papers claim would imply classifiers with no detection/abstain option that beat the state-of-the-art adversarially robust classifiers. The paper cautiously claims that this indicates that the defenses of those papers are not actually secure, but rather merely "not broken" under the simple attacks tried by their authors.
Gradient Matching for Domain Generalization
1 INTRODUCTION

The goal of domain generalization is to train models that perform well on unseen, out-of-distribution data, which is crucial in practice for deploying models in the wild. This seemingly difficult task is made possible by the presence of multiple distributions/domains at train time. As we have seen in past work (Arjovsky et al., 2019; Gulrajani and Lopez-Paz, 2020; Ganin et al., 2016), a key aspect of domain generalization is to learn from features that remain invariant across multiple domains, while ignoring those that are spuriously correlated to label information (as defined in Torralba and Efros (2011); Stock and Cisse (2017)). Consider, for example, a model that is built to distinguish between cows and camels using photos collected in nature under different climates. Since CNNs are known to have a bias towards texture (Geirhos et al., 2018; Brendel and Bethge, 2019), if we simply try to minimize the average loss across different domains, the classifier is prone to spuriously correlate "cow" with grass and "camel" with desert, and to predict the species using only the background. Such a classifier can be rendered useless when the animals are placed indoors or in a zoo. However, if the model could recognize that, while the landscapes change with climate, the biological characteristics of the animals (e.g., humps, neck lengths) remain invariant, and use those features to determine the species, we would have a much better chance at generalizing to unseen domains. Similar intuitions have already motivated many approaches that consider learning "invariances" across domains as the main challenge of domain generalization. Typically, much of this work focuses on learning invariant representations directly by removing the domain information (Ganin et al., 2016; Sun and Saenko, 2016; Li et al., 2018).

∗Work done during an internship at Facebook AI Research. †Now at Zoom.
In this work, we propose an inter-domain gradient matching (IDGM) objective. Instead of learning invariant features by matching the distributions of representations from different domains, our approach does so by encouraging consistent gradient directions across domains. Specifically, our IDGM objective augments the loss with an auxiliary term that maximizes the gradient inner product between domains, which encourages alignment between the domain-specific gradients. By simultaneously minimizing the loss and matching the gradients, IDGM encourages the optimization paths to be the same for all domains, favouring invariant predictions. Figure 1 illustrates a motivating example described in Section 3.2: given two domains, each containing one invariant feature (orange cross) and one spurious feature (yellow and red cross), empirical risk minimization (ERM) minimizes the average loss between these domains at the cost of learning spurious features only, while IDGM aligns the gradient directions and is therefore able to focus on the invariant feature. While the IDGM objective achieves the desirable learning dynamics in theory, naive optimization of the objective by gradient descent is computationally costly due to the second-order derivatives. Leveraging the theoretical analysis of Reptile, a meta-learning algorithm (Nichol et al., 2018), we propose to approximate the gradients of IDGM using a simple first-order algorithm, which we name Fish. Fish is simple to implement, computationally efficient and, as we show in our experiments, functionally similar to direct optimization of IDGM. Our contribution is a simple but effective algorithm for domain generalization, which exhibits state-of-the-art performance on 13 datasets from the recent domain generalization benchmarks WILDS (Koh et al., 2020) and DOMAINBED (Gulrajani and Lopez-Paz, 2020).
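A first-order update in the spirit of Fish can be sketched for a linear model as follows. This is our own minimal illustration under assumed data, learning rates, and names, not the authors' code: the inner loop takes one gradient step per domain on a clone of the weights, and the outer step moves the weights toward the clone, which (following the Reptile analysis) implicitly rewards gradient directions that agree across domains:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(w, X, y):
    # Gradient of the mean squared error of a linear model y ≈ X @ w.
    return X.T @ (X @ w - y) / len(y)

def fish_step(w, domains, inner_lr=0.05, meta_lr=0.5):
    # One outer iteration: sequential gradient steps over the domains on a
    # clone w_tilde, then move w toward w_tilde (a Reptile-style update).
    w_tilde = w.copy()
    for i in rng.permutation(len(domains)):
        X, y = domains[i]
        w_tilde -= inner_lr * grad(w_tilde, X, y)
    return w + meta_lr * (w_tilde - w)

# Two domains sharing an invariant feature x0 (the label is y = x0); the
# second feature is spurious: its correlation with the label flips sign.
def make_domain(sign, n=200):
    x0 = rng.normal(size=n)
    x1 = sign * x0 + rng.normal(size=n)
    return np.stack([x0, x1], axis=1), x0

domains = [make_domain(+1), make_domain(-1)]
w = np.zeros(2)
for _ in range(500):
    w = fish_step(w, domains)
# The learned weights rely on the invariant feature: |w[0]| >> |w[1]|.
```

Because both domains share a zero-loss solution on the invariant feature, the aligned updates converge there, whereas the spurious feature receives conflicting gradients from the two domains.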
The strong performance of our method on a variety of datasets demonstrates that it is broadly applicable across different applications/subgenres of domain generalization tasks. We also perform a detailed analysis in Section 4.4 to explain the effectiveness of our proposed algorithm.

2 RELATED WORK

Domain Generalization. In domain generalization, the training data is sampled from one or many source domains, while the test data is sampled from a new target domain. We now discuss the five main families of approaches to domain generalization:

1. Distributional Robustness (DRO): DRO approaches minimize the worst-case loss over a set of data distributions constructed from the training domains. Rojas-Carulla et al. (2015) proposed DRO to address covariate shift (Gretton et al., 2009a;b), where P(Y|X) remains constant across domains but P(X) changes. Later work also studied subpopulation shift, where the train and test distributions are mixtures of the same domains, but the mixture weights change between train and test (Hu et al., 2018; Sagawa et al., 2019).

2. Domain-invariant representation learning: This family of approaches aims at learning high-level features that make domains statistically indistinguishable; prediction is then based on these features only. The principle is motivated by a generalization error bound for unsupervised domain adaptation (Ben-David et al., 2010; Ganin et al., 2016), but the approach readily applies to domain generalization (Gulrajani and Lopez-Paz, 2020; Koh et al., 2020). Algorithms include penalising the domain-predictive power of the model (Ganin et al., 2016; Wang et al., 2019; Huang et al., 2020), aligning domains through a contrastive loss (Motiian et al., 2017), matching the mean and variance of feature distributions across domains (Sun and Saenko, 2016), learning useful representations by solving Jigsaw puzzles (Carlucci et al.
, 2019), using the maximum mean discrepancy to match the feature distributions (Li et al., 2018b), or introducing training constraints across domains using a mixup formulation (Yan et al., 2020).

3. Invariant Risk Minimization (IRM): IRM, proposed by Arjovsky et al. (2019), learns an intermediate representation such that the optimal classifiers (on top of this representation) of all domains are the same. The motivation is to exploit invariant causal effects between domains while reducing the effect of domain-specific spurious correlations. From an optimization perspective, when IRM reaches its optimum, all the gradients (for the linear classifier) have to be zero. This is why IRM's solution will not deviate from ERM when ERM is optimal for every domain, which is not the case for our proposed IDGM objective due to the gradient inner product term.

4. Data augmentation: More recently, approaches that simulate unseen domains through specific types of data augmentation/normalization have been gaining traction. This includes work such as Zhou et al. (2020); Volpi and Murino (2019); Ilse et al. (2021), as well as Seo et al. (2019), which utilises ensemble learning.

5. Gradient alignment: Two concurrent works, Koyama and Yamaguchi (2021) and Parascandolo et al. (2021), utilise a similar gradient-alignment principle for domain generalization. Koyama and Yamaguchi (2021) propose IGA, which learns invariant features by minimizing the variance of inter-domain gradients. The key difference between IGA and our objective is that IGA is completely identical to ERM when ERM is the optimal solution on every training domain, since the variances of the gradients will be zero. While they achieve the best performance on the training set, both IGA and ERM could, in some cases, completely fail when generalizing to unseen domains (see Section 3.2 for such an example).
Our method, on the contrary, biases towards non-ERM solutions as long as the gradients are aligned, and is therefore able to avoid this issue. Parascandolo et al. (2021), on the other hand, propose to mask out the gradient components that have opposite signs across domains. Unlike their work, which prunes gradients that are inconsistent, our approach actively encourages gradients from different domains to be consistent by maximizing the gradient inner product. Additionally, Lopez-Paz and Ranzato (2017) also apply gradient alignment, but in the continual learning setting, where it is used to determine whether a gradient update will increase the loss of previous tasks. Apart from these algorithms tailored for domain generalization, a well-studied baseline in this area is ERM, which simply minimizes the average loss over training domains. Using vanilla ERM is theoretically unfounded (Hashimoto et al., 2018; Blodgett et al., 2016; Tatman, 2017), since ERM is guaranteed to work only when train and test distributions match. Nonetheless, recent benchmarks suggest that ERM obtains strong performance in practice, in many cases surpassing domain generalization algorithms (Gulrajani and Lopez-Paz, 2020; Koh et al., 2020). Our goal is to fill this gap, using an algorithm significantly simpler than previous approaches.

Connections to meta-learning. There are close connections between meta-learning (Thrun and Pratt, 1998) and (multi-source) domain adaptation. In fact, several works in domain generalization are inspired by meta-learning principles, such as Li et al. (2018a); Balaji et al. (2018); Li et al. (2019); Dou et al. (2019). Specifically, Li et al.
(2020) also propose to adapt Reptile for domain generalization tasks; however, they study their method in the sequential learning setting, whereas our method can be trained on all domains and therefore learns faster, especially when the number of domains is large. Ren et al. (2018) also leverage the gradient inner product in meta-learning, where it is used to determine the importance weights of training examples. We discuss the connection between our proposed algorithm and meta-learning in more detail in Appendix A.1. Note that our proposed algorithm Fish is similar to the Mean Teacher method (Tarvainen and Valpola, 2017), where a teacher model (equivalent to θ in Algorithm 1) is computed using a moving average of the student model (equivalent to θ̃ in Algorithm 1).

3 METHODOLOGY

3.1 GOALS

Consider a training dataset $\mathcal{D}_{tr}$ consisting of $S$ domains, $\mathcal{D}_{tr} = \{D_1, \cdots, D_S\}$, where each domain $s$ is characterized by a dataset $D_s := \{(x_i^s, y_i^s)\}_{i=1}^{n_s}$ containing data drawn i.i.d. from some probability distribution. Also consider a test dataset $\mathcal{D}_{te}$ consisting of $T$ domains, $\mathcal{D}_{te} = \{D_{S+1}, \cdots, D_{S+T}\}$, where $\mathcal{D}_{tr} \cap \mathcal{D}_{te} = \emptyset$. The goal of domain generalization is to train a model with weights $\theta$ that generalizes well to the test dataset $\mathcal{D}_{te}$:

$\arg\min_{\theta} \ \mathbb{E}_{D \sim \mathcal{D}_{te}} \, \mathbb{E}_{(x,y) \sim D} \left[ \ell((x, y); \theta) \right], \quad (1)$

where $\ell((x, y); \theta)$ is the loss of model $\theta$ evaluated on $(x, y)$. A naive approach is to apply ERM, which simply minimizes the average loss on $\mathcal{D}_{tr}$, ignoring the discrepancy between train and test domains:

$\mathcal{L}_{erm}(\mathcal{D}_{tr}; \theta) = \mathbb{E}_{D \sim \mathcal{D}_{tr}} \, \mathbb{E}_{(x,y) \sim D} \left[ \ell((x, y); \theta) \right]. \quad (2)$

The ERM objective does not exploit the invariance across different domains in $\mathcal{D}_{tr}$ and can perform arbitrarily poorly on test data. We demonstrate this effect with the following simple linear example.
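As a concrete reading of Eq. (2), the following sketch minimizes the average per-domain loss of a linear model by gradient descent. The data, model, and hyperparameters here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two illustrative training domains sharing the same linear relationship
# y = x @ w_true + noise (so ERM succeeds here; the paper's point is that
# it can fail when domains carry spurious features).
w_true = np.array([1.0, -2.0])
domains = []
for _ in range(2):
    x = rng.normal(size=(64, 2))
    y = x @ w_true + 0.1 * rng.normal(size=64)
    domains.append((x, y))

def erm_loss(w, domains):
    # Eq. (2): average the per-domain mean squared error.
    return np.mean([np.mean((x @ w - y) ** 2) for x, y in domains])

# Plain gradient descent on the ERM objective.
w = np.zeros(2)
for _ in range(200):
    grad = np.mean([2 * x.T @ (x @ w - y) / len(y) for x, y in domains], axis=0)
    w -= 0.1 * grad

print(np.round(w, 2))  # close to w_true on this toy data
```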
This work tackles the problem of domain generalisation in the multi-source setting. The main claim of the paper is that maximising the inner product between gradients from different domains leads to better learning of domain-invariant features. The authors provide a meta-learning-inspired algorithm, Fish, to approximate the second-order derivatives. Results on several domain generalisation datasets are shown.
Gradient Matching for Domain Generalization
1 INTRODUCTION

The goal of domain generalization is to train models that perform well on unseen, out-of-distribution data, which is crucial in practice for deploying models in the wild. This seemingly difficult task is made possible by the presence of multiple distributions/domains at train time. As past work has shown (Arjovsky et al., 2019; Gulrajani and Lopez-Paz, 2020; Ganin et al., 2016), a key aspect of domain generalization is to learn from features that remain invariant across multiple domains, while ignoring those that are spuriously correlated with label information (as defined in Torralba and Efros (2011); Stock and Cisse (2017)). Consider, for example, a model built to distinguish between cows and camels using photos collected in nature under different climates. Since CNNs are known to have a bias towards texture (Geirhos et al., 2018; Brendel and Bethge, 2019), if we simply try to minimize the average loss across different domains, the classifier is prone to spuriously correlate "cow" with grass and "camel" with desert, and to predict the species using only the background. Such a classifier can be rendered useless when the animals are placed indoors or in a zoo. However, if the model could recognize that while the landscapes change with climate, the biological characteristics of the animals (e.g. humps, neck lengths) remain invariant, and use those features to determine the species, we would have a much better chance of generalizing to unseen domains. Similar intuitions have already motivated many approaches that consider learning "invariances" across domains as the main challenge of domain generalization. Typically, many of these works focus on learning invariant representations directly by removing the domain information (Ganin et al., 2016; Sun and Saenko, 2016; Li et al., 2018). (∗Work done during an internship at Facebook AI Research. †Now at Zoom.)
In this work, we propose an inter-domain gradient matching (IDGM) objective. Instead of learning invariant features by matching the distributions of representations from different domains, our approach does so by encouraging consistent gradient directions across domains. Specifically, the IDGM objective augments the loss with an auxiliary term that maximizes the gradient inner product between domains, which encourages alignment between the domain-specific gradients. By simultaneously minimizing the loss and matching the gradients, IDGM encourages the optimization paths to be the same for all domains, favouring invariant predictions. Figure 1 illustrates a motivating example described in Section 3.2: given two domains, each contains one invariant feature (orange cross) and one spurious feature (yellow and red crosses). While empirical risk minimization (ERM) minimizes the average loss between these domains at the cost of learning only the spurious features, IDGM aligns the gradient directions and is therefore able to focus on the invariant feature. While the IDGM objective achieves the desirable learning dynamics in theory, naive optimization of the objective by gradient descent is computationally costly due to its second-order derivatives. Leveraging the theoretical analysis of Reptile, a meta-learning algorithm (Nichol et al., 2018), we propose to approximate the gradients of IDGM using a simple first-order algorithm, which we name Fish. Fish is simple to implement, computationally efficient and, as we show in our experiments, functionally similar to direct optimization of IDGM. Our contribution is a simple but effective algorithm for domain generalization, which exhibits state-of-the-art performance on 13 datasets from the recent domain generalization benchmarks WILDS (Koh et al., 2020) and DOMAINBED (Gulrajani and Lopez-Paz, 2020).
This paper proposes inter-domain gradient matching for domain generalization. The authors also approximate the proposed objective with a simple first-order algorithm to avoid costly second-order computations. The performance on WILDS and DOMAINBED appears better than the ERM algorithm.
Learning to Efficiently Sample from Diffusion Probabilistic Models
1 INTRODUCTION

Denoising Diffusion Probabilistic Models (DDPMs) have emerged as a powerful class of generative models (Sohl-Dickstein et al., 2015; Ho et al., 2020). DDPMs model the data distribution through an iterative denoising process, and have been applied successfully to a variety of applications, including unconditional image generation (Song & Ermon, 2019; Ho et al., 2020; Song et al., 2021; Nichol & Dhariwal, 2021), shape generation (Cai et al., 2020), text-to-speech (Chen et al., 2021; Kong et al., 2020) and single-image super-resolution (Saharia et al., 2021; Li et al., 2021). DDPMs are easy to train, featuring a simple denoising objective (Ho et al., 2020) with noise schedules that successfully transfer across different models and datasets. This contrasts with Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), which require an inner-outer-loop optimization procedure that often entails instability and careful hyperparameter tuning. DDPMs also admit a simple non-autoregressive inference process; this contrasts with autoregressive models, whose computational costs on high-dimensional data are often prohibitive. The DDPM inference process starts with samples from the corresponding prior noise distribution (e.g., a standard Gaussian), and iteratively denoises them under the fixed noise schedule. However, DDPMs often need hundreds to thousands of denoising steps (each involving a feedforward pass of a large neural network) to achieve strong results. While this process is still much faster than autoregressive models, it is often computationally prohibitive, especially when modeling high-dimensional data. There has been much recent work focused on improving the sampling speed of DDPMs. WaveGrad (Chen et al.
, 2021) introduced a manually crafted schedule requiring only 6 refinement steps; however, this schedule seems to be applicable only to the vocoding task, where there is a very strong conditioning signal. Denoising Diffusion Implicit Models (DDIMs) (Song et al., 2020) accelerate sampling from pre-trained DDPMs by relying on a family of non-Markovian processes; they accelerate the generative process by taking multiple steps of the diffusion process at once. However, DDIMs sacrifice the ability to compute log-likelihoods. Nichol & Dhariwal (2021) also explored the use of ancestral sampling with a subsequence of the original denoising steps, trying both a uniform stride and other hand-crafted strides. San-Roman et al. (2021) improve few-step sampling further by training a separate model, after training a DDPM, to estimate the level of noise, and by modifying inference to dynamically adjust the noise schedule at every step to match the predicted noise level. All these fast-sampling techniques rely on a key property of DDPMs: the decoupling between the training and inference schedules. The training schedule need not be the same as the inference schedule; e.g., a diffusion model trained with 1000 steps may actually use only 10 steps during inference. This decoupling characteristic is typically not found in other generative models. In past work, the choice of inference schedule was often treated as a hyperparameter selection problem, set via intuition or extensive hyperparameter exploration (Chen et al., 2021). In this work, we view the choice of the timesteps of the inference schedule (which we call an inference path) as an independent optimization problem, wherein we attempt to learn the best schedule. Our approach relies on the observation that we can solve this optimization problem with dynamic programming.
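To make the dynamic-programming idea concrete: because the objective decomposes into a sum of per-transition costs, a least-cost path with a fixed number of steps can be found by tabulating best sub-path costs. The following self-contained sketch solves that generic problem on a toy cost matrix; it is an illustration of the principle, not the paper's exact Algorithm 1:

```python
import numpy as np

def k_step_shortest_path(cost, K, start, end):
    """Least-cost path with exactly K edges, via dynamic programming.

    cost[i, j] is the cost of an edge from node i to node j. Because the
    total cost decomposes as a sum of edge costs (like an ELBO decomposing
    into KL terms), best sub-path costs can be memoized in a table.
    """
    n = len(cost)
    best = np.full((K + 1, n), np.inf)
    prev = np.full((K + 1, n), -1, dtype=int)
    best[0, start] = 0.0
    for k in range(1, K + 1):
        for j in range(n):
            cands = best[k - 1] + cost[:, j]  # extend every (k-1)-edge path by one edge
            prev[k, j] = int(np.argmin(cands))
            best[k, j] = cands[prev[k, j]]
    # Walk the predecessor table backwards from the end node.
    path, node = [end], end
    for k in range(K, 0, -1):
        node = prev[k, node]
        path.append(int(node))
    return best[K, end], path[::-1]

# Toy graph: going 0 -> 1 -> 2 (cost 2.0) beats the direct edge 0 -> 2 (cost 10.0).
cost = np.array([[np.inf, 1.0, 10.0],
                 [np.inf, np.inf, 1.0],
                 [np.inf, np.inf, np.inf]])
total, path = k_step_shortest_path(cost, 2, 0, 2)
print(total, path)  # -> 2.0 [0, 1, 2]
```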
Given a fixed budget of K refinement steps and a pre-trained DDPM, we find the set of timesteps that maximizes the corresponding evidence lower bound (ELBO). As an optimization objective, the ELBO has a key decomposability property: the total ELBO is the sum of individual KL terms, and for any two inference paths, if the timesteps (s, t) occur contiguously in both, they share a common KL term, therefore admitting memoization (see Section 4.1 for a precise definition). Our main contributions are the following:

• We introduce a method that finds the likelihood-optimal inference paths with a simple dynamic programming algorithm, for all possible computation budgets of K refinement steps. The algorithm searches over T > K timesteps, requiring only O(T) neural network forward passes. It needs to be applied only once to a pre-trained DDPM, does not require training or retraining a DDPM, and is applicable to both time-discrete and time-continuous DDPMs.

• We experiment with DDPM models from prior work. On both Lsimple CIFAR10 and Lhybrid ImageNet 64x64, we discover schedules which require only 32 refinement steps, yet sacrifice only 0.1 bits per dimension compared to their original counterparts with 1,000 and 4,000 steps, respectively.

• We show that our method can be applied to any decomposable set of objectives. In particular, optimizing a reweighted ELBO can favourably bias our algorithm towards solutions with better FID scores; we find that optimizing the exact variational lower bound may lead to worse FID scores, which is consistent with prior work on unconditional image generation.

2 BACKGROUND ON DENOISING DIFFUSION PROBABILISTIC MODELS

Denoising Diffusion Probabilistic Models (DDPMs) (Ho et al., 2020; Sohl-Dickstein et al., 2015) are defined in terms of a forward Markovian diffusion process q and a learned reverse process pθ.
The forward diffusion process gradually adds Gaussian noise to a data point $x_0$ through $T$ iterations:

$q(x_{1:T} \mid x_0) = \prod_{t=1}^{T} q(x_t \mid x_{t-1}), \quad (1)$

$q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t \mid \sqrt{\alpha_t}\, x_{t-1}, (1 - \alpha_t) I\right), \quad (2)$

where the scalar parameters $\alpha_{1:T}$ determine the variance of the noise added at each diffusion step, subject to $0 < \alpha_t < 1$. The learned reverse process aims to model $q(x_0)$ by inverting the forward process, gradually removing noise from the signal starting from pure Gaussian noise $x_T$:

$p(x_T) = \mathcal{N}(x_T \mid 0, I), \quad (3)$

$p_\theta(x_{0:T}) = p(x_T) \prod_{t=1}^{T} p_\theta(x_{t-1} \mid x_t), \quad (4)$

$p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\!\left(x_{t-1} \mid \mu_\theta(x_t, t), \sigma_t^2 I\right). \quad (5)$

The parameters of the reverse process can be optimized by maximizing the following variational lower bound on the training set:

$\mathbb{E}_q \log p(x_0) \geq \mathbb{E}_q \Big[ \log p_\theta(x_0 \mid x_1) - \sum_{t=2}^{T} D_{KL}\big( q(x_{t-1} \mid x_t, x_0) \,\|\, p_\theta(x_{t-1} \mid x_t) \big) - L_T(x_0) \Big], \quad (6)$

where $L_T(x_0) = D_{KL}\big( q(x_T \mid x_0) \,\|\, p(x_T) \big)$. Nichol & Dhariwal (2021) have demonstrated that training DDPMs by maximizing the ELBO yields competitive log-likelihood scores on both CIFAR10 and ImageNet 64×64, achieving 2.94 and 3.53 bits per dimension respectively. Two notable properties of the Gaussian diffusion process that help formulate DDPMs tractably and efficiently are:

$q(x_t \mid x_0) = \mathcal{N}\!\left(x_t \mid \sqrt{\gamma_t}\, x_0, (1 - \gamma_t) I\right), \ \text{where } \gamma_t = \prod_{i=1}^{t} \alpha_i, \quad (7)$

$q(x_{t-1} \mid x_0, x_t) = \mathcal{N}\!\left(x_{t-1} \,\Big|\, \frac{\sqrt{\gamma_{t-1}}\,(1 - \alpha_t)\, x_0 + \sqrt{\alpha_t}\,(1 - \gamma_{t-1})\, x_t}{1 - \gamma_t}, \ \frac{(1 - \gamma_{t-1})(1 - \alpha_t)}{1 - \gamma_t} I\right). \quad (8)$

Given the marginal distribution of $x_t$ given $x_0$ in (7), one can sample from $q(x_t \mid x_0)$ independently for different $t$ and perform SGD on a randomly chosen KL term in (6). Furthermore, given that the posterior distribution of $x_{t-1}$ given $x_t$ and $x_0$ is Gaussian, one can compute each KL term in (6) between two Gaussians in closed form and avoid high-variance Monte Carlo estimation.

3 LINKING DDPMS TO CONTINUOUS-TIME AFFINE DIFFUSION PROCESSES
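The closed-form marginal in Eq. (7) can be sanity-checked numerically: composing the per-step transitions of Eq. (2) must reproduce a scale of sqrt(γ_T) on x_0 and a noise variance of 1 − γ_T. A small sketch, with an illustrative noise schedule:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10
alphas = rng.uniform(0.9, 0.999, size=T)  # illustrative schedule, 0 < alpha_t < 1

# Each step is x_t = sqrt(alpha_t) x_{t-1} + sqrt(1 - alpha_t) eps_t.
# Track, in closed form, the scale multiplying x_0 and the accumulated
# variance of the independent Gaussian noise terms.
scale, var = 1.0, 0.0
for a in alphas:
    scale *= np.sqrt(a)
    var = a * var + (1.0 - a)

# Eq. (7) claims q(x_T | x_0) = N(sqrt(gamma_T) x_0, (1 - gamma_T) I).
gamma_T = np.prod(alphas)
print(np.isclose(scale, np.sqrt(gamma_T)), np.isclose(var, 1.0 - gamma_T))
```

The variance recursion follows because scaling a variance-v variable by sqrt(α) gives variance αv, and the fresh noise adds 1 − α; by induction this telescopes to 1 − γ_t.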
Before describing our approach to efficiently sampling from DDPMs, it is helpful to link DDPMs to continuous-time affine diffusion processes, as this shows the compatibility of our approach with both time-discrete and time-continuous DDPMs (Song et al., 2021; Kingma et al., 2021). Let $x_0 \sim q(x_0)$ denote a data point drawn from the empirical distribution of interest, and let $q(x_t \mid x_0)$ denote a stochastic process for $t \in [0, 1]$ defined through an affine diffusion process via the following stochastic differential equation (SDE):

$dX_t = f_{sde}(t)\, X_t\, dt + g_{sde}(t)\, dB_t, \quad (9)$

where $f_{sde}, g_{sde} : [0, 1] \to [0, 1]$ are integrable functions satisfying $f_{sde}(0) = 1$ and $g_{sde}(0) = 0$. Following Särkkä & Solin (2019) (Section 6.1), we can compute the exact marginals $q(x_t \mid x_s)$ for any $0 \leq s < t \leq 1$:

$q(x_t \mid x_s) = \mathcal{N}\!\left(x_t \,\Big|\, \psi(t, s)\, x_s, \Big( \int_s^t \psi(t, u)^2 g(u)^2\, du \Big) I\right), \quad (10)$

where $\psi(t, s) = \exp \int_s^t f(u)\, du$. Since these integrals are difficult to work with, we instead propose (in parallel to Kingma et al. (2021)) to define the marginals directly:

$q(x_t \mid x_0) = \mathcal{N}\!\left(x_t \mid f(t)\, x_0, g(t)^2 I\right), \quad (11)$

where $f, g : [0, 1] \to [0, 1]$ are differentiable, monotonic functions satisfying $f(0) = 1$, $f(1) = 0$, $g(0) = 0$, $g(1) = 1$. Then, by implicit differentiation, it follows that the corresponding diffusion is

$dX_t = \frac{f'(t)}{f(t)} X_t\, dt + \sqrt{2 g(t) \Big( g'(t) - \frac{f'(t)\, g(t)}{f(t)} \Big)}\, dB_t. \quad (12)$

We provide a proof of Equation 12 in the appendix (A.1). To complete our formulation, let $f_{ts} = f(t)/f(s)$ and $g_{ts} = \sqrt{g(t)^2 - f_{ts}^2\, g(s)^2}$. Then it follows that, for any $0 < s < t \leq 1$,

$q(x_t \mid x_s) = \mathcal{N}\!\left(x_t \mid f_{ts}\, x_s, g_{ts}^2 I\right), \quad (13)$

$q(x_s \mid x_t, x_0) = \mathcal{N}\!\left(x_s \,\Big|\, \frac{1}{g_{t0}^2}\big( f_{s0}\, g_{ts}^2\, x_0 + f_{ts}\, g_{s0}^2\, x_t \big), \ \frac{g_{s0}^2\, g_{ts}^2}{g_{t0}^2} I\right). \quad (14)$

We include proofs of (13) and (14) in the appendix (A.2).
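The definitions of f_ts and g_ts can be checked for self-consistency: composing the transitions x_0 → x_s → x_t of Eq. (13) must match the direct marginal x_0 → x_t of Eq. (11). A sketch with an illustrative schedule f(t) = cos(πt/2), g(t) = sin(πt/2), which satisfies the boundary conditions f(0) = 1, f(1) = 0, g(0) = 0, g(1) = 1:

```python
import numpy as np

def f(t):
    return np.cos(np.pi * t / 2)

def g(t):
    return np.sin(np.pi * t / 2)

def fts(t, s):
    # Scale of the transition x_s -> x_t.
    return f(t) / f(s)

def gts(t, s):
    # Noise standard deviation of the transition x_s -> x_t.
    return np.sqrt(g(t) ** 2 - fts(t, s) ** 2 * g(s) ** 2)

s, t = 0.3, 0.7
# Scales multiply along the path, and variances accumulate with the
# squared scale of the later transition:
scales_match = np.isclose(fts(t, 0), fts(t, s) * fts(s, 0))
vars_match = np.isclose(gts(t, 0) ** 2,
                        gts(t, s) ** 2 + fts(t, s) ** 2 * gts(s, 0) ** 2)
print(scales_match, vars_match)
```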
These equations show that we can perform inference along any ancestral sampling path (i.e., the timesteps can attain continuous values) by formulating the reverse process in terms of the posterior distribution as

$p_\theta(x_s \mid x_t) = q\!\left(x_s \,\Big|\, x_t, \ \hat{x}_0 = \frac{1}{f_{t0}}\big( x_t - g_{t0}\, \epsilon_\theta(x_t, t) \big)\right), \quad (15)$

justifying the compatibility of our main approach with time-continuous DDPMs. We note that this reverse process is also mathematically equivalent to a reverse process based on a time-discrete DDPM derived from a subsequence of the original timesteps, as done by Song et al. (2020); Nichol & Dhariwal (2021). For the case of s = 0 in the reverse process, we follow the parametrization of Ho et al. (2020) to obtain discretized log-likelihoods and compare our log-likelihoods fairly with prior work.

Algorithm 1: Given a matrix L of shape (T+1) × (T+1) of precomputed L(·, ·) terms, find the likelihood-optimal schedules for all step budgets.

```python
import numpy as np

def vectorized_dp_all_budgets(L):
    T = len(L) - 1
    D = np.full(L.shape, -1)      # argmin table (predecessor timesteps)
    C = np.full(L.shape, np.inf)  # best-cost table
    C[0, 0] = 0
    for k in range(1, T + 1):
        bpds = C[k - 1, None] + L
        C[k] = np.amin(bpds, axis=-1)
        D[k] = np.argmin(bpds, axis=-1)
    return D
```

Algorithm 2: Fetch the shortest path of K steps from the dynamic programming results implicitly returned by Algorithm 1.

```python
def fetch_shortest_path(D, K):
    optpath = []
    t = K
    for k in reversed(range(K)):
        optpath.append(t)
        t = D[k, t]
    return optpath
```
This work presents a method to efficiently sample from a pre-trained DDPM by solving a dynamic programming problem that can maximize the log likelihood of the data samples given a fixed computational budget. This is done by defining a least-cost path problem to select a reduced set of time steps among a full grid of potential time steps across different possible step budget sizes, where the ELBO is used as the cost function. The authors show that their method can identify DDPM schedules that can achieve significantly higher log likelihood (i.e. lower bits/dim) than prior DDPM schedules in the regime where about a hundred steps or fewer are used.
SP:11ab8f635d8d593fe0187875679a36257360bf66
Learning to Efficiently Sample from Diffusion Probabilistic Models
1 INTRODUCTION . Denoising Diffusion Probabilistic Models ( DDPMs ) have emerged as a powerful class of generative models ( Sohl-Dickstein et al. , 2015 ; Ho et al. , 2020 ) . DDPMs model the data distribution through an iterative denoising process , and have been applied successfully to a variety of applications , including unconditional image generation ( Song & Ermon , 2019 ; Ho et al. , 2020 ; Song et al. , 2021 ; Nichol & Dhariwal , 2021 ) , shape generation ( Cai et al. , 2020 ) , text-to-speech ( Chen et al. , 2021 ; Kong et al. , 2020 ) and single image super-resolution ( Saharia et al. , 2021 ; Li et al. , 2021 ) . DDPMs are easy to train , featuring a simple denoising objective ( Ho et al. , 2020 ) with noise schedules that successfully transfer across different models and datasets . This contrasts to Generative Adversarial Networks ( GANs ) ( Goodfellow et al. , 2014 ) , which require an inner-outer loop optimization procedure that often entails instability and requires careful hyperparameter tuning . DDPMs also admit a simple non-autoregressive inference process ; this contrasts to autoregressive models with often prohibitive computational costs on high dimensional data . The DDPM inference process starts with samples from the corresponding prior noise distribution ( e.g. , standard Gaussian ) , and iteratively denoises the samples under the fixed noise schedule . However , DDPMs often need hundreds-tothousands of denoising steps ( each involving a feedforward pass of a large neural network ) to achieve strong results . While this process is still much faster than autoregressive models , this is still often computationally prohibitive , especially when modeling high dimensional data . There has been much recent work focused on improving the sampling speed of DDPMs . WaveGrad ( Chen et al. 
, 2021 ) introduced a manually crafted schedule requiring only 6 refinement steps ; however , this schedule seems to be only applicable to the vocoding task where there is a very strong conditioning signal . Denoising Diffusion Implicit Models ( DDIMs ) ( Song et al. , 2020 ) accelerate sampling from pre-trained DDPMs by relying on a family of non-Markovian processes . They accelerate the generative process through taking multiple steps in the diffusion process . However , DDIMs sacrifice the ability to compute log-likelihoods . Nichol & Dhariwal ( 2021 ) also explored the use of ancestral sampling with a subsequence of the original denoising steps , trying both a uniform stride and other hand-crafted strides . San-Roman et al . ( 2021 ) improve few-step sampling further by training a separate model after training a DDPM to estimate the level of noise , and modifying inference to dynamically adjust the noise schedule at every step to match the predicted noise level . All these fast-sampling techniques rely on a key property of DDPMs – there is a decoupling between the training and inference schedule . The training schedule need not be the same as the inference schedule , e.g. , a diffusion model trained to use 1000 steps may actually use only 10 steps during inference . This decoupling characteristic is typically not found in other generative models . In past work , the choice of inference schedule was often considered a hyperpameter selection problem , and often selected via intuition or extensive hyperparmeter exploration ( Chen et al. , 2021 ) . In this work , we view the choice of the timesteps of the inference schedule ( which we just call an inference path ) as an independent optimization problem , wherein we attempt to learn the best schedule . Our approach relies on the observation that we can solve this optimization problem with dynamic programming . 
Given a fixed budget of K refinement steps and a pre-trained DDPM , we find the set of timesteps that maximizes the corresponding evidence lower bound ( ELBO ) . As an optimization objective , the ELBO has a key decomposability property : the total ELBO is the sum of individual KL terms , and for any two inference paths , if the timesteps ( s , t ) contiguously occur in both , they share a common KL term , therefore admitting memoization ( see Section 4.1 for a precise definition ) . Our main contributions are the following : • We introduce a method that that finds the likelihood-optimal inference paths with a simple dynamic programming algorithm for all possible computation budgets of K refinement steps . The algorithm searches over T > K timesteps , only requiring O ( T ) neural network forward passes . It only needs to be applied once to a pre-trained DDPM , does not require training or retraining a DDPM , and is applicable to both time-discrete and time-continuous DDPMs . • We experiment with DDPM models from prior work . On both Lsimple CIFAR10 and Lhybrid ImageNet 64x64 , we discover schedules which require only 32 refinement steps , yet sacrifice only 0.1 bits per dimension compared to their original counterparts with 1,000 and 4,000 steps , respectively . • We show that our method can be applied to any decomposable set of objectives . In particular , optimizing a reweighted ELBO can favourably bias our algorithm towards solutions with better FID scores , as we find that optimizing the exact variational lower bound may lead to worse FID scores , which is consistent with prior work on unconditional image generation . 2 BACKGROUND ON DENOISING DIFFUSION PROBABILISTIC MODELS . Denoising Diffusion Probabilistic Models ( DDPMs ) ( Ho et al. , 2020 ; Sohl-Dickstein et al. , 2015 ) are defined in terms of a forward Markovian diffusion process q and a learned reverse process pθ . 
The forward diffusion process gradually adds Gaussian noise to a data point $x_0$ through $T$ iterations ,
$q(x_{1:T} \mid x_0) = \prod_{t=1}^{T} q(x_t \mid x_{t-1})$ , (1)
$q(x_t \mid x_{t-1}) = \mathcal{N}\big(x_t \mid \sqrt{\alpha_t}\, x_{t-1} , (1-\alpha_t) I\big)$ , (2)
where the scalar parameters $\alpha_{1:T}$ determine the variance of the noise added at each diffusion step , subject to $0 < \alpha_t < 1$ . The learned reverse process aims to model $q(x_0)$ by inverting the forward process , gradually removing noise from signal starting from pure Gaussian noise $x_T$ ,
$p(x_T) = \mathcal{N}(x_T \mid 0 , I)$ (3)
$p_\theta(x_{0:T}) = p(x_T) \prod_{t=1}^{T} p_\theta(x_{t-1} \mid x_t)$ (4)
$p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\big(x_{t-1} \mid \mu_\theta(x_t , t) , \sigma_t^2 I\big)$ . (5)
The parameters of the reverse process can be optimized by maximizing the following variational lower bound on the training set :
$\mathbb{E}_q \log p(x_0) \ge \mathbb{E}_q \Big[ \log p_\theta(x_0 \mid x_1) - \sum_{t=2}^{T} D_{\mathrm{KL}}\big( q(x_{t-1} \mid x_t , x_0) \,\|\, p_\theta(x_{t-1} \mid x_t) \big) - L_T(x_0) \Big]$ (6)
where $L_T(x_0) = D_{\mathrm{KL}}\big( q(x_T \mid x_0) \,\|\, p(x_T) \big)$ . Nichol & Dhariwal ( 2021 ) have demonstrated that training DDPMs by maximizing the ELBO yields competitive log-likelihood scores on both CIFAR10 and ImageNet 64×64 , achieving 2.94 and 3.53 bits per dimension respectively . Two notable properties of the Gaussian diffusion process that help formulate DDPMs tractably and efficiently are :
$q(x_t \mid x_0) = \mathcal{N}\big(x_t \mid \sqrt{\gamma_t}\, x_0 , (1-\gamma_t) I\big)$ , where $\gamma_t = \prod_{i=1}^{t} \alpha_i$ , (7)
$q(x_{t-1} \mid x_0 , x_t) = \mathcal{N}\Big(x_{t-1} \;\Big|\; \frac{\sqrt{\gamma_{t-1}}\,(1-\alpha_t)\, x_0 + \sqrt{\alpha_t}\,(1-\gamma_{t-1})\, x_t}{1-\gamma_t} ,\; \frac{(1-\gamma_{t-1})(1-\alpha_t)}{1-\gamma_t} I\Big)$ . (8)
Given the marginal distribution of $x_t$ given $x_0$ in (7) , one can sample from $q(x_t \mid x_0)$ independently for different $t$ and perform SGD on a randomly chosen KL term in (6) . Furthermore , given that the posterior distribution of $x_{t-1}$ given $x_t$ and $x_0$ is Gaussian , one can compute each KL term in (6) between two Gaussians in closed form and avoid high-variance Monte Carlo estimation . 3 LINKING DDPMS TO CONTINUOUS TIME AFFINE DIFFUSION PROCESSES .
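The closed-form marginal in ( 7 ) is what makes training efficient : one can jump straight to any $x_t$ rather than simulating $t$ noising steps . The following minimal sketch ( plain NumPy , with an illustrative noise schedule that is not from the paper ) checks numerically that iterating the one-step transitions in ( 2 ) reproduces the mean and variance of the marginal in ( 7 ) :

```python
import numpy as np

rng = np.random.default_rng(0)
alphas = np.array([0.99, 0.97, 0.95])   # illustrative schedule, 0 < alpha_t < 1
gamma = np.cumprod(alphas)              # gamma_t = prod_{i<=t} alpha_i
x0 = 2.0                                # scalar "data point"

# Simulate the forward chain q(x_t | x_{t-1}) many times.
n = 200_000
x = np.full(n, x0)
for a in alphas:
    x = np.sqrt(a) * x + np.sqrt(1 - a) * rng.standard_normal(n)

# Compare empirical moments with the closed-form marginal q(x_T | x_0).
mean_closed = np.sqrt(gamma[-1]) * x0
var_closed = 1 - gamma[-1]
assert abs(x.mean() - mean_closed) < 0.02
assert abs(x.var() - var_closed) < 0.02
```

The same identity is what allows sampling $x_t$ independently for different $t$ and doing SGD on a randomly chosen KL term of ( 6 ) .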
Before describing our approach to efficiently sampling from DDPMs , it is helpful to link DDPMs to continuous time affine diffusion processes , as this shows the compatibility of our approach with both time-discrete and time-continuous DDPMs ( Song et al. , 2021 ; Kingma et al. , 2021 ) . Let $x_0 \sim q(x_0)$ denote a data point drawn from the empirical distribution of interest and let $q(x_t \mid x_0)$ denote a stochastic process for $t \in [0, 1]$ defined through an affine diffusion process via the following stochastic differential equation ( SDE ) :
$dX_t = f_{\mathrm{sde}}(t)\, X_t\, dt + g_{\mathrm{sde}}(t)\, dB_t$ , (9)
where $f_{\mathrm{sde}} , g_{\mathrm{sde}} : [0, 1] \to [0, 1]$ are integrable functions satisfying $f_{\mathrm{sde}}(0) = 1$ and $g_{\mathrm{sde}}(0) = 0$ . Following Särkkä & Solin ( 2019 ) ( section 6.1 ) , we can compute the exact marginals $q(x_t \mid x_s)$ for any $0 \le s < t \le 1$ . We get :
$q(x_t \mid x_s) = \mathcal{N}\Big(x_t \;\Big|\; \psi(t, s)\, x_s ,\; \Big( \int_s^t \psi(t, u)^2\, g_{\mathrm{sde}}(u)^2\, du \Big) I \Big)$ (10)
where $\psi(t, s) = \exp \int_s^t f_{\mathrm{sde}}(u)\, du$ . Since these integrals are difficult to work with , we instead propose ( in parallel to Kingma et al . ( 2021 ) ) to define the marginals directly :
$q(x_t \mid x_0) = \mathcal{N}\big(x_t \mid f(t)\, x_0 , g(t)^2 I\big)$ (11)
where $f , g : [0, 1] \to [0, 1]$ are differentiable , monotonic functions satisfying $f(0) = 1$ , $f(1) = 0$ , $g(0) = 0$ , $g(1) = 1$ . Then , by implicit differentiation it follows that the corresponding diffusion is
$dX_t = \frac{f'(t)}{f(t)}\, X_t\, dt + \sqrt{2 g(t) \Big( g'(t) - \frac{f'(t)\, g(t)}{f(t)} \Big)}\, dB_t$ . (12)
We provide a proof for Equation 12 in the appendix ( A.1 ) . To complete our formulation , let $f_{ts} = \frac{f(t)}{f(s)}$ and $g_{ts} = \sqrt{g(t)^2 - f_{ts}^2\, g(s)^2}$ . Then , it follows that for any $0 < s < t \le 1$ we have
$q(x_t \mid x_s) = \mathcal{N}\big(x_t \mid f_{ts}\, x_s , g_{ts}^2 I\big)$ , (13)
$q(x_s \mid x_t , x_0) = \mathcal{N}\Big(x_s \;\Big|\; \frac{1}{g_{t0}^2}\big( f_{s0}\, g_{ts}^2\, x_0 + f_{ts}\, g_{s0}^2\, x_t \big) ,\; \frac{g_{s0}^2\, g_{ts}^2}{g_{t0}^2} I \Big)$ . (14)
We include proofs for (13) and (14) in the appendix ( A.2 ) .
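A useful consistency check on ( 13 ) is that the transition parameters compose : moving from time 0 to s and then from s to t must give the same marginal as moving from 0 to t directly , i.e. $f_{t0} = f_{ts} f_{s0}$ and $g_{t0}^2 = g_{ts}^2 + f_{ts}^2 g_{s0}^2$ . A short sketch , assuming one illustrative choice of f and g ( not from the paper ) that satisfies the boundary conditions $f(0)=1$ , $f(1)=0$ , $g(0)=0$ , $g(1)=1$ :

```python
import math

# Illustrative marginal schedules with f(0)=1, f(1)=0, g(0)=0, g(1)=1.
f = lambda t: math.cos(math.pi * t / 2)
g = lambda t: math.sin(math.pi * t / 2)

def fts(t, s):
    return f(t) / f(s)

def gts(t, s):
    return math.sqrt(g(t) ** 2 - fts(t, s) ** 2 * g(s) ** 2)

s, t = 0.3, 0.8
# Composing 0 -> s -> t must match 0 -> t directly.
assert math.isclose(fts(t, 0), fts(t, s) * fts(s, 0))
assert math.isclose(gts(t, 0) ** 2, gts(t, s) ** 2 + fts(t, s) ** 2 * gts(s, 0) ** 2)
```

This composition property is exactly what lets the reverse process of ( 15 ) take ancestral sampling steps between arbitrary continuous timesteps .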
These equations show that we can perform inference with any ancestral sampling path ( i.e. , the timesteps can attain continuous values ) by formulating the reverse process in terms of the posterior distribution as
$p_\theta(x_s \mid x_t) = q\Big( x_s \;\Big|\; x_t ,\, \hat{x}_0 = \frac{1}{f_{t0}}\big( x_t - g_{t0}\, \epsilon_\theta(x_t , t) \big) \Big)$ , (15)
justifying the compatibility of our main approach with time-continuous DDPMs . We note that this reverse process is also mathematically equivalent to a reverse process based on a time-discrete DDPM derived from a subsequence of the original timesteps , as done by Song et al . ( 2020 ) ; Nichol & Dhariwal ( 2021 ) . For the case of s = 0 in the reverse process , we follow the parametrization of Ho et al . ( 2020 ) to obtain discretized log likelihoods and compare our log likelihoods fairly with prior work .
Algorithm 1 : Given a matrix L of shape ( T+1 ) × ( T+1 ) of precomputed L ( · , · ) terms , find the likelihood-optimal schedules for all step budgets .

    def vectorized_dp_all_budgets(L):
        T = len(L) - 1
        D = np.full(L.shape, -1)
        C = np.full(L.shape, np.inf)
        C[0, 0] = 0
        for k in range(1, T + 1):
            bpds = C[k - 1, None] + L
            C[k] = np.amin(bpds, axis=-1)
            D[k] = np.argmin(bpds, axis=-1)
        return D

Algorithm 2 : Fetch the shortest path of K steps from the dynamic programming results implicitly returned by Algorithm 1 .

    def fetch_shortest_path(D, K):
        optpath = []
        t = K
        for k in reversed(range(K)):
            optpath.append(t)
            t = D[k, t]
        return optpath
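To make the dynamic program concrete , here is a self-contained toy version ( with a hypothetical cost matrix and slightly simplified indexing relative to Algorithms 1–2 , which vectorize over all budgets at once ) that finds the cheapest K-step path from timestep 0 to T when the total cost decomposes into a sum of per-step terms L [ t , s ] :

```python
import numpy as np

def cheapest_k_step_path(L, K):
    """Minimize sum of L[t_i, t_{i-1}] over paths 0 = t_0 < ... < t_K = T."""
    T = len(L) - 1
    C = np.full((K + 1, T + 1), np.inf)   # C[k, t]: best k-step cost reaching t
    D = np.full((K + 1, T + 1), -1, dtype=int)
    C[0, 0] = 0.0
    for k in range(1, K + 1):
        bpds = C[k - 1][None, :] + L       # bpds[t, s] = C[k-1, s] + L[t, s]
        C[k] = bpds.min(axis=-1)
        D[k] = bpds.argmin(axis=-1)
    path, t = [], T                        # backtrack from the final timestep
    for k in range(K, 0, -1):
        path.append(t)
        t = D[k, t]
    path.append(t)
    return C[K, T], path[::-1]

# Toy decomposable cost: a step between timesteps t and s costs (t - s)^2,
# so larger jumps are penalized, as a KL term might.
T = 3
L = np.full((T + 1, T + 1), np.inf)
for t in range(T + 1):
    for s in range(t):
        L[t, s] = float((t - s) ** 2)

cost, path = cheapest_k_step_path(L, K=2)
assert cost == 5.0 and list(path) == [0, 1, 3]
```

With budget K = 2 the cheapest split of the 3 timesteps is 0 → 1 → 3 ( cost 1 + 4 = 5 ) , tied with 0 → 2 → 3 ; the argmin picks the first . The memoization described in the text is visible in `bpds` : every candidate step ( s , t ) contributes the same precomputed L [ t , s ] regardless of the rest of the path .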
Samples are generated from DDPMs by solving an SDE , often in "discrete time" , which refers specifically to the Euler–Maruyama discretisation . This necessitates a choice of where to place the numerical steps . Each choice of step locations has a corresponding ELBO . This paper demonstrates that ( on a pretrained model ) the ELBO-optimal locations of the steps may be found via a dynamic programming algorithm .
SP:11ab8f635d8d593fe0187875679a36257360bf66
Generalizing Successor Features to continuous domains for Multi-task Learning
1 INTRODUCTION . Reinforcement learning ( RL ) tackles sequential decision making problems by defining optimal behavior through a reward function , where the agent learns how to behave by interacting with the environment and receiving rewards . The ability of RL algorithms to generalize across different , yet related , reward functions has great potential to realize more data-efficient algorithms with the capability to transfer to new reward functions . In this paper we look at one particular type of generalization , where the reward function itself changes but the underlying dynamics of the environment remain the same . This setup is flexible enough to allow transfer to happen across tasks , by appropriately defining the rewards which induce different task decompositions . This type of task decomposition potentially allows the agent to tackle more complex problems than would be possible were the tasks modeled as a single task . We are interested in a setup where the agent is exposed to multiple tasks , i.e. , tasks with different reward functions . In a multi-goal setting , the reward functions may differ simply in the Euclidean distance to different target goal locations . In a multi-task setting , the difference can be intricately designed into the reward function , for instance a reward function that distinguishes walking forward from walking backward . We argue that these differences in the structure of the reward function are difficult to capture within goal- or context-conditioned RL frameworks ( Sodhani et al. , 2021 ) . In the context of robotics , generalization across tasks is crucial . Consider an agent playing ball games with a racket ( Figure 1 ) . An agent trained to dribble the ball or to hit the ball should be able to quickly learn to play squash , as many of the skills , such as approaching and hitting the ball , are shared with the more complex task of playing squash .
From the learner's perspective , all these tasks share the same common properties : the ball falls to the ground due to gravity , depending on how heavy it is , and it moves with a certain velocity when it is hit by the racket . In other words , all these tasks share common dynamics . What changes are the small details in the reward function . For instance , the difference between dribbling a ball and hitting it against the wall can be the rotation angle of the racket and the amount of force required . If it were possible to learn a representation that could decouple such discrepancies between the reward functions , i.e. , decouple the shared dynamics from the task-specific dynamics , one could train an agent that re-uses the learned representation and quickly fine-tunes itself to the more task-specific representation , achieving faster learning . Successor features ( SF ) ( Barreto et al. , 2017 ) is one framework that enables such decomposability of representation , explicitly built into the RL formulation . The main goal of this framework is to promote a desired property where , instead of being posed as a separate representation learning problem , transfer is integrated into the RL framework as much as possible , preferably in a way that is almost transparent to the agent . SFs , in theory , enable fast transfer between tasks that differ only in their reward function . The advantage of using an SF framework over model-based RL , where one learns models of the reward function , is the ability to re-use the dynamics representation , which is decoupled from the task-specific representation . Our main contribution is to address the generalization and expensive inference problem in the classical SF frameworks coupled with GPI ( Barreto et al. , 2017 ) . We show that a simple architecture can provide a solution to this feature learning problem and demonstrate the effectiveness of our method compared to ( Barreto et al.
, 2020 ) in the more challenging continuous state and action setting . The majority of the existing work using SFs operates under the discrete action setting or under the GPI setting , optimizing a set of policies , which in practice is hard to apply to real robotics applications . To the best of our knowledge , our method is the first to show the applicability of SFs , coupled with an appropriate representation learning mechanism , to solving challenging continuous control tasks . We show that simple modifications to an actor-critic framework can be easily coupled with SFs and empirically demonstrate the efficacy of our method . To summarize , our contributions are as follows : First , we propose a practical implementation of the SF framework for continuous state and action domains in the context of an actor-critic architecture . Secondly , we propose a robust method for learning the representations φ and w with the ability to learn disentangled representations for the state space vs the task-specific representation of complex nonlinear reward functions . Finally , we demonstrate the efficacy of our method on a range of tasks , from the classical continuous control 2D reacher domain in the DM control suite ( Tassa et al. , 2020 ) to more challenging 3D reacher and manipulation tasks with the Sawyer arm on the Meta-World benchmark ( Yu et al. , 2019 ) . 2 BACKGROUND . 2.1 REINFORCEMENT LEARNING . We assume the interaction between agent and environment can be modeled as a Markov Decision Process ( MDP ( Puterman , 1994 ) ) . An MDP is defined as a tuple $M \equiv \langle S , A , p , R , \gamma \rangle$ with state space $S$ and action space $A$ . For each $s \in S$ and $a \in A$ the function $p(\cdot \mid s , a)$ gives the next-state distribution upon taking action $a$ in state $s$ , where $p(\cdot \mid s , a)$ is referred to as the dynamics of the MDP . The random variable $R(s , a , s')$ determines the reward received in the transition $s \xrightarrow{a} s'$ .
Usually we are interested in the expected value of this variable , which is denoted by $r(s , a , s')$ , and $\gamma \in [0 , 1)$ weighs the importance of future rewards . The agent's goal is to find a policy $\pi : S \to A$ , that is , a mapping from states to actions , that maximizes the value of every state-action pair , defined as
$Q^\pi(s , a) \equiv \mathbb{E}_\pi \Big[ \sum_{i=0}^{\infty} \gamma^i\, r(S_{t+i} , A_{t+i} , S_{t+i+1}) \;\Big|\; S_t = s , A_t = a \Big]$ . (1)
where $S_t$ and $A_t$ are random variables indicating the state occupied and the action selected by the agent at time step $t$ , and $\mathbb{E}_\pi[\cdot]$ denotes expectation over the trajectories induced by $\pi$ . The function $Q^\pi(s , a)$ is referred to as the " action-value function " of policy $\pi$ . RL algorithms based on dynamic programming build on two fundamental operations : policy evaluation , which is the computation of $Q^\pi(s , a)$ , the value function of policy $\pi$ on the task with reward $r$ , and the policy improvement theorem ( Bellman , 1957 ) . Once a policy $\pi$ has been evaluated , we can compute a greedy policy $\pi'(s) \in \arg\max_a Q^\pi(s , a)$ that is guaranteed to perform at least as well as $\pi$ , that is : $Q^{\pi'}(s , a) \ge Q^\pi(s , a)$ for any $(s , a) \in S \times A$ . The computation of $\pi'$ is referred to as policy improvement . 2.2 MULTI-TASK RL & TRANSFER LEARNING . In practical situations , agents often face multiple related tasks , such as the robot learning numerous skills in Figure 1 . We define tasks $M_i$ drawn from the set $\mathcal{M}$ . Then , the goal of multi-task learning is to find $\pi_i^*$ , an optimal policy for each MDP $M_i$ with corresponding optimal value function $Q_i^{\pi_i^*}$ . Barreto et al . ( 2017 ) extended the policy improvement theorem to the scenario where the new policy is computed based on the value functions of a set of policies and referred to this as generalized policy improvement ( GPI ) . Suppose the agent has computed $n$ policies with corresponding action-value functions $Q^{\pi_1} , Q^{\pi_2} , \ldots , Q^{\pi_n}$ .
Let $Q^{\max}(s , a) = \max_i Q^{\pi_i}(s , a)$ and define $\pi(s) \in \arg\max_a Q^{\max}(s , a)$ for all $s \in S$ ; then $Q^\pi(s , a) \ge Q^{\max}(s , a)$ for all $(s , a) \in S \times A$ . The only caveat is that it is a waste of computation to compute the value functions of $\pi_1^* , \pi_2^* , \ldots , \pi_n^*$ from scratch . This approach becomes appealing if we have a way to quickly compute the value functions of the policies $\pi_i$ on the task $M_{n+1}$ . 3 ACTOR-CRITIC SUCCESSOR FEATURES . This section will describe Successor Features and their previous use in the discrete action setting . We will then explain our extension , through Universal Value Function Approximators and an Actor-Critic approach , to learn useful Successor Features and corresponding policies for high dimensional multi-task continuous control . 3.1 SUCCESSOR FEATURES DECOMPOSITION . Barreto et al . ( 2017 ) proposed a simple reward model which leads to the generalization of the successor representation ( SR ) proposed in ( Dayan , 1993 ) . The key assumption is that the reward function can be approximately represented as a linear combination of learned features $\phi(s)$ . The successor representation ( SR ) ( Dayan , 1993 ) is a representation that generalizes between states using similarity between their successors , that is , the states that follow the current state given the agent's policy . The generalization of SR with function approximation is referred to as the Successor Features ( SF ) ( Barreto et al. , 2017 ) of $(s , a)$ under policy $\pi$ . Following ( Barreto et al. , 2017 ; 2018 ; 2020 ) , let $\phi : S \times A \times S \to \mathbb{R}^d$ be an arbitrary function whose output we will see as " features " . We assume that there exist features such that the reward function can be written as
$r(s , a , s') = \phi(s , a , s')^\top w$ (2)
where $\phi(s , a , s') \in \mathbb{R}^d$ are features of $(s , a , s')$ and $w \in \mathbb{R}^d$ are weights . Intuitively we can think of $\phi(s , a , s')$ as salient events that may be desirable or undesirable to the agent .
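GPI itself is a one-line operation on value tables : take the pointwise maximum of the known policies' action values and act greedily with respect to it . A minimal sketch with made-up Q tables for two policies ( not from the paper ) :

```python
import numpy as np

# Q tables of two known policies on a toy problem: shape (num_states, num_actions).
Q_pi1 = np.array([[1.0, 0.0],
                  [0.2, 0.5]])
Q_pi2 = np.array([[0.3, 0.8],
                  [0.9, 0.1]])

# Generalized policy improvement: Q_max(s, a) = max_i Q_pi_i(s, a),
# then act greedily: pi(s) = argmax_a Q_max(s, a).
Q_max = np.maximum(Q_pi1, Q_pi2)
pi = Q_max.argmax(axis=1)

assert np.allclose(Q_max, [[1.0, 0.8], [0.9, 0.5]])
assert pi.tolist() == [0, 0]
```

The GPI theorem guarantees that this greedy policy is at least as good as every policy whose Q table entered the maximum ; the expensive part , which SFs address , is obtaining those Q tables on a new task in the first place .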
Based on Eq. 2 we can define an environment $M_\phi(S , A , p , \gamma)$ as
$M_\phi \equiv \{ M(S , A , p , r , \gamma) \mid r(s , a , s') = \phi(s , a , s')^\top w \}$ , (3)
that is , $M_\phi$ is the set of MDPs induced by $\phi$ through all possible instantiations of $w$ . SFs make it possible to compute the value of a policy $\pi$ on any task $M_i \in M_\phi$ by simply plugging in the representation vector $w_i$ defining the task . Specifically , if we substitute Eq. 2 in the definition of the action-value function of a policy we have
$Q^\pi(s , a) = \mathbb{E}_\pi [ r_{t+1} + \gamma r_{t+2} + \cdots \mid S_t = s , A_t = a ] = \mathbb{E}_\pi [ \phi_{t+1}^\top w + \gamma\, \phi_{t+2}^\top w + \cdots \mid S_t = s , A_t = a ] = \mathbb{E}_\pi \Big[ \sum_{i=t}^{\infty} \gamma^{i-t} \phi_{i+1} \;\Big|\; S_t = s , A_t = a \Big]^\top w = \psi^\pi(s , a)^\top w$ (4)
One benefit of doing so is that if we replace $w_i$ with $w_j$ in Eq. 4 , we immediately obtain the evaluation of $\pi$ on task $M_j$ . This way , only the relevant module must be relearned when either the dynamics or the reward changes . The key insight of SFs is the linearity of the rewards $r_w$ with respect to the features $\phi$ , which gives us the decomposition of the action value of policy $\pi$ on task $r_w$ . In the GPI setting , when the agent is presented with a new task $M_{n+1}$ , it needs to compute $\{ Q_{n+1}^{\pi_1^*} , Q_{n+1}^{\pi_2^*} , \ldots , Q_{n+1}^{\pi_n^*} \}$ , that is , the evaluation of each $\pi_i^*$ under the new reward function induced by $w_{n+1}$ . This in turn would require applying the GPI theorem to the newly-computed set of value functions , which gives rise to a policy that performs at least as well as the policy based on any subset of these . Hence ( Barreto et al. , 2017 ) proposed to incorporate SFs : when the reward function changes to $r_{n+1}(s , a , s') = \phi(s , a , s')^\top w_{n+1}$ , as long as we have the correct $w_{n+1}$ we can compute the value function of $\pi_i^*$ by simply computing $Q_{n+1}^{\pi_i^*}(s , a) = \psi^{\pi_i^*}(s , a)^\top w_{n+1}$ . This reduces the computation of all $Q_{n+1}^{\pi_i^*}$ to the simpler supervised problem of approximating $w_{n+1}$ ( Barreto et al. , 2020 ) .
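Equation 4 can be verified directly in a tabular setting : compute $\psi^\pi$ once as the discounted sum of future features , then evaluate the same policy on any task in $M_\phi$ with a single dot product against that task's w. A toy sketch ( deterministic chain , one-hot features , made-up task weights ; none of this is from the paper ) :

```python
import numpy as np

gamma = 0.9
num_states = 4  # deterministic chain 0 -> 1 -> 2 -> 3 (terminal)

def rollout_features(start):
    """psi(start) = sum_i gamma^i phi_{t+i+1}, with phi(s, a, s') = one-hot(s')."""
    s, psi, disc = start, np.zeros(num_states), 1.0
    while s < num_states - 1:
        s_next = s + 1
        psi += disc * np.eye(num_states)[s_next]
        disc *= gamma
        s = s_next
    return psi

psi0 = rollout_features(0)  # [0, 1, gamma, gamma^2]

# Two tasks sharing dynamics but differing in w; Q^pi(s) = psi^pi(s)^T w.
w_a = np.array([0.0, 1.0, 0.0, 0.0])  # reward only for reaching state 1
w_b = np.array([0.0, 0.0, 0.0, 1.0])  # reward only for reaching the goal state 3
assert np.isclose(psi0 @ w_a, 1.0)         # first reward is undiscounted (i = 0)
assert np.isclose(psi0 @ w_b, gamma ** 2)  # third transition, discounted by gamma^2
```

Swapping `w_a` for `w_b` re-evaluates the same policy on a new task without re-running the rollout , which is exactly the transfer property Eq. 4 promises .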
Although at first glance GPI and SFs seem entangled , this characterization of SFs does not depend on the GPI framework itself . Thus , SFs can be used in any RL framework where such a decomposition of the reward is viable . Furthermore , the combination of SFs and GPI provides an elegant framework for transfer in a multi-task setting . However , the computational cost of optimizing policies per task is prohibitive in more complex multi-task settings wherein the size of the task set $M_\phi$ is large .
This work looks at the use of successor features for solving simple continuous control tasks (in particular, reaching to different locations and door closing). The two contributions they enumerate are a ``practical implementation of SF framework for continuous state and action domains'' and jointly learning $\phi$ and $w$ (that is the successor features and the task weights $w$). They show their approach outperforms a goal-conditioned SAC baseline on these control tasks including generalizing to target locations that were not used during training.
SP:bbb70a2512ecbd1fbef0f9219b1d3423a4b6ed83
The paper proposes a method to incorporate Successor Features (SFs) in domains with continuous state and action spaces. It proposes an actor-critic architecture (a variation of the Soft Actor-Critic method) that learns disentangled representations for the environment dynamics and the tasks. The network architecture guarantees such disentanglement with two independent modules: one for the representation of the dynamics $\phi$ (fed by the current state, action, and next state) and one for the task representation $\boldsymbol{w}$ (fed by task-specific information, such as the goal). These modules are learned jointly in a single-stage training procedure, contrasting prior work [1]. The main contribution of this model is the enablement of the SFs for continuous domains without relying on the costly inference mechanism from the classic SFs framework implementation while enabling generalization among similar tasks.
SP:bbb70a2512ecbd1fbef0f9219b1d3423a4b6ed83
On The Quality Assurance Of Concept-Based Representations
1 INTRODUCTION . Addressing the lack of interpretability of deep neural networks ( DNNs ) has given rise to explainability methods , most common of which are feature importance methods ( Ribeiro et al. , 2016 ; Lundberg & Lee , 2017 ) that quantify the contribution of input features to certain predictions ( Bhatt et al. , 2020 ) . However , input features are not necessarily the most intuitive explanations , in particular when using low-level features such as pixels . Concept-based explainability ( Ghorbani et al. , 2019 ; Koh et al. , 2020 ; Yeh et al. , 2020 ; Ciravegna et al. , 2021 ) remedies this issue by constructing an explanation at a concept level , where concepts are considered high-level and semantically meaningful units of information commonly used by humans to explain their decisions . Furthermore , concepts allow users to improve a model ’ s performance via concept interventions , in which mispredicted concepts are corrected using expert knowledge ( Koh et al. , 2020 ) . In practice , what constitutes a concept is data-dependent , ranging from a group of pixels for image data ( Ghorbani et al. , 2019 ; Koh et al. , 2020 ; Yeh et al. , 2020 ) , to a sequence of words and sub-graphs for text and graph-based data , respectively ( Yeh et al. , 2020 ; Magister et al. , 2021 ) . While all of these definitions are specific to the data modality , they commonly refer to an intermediate representation of the input data that has certain properties . Summarising concepts as intermediate representations of the data makes them analogous to factors of variation in disentanglement learning , where the assumption is that there exists a generative process capable of producing a high-dimensional dataset using a finite number of factors ( Bengio et al. , 2013 ) . Such factors constitute a disentangled intermediate representation of the data with interpretability ( Bengio et al. , 2013 ; Higgins et al. , 2017 ) , fairness ( Creager et al. 
, 2019 ) , and predictive performance ( Locatello et al. , 2019 ; 2020b ) properties . The difference between Concept Learning ( CL ) and Disentanglement Learning ( DGL ) lies in the fact that concepts in CL are often formed based on direct supervision from concept labels or from a downstream task , whereas generative models ( e.g. , Variational Autoencoders ( VAEs ) ( Kingma & Welling , 2014 ; Higgins et al. , 2017 ) ) that serve as the basis of DGL are un-/semi-supervised and factors of variation are directly informed by the distribution of the input data . Since CL and DGL were developed independently , much of their connection remains unexplored . In particular , the metrics used to evaluate the quality of intermediate representations in each sub-field are not aligned , despite their overlapping goals . Metrics in the concept literature ( Koh et al. , 2020 ; Kazhdan et al. , 2020 ; Yeh et al. , 2020 ) are mainly concerned with the properties of learnt concepts w.r.t . the downstream task . On the other hand , given the lack of a downstream task , metrics in the disentanglement literature ( Higgins et al. , 2017 ; Ridgeway & Mozer , 2018 ; Locatello et al. , 2019 ) are mainly concerned with the properties of the learnt representations , referred to as latent codes , w.r.t . the ground truth factors of variation . We argue that concepts/latent codes , as surrogates for the inputs , need to have the following key properties : ( i ) They should correspond to semantically meaningful and coherent input sub-spaces ; ( ii ) They should preserve the amount of mutual information observed in ground truth concepts or factors of variation ( when available ) ; and ( iii ) They should capture sufficient statistics to predict the downstream task ( when available ) as well as raw inputs do .
In this paper , we consider the properties ( ii ) and ( iii ) and make the following key contributions :
- We unify the language and notation across CL and DGL by framing factors of variation and latent codes in DGL as ground truth concepts and concept representations in CL , respectively .
- We introduce metrics for evaluating the quality of learnt concepts/codes in the presence and absence of access to ground truth concepts/factors of variation , and when concepts/codes are correlated .
- We conduct a systematic empirical comparison of state-of-the-art methods from four families of methods : supervised CL , unsupervised CL , semi-supervised DGL , and unsupervised DGL .
- We make the code used for our metrics , methods , and datasets available in an open-source library.1
2 BACKGROUND AND RELATED WORK . Notation In both CL and DGL the aim is to find a low-dimensional intermediate representation ĉ that explains the downstream task ( s ) in CL , or the data ’ s factors of variation in DGL . In CL , this low-dimensional representation corresponds to a matrix ĉ ∈ Ĉ ⊆ R^{d×k} in which the i-th column constitutes a d-dimensional representation of the i-th concept . As zero-padding can be used to ensure equal length across different concept representations , for notational simplicity we assume that all concepts use a d-dimensional vector as their representation . Under this view , elements in ĉ ( : , i ) ∈ R^d are expected to have high values ( under some reasonable aggregation function ) if the i-th concept is considered to be activated for the input that generated this representation . For example , in the case where d = T and each concept can take up to T discrete values , ĉ ( : , i ) ∈ [ 0 , 1 ]^T can represent a probability distribution over all values that the i-th concept can take . As most CL methods assume d = 1 , for succinctness we use ĉ_i in place of ĉ ( : , i ) when d = 1 .
We adopt the same representation for latent codes in DGL and let ẑ ∈ Ẑ ⊆ R^{d×k} be a latent code matrix such that each dimension ẑ ( : , i ) ( or a non-overlapping subset of dimensions ) encodes one , and only one , independent factor of variation z_j . Nevertheless , note that in practice , d tends to be 1 for most DGL methods . Finally , for simplicity , we use ĉ to refer to both learnt concept representations and latent codes . Ground truth concepts and factors of variation are referred to as c ∈ C ⊆ R^k . In line with ( Koh et al. , 2020 ; Kazhdan et al. , 2020 ; Yeh et al. , 2020 ) we make use of : ( i ) a concept encoder function g : X′ ↦ Ĉ that maps a transformation of the inputs x ∈ X ⊆ R^m , as performed by a function φ : X ↦ X′ , to an intermediate concept representation ; and ( ii ) a label predictor function f : Ĉ ↦ Y that maps the concept representations to a downstream task ’ s set of labels y ∈ Y ⊆ R^L . These two functions can be combined to give a set of predictions for a sample x ∈ X by computing f ( g ( φ ( x ) ) ) . In DGL autoencoders , one can think of ( g ◦ φ ) ( · ) as the autoencoder ’ s encoder model and of the autoencoder ’ s decoder model as a function that approximates ( g ◦ φ )^{−1} ( · ) . Supervised concept learning In supervised CL , access to concept labels c ( i ) ∈ N^k , in addition to target labels y ( i ) ∈ R^L , is assumed for inputs x ( i ) ∈ R^m . In other words , we have training data { ( x ( i ) , c ( i ) , y ( i ) ) }_{i=1}^N , where N is the number of training samples . In its most common form , supervised CL divides the prediction into two distinct steps : ( i ) mapping an input sample to its concept representation via a concept encoder g ; and ( ii ) mapping a sample ’ s concept representation to its task labels via a label predictor f ( · ) . Together , these two functions constitute a Concept Bottleneck Model ( CBM ) ( Koh et al.
, 2020 ) , because their final prediction relies on the input going through the bottleneck g ( φ ( x ) ) , which is trained to be component-wise aligned with c. [ Footnote 1 : Code will be released after review . ] Concept-based Model Extraction ( CME ) ( Kazhdan et al. , 2020 ) constructs a CBM from a pretrained model by building a non-trivial φ ( · ) mapping function using the model ’ s latent space . Using such a latent representation instead of raw inputs typically makes CME more data efficient than CBM ( Kazhdan et al. , 2021 ) . Similarly , Concept Whitening ( CW ) ( Chen et al. , 2020 ) constructs a CBM by introducing a pluggable batch normalization module whose activations ( g ◦ φ ) ( · ) are trained to be aligned with representative sets of binary concepts . It achieves this by forcing different feature maps of the normalization module to be decorrelated and orthogonal while incentivizing activations in a given axis to be high when its corresponding pre-defined concept is activated . Unsupervised Concept Learning Unlike supervised CL , in unsupervised CL concept annotations are not available and concepts are discovered in an unsupervised manner . Ghorbani et al . ( 2019 ) extract concepts from a trained classifier for image data . Images belonging to each class are first segmented with multiple resolutions . The segments are then clustered as examples of class concepts and their importance scores are measured using TCAV ( Kim et al. , 2018 ) . Unlike Ghorbani et al . ( 2019 ) , Completeness-aware Concept Discovery ( CCD ) ( Yeh et al. , 2020 ) is data modality agnostic and extracts class-independent concepts . CCD builds on TCAV to first extract a set of concept vectors { h ( i ) }_{i=1}^k , each of which is a unit vector in X′ . These vectors are then used to construct a concept representation g ( φ ( x ) ) = ĉ ∈ Ĉ ⊆ R^k by setting ĉ_i to TH ( 〈 φ ( x ) , h ( i ) 〉 , β ) , the β-thresholded inner product of φ ( x ) and concept vector h ( i ) .
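As an illustration , the CCD concept representation described above amounts to thresholded inner products with the concept vectors . Below is a minimal numpy sketch , assuming hard thresholding for the TH operator and a random stand-in for the embedding φ ( x ) ; the dimensions and variable names are hypothetical , not taken from the paper .

```python
import numpy as np

def concept_scores(phi_x, H, beta):
    """CCD-style scores: the inner product of phi(x) with each unit concept
    vector h_i, kept only when it exceeds the threshold beta (TH operator)."""
    s = H @ phi_x                                  # <phi(x), h_i> for all i
    return np.where(s > beta, s, 0.0)

rng = np.random.default_rng(0)
phi_x = rng.normal(size=8)                         # stand-in embedding phi(x)
H = rng.normal(size=(4, 8))                        # k = 4 concept vectors ...
H /= np.linalg.norm(H, axis=1, keepdims=True)      # ... normalized to unit length
c_hat = concept_scores(phi_x, H, beta=0.1)         # concept representation in R^k
```

Each entry of `c_hat` is either zero or a super-threshold activation for the corresponding concept .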
When complete , a concept representation should contain sufficient statistics to obtain a high performance in the original model ’ s prediction task f ( φ ( x ) ) . If this is the case , then there must exist a mapping ψ : Ĉ ↦ X′ that recovers φ ( x ) from g ( φ ( x ) ) . Thus , CCD constructs a set of concept vectors such that f ( ψ ( g ( φ ( x ) ) ) ) is able to achieve a task performance similar to that of f ( φ ( x ) ) . Notice that although f ( · ) is still a label predictor function here , it is not applied to the raw inputs or to their concept representation . Similarly , Self-Explainable Neural Networks ( SENNs ) ( Alvarez-Melis & Jaakkola , 2018 ) learn to produce concept-based explanations without explicit concept supervision through a robustness regularization term that encourages a differentiable model to act locally linearly . A SENN proceeds by first learning a concept representation g ( x ) = ĉ ∈ R^k from the encoder of an autoencoding model and , second , generating a prediction f ( ĉ ) = G ( θ ( x )_1 ĉ_1 , · · · , θ ( x )_k ĉ_k ) using an aggregation function G to weight the importance of each concept with a linear coefficient learnt through a differentiable model θ ( · ) . Concept weights θ ( x ) can then serve as an explanation for SENN ’ s predicted label . Disentanglement Learning Generative models ( e.g. , VAEs ( Kingma & Welling , 2014 ) ) used in DGL assume that data is generated from a set of independent factors of variation c ∈ C , sampled from a factorizable distribution p ( c ) = ∏_i p ( c_i ) , such that a sample x is generated according to the conditional distribution p ( x | c ) . Thus , the goal of DGL is to find a function g ( · ) that maps inputs to a disentangled latent representation , such that a subset of non-overlapping dimensions of the latent representation corresponds to a unique factor of variation c_i .
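The SENN prediction rule f ( ĉ ) = G ( θ ( x )_1 ĉ_1 , · · · , θ ( x )_k ĉ_k ) can be sketched with linear stand-ins for the concept encoder g and the relevance network θ , and a sum as the aggregation G ; every concrete shape and network here is an illustrative assumption , not a detail from the original works .

```python
import numpy as np

rng = np.random.default_rng(1)
k, m, L = 3, 6, 2                    # concepts, input dim, label dim (assumed)

Wg = rng.normal(size=(k, m))         # linear stand-in for the concept encoder g
Wt = rng.normal(size=(L, k, m))      # linear stand-in for the relevance network theta

def senn_predict(x):
    c_hat = Wg @ x                   # concept representation c_hat = g(x), shape (k,)
    theta = Wt @ x                   # per-label relevance scores theta(x), shape (L, k)
    y_hat = theta @ c_hat            # aggregation G: relevance-weighted sum of concepts
    return y_hat, c_hat, theta

y_hat, c_hat, theta = senn_predict(rng.normal(size=m))
```

The matrix `theta` plays the role of the concept weights θ ( x ) that explain each predicted label .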
In light of recent work showing the theoretical impossibility of learning disentangled representations in an unsupervised manner ( Locatello et al. , 2019 ) , a promising line of work suggests providing the inductive bias required to learn disentangled representations through weak supervision . In this work , we focus on DGL methods where weak supervision comes via pairs of observations whose corresponding ground truth factors of variation share at least one common element ( Locatello et al. , 2020a ) and contrast them against vanilla unsupervised DGL methods such as VAEs ( Kingma & Welling , 2014 ) and β-VAEs ( Higgins et al. , 2017 ) .
The authors bring concept-based representation learning and disentanglement learning together under one umbrella, evaluating the quality of generated concepts both in the presence and in the absence of ground truth concept labels. They propose metrics suited to evaluating concept quality across both families of methods. Based on the empirical studies presented in the paper, they draw conclusions about how requirements for concept supervision affect final concept quality as well as the predictive performance of the model.
SP:586c729c2c163cba6c8a0519dd853463bbc405b7
The authors consider the question of whether recent concept-based learning algorithms, as well as disentangled representation learning algorithms, result in high-quality representations. In particular, they consider what high quality should mean in terms of the relationship with ground truth concepts and the ability to make accurate predictions for a downstream task. To this end, they propose two main metrics for representations that are explicitly or implicitly encouraged to encode concepts: 1) a score that captures how well the learned representation preserves the relationships between concepts (which may be correlated), and 2) a score that captures how well concepts can be split into groups that are useful/useless for predicting particular label dimensions.
SP:586c729c2c163cba6c8a0519dd853463bbc405b7
Implicit Jacobian regularization weighted with impurity of probability output
Gradient descent ( GD ) plays a crucial role in the success of deep learning , but it is still not fully understood how GD finds minima that generalize well . In many studies , GD has been understood as a gradient flow in the limit of vanishing learning rate . However , this approach has a fundamental limitation in explaining the oscillatory behavior with iterative catapults in a practical finite learning rate regime . To address this limitation , we instead start from strong empirical evidence of a plateau of the sharpness ( the top eigenvalue of the Hessian ) of the loss landscape . With this observation , we investigate the Hessian through simple and much lower-dimensional matrices . In particular , to analyze the sharpness , we explore the eigenvalue problem for a low-dimensional matrix that is a rank-one modification of a diagonal matrix . The eigendecomposition provides a simple relation between the eigenvalues of the low-dimensional matrix and the impurity of the probability output . We exploit this connection to derive a sharpness-impurity-Jacobian relation and to explain how the sharpness influences the learning dynamics and the generalization performance . In particular , we show that GD has implicit regularization effects on the Jacobian norm weighted with the impurity of the probability output . 1 INTRODUCTION . Deep learning has proven to be powerful for many learning tasks in various areas . There has been a lot of work to understand how the learning algorithm leads to this successful training of deep neural networks . In particular , it is crucial to understand the geometric properties of the loss landscape of neural networks and their interaction with gradient-based optimization methods , such as Stochastic Gradient Descent ( SGD ) , along the training trajectory . It has been studied both from the optimization ( Gur-Ari et al. , 2018 ; Jastrzębski et al. , 2019 ; Ghorbani et al. , 2019 ; Liu et al. , 2020 ; Lewkowycz et al.
, 2020 ; Cohen et al. , 2021 ) and generalization ( Hochreiter & Schmidhuber , 1997 ; Keskar et al. , 2017 ; Dinh et al. , 2017 ; Jastrzębski et al. , 2017 ; Wang et al. , 2018 ; Chaudhari et al. , 2019 ; Fort et al. , 2019 ; Jiang et al. , 2020 ; Barrett & Dherin , 2021 ; Smith et al. , 2021 ) points of view . We aim to investigate the Hessian of the training loss ( with respect to the model parameter ) and its top eigenvalue ( also called the sharpness ) . The sharpness characterizes the dynamics of neural network training along the optimization trajectory and is correlated with the generalization capability . For example , the sharpness increases in the beginning , and after it reaches a certain value , the training dynamics becomes unstable , oscillating along the top eigenvector ( Jastrzębski et al. , 2019 ; Cohen et al. , 2021 ) . Moreover , the rapid increase in the sharpness of the loss landscape in the early phase significantly impacts the final generalization performance ( Achille et al. , 2019 ; Jastrzebski et al. , 2020 ; Lewkowycz et al. , 2020 ; Jastrzebski et al. , 2021 ) . However , the Hessian of a deep neural network is very high-dimensional , which makes it difficult to analyze its eigensystem . Recently , some researchers studied the Hessian by exploiting tools from randomized numerical linear algebra ( Sagun et al. , 2017 ; Papyan , 2018 ; 2019 ; Ghorbani et al. , 2019 ; Yao et al. , 2020 ) and decompositions of the Hessian ( Papyan , 2018 ; 2019 ; Fort & Ganguli , 2019 ) . In this paper , we present a new decomposition of the Hessian using the eigendecomposition of low-dimensional matrices . From the eigensystem of the low-dimensional matrix , we can provide a simple and intuitive explanation of the relation between its eigenvalues and the probability output . This enables us to explain how the sharpness of the loss landscape influences the learning dynamics and the generalization performance .
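Since the sharpness is the top eigenvalue of a very high-dimensional Hessian , it is usually estimated iteratively rather than by a full eigendecomposition . A minimal numpy sketch of power iteration on a small stand-in matrix follows ; in practice the product `A @ v` would be a Hessian-vector product computed by automatic differentiation , and the toy matrix below is an assumption for illustration only .

```python
import numpy as np

def top_eig(A, iters=2000, seed=0):
    """Estimate the top eigenvalue of a symmetric matrix A by power iteration.
    For a real network, A @ v would be replaced by a Hessian-vector product."""
    v = np.random.default_rng(seed).normal(size=A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v @ A @ v              # Rayleigh quotient estimate of the sharpness

rng = np.random.default_rng(0)
B = rng.normal(size=(10, 10))
Hess = B @ B.T                    # small PSD stand-in for a loss Hessian
sharpness = top_eig(Hess)
```

The estimate converges to the largest eigenvalue because repeated multiplication amplifies the dominant eigendirection .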
We summarize the main contributions of the paper as follows :
• We decompose the Hessian with low-dimensional matrices , the logit Hessian and the logit-weight Jacobian ( defined in Definition 1 ) , and investigate the Hessian via the eigendecomposition of the logit Hessian , which is a rank-one modification of a diagonal matrix .
• We provide connections between the top eigenvalue of the logit Hessian and the impurity of the probability output .
• We derive a relation between the sharpness , the top eigenvalue of the logit Hessian , and the Jacobian . We call it the sharpness-impurity-Jacobian relation .
• We explain how the sharpness of the loss landscape influences the learning dynamics and the generalization performance . In particular , we find that gradient-based optimization has implicit effects on penalizing the Jacobian norm ( Implicit Jacobian Regularization ) in a certain phase of training ( Active Regularization Period ) .
2 RELATED WORK . We summarize some works on the Hessian , learning dynamics , and generalization of neural networks . In particular , we point out the issue of approximating SGD by a stochastic differential equation ( SDE ) , because a continuous flow cannot capture the oscillatory behavior of discrete updates with iterative catapults , which plays a key role in limiting the sharpness of the loss landscape . Decomposition of the Hessian Sagun et al . ( 2016 ; 2017 ) empirically found that the eigenvalue spectrum of the Hessian during training is composed of two parts : the bulk , which is concentrated around zero , and the outliers , which are scattered positively away from zero . They showed that the bulk depends on the size of the network and the outliers depend on the data . In particular , the number of outliers matches the number of classes of the data . Further , Papyan ( 2019 ) proposed a three-level hierarchical decomposition of the Hessian matrix according to each class , logit coordinate , and example .
However , using a different decomposition , we analyze the Hessian from another point of view . SGD as an SDE In many studies , SGD has been understood as an SDE in the limit of vanishing learning rate ( Mandt et al. , 2017 ; Li et al. , 2017b ; a ; Smith & Le , 2018 ; Chaudhari & Soatto , 2018 ; Jastrzębski et al. , 2017 ; Zhu et al. , 2019 ; Park et al. , 2019 ) . However , some theoretical concerns have been raised about such approximations ( Yaida , 2019 ) . Moreover , Barrett & Dherin ( 2021 ) argued that the SDE analysis in the limit of vanishing learning rate cannot explain the generalization benefits of finite learning rates , and they proposed a modified gradient flow for finite learning rates . However , they still consider a continuous gradient flow , and thus it has a fundamental limitation in explaining the oscillatory behavior with iterative catapults in a practical learning rate regime ( Smith et al. , 2021 ) , which will be detailed in the following paragraph . Oscillatory catapult and the plateau of the sharpness Xing et al . ( 2018 ) investigated the roles of learning rate and batch size in SGD dynamics by interpolating the loss landscape between consecutive model parameters during training . They observed that SGD explores the parameter space , bouncing between walls of valley-like regions . A large learning rate maintains a high valley height , and a small batch size induces gradient stochasticity . Both help exploration through the parameter space , with different roles in the training dynamics . Jastrzębski et al . ( 2019 ) empirically investigated the evolution of the sharpness ( the top eigenvalue of the Hessian ) along the whole training trajectory of SGD . They observed initial growth of the sharpness as the loss decreases , reaching a maximum sharpness determined by the learning rate and batch size , after which it decreases towards the end of training .
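The bouncing between valley walls described above can already be illustrated on a one-dimensional quadratic , a toy example that is an assumption of ours rather than a model from the paper : for f ( x ) = a x² / 2 , the GD update is x ← ( 1 − ηa ) x , which oscillates in sign but contracts if and only if the curvature a stays below 2/η .

```python
# GD on f(x) = a * x**2 / 2 performs x <- x - eta * a * x = (1 - eta * a) * x,
# so the iterates contract iff |1 - eta * a| < 1, i.e. iff a < 2 / eta.
def gd_final(a, eta, steps=50, x0=1.0):
    x = x0
    for _ in range(steps):
        x -= eta * a * x
    return x

eta = 0.1                            # stability threshold: 2 / eta = 20
stable = abs(gd_final(19.0, eta))    # a just below 2/eta: oscillates but shrinks
unstable = abs(gd_final(21.0, eta))  # a just above 2/eta: the iterates blow up
```

This is the discrete-update effect that a continuous gradient flow , which always decreases the loss , cannot reproduce .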
Due to the initial increase of the sharpness , the SGD step becomes too large compared to the shape of the loss landscape . This is consistent with the valley-like structure shown in Xing et al . ( 2018 ) . Lewkowycz et al . ( 2020 ) investigated simple theoretical models with solvable training dynamics . They showed that , in their setup with a large learning rate , the loss initially increases while the sharpness decreases , and then training converges to a flat minimum . This mechanism is called the catapult mechanism . Recently , Cohen et al . ( 2021 ) found that full-batch GD typically operates in a regime called the Edge of Stability , where the sharpness can no longer increase and stays near a certain value , and the training loss behaves nonmonotonically but decreases globally . This behavior of the optimization at the Edge of Stability can be seen as repeated catapult mechanisms . They explicitly marked the limit of the sharpness as 2/η ( η = learning rate ) . To describe the aforementioned evolution of the sharpness , Fort & Ganguli ( 2019 ) developed a theoretical model based on random matrix modelling . To build a simple random model , they assumed that gradients and Hessians are i.i.d. isotropic Gaussians with zero mean and varying variance during training . While they focus on building a random model based on the observation , we rather aim to explain the underlying mechanisms . Implicit bias in SGD There have been many studies on the implicit bias of SGD ( Neyshabur , 2017 ; Zhang et al. , 2021 ; Soudry et al. , 2018 ) . We review the most relevant and recent ones . Jastrzebski et al . ( 2021 ) empirically showed that SGD implicitly penalizes the trace of the Fisher Information Matrix ( FIM ) . They also showed that the trace of the FIM explodes in the early phase of training when using a small learning rate and called this catastrophic Fisher explosion . Barrett & Dherin ( 2021 ) ; Smith et al .
( 2021 ) demonstrated that SGD implicitly penalizes the norm of the total gradient and the non-uniformity of the minibatch gradients . We demonstrate that the ( logit-weight ) Jacobian plays an important role in the generalization performance in each case . 3 BACKGROUND . In this section , we provide some notations , basic equations and definitions for the following sections . Throughout the paper , we use the denominator layout notation for the vector derivatives , i.e. , ∇vu = ( ∂uj ∂vi ) ij ∈ Rv×u where u : Rv → Ru and v ∈ Rv . It is also generalized to the cases of scalar , u = 1 or v = 1 . We consider a problem of learning a C-class classifier which maps an input x ∈ X ⊂ Rd to a target label y ∈ [ C ] where [ C ] = { 1 , 2 , · · · , C } . To this end , we build a parameterized model fθ : X → Z ⊂ RC with a model parameter θ ∈ Θ ⊂ Rm which outputs a logit vector z ≡ fθ ( x ) ∈ Z ⊂ RC ( we often omit the dependence on the input x and the model parameter θ ) . Then , the logit vector z is given as input to the softmax function to yield a probability vector p = softmax ( z ) ∈ ∆C−1 where ∆C−1 = { p ∈ [ 0 , 1 ] C : 1Tp = 1 , p ≥ 0 } . We want the model to match the most probable class c1 to the true label y , where c ( x ) ≡ arg sort ( p ) in descending order . We exchangeably denote the probability value corresponding to the true label y as p ≡ py ∈ [ 0 , 1 ] . The cross-entropy loss , l = l ( z , y ) ∈ R , is equivalent to the negative log-likelihood l = − log p. We use the notations ‖ · ‖ for the Euclidean ` 2-norm of a vector or for the Euclidean operator norm of a matrix ( equivalently , ‖ · ‖σ for a square matrix ) , ‖ · ‖F for the Frobenius norm , and tr ( · ) for the trace of a ( square ) matrix . 
Starting with a simple computation of the derivatives of the softmax function , Eq ( 1 ) ( see Appendix A ) , we can easily derive the following equations in order : ∇zp = diag ( p ) − ppT ∈ RC×C ( 1 ) ∇zp = [ ∇zp ] : ,y = p ( ey − p ) ∈ RC ( 2 ) ∇zl = ∇zp ∂l ∂p = p ( ey − p ) · −1 p = p− ey ∈ RC ( 3 ) ∇2zl = ∇z ( ∇zl ) = ∇z ( p− ey ) = diag ( p ) − ppT ∈ RC×C ( 4 ) where diag ( p ) = ( δijpi ) ij ∈ RC×C is a diagonal matrix with p as its diagonal entries , and ei = ( δij ) j ∈ RC is a one-hot vector with i-th element as 1 . Next , the Hessian of the loss function l for given example x with respect to the model parameter can be expressed as follows : ∇2θl = ∇θz∇2zl∇θzT + ∑C j=1 ∇2θzj∇zj l ≈ ∇θz∇2zl∇θzT ∈ Rm×m ( 5 ) using a well-known Gauss-Newton approximation . ( see , for example , Schraudolph ( 2002 ) ) . Now , we are ready to consider the training loss for the training set D. We compute the total training loss over D as L = 〈l〉 which yields∇L = 〈∇l〉 and∇2L = 〈∇2l〉 where 〈·〉 is the expectation over the empirical measure of the training set D , equivalently say ÊD [ · ] . We use the notation 〈·〉S when averaging over a subset S . Following from Eq ( 4 ) and Eq ( 5 ) , we define the Hessian matrixH for the total loss and its Gauss-Newton approximation matrixG with the matricesM and J as follows : Definition 1 . We callM the logit Hessian , J the Jacobian ( of the logit function with respect to the model parameter ) , H the Hessian , andG the Gauss-Newton approximation defined as follows : M ≡ ∇2zl = diag ( p ) − ppT ∈ RC×C ( 6 ) J ( = Jzθ ) ≡ ∇θz ∈ Rm×C ( 7 ) H ≡ 〈∇2θl〉 ≈ 〈JMJT 〉 ≡ G ∈ Rm×m ( 8 ) It is interesting to note that while l is dependent on the true label y , the logit HessianM = ∇2zl is independent of y , and so are J , JMJT , andG . In case of the MSE loss l = 12‖z − e y‖2 , we have M = ∇2zl = I and G = 〈JJT 〉 . We mainly focus on the usual cross-entropy loss and defer the investigation on the MSE loss to Appendix M. 
From Eq ( 8 ) , we will often use the approximation ‖H‖σ ≈ ‖G‖σ as justified in Sagun et al . ( 2017 ) ; Fort & Ganguli ( 2019 ) , but this approximation sometimes fails in the later phase of training when the top eigenvalues of the Gauss-Newton matrix is not sufficiently isolated from the bulk near 0 ( Papyan , 2018 ) . Thus we mainly focus on the early phase of training .
In this paper, the relationship between the Jacobian (the gradient of the final activation w.r.t. the parameters) and the Hessian is analyzed for the softmax cross-entropy loss. As a key tool, the Hessian is approximated through the probability vector (the softmax output), which suggests connections with several optimization (and generalization) concepts such as sharpness vs. flatness of the loss landscape and the oscillation along the optimization path of GD/SGD.
SP:0134f562a484fc11e69847eb132d866e55fad86f
Implicit Jacobian regularization weighted with impurity of probability output
Gradient descent (GD) plays a crucial role in the success of deep learning, but it is still not fully understood how GD finds minima that generalize well. In many studies, GD has been understood as a gradient flow in the limit of vanishing learning rate. However, this approach has a fundamental limitation in explaining the oscillatory behavior with iterative catapults in the practical finite-learning-rate regime. To address this limitation, we instead start from strong empirical evidence of a plateau of the sharpness (the top eigenvalue of the Hessian) of the loss landscape. With this observation, we investigate the Hessian through simple and much lower-dimensional matrices. In particular, to analyze the sharpness, we study the eigenvalue problem for a low-dimensional matrix which is a rank-one modification of a diagonal matrix. Its eigendecomposition provides a simple relation between the eigenvalues of the low-dimensional matrix and the impurity of the probability output. We exploit this connection to derive a sharpness-impurity-Jacobian relation and to explain how the sharpness influences the learning dynamics and the generalization performance. In particular, we show that GD has an implicit regularization effect on the Jacobian norm weighted with the impurity of the probability output. 1 INTRODUCTION. Deep learning has proven powerful for many learning tasks in various areas. There has been a lot of work to understand how the learning algorithm leads to this successful training of deep neural networks. In particular, it is crucial to understand the geometric properties of the loss landscape of neural networks and their interaction with gradient-based optimization methods, such as Stochastic Gradient Descent (SGD), along the training trajectory. It has been studied both from the optimization (Gur-Ari et al., 2018; Jastrzębski et al., 2019; Ghorbani et al., 2019; Liu et al., 2020; Lewkowycz et al.
, 2020; Cohen et al., 2021) and generalization (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017; Dinh et al., 2017; Jastrzębski et al., 2017; Wang et al., 2018; Chaudhari et al., 2019; Fort et al., 2019; Jiang et al., 2020; Barrett & Dherin, 2021; Smith et al., 2021) points of view. We aim to investigate the Hessian of the training loss (with respect to the model parameter) and its top eigenvalue (also called the sharpness). The sharpness characterizes the dynamics of neural network training along the optimization trajectory and is correlated with the generalization capability. For example, the sharpness increases in the beginning, and after it reaches a certain value, the training dynamics becomes unstable, oscillating along the top eigenvector (Jastrzębski et al., 2019; Cohen et al., 2021). Moreover, the rapid increase in the sharpness of the loss landscape in the early phase significantly impacts the final generalization performance (Achille et al., 2019; Jastrzebski et al., 2020; Lewkowycz et al., 2020; Jastrzebski et al., 2021). However, the Hessian of a deep neural network is very high-dimensional, which makes it difficult to analyze its eigensystem. Recently, some researchers have studied the Hessian by exploiting tools from randomized numerical linear algebra (Sagun et al., 2017; Papyan, 2018; 2019; Ghorbani et al., 2019; Yao et al., 2020) and decompositions of the Hessian (Papyan, 2018; 2019; Fort & Ganguli, 2019). In this paper, we present a new decomposition of the Hessian using the eigendecomposition of low-dimensional matrices. From the eigensystem of the low-dimensional matrix, we can provide a simple and intuitive explanation of the relation between its eigenvalues and the probability output. This enables us to explain how the sharpness of the loss landscape influences the learning dynamics and the generalization performance.
We summarize the main contributions of the paper as follows:
• We decompose the Hessian into low-dimensional matrices, the logit Hessian and the logit-weight Jacobian (defined in Definition 1), and investigate the Hessian via the eigendecomposition of the logit Hessian, which is a rank-one modification of a diagonal matrix.
• We provide connections between the top eigenvalue of the logit Hessian and the impurity of the probability output.
• We derive a relation between the sharpness, the top eigenvalue of the logit Hessian and the Jacobian. We call it the sharpness-impurity-Jacobian relation.
• We explain how the sharpness of the loss landscape influences the learning dynamics and the generalization performance. In particular, we find that gradient-based optimization has an implicit effect of penalizing the Jacobian norm (Implicit Jacobian Regularization) in a certain phase of training (Active Regularization Period).
2 RELATED WORK. We summarize some works on the Hessian, learning dynamics, and generalization of neural networks. In particular, we point out the issue of approximating SGD by a stochastic differential equation (SDE), because a continuous flow cannot capture the oscillatory behavior of discrete updates with iterative catapults, which plays a key role in limiting the sharpness of the loss landscape. Decomposition of the Hessian. Sagun et al. (2016; 2017) empirically found that the eigenvalue spectrum of the Hessian during training is composed of two parts: the bulk, which is concentrated around zero, and the outliers, which are scattered positively away from zero. They showed that the bulk depends on the size of the network and the outliers depend on the data. In particular, the number of outliers matches the number of classes of the data. Further, Papyan (2019) proposed a three-level hierarchical decomposition of the Hessian matrix according to class, logit coordinate, and example.
In contrast, using a different decomposition, we analyze the Hessian from another point of view. SGD as an SDE. In many studies, SGD has been understood as an SDE in the limit of vanishing learning rate (Mandt et al., 2017; Li et al., 2017b;a; Smith & Le, 2018; Chaudhari & Soatto, 2018; Jastrzębski et al., 2017; Zhu et al., 2019; Park et al., 2019). However, some theoretical concerns have been raised about such approximations (Yaida, 2019). Moreover, Barrett & Dherin (2021) argued that the SDE analysis in the limit of vanishing learning rate cannot explain the generalization benefits of finite learning rates, and they proposed a modified gradient flow for finite learning rates. However, they still consider a continuous gradient flow, which has a fundamental limitation in explaining the oscillatory behavior with iterative catapults in the practical learning-rate regime (Smith et al., 2021), as detailed in the following paragraph. Oscillatory catapult and the plateau of the sharpness. Xing et al. (2018) investigated the roles of learning rate and batch size in SGD dynamics by interpolating the loss landscape between consecutive model parameters during training. They observed that SGD explores the parameter space, bouncing between the walls of valley-like regions. A large learning rate maintains a high valley height, and a small batch size induces gradient stochasticity; both help exploration of the parameter space, with different roles in the training dynamics. Jastrzębski et al. (2019) empirically investigated the evolution of the sharpness (the top eigenvalue of the Hessian) along the whole training trajectory of SGD. They observed an initial growth of the sharpness as the loss decreases, reaching a maximum sharpness determined by the learning rate and batch size, after which it decreases towards the end of training.
Due to the initial increase of the sharpness, the SGD step becomes too large compared to the shape of the loss landscape. This is consistent with the valley-like structure shown in Xing et al. (2018). Lewkowycz et al. (2020) investigated simple theoretical models with solvable training dynamics. They showed that, in their setup with a large learning rate, the loss initially increases while the sharpness decreases, after which training converges to a flat minimum. This mechanism is called the catapult mechanism. Recently, Cohen et al. (2021) found that full-batch GD typically operates in a regime called the Edge of Stability, where the sharpness can no longer increase and stays near a certain value, and the training loss behaves nonmonotonically but decreases globally. This behavior of the optimization at the Edge of Stability can be seen as repeated catapult mechanisms. They explicitly marked the limit of the sharpness as 2/η (η = learning rate). To describe the aforementioned evolution of the sharpness, Fort & Ganguli (2019) developed a theoretical model based on random matrix modelling. To build a simple random model, they assumed that gradients and Hessians are i.i.d. isotropic Gaussian with zero mean and a variance that varies during training. While they focus on building a random model based on the observation, we instead aim to explain the underlying mechanisms. Implicit bias in SGD. There have been many studies on the implicit bias of SGD (Neyshabur, 2017; Zhang et al., 2021; Soudry et al., 2018). We review the most relevant and recent ones. Jastrzebski et al. (2021) empirically showed that SGD implicitly penalizes the trace of the Fisher Information Matrix (FIM). They also showed that the trace of the FIM explodes in the early phase of training when using a small learning rate, and called this catastrophic Fisher explosion. Barrett & Dherin (2021); Smith et al.
(2021) demonstrated that SGD implicitly penalizes the norm of the total gradient and the non-uniformity of the minibatch gradients. We demonstrate that the (logit-weight) Jacobian plays an important role in the generalization performance in each case. 3 BACKGROUND. In this section, we provide notations, basic equations and definitions for the following sections. Throughout the paper, we use the denominator-layout notation for vector derivatives, i.e., $\nabla_{\mathbf{v}} \mathbf{u} = (\partial u_j / \partial v_i)_{ij} \in \mathbb{R}^{v \times u}$ where $\mathbf{u}: \mathbb{R}^v \to \mathbb{R}^u$ and $\mathbf{v} \in \mathbb{R}^v$; this also covers the scalar cases $u = 1$ or $v = 1$. We consider the problem of learning a $C$-class classifier which maps an input $x \in \mathcal{X} \subset \mathbb{R}^d$ to a target label $y \in [C]$ where $[C] = \{1, 2, \cdots, C\}$. To this end, we build a parameterized model $f_\theta: \mathcal{X} \to \mathcal{Z} \subset \mathbb{R}^C$ with model parameter $\theta \in \Theta \subset \mathbb{R}^m$ which outputs a logit vector $z \equiv f_\theta(x) \in \mathcal{Z} \subset \mathbb{R}^C$ (we often omit the dependence on the input $x$ and the model parameter $\theta$). The logit vector $z$ is then given as input to the softmax function to yield a probability vector $\mathbf{p} = \mathrm{softmax}(z) \in \Delta^{C-1}$, where $\Delta^{C-1} = \{\mathbf{p} \in [0,1]^C : \mathbf{1}^T \mathbf{p} = 1, \mathbf{p} \ge 0\}$. We want the model to match the most probable class $c_1$ to the true label $y$, where $c(x) \equiv \operatorname{arg\,sort}(\mathbf{p})$ in descending order. We interchangeably denote the probability value corresponding to the true label $y$ as $p \equiv p_y \in [0,1]$. The cross-entropy loss $l = l(z, y) \in \mathbb{R}$ is equivalent to the negative log-likelihood $l = -\log p$. We use the notations $\|\cdot\|$ for the Euclidean $\ell_2$-norm of a vector or the Euclidean operator norm of a matrix (equivalently, $\|\cdot\|_\sigma$ for a square matrix), $\|\cdot\|_F$ for the Frobenius norm, and $\mathrm{tr}(\cdot)$ for the trace of a (square) matrix.
Starting with a simple computation of the derivative of the softmax function, Eq. (1) (see Appendix A), we can easily derive the following equations in order:
$$\nabla_z \mathbf{p} = \mathrm{diag}(\mathbf{p}) - \mathbf{p}\mathbf{p}^T \in \mathbb{R}^{C \times C} \quad (1)$$
$$\nabla_z p = [\nabla_z \mathbf{p}]_{:,y} = p\,(e_y - \mathbf{p}) \in \mathbb{R}^{C} \quad (2)$$
$$\nabla_z l = \nabla_z p \, \frac{\partial l}{\partial p} = p\,(e_y - \mathbf{p}) \cdot \frac{-1}{p} = \mathbf{p} - e_y \in \mathbb{R}^{C} \quad (3)$$
$$\nabla_z^2 l = \nabla_z(\nabla_z l) = \nabla_z(\mathbf{p} - e_y) = \mathrm{diag}(\mathbf{p}) - \mathbf{p}\mathbf{p}^T \in \mathbb{R}^{C \times C} \quad (4)$$
where $\mathrm{diag}(\mathbf{p}) = (\delta_{ij} p_i)_{ij} \in \mathbb{R}^{C \times C}$ is a diagonal matrix with $\mathbf{p}$ as its diagonal entries, and $e_i = (\delta_{ij})_j \in \mathbb{R}^C$ is a one-hot vector whose $i$-th element is 1. Next, the Hessian of the loss function $l$ for a given example $x$ with respect to the model parameter can be expressed as
$$\nabla_\theta^2 l = \nabla_\theta z \, \nabla_z^2 l \, \nabla_\theta z^T + \sum_{j=1}^{C} \nabla_\theta^2 z_j \, \nabla_{z_j} l \approx \nabla_\theta z \, \nabla_z^2 l \, \nabla_\theta z^T \in \mathbb{R}^{m \times m} \quad (5)$$
using the well-known Gauss-Newton approximation (see, for example, Schraudolph (2002)). Now we are ready to consider the training loss for the training set $\mathcal{D}$. We compute the total training loss over $\mathcal{D}$ as $L = \langle l \rangle$, which yields $\nabla L = \langle \nabla l \rangle$ and $\nabla^2 L = \langle \nabla^2 l \rangle$, where $\langle \cdot \rangle$ is the expectation over the empirical measure of the training set $\mathcal{D}$, equivalently $\hat{\mathbb{E}}_{\mathcal{D}}[\cdot]$. We use the notation $\langle \cdot \rangle_S$ when averaging over a subset $S$. Following from Eq. (4) and Eq. (5), we define the Hessian matrix $H$ for the total loss and its Gauss-Newton approximation matrix $G$ with the matrices $M$ and $J$ as follows: Definition 1. We call $M$ the logit Hessian, $J$ the Jacobian (of the logit function with respect to the model parameter), $H$ the Hessian, and $G$ the Gauss-Newton approximation, defined as
$$M \equiv \nabla_z^2 l = \mathrm{diag}(\mathbf{p}) - \mathbf{p}\mathbf{p}^T \in \mathbb{R}^{C \times C} \quad (6)$$
$$J \,(= J_{z\theta}) \equiv \nabla_\theta z \in \mathbb{R}^{m \times C} \quad (7)$$
$$H \equiv \langle \nabla_\theta^2 l \rangle \approx \langle J M J^T \rangle \equiv G \in \mathbb{R}^{m \times m} \quad (8)$$
It is interesting to note that while $l$ depends on the true label $y$, the logit Hessian $M = \nabla_z^2 l$ is independent of $y$, and so are $J$, $JMJ^T$, and $G$. In the case of the MSE loss $l = \frac{1}{2}\|z - e_y\|^2$, we have $M = \nabla_z^2 l = I$ and $G = \langle JJ^T \rangle$. We mainly focus on the usual cross-entropy loss and defer the investigation of the MSE loss to Appendix M.
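As a quick numerical sanity check on Eqs. (3), (4) and (6), the analytic gradient $\mathbf{p} - e_y$ can be compared against finite differences of the cross-entropy loss, and the logit Hessian $\mathrm{diag}(\mathbf{p}) - \mathbf{p}\mathbf{p}^T$ can be verified to be symmetric positive semi-definite. This is an illustrative sketch, not code from the paper:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def cross_entropy(z, y):
    return -np.log(softmax(z)[y])  # l = -log p_y

C, y = 5, 2
rng = np.random.default_rng(0)
z = rng.normal(size=C)
p = softmax(z)
e_y = np.eye(C)[y]

# Eq (3): nabla_z l = p - e_y, checked by central finite differences
eps = 1e-6
num_grad = np.array([(cross_entropy(z + eps * np.eye(C)[j], y)
                      - cross_entropy(z - eps * np.eye(C)[j], y)) / (2 * eps)
                     for j in range(C)])
assert np.allclose(p - e_y, num_grad, atol=1e-5)

# Eq (4)/(6): the logit Hessian M = diag(p) - p p^T does not involve y
M = np.diag(p) - np.outer(p, p)
assert np.allclose(M, M.T)                      # symmetric
assert np.all(np.linalg.eigvalsh(M) > -1e-12)   # positive semi-definite
```

Note that $M$ always annihilates the all-ones vector ($M\mathbf{1} = \mathbf{p} - \mathbf{p}(\mathbf{p}^T\mathbf{1}) = 0$), consistent with its rank being at most $C - 1$.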
From Eq. (8), we will often use the approximation $\|H\|_\sigma \approx \|G\|_\sigma$, as justified in Sagun et al. (2017); Fort & Ganguli (2019), but this approximation sometimes fails in the later phase of training, when the top eigenvalues of the Gauss-Newton matrix are not sufficiently isolated from the bulk near 0 (Papyan, 2018). Thus we mainly focus on the early phase of training.
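A practical payoff of the structure in Eq. (8) is that, for a single example, the nonzero eigenvalues of the $m \times m$ matrix $JMJ^T$ coincide with those of the $C \times C$ matrix $MJ^TJ$ (the nonzero spectra of $AB$ and $BA$ always agree), so the per-example sharpness can be probed through a $C$-dimensional eigenvalue problem. The following is a minimal numpy illustration of this standard fact with a random stand-in for $J$; the paper's own low-dimensional decomposition may differ in detail:

```python
import numpy as np

rng = np.random.default_rng(1)
m, C = 200, 5                       # parameter dim >> number of classes
p = rng.dirichlet(np.ones(C))       # a probability output on the simplex
M = np.diag(p) - np.outer(p, p)     # logit Hessian, PSD of rank C - 1
J = rng.normal(size=(m, C))         # stand-in for the logit-weight Jacobian

G_big = J @ M @ J.T                 # m x m Gauss-Newton term (expensive to eigendecompose)
G_small = M @ (J.T @ J)             # C x C matrix sharing the nonzero spectrum

top_big = np.linalg.eigvalsh(G_big).max()
top_small = np.linalg.eigvals(G_small).real.max()
assert np.isclose(top_big, top_small, rtol=1e-8)
```

The same trick underlies analyzing the sharpness through low-dimensional matrices: the top eigenvalue of the huge Gauss-Newton term is available from a matrix whose size is the number of classes.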
The authors study the largest eigenvalue and eigenvector of the Hessian of the loss function. The authors approximate the Hessian matrix by a low-dimensional matrix, which is a rank-one modification of a diagonal matrix. The eigendecomposition helps to explain how the sharpness influences the gradient descent method and the generalization properties. They show that GD has implicit regularization effects on the Jacobian norm weighted with the impurity of the probability output, which is also related to the Fisher information matrix.
SP:0134f562a484fc11e69847eb132d866e55fad86f
White Paper Assistance: A Step Forward Beyond the Shortcut Learning
1 INTRODUCTION. We don't see things as they are; we see them as we are. –An Old Proverb. These words give us insight into the predictable irrationalities of the human mind. Individuals always create their own "subjective reality" from their perception. Psychological research (Haselton et al., 2015; Zhang et al., 2007; Shafer et al., 1984; Kahneman & Tversky, 1996) terms this systematic, irrational, unconscious error that can dramatically alter the way we perceive the world "cognitive bias". Similarly to human behavior, convolutional neural networks may also develop their own biases during training, by learning "shortcuts" (Geirhos et al., 2020) (also known as "spurious cues" (Hendrycks et al., 2021) or "superficial correlations" (Jo & Bengio, 2017; Pezeshki et al., 2020)) which perform well on the existing test data but fail dramatically under more general settings. There is a large volume of published studies describing and analyzing this learning dynamic. In this work, we adopt the gradient starvation hypothesis, proposed in (des Combes et al., 2018; Pezeshki et al., 2020), that the leading cause of this feature imbalance is that the neural network is biased towards capturing statistically dominant features in the data, so that it starves the learning of other very informative but less frequent features. With this in mind, a natural question is how to favor generalizable features over shortcuts. The most reasonable and direct way seems to be to identify which features contain shortcuts (like green for frogs) and which features should be enhanced (like shapes for animals). Unfortunately, most patterns that CNNs rely on to classify do not appear in a form amenable to discovery, and enhancing specific features requires specific expert knowledge, let alone extensive manpower and resources. Luckily, CNNs are not alone with this issue.
Very much like networks submitting to spurious preferences, a printer sometimes uses an unintended color to represent the intended color. In the real world, we call this the color cast problem. When a colored image is fed into a printer, the printer has to perceive it and then duplicate it using the right colors. The color cast problem thereby indicates a wrong propensity of color use. In practice, when we suspect that our printer has a color cast problem, we usually let it print a white paper. Once this white paper is printed in other colors, the color cast is detected and we need to seek corresponding solutions. Put another way, the white paper serves as a perfect indicator of the color cast problem. This common sense motivates us to exploit the white paper to regularize the model. Intuitively, the white paper does not belong to any of the classes the model has learned from whichever benchmark dataset. An idealized model should therefore give an inference result that is almost as if it makes a random guess, to demonstrate that it does not mistake this sample for any class it has learned. Consequently, when discovering a difference between the intended and actual outcome, we know that the model has some unintended generalization directions, which should be thought of as a consequence of shortcut learning. Simply put, the white paper can also act as a "test paper" for detecting dominant patterns. The experimental results in Section 3 will show that the use of the white paper is an effective and universal choice, even when compared with some real datasets. Leveraging the superior ability of the white paper in detecting dominant patterns, we derive an interesting and effective regularizer called White Paper Assistance, which alleviates the excessive reliance on these dominant features by repeatedly enforcing the model to make a random guess on the white paper.
Our method does not require any further supervision on the bias, such as explicit labels of misleadingly correlated attributes (Kim et al., 2019; Li & Vasconcelos, 2019; Sagawa et al., 2019) or domain-specific bias-tailored training techniques (Wang et al., 2019; Geirhos et al., 2019; Li et al., 2020). Moreover, despite the simplicity of implementing the white paper, our method can effectively improve the model's generalization ability and help produce better performance. Since the whole algorithm does not entail any modification of model architectures or any interference with the original training, it can be easily assembled into various CNNs as a "plug-and-play" component, which significantly promotes its value in practice. Here we summarize our contributions:
• We propose a novel method called White Paper Assistance to alleviate shortcut learning. Our method does not require modifying the network and is easily implementable on any modern neural architecture.
• We show the superior ability of the white paper in detecting dominant patterns.
• We experiment with various architectures, different benchmark datasets, and different combinations of techniques to show the wide applicability and compatibility of our method.
• We test our method on imbalanced classification and robustness against corruptions to demonstrate its versatility.
2 RELATED WORK. With the emergence of deep learning, numerous astonishing stories (He et al., 2016; Chen et al., 2017; Yun et al., 2019; Radosavovic et al., 2020) about the tremendous performance of CNNs have rapidly spread all over the field. However, despite the ever-increasing pace, CNNs share the same vulnerability as the human cognitive system: bias. It has been demonstrated that models may learn spurious shortcut correlations, which may be sufficient to solve a training task but clearly lack generalization utility.
For example, a model identifies cows in "common" (e.g., pasture) contexts correctly but fails to classify cows in "uncommon" (e.g., beach) contexts (Beery et al.). Standard ImageNet-trained models prefer to label a cat image with elephant skin texture as elephant instead of cat (Geirhos et al., 2019). Such phenomena (Nguyen et al., 2014; Wichmann et al., 2010; Ribeiro et al., 2016) exemplify the contradiction between shortcut correlations and the human-intended generalization. As an active line of research, numerous studies have provided different explanations for this phenomenon (Nasim et al., 2019; Xu et al., 2019; Parascandolo et al., 2020). For example, Valle-Perez et al. (2019) suggest that the parameter-function map of networks is biased towards simple functions. Kalimeris et al. justify the simplicity bias further by showing that SGD learns functions of increasing complexity. Hermann & Lampinen (2020) demonstrate that the model is "lazy" in that it favors an easier-to-extract feature over a more predictive one. In this paper, we follow the explanation proposed in (des Combes et al., 2018; Pezeshki et al., 2020) and argue that the rationale behind this learning proclivity for shortcuts is the propensity of the model to capture statistically dominant features in the data, rendering it unable to discover other predictive features. Recent studies related to shortcut removal usually require extra supervision (Kim et al., 2019; Sagawa et al., 2019). Li & Vasconcelos (2019) explicitly add color bias as side information to an unbiased dataset of grayscale images. Geirhos et al. (2019) use style transfer to synthesize data to help generate a more preferable shape-based representation. Li et al. (2020) further provide supervision from both shape and texture when generating cue-conflict images, leading to better feature representations.
Instead of leveraging laborious and expensive supervision, our method uses common sense, leveraging the white paper to detect the dominant patterns. 3 OUR METHOD. The algorithm we propose is a conceptually simple and plug-and-play method that can be easily integrated into various CNN models without changing the learning strategy. The pseudo-code of White Paper Assistance is shown in Algorithm 1. Generally speaking, the aim of this algorithm is to detect and conquer. Detect: at each epoch of training, the probability of conducting White Paper Assistance is P (and 1 − P of skipping it). When applying it, a batch of white paper images is fed into the model and we obtain the normalized output distribution (using softmax) p. This distribution represents the model's perception of the white paper and, more importantly, the model's propensity for unintended patterns. Conquer: as discussed before, since the white paper does not belong to any class the model has learned, the model should give an inference result that is almost as if it makes a random guess, to demonstrate it is not biased towards any pattern. In the case of multi-class classification with N classes, the ideal prediction probability distribution for the white paper is q = [1/N, 1/N, ..., 1/N]. Hence, to measure the match between the two predictions p and q, we adopt the Kullback-Leibler divergence:
L_wp = λ · D_KL(p ‖ q) (1)
where λ denotes the strength of White Paper Assistance. We then repeat this process for M iterations in the hope of alleviating this unintended propensity (we explain the reason for repeating in Appendix B).
Algorithm 1: Pseudo-code of White Paper Assistance
1: for each epoch do
2:   train on real images using the original loss function
3:   update model parameters
4:   draw r ← Rand(0, 1)               ▷ White Paper Assistance starts here
5:   if r < P then
6:     for each iteration ∈ [1, M] do
7:       generate a batch of white pictures W
8:       p ← Model(W)                  ▷ white paper training
9:       update model parameters by Eq. (1)
10:    end for
11:  end if                            ▷ White Paper Assistance ends here
12: end for
There are two important questions in designing the above algorithm. Q1. Does White Paper Assistance indeed alleviate shortcut learning? Q2. Why choose the white paper? To answer the first question, we evaluated our method in a controlled experimental setup, by adding synthetic shortcuts to the data. Specifically, we added a 4×4 black square block in the top-left corner of each training and testing sample of the first class (apple) of CIFAR100 (we refer to this modified dataset as Shortcut-CIFAR100). When trained on Shortcut-CIFAR100, this small block allows a network to achieve a negligible loss by only learning to discriminate this block at the same position while ignoring other information. Therefore, after training on Shortcut-CIFAR100, the network will exhibit a strong propensity to identify a picture with a small black block in its top-left corner as "apple" if it suffers from shortcut learning. We then designed a new testing scenario where we extracted all the testing samples from the other 99 classes (except apple) and added a small black block at the same position on each of them (we term this CIFAR99). On CIFAR99, only the remaining 99 classes were modified. Once the network excessively relies on the decision rule that connects "images with a black block" with the class "apple", it will demonstrate a strong propensity to identify the samples in CIFAR99 as "apple", which results in lower accuracy. In short, the performance on CIFAR99 reveals how well the model resists the propensity of shortcut learning. Table 1 presents the performance of models that were trained on Shortcut-CIFAR100 and tested on both Shortcut-CIFAR100 and CIFAR99.
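The Shortcut-CIFAR100/CIFAR99 construction just described can be sketched in a few lines. This is a toy numpy version; the NHWC array layout, the float pixel range and the helper name are our assumptions, not details from the paper:

```python
import numpy as np

def add_shortcut(images, labels, target_class=0, block=4):
    """Paint a block x block black square in the top-left corner of every
    image of target_class (the 'apple' shortcut of Shortcut-CIFAR100)."""
    out = images.copy()
    out[labels == target_class, :block, :block, :] = 0.0
    return out

rng = np.random.default_rng(0)
imgs = rng.uniform(0.1, 1.0, size=(6, 32, 32, 3))  # toy 32x32 RGB batch
labels = np.array([0, 3, 0, 7, 1, 2])

# Shortcut-CIFAR100: only class-0 ("apple") samples carry the block
shortcut = add_shortcut(imgs, labels)

# CIFAR99: test samples of the *other* classes get the same block
cifar99 = imgs[labels != 0].copy()
cifar99[:, :4, :4, :] = 0.0

assert (shortcut[0, :4, :4] == 0).all()   # class-0 images get the block
assert (shortcut[1] == imgs[1]).all()     # other classes untouched
```

A model that latched onto the block-means-apple rule will then misclassify the CIFAR99 images, which is exactly what the accuracy drop in Table 1 measures.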
As we can see, WP improves the model's generalization ability on Shortcut-CIFAR100 as usual. We want to highlight the huge improvement WP achieves on CIFAR99, where models without WP demonstrate a strong propensity to misidentify images when they exhibit patterns similar to those in other classes. To verify that WP indeed learns to recognize more informative features, we plot the activation maps of all the models trained on Shortcut-CIFAR100. Figure 1(b) demonstrates the effectiveness of WP in combating shortcut learning: after training on Shortcut-CIFAR100, the model without WP fixates on the small block in the upper-left corner, while applying WP helps the model focus on more discriminative features. We also include spectral decoupling regularization (SD) (Pezeshki et al., 2020) and LfF (Nam et al., 2020) as comparisons. Both SD and LfF improve the model's generalization ability (higher accuracy on Shortcut-CIFAR100), but SD fails dramatically at overcoming the shortcut decision rule (we hypothesize that this would be relieved by the advanced variant of SD that imposes the penalty separately for each class, but with 100 classes it would entail a massive increase in hyperparameters, at least 100). In short, the results of this conceptual experiment answer the first question positively and manifest the ability of WP to restrain excessive reliance on dominant patterns when classifying. Regarding the second question, it is tempting to expect that there are one or more ideal images that not only do not belong to the distribution of the training data, but are also able to detect all the unintended dominant patterns. Alas, to find such images precisely would require us to know which patterns CNNs rely on, which is hard because patterns do not appear in a form amenable to discovery... so this is not a viable option.
Intriguingly , over all the alternative option , the solution with the white paper works best . As in Figure 2 , four candidates were evaluated , namely “ Gaussian Noise ” , “ Icecream ” , “ CIFAR-10 ” , and “ White Paper ” . We keep all the other implementation details unchanged and merely modified the images while training ResNet-56 on CIFAR-100 . Specifically , “ Gaussian Noise ” experiments represent that we changed the white papers into images sampled from a standard normal distribution . “ Ice-cream ” denotes the whole ice-cream class of images from ImageNet while “ CIFAR-10 ” denotes that all the images from CIFAR-10 were used for detection . Extensive details to facilitate replication are provided in the Appendix.C Even with noise-generated images , there is still a performance boost over the vanilla model . Then with the increasing number of real-world images , the performances get higher . But white paper outperforms all the other solution . A possible explanation for this might be that the uninformative nature of the white paper seems to make it more suitable for detecting spurious dominant patterns , since the lack of semantics itself means no bias towards any pattern . Just like coloring on this white paper , the extent to which some pattern plays a dominant role for a class will be shown on the output distribution of the white paper . We also want to note that all the alternatives outperform the vanilla setting , indicating that the effectiveness of the whole detect-and-conquer practice .
The paper proposes a novel method, White Paper Assistant (WP), to prevent CNNs from utilizing spurious input-out correlations, the so-called shortcuts, in classification. The main idea is to intermittently update the CNN to predict uniform distribution over classes for white image inputs. Through careful and extensive empirical studies on various datasets, the paper shows that this simple regularization can prevent the CNN from excessively focusing on shortcuts, thus learning more generalizable features and improving the overall accuracy on the test set.
SP:3a268e208ecbf0a5dfa031a6bf54314f5df558c9
White Paper Assistance: A Step Forward Beyond the Shortcut Learning
1 INTRODUCTION. We don't see things as they are; we see them as we are. –An Old Proverb. These words give us insight into the predictable irrationalities of the human mind. Individuals always create their own "subjective reality" from their perception. Psychological research (Haselton et al., 2015; Zhang et al., 2007; Shafer et al., 1984; Kahneman & Tversky, 1996) terms this systematic, irrational, unconscious error that can dramatically alter the way we perceive the world "cognitive bias". Much like humans, convolutional neural networks may also develop their own biases during training, by learning "shortcuts" (Geirhos et al., 2020) (also known as "spurious cues" (Hendrycks et al., 2021) or "superficial correlations" (Jo & Bengio, 2017; Pezeshki et al., 2020)) that perform well on the existing test data but fail dramatically in more general settings. There is a large volume of published work describing and analyzing this learning dynamic. In this work, we adopt the gradient starvation hypothesis, proposed in (des Combes et al., 2018; Pezeshki et al., 2020): the leading cause of this feature imbalance is that the neural network is biased towards capturing statistically dominant features in the data, which starves the learning of other highly informative but less frequent features. With this in mind, a natural question is how to favor generalizable features over shortcuts. The most direct route would seem to be identifying which features contain shortcuts (like green for frogs) and which features should be enhanced (like shapes for animals). Unfortunately, most patterns that CNNs rely on to classify do not appear in a form amenable to discovery, and enhancing specific features requires expert knowledge, not to mention extensive manpower and resources. Luckily, CNNs are not alone with this issue.
Very much like networks submitting to spurious preferences, a printer may sometimes use an unintended color in place of the intended one. In the real world, we call this the color cast problem. When a colored image is fed into a printer, the printer has to perceive it and then duplicate it using the right colors; color cast thereby indicates a wrong propensity in color use. In practice, when we suspect that a printer has a color cast problem, we usually let it print a white paper. Once this white paper is printed in other colors, the color cast is detected and we can seek a corresponding solution. Put another way, the white paper serves as a perfect indicator of the color cast problem. This common sense motivates us to exploit the white paper to regularize the model. Intuitively, the white paper does not belong to any class the model has learned from whichever benchmark dataset. An idealized model should therefore give an inference result that is almost a random guess, demonstrating that it does not mistake this sample for any class it has learned. Consequently, when we discover a difference between the intended and the actual outcome, we know that the model has some unintended generalization directions, which should be understood as a consequence of shortcut learning. Simply put, the white paper can also act as a "test paper" for detecting dominant patterns. The experimental results in Section 3 will show that the white paper is an effective and universal choice, even when compared with real datasets. Leveraging this superior ability of the white paper to detect dominant patterns, we derive an interesting and effective regularizer called White Paper Assistance, which alleviates the excessive reliance on dominant features by repeatedly enforcing the model to make a random guess on the white paper.
Our method does not require any further supervision on the bias, such as explicit labels of misleadingly correlated attributes (Kim et al., 2019; Li & Vasconcelos, 2019; Sagawa et al., 2019), or domain-specific bias-tailored training techniques (Wang et al., 2019; Geirhos et al., 2019; Li et al., 2020). Moreover, despite the simplicity of implementing the white paper, our method effectively improves the model's generalization ability and helps produce better performance. Since the whole algorithm entails no modification of model architectures and no interference with the original training, it can easily be assembled into various CNNs as a "plug-and-play" component, which significantly promotes its value in practice. Here we summarize our contributions:
• We propose a novel method called White Paper Assistance to alleviate shortcut learning. Our method does not require modifying the network and is easily implementable on any modern neural architecture.
• We show the superior ability of the white paper in detecting dominant patterns.
• We experiment with various architectures, different benchmark datasets, and different combinations of techniques to show the wide applicability and compatibility of our method.
• We test our method on imbalanced classification and robustness against corruptions to demonstrate its versatility.
2 RELATED WORK. With the emergence of deep learning, numerous astonishing stories (He et al., 2016; Chen et al., 2017; Yun et al., 2019; Radosavovic et al., 2020) about the tremendous performance of CNNs have rapidly spread over the field. However, despite the ever-increasing pace, CNNs share the same vulnerability as the human cognitive system: bias. It has been demonstrated that models may learn spurious shortcut correlations, which may be sufficient to solve a training task but clearly lack generalization utility.
For example, a model may identify cows in "common" (e.g., pasture) contexts correctly but fail to classify cows in "uncommon" (e.g., beach) contexts (Beery et al.). Standard ImageNet-trained models prefer to label a cat image with elephant skin texture as elephant instead of cat (Geirhos et al., 2019). Such phenomena (Nguyen et al., 2014; Wichmann et al., 2010; Ribeiro et al., 2016) exemplify the contradiction between shortcut correlations and human-intended generalization. As an active line of research, numerous studies have provided different explanations for this phenomenon (Nasim et al., 2019; Xu et al., 2019; Parascandolo et al., 2020). For example, Valle-Perez et al. (2019) suggest that the parameter-function map of networks is biased towards simple functions. Kalimeris et al. justify the simplicity bias further by showing that SGD learns functions of increasing complexity. Hermann & Lampinen (2020) demonstrate that the model is "lazy" in that it favors an easier-to-extract feature over a more predictive one. In this paper, we follow the explanation proposed in (des Combes et al., 2018; Pezeshki et al., 2020) and argue that the rationale behind this learning proclivity for shortcuts is the propensity of the model to capture statistically dominant features in the data, rendering it unable to discover other predictive features. Recent studies on shortcut removal usually require extra supervision (Kim et al., 2019; Sagawa et al., 2019). Li & Vasconcelos (2019) explicitly add color bias as side information to an unbiased dataset of grayscale images. Geirhos et al. (2019) use style transfer to synthesize data that helps generate a more preferable shape-based representation. Li et al. (2020) further provide supervision from both shape and texture when generating cue-conflict images, leading to better feature representations.
Instead of leveraging laborious and expensive supervision, our method utilizes this common sense by leveraging the white paper to detect dominant patterns.
3 OUR METHOD. The algorithm we propose is a conceptually simple, plug-and-play method that can easily be integrated into various CNN models without changing the learning strategy. The pseudo-code of White Paper Assistance is shown in Algorithm 1. Generally speaking, the aim of the algorithm is to detect and conquer. Detect: In a given training epoch, White Paper Assistance is conducted with probability P (and skipped with probability 1 − P). When applied, a batch of white papers is fed into the model to obtain the normalized ("softmax") output distribution p. This distribution represents the model's perception of the white paper and, more importantly, the model's propensity for unintended patterns. Conquer: As discussed before, since the white paper does not belong to any class the model has learned, the model should give an inference result that is almost a random guess, demonstrating that it is not biased towards any pattern. For multi-class classification with N classes, the ideal prediction distribution for the white paper would be q = [1/N, 1/N, ..., 1/N]. Hence, to measure the match between the two distributions p and q, we adopt the Kullback-Leibler divergence:

Lwp = λ · DKL(p ‖ q)    (1)

where λ denotes the strength of White Paper Assistance. We then repeat this process for M iterations in the hope of alleviating the unintended propensity.1

Algorithm 1 Pseudo-code of White Paper Assistance
1: for each epoch do
2:   Real-image training using the original loss function
3:   Update model parameters
4:   Initialize p ← Rand(0, 1)        ▷ White Paper Assistance starts here
5:   if p < P then
6:     for each iteration ∈ [1, M] do
7:       Generate a batch of white pictures W
8:       p ← Model(W)
▷ White paper training
9:       Update model parameters by Eq. (1)
10:     end for
11:   end if                          ▷ White Paper Assistance ends here
12: end for

There are two important questions in designing the above algorithm: Q1. Does White Paper Assistance indeed alleviate shortcut learning? Q2. Why use the white paper? To answer the first question, we evaluated our method in a controlled experimental setup by adding synthetic shortcuts to the data. Specifically, we added a 4×4 black square block to the top-left corner of each training and testing sample of the first class (apple) of CIFAR100 (we refer to this modified dataset as Shortcut-CIFAR100). When trained on Shortcut-CIFAR100, this small block allows a network to achieve a negligible loss by only learning to discriminate this block at the same position while ignoring other information. Therefore, after training on Shortcut-CIFAR100, a network that suffers from shortcut learning would exhibit a strong propensity to identify any picture with a small black block in its top-left corner as "apple". We then designed a new testing scenario where we extracted all the testing samples from the other 99 classes (except apple) and added a small black block at the same position to each of them (we term this CIFAR99); on CIFAR99, only the remaining 99 classes are modified. Once the network excessively relies on the decision rule that connects "images with a black block" with the class "apple", it will demonstrate a strong propensity to label the samples in CIFAR99 as "apple", resulting in lower accuracy. In short, the performance on CIFAR99 reveals how well the model resists the propensity of shortcut learning. Table 1 presents the performance of models trained on Shortcut-CIFAR100 and tested on both Shortcut-CIFAR100 and CIFAR99. (1We explain the reason for repeating in Appendix B.)
As we can see, WP improves the model's generalization ability on Shortcut-CIFAR100 as usual. We want to highlight the huge improvement WP achieves on CIFAR99, where models without WP demonstrate a strong propensity to misidentify images that exhibit patterns similar to those of other classes. To verify that WP indeed learns to recognize more informative features, we plot the activation maps of all the models trained on Shortcut-CIFAR100. Figure 1(b) clearly demonstrates the effectiveness of WP in combating shortcut learning: after training on Shortcut-CIFAR100, the model without WP is drawn to the small block in the upper-left corner, while applying WP helps the model focus on more discriminative features. We also include spectral decoupling regularization (SD) (Pezeshki et al., 2020) and LfF (Nam et al., 2020) as comparisons. Both SD and LfF improve the model's generalization ability (higher accuracy on Shortcut-CIFAR100), but SD fails dramatically at overcoming the shortcut decision rule.2 In short, the results of this conceptual experiment answer the first question positively and demonstrate the ability of WP to restrain the excessive reliance on dominant patterns when classifying. (2We hypothesize that this phenomenon would be relieved by the advanced variant of SD that imposes the penalty separately for each class, but in this case (100 classes) it would entail a massive increase in hyperparameters (at least 100).) Regarding the second question, it is tempting to expect one or more ideal images that not only do not belong to the distribution of the training data but are also able to detect all the unintended dominant patterns. Alas, precisely finding such images requires knowing which patterns CNNs rely on, which is hard because those patterns do not appear in a form amenable to discovery; it is therefore not a viable option.
Intriguingly, among all the alternative options, the solution with the white paper works best. As shown in Figure 2, four candidates were evaluated: "Gaussian Noise", "Ice-cream", "CIFAR-10", and "White Paper". We kept all other implementation details unchanged and merely modified the images while training ResNet-56 on CIFAR-100. Specifically, in the "Gaussian Noise" experiments we replaced the white papers with images sampled from a standard normal distribution; "Ice-cream" denotes the whole ice-cream class of images from ImageNet, while "CIFAR-10" denotes that all the images from CIFAR-10 were used for detection. Extensive details to facilitate replication are provided in Appendix C. Even with noise-generated images, there is still a performance boost over the vanilla model, and performance increases with the number of real-world images; but the white paper outperforms all the other solutions. A possible explanation is that the uninformative nature of the white paper makes it more suitable for detecting spurious dominant patterns, since the lack of semantics itself means no bias towards any pattern. Just like coloring on a white paper, the extent to which some pattern plays a dominant role for a class will show up in the output distribution on the white paper. We also note that all the alternatives outperform the vanilla setting, indicating the effectiveness of the whole detect-and-conquer practice.
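The conquer step of Eq. (1) can be sketched in a few lines. The following is a minimal NumPy stand-in, not the authors' implementation: `white_paper_loss` is a hypothetical name, and the model is represented only by its output logits. It feeds the softmax of the logits for a batch of all-white images into a KL divergence against the uniform distribution q = [1/N, ..., 1/N]:

```python
import numpy as np

def white_paper_loss(logits, lam=1.0):
    """Eq. (1): lambda * KL(p || q), where p is the softmax output on
    white-paper inputs and q is the uniform distribution over N classes."""
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    n_classes = logits.shape[1]
    q = np.full(n_classes, 1.0 / n_classes)
    kl = (p * (np.log(p) - np.log(q))).sum(axis=1)   # per-sample KL
    return lam * kl.mean()

# A batch of "white paper" inputs would be all-ones images, e.g.:
white_batch = np.ones((8, 3, 32, 32))

# If the model already predicts uniformly on white paper, the loss is
# zero; a confident (biased) prediction is penalized.
uniform_logits = np.zeros((8, 100))
biased_logits = np.zeros((8, 100))
biased_logits[:, 0] = 10.0                           # strong bias to class 0
print(white_paper_loss(uniform_logits))              # 0.0
print(white_paper_loss(biased_logits) > 1.0)         # True
```

Minimizing this loss pushes the white-paper prediction back towards a random guess, which is exactly the "conquer" update of Algorithm 1; in training, its gradient would be backpropagated through the model for M iterations.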
The present work introduces an approach to tackle shortcut learning by CNNs, called White Paper Assistance (WP). After motivating and introducing the method, the authors evaluate it on computer vision datasets with synthetically inserted shortcuts, i.e., black pixels in the corners of the images. The authors show that the WP approach reduces the learning of so-called shortcuts.
SP:3a268e208ecbf0a5dfa031a6bf54314f5df558c9
Exact Stochastic Newton Method for Deep Learning: the feedforward networks case.
The inclusion of second-order information in Deep Learning optimization has drawn consistent interest as a way forward to improve upon gradient descent methods. Estimating the second-order update is computationally expensive, which drastically limits its usage scope and forces the use of various truncations and approximations. This work demonstrates that it is possible to solve for the Newton direction in the stochastic case exactly. We consider feedforward networks as a base model, build a second-order Lagrangian which we call the Sifrian, and provide a closed-form formula for the exact stochastic Newton direction under some monotonicity and regularization conditions. We propose a convexity correction to escape saddle points, and we reconsider the intrinsic stochasticity of the online learning process to improve upon the formulas. We finally compare the performance of the developed solution with well-established training methods and show its viability as a training method for Deep Learning. Optimization in Deep Learning is mainly dominated by first-order methods built around the central concept of backpropagation (LeCun et al., 1988). Second-order methods have exceptional theoretical properties in deterministic optimization (Boyd et al., 2004; Nocedal & Wright, 2006), but such properties do not translate well to the stochastic case (LeCun et al., 2012; Bottou et al., 2018). First-order methods such as Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951) are relatively simple and have been adaptively improved for Deep Learning (Duchi et al., 2011; Tieleman & Hinton, 2012; Kingma & Ba, 2014; Reddi et al., 2019; Yao et al., 2020). Inherently, second-order methods might seem inadequate for Deep Learning due to increased computational cost, poor wall-clock performance, and the non-convex nature of Deep Learning (LeCun et al., 2012).
Furthermore, the Newton method might even reduce the generalization capabilities of training (Wadia et al., 2021; Amari et al., 2020). Despite these various limitations, substantial effort has been deployed to include Hessian information in the optimization process (e.g., Byrd et al., 2011; Sohl-Dickstein et al., 2014; Byrd et al., 2016; Agarwal et al., 2017; Berahas et al., 2019; Anil et al., 2020; Goldfarb et al., 2020; Castera et al., 2021). Several approaches exist, such as the Gauss-Newton method (e.g., Schraudolph, 2002; Botev et al., 2017), diagonal approximations of the Hessian (e.g., Bordes et al., 2009; Schaul et al., 2013), iterative low-rank updates such as BFGS (Broyden, 1970; Fletcher, 1970; Goldfarb, 1970; Shanno, 1970) (see also Liu & Nocedal, 1989; Schraudolph et al., 2007; Bollapragada et al., 2018), or Hessian-free methods which combine fast Hessian-vector multiplication (Pearlmutter, 1994) with the conjugate gradient algorithm (e.g., Martens, 2010; Martens & Sutskever, 2012; Dauphin et al., 2014). The use of the Fisher matrix to capture curvature information in the space of distributions, instead of the Hessian, is another approach, which yields the natural gradient (Amari, 1998) but suffers from the same computational issues as the Newton method. Within this context, the K-FAC method (Martens & Grosse, 2015; Ba et al., 2016a; George et al., 2018) mitigates some of the computational issues of the natural gradient method. In essence, several drawbacks and flaws limit the adoption of second-order methods as a standard for neural network training. In this paper, we propose a novel approach to characterize the Newton update which helps us derive an exact closed-form solution for the stochastic Newton method. Our method requires a suitable regularization of the neural network and strict monotonicity of the activation functions.
We start this paper by introducing useful notation while deriving the well-known backpropagation algorithm for a feedforward network. We then introduce a second-order Lagrangian, which we call the Sifrian, that will serve to characterize the Newton direction. We derive four types of equations from the Sifrian and provide an exact closed-form solution for the Newton direction in the stochastic case. We further propose a saddle-free version of our method and add a randomization process to enhance our solution. In the last part of the paper, we show the applicability of our method to Deep Learning through diverse classification tasks using feedforward architectures.

1 PRELIMINARIES. Deep Learning and neural network training can be seen as the optimization of an expected loss function $\ell$ over a distribution $\mathcal{D}$ of labeled samples $(x, d)$. In general, the weights and biases $(W, \beta)$ are the sought-after parameters of the network. The labeled sample distribution $\mathcal{D}$ is often unknown, but a large number of samples (database $D$) allows the approximation of the expected loss with an empirical risk:

$$\min_{W,\beta}\; \mathbb{E}_{(x,d)\sim\mathcal{D}}\big[\ell(W,\beta,x,d)\big] \;\rightarrow\; \frac{1}{|D|}\sum_{p\in D} \ell\big(W,\beta,x^{(p)},d^{(p)}\big), \qquad \big(x^{(p)},d^{(p)}\big)_{p\in D} \overset{\text{i.i.d.}}{\sim} \mathcal{D}. \tag{1}$$

The loss or cost function $\ell$ is typically a cross-entropy function or an $l_2$ norm of the mismatch between the network outputs and the labels: $\ell\big(W,\beta,(x^{(p)},d^{(p)})\big) = \frac{1}{2}\big\|x_n^{(p)} - d^{(p)}\big\|_2^2$, where $n$ indicates the output layer number. We present the architecture of a Feedforward Neural Network (FNN) in more detail hereafter.

1.1 NOTATIONS AND FEEDFORWARD NEURAL NETWORKS (FNN). In this section, we recall the main details of FNNs, which will be used as a standard model for Deep Learning1. The notation used throughout this paper is similar to that presented in LeCun et al. (1988). The main equation governing the FNN, a.k.a.
the forward model, is the following:

$$x_k^{(p)} = F\big(W_k\, x_{k-1}^{(p)} + \beta_k\big), \qquad k \in [1..n],\; p \in D. \tag{2}$$

where $D$ is the database, $p$ designates one single sample from the database (e.g., one single image or audio recording), $k$ is the layer number, and $n$ is the total number of layers in the network. The initial input for sample $p$ is $x_0^{(p)}$ (e.g., the vectorized input image data). The state variable $x^{(p)}$ is transformed at each layer $k$ through a multiplication by a weight matrix $W_k$ and the addition of a bias vector $\beta_k$. The activation function $F$, which is typically a sigmoid or a Rectified Linear Unit (ReLU), is applied element-wise on the resulting activation vector $a_k^{(p)} = W_k\, x_{k-1}^{(p)} + \beta_k$ and serves to introduce non-linearity into the neural network.

1.2 THE LAGRANGIAN AND BACKPROPAGATION. The origin of backpropagation can be traced back to the early 1970s; it can be derived by casting Deep Learning as a constrained optimization problem. This section is similar to LeCun et al. (1988). The main novelty in this section is the addition of a regularization term dependent on the state variable, $R(x)$, to the cost function $\ell$: $\ell_R = \ell + R(x)$. The regularization has to verify the admissibility criterion which we define hereafter.

Definition 1: A regularization term is admissible if the second derivative of the augmented cost function w.r.t. the state variable is separable and non-null, i.e.:

$$\forall (p,q) \in D,\; \forall (k,m) \in [1..n]\!:\quad \left(\frac{\partial^2 \ell_R}{\partial x_k^{(p)}\, \partial x_m^{(q)}}\right)_{p,q,k,m} \propto \delta_{p=q}. \tag{3}$$

An example of such a regularization is the following function:

$$R(x) = \frac{1}{2}\sum_{p\in D}\;\sum_{k=1..n-1} \left\langle x_k^{(p)},\, \Lambda_k^{(p)}\, x_k^{(p)} \right\rangle \tag{4}$$

where the $(\Lambda_k^{(p)})_{p,k}$ are symmetric positive matrices. Such a choice might seem atypical, since regularization usually concerns the network parameters, mainly the weights or the biases.

1Convolutional Neural Networks (CNN) are a type of FNN. The concepts of this paper apply also to CNNs.
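The forward model of Eq. (2) and the admissible regularization of Eq. (4) can be sketched concretely. The following is an illustrative NumPy sketch for a single sample, assuming a sigmoid activation $F$, $\Lambda_k^{(p)} = I$, and arbitrary small layer widths; all names are ours, not the paper's:

```python
import numpy as np

def sigmoid(a):
    """Element-wise sigmoid activation F."""
    return 1.0 / (1.0 + np.exp(-a))

def forward(x0, weights, biases):
    """Eq. (2): x_k = F(W_k x_{k-1} + beta_k) for k = 1..n.
    Returns all states x_0..x_n and pre-activations a_1..a_n."""
    xs, acts = [x0], []
    for W, beta in zip(weights, biases):
        a = W @ xs[-1] + beta          # activation vector a_k
        acts.append(a)
        xs.append(sigmoid(a))
    return xs, acts

def regularization(xs):
    """Eq. (4) for one sample with Lambda_k = I:
    R(x) = 1/2 * sum_{k=1..n-1} <x_k, x_k> (output layer x_n excluded)."""
    return 0.5 * sum(x @ x for x in xs[1:-1])

# Illustrative sizes: input dim 4, one hidden layer of 3, output dim 2.
rng = np.random.default_rng(0)
sizes = [4, 3, 2]
Ws = [rng.standard_normal((m, k)) for k, m in zip(sizes, sizes[1:])]
bs = [rng.standard_normal(m) for m in sizes[1:]]

xs, acts = forward(rng.standard_normal(4), Ws, bs)
print(len(xs), xs[-1].shape)  # states x_0, x_1, x_2 and an output of size 2
```

Since $\Lambda_k = I$ is symmetric positive, this $R(x)$ satisfies the separability condition of Definition 1: its second derivative w.r.t. the states is block-diagonal across samples.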
This "admissible" regularization is introduced for the sole purpose of guaranteeing a non-null partial derivative of the cost $\ell$ w.r.t. the state variables $x_k^{(p)}$. This last property is essential to solve Equ. 16. We will show later that such a regularization also minimizes the norm of the gradient (Barrett & Dherin, 2020; Smith et al., 2021). In order to derive the backpropagation algorithm, we introduce the following Lagrangian, following the notation of LeCun et al. (1988):

$$\mathcal{L}(x, W, \beta, b) = \ell_R + \sum_{p\in D}\;\sum_{k=1..n} \left\langle x_k^{(p)} - F\big(W_k\, x_{k-1}^{(p)} + \beta_k\big),\; b_k^{(p)} \right\rangle. \tag{5}$$

The Lagrangian contains the original cost function, the admissible regularization term, and the product of the forward equation with the adjoint state vectors $(b_k^{(p)})_{p,k}$. These adjoint vectors are defined for each layer of the network and each sample of the database; however, their values are not fixed yet and will be chosen in a way that simplifies the gradient computation. If the state variable $x$ verifies the forward equations, then the Lagrangian simplifies to the cost function:

$$\mathcal{L}\big(x(W,\beta), W, \beta, b\big) = \ell_R\big(x(W,\beta)\big). \tag{6}$$

The total derivative of the previous Lagrangian w.r.t. the weights or biases is the gradient and can be expressed in terms of partial derivatives as follows2:

$$\frac{d\mathcal{L}\big(x(W,\beta), W, \beta, b\big)}{dW_k} = \frac{\partial \mathcal{L}}{\partial W_k} + \sum_{p\in D}\;\sum_{m=1..n} \left(\frac{dx_m^{(p)}}{dW_k}\right)\left(\frac{\partial \mathcal{L}}{\partial x_m^{(p)}}\right) = \frac{d\ell_R}{dW_k},$$
$$\frac{d\mathcal{L}\big(x(W,\beta), W, \beta, b\big)}{d\beta_k} = \frac{\partial \mathcal{L}}{\partial \beta_k} + \sum_{p\in D}\;\sum_{m=1..n} \left(\frac{dx_m^{(p)}}{d\beta_k}\right)\left(\frac{\partial \mathcal{L}}{\partial x_m^{(p)}}\right) = \frac{d\ell_R}{d\beta_k}. \tag{7}$$

The partial derivatives of the Lagrangian w.r.t. $\{W_k, \beta_k\}_{k=1..n}$ are straightforward to compute. The Fréchet derivatives $\big(\frac{dx_m^{(p)}}{dW_k}, \frac{dx_m^{(p)}}{d\beta_k}\big)_{p,k,m}$ are non-trivial to evaluate, and the core idea of backpropagation is the selection of the adjoint state variables $(b_k^{(p)})_{p,k}$ which cancels the superfluous terms. Such a simplification is achievable if:

$$\left(\frac{\partial \mathcal{L}}{\partial x_k^{(p)}}\right)_{p\in D,\, k\in[1..n]} = 0.$$
(8) Computing the previous partial derivative yields the following backpropagation equation:

$$\frac{\partial \mathcal{L}}{\partial x_k^{(p)}} = b_k^{(p)} - \mathbb{1}_{k=1..n-1}\, W_{k+1}^{T}\, \nabla F\big(a_{k+1}^{(p)}\big)\, b_{k+1}^{(p)} + \frac{\partial \ell_R}{\partial x_k^{(p)}} = 0. \tag{9}$$

$\mathbb{1}_E$ is the indicator function: it is equal to one if the underlying condition $E$ is true and null otherwise. The nabla operator $\nabla F$ is a diagonal square matrix with the element-wise derivative of its argument along the diagonal. The resolution of the backpropagation system can be split into a boundary condition and a backward propagation system; further details can be found in LeCun et al. (1988). This particular choice of the adjoint state vectors $(b_k^{(p)})_{p\in D,\, k=1..n}$ yields a simple formula for the gradient of the cost function:

$$G_k = \frac{\partial \mathcal{L}}{\partial W_k} = -\sum_{p} \nabla F\big(a_k^{(p)}\big)\, b_k^{(p)}\, x_{k-1}^{(p)\,T}, \qquad g_k = \frac{\partial \mathcal{L}}{\partial \beta_k} = -\sum_{p} \nabla F\big(a_k^{(p)}\big)\, b_k^{(p)}. \tag{10}$$

The backpropagation algorithm is fundamental to Deep Learning: it is simple and provides a straightforward solution to an otherwise tedious problem. Unfortunately, developing an efficient second-order backpropagation method remains elusive. The current state of the art is based on the use of the R-operator (Pearlmutter, 1994) to compute the product of the Hessian with a given vector without explicitly calculating or storing the Hessian (Martens, 2010; Dauphin et al., 2014; Agarwal et al., 2017). In the following section, we provide a different framework for backpropagation which allows the characterization of the second-order Newton direction.

2A denominator-layout notation for derivatives is used throughout this paper.
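As a sanity check on Eqs. (9)-(10), here is a minimal NumPy sketch, under simplifying assumptions of ours (a single sample, sigmoid activation, quadratic loss $\ell = \frac{1}{2}\|x_n - d\|_2^2$, and no regularization, so $\partial\ell_R/\partial x_k = 0$ for hidden layers and $x_n - d$ at the output). It computes the adjoint vectors $b_k$ by the backward recursion, forms the weight gradients $G_k$, and verifies one entry against a central finite difference:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def forward(x0, Ws, bs):
    xs, acts = [x0], []
    for W, beta in zip(Ws, bs):
        a = W @ xs[-1] + beta
        acts.append(a)
        xs.append(sigmoid(a))
    return xs, acts

def loss(x0, d, Ws, bs):
    xs, _ = forward(x0, Ws, bs)
    return 0.5 * np.sum((xs[-1] - d) ** 2)

def backprop(x0, d, Ws, bs):
    """Adjoint recursion of Eq. (9) with R = 0, then Eq. (10):
    G_k = -nablaF(a_k) b_k x_{k-1}^T."""
    xs, acts = forward(x0, Ws, bs)
    n = len(Ws)
    b = [None] * n
    b[n - 1] = -(xs[-1] - d)              # boundary: b_n = -dl/dx_n
    for k in range(n - 2, -1, -1):        # Eq. (9) for hidden layers
        dF = sigmoid(acts[k + 1]) * (1 - sigmoid(acts[k + 1]))
        b[k] = Ws[k + 1].T @ (dF * b[k + 1])
    grads = []
    for k in range(n):                    # Eq. (10), weight gradients
        dF = sigmoid(acts[k]) * (1 - sigmoid(acts[k]))
        grads.append(-np.outer(dF * b[k], xs[k]))
    return grads

rng = np.random.default_rng(1)
sizes = [4, 3, 2]                         # illustrative layer widths
Ws = [rng.standard_normal((m, k)) for k, m in zip(sizes, sizes[1:])]
bs = [rng.standard_normal(m) for m in sizes[1:]]
x0, d = rng.standard_normal(4), rng.standard_normal(2)

# Compare G_1[0, 0] with a central finite difference on W_1[0, 0].
G = backprop(x0, d, Ws, bs)
eps = 1e-6
Wp = [W.copy() for W in Ws]; Wp[0][0, 0] += eps
Wm = [W.copy() for W in Ws]; Wm[0][0, 0] -= eps
num = (loss(x0, d, Wp, bs) - loss(x0, d, Wm, bs)) / (2 * eps)
print(abs(G[0][0, 0] - num) < 1e-6)  # True
```

The bias gradients $g_k$ of Eq. (10) follow analogously by dropping the outer product with $x_{k-1}$. Note the sign convention: the adjoints carry the negative of the usual state sensitivities, which is why both Eq. (10) and the code carry a leading minus.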
This paper claims that it is possible to compute the Newton method's update exactly for deep neural networks (multi-layer perceptrons). The motivation is that Newton's method, as a second-order optimizer that includes loss curvature information, should improve upon first-order optimizers, such as gradient descent, that use only the first derivatives (gradient) of the loss. Second-order methods typically converge in a smaller number of iterations, but each update is expensive to compute, which is why they are not widely used in practice. It must be noted that Newton's method is almost never mentioned in the context of Neural Networks (NN), because it is only guaranteed to converge for convex loss functions (which is clearly not the case for NN). However, the authors claim they have some tricks to fix that.
SP:5820c97c44d7ad06c16cf5e7ce4f8b197ea08c94
Exact Stochastic Newton Method for Deep Learning: the feedforward networks case.
The inclusion of second-order information into Deep Learning optimization has drawn consistent interest as a way forward to improve upon gradient descent methods . Estimating the second-order update is computationally expensive , which drastically limits its usage scope and forces the use of various truncations and approximations . This work demonstrates that it is possible to solve the Newton direction in the stochastic case exactly . We consider feedforward networks as a base model , build a second-order Lagrangian which we call Sifrian , and provide a closed-form formula for the exact stochastic Newton direction under some monotonicity and regularization conditions . We propose a convexity correction to escape saddle points , and we reconsider the intrinsic stochasticity of the online learning process to improve upon the formulas . We finally compare the performance of the developed solution with well-established training methods and show its viability as a training method for Deep Learning . Optimization in Deep Learning is mainly dominated by first-order methods built around the central concept of backpropagation ( LeCun et al. , 1988 ) . Second-order methods have exceptional theoretical properties in deterministic optimization ( Boyd et al. , 2004 ; Nocedal & Wright , 2006 ) , but such properties do not translate well into the stochastic case ( LeCun et al. , 2012 ; Bottou et al. , 2018 ) . First-order methods such as the Stochastic Gradient Descent ( SGD ) ( Robbins & Monro , 1951 ) are relatively simple and have been adaptively improved for Deep Learning ( Duchi et al. , 2011 ; Tieleman & Hinton , 2012 ; Kingma & Ba , 2014 ; Reddi et al. , 2019 ; Yao et al. , 2020 ) . Inherently , secondorder methods might seems inadequate for Deep learning due to increased computational cost , poor clock-wall performance , and the non-convex nature of Deep Learning ( LeCun et al. , 2012 ) . 
Furthermore, the Newton method might even reduce the generalization capabilities of training (Wadia et al., 2021; Amari et al., 2020). Despite these various limitations, substantial effort has been deployed to include Hessian information in the optimization process (e.g., Byrd et al., 2011; Sohl-Dickstein et al., 2014; Byrd et al., 2016; Agarwal et al., 2017; Berahas et al., 2019; Anil et al., 2020; Goldfarb et al., 2020; Castera et al., 2021). Several approaches exist, such as the Gauss-Newton method (e.g., Schraudolph, 2002; Botev et al., 2017), diagonal approximations of the Hessian (e.g., Bordes et al., 2009; Schaul et al., 2013), iterative low-rank updates such as BFGS (Broyden, 1970; Fletcher, 1970; Goldfarb, 1970; Shanno, 1970) (see also Liu & Nocedal, 1989; Schraudolph et al., 2007; Bollapragada et al., 2018), or Hessian-free methods, which combine fast Hessian-vector multiplication (Pearlmutter, 1994) with the conjugate gradient algorithm (e.g., Martens, 2010; Martens & Sutskever, 2012; Dauphin et al., 2014). The use of the Fisher matrix, instead of the Hessian, to capture curvature information in the space of distributions is another approach; it yields the natural gradient (Amari, 1998) but suffers from the same computational issues as the Newton method. Within this context, the K-FAC method (Martens & Grosse, 2015; Ba et al., 2016a; George et al., 2018) mitigates some of the computational issues of the natural gradient method. In essence, several drawbacks and flaws limit the adoption of second-order methods as a standard for neural network training. In this paper, we propose a novel approach to characterize the Newton update, which helps us derive an exact closed-form solution for the stochastic Newton method. Our method requires a suitable regularization of the neural network and strict monotonicity of the activation functions.
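As a concrete illustration of the Hessian-free idea mentioned above, a Hessian-vector product can be approximated from two gradient evaluations without ever forming the Hessian. The following is a minimal finite-difference stand-in for Pearlmutter's exact R-operator (the function names and the toy quadratic are illustrative, not the paper's method):

```python
import numpy as np

def hvp_fd(grad, w, v, eps=1e-6):
    """Finite-difference Hessian-vector product: H v ~= (g(w + eps*v) - g(w)) / eps.
    Hessian-free methods feed such products to conjugate gradient, so the
    Hessian itself is never computed or stored."""
    return (grad(w + eps * v) - grad(w)) / eps

# Toy check on a quadratic loss l(w) = 0.5 * w^T A w, whose Hessian is A.
A = np.array([[2.0, 0.5], [0.5, 3.0]])
grad = lambda w: A @ w
v = np.array([1.0, -1.0])
approx = hvp_fd(grad, np.array([0.3, 0.7]), v)
```

Since the gradient of a quadratic is linear, the finite-difference product here matches A v up to floating-point error; for a neural network loss, the same trick applies with the backpropagated gradient.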
We start this paper by introducing useful notation while deriving the well-known backpropagation algorithm for a feedforward network. Then we introduce a second-order Lagrangian, which we call the Sifrian, that serves to characterize the Newton direction. We derive four types of equations from the Sifrian, and we provide an exact closed-form solution for the Newton direction in the stochastic case. We further propose a saddle-free version of our method and add a randomization process to enhance our solution. In the last part of this paper, we show the applicability of our method for Deep Learning through diverse classification tasks using feedforward architectures.

1 PRELIMINARIES.

Deep Learning and neural network training can be seen as an optimization problem of an expected loss function ℓ over a distribution D of labeled samples (x, d). In general, the weights and biases (W, β) are the sought-after parameters of the network. The labeled sample distribution D is often unknown, but a large number of samples (database D) allows the approximation of the expected loss with an empirical risk:

min_{W,β} E_{(x,d)∼D}[ℓ(W, β, x, d)] → min_{W,β} (1/|D|) Σ_{p∈D} ℓ(W, β, x^{(p)}, d^{(p)}), with (x^{(p)}, d^{(p)})_{p∈D} i.i.d. ∼ D. (1)

The loss or cost function ℓ is typically a cross-entropy function or an l2 norm of the mismatch between the network outputs and the labels: ℓ(W, β, (x^{(p)}, d^{(p)})) = (1/2) ‖x_n^{(p)} − d^{(p)}‖_2^2, where n indicates the output layer number. We present the architecture of a Feedforward Neural Network (FNN) in more detail hereafter.

1.1 NOTATIONS AND FEEDFORWARD NEURAL NETWORKS (FNN).

In this section, we recall the main details of FNNs, which will be used as a standard model for Deep Learning1. The notation used throughout this paper is similar to that presented in LeCun et al. (1988). The main equation governing the FNN, a.k.a.
the forward model, is the following:

x_k^{(p)} = F(W_k x_{k−1}^{(p)} + β_k), k ∈ [1..n], p ∈ D, (2)

where D is the database, p denotes a single sample from the database (e.g., one single image or audio recording), k is the layer number, and n is the total number of layers in the network. The initial input for sample p is x_0^{(p)} (e.g., the vectorized input image data). The state variable x^{(p)} is transformed at each layer k through a multiplication by a weight matrix W_k and the addition of a bias vector β_k. The activation function F, which is typically a sigmoid or a Rectified Linear Unit (ReLU), is applied element-wise to the resulting activation vector a_k^{(p)} = W_k x_{k−1}^{(p)} + β_k, and serves to introduce non-linearity into the neural network.

1.2 THE LAGRANGIAN AND BACKPROPAGATION.

The origin of backpropagation can be traced back to the early 1970s, and it can be derived by casting Deep Learning as a constrained optimization problem. This section is similar to LeCun et al. (1988). The main novelty in this section is the addition of a regularization term R(x), dependent on the state variable, to the cost function ℓ: ℓ_R = ℓ + R(x). The regularization has to verify the admissibility criterion, which we define hereafter:

Definition 1: A regularization term is admissible if the second derivative of the augmented cost function w.r.t. the state variable is separable and non-null, i.e.:

∀(p, q) ∈ D, ∀(k, m) ∈ [1..n]: (∂²ℓ_R / ∂x_k^{(p)} ∂x_m^{(q)})_{p,q,k,m} ∝ δ_{p=q}. (3)

An example of such a regularization is the following function:

R(x) = (1/2) Σ_{p∈D} Σ_{k=1..n−1} ⟨x_k^{(p)}, Λ_k^{(p)} x_k^{(p)}⟩, (4)

where the (Λ_k^{(p)})_{p,k} are symmetric positive matrices. Such a choice might seem atypical, since regularization usually concerns the network parameters, mainly the weights or the biases.

1Convolutional Neural Networks (CNN) are a type of FNN. The concepts of this paper also apply to CNNs.
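The forward model of Eq. (2) and the state-space regularization of Eq. (4) can be sketched as follows (a minimal NumPy sketch for one sample; the sigmoid choice for F is illustrative):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def forward(x0, weights, biases, F=sigmoid):
    """Forward model, Eq. (2): x_k = F(W_k x_{k-1} + beta_k) for k = 1..n.
    Returns all states x_0..x_n and activation vectors a_1..a_n."""
    xs, acts = [x0], []
    for W, beta in zip(weights, biases):
        a = W @ xs[-1] + beta        # activation vector a_k
        acts.append(a)
        xs.append(F(a))              # state x_k
    return xs, acts

def state_regularization(xs, lambdas):
    """Admissible regularization, Eq. (4), for one sample:
    R(x) = 1/2 * sum_{k=1..n-1} <x_k, Lambda_k x_k> over the hidden layers."""
    return 0.5 * sum(x @ (L @ x) for x, L in zip(xs[1:-1], lambdas))
```

With zero weights and biases, each hidden state is sigmoid(0) = 0.5 element-wise, which makes the regularization value easy to check by hand.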
This "admissible" regularization was introduced for the sole purpose of guaranteeing a non-null partial derivative of the cost ℓ w.r.t. the state variable x_k^{(p)}. This property is essential to solve Eq. 16. We will show later that such a regularization also minimizes the norm of the gradient (Barrett & Dherin, 2020; Smith et al., 2021). In order to derive the backpropagation algorithm, we introduce the following Lagrangian, as per the notation of LeCun et al. (1988):

L(x, W, β, b) = ℓ_R + Σ_{p∈D} Σ_{k=1..n} ⟨x_k^{(p)} − F(W_k x_{k−1}^{(p)} + β_k), b_k^{(p)}⟩. (5)

The Lagrangian contains the original cost function, the admissible regularization term, and the product of the forward equation with the adjoint state vectors (b_k^{(p)})_{p,k}. These adjoint vectors are defined for each layer of the network and each sample of the database; however, their values are not fixed yet and will be chosen in a way that simplifies the gradient computation. If the state variable x verifies the forward equations, then the Lagrangian simplifies to the cost function:

L(x(W, β), W, β, b) = ℓ_R(x(W, β)). (6)

The total derivative of the previous Lagrangian w.r.t. the weights or biases is the gradient, and it can be expressed in terms of partial derivatives as follows2:

dL(x(W, β), W, β, b)/dW_k = ∂L/∂W_k + Σ_{p∈D} Σ_{m=1..n} (dx_m^{(p)}/dW_k)(∂L/∂x_m^{(p)}) = dℓ_R/dW_k,
dL(x(W, β), W, β, b)/dβ_k = ∂L/∂β_k + Σ_{p∈D} Σ_{m=1..n} (dx_m^{(p)}/dβ_k)(∂L/∂x_m^{(p)}) = dℓ_R/dβ_k. (7)

The partial derivatives of the Lagrangian w.r.t. {W_k, β_k}_{k=1..n} are straightforward to compute. The Fréchet derivatives (dx_m^{(p)}/dW_k, dx_m^{(p)}/dβ_k)_{p,k,m} are non-trivial to evaluate, and the core idea of backpropagation is the selection of the adjoint state variables (b_k^{(p)})_{p,k} which cancels the superfluous terms. Such a simplification is achievable if:

(∂L/∂x_k^{(p)})_{p∈D, k∈[1..n]} = 0.
(8) Computing the previous partial derivative yields the following backpropagation equation:

∂L/∂x_k^{(p)} = b_k^{(p)} − 1_{k=1..n−1} W_{k+1}^T ∇F(a_{k+1}^{(p)}) b_{k+1}^{(p)} + ∂ℓ_R/∂x_k^{(p)} = 0. (9)

Here 1_E is the indicator function; it is equal to one if the underlying condition E is true and null otherwise. The nabla operator ∇F denotes a diagonal square matrix with the element-wise derivative of its argument along the diagonal. The resolution of the backpropagation system can be split into a boundary condition and a backward propagation system. Further details can be found in LeCun et al. (1988). The particular choice of the adjoint state vectors (b_k^{(p)})_{p∈D, k=1..n} yields a simple formula for the gradient of the cost function:

G_k = ∂L/∂W_k = −Σ_p ∇F(a_k^{(p)}) b_k^{(p)} x_{k−1}^{(p)T}, g_k = ∂L/∂β_k = −Σ_p ∇F(a_k^{(p)}) b_k^{(p)}. (10)

The backpropagation algorithm is fundamental for Deep Learning. It is simple and provides a straightforward solution to an otherwise tedious problem. Unfortunately, developing an efficient second-order backpropagation method remains elusive. The current state of the art is based on the use of the R-operator (Pearlmutter, 1994) to compute the product of the Hessian with a given vector without explicitly calculating or storing the Hessian (Martens, 2010; Dauphin et al., 2014; Agarwal et al., 2017). In the following section, we provide a different framework for backpropagation which allows the characterization of the second-order Newton direction.

2A denominator layout notation for derivation is used throughout this paper.
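For a single sample and R = 0, the boundary condition and backward recursion of Eq. (9), together with the gradient formulas of Eq. (10), can be sketched as follows (a minimal sketch; the sigmoid activation and l2 loss are illustrative choices, not the only ones the paper allows):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def dsigmoid(a):
    s = sigmoid(a)
    return s * (1.0 - s)

def forward(x0, weights, biases):
    """Eq. (2): x_k = F(W_k x_{k-1} + beta_k)."""
    xs, acts = [x0], []
    for W, beta in zip(weights, biases):
        a = W @ xs[-1] + beta
        acts.append(a)
        xs.append(sigmoid(a))
    return xs, acts

def backprop(xs, acts, weights, d):
    """Adjoint-state backpropagation for one sample with R = 0.
    Boundary (Eq. 9 at k = n): b_n = -(x_n - d) for the l2 loss.
    Recursion (Eq. 9, k < n): b_k = W_{k+1}^T diag(F'(a_{k+1})) b_{k+1}.
    Gradients (Eq. 10): G_k = -diag(F'(a_k)) b_k x_{k-1}^T,
                        g_k = -diag(F'(a_k)) b_k."""
    n = len(weights)
    b = [None] * (n + 1)
    b[n] = -(xs[n] - d)
    for k in range(n - 1, 0, -1):
        b[k] = weights[k].T @ (dsigmoid(acts[k]) * b[k + 1])
    G = [-np.outer(dsigmoid(acts[k - 1]) * b[k], xs[k - 1]) for k in range(1, n + 1)]
    g = [-(dsigmoid(acts[k - 1]) * b[k]) for k in range(1, n + 1)]
    return G, g
```

A finite-difference check of G against the loss (1/2)‖x_n − d‖² confirms the sign conventions of Eqs. (9)-(10).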
This paper proposes a stochastic second-order method to train neural networks under a specific regularization criterion. The method is based on the Sifrian, an extension of the Lagrangian that splits the definition of the gradient of each layer's parameters into separate constraints with their own multipliers. Solving for the best direction from the Sifrian is complicated in the general case but can be done when considering only one sample. This allows for a stochastic algorithm to train the neural network. Finally, some limited numerical experiments are conducted.
If your data distribution shifts, use self-learning
1 INTRODUCTION.

Deep Neural Networks (DNNs) can reach human-level performance in complex cognitive tasks (Brown et al., 2020; He et al., 2016a; Berner et al., 2019) if the distribution of the test data is sufficiently similar to the training data. However, DNNs are known to struggle if the distribution of the test data is shifted relative to the training data (Geirhos et al., 2018; Dodge & Karam, 2017). Two largely distinct communities aim to increase the performance of models under test-time distribution shifts: The robustness community generally considers ImageNet-scale datasets and evaluates models in an ad-hoc scenario. Models are trained on a clean source dataset like ImageNet, using heavy data augmentation (Hendrycks et al., 2020a; Rusak et al., 2020; Geirhos et al., 2019) and/or large-scale pre-training (Xie et al., 2020a; Mahajan et al., 2018). The trained models are not adapted in any way to test-time distribution shifts. This evaluation scenario is relevant for applications in which very different distribution shifts are encountered in an unpredictable order, but it misses out on the gains of adaptation to unlabeled samples of the target distribution. The unsupervised domain adaptation (UDA) community often considers smaller-scale datasets and assumes that both the source and the (unlabeled) target dataset are known. Models are trained on both datasets (e.g., with an adversarial domain objective, Ganin et al., 2016) before evaluation on the target domain data. This evaluation scenario provides optimal conditions for adaptation, but the reliance on the source dataset makes UDA more computationally expensive and more impractical, and it prevents the use of pre-trained models for which the source dataset is unknown or simply too large.
In this work , we consider the source-free domain adaptation setting , a middle ground between the classical ad-hoc robustness setting and UDA in which models can adapt to the target distribution but without using the source dataset ( Kundu et al. , 2020 ; Kim et al. , 2021 ; Li et al. , 2020 ; Liang et al. , 2020 ) . This evaluation scenario is interesting for many practitioners and applications as an extension of the ad-hoc robustness scenario . It evaluates the possible performance of a deployed model on a systematic , unseen distribution shift at inference time : an embedded computer vision system in an autonomous car should adapt to changes without being trained on all available training data ; an image-based quality control software may not necessarily open-source the images it has been trained on , but still has to be adapted to the lighting conditions at the operation location ; a computer vision system in a hospital should perform robustly when tested on a scanner different from the training images—importantly , it might not be known at development time which scanner it will be tested on , and it might be prohibited to share images from many hospitals to run UDA . Can self-learning methods like pseudo-labeling and entropy-minimization also be used in this source-free domain adaptation setting ? To answer this question , we perform an extensive study of several self-learning variants , and find consistent and substantial gains in test-time performance across several robustness and out-of-domain benchmarks and a wide range of models and pretraining methods , including models trained with UDA methods that do not use self-learning . We also find that self-learning outperforms state-of-the-art source-free domain adaptation methods , namely Test-Time Training which is based on a self-supervised auxiliary objective and continual training ( Sun et al. , 2019b ) , test-time entropy minimization ( Wang et al. 
, 2020) and (gradient-free) BatchNorm adaptation (Schneider et al., 2020; Nado et al., 2020). We perform a large number of ablations to study important design choices for self-learning methods in source-free domain adaptation. Furthermore, we show that a variant of pseudo-labeling with a robust loss function consistently outperforms entropy minimization on ImageNet-scale datasets. We theoretically analyze and empirically verify the influence of the temperature parameter in self-learning and provide guidelines on how this single parameter should be chosen. Our approach is visualized in Figure 1. We do not consider test-time adaptation in an online setting as studied, e.g., by Zhang et al. (2021), where the model is adapted to one example at a time and reset after each example.

Related Work. Variants of self-learning have been used for UDA (Berthelot et al., 2021), for example using auxiliary information (Xie et al., 2020b), consistency (Wei et al., 2020; Cai et al., 2021; Prabhu et al., 2021) or confidence (Zou et al., 2019) regularization. The main differences between these works and ours are that they 1) utilize both source and target data for self-learning, whereas we only require access to unlabeled target data, 2) train their models from scratch, whereas we merely fine-tune pretrained checkpoints on the unlabeled target data, and 3) are generally more complicated than our approach due to using more than one term in the objective function. Our work is conceptually most similar to virtual adversarial domain adaptation in the fine-tuning phase of DIRT-T (Shu et al., 2018) and test-time entropy minimization (TENT; Wang et al., 2020). In contrast to DIRT-T, our objective is simpler and we scale the approach to considerably larger datasets at ImageNet scale. TENT, on the other hand, only evaluated a single method (entropy minimization) on a single vanilla model (ResNet-50) on IN-C.
We substantially expand this analysis to show that self-learning almost universally increases test-time performance under distribution shifts, regardless of the type of distribution shift, the model architecture, or the pre-training method. Self-learning has also been applied to UDA for semantic segmentation (Zou et al., 2018), for gradual domain adaptation (Kumar et al., 2020), for semi-supervised learning (Rizve et al., 2021; Mukherjee & Awadallah, 2020), for learning in biased datasets (Chen et al., 2020b), and for automated data annotation (De Sousa Ribeiro et al., 2020). Zoph et al. (2020) show that self-learning outperforms pretraining when stronger data augmentation is used and more labeled data is present. A more detailed discussion of related work, along with the main differences to our work, can be found in Appendix F. Our main contribution beyond these works is to show the effectiveness of self-learning on top of robust, large-scale, and domain-adapted models, at scale.

2 SELF-LEARNING FOR TEST-TIME ADAPTATION.

Different variants of self-learning have been used in unsupervised domain adaptation (French et al., 2018; Shu et al., 2018), self-supervised representation learning (Caron et al., 2021), and semi-supervised learning (Xie et al., 2020a). In a typical self-learning setting, a teacher network f^t trained on the source domain predicts labels on the target domain. Then, a student model f^s is fine-tuned on the predicted labels. In the following, let f^t(x) denote the logits for sample x and let p^t(j|x) ≡ σ_j(f^t(x)) denote the probability for class j obtained from a softmax function σ_j(·). Similarly, f^s(x) and p^s(j|x) denote the logits and probabilities for the student model f^s. For all techniques, one can optionally only admit samples where the probability max_j p^t(j|x) exceeds some threshold.
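The teacher setup above, including the optional confidence threshold and the temperature parameter whose influence the paper analyzes, can be sketched as follows (a minimal sketch; function and variable names are illustrative):

```python
import numpy as np

def softmax(f, T=1.0):
    """p(j|x) = sigma_j(f(x) / T); T = 1 recovers the plain softmax."""
    z = np.exp(f / T - np.max(f / T))
    return z / z.sum()

def admit(logits_t, threshold=0.9, T=1.0):
    """Teacher probabilities p^t(j|x) for a batch of logits; only samples
    whose maximum class probability exceeds the threshold are admitted."""
    p = np.stack([softmax(f, T) for f in logits_t])
    keep = p.max(axis=1) > threshold
    return p[keep], keep
```

A confident sample (large logit gap) passes the threshold, while a near-uniform one is discarded; raising T flattens the teacher distribution and so admits fewer samples at a fixed threshold.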
We consider three popular variants of self-learning: pseudo-labeling with hard or soft labels, as well as entropy minimization.

Hard Pseudo-Labeling (Lee, 2013; Galstyan & Cohen, 2007). We generate labels using the teacher and train the student on pseudo-labels i using the standard cross-entropy loss,

ℓ_H(x) := −log p^s(i|x), i = argmax_j p^t(j|x). (1)

Usually, only samples with a confidence above a certain threshold are considered for training the student. We test several thresholds but note that thresholding means discarding a potentially large portion of the data, which leads to a performance decrease in itself. The teacher is updated after each epoch.

Soft Pseudo-Labeling (Lee, 2013; Galstyan & Cohen, 2007). In contrast to the hard pseudo-labeling variant, we here train the student on the class probabilities predicted by the teacher,

ℓ_S(x) := −Σ_j p^t(j|x) log p^s(j|x). (2)

Soft pseudo-labeling is typically not used in conjunction with thresholding, since it already incorporates the certainty of the model. The teacher is updated after each epoch.

Entropy Minimization (ENT; Grandvalet & Bengio, 2004). This variant is similar to soft pseudo-labeling, but we no longer differentiate between a teacher and a student network. It corresponds to an "instantaneous" update of the teacher. The training objective becomes

ℓ_E(x) := −Σ_j p^s(j|x) log p^s(j|x). (3)

Intuitively, self-training with entropy minimization leads to a sharpening of the output distribution for each sample, making the model more confident in its predictions.

Robust Pseudo-Labeling (RPL). Virtually all introduced self-training variants use the standard cross-entropy classification objective. However, the standard cross-entropy loss has been shown to be sensitive to label noise (Zhang & Sabuncu, 2018; Zhang et al., 2017).
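The three objectives of Eqs. (1)-(3) can be sketched for a single sample as follows (a minimal sketch; function names are illustrative):

```python
import numpy as np

def softmax(f):
    z = np.exp(f - np.max(f))
    return z / z.sum()

def hard_pseudo_label_loss(f_t, f_s):
    """Eq. (1): cross-entropy against the teacher's argmax label i."""
    i = int(np.argmax(f_t))
    return -np.log(softmax(f_s)[i])

def soft_pseudo_label_loss(f_t, f_s):
    """Eq. (2): cross-entropy against the teacher's class probabilities."""
    return -np.sum(softmax(f_t) * np.log(softmax(f_s)))

def entropy_loss(f_s):
    """Eq. (3): entropy of the student's own prediction (teacher == student)."""
    p = softmax(f_s)
    return -np.sum(p * np.log(p))
```

With identical teacher and student logits, the soft pseudo-label loss reduces exactly to the entropy objective, which is the "instantaneous teacher update" reading of Eq. (3).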
In the setting of domain adaptation, inaccuracies in the teacher predictions, and thus in the labels for the student, are inescapable, with severe repercussions for training stability and hyperparameter sensitivity, as we show in the results. As a straightforward solution to this problem, we propose to replace the cross-entropy loss by a robust classification loss designed to withstand certain amounts of label noise (Ghosh et al., 2017; Song et al., 2020; Shu et al., 2020; Zhang & Sabuncu, 2018). A popular candidate is the Generalized Cross Entropy (GCE) loss, which combines the noise-tolerant Mean Absolute Error (MAE) loss (Ghosh et al., 2017) with the CE loss. We only consider the hard labels and use the robust GCE loss as the training loss for the student,

i = argmax_j p^t(j|x), ℓ_GCE(x, i) := q^{−1}(1 − p^s(i|x)^q), (4)

with q ∈ (0, 1]. In the limit q → 0, the GCE loss approaches the CE loss, and for q = 1, the GCE loss is the MAE loss (Zhang & Sabuncu, 2018). We test updating the teacher both after every update step of the student (RPL) and once per epoch (RPLep).

3 EXPERIMENT DESIGN.

Datasets. IN-C (Hendrycks & Dietterich, 2019) contains corrupted versions of the 50 000 images in the IN validation set. There are fifteen test and four hold-out corruptions, and there are five severity levels for each corruption. The established metric to report model performance on IN-C is the mean Corruption Error (mCE), where the error is normalized by the AlexNet error and averaged over all corruptions and severity levels; see Eq. 20, Appendix C.1. IN-R (Hendrycks et al., 2020a) contains 30 000 images with artistic renditions of 200 classes of the IN dataset. IN-A (Hendrycks et al., 2019) is composed of 7500 unmodified real-world images on which standard IN-trained ResNet50 (He et al., 2016b) models yield chance-level performance. CIFAR10 (Krizhevsky et al., 2009) and STL10 (Coates et al.
, 2011) are small-scale image recognition datasets with 10 classes each, training sets of 50 000/5000 images, and test sets of 10 000/8000 images, respectively. The digit datasets MNIST (Deng, 2012) and MNIST-M (Ganin et al., 2016) both have 60 000 training and 10 000 test images.

Hyperparameters. The different self-learning variants have a range of hyperparameters, such as the learning rate or the stopping criterion. Our goal is to give a realistic estimate of the performance to be expected in practice. To this end, we optimize hyperparameters for each variant of pseudo-labeling on a hold-out set of IN-C that contains four types of image corruptions ("speckle noise", "Gaussian blur", "saturate" and "spatter") with five different strengths each, following the procedure suggested in Hendrycks & Dietterich (2019). We refer to the hold-out set of IN-C as our dev set.

Models for ImageNet-scale datasets. We consider four popular model architectures: ResNet50 (He et al., 2016b), DenseNet161 (Huang et al., 2017), ResNeXt101 (Xie et al., 2017), and EfficientNet-L2 (Tan & Le, 2019) (see Appendix B.1 for details on the used models). For ResNet50, DenseNet and ResNeXt101, we include a simple vanilla version trained on IN only. For ResNet50 and ResNeXt101, we additionally include a state-of-the-art robust version trained with DeepAugment and Augmix (DAug+AM, Hendrycks et al., 2020a)1. For the ResNeXt model, we also include a version that was trained on 3.5 billion weakly labeled images (IG-3.5B, Mahajan et al., 2018). Finally, for EfficientNet-L2 we select the current state of the art on IN-C, which was trained on 300 million images from JFT-300M (Chollet, 2017; Hinton et al., 2014) using a noisy student-teacher protocol (Xie et al., 2020a). We validate the IN and IN-C performance of all considered models and match the originally reported scores (Schneider et al., 2020).
For EfficientNet-L2 , we match IN top-1 accuracy up to 0.1 % points , and IN-C up to 0.6 % mCE . Models for CIFAR10/MNIST-scale datasets . For CIFAR10-C experiments , we use two WideResNets ( WRN , Zagoruyko & Komodakis , 2016 ) : the first one is trained on CIFAR10 and has a depth of 28 and a width of 10 and the second one is trained with AugMix ( Hendrycks et al. , 2020b ) and has a depth of 40 and a width of 2 . The remaining small-scale models are trained with unsupervised domain adaptation ( UDA ) methods . We propose to regard any UDA method which requires joint training with source and target data as a pre-training step , similar to regular pretraining on IN , and use self-learning on top of the final checkpoint . We consider two popular UDA methods : self-supervised domain adaptation ( UDA-SS ; Sun et al. , 2019a ) and Domain-Adversarial Training of Neural Networks ( DANN ; Ganin et al. , 2016 ) . In UDA-SS , the authors seek to align the representations of both domains by performing an auxiliary self-supervised task on both domains simultaneously . In all UDA-SS experiments , we use a WideResNet with a depth of 26 and a width of 16 . In DANN , the authors learn a domain-invariant embedding by optimizing a minimax objective . For all DANN experiments except for MNIST→MNIST-M , we use the same WRN architecture as above . For the MNIST→MNIST-M experiment , the training with the larger model diverged and we used a smaller WideResNet version with a width of 2 . We note that DANN training involves optimizing a minimax objective and is generally harder to tune .
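The robust GCE loss of Eq. (4), used by RPL above, interpolates between cross-entropy (q → 0) and MAE (q = 1); a minimal sketch:

```python
import numpy as np

def gce_loss(p_i, q=0.8):
    """Eq. (4): l_GCE = (1 - p_i^q) / q, where p_i = p^s(i|x) is the student
    probability of the teacher-selected class and q lies in (0, 1]."""
    return (1.0 - p_i ** q) / q
```

For q = 1 this equals the MAE loss 1 − p_i, while for small q it approaches the cross-entropy −log p_i, so q trades off noise tolerance against the stronger gradients of CE.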
The paper studies the effectiveness of self-training to improve test-time performance when the distribution of the test data is not similar to the training data. The paper more specifically focuses on the source-free domain adaptation setting, where the source data is not available. In this setup, self-training is tested as an additional step on top of different robustness and adaptation approaches, such as robust pretraining, unsupervised domain adaptation, and self-supervised pretraining. The paper shows improvements on multiple ImageNet variants and CIFAR10-C, and also introduces the ImageNet-D dataset as a new benchmark. ImageNet-D has been produced by matching the label space of the IN datasets with the DomainNet data provided in the Visual Domain Adaptation challenges. The main contribution of this paper is to perform a systematic and large-scale study of self-training as a method to deal with distribution shifts.
SP:810e4d1edb1d7aa02ef0777f45ce4db3263d551c
If your data distribution shifts, use self-learning
1 INTRODUCTION . Deep Neural Networks ( DNNs ) can reach human-level performance in complex cognitive tasks ( Brown et al. , 2020 ; He et al. , 2016a ; Berner et al. , 2019 ) if the distribution of the test data is sufficiently similar to the training data . However , DNNs are known to struggle if the distribution of the test data is shifted relatively to the training data ( Geirhos et al. , 2018 ; Dodge & Karam , 2017 ) . Two largely distinct communities aim to increase the performance of models under test-time distribution shifts : The robustness community generally considers ImageNet-scale datasets and evaluates models in an ad-hoc scenario . Models are trained on a clean source dataset like ImageNet , using heavy data augmentation ( Hendrycks et al. , 2020a ; Rusak et al. , 2020 ; Geirhos et al. , 2019 ) and/or large-scale pre-training ( Xie et al. , 2020a ; Mahajan et al. , 2018 ) . The trained models are not adapted in any way to test-time distribution shifts . This evaluation scenario is relevant for applications in which very different distribution shifts are encountered in an unpredictable order , and hence misses out on the gains of adaptation to unlabeled samples of the target distribution . The unsupervised domain adaptation ( UDA ) community often considers smaller-scale datasets and assumes that both the source and the ( unlabeled ) target dataset are known . Models are trained on both datasets ( e.g. , with an adversarial domain objective , Ganin et al. , 2016 ) before evaluation on the target domain data . This evaluation scenario provides optimal conditions for adaptation , but the reliance on the source dataset makes UDA more computationally expensive , more impractical and prevents the use of pre-trained models for which the source dataset is unknown or simply too large . 
In this work , we consider the source-free domain adaptation setting , a middle ground between the classical ad-hoc robustness setting and UDA in which models can adapt to the target distribution but without using the source dataset ( Kundu et al. , 2020 ; Kim et al. , 2021 ; Li et al. , 2020 ; Liang et al. , 2020 ) . This evaluation scenario is interesting for many practitioners and applications as an extension of the ad-hoc robustness scenario . It evaluates the possible performance of a deployed model on a systematic , unseen distribution shift at inference time : an embedded computer vision system in an autonomous car should adapt to changes without being trained on all available training data ; an image-based quality control software may not necessarily open-source the images it has been trained on , but still has to be adapted to the lighting conditions at the operation location ; a computer vision system in a hospital should perform robustly when tested on a scanner different from the training images—importantly , it might not be known at development time which scanner it will be tested on , and it might be prohibited to share images from many hospitals to run UDA . Can self-learning methods like pseudo-labeling and entropy-minimization also be used in this source-free domain adaptation setting ? To answer this question , we perform an extensive study of several self-learning variants , and find consistent and substantial gains in test-time performance across several robustness and out-of-domain benchmarks and a wide range of models and pretraining methods , including models trained with UDA methods that do not use self-learning . We also find that self-learning outperforms state-of-the-art source-free domain adaptation methods , namely Test-Time Training which is based on a self-supervised auxiliary objective and continual training ( Sun et al. , 2019b ) , test-time entropy minimization ( Wang et al. 
, 2020 ) and ( gradient-free ) BatchNorm adaptation ( Schneider et al. , 2020 ; Nado et al. , 2020 ) . We perform a large number of ablations to study important design choices for self-learning methods in source-free domain adaptation . Furthermore , we show that a variant of pseudo-labeling with a robust loss function consistently outperforms entropy minimization on ImageNet-scale datasets . We theoretically analyze and empirically verify the influence of the temperature parameter in self-learning and provide guidelines how this single parameter should be chosen . Our approach is visualized in Figure 1 . We do not consider test-time adaptation in an online setting like is studied e.g. , by Zhang et al . ( 2021 ) , where the model is adapted to one example at a time , and reset after each example . Related Work . Variants of self-learning have been used for UDA ( Berthelot et al. , 2021 ) , for example using auxiliary information ( Xie et al. , 2020b ) , consistency ( Wei et al. , 2020 ; Cai et al. , 2021 ; Prabhu et al. , 2021 ) or confidence ( Zou et al. , 2019 ) regularization . The main difference from these works to ours is that they 1 ) utilize both source and target data for self-learning whereas we only require access to unlabeled target data , 2 ) train their models from scratch whereas we merely fine-tune pretrained checkpoints on the unlabeled target data , and 3 ) are generally more complicated than our approach due to using more than one term in the objective function . Our work is conceptually most similar to virtual adversarial domain adaptation in the fine-tuning phase of DIRT-T ( Shu et al. , 2018 ) ) and Test-time entropy minimization ( TENT ; Wang et al. , 2020 ) . In contrast to DIRT-T , our objective is simpler and we scale the approach to considerably larger datasets on ImageNet scale . TENT , on the other hand , only evaluated a single method ( entropy minimization ) on a single vanilla model ( ResNet-50 ) on IN-C. 
We substantially expand this analysis to show that self-learning almost universally increases test-time performance under distribution shifts, regardless of the type of distribution shift, the model architecture or the pre-training method. Self-learning has also been applied to UDA for semantic segmentation (Zou et al., 2018), for gradual domain adaptation (Kumar et al., 2020), for semi-supervised learning (Rizve et al., 2021; Mukherjee & Awadallah, 2020), for learning in biased datasets (Chen et al., 2020b) and for automated data annotation (De Sousa Ribeiro et al., 2020). Zoph et al. (2020) show that self-learning outperforms pretraining when stronger data augmentation is used and more labeled data is present. A more detailed discussion of related work, along with the main differences to our work, can be found in Appendix F. Our main contribution beyond these works is to show the effectiveness of self-learning on top of robust, large-scale, and domain-adapted models, at scale. 2 SELF-LEARNING FOR TEST-TIME ADAPTATION. Different variants of self-learning have been used in unsupervised domain adaptation (French et al., 2018; Shu et al., 2018), self-supervised representation learning (Caron et al., 2021), and semi-supervised learning (Xie et al., 2020a). In a typical self-learning setting, a teacher network f^t trained on the source domain predicts labels on the target domain. Then, a student model f^s is fine-tuned on the predicted labels. In the following, let f^t(x) denote the logits for sample x and let p^t(j|x) ≡ σ_j(f^t(x)) denote the probability for class j obtained from a softmax function σ_j(·). Similarly, f^s(x) and p^s(j|x) denote the logits and probabilities for the student model f^s. For all techniques, one can optionally only admit samples where the probability max_j p^t(j|x) exceeds some threshold.
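The confidence-thresholding step described above can be sketched as follows. This is a minimal illustration (not the authors' code), assuming a NumPy environment; `select_confident` is a hypothetical helper name, and the logits are toy values:

```python
# Sketch of one self-learning selection step: a teacher predicts
# pseudo-labels on unlabeled target data, and only samples whose
# confidence max_j p^t(j|x) exceeds a threshold are admitted for
# fine-tuning the student.
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def select_confident(teacher_logits, threshold=0.9):
    """Return pseudo-labels and a mask of samples whose teacher
    confidence exceeds the threshold."""
    probs = softmax(teacher_logits)
    labels = probs.argmax(axis=-1)
    mask = probs.max(axis=-1) > threshold
    return labels, mask

# Toy target batch: 3 samples, 4 classes.
teacher_logits = np.array([[4.0, 0.0, 0.0, 0.0],   # confident
                           [0.1, 0.2, 0.0, 0.1],   # uncertain
                           [0.0, 5.0, 0.0, 0.0]])  # confident
labels, mask = select_confident(teacher_logits, threshold=0.9)
# Only the confident samples would be used to update the student.
print(labels[mask])
```

As the text notes, raising the threshold discards more data, which can hurt performance in itself.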
We consider three popular variants of self-learning: pseudo-labeling with hard or soft labels, as well as entropy minimization. Hard Pseudo-Labeling (Lee, 2013; Galstyan & Cohen, 2007). We generate labels using the teacher and train the student on pseudo-labels i using the standard cross-entropy loss,

ℓ_H(x) := −log p^s(i|x),  i = argmax_j p^t(j|x)  (1)

Usually, only samples with a confidence above a certain threshold are considered for training the student. We test several thresholds but note that thresholding means discarding a potentially large portion of the data, which leads to a performance decrease in itself. The teacher is updated after each epoch. Soft Pseudo-Labeling (Lee, 2013; Galstyan & Cohen, 2007). In contrast to the hard pseudo-labeling variant, we here train the student on class probabilities predicted by the teacher,

ℓ_S(x) := −∑_j p^t(j|x) log p^s(j|x).  (2)

Soft pseudo-labeling is typically not used in conjunction with thresholding, since it already incorporates the certainty of the model. The teacher is updated after each epoch. Entropy Minimization (ENT; Grandvalet & Bengio, 2004). This variant is similar to soft pseudo-labeling, but we no longer differentiate between a teacher and student network. It corresponds to an "instantaneous" update of the teacher. The training objective becomes

ℓ_E(x) := −∑_j p^s(j|x) log p^s(j|x).  (3)

Intuitively, self-training with entropy minimization leads to a sharpening of the output distribution for each sample, making the model more confident in its predictions. Robust Pseudo-Labeling (RPL). Virtually all introduced self-training variants use the standard cross-entropy classification objective. However, the standard cross-entropy loss has been shown to be sensitive to label noise (Zhang & Sabuncu, 2018; Zhang et al., 2017).
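The three objectives in Eqs. (1)–(3) can be sketched per-sample as below (an illustrative NumPy transcription, not the authors' implementation; `f_t`/`f_s` stand for teacher/student logits):

```python
# Per-sample versions of the three self-learning losses:
# hard pseudo-labeling (Eq. 1), soft pseudo-labeling (Eq. 2),
# and entropy minimization (Eq. 3).
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def hard_pseudo_label_loss(f_t, f_s):
    # Eq. (1): cross-entropy against the teacher's argmax label i.
    i = int(np.argmax(f_t))
    return -np.log(softmax(f_s)[i])

def soft_pseudo_label_loss(f_t, f_s):
    # Eq. (2): cross-entropy against the teacher's full distribution.
    p_t, p_s = softmax(f_t), softmax(f_s)
    return -np.sum(p_t * np.log(p_s))

def entropy_minimization_loss(f_s):
    # Eq. (3): entropy of the student's own prediction
    # (teacher and student coincide).
    p_s = softmax(f_s)
    return -np.sum(p_s * np.log(p_s))

f = np.array([2.0, 0.5, -1.0])
# With identical teacher and student logits, soft pseudo-labeling
# reduces to entropy minimization, matching the "instantaneous
# teacher update" view in the text.
assert np.isclose(soft_pseudo_label_loss(f, f), entropy_minimization_loss(f))
```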
In the setting of domain adaptation, inaccuracies in the teacher predictions and, thus, the labels for the student are inescapable, with severe repercussions for training stability and hyperparameter sensitivity, as we show in the results. As a straightforward solution to this problem, we propose to replace the cross-entropy loss by a robust classification loss designed to withstand certain amounts of label noise (Ghosh et al., 2017; Song et al., 2020; Shu et al., 2020; Zhang & Sabuncu, 2018). A popular candidate is the Generalized Cross Entropy (GCE) loss, which combines the noise-tolerant Mean Absolute Error (MAE) loss (Ghosh et al., 2017) with the CE loss. We only consider the hard labels and use the robust GCE loss as the training loss for the student,

i = argmax_j p^t(j|x),  ℓ_GCE(x, i) := q^{−1}(1 − p^s(i|x)^q),  (4)

with q ∈ (0, 1]. For the limit case q → 0, the GCE loss approaches the CE loss, and for q = 1, the GCE loss is the MAE loss (Zhang & Sabuncu, 2018). We test updating the teacher both after every update step of the student (RPL) and once per epoch (RPL_ep). 3 EXPERIMENT DESIGN. Datasets. IN-C (Hendrycks & Dietterich, 2019) contains corrupted versions of the 50 000 images in the IN validation set. There are fifteen test and four hold-out corruptions, and there are five severity levels for each corruption. The established metric to report model performance on IN-C is the mean Corruption Error (mCE), where the error is normalized by the AlexNet error and averaged over all corruptions and severity levels; see Eq. 20, Appendix C.1. IN-R (Hendrycks et al., 2020a) contains 30 000 images with artistic renditions of 200 classes of the IN dataset. IN-A (Hendrycks et al., 2019) is composed of 7500 unmodified real-world images on which standard IN-trained ResNet50 (He et al., 2016b) models yield chance-level performance. CIFAR10 (Krizhevsky et al., 2009) and STL10 (Coates et al.
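The GCE loss of Eq. (4) and its two limit cases can be checked numerically with a small sketch (illustrative only; the choice q = 0.8 below is a hypothetical default, not prescribed by the text):

```python
# Generalized Cross Entropy (Eq. 4): l_GCE = (1 - p^q) / q, q in (0, 1],
# where p is the student probability of the teacher's hard label.
import numpy as np

def gce_loss(p_s_i, q=0.8):
    """GCE loss for a single sample; q interpolates between
    cross-entropy (q -> 0) and the noise-tolerant MAE loss (q = 1)."""
    return (1.0 - p_s_i ** q) / q

p = 0.7
# q = 1 recovers the MAE loss 1 - p:
assert np.isclose(gce_loss(p, q=1.0), 1.0 - p)
# q -> 0 approaches the cross-entropy loss -log p
# (since p^q = exp(q log p) ~ 1 + q log p for small q):
assert np.isclose(gce_loss(p, q=1e-6), -np.log(p), atol=1e-4)
```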
, 2011) are small-scale image recognition datasets with 10 classes each, and training sets of 50 000/5000 images and test sets of 10 000/8000 images, respectively. The digit datasets MNIST (Deng, 2012) and MNIST-M (Ganin et al., 2016) both have 60 000 training and 10 000 test images. Hyperparameters. The different self-learning variants have a range of hyperparameters such as the learning rate or the stopping criterion. Our goal is to give a realistic estimation of the performance to be expected in practice. To this end, we optimize hyperparameters for each variant of pseudo-labeling on a hold-out set of IN-C that contains four types of image corruptions ("speckle noise", "Gaussian blur", "saturate" and "spatter") with five different strengths each, following the procedure suggested in Hendrycks & Dietterich (2019). We refer to the hold-out set of IN-C as our dev set. Models for ImageNet-scale datasets. We consider four popular model architectures: ResNet50 (He et al., 2016b), DenseNet161 (Huang et al., 2017), ResNeXt101 (Xie et al., 2017) and EfficientNet-L2 (Tan & Le, 2019) (see Appendix B.1 for details on the used models). For ResNet50, DenseNet and ResNeXt101, we include a simple vanilla version trained on IN only. For ResNet50 and ResNeXt101, we additionally include a state-of-the-art robust version trained with DeepAugment and Augmix (DAug+AM, Hendrycks et al., 2020a). For the ResNeXt model, we also include a version that was trained on 3.5 billion weakly labeled images (IG-3.5B, Mahajan et al., 2018). Finally, for EfficientNet-L2 we select the current state of the art on IN-C, which was trained on 300 million images from JFT-300M (Chollet, 2017; Hinton et al., 2014) using a noisy student-teacher protocol (Xie et al., 2020a). We validate the IN and IN-C performance of all considered models and match the originally reported scores (Schneider et al., 2020).
For EfficientNet-L2 , we match IN top-1 accuracy up to 0.1 % points , and IN-C up to 0.6 % mCE . Models for CIFAR10/MNIST-scale datasets . For CIFAR10-C experiments , we use two WideResNets ( WRN , Zagoruyko & Komodakis , 2016 ) : the first one is trained on CIFAR10 and has a depth of 28 and a width of 10 and the second one is trained with AugMix ( Hendrycks et al. , 2020b ) and has a depth of 40 and a width of 2 . The remaining small-scale models are trained with unsupervised domain adaptation ( UDA ) methods . We propose to regard any UDA method which requires joint training with source and target data as a pre-training step , similar to regular pretraining on IN , and use self-learning on top of the final checkpoint . We consider two popular UDA methods : self-supervised domain adaptation ( UDA-SS ; Sun et al. , 2019a ) and Domain-Adversarial Training of Neural Networks ( DANN ; Ganin et al. , 2016 ) . In UDA-SS , the authors seek to align the representations of both domains by performing an auxiliary self-supervised task on both domains simultaneously . In all UDA-SS experiments , we use a WideResNet with a depth of 26 and a width of 16 . In DANN , the authors learn a domain-invariant embedding by optimizing a minimax objective . For all DANN experiments except for MNIST→MNIST-M , we use the same WRN architecture as above . For the MNIST→MNIST-M experiment , the training with the larger model diverged and we used a smaller WideResNet version with a width of 2 . We note that DANN training involves optimizing a minimax objective and is generally harder to tune .
This paper provides an in depth empirical evaluation of classical self-training techniques such as pseudo-labelling and entropy minimization on test performance under domain shifts. The authors stress that, although simple, these techniques consistently improve the robustness to distribution shifts regardless of model architecture or pre-training techniques used. This makes them especially useful to practitioners applying machine learning algorithms to real problems where distribution shifts are prevalent. The authors claim state-of-the-art adaptation results on a number of popular dataset corruption benchmarks, and present a new challenging dataset for evaluating the robustness of deep vision models.
Theoretical Analysis of Consistency Regularization with Limited Augmented Data
1 INTRODUCTION. Modern machine learning models, especially deep learning models, require abundant training samples. Since data collection and human annotation are expensive, data augmentation has been a ubiquitous practice in creating artificial labeled samples and improving generalization performance. This practice is corroborated by the fact that the semantics of images remain the same through simple transformations like obscuring, flipping, rotation, color jitter, and rescaling (Shorten & Khoshgoftaar, 2019). Conventional algorithms use data augmentation to expand the training data set (Krizhevsky et al., 2012; Simard et al., 1998; Cubuk et al., 2018; Simonyan & Zisserman, 2014; He et al., 2016). As an alternative, consistency regularization enforces the model to output similar predictions on the original and augmented samples and contributes to many recent state-of-the-art supervised or semi-supervised algorithms. This idea was first proposed in (Bachman et al., 2014), popularized by Laine & Aila (2016); Sajjadi et al. (2016), and gained more attention recently with the success of FixMatch (Sohn et al., 2020) for semi-supervised few-shot learning and AdaMatch (Berthelot et al., 2021) for domain adaptation. Several recent papers (see e.g. Chen et al., 2020a; Mei et al., 2021; Lyle et al., 2019) attempt to provide a theoretical understanding of data augmentation (DA); they focus on establishing that augmenting data saves on the number of labeled samples needed for the same level of accuracy. However, none of these explicitly compares, in an apples-to-apples way, the efficacy (in terms of the number of augmented samples) of one algorithmic choice of how to use the augmented samples vs. another. Another dimension unexplored in previous work is any characterization of the quality of augmentation.
In this paper, we hope to answer the following research question: Is it possible to develop a theoretical framework to compare the sample efficiency of different algorithms that use augmented data? We present a new theoretical framework that casts consistency regularization as a way to reduce function class complexity, which immediately connects to the well-established theory on generalization and gives rise to a generalization bound for consistency regularization under a general bounded loss function. When specialized to linear regression, this new theoretical framework shows that consistency regularization is strictly more sample efficient than empirical risk minimization (ERM) on the augmented dataset. In addition, using this framework, we also provide generalization bounds under consistency regularization for logistic regression, two-layer neural networks, and expansion-based data augmentations. In summary, our main contributions are:
• A statistical framework of consistency regularization. We first present a simple new statistical framework to analyze data augmentation, with a formal theoretical definition of data augmentation and its strength. We then use this framework to give a generalization bound of consistency regularization and provide instantiations for linear regression, logistic regression, two-layer neural networks, and expansion-based data augmentations.
• Theoretically proving the efficacy of consistency regularization. When specializing our framework with consistency regularization to linear/logistic regression, it yields a strictly smaller generalization error than ERM with the same augmented data.
• Empirical comparisons between consistency regularization and ERM. We perform experiments that make a clean and apples-to-apples comparison (i.e., with no extra modeling or data tweaks) between consistency regularization and ERM using CIFAR-100 and WideResNet.
Our empirical results demonstrate the superior efficacy of consistency regularization. 2 RELATED WORK. Empirical findings. Data augmentation (DA) has been an essential recipe for almost every state-of-the-art supervised learning algorithm since the seminal work of Krizhevsky et al. (2012) (see references therein and Simard et al., 1998; Cubuk et al., 2018; Simonyan & Zisserman, 2014; He et al., 2016; Kuchnik & Smith, 2018). It started from adding augmented data to the training samples via (random) perturbations, distortions, scales, crops, rotations, and horizontal flips. More sophisticated variants were subsequently designed; a non-exhaustive list includes Mixup (Zhang et al., 2017), Cutout (DeVries & Taylor, 2017), and CutMix (Yun et al., 2019). The choice of data augmentations and their combinations requires domain knowledge and experts' heuristics, which triggered automated search algorithms to find the best augmentation strategies (Lim et al., 2019; Cubuk et al., 2019). The effects of different DAs have been systematically explored in (Tensmeyer & Martinez, 2016). Recent practices not only add augmented data to the training set but also enforce the predictor outputs to be similar by adding consistency regularization (Bachman et al., 2014; Laine & Aila, 2016; Sohn et al., 2020). One benefit of consistency regularization is the feasibility of exploiting unlabeled data. Therefore, input consistency on augmented data also forms a major component of state-of-the-art algorithms for semi-supervised learning (Laine & Aila, 2016; Sajjadi et al., 2016; Sohn et al., 2020; Xie et al., 2020), self-supervised learning (Chen et al., 2020b), and unsupervised domain adaptation (French et al., 2017; Berthelot et al., 2021). Theoretical studies. Many interpret the effect of DA as some form of regularization (He et al., 2019). Some work focuses on linear transformations and linear models (Wu et al.
, 2020) or kernel classifiers (Dao et al., 2019). Convolutional neural networks by design enforce translation-equivariance symmetry (Benton et al., 2020; Li et al., 2019); further studies have hard-coded CNNs' invariance or equivariance to rotation (Cohen & Welling, 2016; Marcos et al., 2017; Worrall et al., 2017; Zhou et al., 2017), scaling (Sosnovik et al., 2019; Worrall & Welling, 2019) and other types of transformations. A line of work views data augmentation as invariant learning by averaging over group actions (Chen et al., 2020a; Mei et al., 2021; Lyle et al., 2019). They consider an ideal setting that is equivalent to ERM with all possible augmented data, bringing a clean mathematical interpretation. We are interested in a more realistic setting with limited augmented data. In this setting, it is crucial to utilize the limited data with proper training methods, the differences among which cannot be revealed under the previously studied settings. Some more recent work investigates the feature representation learning procedure with DA for self-supervised learning tasks (Wen & Li, 2021; HaoChen et al., 2021; von Kügelgen et al., 2021; Garg & Liang, 2020). Cai et al. (2021); Wei et al. (2021) studied the effect of data augmentation with label propagation. Data augmentation is also deployed to improve robustness (Rajput et al., 2019) and to facilitate domain adaptation and domain generalization (Cai et al., 2021; Sagawa et al., 2019). 3 DATA AUGMENTATION CONSISTENCY AND HOW IT LEARNS EFFICIENTLY. In this section, we first formally define data augmentation and introduce the problem setup. We then define data augmentation consistency (DAC) regularization and show how it effectively reduces the function class complexity, which connects to a generalization bound for bounded loss functions via Rademacher complexity.
Subsequently, we specialize our general result to linear regression, which firmly shows that DAC regularization provably learns more efficiently than minimizing the empirical risk on the augmented dataset. The following section will present more applications (including logistic regression, neural networks, etc.). 3.1 PROBLEM SETUP AND DATA AUGMENTATION. Consider the standard supervised learning problem setup: x ∈ X is an input feature and y ∈ Y is its label (or response). Let P* be the true distribution of (x, y) (i.e., the label distribution follows y ∼ P*(y|x)). We can then formally define data augmentation as: Definition 1 (Data augmentation). For any sample x ∈ X, we say x′ ∈ X is its augmentation if and only if P*(y|x) = P*(y|x′). The definition above specifies what it means for one input sample to be an augmentation of another. While the definition covers any x′ with the same label distribution as x, our results only use the augmented samples that can be achieved via certain transformations (e.g., random cropping, rotation). However, our definition does not cover augmentations that alter the labels (e.g., MixUp (Zhang et al., 2017)). Now we introduce the learning problem on an augmented dataset: Let (X, y) ∈ X^N × Y^N be a training set consisting of N i.i.d. samples. Besides the original (X, y), each training sample is provided with α augmented samples. The input features of the augmented dataset can be written as: Ã(X) = [x_1; ···; x_N; x_{1,1}; ···; x_{N,1}; ···; x_{1,α}; ···; x_{N,α}] ∈ X^{(1+α)N}, where x_i is in the original training set and x_{i,j}, ∀j ∈ [α], are the augmentations of x_i. The labels of the augmented samples are kept the same, which can be denoted as M̃y ∈ Y^{(1+α)N}, where M̃ ∈ R^{(1+α)N×N} is a vertical stack of (1+α) identity mappings.
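The construction of Ã(X) and M̃ can be illustrated with a small sketch (toy dimensions and a hypothetical noise augmentation, standing in for crops or rotations):

```python
# Build the augmented design matrix A(X) and label-replication map M:
# N originals followed by alpha rounds of augmentations, with labels
# copied unchanged (M y tiles y (1+alpha) times).
import numpy as np

rng = np.random.default_rng(0)
N, d, alpha = 4, 3, 2
X = rng.normal(size=(N, d))
y = rng.normal(size=N)

# Toy augmentation: small additive noise (a stand-in for real transforms).
def augment(samples):
    return samples + 0.01 * rng.normal(size=samples.shape)

A_X = np.vstack([X] + [augment(X) for _ in range(alpha)])  # ((1+alpha)N, d)
M = np.vstack([np.eye(N)] * (1 + alpha))                   # ((1+alpha)N, N)
y_aug = M @ y                                              # labels unchanged

assert A_X.shape == ((1 + alpha) * N, d)
assert np.allclose(y_aug[:N], y) and np.allclose(y_aug[N:2 * N], y)
```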
Specifically, when the inputs x_i are d-dimensional real vectors, we have the following notion of augmentation strength d_aug: Definition 2 (Strength of augmentations). For any δ ∈ (0, 1), let

d_aug(δ) ≜ argmax_{d_aug} { P_{Ã,X}[ rank(Ã(X) − M̃X) < d_aug ] ≤ δ },  d_aug ≜ d_aug(1/N).

Intuitively, the strength of augmentations d_aug(δ) means that with probability at least 1 − δ, the augmentations perturb at least d_aug(δ) dimensions; d_aug can in turn be understood as the minimum number of dimensions that the augmentations in Ã(X) perturb with high probability. A larger d_aug corresponds to stronger data augmentations. For instance, when Ã(X) = M̃X almost surely (e.g., when the augmentations are identical copies of the original samples, corresponding to the weakest augmentation – no augmentation at all), we have d_aug(δ) = d_aug = 0 for all δ ∈ (0, 1). On the other hand, if the augmentations are randomly generated, then it is more likely to see a larger d_aug (i.e., more dimensions being perturbed) with a larger α (i.e., more augmentations). In the next subsection, we formally introduce "data augmentation consistency regularization" and present a generalization bound under bounded loss functions. We subsequently specialize the bound to linear regression and show that consistency regularization is strictly more sample efficient than empirical risk minimization (ERM) on the augmented dataset.
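The rank quantity inside Definition 2 can be computed directly on a toy example (an illustrative sketch, not the authors' code; for a single draw of the augmentations it reads off how many directions were perturbed):

```python
# Compute rank(A(X) - M X) for one draw of the augmentations: the number
# of directions in which the augmented samples differ from the originals.
import numpy as np

def empirical_rank_perturbation(X, augmented_rounds):
    """rank(A(X) - M X), where each element of `augmented_rounds`
    is one round of augmentations of all N samples."""
    A_X = np.vstack([X] + list(augmented_rounds))
    M_X = np.vstack([X] * (1 + len(augmented_rounds)))
    return np.linalg.matrix_rank(A_X - M_X)

X = np.arange(12, dtype=float).reshape(4, 3)

# Identity "augmentation" (weakest case): A(X) = M X, so the rank is 0,
# matching d_aug = 0 in the text.
assert empirical_rank_perturbation(X, [X.copy()]) == 0

# Shifting only the first coordinate perturbs a single direction.
shifted = X.copy()
shifted[:, 0] += 1.0
assert empirical_rank_perturbation(X, [shifted]) == 1
```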
Data augmentation is a common technique to improve generalization, especially when data is scarce. This paper introduces a theoretical framework for analyzing the effectiveness of consistency regularization when data augmentation is employed. In the limit, consistency regularization is akin to solving a constrained optimization problem with consistency constraints. The paper theoretically studies this limit for linear regression, logistic regression, and a two-layer perceptron with ReLU activation, and tries to characterize the benefits of consistency regularization beyond that of vanilla data augmentation. The paper then continues to experiments where it is shown that consistency regularization outperforms data augmentation on three benchmarks, and the benefits are significant especially when labeled data is scarce.
This paper offers a theoretical analysis of training with data augmentation and the associated consistency loss. While it is intuitive that such training helps, the paper provides a formal justification of this intuition. The simple framework (viewing DAC as a hypothesis-space complexity reduction technique) is neat and intuitive.
Relative Molecule Self-Attention Transformer
1 INTRODUCTION. Predicting molecular properties is central to applications such as drug discovery and material design. Without accurate prediction of properties such as toxicity, a promising drug candidate is likely to fail clinical trials. Many molecular properties cannot be feasibly computed (simulated) from first principles and instead have to be extrapolated from an often small experimental dataset (Chan et al., 2019; Bender & Cortés-Ciriano, 2021). The prevailing approach is to train a machine learning model such as a random forest (Korotcov et al., 2017) or a graph neural network (Gilmer et al., 2017) from scratch to predict the desired property for a new molecule. Machine learning is moving away from training models purely from scratch. In natural language processing (NLP), advances in large-scale pretraining (Devlin et al., 2018; Howard & Ruder, 2018) and the development of the Transformer architecture (Vaswani et al., 2017) have culminated in large gains in data efficiency across multiple tasks (Wang et al., 2019a). Instead of being trained purely from scratch, models in NLP are commonly first pretrained on large unsupervised corpora. The chemistry domain might be at the brink of an analogous revolution, which could be especially transformative given the high cost of obtaining large experimental datasets. In particular, recent work has proposed the Molecule Attention Transformer (MAT), a Transformer-based architecture adapted to processing molecular data (Maziarka et al., 2020) and pretrained using self-supervised learning for graphs (Hu et al., 2020). Several works have shown further gains by improving the network architecture or the pretraining tasks (Chithrananda et al., 2020; Fabian et al., 2020; Rong et al., 2020). However, pretraining has not yet led to such transformative data-efficiency gains in molecular property prediction.
For instance, non-pretrained models with extensive handcrafted featurization tend to achieve very competitive results (Yang et al., 2019a). We reason that architecture might be a key bottleneck. In particular, most Transformers for molecules do not encode the three-dimensional structure of the molecule (Chithrananda et al., 2020; Rong et al., 2020), which is a key factor determining many molecular properties. On the other hand, performance has been significantly boosted in other domains by enriching the Transformer architecture with proper inductive biases (Dosovitskiy et al., 2021; Shaw et al., 2018; Dai et al., 2019; Ingraham et al., 2021; Huang et al., 2020; Romero & Cordonnier, 2021; Khan et al., 2021; Ke et al., 2021). Motivated by this perspective, we methodically explore the design space of the self-attention layer, a key computational primitive of the Transformer architecture, for molecular property prediction. In particular, we explore variants of relative self-attention, which has been shown to be effective in various domains such as protein design and NLP (Shaw et al., 2018; Ingraham et al., 2021). Our main contribution is a new self-attention layer for molecular graphs. We tackle the aforementioned issues with the Relative Molecule Attention Transformer (R-MAT), our pretrained Transformer-based model, shown in Figure 1. We propose Relative Molecule Self-Attention, a novel variant of relative self-attention, which allows us to effectively fuse distance and graph neighbourhood information (see Figure 2). Our model achieves state-of-the-art or very competitive performance across a wide range of tasks. Satisfyingly, R-MAT outperforms more specialized models without using extensive handcrafted featurization or adapting the architecture specifically to perform well on quantum prediction benchmarks.
The importance of effectively representing distance and other relationships in the attention layer is evidenced by large performance gains compared to MAT. An important inspiration behind this work was to unlock the potential of large pretrained models for the field, as they offer unique long-term benefits such as simplifying machine learning pipelines. We show that R-MAT can be trained to state-of-the-art performance while tuning only the learning rate. We also open-source weights and code as part of the Huggingmolecules package (Gaiński et al., 2021). 2 RELATED WORK. Pretraining coupled with the efficient Transformer architecture unlocked state-of-the-art performance in molecule property prediction (Maziarka et al., 2020; Chithrananda et al., 2020; Fabian et al., 2020; Rong et al., 2020; Wang et al., 2019b; Honda et al., 2019). The first applications of deep learning did not offer large improvements over more standard methods such as random forests (Wu et al., 2018; Jiang et al., 2021; Robinson et al., 2020). Consistent improvements were enabled by more efficient architectures adapted to this domain (Mayr et al., 2018; Yang et al., 2019a; Klicpera et al., 2020). In this spirit, our goal is to further advance modeling for any chemical task by redesigning self-attention for molecular data. Efficiently encoding the relations between tokens in self-attention has been shown to substantially boost the performance of Transformers in vision, language, music, and biology (Shaw et al., 2018; Dai et al., 2019; Ingraham et al., 2021; Huang et al., 2020; Romero & Cordonnier, 2021; Khan et al., 2021; Ke et al., 2021).
Vanilla self-attention includes an absolute encoding of position, which can hinder learning when the absolute position in the sentence is not informative.¹ Relative positional encoding featurizes the relative distance between each pair of tokens, which led to substantial gains in the language and music domains (Shaw et al., 2018; Huang et al., 2020). However, most Transformers for the chemical domain use no positional encoding in the self-attention layer (Chithrananda et al., 2020; Fabian et al., 2020; Rong et al., 2020; Wang et al., 2019b; Honda et al., 2019; Schwaller et al., 2019), which gives rise to similar issues with representing relations between atoms. We directly compare to Maziarka et al. (2020), who introduced the first self-attention module tailored to molecular data, and show large improvements across different tasks. Our work is also closely related to Ingraham et al. (2021), who used relative self-attention fusing three-dimensional structure with positional and graph-based embeddings, in the context of protein design. ¹This arises, for example, when the input is an arbitrary chunk of text (Huang et al., 2020) (e.g., in the next-sentence prediction task used in BERT pretraining). 3 RELATIVE MOLECULE SELF-ATTENTION. 3.1 MOLECULAR SELF-ATTENTIONS. We first give a short background on prior work adapting self-attention to molecular data, and point out potential shortcomings. Text Transformers. Multiple works have applied the Transformer directly to molecules encoded as text using the SMILES representation (Chithrananda et al., 2020; Fabian et al., 2020; Wang et al., 2019b; Honda et al., 2019; Schwaller et al., 2019). SMILES is a linear encoding of a molecule into a string of characters according to a deterministic ordering algorithm (Weininger, 1988; Jastrzębski et al., 2016). For example, the SMILES encoding of carbon dioxide is C(=O)=O.
Adding a single atom can completely change the ordering of atoms in the SMILES encoding. Hence, the relative positions of individual characters are not easily related to their proximity in the graph or in space. This is in contrast to natural language processing, where the distance between two words in a sentence can be highly informative (Shaw et al., 2018; Huang et al., 2020; Ke et al., 2021). We suspect this makes the use of self-attention in SMILES models less effective. Another readily visible shortcoming is that the graph structure and the distances between atoms of the molecule are either only implicitly encoded or entirely thrown out. Graph Transformers. Several works have proposed Transformers that operate directly on a graph (Maziarka et al., 2020; Rong et al., 2020; Nguyen et al., 2019). The GROVER and U2GNN models take as input a molecule encoded as a graph (Rong et al., 2020; Nguyen et al., 2019). In both of them, the self-attention layer does not have direct access to information about the graph. Instead, the information about relations between atoms (existence of a bond, or distance in the graph) is encoded indirectly by a graph convolutional layer, which GROVER runs within each layer and U2GNN runs only at the beginning. Similarly to Text Transformers, these Graph Transformers also do not take into account the distances between atoms. The Structured Transformer introduced in Ingraham et al. (2021) uses relative self-attention operating on amino acids in the task of protein design. The self-attention proposed by Ingraham et al. (2021), similarly to our work, provides the model with information about three-dimensional structure. Like R-MAT, which encodes the relative distances between pairs of atoms, the Structured Transformer uses relative distances between the modeled amino acids, though it encodes them in a slightly different way.
We incorporate their ideas and extend them to enable processing of molecular data. Molecule Attention Transformer. Our work is closely related to the Molecule Attention Transformer (MAT), a Transformer-based model with self-attention tailored to processing molecular data (Maziarka et al., 2020). In contrast to most of the aforementioned models, MAT incorporates distance information in its self-attention module. MAT stacks N Molecule Self-Attention blocks followed by mean pooling and a prediction layer. For a D-dimensional state x ∈ R^D, the standard (vanilla) self-attention operation is defined as A(x) = Softmax(QK^T/√d_k) V (1), where Q = xW^Q, K = xW^K, and V = xW^V. Molecule Self-Attention extends Equation (1) to include additional information about bonds and distances between atoms in the molecule as A(x) = ( λ_a Softmax(QK^T/√d_k) + λ_d g(D) + λ_g A ) V (2), where λ_a, λ_d, λ_g are the weights given to the individual parts of the attention module, g is either a softmax or the element-wise function g(d) = exp(−d), A is the adjacency matrix (with A(i, j) = 1 if there is a bond between atoms i and j, and 0 otherwise), and D is the distance matrix, where D(i, j) is the distance between atoms i and j in 3D space. Self-attention can relate input elements in a highly flexible manner. In contrast, there is little flexibility in how Molecule Self-Attention can use the information about the distance between two atoms: the strength of the attention between two atoms depends monotonically on their relative distance. However, molecular properties can depend in a highly nonlinear way on the distance between atoms. This has motivated works such as Klicpera et al. (2020) to explicitly model the interactions between atoms using higher-order terms.
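Equations (1) and (2) are simple to express in numpy. The sketch below uses random placeholder weights and a toy 4-atom chain (its adjacency matrix, atom positions, and the exp(−d) choice of g are illustrative assumptions, not real molecular data):

```python
import numpy as np

rng = np.random.default_rng(2)
n_atoms, d_model, d_k = 4, 8, 8

x  = rng.normal(size=(n_atoms, d_model))   # per-atom states
WQ = rng.normal(size=(d_model, d_k))
WK = rng.normal(size=(d_model, d_k))
WV = rng.normal(size=(d_model, d_k))

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

Q, K, V = x @ WQ, x @ WK, x @ WV
attn = softmax(Q @ K.T / np.sqrt(d_k))     # Eq. (1): vanilla attention weights

# Toy 4-atom chain 0-1-2-3: adjacency matrix and 3D distances along a line.
A_adj = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
pos = np.array([0.0, 1.5, 3.0, 4.5])
D_dist = np.abs(pos[:, None] - pos[None, :])

lam_a, lam_d, lam_g = 0.5, 0.25, 0.25      # mixing weights of Eq. (2)
mix = lam_a * attn + lam_d * np.exp(-D_dist) + lam_g * A_adj
out = mix @ V                              # A(x) of Eq. (2)
```

The exp(−D) term makes the contribution of the distance channel decay monotonically with distance, which is exactly the inflexibility the text criticizes: no learnable nonlinearity sits between the raw distance and the attention strength.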
The paper proposes a new Transformer architecture for pretraining on molecular datasets. Based on the Molecule Attention Transformer, the proposed model, R-MAT, incorporates a few handcrafted features into the self-attention layer of the Transformer architecture. The features capture distances between atoms from multiple perspectives. The experimental results show that the pretrained model is useful for predicting various properties of molecules.
This paper proposes a relative self-attention layer for the Transformer model. The relative self-attention between two atoms draws on their relative distance, their shortest-path distance in the molecular graph, and their physicochemical features. The proposed Relative Molecule Attention Transformer can first be pretrained with a contextual property prediction task and then a graph-level prediction task. The pretrained Transformer can be fine-tuned on downstream molecular property prediction tasks and achieves excellent performance.
Hyperspherical embedding for novel class classification
Deep neural networks have proved useful for learning representations and performing classification on many different modalities of data. Traditional approaches work well on the closed set problem. For learning tasks involving novel classes, known as the open set problem, the metric learning approach has been proposed. However, while promising, common metric learning approaches require pairwise learning, which significantly increases training cost while adding further challenges. In this paper we present a method in which the similarity of samples projected onto a feature space is enforced by a metric learning approach without requiring pairwise evaluation. We compare our approach against known methods on different datasets, achieving results up to 81% more accurate. 1 INTRODUCTION. Humans have the ability to identify many different types of objects (Fields, 2016). Even when we are not able to name a certain object, we can tell its differences from a second object, which contributes to identifying objects we have never seen before and grouping them into classes based on prior knowledge. Metric learning (Kaya & Bilge, 2019) is a well-adopted approach that identifies novel classes without fine-tuning a model on those classes. The approach applies an optimization strategy which guarantees that the classes a model has seen during optimization form disjoint clusters in the latent space according to a certain metric distance. Some common approaches that use this strategy are the triplet loss (Schroff et al., 2015), contrastive loss (Hadsell et al., 2006), prototypical networks (Snell et al., 2017), constellation loss (Medela & Picon, 2020), and matching networks (Vinyals et al., 2016), here referred to as distance-based learners.
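As a concrete instance of the distance-based learners listed above, the triplet loss of Schroff et al. (2015) penalizes an anchor embedding that is closer to a negative than to a positive by less than a margin, L = max(0, ‖a − p‖² − ‖a − n‖² + m). The tiny sketch below (embeddings and margin are illustrative placeholders of our own) shows the pairwise nature of the objective that the paper argues against:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # L = max(0, ||a - p||^2 - ||a - n||^2 + margin): pull the positive in,
    # push the negative out until the margin is satisfied.
    d_ap = np.sum((anchor - positive) ** 2)
    d_an = np.sum((anchor - negative) ** 2)
    return max(0.0, d_ap - d_an + margin)

a = np.array([1.0, 0.0])
p = np.array([0.9, 0.1])    # same class: close to the anchor
n = np.array([-1.0, 0.0])   # different class: far from the anchor
loss = triplet_loss(a, p, n)   # 0.0: this triplet already satisfies the margin
```

Every update needs a (anchor, positive, negative) triple, so the number of candidate training items grows combinatorially with the dataset, which is the scalability issue the paper targets.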
Another approach in metric learning is called similarity learning, where the model receives pairs of inputs and learns that they are similar if they belong to the same class and dissimilar otherwise, as discussed in Sung et al. (2018). During inference on novel classes, distance-based learners use labeled points of a novel class the model was not optimized on to obtain a latent-space representation for that class, and then compute the distance between new points and each class representation. With similarity-based learners, a similarity score is computed between every (class, query) pair of points in order to find the most similar pair. However, while enforcing metric properties on the latent space leverages the model's knowledge for novel classes, it requires pairwise learning, which limits the scalability of such approaches given the number of possible pairs. In this paper we consider the normalized softmax loss (NSL), proposed by Wang et al. (2018), and show how it enforces a latent space that obeys the cosine similarity. Based on this, we present a methodology for applying the NSL to the novel-class classification problem. Given a trained artificial neural network, we add a new neuron to its last layer and infer the weights that connect the penultimate layer of the network to this neuron. The connection and the new neuron are used to classify a novel class from a few labeled samples of it. Our approach to the open set problem allows us to classify new classes without fine-tuning the model. Instead, we use the same network parameters the model was optimized with to classify its seen classes, and only add a new neuron along with its inferred connections for new classes. We evaluate state-of-the-art approaches to the open set problem against our proposed approach, in both the disjoint and joint scenarios, on different datasets.
The experimental results show that our approach outperforms other metric learning strategies and, additionally, induces a more scalable training process, as it does not require pairwise learning, extending the open set technique to large datasets. The remainder of this paper is structured as follows. Section 2 presents theoretical background. Our methodology and how to classify new classes are described in Section 3. Next, we present results on the joint and disjoint open set problems in Section 4; moreover, we apply the NSL approach to a more complex dataset in the field of botany in Section 4.4 and compare our methods to incremental learning in Section 4.5. We then present related work and conclude in Section 6. 2 PRELIMINARIES. We are given a training dataset (x_i, y_i), i ∈ {1, ..., n}, where, for all i, the input x_i belongs to an input space X ⊂ R^d, e.g. the space of images, and the output y_i to an output space Y = {1, 2, ..., K}, the set of class labels, where K is the number of classes. Based on this training set, the aim is to find a classifier h : X → Y which produces a single prediction for each input and generalizes well to unseen samples x ∈ X. When this classifier is a deep neural network, h can typically be expressed as h(x) = argmax_k η̂_k(x), where η̂(x) = (η̂_1(x), ..., η̂_K(x)) is the vector of estimated class probabilities computed as η̂(x) = ψ(φ(x)), with φ : X → R^M a succession of layers computing an M-dimensional feature vector representation φ(x) for any input image x ∈ X, and ψ : R^M → R^K the final classification function, typically composed of a fully connected layer followed by a softmax activation function: ψ_k(z) = e^{w_k z + b_k} / Σ_{j=1}^{K} e^{w_j z + b_j} (1). 2.1 THE OPEN SET PROBLEM. A classification problem can be formulated as a closed set or an open set problem.
In the closed set context, the optimization process trains a model to learn features that classify samples into the classes present in the training set. The approach does not require identifying classes absent from the training set, and is commonly tackled using the softmax cross-entropy loss (He et al., 2016; Simonyan & Zisserman, 2015; Szegedy et al., 2015). In contrast, in the open set problem we are interested not only in identifying the classes present in the training set, but also in using the model to classify new classes by exploiting properties of the latent space obtained during optimization. 2.2 CLASSIFYING NEW CLASSES. When tackling the open set problem, we are interested in optimizing models whose full knowledge, obtained during optimization, can be exploited for classes outside of the training set. The usual softmax cross-entropy approach lacks the ability to extract features with this property: the weights w between the penultimate layer and the classification layer are as important as the latent representation z of the penultimate layer, as seen in Eq. 1, and the former are undefined for novel classes. The usual approaches for classifying novel classes come from metric learning. Metric learning strategies are attractive because they can identify novel classes; however, current strategies based on pairwise learning can be costly to optimize. In this paper we discuss a strategy that removes pairwise learning while still being able to define novel classes for a model. 2.3 NORMALIZED SOFTMAX LOSS. Proposed in Wang et al. (2018), the NSL (normalized softmax loss) is a modification of the softmax loss that enforces a cosine similarity metric between classes in the latent space.
It forces the features z projected into the latent space to lie on an M-dimensional hypersphere (M > 3), where each region of the sphere contains features belonging to a certain class. Looking again at the classical softmax equation (Eq. 1), the constraints induced by NSL are b_k = 0, ∀k; ‖w_k‖ = 1, ∀k; ‖z‖ = ‖φ(x)‖ = S, ∀x (2), and finally η̂_k(x) = ψ_k(φ(x)) = e^{w_k φ(x)} / Σ_{j=1}^{K} e^{w_j φ(x)} = e^{S·cos(w_k, φ(x))} / Σ_{j=1}^{K} e^{S·cos(w_j, φ(x))}, where cos(u, v) = u·v / (‖u‖·‖v‖) is the cosine similarity, i.e. the cosine of the angle between two vectors u and v. Note that the hyper-parameter S acts as a temperature of the normalized softmax, controlling the degree of concentration of the output probabilities η̂_k(x). A geometrical representation showing the relationship between the weights and the feature vectors obtained with NSL is given in Figure 2; one can see that the barycenter of the feature vectors is aligned with their corresponding class weights. 3 PROPOSED METHODOLOGY. In this paper we compare pairwise strategies, commonly used in metric learning, against the normalized softmax loss approach for the open set problem. We consider both the setting where, during inference, seen and unseen classes are disjoint, and the scenario where the model must identify the seen and unseen classes together. More formally, once the network has been trained, we would like to extend the output space to a new set of classes Y* = {K+1, ..., K+K*} for which we have only one or very few samples (x*_i, y*_i), i ∈ {1, ..., n*}. In particular, we would like to obtain a new classifier h* : X → Y* (disjoint scenario) or a new classifier h′ : X → Y ∪ Y* (joint scenario). Note that, in either scenario, we consider the function φ fixed, as well as the pre-trained weights w_k of the seen classes, ∀k ∈ {1, ..., K}.
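The NSL constraints above are easy to realize numerically: zero biases, unit-norm class weights, and features rescaled to norm S, so that the logits become S·cos(w_k, φ(x)). The sketch below uses random placeholder weights and features (sizes and values are illustrative assumptions, not trained parameters):

```python
import numpy as np

rng = np.random.default_rng(3)
K, M, S = 3, 16, 10.0                     # K classes, M-dim latent space, temperature S

W = rng.normal(size=(K, M))
W /= np.linalg.norm(W, axis=1, keepdims=True)   # enforce ||w_k|| = 1, b_k = 0

phi_x = rng.normal(size=M)                      # raw feature phi(x)
z = S * phi_x / np.linalg.norm(phi_x)           # enforce ||z|| = S

logits = W @ z                                  # = S * cos(w_k, phi(x))
eta = np.exp(logits - logits.max())
eta /= eta.sum()                                # class probabilities eta_hat_k(x)
```

Because both w_k and z/S are unit vectors, every logit lies in [−S, S]; raising S sharpens the output distribution, which is the temperature behavior described in the text.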
3.1 CLASSIFYING NEW CLASSES VIA NSL . Given that the function φ and the weights wk of the seen classes are fixed , our objective is reduced to optimizing the weights w∗k , ∀k ∈ { 1 , . . . , K∗ } of the unseen classes . Using the cross-entropy as the objective function , this can be expressed as : argmin w∗1 , ... , w ∗ K∗ n∗∑ i=1 −log ( η̂y∗i ( x ∗ i ) ) argmin w∗1 , ... , w ∗ K∗ n∗∑ i=1 −log e w∗y∗ i φ ( x∗i ) ∑K j=1 e wjφ ( x∗i ) + ∑K∗ j=1 e w∗jφ ( x ∗ i ) In the particular case where we have only one new class ( i.e . K∗ = 1 ) , this simplifies to : argmax w∗1 n∗∑ i=1 w∗1φ ( x ∗ i ) = argmax w∗1 w∗1 n∗∑ i=1 φ ( x∗i ) which leads , with the constraints of Eq . 2 , to : w∗1 = 1 n∗ n∗∑ i=1 φ ( x∗i ) ‖φ ( x∗i ) ‖ = 1 S.n∗ n∗∑ i=1 φ ( x∗i ) ( 3 ) The weight w∗1 of a new class can thus simply be computed by averaging the feature vectors of the images x∗i of the new class . This simple theoretical result does not hold anymore when there is more than one novel classes ( i.e . when K∗ > 1 ) . However , as we we will see in our experiments , using this estimation procedure for other new classes provides a good approximation of the exact optimal weights and is quite effective in practice . More formally , we propose to estimate the weights w∗k of each of K∗ new classes as : w∗k = 1 S ∑n∗ i=1 φ ( x ∗ i ) 1 ( y ∗ i = K + k ) ∑n∗ i=1 1 ( y ∗ i = K + k ) ( 4 ) In the joint scenario , we are interested in a classifier on both the seen classes and the new classes . This can be expressed as : hjoint ( x ) = n∗∑ i=1 max k ew ∗ kφ ( x ∗ i ) ∑K j=1 e wjφ ( x∗i ) + ∑K∗ j=1 e w∗jφ ( x ∗ i ) ( 5 ) where the wj and φ ( ) are pre-trained on the seen classes and the new weights w∗k are computed with Eq . 4 . 
In the disjoint scenario , we are interested in a classifier on the new classes only ( in a transfer learning way ) : hdisjoint ( x ) = n∗∑ i=1 max k ew ∗ kφ ( x ∗ i ) ∑K∗ j=1 e w∗jφ ( x ∗ i ) ( 6 ) where φ ( ) is pre-trained on the seen classes and the new weights w∗j are computed with Equ . 4 . A dataflow depicting our approach to infer the weights for novel classes is presented in Figure 1 .
This paper addresses classification of images in an open set setting. Data from new classes are introduced to the network after training on data from a fixed set of known classes. The goal is to correctly classify the old and new classes, either jointly or separately. The paper proposes a method for handling new classes by extending the classifier with one weight vector per new class. The network is trained with the normalized softmax loss, and new classifier weights are added by computing the center of mass of the features of each new class. The method is evaluated on the Fashion-MNIST, CIFAR and PlantNet datasets.
SP:609072c4e2753277ab90174dc3a7b66d03653498
Hyperspherical embedding for novel class classification
Deep neural networks have proved useful for learning representations and performing classification on many different data modalities. Traditional approaches work well on the closed set problem. For learning tasks involving novel classes, known as the open set problem, the metric learning approach has been proposed. However, while promising, common metric learning approaches require pairwise learning, which significantly increases training cost while adding additional challenges. In this paper we present a method in which the similarity of samples projected onto a feature space is enforced by a metric learning approach without requiring pairwise evaluation. We compare our approach against known methods on different datasets, achieving results up to 81% more accurate.

1 INTRODUCTION

Humans have the ability to identify many different types of objects (Fields, 2016). Even when we are not able to name a certain object, we can tell its differences from a second object, which contributes to identifying objects we have never seen before and grouping them into classes based on prior knowledge. Metric learning (Kaya & Bilge, 2019) is a well-adopted approach that identifies novel classes without fine-tuning a model on those classes. The approach applies an optimization strategy which guarantees that the classes a model has seen during optimization form disjoint clusters in the latent space according to a certain metric distance. Some common approaches that use this strategy are the triplet loss (Schroff et al., 2015), contrastive loss (Hadsell et al., 2006), prototypical networks (Snell et al., 2017), constellation loss (Medela & Picon, 2020), and matching networks (Vinyals et al., 2016), here referred to as distance-based learners.
Another approach in metric learning is called similarity learning, where the model receives pairs of inputs and learns that they are similar if they belong to the same class and dissimilar otherwise, as discussed in Sung et al. (2018). During inference on novel classes, distance-based learners use the labeled points of a novel class the model was not optimized on to obtain a representation of that class in the latent space, and then compute the distance between new points and each class representation. With similarity-based learners, a similarity score is computed between every (class, query) point pair in order to find the most similar pair. However, while enforcing metric properties on the latent space extends the model's knowledge to novel classes, it requires pairwise learning, which limits the scalability of such approaches given the number of possible pairs.

In this paper we consider the normalized softmax loss (NSL), proposed by Wang et al. (2018), and show how it enforces a latent space that obeys the cosine similarity. Based on this, we present a methodology to apply the NSL to the novel class classification problem. Given a trained artificial neural network, we add a new neuron to its last layer and infer the weights that connect the penultimate layer of the network to this neuron. The connection and the new neuron are used to classify a novel class from a few labeled samples of it. Our approach to the open set problem allows us to classify new classes without fine-tuning the model. Instead, we use the same network parameters the model was optimized upon to classify its seen classes, and only add a new neuron, along with its inferred connection, for each new class. We evaluate state-of-the-art approaches to the open set problem against our proposed approach, in both the disjoint and joint scenarios, on different datasets.
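As a concrete illustration of how distance-based learners classify novel classes at inference time, the following sketch (function names and toy embeddings are ours, not from any of the cited papers) builds one prototype per novel class by averaging support embeddings and assigns queries to the nearest one:

```python
import numpy as np

def build_prototypes(features, labels):
    # One prototype per novel class: the mean of its labeled support
    # embeddings (prototypical-network style).
    classes = np.unique(labels)
    protos = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def classify_by_distance(query, classes, prototypes):
    # Assign the query embedding to the class of the nearest prototype.
    dists = np.linalg.norm(prototypes - query, axis=1)
    return int(classes[np.argmin(dists)])

# Toy embeddings for two well-separated novel classes (labels 5 and 6).
feats = np.array([[0.0, 1.0], [0.1, 0.9], [1.0, 0.0], [0.9, 0.1]])
labs = np.array([5, 5, 6, 6])
cls, protos = build_prototypes(feats, labs)
pred = classify_by_distance(np.array([0.05, 0.95]), cls, protos)  # → 5
```

A similarity-based learner would instead score every (prototype, query) pair with a learned similarity function, which is where the pairwise cost arises.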
The experimental results show that our approach outperforms other metric learning strategies and, additionally, induces a more scalable training process, as it does not require pairwise learning, allowing the open set technique to deal with large datasets. The remainder of this paper is structured as follows. First, we present theoretical background in section 2. Our methodology and how to classify new classes are described in section 3. Next, we present results on the joint and disjoint open set problem in section 4, including the use of the NSL approach on a more complex dataset in the field of botany (section 4.4) and a comparison of our methods to incremental learning (section 4.5). We then present related work and, lastly, conclude in section 6.

2 PRELIMINARIES

We are given a training dataset (x_i, y_i)_{i∈{1,…,n}} where, for all i, the input x_i belongs to an input space X ⊂ ℝ^d, e.g. the space of images, and the output y_i to an output space Y = {1, 2, …, K}, the set of class labels, where K is the number of classes. Based on this training set, the aim is to find a classifier h: X → Y which produces a single prediction for each input and generalizes well to unseen samples x ∈ X. When this classifier is a deep neural network, h can typically be expressed as h(x) = argmax_k η̂_k(x), where η̂(x) = (η̂_1(x), …, η̂_K(x)) is the vector of estimated class probabilities computed as η̂(x) = ψ(φ(x)), with φ: X → ℝ^M a succession of layers computing an M-dimensional feature vector representation φ(x) for any input image x ∈ X, and ψ: ℝ^M → ℝ^K the final classification function, typically composed of a fully connected layer followed by a softmax activation function:

$$\psi_k(z) = \frac{e^{w_k \cdot z + b_k}}{\sum_{j=1}^{K} e^{w_j \cdot z + b_j}} \quad (1)$$

2.1 THE OPEN SET PROBLEM

A classification problem can be formulated as a closed set or an open set problem.
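The classifier h defined by Eq. 1 can be sketched in a few lines; the weights and feature vector below are illustrative values only, not from the paper:

```python
import numpy as np

def softmax_head(z, W, b):
    # psi of Eq. 1: a fully connected layer followed by a softmax.
    logits = W @ z + b               # w_k . z + b_k for every class k
    logits = logits - logits.max()   # stabilisation; softmax is shift-invariant
    p = np.exp(logits)
    return p / p.sum()

# Illustrative values: M = 3 features, K = 2 classes.
z = np.array([1.0, -0.5, 2.0])            # phi(x)
W = np.array([[0.2, 0.1, 0.3],
              [0.0, 0.4, -0.1]])
b = np.array([0.0, 0.1])
probs = softmax_head(z, W, b)             # eta_hat(x)
pred = int(np.argmax(probs))              # h(x) = argmax_k eta_hat_k(x)
```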
In the closed set context, the optimization process trains a model to learn features that classify samples into the classes present in the training set; identifying classes absent from the training set is not required. This is commonly tackled with the softmax cross-entropy loss (He et al., 2016; Simonyan & Zisserman, 2015; Szegedy et al., 2015). In contrast, in the open set problem we are interested not only in identifying the classes present in the training set, but also in using the model to classify new classes by exploiting properties of the latent space yielded during optimization.

2.2 CLASSIFYING NEW CLASSES

When tackling the open set problem, we want to optimize models whose full knowledge obtained during optimization can be exploited for classes outside the training set. The usual softmax cross-entropy approach lacks the ability to extract features with this property: the weights w between the penultimate layer and the classification layer are as important as the latent representation z of the penultimate layer, as seen in Eq. 1, and the former are undefined for novel classes. The usual approaches for classifying novel classes are explored in metric learning. Metric learning strategies are appealing because novel classes can be identified; however, current strategies based on pairwise learning can be costly to optimize. In this paper we discuss a strategy that removes pairwise learning while still being able to define novel classes for a model.

2.3 NORMALIZED SOFTMAX LOSS

Proposed in Wang et al. (2018), the normalized softmax loss (NSL) is a modification of the softmax loss that enforces a cosine similarity metric between classes in the latent space.
It forces the features z projected into the latent space to lie on an M-dimensional hypersphere (M > 3), where each region of the sphere contains the features belonging to a certain class. Looking again at the classical softmax equation (Eq. 1), the constraints induced by NSL are:

$$b_k = 0 \;\;\forall k, \qquad \|w_k\| = 1 \;\;\forall k, \qquad \|z\| = \|\varphi(x)\| = S \;\;\forall x \quad (2)$$

and finally

$$\hat\eta_k(x) = \psi_k(\varphi(x)) = \frac{e^{w_k \cdot \varphi(x)}}{\sum_{j=1}^{K} e^{w_j \cdot \varphi(x)}} = \frac{e^{S \cos(w_k,\, \varphi(x))}}{\sum_{j=1}^{K} e^{S \cos(w_j,\, \varphi(x))}}$$

where cos(u, v) = u·v / (‖u‖ ‖v‖) is the cosine similarity, i.e. the cosine of the angle between the two vectors u and v. Note that the hyper-parameter S acts as a temperature of the normalized softmax, controlling the degree of concentration of the output probabilities η̂_k(x). A geometrical representation of the relationship between the weights and the feature vectors obtained with NSL is shown in Figure 2: the barycenter of the feature vectors of a class is aligned with its corresponding class weights.

3 PROPOSED METHODOLOGY

In this paper we compare pairwise strategies, commonly used in metric learning, against the normalized softmax loss for the open set problem. We consider both the setting where, at inference time, seen and unseen classes are disjoint, and the setting where the model must identify seen and unseen classes together. More formally, once the network has been trained, we would like to extend the output space to a new set of classes Y* = {K+1, …, K+K*} for which we have only one or very few samples (x*_i, y*_i)_{i∈{1,…,n*}}. In particular, we would like to obtain a new classifier h*: X → Y* (disjoint scenario) or a new classifier h′: X → Y ∪ Y* (joint scenario). Note that, in either scenario, the function φ is fixed, as are the pre-trained weights w_k, ∀k ∈ {1, …, K} of the seen classes.
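Under the Eq. 2 constraints the logits reduce to S·cos(w_k, φ(x)), so the temperature S directly controls how peaked the output distribution is. A minimal sketch (toy weights and our own function name, applying the normalizations explicitly):

```python
import numpy as np

def nsl_probabilities(phi_x, W, S=16.0):
    # Apply the Eq. 2 constraints: unit-norm class weights, feature
    # rescaled to norm S, no biases. The logits become S * cos(w_k, phi(x)).
    w = W / np.linalg.norm(W, axis=1, keepdims=True)   # ||w_k|| = 1
    z = S * phi_x / np.linalg.norm(phi_x)              # ||z|| = S
    logits = w @ z
    logits = logits - logits.max()                     # numerical stabilisation
    p = np.exp(logits)
    return p / p.sum()

W = np.array([[1.0, 0.0], [0.0, 1.0]])       # toy class weights
phi = np.array([0.99, 0.1])                  # feature nearly aligned with class 0
p_sharp = nsl_probabilities(phi, W, S=16.0)  # large S: concentrated output
p_soft = nsl_probabilities(phi, W, S=1.0)    # small S: flatter output
```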
3.1 CLASSIFYING NEW CLASSES VIA NSL

Given that the function φ and the weights w_k of the seen classes are fixed, our objective reduces to optimizing the weights w*_k, ∀k ∈ {1, …, K*} of the unseen classes. Using the cross-entropy as the objective function, this can be expressed as:

$$\underset{w^*_1,\dots,w^*_{K^*}}{\arg\min} \sum_{i=1}^{n^*} -\log\big(\hat\eta_{y^*_i}(x^*_i)\big) = \underset{w^*_1,\dots,w^*_{K^*}}{\arg\min} \sum_{i=1}^{n^*} -\log \frac{e^{w^*_{y^*_i} \cdot \varphi(x^*_i)}}{\sum_{j=1}^{K} e^{w_j \cdot \varphi(x^*_i)} + \sum_{j=1}^{K^*} e^{w^*_j \cdot \varphi(x^*_i)}}$$

In the particular case where we have only one new class (i.e. K* = 1), this simplifies to

$$\underset{w^*_1}{\arg\max} \sum_{i=1}^{n^*} w^*_1 \cdot \varphi(x^*_i) = \underset{w^*_1}{\arg\max}\; w^*_1 \cdot \sum_{i=1}^{n^*} \varphi(x^*_i),$$

which leads, with the constraints of Eq. 2, to

$$w^*_1 = \frac{1}{n^*} \sum_{i=1}^{n^*} \frac{\varphi(x^*_i)}{\|\varphi(x^*_i)\|} = \frac{1}{S\,n^*} \sum_{i=1}^{n^*} \varphi(x^*_i) \quad (3)$$

The weight w*_1 of a new class can thus simply be computed by averaging the feature vectors of the images x*_i of the new class. This simple theoretical result no longer holds when there is more than one novel class (i.e. when K* > 1). However, as we will see in our experiments, using this estimation procedure for the other new classes provides a good approximation of the exact optimal weights and is quite effective in practice. More formally, we propose to estimate the weight w*_k of each of the K* new classes as:

$$w^*_k = \frac{1}{S} \cdot \frac{\sum_{i=1}^{n^*} \varphi(x^*_i)\, \mathbb{1}(y^*_i = K + k)}{\sum_{i=1}^{n^*} \mathbb{1}(y^*_i = K + k)} \quad (4)$$

In the joint scenario, we are interested in a classifier over both the seen and the new classes. This can be expressed as:

$$h_{\text{joint}}(x) = \underset{k \in \{1,\dots,K+K^*\}}{\arg\max}\; \frac{e^{\tilde w_k \cdot \varphi(x)}}{\sum_{j=1}^{K} e^{w_j \cdot \varphi(x)} + \sum_{j=1}^{K^*} e^{w^*_j \cdot \varphi(x)}} \quad (5)$$

where $\tilde w_k = w_k$ for the seen classes (k ≤ K) and $\tilde w_k = w^*_{k-K}$ for the new classes, the w_j and φ(·) are pre-trained on the seen classes, and the new weights w*_k are computed with Eq. 4.
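Eq. 4 amounts to taking, for each novel class, the mean of its support feature vectors and dividing by S (since ‖φ(x)‖ = S under the NSL constraints). A sketch with toy features that already satisfy the norm constraint (names and values are ours):

```python
import numpy as np

def estimate_new_weights(phi_new, y_new, S):
    # Eq. 4: the weight of novel class K+k is the class mean of phi(x*_i),
    # divided by S (because ||phi(x)|| = S under the NSL constraints).
    new_classes = np.unique(y_new)
    W_star = np.stack([phi_new[y_new == c].mean(axis=0) / S
                       for c in new_classes])
    return new_classes, W_star

S = 2.0
phi = np.array([[2.0, 0.0],      # support features with ||phi|| = S
                [0.0, 2.0],
                [0.0, 2.0]])
y = np.array([3, 4, 4])          # labels K+1, K+2 with K = 2 seen classes
cls, W_star = estimate_new_weights(phi, y, S)
```

No gradient step is needed: the new weights come from a single pass over the few support samples.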
In the disjoint scenario, we are interested in a classifier over the new classes only (in a transfer-learning manner):

$$h_{\text{disjoint}}(x) = \underset{k \in \{1,\dots,K^*\}}{\arg\max}\; \frac{e^{w^*_k \cdot \varphi(x)}}{\sum_{j=1}^{K^*} e^{w^*_j \cdot \varphi(x)}} \quad (6)$$

where φ(·) is pre-trained on the seen classes and the new weights w*_j are computed with Eq. 4. A dataflow depicting our approach to infer the weights for novel classes is presented in Figure 1.
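Because the softmax is monotone in the logits, both classifiers of Eqs. 5 and 6 reduce to an argmax over the relevant logits. A hedged sketch (toy weights, our own function name) of inference in the two scenarios:

```python
import numpy as np

def predict(phi_x, W_seen, W_new, S=16.0, joint=True):
    # Eqs. 5-6: the softmax denominator is shared across classes, so
    # classification reduces to an argmax over the logits.
    z = S * phi_x / np.linalg.norm(phi_x)
    if joint:   # Eq. 5: seen and new classes compete together
        logits = np.concatenate([W_seen @ z, W_new @ z])
    else:       # Eq. 6: new classes only (transfer-learning style)
        logits = W_new @ z
    return int(np.argmax(logits))

W_seen = np.array([[1.0, 0.0]])   # K = 1 pre-trained seen-class weight
W_new = np.array([[0.0, 1.0]])    # K* = 1 inferred novel-class weight
joint_pred = predict(np.array([0.1, 0.9]), W_seen, W_new)  # → 1 (the novel class)
```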
The paper proposes an approach to few-shot learning based on the CosFace (normalized softmax) loss. After pretraining, class weights are added to the cross-entropy loss for each new class in the test set; these are computed by averaging features over a support set while keeping the remaining network weights (the feature extractor) fixed. Experiments conducted on CIFAR-10 and Fashion-MNIST indicated gains over baseline approaches.
CUP: A Conservative Update Policy Algorithm for Safe Reinforcement Learning
1 INTRODUCTION

Reinforcement learning (RL) (Sutton & Barto, 1998) has achieved significant successes in many fields (Mnih et al., 2015; Silver et al., 2017; OpenAI, 2019; Afsar et al., 2021), including robotics (Deisenroth et al., 2013), playing Go (Silver et al., 2016; 2017), StarCraft (Vinyals et al., 2019), Dota (OpenAI, 2019), and recommender systems (Afsar et al., 2021). However, most RL algorithms improve performance under the assumption that an agent is free to explore any behavior. In real-world applications, considering only return maximization is not enough; we also need to ensure safe behavior. For example, a robot agent should avoid actions that irrevocably harm its hardware, and a recommender system should avoid presenting offending items to users. It is thus crucial to consider safe exploration for RL, which is usually formulated as a constrained Markov decision process (CMDP) (Altman, 1999). Solving CMDPs is challenging because traditional approaches (e.g., Q-learning (Watkins, 1989) and policy gradient (Williams, 1992)) usually violate the safe-exploration constraints, which is undesirable for safe RL. Recently, Achiam et al. (2017), Yang et al. (2020) and Bharadhwaj et al. (2021) suggested using surrogate functions to replace the objective and constraints. However, their implementations involve convex approximations to the non-convex objective and safety constraints, which introduces many sources of error. Concretely, Achiam et al. (2017), Yang et al. (2020) and Bharadhwaj et al. (2021) approximate the non-convex objective (or constraints) with first- or second-order Taylor expansions, but their implementations still lack a theory characterizing the error between the original objective (or constraints) and its convex approximations.
Besides, their approaches involve the inverse of a high-dimensional Fisher information matrix, which makes each update computationally costly when solving high-dimensional RL problems.

Our Main Work. To address the above problems, we propose the conservative update policy (CUP) algorithm with a theoretical safety guarantee. We derive CUP from newly proposed surrogate functions for the objective and constraints, and provide a practical implementation of CUP that does not depend on any convex approximation, making it suitable for high-dimensional safe RL. Concretely, in Section 3, Theorem 1 shows generalized difference bounds between two arbitrary policies for the objective and constraints. Those bounds provide principled approximations to the objective and constraints, which are the theoretical foundation for using them as surrogate functions when designing algorithms. Although using difference bounds to replace the objective or constraints has appeared in existing work (e.g., Kakade & Langford, 2002; Schulman et al., 2015; Achiam et al., 2017), Theorem 1 improves their bounds in at least two respects. (i) First, our rigorous theoretical analysis extends the bound to the generalized advantage estimator (GAE) (Schulman et al., 2016). GAE significantly reduces variance while maintaining a tolerable level of bias, which is one of the critical steps in designing the efficient algorithms of the later sections. Although Zhang et al. (2020) and Kang et al. (2021) have applied GAE to safe RL problems, their approaches are empirical and lack a theoretical analysis of GAE. Our result thus provides a theory that illustrates the effectiveness of that work (Zhang et al., 2020; Kang et al., 2021). (ii) Our new bounds refine classic difference bounds. For example, our bounds are more compact than those of Achiam et al. (2017), i.e.
using our new bounds as surrogate functions yields better local approximations to the objective and constraints. Besides, the surrogate functions derived from our new bounds are easier to estimate from samples than the approaches appearing in Kakade & Langford (2002) and Schulman et al. (2015); for more discussion, please see Remark 1. In Section 4, we provide the necessary details of the proposed CUP. CUP consists of two steps: it first performs a policy improvement, then projects the policy back onto the safe region to reconcile the constraint violation. Theorem 2 shows a lower bound on policy improvement and an upper bound on constraint violation for CUP at each update. Notably, Theorem 2 shows that the bound of CUP is more compact than those of state-of-the-art safe RL algorithms: CPO (Achiam et al., 2017, Propositions 1-2), PCPO (Yang et al., 2020, Theorem 1) and FOCOPS (Zhang et al., 2020), which partially explains why CUP performs so well in practice. For more discussion, please refer to Remark 2. Finally, we provide a practical sample-based implementation of CUP. This implementation allows us to train models with deep neural networks. Importantly, CUP does not depend on any convex approximation of the objective or constraints, and it optimizes the objective with a first-order optimizer. Extensive high-dimensional experiments on continuous control tasks show the effectiveness of CUP, with the agent satisfying the safety constraints.

2 PRELIMINARIES

Reinforcement learning (RL) (Sutton & Barto, 1998) is often formulated as a Markov decision process (MDP) (Puterman, 2014), a tuple M = (S, A, P, r, ρ₀, γ). Here S is the state space and A is the action space. P(s′|s, a) is the probability of transitioning from state s to s′ after playing a.
r(·): S × S × A → ℝ, and r(s′|s, a) denotes the reward the agent observes when the state transitions from s to s′ after it plays a. ρ₀(·): S → [0, 1] is the initial state distribution, and γ ∈ (0, 1). A stationary parameterized policy π_θ is a probability distribution defined on S × A; π_θ(a|s) denotes the probability of playing a in state s. We use Π_θ to denote the set of all stationary policies, Π_θ = {π_θ : θ ∈ ℝ^p}, where θ is the parameter to be learned. Let P_{π_θ} ∈ ℝ^{|S|×|S|} be the state transition probability matrix, with components

$$P_{\pi_\theta}[s, s'] = \sum_{a \in \mathcal{A}} \pi_\theta(a|s)\, P(s'|s, a) =: P_{\pi_\theta}(s'|s),$$

the one-step transition probability from s to s′ under π_θ. Let τ = {s_t, a_t, r_{t+1}}_{t≥0} ∼ π_θ be a trajectory generated by π_θ, where s₀ ∼ ρ₀(·), a_t ∼ π_θ(·|s_t), s_{t+1} ∼ P(·|s_t, a_t), and r_{t+1} = r(s_{t+1}|s_t, a_t). We use P_{π_θ}(s_t = s′|s) to denote the probability of visiting state s′ after t time steps starting from state s and executing π_θ. By the Markov property of the MDP, P_{π_θ}(s_t = s′|s) is the (s, s′)-th component of the matrix P^t_{π_θ}, i.e., P_{π_θ}(s_t = s′|s) = P^t_{π_θ}[s, s′]. Finally, let

$$d^{s_0}_{\pi_\theta}(s) = (1-\gamma) \sum_{t=0}^{\infty} \gamma^t\, P_{\pi_\theta}(s_t = s \mid s_0)$$

be the stationary state distribution of the Markov chain (starting at s₀) induced by policy π_θ. We define d^{ρ₀}_{π_θ}(s) = E_{s₀∼ρ₀(·)}[d^{s₀}_{π_θ}(s)] as the discounted state visitation distribution on the initial distribution ρ₀(·). The state value function of π_θ is V_{π_θ}(s) = E_{π_θ}[Σ_{t=0}^∞ γ^t r_{t+1} | s₀ = s], where E_{π_θ}[·|·] denotes a conditional expectation over actions selected by π_θ. Its state-action value function is Q_{π_θ}(s, a) = E_{π_θ}[Σ_{t=0}^∞ γ^t r_{t+1} | s₀ = s, a₀ = a], and its advantage function is A_{π_θ}(s, a) = Q_{π_θ}(s, a) − V_{π_θ}(s). The goal of reinforcement learning is to maximize J(π_θ):

$$J(\pi_\theta) = \mathbb{E}_{s \sim d^{\rho_0}_{\pi_\theta}(\cdot)}\big[V_{\pi_\theta}(s)\big]. \quad (1)$$
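For a small tabular MDP with a fixed policy, the quantities defined above can be computed in closed form: V_π solves the Bellman equation, and the discounted visitation is a geometric series in P_π. The transition matrix and rewards below are illustrative values only:

```python
import numpy as np

gamma = 0.9
# P_pi[s, s'] = sum_a pi(a|s) P(s'|s, a): a toy 2-state chain.
P_pi = np.array([[0.8, 0.2],
                 [0.3, 0.7]])
r_pi = np.array([1.0, 0.0])      # expected one-step reward under pi

# V_pi solves the Bellman equation V = r_pi + gamma * P_pi @ V.
V = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)

# Discounted visitation d^{s0}(s) = (1 - gamma) * sum_t gamma^t P(s_t = s | s0)
#                               = (1 - gamma) * e_{s0}^T (I - gamma P_pi)^{-1}.
s0 = np.array([1.0, 0.0])        # start deterministically in state 0
d = (1 - gamma) * s0 @ np.linalg.inv(np.eye(2) - gamma * P_pi)
```

Since each row of P_π sums to 1, the rows of (I − γP_π)^{-1} sum to 1/(1−γ), so d is a proper probability distribution.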
2.1 POLICY GRADIENT AND GENERALIZED ADVANTAGE ESTIMATOR (GAE)

Policy gradient methods (Williams, 1992; Sutton et al., 2000) are widely used for policy optimization; they maximize the expected total reward by repeatedly estimating the gradient g = ∇J(π_θ). Schulman et al. (2016) summarize several related expressions for the policy gradient:

$$g = \nabla J(\pi_\theta) = \mathbb{E}\Big[\sum_{t=0}^{\infty} \Psi_t\, \nabla \log \pi_\theta(a_t|s_t)\Big], \quad (2)$$

where Ψ_t can be the total discounted reward of the trajectory, the value function, the advantage function, or the temporal difference (TD) error. As stated by Schulman et al. (2016), the choice Ψ_t = A(s_t, a_t) yields almost the lowest possible variance, which is consistent with the theoretical analysis (Greensmith et al., 2004; Wu et al., 2018). Furthermore, Schulman et al. (2016) propose the generalized advantage estimator (GAE) $\hat A^{\text{GAE}(\gamma,\lambda)}_t(s_t, a_t)$ to replace Ψ_t: for any λ ∈ [0, 1],

$$\hat A^{\text{GAE}(\gamma,\lambda)}_t(s_t, a_t) = \sum_{\ell=0}^{\infty} (\gamma\lambda)^\ell\, \delta^V_{t+\ell}, \quad (3)$$

where δ^V_t = r_{t+1} + γV(s_{t+1}) − V(s_t) is the TD error and V(·) is an estimator of the value function. GAE is an efficient technique for data efficiency and reliable performance in reinforcement learning.

2.2 SAFE REINFORCEMENT LEARNING

Safe RL (Ray et al., 2019) is often formulated as a constrained MDP (CMDP) M ∪ C (Altman, 1999), which is a standard MDP M augmented with an additional constraint set C. The set C = {(c_i, b_i)}_{i=1}^m, where the c_i are cost functions c_i: S × A → ℝ and the b_i, i = 1, …, m, are limits. The cost-return is defined as J_{c_i}(π_θ) = E_{π_θ}[Σ_{t=0}^∞ γ^t c_i(s_t, a_t)], and the feasible policy set Π_C is Π_C = ∩_{i=1}^m {π_θ ∈ Π_θ : J_{c_i}(π_θ) ≤ b_i}. The goal of the CMDP is to find the optimal policy π* such that

$$\pi^* = \underset{\pi_\theta \in \Pi_C}{\arg\max}\; J(\pi_\theta). \quad (4)$$
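Over a finite trajectory, Eq. 3 can be computed with the standard backward recursion A_t = δ_t + γλ A_{t+1}; the rewards and value estimates below are made-up illustrative numbers:

```python
import numpy as np

def gae(rewards, values, gamma=0.99, lam=0.95):
    # Eq. 3 truncated to a finite trajectory, via the backward recursion
    # A_t = delta_t + gamma * lam * A_{t+1},
    # with TD errors delta_t = r_{t+1} + gamma * V(s_{t+1}) - V(s_t).
    deltas = rewards + gamma * values[1:] - values[:-1]
    adv = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = deltas[t] + gamma * lam * running
        adv[t] = running
    return adv

rewards = np.array([1.0, 0.0, 1.0])
values = np.array([0.5, 0.4, 0.3, 0.0])  # V(s_0), ..., V(s_3); terminal V = 0
adv = gae(rewards, values)
```

Setting λ = 0 recovers the one-step TD error (low variance, more bias); λ = 1 recovers the Monte-Carlo advantage (high variance, no bias).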
Furthermore, we define value functions, action-value functions, and advantage functions for the auxiliary costs in analogy to V_{π_θ}, Q_{π_θ}, and A_{π_θ}, with c_i replacing r; we denote them V^{c_i}_{π_θ}, Q^{c_i}_{π_θ}, and A^{c_i}_{π_θ}. For example, V^{c_i}_{π_θ}(s) = E_{π_θ}[Σ_{t=0}^∞ γ^t c_i(s_t, a_t) | s₀ = s]. Without loss of generality, we restrict our discussion to the case of one constraint with cost function c and upper bound b. Finally, we extend the GAE to the auxiliary cost function c:

$$\hat A^{\text{GAE}(\gamma,\lambda)}_{C,t}(s_t, a_t) = \sum_{\ell=0}^{\infty} (\gamma\lambda)^\ell\, \delta^C_{t+\ell}, \quad (5)$$

where δ^C_t = c(s_t, a_t) + γC(s_{t+1}) − C(s_t) is the cost TD error and C(·) is an estimator of the cost value function.
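The cost-return J_c(π) and the CMDP feasibility condition J_c(π) ≤ b can be estimated from sampled cost trajectories. This Monte-Carlo sketch (helper names are ours) only illustrates the definitions above, not the CUP update itself:

```python
import numpy as np

def discounted_return(per_step, gamma=0.99):
    # Discounted sum used for both J(pi) (over rewards) and J_c(pi) (over costs).
    return float(np.sum(gamma ** np.arange(len(per_step)) * per_step))

def is_feasible(cost_trajectories, b, gamma=0.99):
    # Monte-Carlo estimate of J_c(pi), then the CMDP constraint check J_c <= b.
    Jc = np.mean([discounted_return(c, gamma) for c in cost_trajectories])
    return bool(Jc <= b)

costs = [np.array([0.0, 1.0, 0.0]),  # toy sampled per-step cost trajectories
         np.array([0.0, 0.0, 0.0])]
feasible = is_feasible(costs, b=1.0)   # estimated J_c = 0.495 <= 1 → True
```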
The paper studies the problem of learning under constrained Markov decision processes. It proposes to solve this problem with an RL algorithm based on the generalized advantage estimator (GAE), which the authors call Conservative Update Policy (CUP). Although existing works have already proposed GAE-based algorithms, the current paper is the first to provide theoretical foundations for this type of algorithm. Specifically, it proves bounds on the performance difference for GAE-based algorithms and on the one-step improvement of CUP. Numerical experiments have been conducted to showcase the performance of the proposed algorithm.
SP:fe24152df3ec630eed6ec8adbf88d3a044e78123
CUP: A Conservative Update Policy Algorithm for Safe Reinforcement Learning
1 INTRODUCTION . Reinforcement learning ( RL ) ( Sutton & Barto , 1998 ) has achieved significant successes in many fields ( Mnih et al. , 2015 ; Silver et al. , 2017 ; OpenAI , 2019 ; Afsar et al. , 2021 ) , robotics ( Deisenroth et al. , 2013 ) , playing Go ( Silver et al. , 2016 ; 2017 ) , Starcraft ( Vinyals et al. , 2019 ) , Dota ( OpenAI , 2019 ) , and recommendation system ( Afsar et al. , 2021 ) . However , most RL algorithms improve the performance under the assumption that an agent is free to explore any behaviors . In real-world applications , only considering return maximization is not enough , and we also need to consider safe behaviors . For example , a robot agent should avoid playing actions that irrevocably harm its hardware , and a recommender system should avoid presenting offending items to users . Thus , it is crucial to consider safe exploration for RL , which is usually formulated as constrained Markov decision processes ( CMDP ) ( Altman , 1999 ) . It is challenging to solve CMDP since traditional approaches ( e.g. , Q-learning ( Watkins , 1989 ) & policy gradient ( Williams , 1992 ) ) usually violate the safe exploration constraints , which is undesirable for safe RL . Recently , Achiam et al . ( 2017 ) ; Yang et al . ( 2020 ) ; Bharadhwaj et al . ( 2021 ) suggest to use some surrogate functions to replace the objective and constraints . However , their implementations involve some convex approximations to the non-convex objective and safe constraints , which leads to many error sources and troubles . Concretely , Achiam et al . ( 2017 ) ; Yang et al . ( 2020 ) ; Bharadhwaj et al . ( 2021 ) approximate the non-convex objective ( or constraints ) with first-order or second Taylor expansion , but their implementations still lack a theory to show the error difference between the original objective ( or constraints ) and its convex approximations . 
Besides , their approaches involve the inverse of a high-dimension Fisher information matrix , which causes their algorithms to require a costly computation for each update when solving high-dimensional RL problems . Our Main Work . To address above problems , we propose the conservative update policy ( CUP ) algorithm with a theoretical safety guarantee . We derive the CUP bases on some new proposed surrogate functions with respect to objective and constraints and provide a practical implementation of CUP that does not depend on any convex approximation to adapt high-dimensional safe RL . Concretely , in Section 3 , Theorem 1 shows generalized difference bounds between two arbitrary policies for the objective and constraints . Those bounds provide principled approximations to the objective and constraints , which are theoretical foundations for us to use those bounds as surrogate functions to replace objective and constraints to design algorithms . Although using difference bound to replace objective or constraints has appeared in some existing works ( e.g. , ( Kakade & Langford , 2002 ; Schulman et al. , 2015 ; Achiam et al. , 2017 ) ) , Theorem 1 improves their bounds at least two aspects : ( i ) Firstly , our rigorous theoretical analysis extends the bound with respect to generalized advantage estimator ( GAE ) ( Schulman et al. , 2016 ) . GAE significantly reduces variance while maintains a tolerable level of bias , which is one of the critical steps for us to design efficient algorithms in the later section . Although Zhang et al . ( 2020 ) ; Kang et al . ( 2021 ) have applied GAE to solve safe RL problems , their approaches are empirical and lack a theoretical analysis with respect to GAE . Thus , our result provides a theory to illustrate the effectiveness of the work ( Zhang et al. , 2020 ; Kang et al. , 2021 ) . ( ii ) Our new bounds refine classic difference bounds . For example , our bounds are more compact than Achiam et al . ( 2017 ) , i , e. 
, using our new bounds as surrogate functions are better local approximations to the objective and constraints . Besides , the surrogate functions with respect to our new bounds are more accessible to be estimated from the samples than the approaches appears in ( Kakade & Langford , 2002 ; Schulman et al. , 2015 ) ) , for more discussions , please see Remark 1 . In Section 4 , we provide the necessary details of the proposed CUP . The CUP contains two steps : it performs a policy improvement at first , then it projects the policy back onto the safe region to reconcile the constraint violation . Theorem 2 shows a lower bound on policy improvement and an upper bound on constraint violation for CUP at each update . Notably , the result in Theorem 2 shows the bound of CUP is more compact than state-of-the-art safe RL algorithms : CPO ( Achiam et al. , 2017 , Proposition 1-2 ) , PCPO ( Yang et al. , 2020 , Theorem 1 ) and FOCOPS ( Zhang et al. , 2020 ) , which provides a partial explanation for why CUP is so good in practice . For more discussions , please refer to Remark 2 . Finally , we provide a practical implementation of sample-based CUP . Such an implementation allows us to use deep neural networks to train a model . Mainly , CUP does not depend on any convex approximation for objective and constraints , and it optimizes the objective according to the first-order optimizer . Extensive high-dimensional experiments on continuous control tasks show the effectiveness of CUP where the agent satisfies safe constraints . 2 PRELIMINARIES . Reinforcement learning ( RL ) ( Sutton & Barto , 1998 ) is often formulated as a Markov decision process ( MDP ) ( Puterman , 2014 ) that is a tupleM = ( S , A , P , r , ρ0 , γ ) . Here S is state space , A is action space . P ( s′ |s , a ) is probability of state transition from s to s′ after playing a. 
r ( · ) : S ×S ×A → R , and r ( s′|s , a ) denotes the reward that the agent observes when state transition from s to s′ after it plays a. ρ0 ( · ) : S → [ 0 , 1 ] is the initial state distribution and γ ∈ ( 0 , 1 ) . A stationary parameterized policy πθ is a probability distribution defined on S ×A , πθ ( a|s ) denotes the probability of playing a in state s. We use Πθ to denote the set of all stationary policies , where Πθ = { πθ : θ ∈ Rp } , and θ is a parameter needed to be learned . Let Pπθ ∈ R|S|×|S| be a state transition probability matrix , and their components are : Pπθ [ s , s ′ ] = ∑ a∈A πθ ( a|s ) P ( s′|s , a ) = : Pπθ ( s ′ |s ) , which denotes one-step state transformation probability from s to s′ by executing πθ . Let τ = { st , at , rt+1 } t≥0 ∼ πθ be a trajectory generated by πθ , where s0 ∼ ρ0 ( · ) , at ∼ πθ ( ·|st ) , st+1 ∼ P ( ·|st , at ) , and rt+1 = r ( st+1|st , at ) . We use Pπθ ( st = s ′ |s ) to denote the probability of visiting the state s ′ after t time steps from the state s by executing πθ . Due to the Markov property in MDP , Pπθ ( st = s ′ |s ) is ( s , s′ ) -th component of the matrix Ptπθ , i.e. , Pπθ ( st = s ′ |s ) = Ptπθ [ s , s ′ ] . Finally , let ds0πθ ( s ) = ( 1 − γ ) ∑∞ t=0 γ tPπθ ( st = s|s0 ) be the stationary state distribution of the Markov chain ( starting at s0 ) induced by policy πθ . We define dρ0πθ ( s ) = Es0∼ρ0 ( · ) [ d s0 πθ ( s ) ] as the discounted state visitation distribution on initial distribution ρ0 ( · ) . The state value function of πθ is defined as Vπθ ( s ) = Eπθ [ ∑∞ t=0 γ trt+1|s0 = s ] , where Eπθ [ ·|· ] denotes a conditional expectation on actions which are selected by πθ . Its state-action value function is Qπθ ( s , a ) = Eπθ [ ∑∞ t=0 γ trt+1|s0 = s , a0 = a ] , and advantage function is Aπθ ( s , a ) = Qπθ ( s , a ) − Vπθ ( s ) . The goal of reinforcement learning is to maximize J ( πθ ) : J ( πθ ) = Es∼dρ0πθ ( · ) [ Vπθ ( s ) ] . 
(1) 2.1 POLICY GRADIENT AND GENERALIZED ADVANTAGE ESTIMATOR (GAE). Policy gradient (Williams, 1992; Sutton et al., 2000) is widely used for policy optimization; it maximizes the expected total reward by repeatedly estimating the gradient $g = \nabla J(\pi_\theta)$. Schulman et al. (2016) summarize several related expressions for the policy gradient: $$g = \nabla J(\pi_\theta) = \mathbb{E}\Big[\sum_{t=0}^{\infty}\Psi_t\,\nabla\log\pi_\theta(a_t|s_t)\Big], \qquad (2)$$ where $\Psi_t$ can be the total discounted reward of the trajectory, the value function, the advantage function, or the temporal difference (TD) error. As stated by Schulman et al. (2016), the choice $\Psi_t = A(s_t, a_t)$ yields almost the lowest possible variance, which is consistent with the theoretical analysis (Greensmith et al., 2004; Wu et al., 2018). Furthermore, Schulman et al. (2016) propose the generalized advantage estimator (GAE) $\hat{A}^{\mathrm{GAE}(\gamma,\lambda)}_t(s_t, a_t)$ to replace $\Psi_t$: for any $\lambda\in[0,1]$, $$\hat{A}^{\mathrm{GAE}(\gamma,\lambda)}_t(s_t, a_t) = \sum_{\ell=0}^{\infty}(\gamma\lambda)^{\ell}\,\delta^V_{t+\ell}, \qquad (3)$$ where $\delta^V_t = r_{t+1} + \gamma V(s_{t+1}) - V(s_t)$ is the TD error and $V(\cdot)$ is an estimator of the value function. GAE is an effective technique for improving the data efficiency and reliability of reinforcement learning. 2.2 SAFE REINFORCEMENT LEARNING. Safe RL (Ray et al., 2019) is often formulated as a constrained MDP (CMDP) $\mathcal{M}\cup\mathcal{C}$ (Altman, 1999), which is a standard MDP $\mathcal{M}$ augmented with an additional constraint set $\mathcal{C}$. The set $\mathcal{C} = \{(c_i, b_i)\}_{i=1}^{m}$, where the $c_i: \mathcal{S}\times\mathcal{A}\to\mathbb{R}$ are cost functions and the $b_i$, $i = 1,\ldots,m$, are their limits. The cost-return is defined as $J_{c_i}(\pi_\theta) = \mathbb{E}_{\pi_\theta}[\sum_{t=0}^{\infty}\gamma^t c_i(s_t, a_t)]$, and the feasible policy set $\Pi_{\mathcal{C}}$ is defined as $\Pi_{\mathcal{C}} = \cap_{i=1}^{m}\{\pi_\theta\in\Pi_\theta : J_{c_i}(\pi_\theta)\le b_i\}$. The goal of CMDP is to find the optimal policy $\pi^\star = \arg\max_{\pi_\theta\in\Pi_{\mathcal{C}}} J(\pi_\theta)$.
(4) Furthermore, we define value functions, action-value functions, and advantage functions for the auxiliary costs in analogy to $V_{\pi_\theta}$, $Q_{\pi_\theta}$, and $A_{\pi_\theta}$, with $c_i$ replacing $r$; we denote them by $V^{c_i}_{\pi_\theta}$, $Q^{c_i}_{\pi_\theta}$, and $A^{c_i}_{\pi_\theta}$. For example, $V^{c_i}_{\pi_\theta}(s) = \mathbb{E}_{\pi_\theta}[\sum_{t=0}^{\infty}\gamma^t c_i(s_t, a_t) \mid s_0 = s]$. Without loss of generality, we restrict our discussion to the case of a single constraint with cost function $c$ and upper bound $b$. Finally, we extend the GAE to the auxiliary cost function $c$: $$\hat{A}^{\mathrm{GAE}(\gamma,\lambda)}_{C,t}(s_t, a_t) = \sum_{\ell=0}^{\infty}(\gamma\lambda)^{\ell}\,\delta^C_{t+\ell}, \qquad (5)$$ where $\delta^C_t = c_{t+1} + \gamma C(s_{t+1}) - C(s_t)$ is the cost TD error and $C(\cdot)$ is an estimator of the cost value function.
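A minimal sketch (variable names and finite-horizon truncation are our assumptions, not the paper's code) of computing the GAE of Eqs. (3) and (5) by the standard backward recursion $\hat A_t = \delta_t + \gamma\lambda\,\hat A_{t+1}$; the same routine serves both the reward critic $V$ and the cost critic $C$, since only the TD errors differ:

```python
import numpy as np

def gae(deltas, gamma=0.99, lam=0.95):
    """Eqs. (3)/(5): A_t = sum_l (gamma*lam)^l * delta_{t+l}, via backward recursion."""
    adv = np.zeros(len(deltas))
    running = 0.0
    for t in reversed(range(len(deltas))):
        running = deltas[t] + gamma * lam * running
        adv[t] = running
    return adv

def td_errors(signal, values, gamma=0.99):
    """delta_t = signal_{t+1} + gamma * phi(s_{t+1}) - phi(s_t), where `values`
    holds the critic phi along the trajectory (one extra bootstrap entry)."""
    signal, values = np.asarray(signal, float), np.asarray(values, float)
    return signal + gamma * values[1:] - values[:-1]
```

For the reward-GAE of Eq. (3) one would pass the observed rewards and a value-critic estimate; for the cost-GAE of Eq. (5), the observed costs and a cost-critic estimate.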
This paper considers reinforcement learning, a common model which has recently seen success in many areas (e.g. games, robotics, autonomous vehicles). Prototypical reinforcement learning algorithms explore the action space to maximize reward, potentially ignoring the side effects of the chosen actions or safety constraints. In many real-world scenarios, however, the algorithm designer or practitioner needs to enforce constraints on the actions selected in the environment (e.g. ensuring that an autonomous vehicle stays within the marked lanes on the road). This is typically done by introducing a cost function and adding a constraint that the resulting policy's long-run discounted cost is upper bounded by a given constant. While many RL algorithms have been designed for this setting, they mostly focus either on Lagrangian relaxations of the policy constraints or on convex approximations to the non-convex objectives (which in turn adds complexity in constructing higher-order moments of the objective for optimization). In contrast, the authors of this paper take a different view, via the following:

1. Construct tight upper and lower bounds on the difference in value functions for arbitrary policies with respect to an arbitrary function $\phi$ and a given additional discount parameter $\lambda$.
2. Instantiate the bound with $\phi$ taken as the value function to get a lower bound on the difference in value between any two policies.
3. Instantiate the bound with $\phi$ taken as the cost-value function to get an upper bound on the cost difference between any two policies.

Once these bounds are established, the algorithmic approach is simple.
In the first step, they perform a policy improvement step, maximizing the lower bound on the difference in value between the two policies. The second step is a projection step: after taking a Lagrangian relaxation of the constraint set, they minimize the upper bound on the difference in costs between the two policies. The authors show a guarantee of per-step improvement in reward and cost for this approach, assuming exact maximization / minimization of these objectives. To be more specific, the authors consider the typical RL model with an MDP characterized via $(S, A, P, r, \gamma)$, with the goal of maximizing the expected long-term reward $r$. However, there is an additional cost function $c$, and they add the constraint that the long-run discounted cost is upper bounded by a given constant $b$. The hope is that the learned policy picks actions which satisfy the long-run discounted cost constraint while simultaneously maximizing the expected reward. The authors first propose a novel bound generalizing the policy performance difference $J(\pi_\theta) - J(\pi_{\theta'})$ between any two policies. In particular, the difference is upper and lower bounded by two terms. The first term can be interpreted as an expectation of the difference between the TD errors of $\pi_\theta$ and $\pi_{\theta'}$, where the TD errors are computed for an arbitrary function $\phi$. The second term is the discounted distribution difference between the two policies. These bounds are then instantiated with different functions $\phi$ and parameters to obtain lower and upper bounds on the performance difference between the policies with respect to the costs and rewards of the problem. Once these lower and upper bounds on the performance difference are established, the algorithm follows naturally: maximize the reward lower bound and minimize the cost upper bound.
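For context, the bounds described above generalize the classical performance-difference identity of Kakade & Langford (2002); restated here as a standard result (not quoted from the paper), with $d^{\rho_0}_{\pi}$ the normalized discounted state visitation distribution:

```latex
J(\pi) - J(\pi') \;=\; \frac{1}{1-\gamma}\,
  \mathbb{E}_{s \sim d^{\rho_0}_{\pi},\; a \sim \pi(\cdot \mid s)}
  \big[\, A_{\pi'}(s, a) \,\big]
```

Replacing the advantage $A_{\pi'}$ by TD errors of an arbitrary function $\phi$ is what yields the two-sided bounds the review describes.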
This is then theoretically justified by a per-step improvement guarantee with respect to the two different objectives. To complement the algorithmic framework, the authors present a set of synthetic experiments comparing the efficiency and constraint violation of the resulting policies of their method and others in the literature. In particular, they test the policies on three different environments and compare to several other existing algorithms in the literature (more on this later). They show that their algorithm and approach dominate the others in terms of performance while simultaneously ensuring the cost constraints.
On Adversarial Bias and the Robustness of Fair Machine Learning
1 INTRODUCTION. Trustworthiness is a crucial requirement for machine learning algorithms in critical decision-making processes, as highlighted by many AI regulations and policies as well as technical research papers. Algorithmic fairness is at the core of trust requirements for automated decision making in sensitive domains. Group fairness measures, such as equal opportunity and equalized odds (Hardt et al., 2016), which are the focus of this paper, suggest equalizing the model's behavior across groups identified by a protected attribute (e.g., race) to avoid systemic discrimination against protected groups (Agarwal et al., 2018; Calders et al., 2009; Hardt et al., 2016; Madras et al., 2018). The main question we are interested in is whether fair models are trustworthy, and in particular robust with respect to changes to their training data. In this paper, we study if and how achieving group fairness can increase the susceptibility of machine learning models to a small fraction of adversarially-sampled outliers in the training set. A large body of work shows machine learning is vulnerable to noisy and adversarial data (Biggio et al., 2012; Chen et al., 2017; Jagielski et al., 2018; Koh & Liang, 2017; Li et al., 2016; Mei & Zhu, 2015a; Shafahi et al., 2018; Steinhardt et al., 2017; Suciu et al., 2018). Recent work studies the performance of fair machine learning in the presence of noisy training data with under-representation and labeling bias (Blum & Stangl, 2020; Calders & Žliobaitė, 2013; De-Arteaga et al., 2018; Jiang & Nachum, 2020; Kallus & Zhou, 2018; Lamy et al., 2019). Under the assumption of uniform noise and in the theoretical setting of an unlimited amount of training data, these works analyze the effect of noisy/biased training data on fair models.
Interestingly, Blum & Stangl (2020) show that ERM with equal opportunity can recover the Bayes-optimal classifier from biased data. In other words, fair algorithms are more robust to certain types of bias in the training dataset than standard learning algorithms (without fairness constraints). However, there has been little quantitative analysis of the interaction between group fairness and model robustness under realistic settings (finite training data and non-uniform noise). In this paper, we quantitatively measure the impact of group fairness on the robustness of the model under worst-case (adversarial) bias, which precisely aims to minimize the chance of recovering from biased data. To the best of our knowledge, this paper provides the first quantitative analysis of the robustness of fair machine learning algorithms in the adversarial setting. We assume the training data is biased through the addition of a small fraction of outliers that are adversarially sampled (and labeled) to degrade the test accuracy of fair models. We exploit the fact that algorithms with group-fairness constraints approximately equalize the influence of the different groups in the training set on the model. Equalizing the group influence consequently changes the influence of individual data samples across groups in a disproportionate way, due to differences in the size and distribution of the groups. Thus, the model's susceptibility to worst-case outliers largely depends on how the outliers are distributed across the different subgroups. We extensively evaluate the robustness of fair machine learning on multiple fairness algorithms (Hardt et al., 2016; Agarwal et al., 2018; Rezaei et al., 2020; Cotter et al., 2019; Zhang et al., 2018) and benchmark datasets (Dua & Graff, 2017; Larson et al.
, 2017; mep; ahr) to investigate how, why, and under what circumstances models with group fairness are more fragile with respect to adversarial bias than unconstrained models. We show that group fairness reduces robustness: models trained using various fair machine learning algorithms are all more susceptible to adversarial bias than unconstrained models. We observe this effect even in the most limited scenario, adversarial sampling of a small fraction of the training set without manipulating data features or labels. We find this is because adversarial bias amplifies the cost of fairness on model accuracy by placing the outliers into the smallest group with the least frequent label. It effectively reduces the best achievable accuracy for the smallest subgroup, limiting the fair models' accuracy on the minority. In this case, the model sacrifices its accuracy on the majority group to satisfy the fairness constraint, resulting in a significant accuracy loss on the overall dataset. Furthermore, we present the potential trade-offs between robustness and fairness. Finally, adversarial manipulation of the training data prevents the model from generalizing its fairness to clean test data, even though fairness is guaranteed on the training data. This results in models that, according to the fairness measure, are even more discriminatory than unconstrained models, but on a different part of the population. This work introduces a significant challenge for designing trustworthy machine learning algorithms. We emphasize that, as shown in our results, fair models trained on noisy data can be significantly unfair (with respect to the same fairness measures). Thus, sensitivity to changes in data undermines both the fairness and the accuracy of models. This calls for designing new fairness measures which are not inherently susceptible to noise. 2 BACKGROUND AND PROBLEM STATEMENT. Machine learning.
Consider a classifier $f_\theta: \mathcal{X}\to\mathcal{Y}$ that maps the feature space $\mathcal{X}$ to labels $\mathcal{Y}$. The model is represented by its parameters $\theta$, taken from a parameter space $\Theta$. The model is trained to minimize a loss function $\ell: \Theta\times\mathcal{X}\times\mathcal{Y}\to\mathbb{R}^+$ over its training set $D$. We let $X$ and $Y$ denote the random variables associated with the features and the labels, and $(X, Y)$ the underlying distribution of the data. We obtain the model parameters by solving $\min_{\theta\in\Theta}\frac{1}{|D|}L(\theta; D)$, where $L(\theta; D) = \sum_{(x,y)\in D}\ell(\theta; x, y)$ is the cumulative loss of the model over the training set $D$. Fairness. We assume all data points are split into groups based on a protected attribute $S\in\mathcal{S}$ (e.g., gender). This attribute may be part of the feature set $\mathcal{X}$. We focus on equalized odds, a widely-used notion of group fairness (Hardt et al., 2016).1 Following previous works (Agarwal et al., 2018; Donini et al., 2018), we say a classifier $f_\theta$ is $\delta$-fair under equalized odds if $$\Delta(\theta, D) := \max_{y\in\mathcal{Y},\,a,b\in\mathcal{S}}\Big|\Pr_{D}[f_\theta(X)\neq y \mid S=a, Y=y] - \Pr_{D}[f_\theta(X)\neq y \mid S=b, Y=y]\Big| \le \delta, \qquad (1)$$ where the probabilities are computed empirically over the training dataset $D$. We refer to $\Delta$ as the model's empirical fairness gap. A model satisfies exact equalized-odds fairness when $\delta = 0$. In practice, fairness is usually achieved by ensuring $\delta$-fairness empirically on the model's training set, e.g., by minimizing the model's empirical loss under $\delta$-fairness as a constraint (Agarwal et al., 2018) or by post-processing (Hardt et al., 2016). We define the constraint $C(\theta, D) := \Delta(\theta, D) - \delta \le 0$ as a fairness constraint. We refer to models learned with the fairness constraint as fair models, to distinguish them from unconstrained models that are learned without any fairness constraint. (1 Extensions and analysis for other group fairness metrics (i.e., equal opportunity (Hardt et al., 2016)) can be found in Appendix E.7.)
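A minimal sketch (the array-based interface and names are our assumptions, not the paper's code) of the empirical equalized-odds fairness gap $\Delta(\theta, D)$ of Eq. (1), i.e. the largest difference, over labels $y$ and group pairs, of the group-conditional error rates:

```python
import numpy as np

def fairness_gap(preds, labels, groups):
    """Empirical Delta: max over labels y and groups a, b of
    | Pr[f(X) != y | S=a, Y=y] - Pr[f(X) != y | S=b, Y=y] |."""
    gaps = []
    for y in np.unique(labels):
        # group-conditional error rate on examples with Y = y, one per group
        errs = [np.mean(preds[(groups == s) & (labels == y)] != y)
                for s in np.unique(groups)
                if np.any((groups == s) & (labels == y))]
        if len(errs) >= 2:
            gaps.append(max(errs) - min(errs))
    return max(gaps) if gaps else 0.0
```

A model would then be $\delta$-fair on $D$ exactly when `fairness_gap(preds, labels, groups) <= delta`.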
We quantify the performance of a model by its test accuracy and its fairness gap on the test dataset. Problem statement. The primary research question we investigate in this paper is whether, how, and why models with (equalized odds) group fairness are less robust to adversarial bias than unconstrained models. We consider a model robust if changing a small fraction of the training set does not significantly degrade its predictive power. To quantify robustness, we assume the biased training set $D$ is composed of the clean dataset $D_c$ of size $n$ and the adversarially chosen dataset $D_p$ of size $\bar{n}$. The clean training set $D_c$ and the test set $D_{test}$ are sampled from the same underlying distribution $(X, Y)$; we thus investigate the effect of the bias introduced into the training set through $D_p$. We consider two variations of bias: adversarial sampling and adversarial labeling. These are the worst-case sampling and labeling bias, where $D_p$ is chosen to maximize the loss of a model. Let $D_k$ be a dataset sampled from $(X, Y)$, similar to the clean data. In adversarial sampling, we choose outliers $D_p\subset D_k$; in adversarial labeling, we additionally allow crafting their labels, $D_p\subset\{(x, y'): (x, y)\in D_k,\, y'\in\mathcal{Y}\}$. In this setting, the maximum vulnerability of a fair model in the presence of adversarial bias can be formulated as a bi-level optimization problem subject to the fairness constraint $C(\theta, D)\le 0$: $$\max_{D_p}\;\mathbb{E}_{(X,Y)}\big[\ell(\hat\theta; X, Y)\big], \quad \text{where } \hat\theta := \arg\min_{\theta\in\Theta}\frac{L(\theta; D_c\cup D_p)}{|D_c\cup D_p|}\;\; \text{s.t. } C(\theta, D_c\cup D_p)\le 0, \qquad (2)$$ where the expectation is taken over the underlying (clean) data distribution. The outer maximization searches for the strongest $D_p$, which maximizes the expected loss of the fair model (given by $\hat\theta$). This fair model is obtained by solving the inner constrained minimization on the biased training dataset $D_c\cup D_p$.
The expected loss can be measured on a test set $D_{test}$ sampled from the data distribution $(X, Y)$. To generate the strongest $D_p$, we assume that knowledge of the learning algorithm (e.g., logistic regression, SVM) and the clean training dataset $D_c$ are available when generating $D_p$, but that the exact fair learning algorithm is unknown. This allows us to obtain an upper bound on the performance degradation incurred by the adversarial bias, and serves as a starting point towards understanding the maximal vulnerability. In addition, to investigate the effect of fairness constraints on model robustness, we evaluate the robustness of unconstrained models as a baseline. Adversarial bias against unconstrained models is equivalent to the problem of data poisoning attacks (we discuss this further in Section 6). 3 ADVERSARIAL BIAS. To find the strongest $D_p$ for evaluating the robustness of fair models, we need to solve the bi-level optimization problem (2), which is non-convex and intractable (Bard, 1991; Hansen et al., 1992; Deng, 1998). The fairness constraint makes the problem even more difficult.2 In this section, we explain how to approximate problem (2) to design effective adversarial strategies. In problem (2), it is hard to track the influence of $D_p$ on the test loss, as $D_p$ can affect the test loss only via the model's parameters. To make progress, we first approximate the loss on the test data by the loss on the clean training data, following the approximations used for designing poisoning attacks against unconstrained models (Steinhardt et al., 2017). Specifically, let $\hat\theta$ be the solution to the inner optimization problem; we have $L(\hat\theta; D_{test})/|D_{test}| \approx L(\hat\theta; D_c)/|D_c| \le L(\hat\theta; D_c\cup D_p)/|D_c|$. As long as the model has enough capacity to fit, but does not overfit, the training dataset (which can be achieved with appropriate regularization), the loss on the biased training dataset (i.e.
, the RHS of the inequality) provides a good approximation for the test loss, which allows us to explicitly measure the impact of the poisoning data. Therefore, we replace the objective of the inner minimization in Eq. (2) with $L(\theta; D_c\cup D_p)/n$, where $n = |D_c|$. The resulting optimization problem is still hard to solve due to the bi-level structure and the constraint in the inner optimization. To resolve this, we relax the inner constrained optimization by introducing a Lagrange multiplier $\lambda\in\mathbb{R}^+$: $$\min_{\theta\in\Theta}\Big[\frac{L(\theta; D)}{n}\;\text{s.t.}\; C(\theta, D)\le 0\Big] = \min_{\theta\in\Theta}\max_{\lambda\in\mathbb{R}^+}\Big(\frac{L(\theta; D)}{n} + \lambda C(\theta, D)\Big) \ge \max_{\lambda\in\mathbb{R}^+}\min_{\theta\in\Theta}\Big(\frac{L(\theta; D)}{n} + \lambda C(\theta, D)\Big),$$ where $D = D_c\cup D_p$. The inequality follows from the weak duality theorem. We can now find $D_p$ by maximizing the lower bound provided by the Lagrangian function $\min_{\theta\in\Theta}(L(\theta; D)/n + \lambda C(\theta, D))$ for a fixed $\lambda\in\mathbb{R}^+$. Indeed, maximizing this lower bound results in a solution with a high loss for the original problem (the loss is guaranteed to be at least the value of the lower bound). In this optimization procedure, we can also replace the fairness constraint $C(\theta, D) := \Delta(\theta, D) - \delta$ with the fairness gap $\Delta(\theta, D)$, because the constant value $\delta\ge 0$ does not affect the solution of the Lagrangian minimization. Finally, combining all the above-mentioned steps, the new optimization problem is: $$\max_{D_p}\min_{\theta\in\Theta}\Big(\frac{L(\theta; D_c\cup D_p)}{n} + \lambda\,\Delta(\theta, D_c\cup D_p)\Big).$$ (2 We would like to point out that, for the unconstrained model, under a convexity assumption on the loss function, it is possible to find an approximate solution by replacing the inner optimization with its stationarity (KKT) conditions (Biggio et al., 2012; Koh & Liang, 2017).)
(3) Thus, to solve problem (2), the alternative goal is to find $D_p$ that maximizes a linear combination of the training loss and the model's violation of the fairness constraint, where $\lambda$ controls the penalty for the violation. In light of these approximations, we design two algorithms to generate $D_p$. An Approximation for the Fairness Gap. Finding $D_p$ is an intractable combinatorial optimization problem because the fairness gap cannot be split into separate functions of individual data points; it is hard to track the influence of individual data points. To resolve this, we first find an additive proxy for the fairness gap: we substitute the fairness gap $\Delta$ by the average of an approximate contribution of each training data point to the fairness gap. This allows us to design an efficient sequential policy. More specifically, let $\{(x, y)\}^k$ be a multi-set with $k$ repetitions of a data point $(x, y)$; consequently, $D\cup\{(x, y)\}^k$ is equivalent to adding $k$ copies of $(x, y)$ to $D$. In this setting, for any data point $(x, y)\in D_p$, the quantity $\frac{1}{\bar{n}}\Delta(\theta, D_c\cup\{(x, y)\}^{\bar{n}})$, with $\bar{n} = |D_p|$, is a proxy for measuring the contribution of that data point to the fairness gap $\Delta(\theta, D_c\cup D_p)$. In other words, it measures how the fairness gap changes if $\bar{n}$ copies of $(x, y)$ are added to the clean data. Also, the maximum of $\Delta(\theta, D_c\cup\{(x, y)\}^{\bar{n}})$ over all data points $(x, y)\in D_p$ provides an upper bound on the fairness gap of the model when the size of $D_p$ is $\bar{n}$. Given this proxy for the contribution of each data point to the fairness gap, we obtain the following approximation: $\Delta(\theta, D_c\cup D_p) \approx \sum_{(x, y)\in D_p}\frac{1}{\bar{n}}\Delta(\theta, D_c\cup\{(x, y)\}^{\bar{n}})$.
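Stepping back to the Lagrangian relaxation above, the weak-duality (max-min) inequality it relies on can be checked numerically on a small grid of $(\theta, \lambda)$ values; the payoff matrix below is an arbitrary toy of ours, not from the paper:

```python
import numpy as np

# f[theta, lam] on a finite grid; weak duality says
#   min_theta max_lam f  >=  max_lam min_theta f.
f = np.array([[3.0, 1.0],
              [0.0, 2.0]])

minmax = f.max(axis=1).min()  # min over theta of max over lam
maxmin = f.min(axis=0).max()  # max over lam of min over theta
assert minmax >= maxmin       # here the gap is strict: 2.0 >= 1.0
```

The strict gap in this toy illustrates why the Lagrangian gives only a lower bound on the constrained problem, which is exactly the quantity the attack maximizes.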
By substituting the fairness gap with its proxy, we can now solve the following optimization problem: $$\max_{D_p}\min_{\theta\in\Theta}\Big(\frac{1}{n}L(\theta; D_c\cup D_p) + \frac{\lambda}{\bar{n}}\sum_{(x, y)\in D_p}\Delta(\theta, D_c\cup\{(x, y)\}^{\bar{n}})\Big) = \max_{D_p} M(D_p) =: M^*, \qquad (4)$$ where $M(D_p)$ is the loss incurred by a poisoning set $D_p$ on the fair model, and $M^*$ is the maximum loss of the fair model over all choices of $D_p$. Algorithm 1, a variant of the no-regret online gradient descent methods (Hazan, 2016), presents our solution to problem (4). It initializes a model $\theta_0\in\Theta$ and identifies the $\bar{n}$ points of $D_p$ iteratively. The feasible set of points $F(D_k)$ is determined by the adversarial bias setting: for adversarial sampling, $F(D_k) = D_k$, and for adversarial labeling, $F(D_k) = \{(x, y'): (x, y)\in D_k,\, y'\in\mathcal{Y}\}$. The algorithm iteratively performs the following steps: • Data point selection (Algorithm 1, line 5): it selects the data point with the highest impact on a weighted sum of the loss function and the fairness gap with respect to the model parameters $\theta_{t-1}$. • Parameter update (Algorithm 1, line 7): the parameters are updated to minimize the penalized loss function based on the selected data point $(x_t, y_t)$. In this way, the algorithm (through the approximations made via the Lagrange multiplier and the surrogate function) keeps track of the fair model under the set of already selected data points for $D_p$. In Theorem 1, following the approach proposed by Steinhardt et al. (2017), we relate the performance of Algorithm 1 to the maximum loss in Eq. (4). Moreover, in Appendix C.2, we prove that under some reasonable conditions (e.g., using assumptions similar to those made by Donini et al. (2018) to approximate the fairness gap), our algorithm finds a (nearly) optimal solution for Eq. (4). Theorem 1. Let $D^*_p$ be the dataset produced by Algorithm 1, and let $\mathrm{Regret}(\bar{n})$ be the regret of this online learning algorithm after $\bar{n}$ steps.
The performance of the algorithm is guaranteed by $$M^* - M(D^*_p) \le \frac{\mathrm{Regret}(\bar{n})}{\bar{n}}, \qquad (5)$$ where $M^*$ and $M(D^*_p)$ are the losses of the fair model under the optimal $D_p$ and under $D^*_p$, respectively.3 (3 The regret of a decision-maker is defined as the difference between the total cost incurred and that of the best fixed decision in hindsight.)

Algorithm 1: Online Gradient Descent Algorithm for Generating $D_p$ for Fair Models
1: Input: clean data $D_c$, $n = |D_c|$, feasible set $F(D_k)$, $\bar{n}$ (the size of $D_p$), penalty parameter (Lagrange multiplier) $\lambda$, learning rate $\eta$.
2: Output: $D_p$.
3: Initialize $\theta_0\in\Theta$.
4: for $t = 1, \ldots, \bar{n}$ do
5: $\quad (x_t, y_t) \leftarrow \arg\max_{(x, y)\in F(D_k)}\big[\ell(\theta_{t-1}; x, y) + \lambda\,\Delta(\theta_{t-1}, D_c\cup\{(x, y)\}^{\bar{n}})\big]$
6: $\quad D_p \leftarrow D_p\cup\{(x_t, y_t)\}$
7: $\quad \theta_t \leftarrow \theta_{t-1} - \eta\Big(\frac{\nabla L(\theta_{t-1}; D_c)}{n} + \nabla\big[\ell(\theta_{t-1}; x_t, y_t) + \lambda\,\Delta(\theta_{t-1}, D_c\cup\{(x_t, y_t)\}^{\bar{n}})\big]\Big)$
8: end for

The proof of the theorem is deferred to Appendix C. From the theorem, it is clear that when the average regret $\mathrm{Regret}(\bar{n})/\bar{n}$ is small, Algorithm 1 results in a nearly optimal $D_p$. A Surrogate Function for the Fair Model. The parameter update step in Algorithm 1 provides an approximation of the fair model (by adding the fairness constraint as a penalty and approximating the fairness gap). In contrast, our Algorithm 2 approximates the fair model using the unconstrained model. More specifically, Algorithm 2 iteratively adds data points that maximize a combination of the loss and the fairness gap, but evaluated on the unconstrained model. The rationale is that the points with the largest weighted sum of the loss and the fairness gap on the unconstrained model may still have a large weighted sum on the fair models. Algorithm 2 in Appendix C.3 presents the pseudo-code of this algorithm.
An advantage of Algorithm 2 is that it reduces the chance of getting stuck in local minima because, in each parameter update, it takes a step along the negative gradient of the exact unconstrained loss. This is in contrast with Algorithm 1, which, due to the difficulty of approximating a constrained max-min problem, might converge to parameters that are not close to the fair model at all. We should point out that the algorithm and objectives are similar to data poisoning attacks against unconstrained models when $\lambda = 0$ (Steinhardt et al., 2017); in that case, the fairness constraints are not exploited to introduce adversarial bias. In Section 5, we empirically show that $D_p$ generated by exploiting the fairness gap (i.e., $\lambda > 0$) incurs a higher test loss on fair models than the case where $\lambda = 0$.
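To make the selection-then-update structure of Algorithm 1 concrete, here is a small runnable sketch for a logistic-regression learner in the adversarial-sampling setting. Everything below is our own simplification, not the paper's code: the fairness gap is smoothed with sigmoid scores so the selection score is informative, the gradient-of-gap term in the parameter update (Algorithm 1, line 7) is dropped for brevity, and all function and variable names are ours:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def point_loss(theta, x, y):
    # logistic loss of a single point, label y in {0, 1}
    p = sigmoid(x @ theta)
    return -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def smooth_gap(theta, X, y, s):
    # smoothed equalized-odds gap: per label, the absolute difference of the
    # mean predicted error probability between groups 0 and 1
    err = np.where(y == 1, 1 - sigmoid(X @ theta), sigmoid(X @ theta))
    gap = 0.0
    for yy in (0, 1):
        cells = [err[(s == g) & (y == yy)].mean()
                 for g in (0, 1) if ((s == g) & (y == yy)).any()]
        if len(cells) == 2:
            gap = max(gap, abs(cells[0] - cells[1]))
    return gap

def algorithm1(Xc, yc, sc, Xk, yk, sk, n_poison, lam=1.0, eta=0.1):
    n = len(yc)
    theta = np.zeros(Xc.shape[1])
    poison = []
    for _ in range(n_poison):
        # line 5: pick the candidate maximizing loss + lam * fairness-gap proxy,
        # where the proxy adds n copies of the candidate to the clean data
        def score(i):
            Xa = np.vstack([Xc, np.repeat(Xk[i:i + 1], n, axis=0)])
            ya = np.concatenate([yc, np.full(n, yk[i])])
            sa = np.concatenate([sc, np.full(n, sk[i])])
            return point_loss(theta, Xk[i], yk[i]) + lam * smooth_gap(theta, Xa, ya, sa)
        i = max(range(len(yk)), key=score)
        poison.append(i)
        # line 7 (simplified): gradient step on clean loss plus the picked point's loss
        grad = Xc.T @ (sigmoid(Xc @ theta) - yc) / n \
             + Xk[i] * (sigmoid(Xk[i] @ theta) - yk[i])
        theta = theta - eta * grad
    return poison, theta
```

Because of the smoothing and the dropped gradient term, this is an illustration of the two alternating steps, not a faithful reimplementation of the attack.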
Intuitively, the same amount of data poisoning will have a larger impact when the learner solves a constrained optimization problem than when it solves an unconstrained one. This paper proposes an attack algorithm specifically designed for such fair learners and shows that fair learning algorithms are more vulnerable to adversarial data poisoning attacks. The problem studied in this paper is of high importance.
SP:9861ba00add665c624c186f948063da0cdff0cff
On Adversarial Bias and the Robustness of Fair Machine Learning
1 INTRODUCTION . Trustworthiness is a crucial requirement of machine learning algorithms in critical decision making processes , as highlighted by many AI regulations and policies as well as technical research papers . Algorithmic fairness is at the core of trust requirements for automated decision making in sensitive domains . Group fairness measures , such as equal opportunity and equalized odds ( Hardt et al. , 2016 ) which is the focus of this paper , suggest equalizing the model ’ s behavior across groups identified based on a protected attribute ( e.g. , race ) to avoid systemic discrimination against protected groups ( Agarwal et al. , 2018 ; Calders et al. , 2009 ; Hardt et al. , 2016 ; Madras et al. , 2018 ) . The main question that we are interested in is whether fair models are trustworthy , and in particular robust with respect to changes to their training data . In this paper , we study if and how achieving group fairness can increase the susceptibility of machine learning models to a small fraction of adversarially-sampled outliers in the training set . A large body of work shows machine learning is vulnerable to noisy and adversarial data ( Biggio et al. , 2012 ; Chen et al. , 2017 ; Jagielski et al. , 2018 ; Koh & Liang , 2017 ; Li et al. , 2016 ; Mei & Zhu , 2015a ; Shafahi et al. , 2018 ; Steinhardt et al. , 2017 ; Suciu et al. , 2018 ) . Recent work studies the performance of fair machine learning in the presence of noisy training data with under-representation and labeling bias ( Blum & Stangl , 2020 ; Calders & Žliobaitė , 2013 ; De-Arteaga et al. , 2018 ; Jiang & Nachum , 2020 ; Kallus & Zhou , 2018 ; Lamy et al. , 2019 ) . Under the assumptions of uniform noise and in the theoretical setting of having an unlimited number of training data , these works analyze the effect of noisy/biased training data on fair models . 
Interestingly , Blum & Stangl ( 2020 ) show that ERM with equal opportunity can recover the Bayes-optimal classifier from biased data . In other words , fair algorithms are more robust to certain types of bias in the training dataset than standard learning algorithms ( without fairness constraints ) . However , there has been little quantitative analysis of the interaction between group fairness and robustness of the model under realistic settings ( finite training data and non-uniform noise ) . In this paper , we quantitatively measure the impact of group fairness on the robustness of the model under worst-case ( adversarial ) bias , which exactly aims at minimizing the chance of recovering from biased data . To the best of our knowledge , this paper provides the first quantitative analysis for the robustness of fair machine learning algorithms in the adversarial setting . We assume the training data is biased , through adding a small fraction of outliers that are adversarially sampled ( and labeled ) to degrade the test accuracy of fair models . We exploit the fact that algorithms with group-fairness constraints approximately equalize the influence of different groups , in the training set , on the model . Equality of the group influence would consequently change the influence of individual data samples across different groups in a disproportionate way due to differences in size and distribution of the groups . Thus , the model ’ s susceptibility to worst-case outliers is largely dependent on how the outliers are distributed across different subgroups . We extensively evaluate the robustness of fair machine learning on multiple fairness algorithms ( Hardt et al. , 2016 ; Agarwal et al. , 2018 ; Rezaei et al. , 2020 ; Cotter et al. , 2019 ; Zhang et al. , 2018 ) and benchmark datasets ( Dua & Graff , 2017 ; Larson et al. 
, 2017 ; mep ; ahr ) to investigate how , why , and under what circumstances models with group fairness are more fragile with respect to adversarial bias compared to unconstrained models . We show that group fairness reduces robustness . Models trained using various fair machine learning algorithms are all more susceptible to adversarial bias compared with unconstrained models . We can observe this effect even for the case of the most limited scenario of adversarial data sampling for a small fraction of the training set , without manipulating the data features and labels . We notice this is because that adversarial bias amplifies the cost of fairness on model accuracy by placing the outliers into the smallest group with the least frequent label . It effectively reduces the best achievable accuracy for the smallest subgroup , limiting the fair models ’ accuracy on the minority . In this case , the model sacrifices its accuracy over the majority group to satisfy the fairness constraint . It results in a significant accuracy on the overall dataset . Furthermore , we present the potential trade-offs between robustness and fairness . Finally , adversarial manipulation of the training data prevents the model from generalizing its fairness to clean test data even though it is guaranteed on training data . This results in models that , according to the fairness measure , are even more discriminatory than unconstrained models , but on a different part of the population . This work introduces a significant challenge towards designing trustworthy machine learning algorithms . We emphasize that , as shown in our results , the fair models trained under noisy data could be significantly unfair ( with respect to the same fairness measures ) . Thus , sensitivity to changes in data undermines both fairness and accuracy of models . This calls for designing new fairness measures which are not inherently susceptible to noise . 2 BACKGROUND AND PROBLEM STATEMENT . Machine learning . 
Consider a classifier $f_\theta : \mathcal{X} \to \mathcal{Y}$ that maps the feature space $\mathcal{X}$ to labels $\mathcal{Y}$ . The model is represented by its parameters $\theta$ taken from a parameter space $\Theta$ . The model is trained to minimize a loss function $\ell : \Theta \times \mathcal{X} \times \mathcal{Y} \to \mathbb{R}^+$ over its training set $D$ . We let $X$ and $Y$ denote the random variables associated with the features and the labels , and $(X, Y)$ the underlying distribution of the data . We obtain the model parameters by solving $\min_{\theta \in \Theta} \frac{1}{|D|} L(\theta; D)$ , where $L(\theta; D) = \sum_{(x, y) \in D} \ell(\theta; x, y)$ is the cumulative loss of the model over the training set $D$ . Fairness . We assume all data points are split into groups based on a protected attribute $S \in \mathcal{S}$ ( e.g. , gender ) . This attribute could be part of the feature set $\mathcal{X}$ . We focus on equalized odds , which is a widely used notion of group fairness ( Hardt et al. , 2016 ) .¹ Following previous works ( Agarwal et al. , 2018 ; Donini et al. , 2018 ) , we say a classifier $f_\theta$ is $\delta$-fair under equalized odds if
$$\Delta(\theta, D) := \max_{y \in \mathcal{Y},\, a, b \in \mathcal{S}} \Big| \Pr_D\big[f_\theta(X) \neq y \mid S = a, Y = y\big] - \Pr_D\big[f_\theta(X) \neq y \mid S = b, Y = y\big] \Big| \le \delta , \quad (1)$$
where the probabilities are computed empirically over the training dataset $D$ . We refer to $\Delta$ as the model ' s empirical fairness gap . A model satisfies exact equalized odds fairness when $\delta = 0$ . In practice , fairness is usually achieved by ensuring $\delta$-fairness empirically on the model ' s training set , e.g. , through minimizing the model ' s empirical loss under $\delta$-fairness as a constraint ( Agarwal et al. , 2018 ) or post-processing ( Hardt et al. , 2016 ) . We define the constraint $C(\theta, D) := \Delta(\theta, D) - \delta \le 0$ as a fairness constraint . We refer to the models learned with the fairness constraint as fair models , to distinguish them from unconstrained models that are learned without any fairness constraint . ( ¹ Extension and analysis for other group fairness metrics ( i.e. , equal opportunity ( Hardt et al. , 2016 ) ) can be found in Appendix E.7 . )
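As a concrete illustration, the empirical fairness gap $\Delta$ of Eq. (1) can be computed directly from predictions; a minimal NumPy sketch (function and variable names are ours, not from the paper):

```python
import numpy as np

def equalized_odds_gap(y_pred, y_true, group):
    """Empirical equalized-odds gap: the largest difference, over labels y
    and pairs of groups (a, b), between the groups' conditional error rates
    Pr[f(X) != y | S = a, Y = y]."""
    gap = 0.0
    for y in np.unique(y_true):
        errors = []
        for a in np.unique(group):
            mask = (y_true == y) & (group == a)
            if mask.any():
                errors.append(np.mean(y_pred[mask] != y))
        if len(errors) >= 2:
            gap = max(gap, max(errors) - min(errors))
    return gap
```

A classifier is $\delta$-fair under equalized odds when this value is at most $\delta$.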
We quantify the performance of a model based on its test accuracy and its fairness gap on the test dataset . Problem statement . The primary research question we investigate in this paper is whether , how , and why models with ( equalized odds ) group fairness are less robust to adversarial bias than unconstrained models . We consider a model robust if changing a small fraction of the training set does not significantly degrade the predictive power of the model . Towards quantifying robustness , we assume the biased training set $D$ is composed of the clean dataset $D_c$ of size $n$ and an adversarially chosen dataset $D_p$ . The clean training set $D_c$ and the test set $D_{test}$ are sampled from the same underlying distribution $(X, Y)$ . We thus investigate the effect of the bias in the training set , which is introduced through $D_p$ . We consider two variations of bias : adversarial sampling and adversarial labeling . These are the worst-case sampling and labeling bias , where $D_p$ is chosen to maximize the loss of a model . Let $D_k$ be a dataset sampled from $(X, Y)$ , similar to the clean data . In adversarial sampling , we choose outliers $D_p \subset D_k$ , and in adversarial labeling , we choose them with the additional possibility of crafting their labels , $D_p \subset \{(x, y') : (x, y) \in D_k , y' \in \mathcal{Y}\}$ . Based on this setting , the maximum vulnerability of a fair model in the presence of adversarial bias can be formulated as a bi-level optimization problem subject to the fairness constraint $C(\theta, D) \le 0$ :
$$\max_{D_p} \; \mathbb{E}_{(X, Y)}\big[\ell(\hat\theta; X, Y)\big] , \quad \text{where } \hat\theta := \arg\min_{\theta \in \Theta} \frac{L(\theta; D_c \cup D_p)}{|D_c \cup D_p|} , \ \text{s.t. } C(\theta, D_c \cup D_p) \le 0 , \quad (2)$$
where the expectation is taken over the underlying ( clean ) data distribution . The outer maximization searches for the strongest $D_p$ that maximizes the expected loss of the fair model ( given by $\hat\theta$ ) . This fair model is obtained by solving the inner constrained minimization on the biased training dataset $D_c \cup D_p$ .
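The two candidate sets for the outliers (later denoted $F(D_k)$ in the paper) can be made concrete in a few lines; a small sketch (names are ours), assuming $D_k$ is a list of (x, y) pairs:

```python
def feasible_set(Dk, label_set, mode):
    """Candidate poison points: for adversarial sampling, the points of Dk
    themselves; for adversarial labeling, every relabeling of a point of Dk."""
    if mode == "sampling":
        return list(Dk)
    elif mode == "labeling":
        return [(x, y_new) for (x, _) in Dk for y_new in label_set]
    raise ValueError("mode must be 'sampling' or 'labeling'")
```

Note that adversarial labeling strictly generalizes adversarial sampling, since every original pair is among the relabelings.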
The expected loss can be measured on a test set $D_{test}$ sampled from the data distribution $(X, Y)$ . To generate the strongest $D_p$ , we assume that the learning algorithm ( e.g. , logistic regression , SVM ) and the clean training dataset $D_c$ are known when generating $D_p$ , but the exact fair learning algorithm is unknown . This allows us to obtain an upper bound on the performance degradation incurred by the adversarial bias and serves as a starting point towards understanding the maximal vulnerability . Besides , to investigate the effect of fairness constraints on model robustness , we evaluate the robustness of unconstrained models as a baseline . Adversarial bias against unconstrained models is equivalent to the problem of data poisoning attacks ( discussed further in Section 6 ) . 3 ADVERSARIAL BIAS . To find the strongest $D_p$ for evaluating the robustness of fair models , we need to solve the bi-level optimization problem ( 2 ) , which is non-convex and intractable ( Bard , 1991 ; Hansen et al. , 1992 ; Deng , 1998 ) . The fairness constraint makes the problem even more difficult .² In this section , we explain how to approximate problem ( 2 ) to design effective adversarial strategies . In problem ( 2 ) , it is hard to track the influence of $D_p$ on the test loss , as it can only affect the test loss via the model ' s parameters . To make progress , we first approximate the loss on the test data by the loss on the clean training data , following the approximations used for designing poisoning attacks against unconstrained models ( Steinhardt et al. , 2017 ) . Specifically , let $\hat\theta$ be the solution to the inner optimization problem ; we have $L(\hat\theta; D_{test})/|D_{test}| \approx L(\hat\theta; D_c)/|D_c| \le L(\hat\theta; D_c \cup D_p)/|D_c|$ . As long as the model has enough capacity to fit but does not overfit the training dataset ( which can be achieved by appropriate regularization ) , the loss on the biased training dataset ( i.e.
, the RHS of the inequality ) provides a good approximation of the test loss , which allows us to explicitly measure the impact of the poisoning data . Therefore , we replace the objective of the inner minimization in Eq . ( 2 ) with $L(\theta; D_c \cup D_p)/n$ , where $n = |D_c|$ . The resulting optimization problem is hard to solve due to the bi-level structure and the constraints in the inner optimization . To resolve this , we relax the inner constrained optimization by introducing a Lagrange multiplier $\lambda \in \mathbb{R}^+$ :
$$\min_{\theta \in \Theta} \Big[ \frac{L(\theta; D)}{n} , \ \text{s.t. } C(\theta, D) \le 0 \Big] = \min_{\theta \in \Theta} \max_{\lambda \in \mathbb{R}^+} \Big( \frac{L(\theta; D)}{n} + \lambda C(\theta, D) \Big) \ge \max_{\lambda \in \mathbb{R}^+} \min_{\theta \in \Theta} \Big( \frac{L(\theta; D)}{n} + \lambda C(\theta, D) \Big) ,$$
where $D = D_c \cup D_p$ . The inequality follows from the weak duality theorem . We can now find $D_p$ by maximizing the lower bound provided by the Lagrangian function $\min_{\theta \in \Theta} ( L(\theta; D)/n + \lambda C(\theta, D) )$ for a fixed $\lambda \in \mathbb{R}^+$ . Indeed , maximizing the lower bound provided by the Lagrangian function results in a solution with a high loss for the original problem ( guaranteed to be at least equal to the loss of the lower bound ) . In this optimization procedure , we can also replace the fairness constraint $C(\theta, D) := \Delta(\theta, D) - \delta$ with the fairness gap $\Delta(\theta, D)$ , because the constant $\delta \ge 0$ does not affect the solution of the Lagrangian . Finally , considering all the above-mentioned steps , the new optimization problem is :
$$\max_{D_p} \min_{\theta \in \Theta} \Big( \frac{L(\theta; D_c \cup D_p)}{n} + \lambda \Delta(\theta, D_c \cup D_p) \Big) . \quad (3)$$
( ² We would like to point out that , for the unconstrained model , under the convexity assumption on the loss function , it is possible to find an approximate solution by replacing the inner optimization with its stationarity ( KKT ) condition ( Biggio et al. , 2012 ; Koh & Liang , 2017 ) . )
Thus , to solve problem ( 2 ) , the alternative goal is to find $D_p$ that maximizes a linear combination of the training loss and the model ' s violation of the fairness constraint , where $\lambda$ controls the penalty for the violation . In light of these approximations , we design two algorithms to generate $D_p$ . An Approximation for the Fairness Gap . Finding $D_p$ is an intractable combinatorial optimization problem because the fairness gap cannot be split into separate functions of individual data points ; it is still hard to track the influence of individual data points . To resolve this , we first find an additive proxy for the fairness gap : we substitute the fairness gap $\Delta$ by the average of an approximate contribution of each training data point to the fairness gap . This allows us to design an efficient sequential policy . More specifically , let $\{(x, y)\}^k$ be a multi-set with $k$ repetitions of a data point $(x, y)$ . Consequently , $D \cup \{(x, y)\}^k$ is equivalent to adding $k$ copies of $(x, y)$ to $D$ . In this setting , for any data point $(x, y) \in D_p$ , the quantity $\frac{1}{n} \Delta(\theta, D_c \cup \{(x, y)\}^n)$ is a proxy for measuring the contribution of that data point to the fairness gap $\Delta(\theta, D_c \cup D_p)$ . In other words , it measures how the fairness gap changes if $n$ copies of $(x, y)$ are added to the clean data . Also , the maximum of $\Delta(\theta, D_c \cup \{(x, y)\}^n)$ over all data points $(x, y) \in D_p$ provides an upper bound on the fairness gap of the model when the size of $D_p$ is $n$ . Given this proxy for the contribution of each data point to the fairness gap , we obtain the following approximation : $\Delta(\theta, D_c \cup D_p) \approx \sum_{(x, y) \in D_p} \frac{1}{n} \Delta(\theta, D_c \cup \{(x, y)\}^n)$ .
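The per-point proxy $\Delta(\theta, D_c \cup \{(x, y)\}^n)$ can be evaluated by simply appending $n$ copies of the candidate point and re-measuring the gap of a fixed model; a sketch under the simplifying assumption of binary labels and two groups (helper names are ours):

```python
import numpy as np

def eo_gap(y_pred, y_true, s):
    # equalized-odds gap, assuming binary labels {0, 1} and two groups {0, 1}
    gap = 0.0
    for y in (0, 1):
        errs = [np.mean(y_pred[(y_true == y) & (s == a)] != y) for a in (0, 1)]
        gap = max(gap, abs(errs[0] - errs[1]))
    return gap

def gap_with_copies(predict, Xc, yc, sc, point, n):
    """Delta(theta, Dc ∪ {(x, y)}^n): fairness gap of a fixed model `predict`
    after n copies of one candidate point (x, y, s) are appended to Dc."""
    x, y, s_val = point
    X = np.vstack([Xc, np.tile(x, (n, 1))])
    Y = np.concatenate([yc, np.full(n, y)])
    S = np.concatenate([sc, np.full(n, s_val)])
    return eo_gap(predict(X), Y, S)
```

A production version should guard against empty label/group slices, which would produce NaN means here.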
By substituting the fairness gap with its proxy , we can now solve the following optimization problem :
$$\max_{D_p} \min_{\theta \in \Theta} \Big( \frac{1}{n} L(\theta; D_c \cup D_p) + \frac{\lambda}{n} \sum_{(x, y) \in D_p} \Delta(\theta, D_c \cup \{(x, y)\}^n) \Big) = \max_{D_p} M(D_p) := M^* , \quad (4)$$
where $M(D_p)$ is the loss incurred by a poisoning set $D_p$ on the fair model , and $M^*$ is the maximum loss of the fair model under any choice of $D_p$ . Algorithm 1 , a variant of the no-regret online gradient descent methods ( Hazan , 2016 ) , presents our solution to problem ( 4 ) . It initializes a model $\theta_0 \in \Theta$ and identifies $n$ points for $D_p$ iteratively . The feasible set of points $F(D_k)$ is determined by the adversarial bias setting : for adversarial sampling , we have $F(D_k) = D_k$ , and for adversarial labeling , $F(D_k) = \{(x, y') : (x, y) \in D_k , y' \in \mathcal{Y}\}$ . The algorithm iteratively performs the following steps :
• Data point selection ( Algorithm 1 , line 5 ) : select a data point with the highest impact on a weighted sum of the loss function and the fairness gap with respect to the model parameters $\theta_{t-1}$ .
• Parameter update ( Algorithm 1 , line 7 ) : update the parameters to minimize the penalized loss function based on the selected data point $(x_t, y_t)$ .
In this way , the algorithm ( through the approximations made by the Lagrange multiplier and the surrogate function ) keeps track of the fair model under the set of data points already selected for $D_p$ . In Theorem 1 , following the approach proposed by Steinhardt et al . ( 2017 ) , we relate the performance of Algorithm 1 to the maximum loss in Eq . ( 4 ) . Moreover , in Appendix C.2 , we prove that under some reasonable conditions ( e.g. , using assumptions similar to those of Donini et al . ( 2018 ) to approximate the fairness gap ) , our algorithm finds the ( nearly ) optimal solution for Eq . ( 4 ) . Theorem 1 . Let $D_p^*$ be the dataset produced by Algorithm 1 , and let $\text{Regret}(n)$ be the regret of this online learning algorithm after $n$ steps .
The performance of the algorithm is guaranteed by
$$M^* - M(D_p^*) \le \frac{\text{Regret}(n)}{n} , \quad (5)$$
where $M^*$ and $M(D_p^*)$ are the loss of the fair model under the optimal $D_p$ and under $D_p^*$ , respectively .³ ( ³ The regret of a decision-maker is defined as the difference between the total cost incurred and that of the best fixed decision in hindsight . )
Algorithm 1 : Online Gradient Descent Algorithm for Generating $D_p$ for Fair Models
1 : Input : clean data $D_c$ , $n = |D_c|$ , feasible set $F(D_k)$ , the size of $D_p$ , penalty parameter ( Lagrange multiplier ) $\lambda$ , learning rate $\eta$ .
2 : Output : $D_p$ .
3 : Initialize $\theta_0 \in \Theta$ .
4 : for $t = 1 , \dots , |D_p|$ do
5 : $\quad (x_t, y_t) \leftarrow \arg\max_{(x, y) \in F(D_k)} \big[ \ell(\theta_{t-1}; x, y) + \lambda \Delta(\theta_{t-1}, D_c \cup \{(x, y)\}^n) \big]$
6 : $\quad D_p \leftarrow D_p \cup \{(x_t, y_t)\}$
7 : $\quad \theta_t \leftarrow \theta_{t-1} - \eta \Big( \frac{\nabla L(\theta_{t-1}; D_c)}{n} + \nabla \big[ \ell(\theta_{t-1}; x_t, y_t) + \lambda \Delta(\theta_{t-1}, D_c \cup \{(x_t, y_t)\}^n) \big] \Big)$
8 : end for
The proof of the theorem is deferred to Appendix C. From the theorem , it is clear that when the average regret $\text{Regret}(n)/n$ is small , Algorithm 1 results in a nearly optimal $D_p$ . A Surrogate Function for the Fair Model . The parameter update step in Algorithm 1 provides an approximation of the fair model ( through adding the fairness constraint as a penalty and approximating the fairness gap ) . Differently , our Algorithm 2 approximates the fair model using the unconstrained model . More specifically , Algorithm 2 iteratively adds data points that maximize a combination of the loss and the fairness gap , but over the unconstrained model . The rationale for this approach is that the points with the largest weighted sum of the loss and the fairness gap on the unconstrained model may still have a large weighted sum on the fair models . Algorithm 2 in Appendix C.3 presents the pseudo-code of this algorithm .
An advantage of Algorithm 2 is that it reduces the chance of getting stuck in local minima because , in each parameter update , it makes a step towards the negative gradient of the exact unconstrained loss . This is in contrast with Algorithm 1 , where due to the difficulty of approximating a constrained max-min problem , it might converge to some parameters not close to the fair model at all . We should point out that the algorithm and objectives are similar to the data poisoning attacks against unconstrained models when λ = 0 ( Steinhardt et al. , 2017 ) . In this case , the fairness constraints are not exploited to introduce adversarial bias . In Section 5 , we empirically show that Dp generated by exploiting the fairness gap ( i.e. , λ > 0 ) can incur a higher test loss of fair models compared with the case where λ = 0 .
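Algorithm 1's selection-and-update loop can be sketched for logistic regression under adversarial sampling; with $\lambda = 0$ it reduces to a standard loss-driven poisoning baseline. In this sketch (names are ours), `gap_fn(theta, x, y)` is a caller-supplied oracle for the fairness-gap proxy, and for simplicity the parameter update descends the loss term only, treating the gap term as non-differentiable, which departs slightly from line 7:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(theta, x, y):
    # labels y in {0, 1}
    margin = (2 * y - 1) * np.dot(theta, x)
    return float(-np.log(sigmoid(margin) + 1e-12))

def generate_poison(Xc, yc, feasible, gap_fn, n_poison, lam=1.0, eta=0.1):
    """Online gradient descent sketch of Algorithm 1 (adversarial sampling)."""
    theta = np.zeros(Xc.shape[1])
    Dp = []
    for _ in range(n_poison):
        # line 5: pick the point with the largest loss + lam * gap proxy
        scores = [logistic_loss(theta, x, y) + lam * gap_fn(theta, x, y)
                  for (x, y) in feasible]
        xt, yt = feasible[int(np.argmax(scores))]
        Dp.append((xt, yt))
        # line 7 (simplified): descend the clean loss plus the selected point's loss
        grad = Xc.T @ (sigmoid(Xc @ theta) - yc) / len(yc)
        grad = grad + (sigmoid(np.dot(theta, xt)) - yt) * np.asarray(xt)
        theta -= eta * grad
    return Dp, theta
```

Duplicate selections are allowed by design: $D_p$ is a multi-set, so the same high-impact point may be picked repeatedly.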
This paper studies the problem of fair classification when the dataset is adversarially perturbed. In particular, the authors consider two models of adversarial perturbation: (1) adversarial sampling (outlier data points are chosen adversarially), and (2) adversarial labeling (labels of a fraction of data points are chosen adversarially). Based on a no-regret online gradient descent algorithm, the authors propose an algorithm that uses an existing fair classifier as an oracle and produces a classifier that is both fair and robust to adversarial perturbations. The main contribution of the paper is the evaluation of the robustness of fair classifiers. The main message of the empirical evaluation is the following. First, there is a large relative drop in accuracy for all fair models because of adversarial perturbations. Second, adversarial bias increases the accuracy gap across different groups, and thereby further increases the cost of fairness.
SHINE: SHaring the INverse Estimate from the forward pass for bi-level optimization and implicit models
1 INTRODUCTION . Implicit deep learning models such as Neural ODEs ( Chen et al. , 2018 ) , OptNets ( Amos and Zico Kolter , 2017 ) or Deep Equilibrium models ( DEQs ) ( Bai et al. , 2019 ; 2020 ) have recently emerged as a way to train deep models with infinite effective depth without the associated memory cost . Indeed , while it has been observed that the performance of deep learning models increases with their depth ( Telgarsky , 2016 ) , an increase in depth also translates into an increase in the memory footprint required for training , which is hardware-constrained . While other works such as invertible neural networks ( Gomez et al. , 2017 ; Sander et al. , 2021 ) or gradient checkpointing ( Chen et al. , 2016 ) also tackle this issue , implicit models bear an $O(1)$ memory cost , with constraints on the architecture that are usually not detrimental to the performance ( Bai et al. , 2019 ) . These models have been successfully applied to large-scale tasks such as language modeling ( Bai et al. , 2019 ) , computer vision ( Bai et al. , 2020 ) and inverse problems ( Gilton et al. , 2021 ; Heaton et al. , 2021 ) . In general , the formulation of DEQs can be cast as a bi-level problem of the following form :
$$\arg\min_\theta L(z^\star) \quad \text{subject to} \quad g_\theta(z^\star) = 0 . \quad (1)$$
We will refer to the root-finding problem $g_\theta(z^\star) = 0$ as the inner problem , and call its resolution the forward pass . On the other hand , we will refer to $\arg\min_\theta L(z^\star)$ as the outer problem , and call the computation of the gradient of $L(z^\star)$ w.r.t . $\theta$ the backward pass . The core idea of DEQs is that their output $z^\star$ is expressed as a fixed point of a parametric function $f_\theta$ from $\mathbb{R}^d$ to $\mathbb{R}^d$ , i.e. , $g_\theta(z^\star) = z^\star - f_\theta(z^\star) = 0$ .¹ This model is said to have infinitely many weight-tied layers , as $z^\star$ can be obtained by successively applying the layer $f_\theta$ infinitely many times , provided $f_\theta$ is contractive .
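The naive forward pass, repeatedly applying a contractive $f_\theta$ until the fixed point is reached, can be sketched as follows (this is the baseline that quasi-Newton root finding replaces):

```python
import numpy as np

def fixed_point(f, z0, tol=1e-8, max_iter=1000):
    """Iterate z <- f(z) until ||f(z) - z|| < tol; converges when f is contractive."""
    z = z0
    for _ in range(max_iter):
        z_next = f(z)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z
```

For example, the contraction $f(z) = 0.5\,z + 1$ has the fixed point $z = 2$.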
In practice , DEQs ' forward pass is not computed by successively applying the function but usually relies on quasi-Newton ( qN ) algorithms , such as Broyden ' s method ( Broyden , 1965 ) , which efficiently approximates the Jacobian matrix $\frac{\partial g_\theta}{\partial z}$ and its inverse for root-finding . To compute DEQs ' gradient efficiently and avoid a high memory cost , one does not rely on backpropagation but uses the implicit function theorem ( Krantz and Parks , 2013 ) , which gives an analytical expression for the Jacobian of $z^\star$ with respect to $\theta$ , $\frac{\partial z^\star}{\partial \theta}$ . While this method is memory-efficient , it requires the computation of matrix-vector products involving the inverse of a large Jacobian matrix , which is computationally demanding . To make this computation tractable , one needs to rely on an iterative algorithm based on vector-Jacobian products , which renders the training particularly slow , as highlighted by the original authors ( Bai et al. , 2020 ) ( see also the breakdown of the computational effort in Section E.4 ) . Moreover , the formulation ( 1 ) also allows us to consider general bi-level problems such as hyperparameter optimization under the same framework . For instance , hyperparameter optimization for Logistic Regression ( LR ) can be written as
$$\min_\theta L_{val}(z^*) \quad \text{subject to} \quad z^* = \arg\min_z r_\theta(z) := L_{train}(z) + \theta \|z\|_2^2 , \quad (2)$$
where $L_{train}$ and $L_{val}$ correspond to the training and validation losses of the LR problem ( Pedregosa , 2016 ) . Here , $z$ corresponds to the weights of the LR model while $\theta$ is the regularisation parameter . As the training loss is smooth and convex , the inner problem can be written as in ( 1 ) with $g_\theta = \nabla_z r_\theta$ .
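For the LR example, the inner root map of Eq. (2) is simply the gradient of the regularized training loss; a small sketch (function and argument names are ours), assuming binary labels in {0, 1}:

```python
import numpy as np

def g_theta(z, theta, X, y):
    """Root map g_theta(z) = grad_z r_theta(z) for Eq. (2), with
    r_theta(z) = L_train(z) + theta * ||z||_2^2 and logistic L_train."""
    p = 1.0 / (1.0 + np.exp(-(X @ z)))  # predicted probabilities
    return X.T @ (p - y) / len(y) + 2.0 * theta * z
```

The inner solution $z^*$ is then the root $g_\theta(z^*) = 0$, which solvers such as LBFGS or Broyden's method can find.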
Similarly to DEQ , the inner problem is often solved using qN methods , which approximate the inverse of the Hessian in the direction of the steps , such as the LBFGS algorithm ( Liu and Nocedal , 1989 ) , and the gradient computation suffers from the same drawback as it is also obtained using the implicit function theorem . Lorraine et al . ( 2020 ) review the different hypergradient approximations for bi-level optimization and evaluate them on multiple tasks . With the increasing popularity of DEQs and the ubiquity of bi-level problems in machine learning , a core question is how to reduce the computational cost of the resolution of ( 1 ) . This would make these methods more accessible for practitioners and reduce the associated energy cost . In this work , we propose to exploit the estimates of the ( inverse of the ) Jacobian/Hessian produced by qN methods in the hypergradient computation . Moreover , we also propose extra updates of the qN matrices which maintain the approximation property in the direction of the steps , and ensure that the inverse Jacobian is approximated in an additional direction . In effect , we can compute the gradient using the inverse of the final qN matrix instead of an iterative algorithm to invert the Jacobian in the gradient ’ s direction , while stressing that the inverse of a qN matrix , and thus the multiplication with it , can be computed very efficiently . We emphasize that the goal of this paper is neither to improve the algorithms used to compute z ? , nor is it to demonstrate how to perform the inversion of a matrix in a certain direction as a stand-alone task . Rather , we are describing an approach that combines the resolution of the inner problem with the computation of the hypergradient to accelerate the overall process . 
Our work is the first to consider modifying the inner problem resolution in order to account for the bi-level structure of the optimization . The idea of using additional updates of the qN matrices to ensure additional approximation properties is not new , and it is also known that a full matrix inversion can be accomplished in this way . For instance , Gower and Richtárik ( 2017 ) used sketching to design appropriate extra secant conditions in order to obtain guarantees of uniform convergence towards the inverse of the Jacobian . The novelty in our work is that we integrate additional updates to yield the inverse in a specific direction , which is substantially cheaper than computing the full inverse . A concurrent work by Fung et al . ( 2021 ) is also concerned with the acceleration of DEQs ' training , where the inverse Jacobian is approximated with the identity . Under strong contractivity and conditioning assumptions , it is proven that the resulting approximation is a descent direction , and the authors show good empirical performance for small-scale problems . The contributions of our paper are the following : ( ¹ Here , we do not explicitly write the dependence of $f_\theta$ on the input $x$ of the DEQ , usually referred to as the injection . )
• We introduce a new method to greatly accelerate the backward pass of DEQs ( and more generally , the differentiation of bi-level problems ) using qN matrices that are available as a by-product of the forward computations . We call this method SHINE ( SHaring the INverse Estimate ) .
• We enhance this method by incorporating knowledge from the outer problem into the inner problem resolution . This allows us to provide strong theoretical guarantees for this approach in various settings .
• We additionally showcase its use in hyperparameter optimization . Here , we demonstrate that it provides a gain in computation time compared to state-of-the-art methods .
• We test it for DEQs on the classification task on two datasets , CIFAR and ImageNet .
Here , we show that it decreases the training time while remaining competitive in terms of performance .
• We extend the empirical evaluation of the Jacobian-Free method to large-scale multiscale DEQs and show that it performs well in this setting . We also show that it is not suitable for more general bi-level problems .
• We propose and evaluate a natural refinement strategy for approximate Jacobian inversion methods ( both SHINE and Jacobian-Free ) that allows a trade-off between computational cost and performance .
2 HYPERGRADIENT OPTIMIZATION WITH APPROXIMATE JACOBIAN INVERSE .
2.1 SHINE : HYPERGRADIENT DESCENT WITH APPROXIMATE JACOBIAN INVERSE .
Algorithm 1 : qN method to solve $g_\theta(z^\star) = 0$
Result : root $z^\star$ , qN matrix $B$
$b$ = true if using Broyden ' s method , $b$ = false if using BFGS
$n = 0$ , $z_0 = 0$ , $B_0 = I$
while not converged do
  $p_n = -B_n^{-1} g_\theta(z_n)$ , $z_{n+1} = z_n + \alpha_n p_n$ // $\alpha_n$ can be 1 or determined by line-search
  $y_n = g_\theta(z_{n+1}) - g_\theta(z_n)$ , $s_n = z_{n+1} - z_n$
  if $b$ then $B_{n+1} = \arg\min_{X : X s_n = y_n} \|X - B_n\|_F$
  else $B_{n+1} = \arg\min_{X : X = X^\top \wedge X s_n = y_n} \|X^{-1} - B_n^{-1}\|$ // the norm used in BFGS is a weighted Frobenius norm
  $n \leftarrow n + 1$
end
$z^\star = z_n$ , $B = B_n$
Hypergradient Optimization . Hypergradient optimization is a first-order method used to solve ( 1 ) . We recall that in the case of smooth convex optimization , $\frac{\partial g_\theta}{\partial z}$ is the Hessian of the inner optimization problem , while for deep equilibrium models , it is the Jacobian of the root equation . In the rest of this paper , with a slight abuse of notation , we will refer to both of these matrices as $J_{g_\theta}$ whenever the results apply to both contexts . To enable hypergradient optimization , i.e . gradient descent on $L$ with respect to $\theta$ , Bai et al . ( 2019 , Theorem 1 ) show the following theorem , which is based on implicit differentiation ( Krantz and Parks , 2013 ) : Theorem 1 ( Hypergradient ( Bai et al. , 2019 ; Krantz and Parks , 2013 ) ) .
Let $\theta \in \mathbb{R}^p$ be a set of parameters , let $L : \mathbb{R}^d \to \mathbb{R}$ be a loss function and $g_\theta : \mathbb{R}^d \to \mathbb{R}^d$ be a root-defining function . Let $z^\star \in \mathbb{R}^d$ be such that $g_\theta(z^\star) = 0$ and $J_{g_\theta}(z^\star)$ is invertible . Then the gradient of the loss $L$ w.r.t . $\theta$ , called the hypergradient , is given by
$$\frac{\partial L}{\partial \theta}\Big|_{z^\star} = \nabla_z L(z^\star)^\top J_{g_\theta}(z^\star)^{-1} \frac{\partial g_\theta}{\partial \theta}\Big|_{z^\star} . \quad (3)$$
In practice , we use an algorithm to approximate $z^\star$ , and Theorem 1 gives a plug-in formula for the backward pass . Note that this formula is independent of the algorithm chosen to compute $z^\star$ . Moreover , as opposed to explicit networks , we do not need to store intermediate activations , resulting in the aforementioned training-time memory gain for DEQs . Once $z^\star$ has been obtained , one of the major bottlenecks in the computation of the hypergradient is the inversion of $J_{g_\theta}(z^\star)$ in the directions $\frac{\partial g_\theta}{\partial \theta}\big|_{z^\star}$ or $\nabla_z L(z^\star)$ . Quasi-Newton methods . In practice , the forward pass is often carried out with qN methods . For instance , in the case of bi-level optimization for Logistic Regression , Pedregosa ( 2016 ) used LBFGS ( Liu and Nocedal , 1989 ) , while for Deep Equilibrium Models , Bai et al . ( 2019 ) used Broyden ' s method ( Broyden , 1965 ) , later adapted to the multiscale case in a limited-memory version ( Bai et al. , 2020 ) . These quasi-Newton methods were first inspired by Newton ' s method , which finds the root of $g_\theta$ via the recurrent Jacobian-based updates $z_{n+1} = z_n - J_{g_\theta}(z_n)^{-1} g_\theta(z_n)$ . Specifically , they replace the Jacobian $J_{g_\theta}(z_n)$ by an approximation $B_n$ that is based on available values of the iterates $z_n$ and $g_\theta$ rather than its derivative . These $B_n$ , called qN matrices , are defined recursively via an optimization problem with constraints called secant conditions . Solving this problem leads to expressing $B_n$ as a rank-one or rank-two update of $B_{n-1}$ , so that $B_n$ is the sum of the initial guess $B_0$ ( in our settings , the identity ) and $n$ low-rank matrices ( fewer than $n$ in limited-memory settings ) .
This low-rank structure allows efficient multiplication by $B_n$ and $B_n^{-1}$ . We now explain how the use of qN methods as the inner solver can be exploited to resolve this computational bottleneck . SHINE . Roughly speaking , our proposition is to use $B^{-1} = \lim_{n \to \infty} B_n^{-1}$ as a replacement for $J_{g_\theta}(z^\star)^{-1}$ in ( 3 ) , i.e . to share the inverse estimate between the forward and the backward passes . This gives the approximate hypergradient
$$p_\theta = \nabla_z L(z^\star) B^{-1} \frac{\partial g_\theta}{\partial \theta}\Big|_{z^\star} . \quad (4)$$
In practice we will consider the non-asymptotic direction $p_\theta^{(n)} = \nabla_z L(z_n) B_n^{-1} \frac{\partial g_\theta}{\partial \theta}\big|_{z_n}$ . Thanks to the Sherman-Morrison formula ( Sherman and Morrison , 1950 ) , the inversion of $B_n$ can be done very efficiently ( using scalar products ) compared to the iterative methods needed to invert the true Jacobian $J_{g_\theta}(z^\star)$ . In turn , this significantly reduces the computational cost of the hypergradient computation . Relationship to the Jacobian-Free method . Because $B_0 = I$ in our setting , we may regard $B$ as an identity matrix perturbed by a few rank-one updates . In the directions that are used for updates , $B$ is going to differ from the identity , and hopefully be closer to the true Jacobian in those directions . However , in all orthogonal directions we fall exactly into the setting of the Jacobian-Free method introduced by Fung et al . ( 2021 ) . In that work , $J_{g_\theta}(z^\star)^{-1}$ is approximated by $I$ , and the authors highlight that this is equivalent to using a preconditioner on the gradient . Under strong assumptions on $g_\theta$ they show that this preconditioned gradient is still a descent direction . Transition to the exact Jacobian inverse . The approximate gradient $p_\theta^{(n)}$ can also be used as the initialization of an iterative algorithm for inverting $J_{g_\theta}(z^\star)$ in the direction $\nabla_z L(z^\star)$ . With a good initialization , faster convergence can be expected .
Moreover , if the iterative algorithm is also a qN method , which is the case in practice in the DEQ implementation , we can use the qN matrix B from the forward pass to initialize the qN matrix of this algorithm . We refer to this strategy as the refine strategy . Because the refine strategy is essentially a smart initialization scheme , it recovers all the theoretical guarantees of the original method ( Bai et al. , 2019 ; 2020 ; Pedregosa , 2016 ) .
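The key computational point, that multiplying by $B_n^{-1}$ needs only scalar products when $B_n = I + \sum_k u_k w_k^\top$ is built from rank-one updates, can be illustrated by accumulating Sherman-Morrison updates. A dense didactic sketch (names are ours; a practical limited-memory implementation would store only the update vectors, never a full matrix):

```python
import numpy as np

def apply_qn_inverse(rank_one_updates, v):
    """Apply B^{-1} to v where B = I + sum_k u_k w_k^T, accumulating the
    inverse via Sherman-Morrison instead of ever inverting B directly."""
    B_inv = np.eye(len(v))
    for u, w in rank_one_updates:
        Bu = B_inv @ u
        denom = 1.0 + w @ Bu
        # (B + u w^T)^{-1} = B^{-1} - B^{-1} u w^T B^{-1} / (1 + w^T B^{-1} u)
        B_inv = B_inv - np.outer(Bu, w @ B_inv) / denom
    return B_inv @ v
```

With no updates this reduces to the identity, i.e., exactly the Jacobian-Free approximation; each update moves the estimate closer to the true inverse Jacobian in one more direction.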
In implicit deep learning models such as deep equilibrium models, computing with the inverse Jacobian in the backward pass is computationally expensive. This paper proposes an interesting approach that combines information from the forward and backward passes to make an efficient estimate of the Jacobian inverse. In one approach, the authors propose to replace the Jacobian inverse in the backward update with the quasi-Newton matrix that is already estimated in the forward pass solved by a quasi-Newton method. Additionally, they propose an iterative update to the quasi-Newton matrix that steers its estimate toward the direction useful in the backward pass (they call this outer-problem awareness). They provide a theoretical analysis of the proposed method and show that, under certain conditions/assumptions, the forward pass still converges to the desired solution and the sequence of backward estimates converges to the loss gradient needed for parameter updates. They provide numerical results on bi-level optimization (regularized logistic regression) and on training DEQs for classification. In certain settings (bi-level optimization), they show that the method outperforms the Jacobian-free approach and matches the performance of the state of the art while being faster than all competitors. For DEQs, they show performance similar to the Jacobian-free method.
SHINE: SHaring the INverse Estimate from the forward pass for bi-level optimization and implicit models
1 INTRODUCTION . Implicit deep learning models such as Neural ODEs ( Chen et al. , 2018 ) , OptNets ( Amos and Zico Kolter , 2017 ) or Deep Equilibrium models ( DEQs ) ( Bai et al. , 2019 ; 2020 ) have recently emerged as a way to train deep models with infinite effective depth without the associated memory cost . Indeed , while it has been observed that the performance of deep learning models increases with their depth ( Telgarsky , 2016 ) , an increase in depth also translates into an increase in the memory footprint required for training , which is hardware-constrained . While other works such as invertible neural networks ( Gomez et al. , 2017 ; Sander et al. , 2021 ) or gradient checkpointing ( Chen et al. , 2016 ) also tackle this issue , implicit models bear an O ( 1 ) memory cost and with constraints on the architecture that are usually not detrimental to the performance ( Bai et al. , 2019 ) . These models have been successfully applied to large-scale tasks such as language modeling ( Bai et al. , 2019 ) , computer vision ( Bai et al. , 2020 ) and inverse problems ( Gilton et al. , 2021 ; Heaton et al. , 2021 ) . In general , the formulation of DEQs can be cast as a bi-level problem of the following form : arg min θ L ( z ? ) subject to gθ ( z ? ) = 0 . ( 1 ) We will refer to the root finding problem gθ ( z ? ) = 0 as the inner problem , and call its resolution the forward pass . On the other hand , we will refer to arg minθ L ( z ? ) as the outer problem , and call the computation of the gradient of L ( z ? ) w.r.t . θ the backward pass . The core idea for DEQs is that their output z ? is expressed as a fixed point of a parametric function fθ from Rd to Rd , i.e. , gθ ( z ? ) = z ? −fθ ( z ? ) = 0.1 This model is said to have infinitely many weight-tied layers as z ? can be obtained by successively applying the layer fθ infinitely many times , provided fθ is contractive . 
In practice, DEQs' forward pass is not computed by applying the function successively but usually relies on quasi-Newton (qN) algorithms, such as Broyden's method (Broyden, 1965), which efficiently approximates the Jacobian matrix $\frac{\partial g_\theta}{\partial z}$ and its inverse for root-finding. To compute DEQs' gradient efficiently and avoid a high memory cost, one does not rely on backpropagation but uses the implicit function theorem (Krantz and Parks, 2013), which gives an analytical expression for the Jacobian of $z^\star$ with respect to $\theta$, $\frac{\partial z^\star}{\partial \theta}$. While this method is memory efficient, it requires the computation of matrix-vector products involving the inverse of a large Jacobian matrix, which is computationally demanding. To make this computation tractable, one needs to rely on an iterative algorithm based on vector-Jacobian products, which renders the training particularly slow, as highlighted by the original authors (Bai et al., 2020) (see also the breakdown of the computational effort in Section E.4). Moreover, the formulation (1) also allows us to consider general bi-level problems, such as hyperparameter optimization, under the same framework. For instance, hyperparameter optimization for Logistic Regression (LR) can be written as

$$\min_\theta \mathcal{L}_{\mathrm{val}}(z^\star) \quad \text{subject to} \quad z^\star = \arg\min_z r_\theta(z) \triangleq \mathcal{L}_{\mathrm{train}}(z) + \theta \|z\|_2^2, \qquad (2)$$

where $\mathcal{L}_{\mathrm{train}}$ and $\mathcal{L}_{\mathrm{val}}$ correspond to the training and validation losses of the LR problem (Pedregosa, 2016). Here, $z$ corresponds to the weights of the LR model while $\theta$ is the regularisation parameter. As the training loss is smooth and convex, the inner problem can be written as in (1) with $g_\theta = \nabla_z r_\theta$.
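As a concrete illustration of the qN forward pass, the following is a minimal sketch of Broyden's method (not the paper's implementation): it root-finds gθ(z) = 0 while building, through rank-one secant updates, the matrix B that SHINE later reuses. The toy gθ and its dimensions are invented for the example.

```python
import numpy as np

def broyden(g, z0, max_iter=100, tol=1e-10):
    """Broyden's method: find a root of g while maintaining an approximation
    B of the Jacobian of g through rank-one secant updates."""
    z = z0.copy()
    B = np.eye(z.size)                 # initial Jacobian guess B0 = I
    gz = g(z)
    for _ in range(max_iter):
        p = -np.linalg.solve(B, gz)    # quasi-Newton step direction
        z_new = z + p
        g_new = g(z_new)
        s, y = z_new - z, g_new - gz
        # Secant update: the new B is the closest matrix to B with B s = y.
        B += np.outer(y - B @ s, s) / (s @ s)
        z, gz = z_new, g_new
        if np.linalg.norm(gz) < tol:
            break
    return z, B                        # B is the by-product SHINE reuses

# Toy root-finding problem g(z) = z - tanh(A z + b).
rng = np.random.default_rng(1)
A = 0.1 * rng.standard_normal((5, 5))
b = rng.standard_normal(5)
g = lambda z: z - np.tanh(A @ z + b)
z_star, B = broyden(g, np.zeros(5))
```

For clarity this sketch solves a dense linear system at each step; practical implementations instead update B⁻¹ directly (e.g. via the Sherman-Morrison formula), so each step costs only scalar products.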
Similarly to DEQs, the inner problem is often solved using qN methods that approximate the inverse of the Hessian in the direction of the steps, such as the LBFGS algorithm (Liu and Nocedal, 1989), and the gradient computation suffers from the same drawback, as it is also obtained using the implicit function theorem. Lorraine et al. (2020) review the different hypergradient approximations for bi-level optimization and evaluate them on multiple tasks. With the increasing popularity of DEQs and the ubiquity of bi-level problems in machine learning, a core question is how to reduce the computational cost of the resolution of (1). This would make these methods more accessible to practitioners and reduce the associated energy cost. In this work, we propose to exploit the estimates of the (inverse of the) Jacobian/Hessian produced by qN methods in the hypergradient computation. Moreover, we also propose extra updates of the qN matrices which maintain the approximation property in the direction of the steps, and ensure that the inverse Jacobian is approximated in an additional direction. In effect, we can compute the gradient using the inverse of the final qN matrix instead of an iterative algorithm to invert the Jacobian in the gradient's direction; we stress that the inverse of a qN matrix, and thus multiplication with it, can be computed very efficiently. We emphasize that the goal of this paper is neither to improve the algorithms used to compute z⋆, nor to demonstrate how to perform the inversion of a matrix in a certain direction as a stand-alone task. Rather, we describe an approach that combines the resolution of the inner problem with the computation of the hypergradient to accelerate the overall process.
Our work is not the first to consider modifying the inner problem resolution in order to account for the bi-level structure of the optimization. The idea of using additional updates of the qN matrices to ensure additional approximation properties is not new, and it is also known that a full matrix inversion can be accomplished in this way. For instance, Gower and Richtárik (2017) used sketching to design appropriate extra secant conditions in order to obtain guarantees of uniform convergence towards the inverse of the Jacobian. The novelty in our work is that we integrate additional updates to yield the inverse in a specific direction, which is substantially cheaper than computing the full inverse. A concurrent work by Fung et al. (2021) is also concerned with the acceleration of DEQs' training, where the inverse Jacobian is approximated with the identity. Under strong contractivity and conditioning assumptions, it is proven that the resulting approximation is a descent direction, and the authors show good empirical performance for small-scale problems. The contributions of our paper are the following: (¹Here, we do not explicitly write the dependence of fθ on the input x of the DEQ, usually referred to as the injection.)
• We introduce a new method to greatly accelerate the backward pass of DEQs (and, more generally, the differentiation of bi-level problems) using qN matrices that are available as a by-product of the forward computations. We call this method SHINE (SHaring the INverse Estimate).
• We enhance this method by incorporating knowledge from the outer problem into the inner problem resolution. This allows us to provide strong theoretical guarantees for this approach in various settings.
• We additionally showcase its use in hyperparameter optimization. Here, we demonstrate that it provides a gain in computation time compared to state-of-the-art methods.
• We test it for DEQs on the classification task on two datasets, CIFAR and ImageNet.
Here, we show that it decreases the training time while remaining competitive in terms of performance.
• We extend the empirical evaluation of the Jacobian-Free method to large-scale multiscale DEQs and show that it performs well in this setting. We also show that it is not suitable for more general bi-level problems.
• We propose and evaluate a natural refinement strategy for approximate Jacobian inversion methods (both SHINE and Jacobian-Free) that allows a trade-off between computational cost and performance.

2 HYPERGRADIENT OPTIMIZATION WITH APPROXIMATE JACOBIAN INVERSE

2.1 SHINE: HYPERGRADIENT DESCENT WITH APPROXIMATE JACOBIAN INVERSE

Algorithm 1: qN method to solve gθ(z⋆) = 0
Result: root z⋆, qN matrix B
b = true if using Broyden's method, b = false if using BFGS
n = 0, z0 = 0, B0 = I
while not converged do
    pn = −Bn⁻¹ gθ(zn); zn+1 = zn + αn pn  // αn can be 1 or determined by line-search
    yn = gθ(zn+1) − gθ(zn)
    sn = zn+1 − zn
    if b then
        Bn+1 = argmin_{X : X sn = yn} ‖X − Bn‖_F
    else
        Bn+1 = argmin_{X : X = Xᵀ ∧ X sn = yn} ‖X⁻¹ − Bn⁻¹‖  // BFGS uses a weighted Frobenius norm
    n ← n + 1
end
z⋆ = zn, B = Bn

Hypergradient optimization. Hypergradient optimization is a first-order method used to solve (1). We recall that in the case of smooth convex optimization, ∂gθ/∂z is the Hessian of the inner optimization problem, while for deep equilibrium models it is the Jacobian of the root equation. In the rest of this paper, with a slight abuse of notation, we will refer to both of these matrices as J_{gθ} whenever the results apply to both contexts. To enable hypergradient optimization, i.e., gradient descent on L with respect to θ, Bai et al. (2019, Theorem 1) show the following theorem, which is based on implicit differentiation (Krantz and Parks, 2013): Theorem 1 (Hypergradient (Bai et al., 2019; Krantz and Parks, 2013)).
Let θ ∈ ℝ^p be a set of parameters, let L : ℝ^d → ℝ be a loss function and gθ : ℝ^d → ℝ^d be a root-defining function. Let z⋆ ∈ ℝ^d be such that gθ(z⋆) = 0 and J_{gθ}(z⋆) is invertible. Then the gradient of the loss L w.r.t. θ, called the hypergradient, is given by

$$\left.\frac{\partial \mathcal{L}}{\partial \theta}\right|_{z^\star} = -\nabla_z \mathcal{L}(z^\star)^\top J_{g_\theta}(z^\star)^{-1} \left.\frac{\partial g_\theta}{\partial \theta}\right|_{z^\star}. \qquad (3)$$

In practice, we use an algorithm to approximate z⋆, and Theorem 1 gives a plug-in formula for the backward pass. Note that this formula is independent of the algorithm chosen to compute z⋆. Moreover, as opposed to explicit networks, we do not need to store intermediate activations, resulting in the aforementioned memory gain for DEQ training. Once z⋆ has been obtained, one of the major bottlenecks in the computation of the hypergradient is the inversion of J_{gθ}(z⋆) in the direction ∂gθ/∂θ|_{z⋆} or ∇zL(z⋆). Quasi-Newton methods. In practice, the forward pass is often carried out with qN methods. For instance, in the case of bi-level optimization for Logistic Regression, Pedregosa (2016) used LBFGS (Liu and Nocedal, 1989), while for Deep Equilibrium Models, Bai et al. (2019) used Broyden's method (Broyden, 1965), later adapted to the multiscale case in a limited-memory version (Bai et al., 2020). These quasi-Newton methods were first inspired by Newton's method, which finds the root of gθ via the recurrent Jacobian-based updates z_{n+1} = z_n − J_{gθ}(z_n)⁻¹ gθ(z_n). Specifically, they replace the Jacobian J_{gθ}(z_n) by an approximation B_n that is based on available values of the iterates z_n and gθ rather than on its derivative. These B_n, called qN matrices, are defined recursively via an optimization problem with constraints called secant conditions. Solving this problem leads to expressing B_n as a rank-one or rank-two update of B_{n−1}, so that B_n is the sum of the initial guess B_0 (in our settings, the identity) and n low-rank matrices (fewer than n in limited-memory settings).
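To ground Theorem 1, here is a hedged toy example (a ridge-regression inner problem, not from the paper) that computes the hypergradient by solving one linear system with J_{gθ} and checks it against a finite difference; note the minus sign contributed by the implicit function theorem, dz⋆/dθ = −J⁻¹ ∂g/∂θ.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((20, 4))
y = rng.standard_normal(20)
theta = 0.7  # the outer (regularisation) parameter

# Inner problem: ridge regression, g_theta(z) = X^T (X z - y) + 2 theta z = 0.
def z_star(t):
    return np.linalg.solve(X.T @ X + 2 * t * np.eye(4), X.T @ y)

z = z_star(theta)
J = X.T @ X + 2 * theta * np.eye(4)   # J_g is the inner Hessian here
dg_dtheta = 2 * z                     # partial derivative of g w.r.t. theta
grad_L = z                            # outer loss L(z) = 0.5 ||z||^2

# Implicit function theorem: dz*/dtheta = -J^{-1} dg/dtheta, hence:
hypergrad = -grad_L @ np.linalg.solve(J, dg_dtheta)

# Sanity check against a central finite difference of theta -> L(z*(theta)).
eps = 1e-6
L = lambda t: 0.5 * np.sum(z_star(t) ** 2)
fd = (L(theta + eps) - L(theta - eps)) / (2 * eps)
```

The dense `np.linalg.solve(J, ...)` stands in for the iterative vector-Jacobian-product solver that the text describes as the bottleneck in high dimensions.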
This low-rank structure allows efficient multiplication by B_n and B_n⁻¹. We now explain how the use of qN methods as the inner solver can be exploited to resolve this computational bottleneck. SHINE. Roughly speaking, our proposition is to use B⁻¹ = lim_{n→∞} B_n⁻¹ as a replacement for J_{gθ}(z⋆)⁻¹ in (3), i.e., to share the inverse estimate between the forward and the backward passes. This gives the approximate hypergradient

$$p_\theta = -\nabla_z \mathcal{L}(z^\star)^\top B^{-1} \left.\frac{\partial g_\theta}{\partial \theta}\right|_{z^\star}. \qquad (4)$$

In practice we will consider the non-asymptotic direction p_θ^{(n)} = −∇zL(z_n)^⊤ B_n⁻¹ ∂gθ/∂θ|_{z_n}. Thanks to the Sherman-Morrison formula (Sherman and Morrison, 1950), the inversion of B_n can be done very efficiently (using scalar products) compared to the iterative methods needed to invert the true Jacobian J_{gθ}(z⋆). In turn, this significantly reduces the computational cost of the hypergradient computation. Relationship to the Jacobian-Free method. Because B_0 = I in our setting, we may regard B as an identity matrix perturbed by a few rank-one updates. In the directions used for updates, B will differ from the identity and, hopefully, be closer to the true Jacobian in those directions. However, in all orthogonal directions we fall exactly into the setting of the Jacobian-Free method introduced by Fung et al. (2021). In that work, J_{gθ}(z⋆)⁻¹ is approximated by I, and the authors highlight that this is equivalent to using a preconditioner on the gradient. Under strong assumptions on gθ, they show that this preconditioned gradient is still a descent direction. Transition to the exact Jacobian inverse. The approximate gradient p_θ^{(n)} can also be used as the initialization of an iterative algorithm for inverting J_{gθ}(z⋆) in the direction ∇zL(z⋆). With a good initialization, faster convergence can be expected.
Moreover, if the iterative algorithm is also a qN method, which is the case in practice in the DEQ implementation, we can use the qN matrix B from the forward pass to initialize the qN matrix of this algorithm. We refer to this strategy as the refine strategy. Because the refine strategy is essentially a smart initialization scheme, it recovers all the theoretical guarantees of the original method (Bai et al., 2019; 2020; Pedregosa, 2016).
In various machine learning problems that can be formulated as bi-level optimization and solved with gradient-based methods, hypergradients must be computed. However, the inverse Jacobian matrix involved in the hypergradient has been a computational bottleneck in high-dimensional settings. This paper proposes to use quasi-Newton matrices from the forward pass to approximate this inverse Jacobian in the direction needed for the hypergradient computation. The proposed algorithm is applied to both hyperparameter optimization and deep equilibrium models on CIFAR-10 and ImageNet, showing that it reduces the computational cost of the backward pass by up to two orders of magnitude.
Gating Mechanisms Underlying Sequence-to-Sequence Working Memory
1 INTRODUCTION. Recurrent neural networks (RNNs) transform stimuli across multiple time-points to produce nonlinear working memory representations that can be used to solve complex tasks (Elman, 1990; Hochreiter & Schmidhuber, 1997; Mante et al., 2013). Memorization and manipulation of discrete sequences of elements are a common low-level requirement of many broad families of such problems (Hochreiter & Schmidhuber, 1997; Jordan et al., 2021; Yang et al., 2019). However, a full understanding of how the underlying dynamics learned by these networks give rise to the necessary computations remains an open area of research (Sussillo & Barak, 2013). That is, from a dynamical system's point of view (Guckenheimer & Holmes, 1983; Jordan et al., 2021), how do the network's internal phase flow and attractor structures, brought forth by training, play a part in the found solution to the desired task? Furthermore, many of the known properties of trained RNNs' learned dynamical mechanisms are specific to individual problems (Henaff et al., 2016; Jarne, 2020; Ichikawa & Kaneko, 2021). Therefore, due to their narrow formalism, such attributes are often difficult to extrapolate to alternative tasks of the same family.

[Figure 1: Superimposed trajectories for neurons 82-84 across one thousand trials of VDCM, from a perfectly trained GRU. Neurons 82 and 83 demonstrate slow manifold dynamics during the delay period of each trial. These three neurons represent the qualitative behavior across all 250 neurons.]

It has been demonstrated that the underlying attractor structures of RNNs successfully trained on an individual task are often similar, if not topologically equivalent, across networks, regardless of architecture, initialization, and hyper-parameters (Maheswaranathan et al., 2019).
However, one would also expect commonalities between the underlying structures of RNNs trained on similar, but nonetheless different, tasks (Flesch et al., 2021; Yang et al., 2019), an avenue not yet well explored. To better train and interpret the solutions found by RNN models, we require a finer-grained analysis whose results can be broadened to very general classes of problems. Karpathy et al. (2016) carried out a more in-depth empirical exploration of GRU and LSTM architectures computing with sequential data, and identified different failure cases that can arise when training these models. However, this work studied the existence of underlying dynamical mechanisms indirectly, through their effects on network output and single-neuron behavior, leaving out details on the functionality of each mechanism at the population level. Once identified, such mechanisms can be further studied and synthetically recreated to make them more understandable. Moreover, the synthetic realization of dynamical mechanisms, inspired by those obtained through gradient-based optimization, can be combined and extrapolated from to form synthetic RNN solutions to related tasks. We surgically analyze a single RNN trained on a discrete-sequence working memory task. The network performs with no mistakes on a sufficiently sized validation set of trials. Inspired by the behavior discovered in the network, we design and experimentally validate a synthetic solution to a simplified version of the same task, realizable in relatively low dimensions. We then discuss how such findings apply to networks trained on different but mechanistically similar tasks, including a sequence-to-sequence translation task.

2 VARIABLE DELAY COPY MEMORY TASK AND RNN MODEL.
In choosing an appropriate task, we look at copy memory, a standard benchmark to evaluate a neural network's ability to accurately recall information seen many time-steps in the past (Hochreiter & Schmidhuber, 1997; Henaff et al., 2016; Arjovsky et al., 2016). Let A = {a_i}_{i=1}^K be a set of K symbols. We then pick S, T ∈ ℕ. The input is a vector of categories of length T + 2S, where each entry is one-hot encoded. Each trial of the task consists of three phases, of length S, T, and S respectively. During the first phase (encoding), the network is presented with S entries uniformly sampled from {a_i}_{i=1}^K, to be remembered sequentially. During the second phase (delay period), the network is fed T − 1 inputs of a_{K+1}, a blank category indicating that no important information is entering the network. At the final time-step of the second phase, S + T, a delimiter a_{K+2} is input to the network, indicating that the RNN should output the original S input entries, in the same order in which they appeared in the first phase, beginning at the next time-step (third phase, decoding). During these last S time-steps, the inputs are all set to the blank category a_{K+1}. During the first T + S time-steps (encoding and delay period), the network should output a_{K+1}. The task is to minimize the average cross-entropy of the outputs at every time-step. As such, the networks should remember a sequence of S elements for T time-steps. Henaff, Szlam, and LeCun developed and experimentally verified a synthetic solution to this task (Henaff et al., 2016). However, if we allow T to vary from trial to trial, the underlying dynamical mechanisms allowing the RNN to properly enact the computation remain elusive. We will refer to this task as variable delay copy memory (VDCM), as coined by Henaff et al. (2016). We successfully trained a GRU network (Cho et al.
, 2014), by novel means (Anonymous), with a linear readout on VDCM, such that it performs perfectly on all test trials. We were unable to train other network architectures on VDCM, including LSTMs. For each trial we set K = 8, S = 10, and T ∼ U(100, 101, ..., 120). The model is given by

z_t = σ(W_z x_t + b_iz + U_z h_{t−1} + b_hz)  (1)
r_t = σ(W_r x_t + b_ir + U_r h_{t−1} + b_hr)  (2)
h_t = (1 − z_t) ⊙ tanh(W_h x_t + b_ih + r_t ⊙ (U_h h_{t−1} + b_hh)) + z_t ⊙ h_{t−1}  (3)
y_t = V_out h_t + b_out  (4)
y_choice = argmax_m ([y_t]_m), m ∈ {0, 1, ..., K}  (5)

where h_t ∈ ℝ^d with d = 250, x_t ∈ ℝ^{K+2}, W_z, W_r, W_h ∈ ℝ^{d×(K+2)} and U_z, U_r, U_h ∈ ℝ^{d×d} are the parameter matrices, b_iz, b_ir, b_ih, b_hz, b_hr, b_hh ∈ ℝ^d are bias vectors, ⊙ denotes element-wise multiplication, and σ(x) = 1/(1 + e^{−x}) is the element-wise logistic sigmoid function. For the linear readout, y_t ∈ ℝ^{K+1}, b_out ∈ ℝ^{K+1} and V_out ∈ ℝ^{(K+1)×d}. y_choice is taken as the index of the largest element of y_t at each time-step and represents the chosen class readout.

3 ENCODING ON SLOW MANIFOLDS. The computations required for VDCM fall into two main parts. The first is the memory structure: how does the GRU retain information about the elements presented to the network at each of the ten encoding time-steps? The second is memory recall: how does the GRU pull stored information from memory in the correct order? In this section we focus on the former and explain the structure our trained RNN uses to properly encode input information. Fig. 1 shows the behavior of hidden-state neurons 82-84 (i.e., [h_t]_j for j ∈ {82, 83, 84}) of the trained network while performing VDCM. Let neuron refer to a hidden-state neuron unless otherwise specified. The trajectories of these selected neurons across one thousand trials are superimposed, clearly indicating the three segments of the task.
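The GRU update in equations (1)-(5) can be sketched directly in code; the tiny dimensions and random parameters below are illustrative assumptions only (the trained network uses d = 250 and K = 8).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h_prev, x, p):
    """One GRU update following equations (1)-(3); * is element-wise."""
    z = sigmoid(p["Wz"] @ x + p["biz"] + p["Uz"] @ h_prev + p["bhz"])  # (1)
    r = sigmoid(p["Wr"] @ x + p["bir"] + p["Ur"] @ h_prev + p["bhr"])  # (2)
    h_cand = np.tanh(p["Wh"] @ x + p["bih"] + r * (p["Uh"] @ h_prev + p["bhh"]))
    return (1 - z) * h_cand + z * h_prev                               # (3)

# Toy instantiation: d = 4 hidden units, K = 1 symbol => K + 2 = 3 inputs.
rng = np.random.default_rng(0)
d, n_in, n_out = 4, 3, 2
p = {k: rng.standard_normal((d, n_in)) for k in ("Wz", "Wr", "Wh")}
p.update({k: rng.standard_normal((d, d)) for k in ("Uz", "Ur", "Uh")})
p.update({k: rng.standard_normal(d)
          for k in ("biz", "bir", "bih", "bhz", "bhr", "bhh")})
Vout, bout = rng.standard_normal((n_out, d)), rng.standard_normal(n_out)

h = gru_step(np.zeros(d), np.eye(n_in)[1], p)   # one-hot input, category 1
y = Vout @ h + bout                             # (4) linear readout
y_choice = int(np.argmax(y))                    # (5)
```

With h_prev = 0, the new state is (1 − z) ⊙ tanh(·), so every component stays strictly inside (−1, 1).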
If we look at the delay period, beginning at t = 11, we notice that the neural activity appears to be near constant across each trial. The selection of neurons shown in Fig. 1 is representative of the behavior across most neurons in the network, with the exception of several oscillatory modes that are rare and inconsistent across trials. A complete collection of neurons across trials can be found in Appendix E. What is of primary interest are the neurons that significantly vary trial to trial during the delay period, such as neurons 82 and 83. These varying neurons indicate the existence of a slow manifold, an observation in line with recent research (Ghazizadeh & Ching, 2021). A slow manifold is similar to an attractor (i.e., a fixed point, an attracting line, an attracting ring, etc.) in that, if the state of the system h_t lies on it, it will not change unless perturbed. However, the manifold is not entirely made up of fixed points; rather, the speed of the phase flow in these regions is arbitrarily slow in a subset of directions. In the GRU architecture, such behavior results from either a pseudo-line attractor (Jordan et al., 2021) or from the influence of the update gate z_t (Cho et al., 2014). If we look at the update gate for dimensions with behavior analogous to neurons 82 and 83, we find that, during the delay period, most demonstrate a value of z_t close to 1. Such activity simplifies equation 3 to the following approximate form:

[h_t]_i ≈ [h_{t−1}]_i  (6)

where i indexes the neurons with a high update gate during the delay period. As such, the network retains near-perfect memory of the past in these directions. We assume that the neurons' update gates are the primary mechanism enacting slow manifolds in our trained network.
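A small numerical sketch (all values invented) of why the update gate must be very close to 1 to hold a memory across a delay of roughly 120 steps: a scalar version of equation (3) with a frozen candidate state relaxes toward that candidate at rate (1 − z) per step.

```python
def relax(z, h0=0.8, candidate=-0.5, steps=120):
    """Iterate h <- (1 - z) * candidate + z * h, a scalar caricature of
    equation (3) in which the tanh candidate state is held fixed."""
    h = h0
    for _ in range(steps):
        h = (1 - z) * candidate + z * h
    return h

# Drift away from the initial value h0 = 0.8 after a 120-step delay,
# for update-gate values increasingly close to 1.
drift = {z: abs(relax(z) - 0.8) for z in (0.9, 0.99, 0.999)}
```

Only when z is within roughly 10⁻³ of 1 does the neuron retain most of h₀ over the delay, consistent with the near-1 gate values observed during the delay period.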
In the case of pseudo-line attractors, such slow flow results from the nullclines of the underlying continuous-time dynamical system (of which the network can be interpreted as a numerical approximation) lying sufficiently close together in the hidden state-space (Jordan et al., 2021). However, the nullclines of this system cannot be oriented such that they form a pseudo-line attractor in any canonical direction (the direction of a single neuron). We will see in the next section that the means by which our network decodes are canonical in nature, and so we disregard this mechanism. Given that the update gate is the most likely mechanism used to encode information, how exactly is this computation enacted? We trained this network to be able to encode 8^10 possible sequences: K = 8 elements to choose from at each of S = 10 time-steps. If such a computation is implemented by carefully placing each trajectory on a slow manifold, the manifold can be segmented into regions where each individual readout element is output. Due to the use of the argmax function for VDCM, the largest element in the readout vector is chosen as the class. A cartoon representation of such a regime is depicted in Fig. 2. To empirically show that a slow manifold is the dynamical feature used, and to determine how it is organized, we implement a perturbation-based experiment; the low-level details can be found in Appendix A. Consider an encoding time-step q ∈ {1, ..., 10} and an element p ∈ {0, ..., 7}. If, across two trials, we input a sequence where the element to be encoded at time-step q is p, and an otherwise identical sequence where the element at time-step q is not p, we can determine how the neural representation differs after the encoding phase of each trial (t = 11).
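The paired trials used by the perturbation experiment can be sketched as follows. This is a hedged illustration of the task description, not the authors' code; function names, the index-based (rather than one-hot) encoding, and the choice of the alternative element are my own assumptions.

```python
import numpy as np

def vdcm_inputs(seq, T, K=8):
    """Inputs/targets for one VDCM trial given the symbol sequence `seq`.
    Indices 0..K-1 are symbols, K the blank a_{K+1}, K+1 the delimiter."""
    S = len(seq)
    blank, delim = K, K + 1
    inputs = np.full(T + 2 * S, blank)
    inputs[:S] = seq                 # phase 1: encoding (length S)
    inputs[S + T - 1] = delim        # delimiter ends the delay period
    targets = np.full(T + 2 * S, blank)
    targets[S + T:] = seq            # phase 3: recall in the original order
    return inputs, targets

def perturbation_pair(q, p, K=8, S=10, rng=None):
    """Two trials identical except that the element encoded at step q
    (1-indexed) is p in the first and some other element in the second."""
    rng = rng or np.random.default_rng()
    T = int(rng.integers(100, 121))  # variable delay, as in the experiments
    seq = rng.integers(0, K, size=S)
    seq_p, seq_other = seq.copy(), seq.copy()
    seq_p[q - 1] = p
    seq_other[q - 1] = (p + 1) % K   # any element different from p
    return vdcm_inputs(seq_p, T, K), vdcm_inputs(seq_other, T, K)

(inp_a, tgt_a), (inp_b, tgt_b) = perturbation_pair(
    q=3, p=5, rng=np.random.default_rng(0))
```

Comparing the hidden states reached at t = 11 under `inp_a` versus `inp_b` isolates how the network represents the q-p pair, as described above.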
By comparing this difference over many sessions, we can determine which neurons are used to encode this q-p pair, how often each neuron is used, and approximate the expected value each neuron takes. We can then test the accuracy of our q-p representation by feeding many sequences through the network where the element at t = q is not p, but setting, at t = 11, the n most frequently used neurons in the representation to their approximate expected values for the representation. If the network output otherwise remains unaltered, but at decoding time-step q the readout is incorrectly changed to p, we consider the trial a success. The results of this method are displayed in Fig. 3. Only 73 of the 250 neurons in the network appear to be used to encode memory. We can reorder these 73 neurons by their center of mass with respect to the 80 possible combinations of q and p, where we sweep through all values p can take for q = 1, then q = 2, etc. Fig. 4 (left) depicts such a reordering, revealing a block-like structure. We see that a nonempty set of neurons accounts for every element at each specific time-step q. The plot is colored by the expected value each neuron (row) takes when presented with a specific element at a given time-step (column). To analyze the finer details of the manifold's structure, let us consider the neurons primarily tuned to a single time-step, for example those tuned to encode information at time-step 9. We can use principal component analysis (PCA) to visualize the activity of these selected neurons in low dimensions (Bishop, 2006). We project down the state of the set of neurons tuned to time-step 9 at t = 11, across the set of test trials, as demonstrated in Fig. 4 (right). Data points are colored by which of the K elements was presented to the network at t = 9, and they form eight separable clusters, one for each element. The points in each cluster vary in a single direction, indicated by the blue arrow.
Since VDCM is a deterministic task, the only source of this variability is the influence of the inputs to the network at the time-steps preceding t = 9. This suggests that the neurons primarily tuned to encode information at t = 9 are not fully decoupled from the neurons tuned to other time-steps. However, this observation raises an important point regarding the neurons in our trained network that are primarily tuned to input presented at t = 9. While PCA surfaces the dimensions with the highest variability across trials, it does not indicate which dimensions are most crucial for enacting the computation. While insightful for understanding how training brought forth the various sub-mechanisms that make up the finer memory structures in the network, this direction allocated to previous inputs may not be used to indicate class during the ninth step of decoding. In the following section, we dive into the second major computation required for VDCM, decoding. We will show that analysis of decoding enables us to better understand the mechanism used for encoding. For all other encoding time-steps, plots analogous to Fig. 4 (right) can be found in Appendix B.
This paper attempts to understand, in fine detail, how an RNN solves the variable delay copy memory task. The model has been trained to perfect accuracy on this task, which lets the authors focus on how it manipulates its cell values. The authors present different metrics to track the resetting and memorization behavior of the GRU's neurons.
Gating Mechanisms Underlying Sequence-to-Sequence Working Memory
1 INTRODUCTION . Recurrent neural networks ( RNNs ) transform stimuli across multiple time-points to produce nonlinear working memory representations that can be used to solve complex tasks ( Elman , 1990 ; Hochreiter & Schmidhuber , 1997 ; Mante et al. , 2013 ) . Memorization and manipulation of discrete sequences of elements are a common low level requirement to many broad families of such problems ( Hochreiter & Schmidhuber , 1997 ; Jordan et al. , 2021 ; Yang et al. , 2019 ) . However , a full understanding of how the underlying dynamics learned by these networks accurately bring rise to the necessary computations remains an open area of research ( Sussillo & Barak , 2013 ) . That is , from a dynamical system ’ s point of view ( Guckenheimer & Holmes , 1983 ; Jordan et al. , 2021 ) , how does the network ’ s internal phase-flow and attractor structures , brought forth by training , play a part in the found solution to the desired task ? Furthermore , many of the known properties of trained RNNs ’ learned dynamical mechanisms are specific to individual problems ( Henaff et al. , 2016 ; Jarne , 2020 ; Ichikawa & Kaneko , 2021 ) . Therefore , due to their narrow formalism , such attributes are often difficult to extrapolate to alternative tasks of the same family . It has been demonstrated # 82 # 83 # 84 Time N e u ro n I n d e x Encoding Variable Length Delay Decoding Figure 1 : Superimposed trajectories for neurons 82−84 across one thousand trials of VDCM , from a perfectly trained GRU . Neurons 82 and 83 demonstrate slow manifold dynamics during the delay period of each trial . These three neurons represent the qualitative behavior across all 250 neurons . that the underlying attractor structures of RNNs successfully trained on an individual task are often similar , if not topologically equivalent , across networks , regardless of architecture , initialization , and hyper-parameters ( Maheswaranathan et al. , 2019 ) . 
However , one would also expect commonalities between the underlying structures of RNNs trained on similar , but none the less , different tasks ( Flesch et al. , 2021 ; Yang et al. , 2019 ) – a research feat not yet well explored . To better train and interpret the solutions found by RNN models we require a finer grain analysis , where results can be broadened to very general classes of problems . ( Karpathy et al. , 2016 ) demonstrated a more in depth empirical exploration of GRU and LSTM architectures computing with sequential data , and identified different failure cases that can arise when training these models . However , this work studied the existence of underlying dynamical mechanisms indirectly from their effects on network output and single-neuron behavior , leaving out details on the functionality of each mechanism at the population level . If done , found mechanisms can be further studied and synthetically recreated to be made more understandable . Moreover , the synthetic realization of dynamical mechanisms , inspired by those obtained through gradient based optimization , can be combined and extrapolated from to form synthetic RNN solutions to related tasks . We surgically analyze a single RNN trained on a discrete sequence working memory task . The network performs with no mistakes on a sufficiently sized validation set of trials . Inspired by the behavior discovered in the network , we design and experimentally validate a synthetic solution to a simplified version of the same task , realizable in relatively low dimensions . We then discuss how such findings apply to networks trained on different but mechanistically similar tasks , including a sequence-to-sequence translation task . 2 VARIABLE DELAY COPY MEMORY TASK AND RNN MODEL . 
In choosing an appropriate task , we look at copy memory ; a standard benchmark to evaluate a neural network ’ s ability to accurately recall information seen many time-steps in the past ( Hochreiter & Schmidhuber , 1997 ; Henaff et al. , 2016 ; Arjovsky et al. , 2016 ) . Let A = { ai } Ki=1 be a set of K symbols . We then pick S , T ∈ N. The input is a vector of categories , length T+2S , where each entry is one-hot encoded . Each trial of the task consists of three phases , of length S , T , and S respectively . During the first phase ( encoding ) , the network is presented with with S entries uniformly sampled from { ai } Ki=1 , to be remembered sequentially . During the second phase ( delay period ) , the network is fed T − 1 inputs of aK+1 , a blank category , indicating no important information is entering the network . At the final time-step of the second phase S+T , a delimiter aK+2 is input to the network , indicating that the RNN should output the original S entries of input in the same order by which they appeared in the first phase of the trial beginning at the next time-step ( third phase – decoding ) . During these last S time-steps , the inputs are all set to the blank category aK+1 . During the first T + S time-steps ( encoding and delay period ) , the network should output aK+1 . The task is to minimize the average cross-entropy of the outputs at every time-step . As such , the networks should remember a sequence of S elements for T time-steps . Henaff , Szlam , and LeCun developed and experimentally verified a synthetic solution to this task ( Henaff et al. , 2016 ) . However , if we allow T to vary from trial to trial , the underlying dynamical mechanisms allowing the RNN to properly enact the computation remains elusive . We will refer to this task as variable delay copy memory ( VDCM ) , as coined by ( Henaff et al. , 2016 ) . We successfully trained a GRU network ( Cho et al. 
, 2014), by novel means (Anonymous), with a linear readout on VDCM, such that it performs perfectly on all test trials. We were unable to train other network architectures on VDCM, including the LSTM. For each trial we set K = 8, S = 10, and T ∼ U(100, 101, ..., 120). The model is defined as follows:

z_t = σ(W_z x_t + b_{iz} + U_z h_{t−1} + b_{hz}) (1)
r_t = σ(W_r x_t + b_{ir} + U_r h_{t−1} + b_{hr}) (2)
h_t = (1 − z_t) ⊙ tanh(W_h x_t + b_{ih} + r_t ⊙ (U_h h_{t−1} + b_{hh})) + z_t ⊙ h_{t−1} (3)
y_t = V_{out} h_t + b_{out} (4)
y_choice = argmax_m [y_t]_m, m ∈ {0, 1, ..., K} (5)

where h_t ∈ R^d with d = 250, x_t ∈ R^{K+2}, W_z, W_r, W_h ∈ R^{d×S} and U_z, U_r, U_h ∈ R^{d×d} are the parameter matrices, b_{iz}, b_{ir}, b_{ih}, b_{hz}, b_{hr}, b_{hh} ∈ R^d are bias vectors, ⊙ denotes element-wise multiplication, and σ(z) = 1/(1 + e^{−z}) is the element-wise logistic sigmoid function. For the linear readout, y_t ∈ R^{K+1}, b_{out} ∈ R^{S−1}, and V_{out} ∈ R^{(S−1)×d}. y_choice is the index of the largest element of y_t at each time-step and represents the chosen class readout. 3 ENCODING ON SLOW MANIFOLDS . The computations required for VDCM fall into two main parts. The first is the memory structure: how does the GRU retain information about the elements presented to the network at each of the ten encoding time-steps? The second is memory recall: how does the GRU pull stored information from memory in the correct order? In this section we focus on the former and explain the structure our trained RNN uses to properly encode input information. Fig. 1 shows the behavior of hidden-state neurons 82-84 (i.e., [h_t]_j for index j ∈ {82, 83, 84}) of the trained network while performing VDCM. Let "neuron" refer to a hidden-state neuron unless otherwise specified. The trajectories of these selected neurons across one thousand trials are superimposed, clearly indicating the three segments of the task.
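For concreteness, the update rule in equations (1)-(3) can be sketched directly in NumPy. The parameters below are random placeholders at illustrative scales, not the trained weights, so this demonstrates only the form of the update, not the paper's network.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gru_step(x, h, p):
    """One application of equations (1)-(3); p maps names to parameter arrays."""
    z = sigmoid(p["Wz"] @ x + p["biz"] + p["Uz"] @ h + p["bhz"])  # update gate, eq. (1)
    r = sigmoid(p["Wr"] @ x + p["bir"] + p["Ur"] @ h + p["bhr"])  # reset gate, eq. (2)
    cand = np.tanh(p["Wh"] @ x + p["bih"] + r * (p["Uh"] @ h + p["bhh"]))
    return (1 - z) * cand + z * h                                 # eq. (3)

d, n_in = 250, 10                    # d hidden units; K + 2 = 10 input categories
rng = np.random.default_rng(0)
p = {k: 0.1 * rng.standard_normal((d, n_in)) for k in ("Wz", "Wr", "Wh")}
p.update({k: 0.1 * rng.standard_normal((d, d)) for k in ("Uz", "Ur", "Uh")})
p.update({k: np.zeros(d) for k in ("biz", "bir", "bih", "bhz", "bhr", "bhh")})

x = np.eye(n_in)[3]                  # a one-hot input vector for one symbol
h = gru_step(x, np.zeros(d), p)      # next hidden state from a zero initial state
```

Note that from a zero initial state the new state is (1 − z) ⊙ tanh(·), so every entry lies strictly inside (−1, 1).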
If we look at the delay period, beginning at t = 11, we notice that the neural activity appears to be nearly constant within each trial. The neurons shown in Fig. 1 are representative of the behavior of most neurons in the network, with the exception of several oscillatory modes that are rare and inconsistent across trials. A complete collection of neurons across trials can be found in appendix E. Of primary interest are the neurons that vary significantly from trial to trial during the delay period, such as neurons 82 and 83. These varying neurons indicate the existence of a slow manifold, an observation in line with recent research (Ghazizadeh & Ching, 2021). A slow manifold is similar to an attractor (i.e., a fixed point, an attracting line, an attracting ring, etc.), where if the state of the system h_t lies on the attractor it will not change unless perturbed. However, the manifold is not entirely made up of fixed points; rather, the speed of the phase flow in these regions is arbitrarily slow in a subset of directions. In the GRU architecture, such behavior results either from a pseudo-line attractor (Jordan et al., 2021) or from the influence of the update-gate z_t (Cho et al., 2014). If we look at the update-gate for dimensions with behavior analogous to neurons 82 and 83, we find that, during the delay period, most exhibit a value of z_t close to 1. Such activity simplifies equation 3 to the approximate form

[h_t]_i ≈ [h_{t−1}]_i (6)

where i indexes the neurons with a high update-gate during the delay period. As such, the network retains near-perfect memory of the past in these directions. We posit that the neurons' update-gates are the primary mechanism used to enact slow manifolds in our trained network.
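The memory-preserving effect of a saturated update gate, as in equation (6), is easy to verify numerically. In this toy sketch the reset-gated candidate of equation (3) is collapsed into a single tanh term for brevity; with z near 1, the stored state passes through almost unchanged regardless of what the candidate proposes.

```python
import numpy as np

h_prev = np.array([0.7, -0.3])                # values stored on the slow manifold
candidate = np.tanh(np.array([5.0, -5.0]))    # whatever the current input proposes
z = np.array([0.999, 0.999])                  # saturated update gate

h_new = (1 - z) * candidate + z * h_prev      # equation (3), candidate term folded
assert np.allclose(h_new, h_prev, atol=1e-2)  # memory preserved, as in equation (6)
```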
In the case of pseudo-line attractors, such slow flow results from the nullclines of the underlying continuous-time dynamical system (of which the network can be interpreted as a numerical approximation) lying sufficiently close together in the hidden state-space (Jordan et al., 2021). However, the nullclines of this system cannot be oriented such that they form a pseudo-line attractor in any canonical direction (the direction of a single neuron). We will see in the next section that the means by which our network decodes are canonical in nature, and so we disregard this mechanism. Given that the update-gate is the most likely mechanism used to encode information, how exactly is this computation enacted? We trained this network to encode 8^10 possible sequences: K = 8 elements to choose from at each time-step, across S = 10 time-steps. If such a computation is implemented by carefully placing each trajectory on a slow manifold, the manifold can be segmented into regions where each individual readout element is outputted. Due to the argmax function used for VDCM, the largest element in the readout vector is chosen as the class. A cartoon representation of such a regime is depicted in Fig. 2. To show empirically that a slow manifold is the dynamical feature used, and to determine how it is organized, we implement a perturbation-based experiment. The low-level details can be found in appendix A. Consider an encoding time-step q ∈ {1, ..., 10} and an element p ∈ {0, ..., 7}. If across two trials we input a sequence where the element to be encoded at time-step q is p, and an otherwise identical sequence where the element at time-step q is not p, we can determine how the neural representation differs after the encoding phase of each trial (t = 11).
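The comparison at the heart of this experiment can be sketched as follows: run two trials that differ only in the element presented at encoding step q, and diff the hidden states at the end of encoding. The GRU below is random and untrained (biases omitted for brevity), so it illustrates only the procedure, not the paper's actual findings.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gru_encode(X, p, d):
    """Run a bias-free GRU over a sequence of input vectors; return the final state."""
    h = np.zeros(d)
    for x in X:
        z = sigmoid(p["Wz"] @ x + p["Uz"] @ h)
        r = sigmoid(p["Wr"] @ x + p["Ur"] @ h)
        h = (1 - z) * np.tanh(p["Wh"] @ x + r * (p["Uh"] @ h)) + z * h
    return h

K, S, d = 8, 10, 32
rng = np.random.default_rng(1)
p = {k: 0.3 * rng.standard_normal((d, K)) for k in ("Wz", "Wr", "Wh")}
p.update({k: 0.3 * rng.standard_normal((d, d)) for k in ("Uz", "Ur", "Uh")})

q, elem_p, other = 4, 2, 5                    # encoding step q; element p vs. a control
base = rng.integers(0, K, size=S)
seq_a, seq_b = base.copy(), base.copy()
seq_a[q], seq_b[q] = elem_p, other
onehot = lambda s: np.eye(K)[s]

diff = gru_encode(onehot(seq_a), p, d) - gru_encode(onehot(seq_b), p, d)
candidates = np.argsort(-np.abs(diff))[:5]    # neurons most responsive to the (q, p) swap
```

Repeating this over many base sequences, and averaging, is what isolates the neurons consistently tied to a given q-p pair.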
By comparing this difference over many sessions, we can determine which neurons encode this q-p pair, how often each neuron is used, and approximate the expected value each neuron takes. We can then test the accuracy of our q-p representation by feeding many sequences through the network in which the element at t = q is not p, but setting, at t = 11, the n most frequently used neurons in the representation to their approximate expected values for that representation. If the network output is otherwise unaltered, but at decoding time-step q the readout is changed to p, we consider the trial a success. The results of this method are displayed in Fig. 3. Only 73 of the 250 neurons in the network appear to be used to encode memory. We can reorder these 73 neurons by their center of mass with respect to the 80 possible combinations of q and p, sweeping through all values p can take for q = 1, then q = 2, etc. Fig. 4 (Left) depicts such a reordering, revealing a block-like structure. We see that a nonempty set of neurons accounts for every element at each specific time-step q. The plot is colored by the expected value each neuron (row) takes when presented with a specific element at a given time-step (column). To analyze the finer details of the manifold's structure, let us consider the neurons primarily tuned to a single time-step; those tuned to encode information at time-step 9, for example. We can use principal component analysis (PCA) to visualize the activity of these selected neurons in low dimensions (Bishop, 2006). We project down the state, at t = 11, of the set of neurons tuned to time-step 9, across the set of test trials, as shown in Fig. 4 (Right). Data points are colored by which of the K elements was presented to the network at t = 9, and form eight separable clusters, one for each element. The points in each cluster vary in a single direction, indicated by the blue arrow.
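This kind of PCA projection reduces to an SVD of the mean-centered state matrix. The data below are a synthetic stand-in (eight noisy clusters in an eight-dimensional neuron space), used only to show the projection step, not the paper's recordings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_neurons, n_elements = 400, 8, 8

# Synthetic stand-in for the states (at t = 11) of the step-9-tuned neurons
labels = rng.integers(0, n_elements, size=n_trials)     # which element was shown
centers = rng.standard_normal((n_elements, n_neurons))  # one cluster center per element
states = centers[labels] + 0.05 * rng.standard_normal((n_trials, n_neurons))

# PCA via SVD: the principal directions are the right singular vectors
X = states - states.mean(axis=0)
U, svals, Vt = np.linalg.svd(X, full_matrices=False)
proj = X @ Vt[:2].T                                     # 2-D projection for plotting
```

Coloring `proj` by `labels` would reproduce the cluster-per-element picture described for Fig. 4 (Right).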
Since VDCM is a deterministic task, the only source for this variability is the influence of the inputs to the network at the time-steps preceding t = 9. This suggests that the neurons primarily tuned to encode information at t = 9 are not fully decoupled from the neurons tuned to other time-steps. However, this observation raises an important caveat regarding the neurons in our trained network that are primarily tuned to input presented at t = 9. While PCA extracts the dimensions of highest variability across trials, it does not indicate which dimensions are most crucial for enacting the computation. While insightful for understanding how training brought forth the various sub-mechanisms that make up the finer memory structures in the network, this direction allocated to previous inputs may not be used to indicate class during the ninth step of decoding. In the following section, we dive into the second major computation required for VDCM: decoding. We will show that analysis of decoding enables us to better understand the mechanism used for encoding. For all other encoding time-steps, plots analogous to Fig. 4 (Right) can be found in appendix B.
The paper analyzes the GRU's underlying mechanisms for storing and retrieving information in the delay copy task over a sequence drawn from K symbols. It proposes a perturbation-based method to determine which neurons are responsible for encoding a step-element pair. The paper then shows that at each step of the decoding phase, certain neurons are reset by the GRU, generally mapping to those tuned to hold information about the corresponding step-element pair. Finally, the paper provides a synthetic solution to the delay copy task for the case K = 2.
Adversarial twin neural networks: maximizing physics recovery for physical system
1 INTRODUCTION . The Internet of Everything (IoE) is expanding quickly to interconnect various devices. The systematic planning, modeling, and control of IoE can bring many benefits to society (Li et al., 2020). However, it remains an open question how to efficiently model various grids with different levels of system information. For example, cyber-physical systems may be partially traceable by physical laws with limited sensing (Mulani et al., 2020). Such scenarios are now common at the grid edges of various physical systems due to the availability of low-cost, low-power sensor technology. These edge areas have uneven sensor coverage but can still be used to recover some physical laws, completely or partially (Divan et al., 2014; Hu & Tang, 2020). Learning physical equations from data is a central topic of Artificial Intelligence (Sahoo et al., 2018; Udrescu & Tegmark, 2020). While there are many related studies, e.g., symbolic regression (Petersen, 2019) and its variation with sparsity regularization (Brunton et al., 2016b), these methods are inapplicable to IoE and other complex physical systems due to incomplete system observability. To tackle the issue, Li & Weng (2021) build a shallow-deep structure that learns physics in a shallow neural network while simultaneously approximating the hidden components in a deep NN. Such a method, however, does not give a clear boundary between the two NNs and risks imbalanced representation power between the physical and the virtual parts. Thus, there is a need to enforce restrictions that keep the model output within the boundary and maintain physical consistency. Further, existing methods can hardly guarantee that the extracted physical model is operationally optimal.
For example, in realistic operation, if the data of an observed node can linearly represent those of an unobserved node, these two nodes can be aggregated, leading to so-called Network Reduction (NR) (Oh, 2012; Zheng et al., 2021). On the other hand, components of hidden quantities that cannot be represented via the observed data can be treated as noise. Thus, our target is to identify an optimally reduced grid with operationally maximal physics. Namely, the reduced grid should fully represent the input-output relationship of the observed measurements under a certain noise level. Based on the above observations, we propose Adversarial Twin NNs (ATN) for optimal system modeling, where a Physical Neural Network (PNN) represents the physical parameters of the reduced grid and a Virtual Neural Network (VNN) approximates the noise. To find the proper output boundary, we first restrict the PNN using sparse regularization. Simultaneously, we encourage the PNN to approximate the final output with priority, indirectly restricting the output of the VNN. We show that such a mechanism can be easily achieved via a skip-connection (He et al., 2016b). Secondly, to achieve physical knowledge maximization, the output of the PNN should be independent of the noise output of the VNN. Thus, we propose an adversarial learning scheme with Embedding Neural Networks (ENNs) that extract similar features from the outputs of the PNN and the VNN; adversarially, training the PNN and the VNN then leads to maximized output dissimilarity. Notably, this idea is the reverse of traditional deep-learning-based adversarial learning, which seeks maximal similarity between two targets (e.g., feature distributions); we instead seek maximal feature dissimilarity between the two NNs. Under this condition, a classical training loss, such as the binary loss in Generative Adversarial Networks (GANs), easily suffers instability.
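A minimal forward-pass sketch of the skip-connection combination of PNN and VNN described above might look as follows: a linear layer on the physics bases plus a small MLP on the raw observations, summed at the output. The basis library, shapes, and initialization here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_obs, hidden = 500, 4, 16

xO = rng.standard_normal((N, n_obs))               # observed state variables

def phi(x):
    """Hypothetical physical library: raw states plus quadratic and sinusoidal bases."""
    return np.concatenate([x, np.sin(x[:, :1]), x[:, :1] ** 2], axis=1)

n_bases = phi(xO).shape[1]
A_hat = 0.1 * rng.standard_normal((n_bases, 1))    # PNN: learnable linear physics layer
W1 = 0.1 * rng.standard_normal((n_obs, hidden))    # VNN: a small MLP for the residual
W2 = 0.1 * rng.standard_normal((hidden, 1))

def forward(x):
    y_phys = phi(x) @ A_hat                        # PNN branch (physical quantities)
    y_virt = np.tanh(x @ W1) @ W2                  # VNN branch (noise / hidden effects)
    return y_phys + y_virt                         # combined via a skip-connection sum

y_hat = forward(xO)
```

Training would then fit both branches against the observed outputs while regularizing A_hat for sparsity and prioritizing the PNN branch.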
Thus, we address the problem using the similarity and dissimilarity measures of contrastive learning (Hadsell et al., 2006). Finally, we conduct extensive experiments over various systems to demonstrate the much better performance of our model compared to other state-of-the-art methods. 2 RELATED WORK . Physical System Identification . Physical system identification is a central topic for modern physical systems, especially in edge areas. The goal is to identify the system's governing equations using sensor measurements (Li & Weng, 2021). While the problem is similar to learning underlying equations from data, the key challenge for physical system identification is incomplete measurement availability due to the cost of deploying sensors across a wide area of the system. Thus, traditional methods of learning equations using symbolic regression (Schmidt & Lipson, 2009; Petersen, 2019) can easily overfit. (Brunton et al., 2016c; Champion et al., 2019) assume that the system is sparse and utilize Least Absolute Shrinkage and Selection Operator (LASSO) regularization to restrict the physical equation representation and avoid overfitting. This regularization, however, makes the learned physical parameters inaccurate due to the penalty term. Thus, Li & Weng (2021) propose a deep-shallow architecture that learns physics with a shallow neural network while approximating the hidden components with a deep neural network. However, the DNN model easily suffers from local optima, making the output boundary between the shallow neural network and the DNN deviate from the true boundary. Consequently, the recovery accuracy of the physical parameters in the shallow neural network deteriorates. To restrict the output boundary, Yin et al. (2020) also assume physical and augmented neural networks and propose an l2 norm to regularize the parameters of the augmented NN.
However, such regularization may easily cause overfitting of the physical NN. Finally, Takeishi & Kalousis (2021) restrict both the physical and the virtual NNs. However, their regularization requires strict assumptions on variable distributions and prior physical knowledge, which may not be available for general physical systems. Adversarial Learning . Adversarial learning is a popular approach for training two adversarial components in a NN. Primarily, the mini-max game training helps to achieve the Nash equilibrium for an optimal policy. For example, a Generative Adversarial Network (GAN) (Goodfellow et al., 2014; Sauder & Sievers, 2019) utilizes a generator to generate fake data and, adversarially, a discriminator to distinguish between the fake data and the true data. The optimal status enables the generator to accurately approximate the distribution of the true data. This idea is further utilized in various machine learning domains, such as domain adaptation (Ganin et al., 2016), to extract similar features between two domains, and disentangled representation learning (Tran et al., 2017). Contrastive Learning . Contrastive learning seeks representations with minimal distance between similar samples and maximal distance between dissimilar samples (Hadsell et al., 2006). Usually, the distance is measured in an embedding space to seek the best embedding. The elaborated contrastive loss can thus guarantee stable training due to the soft comparisons between positive and negative samples. Such a technique is widely utilized in image embedding (Park et al., 2020), feature clustering (Li et al., 2021c), text recognition (Aberdam et al., 2021), etc. 3 METHODS . 3.1 PROBLEM FORMULATION . In this paper, we study physical systems that can be modeled as a directed weighted graph G = {V, E}, where V represents the vertex (node) set and E ⊆ V × V represents the edge set.
All nodes in V have physical variables x and y, where x represents the system state variables and y represents the system net outputs. The system equations can then be written as y = f(x). For example, in electric grids, x denotes the voltage phasor and y denotes the net power (i.e., the power consumption minus the power injection); f can then represent the power flow equation (Yu et al., 2017). Parameters in f are usually (partially) unknown to us due to the growing system size, evolving system properties, events, maintenance, etc. Thus, it is essential to estimate the parameters of f using data of x and y. However, not all nodes in V can be equipped with sensors due to the sensor cost. Thus, we denote V = O ∪ U, where O represents the observable nodes and U represents the unobservable nodes. The physical function can then be written as [y_O, y_U]^T = f([x_O, x_U]^T), where ^T is the transpose operation. Subsequently, we denote the observed data samples as {x_O^n}_{n=1}^N and {y_O^n}_{n=1}^N, where N is the number of samples. Based on the above measurements, we aim to find an accurate mapping g_θ such that y_O ≈ g_θ(x_O). Further, a subset of the parameters θ_p ⊂ θ should represent the physical parameters in f as fully as possible. The missing quantities x_U and y_U make the problem challenging, as under- or over-estimating x_U and y_U will easily cause inaccurate results for θ_p and further hurt the generalization ability of g_θ. Subsequently, this negatively impacts downstream tasks like system resource optimization, reliability evaluation, and optimal expansion. To solve the above problem, we propose twin NNs to approximate both of them. As shown in Fig.
1, the twin NNs try to map from x_O to y_O with physics consistency, where the Physical Neural Network (PNN) learns physical parameters and outputs physical quantities, and the Virtual Neural Network (VNN) approximates the remaining quantities, i.e., the uncertainties and hidden measurements. Thus, they jointly contribute to the final output y_O. Before illustrating the design details, we first introduce a moderate assumption on the system prior. Specifically, we assume the physical bases z are known in the physical equation. Namely, there is a mapping z = φ(x) such that y = Az, where A is a constant matrix representing system parameters. Then, all the system non-linearity is incorporated in φ, and A contains all the physical information and needs to be estimated. If we knew all nodes' measurements, a linear regression would identify A. However, we only have measurements of x_O and y_O. Correspondingly, we denote z = [z_O, z_U]^T, where z_O are physical bases calculated purely via x_O and z_U are the bases calculated via x_U or [x_O, x_U]^T. Thus, we fix the parameters from x_O to z_O and place the learnable parameters in the mapping from z_O to y_O. Namely, we build a physical library, as shown in the left part of Fig. 1. Note that for some complex systems the mapping φ may not be known, and we may need symbolic regression-based methods to generate the base symbols first (Petersen, 2019). However, in this paper we focus on the issue of incomplete system observability with prior knowledge of φ, e.g., the quadratic and sinusoidal functions in an electric grid.
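Under the stated prior y = Az, full observability would indeed reduce identification of A to ordinary least squares, as the text notes. A toy check with synthetic bases and mild noise:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_bases, n_out = 200, 5, 3
A_true = rng.standard_normal((n_out, n_bases))     # unknown system parameters

z = rng.standard_normal((N, n_bases))              # fully observed physical bases
y = z @ A_true.T + 0.01 * rng.standard_normal((N, n_out))

# y = A z per sample becomes Y = Z A^T in stacked form: ordinary least squares
A_est = np.linalg.lstsq(z, y, rcond=None)[0].T
assert np.allclose(A_est, A_true, atol=0.05)       # A recovered up to the noise level
```

The difficulty the paper targets is precisely that z_U is never observed, so this regression cannot be run directly.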
The paper proposes ATN to model and identify physical systems. Here, the physical systems are grids with measurement sensors: because the sensor configuration is not perfect, there can be unobservable parts of the grid. The authors assume that there are physics bases that can represent the output with a linear mapping. Thus, they first use a linear LASSO regressor (the PNN in the paper) that transforms observable physics bases into the output. The possible remainder (due to noise or unobservable physics) is modeled via a more complicated neural network (the VNN in the paper). These two networks are integrated into a single skip-connection block. They are trained by MSE, while encouraging the PNN part to predict most of the output, to exploit the bias toward physics. In addition, the authors use siamese networks that measure the distance between the PNN and VNN outputs: an adversarial learning loss then encourages the outputs of the PNN and VNN to be far from each other so that each plays its role more distinctly. ATN outperforms its counterparts on some physical system datasets.
The paper addresses the task of identifying a physical system on a graph. While the main part of the to-be-estimated model is assumed to be linear, the proposed model needs a nonlinear part (modeled by a neural net) due to the presence of unobserved nodes. The authors use a combination of a sparse linear model and a neural net, which is essentially the same as the model in [Li & Weng, KDD 2021]. They add several regularization terms to maximize the use of the physics part of the model and examine the performance of the proposed method on several datasets.
A Generalized Weighted Optimization Method for Computational Learning and Inversion
1 INTRODUCTION. Given $N$ data pairs $\{x_j, y_j\}_{j=1}^N$, where $x_j \in \mathbb{R}$, $y_j \in \mathbb{C}$, $j = 1, \ldots, N$, we are interested in learning a random Fourier feature (RFF) model (Rahimi & Recht, 2008; Liao et al., 2020; Xie et al., 2020)
$$f_\theta(x) = \sum_{k=0}^{P-1} \theta_k e^{ikx}, \quad x \in [0, 2\pi], \qquad (1)$$
where $P \in \mathbb{N}$ is a given positive integer and we use the short-hand notation $\theta := (\theta_0, \cdots, \theta_{P-1})^T$, with the superscript $T$ denoting the transpose operation. This exact model, as well as its generalization to more complicated setups, has been extensively studied; see for instance Liao & Couillet (2018); Shahrampour & Kolouri (2019); d'Ascoli et al. (2020); Li et al. (2020); Özcelikkale (2020); Liu et al. (2020; 2021) and references therein. While this model may seem overly simplified for many real-world applications, it serves as a prototype for the theoretical understanding of different phenomena in machine learning models (Sriperumbudur & Szabo, 2015; Belkin et al., 2020; Li et al., 2021a). A common way to solve this learning problem computationally is to reformulate it as an optimization problem in which we find $\theta$ by minimizing the model and data mismatch for a given dataset. In this paper, we assume that the training data are collected on a uniform grid of $x$ over the domain $[0, 2\pi]$, that is, $\{x_j = \frac{2\pi j}{N}\}_{j=0}^{N-1}$. Let $\omega_N = \exp(\frac{2\pi i}{N})$, where $i$ is the imaginary unit. We introduce $\Psi \in \mathbb{C}^{N \times P}$ as the feature matrix with elements $(\Psi)_{jk} = (\omega_N)^{jk}$, $0 \le j \le N-1$, $0 \le k \le P-1$. Based on the form of $f_\theta(x)$ in (1), we can write the 2-norm based data mismatch as $\sum_{j=0}^{N-1} |f_\theta(x_j) - y_j|^2 = \|\Psi\theta - y\|_2^2$, where the column data vector $y = (y_0, \cdots, y_{N-1})^T$. The learning problem is therefore recast as a least-squares optimization problem of the form
$$\hat\theta = \arg\min_\theta \|\Psi\theta - y\|_2^2, \qquad (2)$$
assuming that a minimizer exists, especially when we restrict $\theta$ to an appropriate space.
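The setup (1)-(2) can be sketched directly in code. Below is a minimal illustration (the sizes and the synthetic coefficients are our own choices): the feature matrix $\Psi$ is built from $(\omega_N)^{jk}$ on the uniform grid $x_j = 2\pi j/N$, and $\theta$ is recovered by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
N = P = 32                                  # formally determined case p = N

rows = np.arange(N)[:, None]                # grid index j
cols = np.arange(P)[None, :]                # frequency index k
Psi = np.exp(2j * np.pi * rows * cols / N)  # (Psi)_{jk} = omega_N^{jk}

theta_true = rng.standard_normal(P) + 1j * rng.standard_normal(P)
y = Psi @ theta_true                        # noise-free training data

# Solve the least-squares problem (2); here Psi is square and invertible.
theta_hat, *_ = np.linalg.lstsq(Psi, y, rcond=None)
```

In this determined, noise-free case $\Psi$ is a scaled unitary DFT-type matrix, so the recovery is exact up to floating-point error; the interesting regimes discussed below arise when $p \ne N$ or the data are noisy.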
In a general feature regression problem, the Fourier features $\{e^{ikx}\}_{k=0}^{P-1}$ are replaced with a different feature model $\{\varphi_k(x)\}_{k=0}^{P-1}$, while the least-squares form (2) remains unchanged except that the entries of the matrix $\Psi$ are now $\Psi_{jk} = \varphi_k(x_j)$. We emphasize that this type of generalization will be discussed in Section 5. Moreover, we remark that this least-squares optimization formulation is a classical computational inversion tool for solving general linear inverse problems of the form $\Psi\theta = y$; see for instance Engl et al. (1996); Tarantola (2005) and references therein. Previous work on weighted optimization for feature and kernel learning. Xie et al. (2020) studied the fitting problem for this model under the assumption that the coefficient vector $\theta$ is sampled from a distribution with the property that, for some positive constant $\gamma$,
$$\mathbb{E}_\theta[\theta] = 0, \quad \mathbb{E}_\theta[\theta\theta^*] = c_\gamma \Lambda^{-2\gamma}_{[P]}, \qquad (3)$$
where the superscript $*$ denotes the Hermitian transpose and the diagonal matrix $\Lambda_{[P]}$ has diagonal elements $(\Lambda_{[P]})_{kk} = t_k = 1 + k$, $k \ge 0$. That is,
$$\Lambda_{[P]} = \mathrm{diag}\{t_0, t_1, t_2, \ldots, t_k, \ldots, t_{P-1}\}, \quad t_k := 1 + k. \qquad (4)$$
The subscript $[P]$ indicates that $\Lambda_{[P]}$ is the diagonal submatrix of $\Lambda$ containing its elements indexed by the set $[P] := \{0, 1, \cdots, P-1\}$. The normalization constant $c_\gamma = 1/\sum_{k=0}^{P-1}(1+k)^{-2\gamma}$ is selected only so that $\mathbb{E}_\theta[\|\theta\|^2] = 1$; it does not play a significant role in the rest of the paper. The main assumption in (3) says that, statistically, the signal to be recovered has algebraically decaying Fourier coefficients. This simply says that the target function we are learning is relatively smooth, which is certainly the case for many functions serving as physical models in practical applications. It was shown in Xie et al.
(2020) that, to learn a model with $p \le P$ features, it is advantageous to use the following weighted least-squares formulation
$$\hat\theta_p = \Lambda^{-\beta}_{[p]} \hat w, \quad \text{with} \quad \hat w = \arg\min_w \|\Psi_{[N\times p]} \Lambda^{-\beta}_{[p]} w - y\|_2^2, \qquad (5)$$
when the learning problem is overparameterized, i.e., $p > N$. Here, $\Psi_{[N\times p]} \in \mathbb{C}^{N\times p}$ is the matrix containing the first $p$ columns of $\Psi$, and $\beta > 0$ is some pre-selected exponent that can differ from the $\gamma$ in (3). To be more precise, we define the generalization error of the learning problem as
$$E_\beta(P, p, N) := \mathbb{E}_\theta\big[\|f_\theta(x) - f_{\hat\theta_p}(x)\|^2_{L^2([0,2\pi])}\big] = \mathbb{E}_\theta\big[\|\hat\theta_p - \theta\|_2^2\big], \qquad (6)$$
where the equality comes from Parseval's identity, and $\hat\theta_p$ is understood as the vector $(\hat\theta_p^T, 0, \cdots, 0)^T$ so that $\theta$ and $\hat\theta_p$ have the same length $P$. The subscript $\theta$ in $\mathbb{E}_\theta$ indicates that the expectation is taken with respect to the distribution of the random variable $\theta$. It was shown in Xie et al. (2020) that the lowest generalization error achieved by the weighted least-squares approach (5) in the overparameterized regime ($p > N$) is strictly less than the lowest possible generalization error in the underparameterized regime ($p \le N$). This, together with the analysis and numerical evidence in previous studies such as Belkin et al. (2019; 2020), leads to the understanding that smoother approximations (i.e., solutions dominated by lower Fourier modes) give better generalization in learning with the RFF model (1). Main contributions of this work. In this work, we analyze a generalized version of (5) for general feature regression from noisy data.
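The weighted recovery (5) in the overparameterized regime can be sketched as follows. This is an illustrative implementation under our own choices of $N$, $p$, $\beta$, $\gamma$: the minimum-norm solution of the weighted problem is $\hat w = (\Psi_{[N\times p]}\Lambda^{-\beta}_{[p]})^+ y$, followed by $\hat\theta_p = \Lambda^{-\beta}_{[p]} \hat w$.

```python
import numpy as np

rng = np.random.default_rng(1)
N, p, P = 16, 32, 32                  # overparameterized: p > N
beta, gamma = 1.0, 1.0

# Draw theta with algebraically decaying variance, as in assumption (3).
t = 1.0 + np.arange(P)
scale = np.sqrt(t ** (-2.0 * gamma) / np.sum(t ** (-2.0 * gamma)))
theta = scale * (rng.standard_normal(P) + 1j * rng.standard_normal(P)) / np.sqrt(2)

rows = np.arange(N)[:, None]
Psi = np.exp(2j * np.pi * rows * np.arange(P)[None, :] / N)
y = Psi @ theta                                # noise-free data

Lb = np.diag((1.0 + np.arange(p)) ** (-beta))  # Lambda_{[p]}^{-beta}
w_hat = np.linalg.pinv(Psi[:, :p] @ Lb) @ y    # min-norm weighted solution
theta_hat_p = Lb @ w_hat                       # first p recovered coefficients
```

Since $p > N$, the weighted minimum-norm solution interpolates the training data exactly while the weight $\Lambda^{-\beta}_{[p]}$ biases the interpolant toward low-frequency modes.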
Following the same notation as before, we introduce the following weighted least-squares formulation for feature regression:
$$\hat\theta^\delta_p = \Lambda^{-\beta}_{[p]} \hat w, \quad \text{with} \quad \hat w = \arg\min_w \|\Lambda^{-\alpha}_{[N]}(\Psi_{[N\times p]} \Lambda^{-\beta}_{[p]} w - y^\delta)\|_2^2, \qquad (7)$$
where the superscript $\delta$ on $y$ and $\hat\theta_p$ denotes the fact that the training data contain random noise of level $\delta$ (which will be specified later). The exponent $\alpha$ is pre-selected and can differ from $\beta$. While sharing a similar role with the weight matrix $\Lambda^{-\beta}_{[P]}$, the weight matrix $\Lambda^{-\alpha}_{[N]}$ additionally allows us to deal with noise in the training data. Moreover, as we will see later, the weight matrix $\Lambda^{-\alpha}_{[N]}$ need not be diagonal or share the form of the matrix $\Lambda^{-\beta}_{[p]}$; the current form simply eases the calculations for the RFF model. It can be chosen based on the a priori information we have on the operator $\Psi$ as well as on the noise distribution of the training data. The highlight, and one of the main contributions of our work, is the introduction of a new weight matrix $\Lambda^{-\alpha}_{[N]}$ that emphasizes the data mismatch across its various modes, in addition to $\Lambda^{-\beta}_{[p]}$, the weight matrix imposed on the unknown feature coefficient vector $\theta$. This type of generalization has appeared in different forms in many computational approaches to inverse and learning problems, where the standard 2-norm (or $\ell^2$ in the infinite-dimensional setting) is replaced with a weighted norm that is either weaker or stronger than the unweighted 2-norm. In this paper, we characterize the impact of the new weighted optimization framework (7) on the generalization capability of various feature regression and kernel regression models. The new contributions of this work are threefold.
First, we discuss in detail the generalized weighted least-squares framework (7) in Section 2 and summarize the main results for training with noise-free data in Section 3, for the RFF model in both the overparameterized and the underparameterized regimes. This is the setup considered in Xie et al. (2020), but our analysis is based on the proposed weighted model (7) instead of (5) as in their work. Second, we provide the generalization error in both regimes for the case of training with noisy data; see Section 4. This setup was not considered in Xie et al. (2020), but we demonstrate that weighted optimization offers a significant advantage when the data contain noise, since the weighting can effectively minimize the influence of the noise and thus improve the stability of feature regression. Third, we extend the same type of results to models in feature regression and kernel regression beyond the RFF model, given that the operator $\Psi$ satisfies certain properties. In the general setup presented in Section 5, we derive error bounds in the asymptotic limit when $P$, $N$, and $p$ all become very large. Our analysis provides guidelines on selecting weighting schemes, through parameter-domain weighting, data-domain weighting, or both, to emphasize the features of the unknowns to be learned based on a priori knowledge.

2 GENERALIZED WEIGHTED LEAST-SQUARES FORMULATION. There are four essential elements in the least-squares formulation of the learning problem: (i) the parameter to be learned ($\theta$), (ii) the dataset used in the training process ($y$), (iii) the feature matrix ($\Psi$), and (iv) the metric chosen to measure the data mismatch between $\Psi\theta$ and $y$. Element (i) of the problem is determined not only by the data but also by the a priori information we have. The information encoded in (3) reveals that the size (i.e.
, the variance) of the Fourier modes in the RFF model decays as fast as $(1+k)^{-2\gamma}$. Therefore, the low-frequency modes in (1) dominate the high-frequency modes, which implies that in the learning process we should search for solution vectors with more low-frequency than high-frequency components. The motivation behind introducing the weight matrix $\Lambda^{-\beta}_{[p]}$ in (5) is exactly to force the optimization algorithm to focus on admissible solutions consistent with the a priori knowledge given in (3), that is, to seek $\theta$ whose components $|\theta_k|^2$ statistically decay like $(1+k)^{-2\beta}$. When the problem is formally determined (i.e., $p = N$), the operator $\Psi$ is invertible, and the training data are noise-free, the weight matrix $\Lambda^{-\alpha}_{[N]}$, like $\Lambda^{-\beta}_{[p]}$, does not change the solution of the learning problem. However, as we will see later, these two weight matrices do affect the solutions in various ways under the practical setups we are interested in, for instance when the problem is over-parameterized or when the training data contain random noise. The weight matrix $\Lambda^{-\alpha}_{[N]}$ is introduced to handle elements (ii)-(iv) of the learning problem. First, since $\Lambda^{-\alpha}_{[N]}$ is applied directly to the data $y^\delta$, it allows us to suppress (when $\alpha > 0$) or promote (when $\alpha < 0$) high-frequency components in the data during the training process. In particular, when transformed back to physical space, the weight matrix $\Lambda^{-\alpha}_{[N]}$ with $\alpha > 0$ corresponds to a smoothing convolution operator whose kernel has Fourier coefficients decaying at the rate $k^{-\alpha}$; this operator suppresses high-frequency information in the data. Second, $\Lambda^{-\alpha}_{[N]}$ is also applied directly to $\Psi\theta$. This allows us to precondition the learning problem by making $\Lambda^{-\alpha}_{[N]}\Psi$ a better-conditioned operator (in an appropriate sense) than $\Psi$, for applications where the feature matrix $\Psi$ has certain undesired properties.
Finally, since $\Lambda^{-\alpha}_{[N]}$ is applied to the residual $\Psi\theta - y$, we can regard the new weighted optimization formulation (7) as a generalization of the classic least-squares formulation with a new loss function (a weighted norm) measuring the data mismatch. Weighted optimization schemes such as (7) have been studied, implicitly or explicitly, in different settings (Needell et al., 2014; Byrd & Lipton, 2019; Engquist et al., 2020; Li, 2021; Yang et al., 2021). For instance, if we take $\beta = 0$, we obtain a case where we rescale the classical least-squares loss function with the weight $\Lambda^{-\alpha}_{[N]}$. If we take $\alpha = 1$, this least-squares functional is equivalent to the loss function based on the $H^{-1}$ norm, instead of the usual $L^2$ norm, of the mismatch between the target function $f_\theta(x)$ and the learned model $f_{\hat\theta}(x)$. Based on the asymptotic equivalence between the quadratic Wasserstein metric and the $H^{-1}$ semi-norm (on an appropriate function space), this training problem is asymptotically equivalent to the same training problem based on a quadratic Wasserstein loss function; see for instance Engquist et al. (2020) for a more detailed illustration of the connection. In the classical statistical inversion setting, $\Lambda^{2\alpha}$ plays the role of the covariance matrix of the additive Gaussian random noise in the data (Kaipio & Somersalo, 2005). When the noise is sampled from a mean-zero Gaussian distribution with covariance matrix $\Lambda^{2\alpha}$, a standard maximum likelihood estimator (MLE) is often constructed as the minimizer of $(\Psi\theta - y)^* \Lambda^{-2\alpha}_{[N]} (\Psi\theta - y) = \|\Lambda^{-\alpha}_{[N]}(\Psi\theta - y)\|_2^2$. The exact solution to (7), with $X^+$ denoting the Moore-Penrose inverse of an operator $X$, is given by
$$\hat\theta^\delta_p = \Lambda^{-\beta}_{[p]} \big(\Lambda^{-\alpha}_{[N]} \Psi_{[N\times p]} \Lambda^{-\beta}_{[p]}\big)^+ \Lambda^{-\alpha}_{[N]} y^\delta.$$
(8) In the rest of this paper, we analyze this training result and highlight the impact of the weight matrices $\Lambda^{-\alpha}_{[N]}$ and $\Lambda^{-\beta}_{[p]}$ in different regimes of the learning problem. We reproduce the classical bias-variance trade-off analysis in the weighted optimization framework. For that purpose, we use the linearity of the problem to decompose $\hat\theta^\delta_p$ as
$$\hat\theta^\delta_p = \Lambda^{-\beta}_{[p]} \big(\Lambda^{-\alpha}_{[N]} \Psi_{[N\times p]} \Lambda^{-\beta}_{[p]}\big)^+ \Lambda^{-\alpha}_{[N]} y + \Lambda^{-\beta}_{[p]} \big(\Lambda^{-\alpha}_{[N]} \Psi_{[N\times p]} \Lambda^{-\beta}_{[p]}\big)^+ \Lambda^{-\alpha}_{[N]} (y^\delta - y), \qquad (9)$$
where the first part is simply $\hat\theta_p$, the result of learning with noise-free data, while the second part is the contribution from the additive noise. We define the generalization error in this case as
$$E^\delta_{\alpha,\beta}(P, p, N) = \mathbb{E}_{\theta,\delta}\big[\|f_\theta(x) - f_{\hat\theta^\delta_p}(x)\|^2_{L^2([0,2\pi])}\big] = \mathbb{E}_{\theta,\delta}\big[\|\hat\theta^\delta_p - \hat\theta_p + \hat\theta_p - \theta\|_2^2\big], \qquad (10)$$
where the expectation is taken over the joint distribution of $\theta$ and the random noise $\delta$. By the standard triangle inequality, this generalization error is bounded by the sum of the generalization error from training with noise-free data and the error caused by the noise. We will use this simple observation to bound the generalization errors when no exact formulas can be derived. We also examine the variance of the generalization error with respect to the random noise, which is
$$\mathrm{Var}_\delta\big(\mathbb{E}_\theta[\|\hat\theta^\delta - \theta\|_2^2]\big) := \mathbb{E}_\delta\Big[\big(\mathbb{E}_\theta[\|\hat\theta^\delta - \theta\|_2^2] - \mathbb{E}_{\theta,\delta}[\|\hat\theta^\delta - \theta\|_2^2]\big)^2\Big]. \qquad (11)$$
In the rest of the work, we consider two parameter regimes of learning: (i) in the overparameterized regime, we have the following setup of the parameters:
$$N < p \le P, \quad \text{and} \quad P = \mu N, \; p = \nu N \; \text{ for some } \mu, \nu \in \mathbb{N} \text{ s.t. } \mu \ge \nu > 1; \qquad (12)$$
(ii) in the underparameterized regime, we have the following scaling relations:
$$p \le N \le P, \quad \text{and} \quad P = \mu N \; \text{ for some } \mu \in \mathbb{N}. \qquad (13)$$
The formally-determined case $p = N \le P$ is included in both the overparameterized and the underparameterized regimes.
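The closed-form solution (8) is straightforward to compute numerically. The block below is an illustrative sketch under our own choices of sizes, exponents, and noise level: both weights are applied, the pseudo-inverse is taken, and, because the weighted operator has full row rank in the overparameterized regime, the estimate still interpolates the noisy data.

```python
import numpy as np

rng = np.random.default_rng(2)
N, p = 16, 32                      # overparameterized regime p > N
alpha, beta, sigma = 1.0, 1.0, 0.01

rows = np.arange(N)[:, None]
Psi_Np = np.exp(2j * np.pi * rows * np.arange(p)[None, :] / N)  # Psi_{[N x p]}
La = np.diag((1.0 + np.arange(N)) ** (-alpha))  # Lambda_{[N]}^{-alpha}
Lb = np.diag((1.0 + np.arange(p)) ** (-beta))   # Lambda_{[p]}^{-beta}

theta = (1.0 + np.arange(p)) ** (-1.0)          # a smooth target (decaying modes)
delta = np.sqrt(sigma) * rng.standard_normal(N) # additive noise, y_delta = y + delta
y_delta = Psi_Np @ theta + delta

# Closed-form weighted solution (8) via the Moore-Penrose inverse.
theta_hat = Lb @ np.linalg.pinv(La @ Psi_Np @ Lb) @ (La @ y_delta)
```

The noise amplification hidden in this interpolation, and how the weights $\alpha$, $\beta$ control it, is exactly what the generalization-error analysis quantifies.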
We make the following assumptions throughout the work: (A-I) the random noise $\delta$ in the training data is additive, in the sense that $y^\delta = y + \delta$; (A-II) the random vectors $\delta$ and $\theta$ are independent; (A-III) the random noise $\delta \sim \mathcal{N}(0, \sigma I_{[P]})$ for some constant $\sigma > 0$. While assumptions (A-I) and (A-II) are essential, assumption (A-III) is only needed to simplify the calculations. Most of the results we obtain in this paper can be reproduced straightforwardly for random noise $\delta$ with any well-defined covariance matrix.
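Under assumptions (A-I)-(A-III), the generalization error (10) can be estimated by simple Monte Carlo simulation: average $\|\hat\theta^\delta_p - \theta\|_2^2$ over draws of $\theta$ (with the decaying variance of (3)) and of the Gaussian noise. This is an illustrative sketch, with sizes, exponents, and trial count chosen by us, not an analysis from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
N, p, P = 16, 32, 32
alpha, beta, gamma, sigma = 1.0, 1.0, 1.0, 0.01
trials = 200

rows = np.arange(N)[:, None]
Psi = np.exp(2j * np.pi * rows * np.arange(P)[None, :] / N)
La = np.diag((1.0 + np.arange(N)) ** (-alpha))
Lb = np.diag((1.0 + np.arange(p)) ** (-beta))
# Fixed linear map from y_delta to the first p entries of theta_hat, as in (8).
solver = Lb @ np.linalg.pinv(La @ Psi[:, :p] @ Lb) @ La

t = 1.0 + np.arange(P)
scale = np.sqrt(t ** (-2.0 * gamma) / np.sum(t ** (-2.0 * gamma)))

errors = []
for _ in range(trials):
    theta = scale * rng.standard_normal(P)          # E[theta theta*] as in (3)
    y_delta = Psi @ theta + np.sqrt(sigma) * rng.standard_normal(N)
    theta_hat = np.zeros(P, dtype=complex)
    theta_hat[:p] = solver @ y_delta                # zero-padded to length P
    errors.append(np.linalg.norm(theta_hat - theta) ** 2)

E_mc = float(np.mean(errors))  # Monte Carlo estimate of (10)
```

Sweeping $\alpha$ and $\beta$ in such a simulation is a quick empirical check of how the two weights trade off bias against noise-induced variance.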
This paper studies the weighted least-squares, random features model in a noisy one-dimensional data setting, in the under-/over-parameterized regimes. The derived error bounds demonstrate the impact of noise on the generalization error. Besides, the extension to kernel regression shows that the selected weight matrix is helpful for generalization when the RKHS is small (i.e., the singular values of $\Psi$ decay fast).